Vetted Talent

Sudhanshu Rajendra Bawane

Has a knack for programming, does not back down from taking risks, and loves challenges; willing to take the unconventional path to get tasks done, with a sense of responsibility and leadership.

  • Role

    Senior Development Engineer (Team Lead)

  • Years of Experience

    6.50 years

Skillsets

  • ETL
  • Team Leadership
  • System Design
  • PostgreSQL
  • MongoDB
  • Golang
  • C++
  • Git - 5 Years
  • Java
  • Grafana
  • Django
  • Kubernetes - 4 Years
  • RabbitMQ
  • Terraform - 3 Years
  • GCP - 2 Years
  • Python - 4 Years
  • AWS - 4 Years

Vetted For

9 Skills
  • Role: Senior Golang Engineer (Remote) - AI Screening
  • Result: 54%
  • Skills assessed: Communication, API development, Database Design, AWS, Go Lang, Kubernetes, Problem Solving Attitude, Redis, Security
  • Score: 49/90

Professional Summary

6.50 Years
  • Feb, 2023 - Present 2 yr 10 months

    Senior Development Engineer (Team Lead)

    Calsoft
  • Nov, 2021 - Oct, 2022 11 months

    Founding Engineer (Sr. Software Engineer)

    IDFC First Bank
  • Mar, 2021 - Oct, 2021 7 months

    Grade 2 Software Engineer

    Xoriant
  • Oct, 2020 - Feb, 2021 4 months

    SDE 1

    Sigmoid Analytics
  • Jul, 2018 - Jul, 2020 2 yr

    Associate Developer

    Cognizant

Applications & Tools Known

  • Kafka
  • Spark
  • Informatica
  • GCP
  • RabbitMQ
  • Django
  • Terraform
  • AWS
  • Git
  • Golang
  • Python
  • Java
  • C++
  • PostgreSQL
  • Kubernetes
  • Grafana
  • ETL
  • MongoDB

Work History

6.50 Years

Senior Development Engineer (Team Lead)

Calsoft
Feb, 2023 - Present 2 yr 10 months
    Spearheaded Cisco and VMware security integration using Golang, showcasing expertise in backend development. Led a 30+ member team across two projects, demonstrating strong leadership in GCP, Terraform, and AWS environments. Successfully closed pre-sales negotiations with industry leaders NVIDIA and Cisco. Ensured seamless task creation and monitoring, emphasizing technical proficiency in Git, Grafana, and ETL.

Founding Engineer (Sr. Software Engineer)

IDFC First Bank
Nov, 2021 - Oct, 2022 11 months
    Designed and enhanced UPI payment systems, owning the credit module and implementing risk-based pricing models. Successfully led three teams in UPI, Insurance, and Demat, overseeing product-related issues and driving positive changes.

Grade 2 Software Engineer

Xoriant
Mar, 2021 - Oct, 2021 7 months
    Developed impactful Terraform scripts for infrastructure deployment on GCP and AWS. Created a robust architecture for AWS, effectively mirroring infrastructure in GCP via Terraform.

SDE 1

Sigmoid Analytics
Oct, 2020 - Feb, 2021 4 months
    Orchestrated the migration of the entire system into Python executables. Built an API-driven publishing service handling 17 million page-views per month, operating at 94% cache efficiency.

Associate Developer

Cognizant
Jul, 2018 - Jul, 2020 2 yr
    Spearheaded the optimization of the ETL layer in three projects, significantly enhancing data processing efficiency. Wrote Python pipeline scripts and created/modified workflow jobs on MDM tools, automating processes.

Achievements

  • Designed and developed an entirely new UPI payment system
  • Led three teams in UPI, Insurance, and Demat
  • Responsible for all production-related issues and for changes to the existing system
  • Responsible for maintaining uptime of all services and the entire CI/CD pipeline
  • Created Terraform scripts for infrastructure deployment on GCP and AWS
  • Filtered out all vulnerabilities in GCP and provided resolutions for them
  • Created new services in Python with Spark to maintain parallel processing
  • Migrated the entire system into Python executables
  • Extended the build system to integrate with the Mercurial sparse feature
  • Developed 17 business rules for various scenarios to automate the entire process
  • Spearheaded Cisco and VMware security integration using Golang, showcasing expertise in backend development.
  • Led a 30+ member team across two projects, demonstrating strong leadership in GCP, Terraform, and AWS environments.
  • Successfully closed pre-sales negotiations with industry leaders NVIDIA and Cisco.
  • Ensured seamless task creation and monitoring, emphasizing technical proficiency in Git, Grafana, and ETL.
  • Implemented a Sumo Logic-based solution for task automation, from monitoring tools like Grafana through to auto-scheduling of AWS load balancing.
  • Designed and enhanced UPI payment systems, owning the credit module and implementing risk-based pricing models.
  • Successfully led three teams in UPI, Insurance, and Demat, overseeing product-related issues and driving positive changes.
  • Implemented cutting-edge risk-based pricing models, significantly boosting lending profitability.
  • Documented and implemented robust risk assessments, mitigation strategies, and compliance activities.
  • Revamped the infrastructure for Sole-Proprietary businesses, addressing all product-related issues and driving system evolution.
  • Pioneered the establishment of a new infrastructure tailored for Sole-Proprietary businesses, utilizing Kubernetes, AWS, and PostgreSQL databases. Managed product-related issues and system changes.
  • Responsible for the creation of new tasks, work cards, and ensuring the uptime of all services and the entire CI-CD pipeline.
  • Developed impactful Terraform scripts for infrastructure deployment on GCP and AWS.
  • Created a robust architecture for AWS, effectively mirroring infrastructure in GCP via Terraform.
  • Identified and resolved vulnerabilities in the GCP environment.
  • Successfully managed applications across Kubernetes, AWS, and GCP.
  • Implemented multi-threading, significantly improving throughput and reducing execution time.
  • Designed and implemented a multi-threaded background task manager for automated volume snapshot creation and maintenance.
  • Orchestrated the migration of the entire system into Python executables.
  • Built an API-driven publishing service handling 17 million page-views per month, operating at 94% cache efficiency.
  • Deployed Dockerized applications on Kubernetes clusters, streamlining the development process.
  • Extended the build system to integrate with the Mercurial sparse feature, addressing performance issues with large repositories.
  • Rebuilt the Mercurial sparse subsystem, transforming it into a user-friendly tool.
  • Spearheaded the optimization of the ETL layer in three projects, significantly enhancing data processing efficiency.
  • Wrote Python pipeline scripts and created/modified workflow jobs on MDM tools, automating processes.
  • Developed match-link algorithms using STIBO MDM, ensuring accuracy in data relationships.
  • Created an inbound processor for XML loads, reducing load time by 30%.
  • Developed 17 business rules for various scenarios, automating entire processes.
  • Created an outbound processor for hot folder linkage, improving overall system efficiency.

Major Projects

1 Project

GOLANG (IN HOUSE PROJECT)

    Worked with a team to create a server-side application using features such as goroutines and multi-version package imports. Assisted in designing and developing a scalable recommendation platform that can be used by various systems/applications through a CRUD framework. Implemented an Echo server to change priority in the network pipeline and workflow.
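
A minimal sketch of the kind of Echo-based handler described above, assuming the labstack/echo v4 framework; the Recommendation type, route, and port are illustrative placeholders rather than the actual in-house project code (Echo serves each request on its own goroutine via net/http):

```go
package main

import (
	"net/http"

	"github.com/labstack/echo/v4"
)

// Recommendation is a hypothetical payload; field names are illustrative only.
type Recommendation struct {
	UserID string   `json:"user_id"`
	Items  []string `json:"items"`
}

func main() {
	e := echo.New()

	// A simple CRUD-style read endpoint; real logic would query a store.
	e.GET("/recommendations/:user_id", func(c echo.Context) error {
		rec := Recommendation{
			UserID: c.Param("user_id"),
			Items:  []string{"item-1", "item-2"},
		}
		return c.JSON(http.StatusOK, rec)
	})

	e.Logger.Fatal(e.Start(":8080"))
}
```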

Education

  • Graduation

    Government College of Engineering, Amravati (2018)

AI-interview Questions & Answers

Yeah. So this is Sudhanshu. I have been working for 6 years now. It's more of a core and back-end role, I would say; sometimes the requirements and...

Well, for rate limiting you can use multiple algorithms and methods. The one I had used in one of my previous projects was the least-connection approach. The way this works is: let's say you have 2 servers, one server already has 10 requests processing and the other has 9. What your rate-limiting algorithm does is, once something comes in beyond the limit, it blocks that number of API entries. In layman's terms you can think of a bucket: you configure the bucket so that it only accepts, say, 5 balls, and as soon as a 6th ball comes in, it just spills out. So these are some of the ways you can rate limit things. The reason one basically uses a rate limit is to control API responses and make sure there is no overload whenever the API is being called.
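
A minimal Go sketch of the bucket idea described above: a token bucket with capacity 5 that spills excess requests. The capacity, refill interval, and type names are illustrative, not taken from the project mentioned:

```go
package main

import (
	"fmt"
	"time"
)

// bucketLimiter is a toy token bucket: fixed capacity, refilled periodically.
type bucketLimiter struct {
	tokens chan struct{}
}

func newBucketLimiter(capacity int, refill time.Duration) *bucketLimiter {
	b := &bucketLimiter{tokens: make(chan struct{}, capacity)}
	for i := 0; i < capacity; i++ {
		b.tokens <- struct{}{} // start with a full bucket
	}
	go func() {
		for range time.Tick(refill) {
			select {
			case b.tokens <- struct{}{}: // add a token if there is room
			default: // bucket already full
			}
		}
	}()
	return b
}

// Allow reports whether a request may proceed; the excess "ball" spills out.
func (b *bucketLimiter) Allow() bool {
	select {
	case <-b.tokens:
		return true
	default:
		return false
	}
}

func main() {
	limiter := newBucketLimiter(5, time.Second)
	for i := 1; i <= 7; i++ {
		fmt.Printf("request %d allowed=%v\n", i, limiter.Allow())
	}
}
```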

If I have to achieve polymorphism in Golang without modifying the existing functionality, the only thing I can think of would be the use of interfaces (including empty interfaces) together with struct embedding.
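
A small hedged example of what that could look like: an existing struct is left untouched and embedded in a new type that satisfies an interface. All type names here are illustrative:

```go
package main

import "fmt"

// LegacyLogger stands in for an existing type we are not allowed to modify.
type LegacyLogger struct{}

func (LegacyLogger) Log(msg string) { fmt.Println("legacy:", msg) }

// Notifier is the new behaviour expressed as an interface.
type Notifier interface {
	Notify(msg string)
}

// LoggingNotifier embeds the legacy type and adds the new method
// instead of changing the original.
type LoggingNotifier struct {
	LegacyLogger
}

func (n LoggingNotifier) Notify(msg string) {
	n.Log(msg) // promoted method from the embedded struct
}

func send(n Notifier, msg string) { n.Notify(msg) }

func main() {
	send(LoggingNotifier{}, "build finished")
}
```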

If my service is read-heavy, which database design aspects in Golang would I prioritize to ensure optimal performance? So if the service is read-heavy, the first thing that is clear to me is that my DB transactions, whatever they might be, will have to be idempotent and independent of one another. That means mostly asynchronous reads will be going on, and I don't have to worry much about write operations. In such a case I would usually go for a structured database design, which would be a SQL one, as it is easier to retrieve and search from a structured database compared to a document-style database, i.e. NoSQL ones like MongoDB. For this exact requirement I would prefer a traditional SQL database. That gives me optimal performance, because if I have to search in a very segregated manner, or for just one or two bits from the database, it is easier, and it is also easier to formulate a query for that. Not to mention I can fetch exact, narrowed-down results, which is not as straightforward with unstructured ones. So, yeah, that would be it.
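
A minimal sketch of the read-heavy pattern described, assuming database/sql with a Postgres driver; the table, columns, and connection string are illustrative, and an index on the filtered column is assumed to exist:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres driver; any database/sql driver works
)

func main() {
	// Connection string is illustrative.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// For a read-heavy workload, prepare the hot query once and reuse it;
	// a matching index (e.g. on orders(customer_id)) is assumed.
	stmt, err := db.Prepare(`SELECT id, total FROM orders WHERE customer_id = $1`)
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	rows, err := stmt.Query(42)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id int64
		var total float64
		if err := rows.Scan(&id, &total); err != nil {
			log.Fatal(err)
		}
		fmt.Println(id, total)
	}
}
```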

So in my Go applications, if I need to make sure that every transaction happening against my DB follows the ACID properties, there are certain ways I can make that happen. Let's say you have a microservice architecture and everything is going on asynchronously. ACID means first atomicity, then consistency, isolation, and durability. To make sure each of those is properly validated, the way I would start constructing things in such a microservice design is to first take an approach of asynchronous calls, because with asynchronous calls you don't have to actually wait for your acknowledgment, and at the end of the day you can run a lookup or reconciliation job that synchronizes everything against the DB transactions that have happened; if there is any mismatch in your transactions, you will be able to detect and fix it. That's one way of doing it. But if you want an atomic transaction with consistency at the same time, then an async model is not the right approach. You need a model with request-response acknowledgment, and the DB transactions would usually be done in standard structured databases like SQL. So, yeah, this is how I would usually think of it.
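
A minimal sketch of the request-response, atomic style mentioned at the end, using database/sql transactions; the accounts table and transfer logic are illustrative only:

```go
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

// transferFunds is a sketch of an atomic, consistent write: both updates
// commit together or not at all. Table and column names are illustrative.
func transferFunds(ctx context.Context, db *sql.DB, from, to int64, amount float64) error {
	tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op if Commit succeeds

	if _, err := tx.ExecContext(ctx,
		`UPDATE accounts SET balance = balance - $1 WHERE id = $2`, amount, from); err != nil {
		return err
	}
	if _, err := tx.ExecContext(ctx,
		`UPDATE accounts SET balance = balance + $1 WHERE id = $2`, amount, to); err != nil {
		return err
	}
	return tx.Commit()
}

func main() {
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := transferFunds(context.Background(), db, 1, 2, 50); err != nil {
		log.Fatal(err)
	}
}
```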

Well, in terms of SQL injection, I don't have exposure to it. I haven't really worked with that lately, so I'm not sure how I can prevent an SQL injection attack when my Go backend is handling user-generated database queries. Yeah.

So AND shouldn't be there; it should be OR. Well, the for loop itself is perfectly fine, I would say, as the slices are sorted and the smaller element gets appended each iteration; same goes for the equal case. The logic error is in the for statement's conditional: an AND condition will fail in many corner cases because it was never specified that both slices are of the same length. In such cases the standard approach is to loop against the smaller slice's length. How would that be? Let's say slice a, the first one, is the smaller one. I would loop with i less than the length of a, do the comparison and append to the result, and as soon as this loop is over, whatever the counter for j is at, I would take that counter and append the remaining elements of the second slice directly to the result. That's a better way to do this. For the condition itself, I could even put an OR there, because AND would fail in most cases, so OR would be a better approach than logical AND. Or the first approach is always fine. Or what you can do is a reverse append, where you append from the back of the array; that's also one of the ways you can do it.
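
The original snippet is not shown here, so this is a hedged sketch of the fix described: loop while both indices are in range, then append whatever remains of the longer slice:

```go
package main

import "fmt"

// mergeSorted merges two individually sorted int slices, advancing only
// while both indices are in range and then appending the leftovers,
// so slices of different lengths are handled safely.
func mergeSorted(a, b []int) []int {
	result := make([]int, 0, len(a)+len(b))
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		if a[i] <= b[j] {
			result = append(result, a[i])
			i++
		} else {
			result = append(result, b[j])
			j++
		}
	}
	// Append whatever remains of the longer slice.
	result = append(result, a[i:]...)
	result = append(result, b[j:]...)
	return result
}

func main() {
	fmt.Println(mergeSorted([]int{1, 4, 9}, []int{2, 3, 5, 7, 11}))
}
```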

Interface... Area. Okay. float64. Shape.Area(). Well, one thing I can think of is that, the way we are using the interface here, if I need to add other shapes then I have to create more Area methods, more Area functions, instead of just using one and getting done with it. What we can do is treat the interface as something like a template. What will happen? Let's say I also have a square struct, or a circle where I will have other fields like diameter and radius; my Area for circle, semicircle, whatever it would be called, would each be written according to this code. So it will create several independent methods, which is not usually encouraged, because the reason we use an interface is to keep the overall length of the code small and to have reusability of whatever behaviour is defined in it. That is the only thing I can think about it; otherwise it seems okay to me.
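
For reference, a minimal version of the Shape/Area pattern being discussed; Circle and Rectangle are illustrative concrete types:

```go
package main

import (
	"fmt"
	"math"
)

// Shape is the shared behaviour; each concrete type supplies its own Area.
type Shape interface {
	Area() float64
}

type Circle struct{ Radius float64 }

func (c Circle) Area() float64 { return math.Pi * c.Radius * c.Radius }

type Rectangle struct{ Width, Height float64 }

func (r Rectangle) Area() float64 { return r.Width * r.Height }

// totalArea works for any mix of shapes without knowing the concrete types.
func totalArea(shapes ...Shape) float64 {
	var sum float64
	for _, s := range shapes {
		sum += s.Area()
	}
	return sum
}

func main() {
	fmt.Println(totalArea(Circle{Radius: 2}, Rectangle{Width: 3, Height: 4}))
}
```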

Well, I don't know exactly how I would do it on AWS. But for the usual adherence to my Go services being highly available, there are certain things you can do. First of all, it's usually encouraged to have a profiler; in Go you can hook in a profiler and things like that. One more thing that I usually follow is to create a watcher. It watches over the APIs and has a connection to an event manager. So let's say my API has three calls, create, delete, update: whichever call is getting performed, or whenever there is any modification, the event manager informs me continuously, and as soon as something goes wrong there is usually some recovery-management code written behind it based on the scenario. So that would be profiler and watcher. Also, whenever you write a service, to make sure that service is up to the mark from the start itself, it's a good habit to create some benchmarks in your project: the service is created, so to test that service you keep some benchmarks and things like that. That will give you properly running services. In terms of AWS, I don't know; my work experience with Go wasn't such that I had to deploy my services on AWS. But I can think of certain things in a Kubernetes scenario. What you can do is make sure that the selectors in your services, say if you have a ClusterIP service, are configured so that resource management is properly handled. You can have a load balancer to make sure your services are highly available. Let's say you had 10 pods communicating over localhost: you can configure things so that once utilization goes beyond 60% or 70%, the remaining load coming to the load balancer is shifted to those pods which are idle. So if you had 4 pods and 4 replicas of them, and those replicas were idle, then as soon as utilization goes beyond 60% or 65%, a replica pod gets used. That's one way you can do it; I guess we call it something like a weighted round-robin load-balancing approach. For this question, in terms of AWS, I'm not sure; I haven't used my Go services with AWS yet. But with Kubernetes, yeah, you can do it.
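
A minimal sketch of exposing a profiler and a health endpoint in a Go service, using only the standard library's net/http/pprof; the port and paths are illustrative:

```go
package main

import (
	"log"
	"net/http"
	"net/http/pprof"
)

func main() {
	mux := http.NewServeMux()

	// Liveness endpoint that a watcher or Kubernetes probe can poll.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Expose the Go profiler so CPU and heap profiles can be captured
	// from a running service.
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/heap", pprof.Handler("heap").ServeHTTP)

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```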

I would say that instead of choosing a single service, what you can do is create a family of multiple services. Have Kubernetes, have Redis, and both of them in conjunction will give you so many options and so many services that, first of all, whatever you run in your Go applications will always be available, and so many disaster scenarios are covered. Once that is done, you can take that entire family and host it on AWS. It was already highly available and highly scalable; now you get fault tolerance as well, because let's say it was only an on-premise setup before, and now it's on the cloud. You have more range to scale it horizontally, you have more replica availability, and on top of that AWS itself provides certain fault-tolerance features. Because of that, I would say it's good to create a family of components instead of just choosing a singular one. You can do it with a singular one, there isn't an issue, but in terms of cost and effectiveness, what I usually do is create a combination of multiple pieces: I take the good from Kubernetes, I take the good from Redis, and I take the good from AWS, and I combine them to make a better-performing product. So, yeah, that would be my thinking on this.
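
A small hedged sketch of the Redis-plus-service combination described, assuming the github.com/redis/go-redis/v9 client; the key names, TTL, and loadFromDB helper are hypothetical:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// getProfile reads through a Redis cache before hitting the slower backing
// store; key layout, TTL, and loadFromDB are illustrative.
func getProfile(ctx context.Context, rdb *redis.Client, id string) (string, error) {
	key := "profile:" + id
	if val, err := rdb.Get(ctx, key).Result(); err == nil {
		return val, nil // cache hit
	}
	val := loadFromDB(id) // hypothetical slow path
	if err := rdb.Set(ctx, key, val, 5*time.Minute).Err(); err != nil {
		return "", err
	}
	return val, nil
}

func loadFromDB(id string) string { return "profile-data-for-" + id }

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	val, err := getProfile(context.Background(), rdb, "42")
	fmt.Println(val, err)
}
```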

How would I manage Kubernetes deployment, scaling, and operations for my Go APIs on AWS? See, in terms of Kubernetes management and deployment, from the start you can enable horizontal scaling, so irrespective of the service type you choose, maybe ClusterIP, NodePort, anything like that, it will always have some horizontal scaling. So your scaling is done. In terms of deployment, it's good to have replica sets, so that if one pod goes down you already have another pod to take its place and things are always highly available; your service won't go down. Because in terms of pods, one thing people forget is that they are ephemeral: pods are constantly getting destroyed one after another and new ones are getting created, so it's good to have some replicas as a backup. Now, if there are multiple Go backend APIs running, you can configure the selector in your ClusterIP service, and that will give you highly scalable API request-response performance. When you put that on AWS, things will be a little different, because on AWS you have more scale; your resource-absorbing capability has increased drastically. On AWS you don't have to worry about your on-premise setup or about how to scale going forward, because horizontal and vertical scaling are both available to you at a higher degree. Plus there are ways in which you can configure your load balancer: let's say you are using a weighted round-robin approach in Go, and your API service or your entire Kubernetes workload runs alongside some buckets and things like that, you had some Lambda to support it, and maybe, as in current scenarios, some ML models going on. Based on such an architecture and complex system, your backend API will always be running irrespective of what's happening, you will have fault tolerance, and your services will always stay scaled. What you can also do is create a profiler, have a dashboard assigned to it, maybe a Grafana dashboard or Prometheus for simpler things, and keep consuming the metrics. As soon as something happens and those metrics produce some kind of alert, configure your AWS in such a way that it does on-time load balancing; AWS has that functionality, so just make use of it. So yeah, that is how I would usually do it; I mean, that's what I can think of in terms of utilizing microservices there.
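
A minimal sketch of making a Go API behave well under the Kubernetes pod churn mentioned above: handle SIGTERM and drain in-flight requests so rolling deployments do not drop traffic. Ports and timeouts are illustrative:

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Kubernetes sends SIGTERM before killing a pod; drain in-flight
	// requests so a rolling deployment does not drop traffic.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Println("shutdown:", err)
	}
}
```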