Tackling engineering challenges with creativity, simplicity, and a fresh perspective is what drives me. I thrive on learning and applying new concepts and technologies to develop impactful solutions.
Areas of Expertise:
Algorithm design & analysis, data structures, optimization of space and time complexity, design patterns, OOP
Concurrency models: resource sharing, message passing, actor model, reactive model, CSP, co-routines
Distributed systems design: HA, eventual consistency, CQRS, event sourcing, distributed transaction management (saga)
Microservices: choreography and orchestration, cloud design patterns
Operating systems, computer architecture
Programming languages: Java, Python
Databases: SQL (Oracle, MySQL), NoSQL (HBase, Cassandra, MongoDB)
Message brokers: Kafka, RabbitMQ
Security: OAuth2, OpenID Connect
Frameworks: Spring Framework, Spring Boot, Spring Cloud
Lead Engineer, Growth Backend Team — Meesho
Member Of Technical Staff — Salesforce
Software Engineer (Intern) — GE Digital
Tools: Kafka, RabbitMQ, K8s, Jenkins, Docker
I'm currently working as a software developer at Meesho, and I've been here for the last 3 years. In my role, I've been managing multiple services, especially growth-related services, which are responsible for acquisition and activation of new customers on the platform. These include the offers platform, the loyalty platform, a referral service, and a growth marketing service that sends data to Google and Facebook so that we can show ads on their platforms; I manage that service as well. Prior to Meesho, I was working with Salesforce for around 2 years. At Salesforce, I was part of the loyalty team, where we built the loyalty program product from scratch; it was a multi-tenant product that our customers can use. I have been working with microservices architecture for the last 5 years, and I have explored various SQL and NoSQL databases.
Okay. So to configure this Docker container, first of all I will have a Jenkins job which will do the build of my code, and I will have a Dockerfile in my code. Using that Dockerfile, Docker will know what requirements I need for this build to complete. Docker will resolve those resources and create a build; the resource requirements (memory, CPU) are also specified, and Docker will create a container that has all those capabilities. Now, to pull the code from the repository, in my Jenkinsfile I will give an S3 link. Basically, my git repository will be uploaded to that S3 location, and I will pull the code from that S3 link.
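The build step described above could be sketched with a Dockerfile like the following, assuming a Maven-based Java service; the image tags, paths, and jar name are hypothetical, and runtime CPU/memory limits would normally be set by the orchestrator rather than here:

```dockerfile
# Hypothetical multi-stage build: compile with Maven, run on a slim JRE image
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn -q package -DskipTests

FROM eclipse-temurin:17-jre
WORKDIR /app
# Jar name "service.jar" is illustrative
COPY --from=build /app/target/service.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

A Jenkins stage would then run `docker build` against this file after checking out the code.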
Okay. So basically, Kubernetes will help here. There is the HPA (Horizontal Pod Autoscaler), which I will use along with Envoy proxy. Envoy proxy will basically work as a load balancer, and HPA will watch the resource utilization of my pods, like the current CPU and memory usage. If required, HPA will scale the pods out or in as per the current load on my system. In this way, I can build a scalable setup using Kubernetes. Also, if this question is about the databases, I will utilize one of the cloud providers like GCP or AWS. Let's say I'm using MySQL: I can connect to the RDS service for MySQL and store my data there.
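The scaling behaviour described above can be sketched as an HPA manifest; the deployment name and replica bounds are hypothetical:

```yaml
# Hypothetical HPA: scale pods between 2 and 10 based on CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: growth-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: growth-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```

Note that the HPA is a control-plane controller rather than a per-pod sidecar; it polls the metrics of the target deployment and adjusts the replica count.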
Okay. To optimize the API, first of all I will make sure I have proper capacity, so that my pods run at around 60% CPU and 60% memory utilization. Basically, I need to define thread pools inside my service so that I can parallelize the work and maximize throughput. Also, I can delay processing: I can push the work to Kafka and do it async, freeing my HTTP thread so that it can handle other requests. Also, I can move to reactive Java. In reactive Java, as soon as a Netty thread comes in with the request and starts the processing, it gets freed. There will be a callback when my response is ready: my controller will give a callback, and after that the HTTP/Netty thread will pick up the response and give it to the client. Otherwise, it is free to take up other requests.
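The "free the request thread" idea above can be sketched with plain `CompletableFuture` on a bounded pool; the names `handleRequest` and `slowWork` are illustrative, not from the original, and in a real Spring service the framework would manage the threads:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: offload slow work from the request thread to a bounded worker pool,
// so the HTTP thread returns immediately and the response completes via callback.
public class AsyncOffload {
    // Explicitly bounded pool instead of creating threads per request.
    static final ExecutorService WORKERS = Executors.newFixedThreadPool(4);

    static CompletableFuture<String> handleRequest(String payload) {
        // Calling (HTTP) thread returns at once; slowWork runs on WORKERS.
        return CompletableFuture.supplyAsync(() -> slowWork(payload), WORKERS);
    }

    static String slowWork(String payload) {
        return "processed:" + payload; // stand-in for real processing
    }

    public static void main(String[] args) throws Exception {
        // The callback/join fires when the result is ready.
        System.out.println(handleRequest("order-42").get(5, TimeUnit.SECONDS));
        WORKERS.shutdown();
    }
}
```

A `thenAccept` callback could be chained instead of blocking with `get`, which is closer to the reactive style described above.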
There will be a master branch, which goes into production. There will be a develop branch, which is used for testing in pre-prod environments. It will be a copy of the master branch, but let's say a new feature comes: it will first go to develop, be tested in the pre-prod environment, and then get merged to master. All the feature branches will be pulled from this develop branch.
I will be maintaining connection pools for MySQL in the code, so that every time a request comes to read or write, I do not create a new connection each time. I will be creating connection pools: I will specify the minimum connections that I will need, and I will also specify the max connections, if the requirement is there during high load.
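With Spring Boot, this pooling is usually configured through HikariCP properties rather than hand-rolled; a minimal sketch, with the URL and sizes as hypothetical values:

```properties
# Hypothetical Spring Boot / HikariCP pool settings: reuse pooled connections
# instead of opening one per request; cap the pool for high load.
spring.datasource.url=jdbc:mysql://localhost:3306/appdb
spring.datasource.hikari.minimum-idle=5
spring.datasource.hikari.maximum-pool-size=20
spring.datasource.hikari.connection-timeout=3000
```

`minimum-idle` and `maximum-pool-size` correspond to the min/max connection counts mentioned above.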
I cannot see any unit tests here. If the ProcessBuilder start fails, in the catch block I'm only printing the stack trace, which is wrong. I should have logged it with a logging library: let's say I'm using SLF4J, it will do that processing in the background, asynchronously. This e.printStackTrace() will print the stack trace to my console synchronously, which will take a lot of time. So I really should be using log files and a logger here. Also, we can have some alerts here to notify the developers that the Docker start failed. Also, there are no retries here; in the catch block there should be retries as well.
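The fixes suggested above could look like the following sketch; `java.util.logging` stands in for SLF4J to keep it dependency-free, and the `echo` command is a stand-in for the real docker invocation:

```java
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: replace e.printStackTrace() with a logger and retry the process
// start a few times before giving up (where an alert could then be fired).
public class RetryingLauncher {
    private static final Logger LOG = Logger.getLogger(RetryingLauncher.class.getName());

    static int runWithRetries(int maxAttempts, String... command) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                Process p = new ProcessBuilder(command).inheritIO().start();
                if (p.waitFor(30, TimeUnit.SECONDS) && p.exitValue() == 0) {
                    return 0; // success
                }
                LOG.warning("attempt " + attempt + " exited non-zero or timed out");
            } catch (Exception e) {
                // Logged with the stack trace preserved, instead of printStackTrace().
                LOG.log(Level.SEVERE, "attempt " + attempt + " failed to start", e);
            }
        }
        return 1; // all attempts failed; notify/alert the developers here
    }

    public static void main(String[] args) {
        System.out.println(runWithRetries(3, "echo", "container started"));
    }
}
```

In production the logger would typically be an async SLF4J/Logback appender, and the failure path would emit a metric that the alerting setup watches.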
So the usual load balancing strategies are round robin, where I distribute the load equally in a round-robin fashion to all the servers, and that is pretty scalable. If there are multiple requests from the same user, or there are some power users in our system, then we can use sticky sessions for them and cache their responses in the pods. So for power users we can go with sticky sessions; otherwise, round robin usually works well. Also, if I want to use sticky sessions, we can use consistent hashing here to spread the load across my servers in a balanced way.
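The consistent-hashing idea above can be sketched with a `TreeMap`-based hash ring; class and server names are illustrative:

```java
import java.util.TreeMap;

// Minimal consistent-hash ring: the same user key always maps to the same
// server (stickiness), and removing a server only remaps that server's keys.
// Virtual nodes smooth out the distribution across the ring.
public class ConsistentHashRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();
    private final int virtualNodes;

    public ConsistentHashRing(int virtualNodes) { this.virtualNodes = virtualNodes; }

    private int hash(String key) {
        return key.hashCode() & 0x7fffffff; // non-negative hash for ring positions
    }

    public void addServer(String server) {
        for (int i = 0; i < virtualNodes; i++) ring.put(hash(server + "#" + i), server);
    }

    public void removeServer(String server) {
        for (int i = 0; i < virtualNodes; i++) ring.remove(hash(server + "#" + i));
    }

    public String serverFor(String userKey) {
        if (ring.isEmpty()) throw new IllegalStateException("no servers");
        // First virtual node clockwise from the key's hash; wrap to the start.
        Integer k = ring.ceilingKey(hash(userKey));
        return ring.get(k != null ? k : ring.firstKey());
    }

    public static void main(String[] args) {
        ConsistentHashRing ring = new ConsistentHashRing(100);
        ring.addServer("server-1");
        ring.addServer("server-2");
        ring.addServer("server-3");
        // Same user key lands on the same server every time.
        System.out.println(ring.serverFor("user-123").equals(ring.serverFor("user-123")));
    }
}
```

A production ring would use a stronger hash than `hashCode()` (e.g. MurmurHash) for a more even spread; the structure is the same.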
Okay. So first of all, all the configs can be specified in YAML files in my code, and my Kubernetes cluster will have a load balancer, which will be handled through Envoy proxy. In my cluster, pods of multiple services will be running on the shared resources of my Kubernetes cluster. Inter-service communication can also be enabled if one service wants to call another service, so we should enable that as well.
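A minimal sketch of those YAML configs, assuming a hypothetical `offers-service` with a ClusterIP Service so other pods in the cluster can call it; all names, image tags, and resource numbers are illustrative:

```yaml
# Hypothetical Deployment + Service: pods running on the shared cluster,
# reachable by other services through the Service name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: offers-service
spec:
  replicas: 3
  selector:
    matchLabels: { app: offers-service }
  template:
    metadata:
      labels: { app: offers-service }
    spec:
      containers:
        - name: offers-service
          image: registry.example.com/offers-service:1.0.0
          resources:
            requests: { cpu: "250m", memory: "256Mi" }
            limits:   { cpu: "500m", memory: "512Mi" }
---
apiVersion: v1
kind: Service
metadata:
  name: offers-service
spec:
  selector: { app: offers-service }
  ports:
    - port: 80
      targetPort: 8080
```

Other services in the cluster would then reach it at `http://offers-service` via cluster DNS.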
If there are peaks in the requests, so there are some peak times where I'm getting a spike in requests, this architecture can scale on the fly automatically. We do not need to do the scaling on our side, so it usually works well for spiky loads.
Yeah. So basically Grafana reads data from a Prometheus server in the usual setup. I can have some custom metrics which my pods will send to the Prometheus server, and in Grafana I can set up various alerts: let's say, if in a minute the number of 5xx responses goes above a particular threshold, then an alert goes to the developers. Also, I should have graphs for different JMX metrics, and graphs for different application metrics like 5xx count, p99 latency, and number of requests. These kinds of metrics are very essential here.
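The 5xx alert described above could be expressed as a Prometheus alerting rule; the metric name and threshold are hypothetical (the metric shown is the shape Spring Boot's Micrometer exporter typically produces):

```yaml
# Hypothetical alerting rule: fire when 5xx responses exceed a threshold
# sustained over one minute.
groups:
  - name: api-alerts
    rules:
      - alert: High5xxRate
        expr: sum(rate(http_server_requests_seconds_count{status=~"5.."}[1m])) > 5
        for: 1m
        labels:
          severity: page
        annotations:
          summary: "5xx responses above threshold in the last minute"
```

Grafana can render the same `rate(...)` expression as a dashboard panel alongside p99 latency and request-count graphs.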