Vetted Talent

Abhishek Soti

I am a software engineer with 9 years of experience developing enterprise-level applications. My expertise lies in Java, Spring Boot, MySQL, and microservices. I have a proven track record of delivering high-quality software solutions that meet customer requirements on time and within budget. I am experienced in working with teams of diverse backgrounds and skill sets, and I have a solid understanding of the software development lifecycle. I also have an eye for detail and a passion for creating robust, scalable applications. My goal is to use my experience and skills to help create innovative solutions that make a difference in people's lives.
  • Role

    Senior Software Engineer

  • Years of Experience

    10.11 years

Skillsets

  • SQL
  • Data Structures and Algorithms
  • Cursor
  • REST APIs
  • Kafka
  • E2E Testing
  • AI Prompt Engineering
  • NoSQL
  • Microservices
  • AWS - 3 Years
  • Spring Boot
  • Kubernetes
  • Java - 9 Years
  • Hibernate
  • Docker

Vetted For

6 Skills
  • Roles & Skills
  • Results
  • Details
  • Staff Engineer (AI Screening)
  • 74%
  • Skills assessed: CI/CD, Python, Java, Microservices, Spring Boot, System Design
  • Score: 67/90

Professional Summary

10.11 Years
  • Jan, 2025 - Nov, 2025 10 months

    Senior Software Engineer

    Agoda
  • Oct, 2024 - Jan, 2025 3 months

    IT Expert (Short-term Contract)

    Volkswagen
  • Feb, 2022 - Oct, 2024 2 yr 8 months

    Application Development Team Lead

    Accenture
  • Jan, 2019 - Feb, 2022 3 yr 1 month

    Senior Analyst Programmer

    Fidelity International
  • Jul, 2017 - Jul, 2018 1 yr

    Founding Engineer

    Pool Counter Internet
  • Mar, 2014 - Apr, 2017 3 yr 1 month

    Java Developer

    Principal Financial Group

Applications & Tools Known

  • Eclipse
  • IntelliJ
  • Postman
  • Jenkins
  • Git
  • GitHub
  • Bitbucket
  • Maven
  • Confluence
  • Jira
  • SonarQube
  • Fortify
  • AWS

Work History

10.11 Years

Senior Software Engineer

Agoda
Jan, 2025 - Nov, 2025 10 months
    Acting Lead for a squad of 4 developers; reduced bug density by 25% and drove the adoption of mutation testing across the core backend service within 6 months. Resolved a critical 10-minute bottleneck in a large-scale 37,000-test suite by decoupling a single-file combinatorial explosion of 17,000 tests into a parallelized job, ensuring faster CI/CD cycles for the core backend. Improved pipeline stability by 30% by implementing test-level retries for flaky tests with over 25 dependencies.

IT Expert (Short-term Contract)

Volkswagen
Oct, 2024 - Jan, 2025 3 months
    Retained as a specialized Technical Consultant to upskill a cohort of 20+ internal developers on SOLID principles and Clean Architecture, providing the foundational expertise required for a major legacy modernization initiative.

Application Development Team Lead

Accenture
Feb, 2022 - Oct, 2024 2 yr 8 months
    Led a 30-month transformation program at Vodafone UK, modernizing their entire e-commerce buying journey with a high-scale dynamic environment capable of handling 10,000 reads per second, powered by Redis. Minimized Kafka rebalance downtime to zero in a mission-critical, Kafka-based event-driven system by implementing a cooperative rebalancing protocol, ensuring uninterrupted service for over a million customers.

Senior Analyst Programmer

Fidelity International
Jan, 2019 - Feb, 2022 3 yr 1 month
    Architected and delivered low-latency REST APIs for multiple domains with response times under 200 ms, leveraging multithreading and heap monitoring to ensure high-performance remote procedure calls for anti-money-laundering and GDPR compliance checks serving 2.67M users.

Founding Engineer

Pool Counter Internet
Jul, 2017 - Jul, 2018 1 yr
    Created a cloud-based school ERP from scratch; designed UI, content, and multiple features using Java, Spring Boot, MySQL, and AWS; implemented Parent, Teacher, and Admin portals plus school search, homework, and attendance modules; successfully onboarded 9 schools with 15K daily active users.

Java Developer

Principal Financial Group
Mar, 2014 - Apr, 2017 3 yr 1 month
    Maintained and enhanced a legacy monolithic application written in Java, JSP, and Struts, and rewrote a batch application in Java that was previously written in COBOL, which involved extracting thousands of lines of logic from stored procedures.

Achievements

  • Helped onboard 100K+ customers within the first month of Vodafone UK project launch.
  • Built extensive test coverage of 95% for all new features which reduced the average defect count by 20%.

Major Projects

3 Projects

Post-Booking Quality Optimization

Jan, 2025 - Present 1 yr 2 months
    Developed innovative features to optimize bug density and improve pipeline stability by implementing test-level retries and OpenStack migration.

Vodafone UK E-commerce Platform

Feb, 2022 - Oct, 2024 2 yr 8 months
    Headed development of a cloud-based e-commerce platform for Vodafone UK, achieving 95% test coverage and enabling successful onboarding of 100K+ users.

SchoolCounter ERP

Jul, 2017 - Jul, 2018 1 yr
    Developed a school ERP solution with multiple functionalities for 15K+ users, enabling automation of school activities.

Education

  • Bachelor of Technology, Computer Science and Engineering

    Guru Gobind Singh Indraprastha University (2013)

AI Interview Questions & Answers

Hello. So, I'm Abhishek Soti. I have 9+ years of back-end experience, and my tech stack is primarily Java, Spring Boot, microservices, REST APIs, SQL and NoSQL databases, Docker, and Kubernetes. Those are the technologies I have worked with, and I am open to learning any new technology the job requires. I have worked across different domains: my first company was a finance company; then I started my own startup, which was EdTech-based; after that I worked in investment management and mutual funds at Fidelity International as a Java developer. Currently I'm with Accenture, and my project is Vodafone UK, where I'm building an e-commerce web application on the back-end side. It's based on Java, Spring Boot, microservices, and REST APIs, and I have also managed the CI/CD of this application using AWS CodeBuild and CodePipeline. It's deployed on ECS using Fargate, and the Docker images are pushed to Elastic Container Registry. So that's pretty much about me.

Okay. So to implement a Python-based script to automate health checks across a network of Spring Boot microservices, we would need Actuator enabled in each application. We would also want a logging and monitoring system, which could be Prometheus and Grafana, an ELK stack, or Dynatrace, through which we can monitor the health of the microservices. On AWS, we can write scripts that keep track of whether each application is up and running. The Python script would keep triggering each service's Actuator health endpoint, and if an endpoint is not responding, we would also write the logic to send an email notification to the respective owners of that microservice, or to whoever handles support for the application.
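
The polling logic described in the answer above can be sketched in Python. This is a minimal sketch under stated assumptions: the service names and URLs are hypothetical, and Spring Boot Actuator's health endpoint reports `{"status": "UP"}` when a service is healthy. The HTTP fetch is injectable so the classification logic can be exercised without a live network:

```python
import json
import urllib.request

# Hypothetical registry of services to poll; in practice this would come
# from configuration or service discovery.
SERVICES = {
    "orders": "http://orders:8080/actuator/health",
    "payments": "http://payments:8080/actuator/health",
}

def is_up(payload):
    """Spring Boot Actuator's health endpoint reports {"status": "UP"}."""
    return payload.get("status") == "UP"

def check_services(fetch=None):
    """Poll every service's health endpoint; return the names that are down.

    `fetch` is injectable so the logic can run without a live network.
    """
    if fetch is None:
        fetch = lambda url: json.load(urllib.request.urlopen(url, timeout=5))
    down = []
    for name, url in SERVICES.items():
        try:
            healthy = is_up(fetch(url))
        except Exception:
            healthy = False  # an unreachable endpoint counts as down
        if not healthy:
            down.append(name)
    return down
```

In the setup described above, a non-empty return value would trigger the email notification to the service owners (for example via `smtplib`).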

So to introduce a new microservice into an existing ecosystem: if the current architecture is based on Eureka service discovery, I would register the new microservice with the Eureka server. If we're not going ahead with Eureka, then any new communication with this service would happen over REST, through RestTemplate or WebClient calls. If the microservice needs to be event-driven instead, I would put a Kafka topic between the new service and the calling application; the new service would act as a consumer, reading messages from its partition. For deployment, I would use a containerized setup: create Docker images and use ECR with ECS, or EKS, on AWS to deploy the application. I would also set up log monitoring through AWS CloudWatch, Prometheus and Grafana, Dynatrace, the ELK stack, or any newer tool available in the market. That's how I would manage the application.
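
The event-driven option above, where the new service consumes messages from its Kafka partition, can be sketched as a consumer loop. In production the `messages` iterable would come from a Kafka client (for example kafka-python's `KafkaConsumer` subscribed to the calling application's topic); here the source and handler are injectable, and all names are hypothetical:

```python
def run_consumer(messages, handle, dead_letter):
    """Process each event from the calling application's topic.

    Failing messages are routed to a dead-letter list (a real service might
    publish them to a dead-letter topic) so one bad event cannot crash the
    consumer; returns how many messages were handled successfully.
    """
    processed = 0
    for msg in messages:  # in production: the Kafka consumer's poll loop
        try:
            handle(msg)
            processed += 1
        except Exception:
            dead_letter.append(msg)
    return processed
```

The dead-letter handling is an assumption added for robustness, not something stated in the answer; the core point is that the new service only consumes, so the producing application needs no knowledge of it.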

So the strategy to ensure zero-downtime deployments for Spring Boot microservices would be based on blue-green deployment. There are two server pools, each keeping a copy of the microservice. The deployment is done first on the green pool, and only after proper health checks pass on that artifact do we initiate the deployment on the blue pool. All of this can be managed through a Python script, and we would also have written an automation suite that verifies the green deployment was done correctly and that everything is in place. At the load balancer, while I'm deploying to the green pool I switch off its incoming requests from the external network, so green takes no calls and everything is routed to the blue pool. Once green is up and running, I redirect the traffic from blue to green, and the deployment starts on blue. Once the blue deployment is also done, the load balancer comes back into the picture and distributes the load between the two pools. That's how we make sure there is zero downtime for the microservice.
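
The sequence described above (drain a pool at the load balancer, deploy, health-check, restore, then repeat for the other pool) can be sketched as a small Python driver. This is a toy simulation under stated assumptions: all names are hypothetical, and a real script would call the load balancer's and deployment system's APIs instead of mutating a list:

```python
class LoadBalancer:
    def __init__(self):
        # Both pools serve traffic before the rollout begins.
        self.active = ["blue", "green"]

    def drain(self, pool):
        """Stop routing traffic to a pool so it can be redeployed."""
        self.active = [p for p in self.active if p != pool]

    def restore(self, pool):
        """Bring a pool back into rotation after it passes health checks."""
        if pool not in self.active:
            self.active.append(pool)

def rolling_blue_green(lb, deploy, healthy):
    """Deploy to green first, verify, then blue, so one pool always serves."""
    for pool in ("green", "blue"):
        lb.drain(pool)          # all traffic now hits the other pool
        deploy(pool)            # install the new artifact on this pool
        if not healthy(pool):   # the automation suite / health checks
            raise RuntimeError(pool + " deployment failed health checks")
        lb.restore(pool)        # pool rejoins the rotation
    return sorted(lb.active)
```

Because a pool is drained before it is touched and only restored after it passes health checks, a failed health check leaves the untouched pool serving all traffic, which is the zero-downtime property the answer describes.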

So, yes, Python can be integrated with Java-based microservices, and it comes in very handy when managing deployments in a cloud environment: everything from image creation to the actual deployment can be managed through Python. The potential pitfall of such an integration is that if a developer is not aware of how to write that kind of code in Python, there is a learning curve involved, which could slow down the actual deployments. I think that is the major pitfall of integrating through Python. Also, if the automation script is not correctly written, or the test cases are not correctly defined, there will be challenges with this approach. So we have to be very sure that whatever we write is correct and of the highest quality.

So, the steps for integrating Python-based AI models into an existing Java microservices architecture would be to place them in the deployment and automation-check pipeline, for example verifying that a microservice is healthy and that the health checks complete correctly. That's where I think Python-based AI models can come into the picture and streamline the entire testing and deployment of our Java-based microservices.

So the problem with this code snippet is that in a multithreaded environment it could lead to multiple instances of the singleton being created, which we do not want. To make sure that does not happen, we would implement a double-checked locking mechanism for creating the instance. Inside the getInstance method, within the `if (instance == null)` block, we would add a synchronized block that locks on Singleton.class, and inside that synchronized block we would check for null again before writing `instance = new Singleton()`. That way we are completely sure that only one singleton instance gets created in a multithreaded environment.
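
The double-checked locking fix described above targets Java (where the field should also be declared `volatile` so threads never observe a partially constructed instance). As a minimal runnable sketch of the same pattern, here it is in Python, with `threading.Lock` standing in for Java's `synchronized (Singleton.class)` block; the class name is hypothetical:

```python
import threading

class Singleton:
    """Hypothetical singleton showing the double-checked locking pattern."""

    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get_instance(cls):
        if cls._instance is None:           # first check, without the lock
            with cls._lock:                 # lock only on the slow path
                if cls._instance is None:   # second check, under the lock
                    cls._instance = cls()
        return cls._instance
```

The first check keeps the common case lock-free; the second check, performed while holding the lock, is what prevents two threads that both saw `None` from each creating an instance.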

Yeah. So this code intends to lazily initialize an instance of a service. The problem with the approach is that while one thread is accessing the getServiceInstance method, another thread can come in and access it as well, and both can end up creating a new service instance at the same time. We eventually get two service instances, which is not correct in a multithreaded environment. Instead, we should implement a locking mechanism around the instance creation so that only a single instance of the service gets created. That's how it should be done.

So to develop a comprehensive backup strategy for a microservices ecosystem that spans multiple data stores, first of all we should place the data stores in different availability zones, so that if there is a mishap in one availability zone, another data store still keeps a safe copy of the entire database. We should also implement a master-slave architecture across the data stores so that we can maintain consistency: the multiple slaves are responsible for the read operations, and the single master is responsible for the write operations, so writes only ever happen on one copy of the data. We should regularly back up all the data on the master and regularly replicate it to the slaves, so that if the master goes down for some time, any of the slaves can take its place and act as the master copy for the whole multi-data-store architecture. And, as I said, keeping these data stores across multiple availability zones ensures a zero-data-loss policy.
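
The master-slave (primary/replica) split described above can be sketched in Python, with plain dicts standing in for the data stores and all names hypothetical: writes go only to the primary and are replicated out, reads are spread round-robin across the replicas, and a replica can be promoted if the primary fails.

```python
import itertools

class ReplicatedStore:
    """Toy primary/replica router; dicts stand in for real data stores."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = list(replicas)
        self._next_replica = itertools.cycle(self.replicas)

    def write(self, key, value):
        # All writes go to the single primary (master) ...
        self.primary[key] = value
        # ... and are replicated to every replica (asynchronously, in a
        # real system, rather than inline like this).
        for replica in self.replicas:
            replica[key] = value

    def read(self, key):
        # Reads are load-balanced round-robin across the replicas (slaves).
        return next(self._next_replica).get(key)

    def promote_replica(self):
        # Failover: the first replica takes over as the new primary.
        # Assumes at least one replica remains to serve reads afterwards.
        self.primary = self.replicas.pop(0)
        self._next_replica = itertools.cycle(self.replicas)
```

Routing all writes through one primary is what gives the consistency property the answer mentions; the inline replication loop is the simplification here, since real systems replicate asynchronously and must handle replication lag during failover.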