
Senior Software Engineer
Agoda - IT Expert (Short-term Contract)
Volkswagen - Application Development Team Lead
Accenture - Java Developer
Principal Financial Group - Founding Engineer
Pool Counter Internet - Senior Analyst Programmer
Fidelity International
Eclipse
IntelliJ
Postman
Jenkins
Git
GitHub
Bitbucket
Maven
Confluence
Jira
SonarQube
Fortify
AWS
Hello, I'm Abhishek Soti. I have 9+ years of back-end experience, and my tech stack is primarily Java, Spring Boot, microservices, REST APIs, SQL and NoSQL databases, Docker, and Kubernetes. These are the technologies I have worked with, and I'm open to learning any new technology the job requires. I have worked across different domains: my first company was a finance company, then I founded my own startup, which was EdTech-based, and after that I worked in investment management and mutual funds at Fidelity International as a Java developer. Currently I'm with Accenture, on the Vodafone UK project, building the back end of an e-commerce web application based on Java, Spring Boot, microservices, and REST APIs. I have also managed the CI/CD of this application using AWS CodeBuild and CodePipeline; it's deployed on ECS using Fargate, and the Docker images are pushed to Elastic Container Registry (ECR). So, yeah, that's pretty much about me.
Okay, so to implement a Python-based script to automate health checks across a network of Spring Boot microservices, we would first need Actuator enabled in each application. We would also need some sort of logging and monitoring system, which could be Prometheus with Grafana, an ELK stack, or Dynatrace, through which we can monitor the health of the Spring Boot microservices. On AWS, the Python script would run on a schedule and keep polling each service's Actuator health endpoint to check whether the application is up and running. If an endpoint stops responding, the script would also have the logic to send an email notification to the respective owners of that microservice, or to whoever is handling support for that application.
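A minimal sketch of such a polling script, assuming hypothetical service names and URLs; the alerting callback is deliberately a stub:

```python
import urllib.request

# Hypothetical registry of services and their Actuator health endpoints.
SERVICES = {
    "order-service": "http://order-service:8080/actuator/health",
    "billing-service": "http://billing-service:8080/actuator/health",
}

def is_healthy(url, timeout=5):
    """Return True if the Actuator health endpoint answers 200 OK."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def run_checks(services, notify, probe=is_healthy):
    """Probe every service; call notify(name) for each one that is down."""
    down = [name for name, url in services.items() if not probe(url)]
    for name in down:
        notify(name)  # e.g. send an email via smtplib/SES to the owners
    return down
```

In practice this would be triggered on a schedule (cron, or a CloudWatch Events rule invoking a Lambda), and `notify` would send the email described above.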
So to introduce a new microservice into an existing ecosystem: if the current architecture is based on Eureka for service discovery, I would register this new microservice with the Eureka server. If I'm not going with Eureka, then whatever new communication needs to happen with this service will be over REST, through RestTemplate or WebClient calls. And if this microservice needs to be event-driven, I would put a Kafka topic between the new service and the calling application, with the new service acting as a consumer, reading messages from its partitions. For deployment, I'll use a containerized setup: I will create the Docker image, push it to ECR, and deploy it on ECS or EKS on Amazon. I will also set up log monitoring through AWS CloudWatch, or Prometheus with Grafana, Dynatrace, the ELK stack, or any other tool available in the market. And, yeah, that's how I will be managing this service.
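The containerization step above can be sketched with a minimal Dockerfile; the base image and artifact name here are illustrative assumptions, not from the original answer:

```dockerfile
# Hypothetical image for the new Spring Boot service.
FROM eclipse-temurin:17-jre
COPY target/new-service-0.0.1-SNAPSHOT.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

The resulting image would be pushed to ECR and referenced from an ECS task definition (or a Kubernetes Deployment on EKS).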
So, the strategy to ensure zero-downtime deployments of Spring Boot microservices would be blue-green deployment. There are two identical environments, each holding a copy of my microservice, and at any moment one of them, say blue, is serving live traffic. The new version is deployed to the idle green environment first. All of this is managed through a Python script, and we would also have an automation suite that runs health checks and tests against green to verify that the deployment was done correctly and everything is in place. While that is happening, the load balancer keeps routing all incoming requests from the external network to blue, so green is not taking any calls. Once green is verified as up and running, I redirect the traffic from blue to green, and the deployment then happens on blue as well. Once the blue deployment is also done, the load balancer comes back into the picture and can distribute the load between the two environments. Because traffic is only ever routed to a healthy, verified environment, there is zero downtime for my microservice.
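A sketch of the orchestration script described above; `deploy`, `health_check`, and `switch_traffic` are hypothetical hooks that would wrap the real deployment and load-balancer APIs:

```python
def blue_green_deploy(version, deploy, health_check, switch_traffic, live="blue"):
    """Deploy `version` to the idle environment, verify it, then cut traffic over.

    deploy(env, version) -- roll the artifact out to `env`
    health_check(env)    -- True if `env` passes its health checks
    switch_traffic(env)  -- point the load balancer at `env`
    Returns the environment now serving live traffic.
    """
    idle = "green" if live == "blue" else "blue"
    deploy(idle, version)
    if not health_check(idle):
        # Idle environment failed verification; live traffic is untouched.
        raise RuntimeError(f"{idle} failed health checks, aborting cutover")
    switch_traffic(idle)  # zero-downtime cutover; old env kept for rollback
    return idle
```

The same function would then be run a second time (with `live` now pointing at the freshly cut-over environment) to bring the other environment up to the new version.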
So, yeah, Python can be integrated with Java-based microservices, and it comes in very handy when we are managing deployments in a cloud environment: everything from image creation to the actual deployment can be orchestrated through Python scripts. The main potential pitfall of such an integration is the learning curve: if a developer is not aware of how to write this kind of code in Python, that learning curve can slow down the actual deployment work. I think that's the major pitfall. Also, if the automation script is not correctly written, or the test cases are not correctly defined, there will be challenges with this approach. So we would have to be very sure that whatever we write is correct and of the highest quality.
So, the steps for integrating Python-based AI models into an existing Java microservices architecture: I would place them in the deployment and automation pipeline, for automated checks, and for judging from the health checks whether a microservice is healthy. That's where I think the Python-based AI models can come into the picture and streamline the entire testing and deployment of our Java-based microservices.
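One illustrative sketch of the idea, with the "model" reduced to a simple statistical anomaly detector over health-check latencies; this is a stand-in assumption for whatever Python model would actually be used, not a prescribed implementation:

```python
import statistics

def is_anomalous(latencies_ms, new_latency_ms, z_threshold=3.0):
    """Flag a health-check latency that deviates strongly from recent history.

    A stand-in for a real Python model: a plain z-score test over the
    recent latency history of one microservice's health endpoint.
    """
    if len(latencies_ms) < 2:
        return False  # not enough history to judge
    mean = statistics.fmean(latencies_ms)
    stdev = statistics.stdev(latencies_ms)
    if stdev == 0:
        return new_latency_ms != mean
    return abs(new_latency_ms - mean) / stdev > z_threshold
```

The deployment pipeline could call such a check after each rollout and hold the cutover if the new version's health checks look anomalous.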
So, the problem with this code snippet is that in a multithreaded environment it could lead to multiple instances of the singleton getting created, which we do not want. To make sure this does not happen, we will implement a double-checked locking mechanism for creating the instance. Inside the getInstance method, after the first `if (instance == null)` check, we put a synchronized block that locks on `Singleton.class`, and inside that synchronized block we check `instance == null` again and only then write `instance = new Singleton()`. The instance field should also be declared `volatile`, so that another thread can never observe a partially constructed object. That way we are completely sure that only one singleton instance is getting created in a multithreaded environment.
Yeah. So this code intends to lazily initialize an instance of a service. The problem with this approach is that while one thread is accessing this getServiceInstance method, another thread can come in and access it at the same time; both can see the instance as null and both create a new service instance. We eventually end up with two service instances, which is not correct in a multithreaded environment. Instead of that, we should implement a locking mechanism around the service instance creation, like the double-checked locking from the previous answer, so that only a single instance of this service is ever created. That's how it should be done.
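Since the scripting examples in this interview are Python, here is the double-checked locking shape sketched in Python with `threading.Lock`; in Java the same fix is the `volatile` field plus `synchronized` block described in the last two answers:

```python
import threading

class ServiceFactory:
    """Lazily initialized singleton using double-checked locking."""

    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get_instance(cls):
        if cls._instance is None:              # first check: lock-free fast path
            with cls._lock:                    # only contended on first creation
                if cls._instance is None:      # second check, under the lock
                    cls._instance = cls()
        return cls._instance
```

The second null check is the crucial part: a thread that lost the race to the lock re-checks before constructing, so at most one instance is ever created.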
So to develop a comprehensive backup strategy for a microservices ecosystem that spans multiple data stores, first of all we should place these data stores in different availability zones, so that if there is a mishap in one availability zone, another data store is still keeping a safe copy of the entire database. We should also implement a master-slave architecture across the data stores to maintain consistency: multiple slaves are responsible for the read operations, and a single master is responsible for the write operations, so that writes only ever happen on one copy of the data. We should regularly back up all the data on the master and regularly replicate it to the slaves, so that if the master goes down for some time, any of the slaves can take its place and act as the master copy for the entire multi-data-store setup. Combined with keeping the data stores in multiple availability zones, that ensures we have an effectively zero-data-loss policy.
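The read/write split described above can be sketched as follows; the primary and replica objects here are hypothetical stand-ins (plain dicts) for real database connections, and replication is modeled synchronously for simplicity:

```python
import itertools

class ReplicatedStore:
    """Route writes to the primary and spread reads across replicas.

    promote_replica models the failover described above: an up-to-date
    replica takes the failed master's place.
    """

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = list(replicas)
        self._rr = itertools.cycle(range(len(self.replicas)))

    def write(self, key, value):
        self.primary[key] = value
        for replica in self.replicas:      # stand-in for replication to slaves
            replica[key] = value

    def read(self, key):
        return self.replicas[next(self._rr)][key]  # round-robin across slaves

    def promote_replica(self):
        """Failover: promote the first replica to primary."""
        self.primary = self.replicas.pop(0)
        self._rr = itertools.cycle(range(len(self.replicas)))
```

A real setup would of course use the database's own replication and election mechanisms; this only illustrates the routing and promotion flow.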