Vetted Talent

Sakthivel Balakrishnan

Full Stack Developer with nearly 6 years of professional experience building enterprise microservices applications, helping enterprise clients solve problems with efficient and effective solutions built on ReactJS, Java, Spring Boot, and microservice technologies, while applying GCP and CI/CD best practices. Well versed in the analysis, design, development, and testing of enterprise applications using Java and Spring Boot as core technologies.

  • Role

    Full Stack Developer

  • Years of Experience

    7 years

  • Professional Portfolio

    View here

Skillsets

  • Microservices
  • AWS
  • Spring Test
  • Spring Security
  • Spring Data JPA
  • Spring Batch
  • Restful APIs
  • Postman
  • PCF
  • Maven
  • SQL
  • PostgreSQL
  • Oracle
  • MySQL
  • Mockito
  • Java - 6 Years
  • Kubernetes
  • JUnit
  • HTML5
  • GraphQL
  • Docker
  • CSS3
  • Azure DevOps
  • Angular
  • React - 2 Years
  • JavaScript - 2 Years
  • Jenkins - 2 Years
  • Spring Boot - 6 Years
  • Git - 6 Years

Vetted For

14 Skills
  • Roles & Skills
  • Results
  • Details
  • Senior System Engineer - Backend (AI Screening)
  • 67%
  • Skills assessed: Apache Kafka, DevOps, RabbitMQ, Web, Hibernate, RESTful APIs, Spring, Android, Docker, Git, iOS, Java, Kubernetes, SQL
  • Score: 60/90

Professional Summary

7 Years
  • Mar, 2025 - Present · 9 months

    Full Stack Developer

    Wayfair
  • Sep, 2023 - Mar, 2025 · 1 yr 6 months

    Full Stack Developer

    Walmart Global Technologies
  • Apr, 2021 - Sep, 2023 · 2 yr 5 months

    Full Stack Developer

    NielsenIQ
  • May, 2019 - Apr, 2021 · 1 yr 11 months

    Back-end Developer

    Cognizant Technology Solutions

Applications & Tools Known

  • Java
  • Spring Boot
  • Microservices
  • React Native
  • Angular
  • ReactJS
  • GraphQL
  • Azure DevOps
  • PCF
  • Zipkin
  • GCP

Work History

7 Years

Full Stack Developer

Wayfair
Mar, 2025 - Present · 9 months

  • Engineering new features for the Admin Sales Suite using React, Java, and Spring Boot to enhance support team capabilities.
  • Implementing automated CI/CD pipelines with Buildkite to guarantee timely and reliable software delivery.
  • Architecting and deploying scalable Spring Boot microservices on AWS to ensure high availability and resilience.
  • Developed a user impersonation module with an iframe, driving a 15% reduction in support ticket resolution time.
  • Managed and resolved critical production incidents through PagerDuty, ensuring 99.9% uptime and stability of the Quotes Suite.
  • Improved the deployment process by introducing automated pipelines, leading to faster and more frequent releases.
  • Worked closely with design teams to integrate user feedback into product features, enhancing overall user satisfaction.

Full Stack Developer

Walmart Global Technologies
Sep, 2023 - Mar, 2025 · 1 yr 6 months

  • Designed and implemented microservices from the ground up using Java and Spring Boot, modernizing a legacy employee benefits management system.
  • Developed Coverage Audit, a high-volume batch processing tool built with Spring Batch, to audit and verify coverage for thousands of associates.
  • Contributed to the BOSS (Benefits One Stop Solution) application by developing Angular front-end components and Java-based microservices to enhance functionality.
  • Worked with Walmart's cloud-native platform, built on Looper Pro, to deliver reliable CI/CD pipelines.
  • Led back-end development of the Nextgen Beneficiary Online platform, achieving a 20% increase in processing efficiency.
  • Improved system performance by optimizing back-end services, resulting in faster response times and a better user experience.

Full Stack Developer

NielsenIQ
Apr, 2021 - Sep, 2023 · 2 yr 5 months

  • Architected and developed the E-POS microservices platform using Java, Spring Boot, and React, centralizing data collection from vendors for thousands of auditors.
  • Developed the Advance Purchase microservice application, enhancing data collection and maintenance for key customer-facing applications.
  • Ensured timely delivery of projects by implementing CI/CD pipelines using Jenkins to automate deployments to AWS.
  • Streamlined the deployment process with automated CI/CD tooling, leading to faster deployments and more frequent releases.

Back-end Developer

Cognizant Technology Solutions
May, 2019 - Apr, 2021 · 1 yr 11 months

  • Architected and developed enterprise microservices for the 7-Eleven Enterprise Platform, enabling efficient chunked data transfer and deployment on Pivotal Cloud Foundry (PCF).
  • Designed and tested robust Spring Boot applications for the American Airlines Pilot Trade System, considerably enhancing pilot scheduling efficiency.
  • Engineered a scalable data transfer microservice for PepsiCo, consolidating large volumes of data from diverse sources into a centralized repository.
  • Ensured consistent and timely delivery by leveraging Jenkins and adopting pipeline-as-code best practices.
  • Led the implementation of CI/CD pipelines using Azure DevOps, achieving an 80% reduction in manual deployment efforts across multiple projects.

Achievements

  • Patent : http://ipindia.gov.in/writereaddata/Portal/IPOJournal/1_4785_1/Part-1.pdf (201841045042)

Major Projects

3 Projects

E-POS microservices platform

    Centralizing data collection from vendors for thousands of auditors.

Advance Purchase microservice application

    Improving the data collection and maintenance experience for key customer-facing applications.

Coverage Audit (high volume batch processing tool)

    Audits and verifies coverage for thousands of associates.

Education

  • B.Tech. in Computer Science Engineering

    SRM Institute of Science & Technology (2019)

Interests

  • Technology Research

AI-Interview Questions & Answers

    Can you help me understand more about your background by giving a brief introduction about yourself? Sure. I have around 5.5 years of experience in the IT industry, and I have worked for tech giants like Walmart and Nielsen, as well as retail clients such as 7-Eleven and PepsiCo. My expertise is in Java, particularly microservices, and I have delivered enterprise solutions for most of the clients I have worked with. I also have some experience building React applications, so technically I am a full stack engineer right now.

    How would you manage version control on the job while ensuring best practices for collaboration? Sure. For version control we use a VCS such as Git. Whenever we take up a story, we create a feature branch (or a hotfix branch, depending on the situation) cut from the appropriate base branch. We make our code changes on that branch and open a pull request, which shows the difference between the base branch and the changes we pushed. The pull request goes out to several code reviewers, and once the branch clears all the code reviews, we merge the changes into the base branch. From there the release moves through the stages: first to the dev environment, then to the QA environment, and, after all the confirmations and back-and-forth changes, to production.

    What techniques would you use to monitor and troubleshoot Java applications deployed on Kubernetes? Sure. There are multiple ways to monitor and troubleshoot applications. We can use third-party log-aggregation tools, such as the ELK stack, to fetch the logs and visualize them. In Kubernetes we also have the liveness probe and the readiness probe: the readiness probe lets us check whether the application is ready to receive traffic, and the liveness probe lets us check whether the application is still alive. Using these probes we can tell whether the application is up. For Kubernetes logs we can use Kibana, and we can also connect third-party log-management tools such as Datadog, the ELK stack, and many others.
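The liveness/readiness idea above can be sketched with plain JDK classes. This is a minimal stand-in using the JDK's built-in com.sun.net.httpserver, not a production setup: in a real Spring Boot deployment these would be Actuator health endpoints wired to the pod's livenessProbe/readinessProbe, and the paths and class names here are illustrative.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal sketch of liveness/readiness endpoints. Kubernetes would poll
// /healthz to decide restarts and /readyz before routing traffic to the pod.
public class HealthEndpoints {
    // Flipped to true once startup work (DB pools, caches) finishes.
    static final AtomicBoolean ready = new AtomicBoolean(false);

    // Liveness: if the process can answer at all, it is alive.
    static int livenessStatus() { return 200; }

    // Readiness: 200 only once the app can actually take traffic.
    static int readinessStatus() { return ready.get() ? 200 : 503; }

    public static HttpServer start(int port) {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/healthz", ex -> respond(ex, livenessStatus(), "alive"));
            server.createContext("/readyz", ex -> respond(ex, readinessStatus(),
                    readinessStatus() == 200 ? "ready" : "starting"));
            server.start();
            return server;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    private static void respond(HttpExchange ex, int code, String body) throws IOException {
        byte[] bytes = body.getBytes();
        ex.sendResponseHeaders(code, bytes.length);
        try (OutputStream os = ex.getResponseBody()) { os.write(bytes); }
    }
}
```

The key design point is that readiness is stateful while liveness is not: a pod that is alive but not yet ready is kept out of the load balancer rather than restarted.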

    In what scenarios would you choose a NoSQL database over a relational one? It depends on the business use case. A relational database is the choice when we plan to store data with a fixed structure, where all the columns are filled and there are minimal empty values. A NoSQL database suits unstructured data, where every row can have a different shape. Take an e-commerce application: a pen has attributes like ink color and pen type (ballpoint, gel, and so on), while a monitor has attributes like screen size (27-inch, 32-inch, and so on). With so many item-specific parameters, a relational schema would leave a lot of empty columns, so in such cases we use a NoSQL database. There are also solutions like Cassandra where we can use a combined approach.
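The pen-vs-monitor point can be made concrete in Java: a fixed relational row would need a column for every possible attribute (mostly NULL), while a document-style map lets each product carry only its own attributes. A minimal sketch; the product and attribute names are illustrative.

```java
import java.util.Map;

// Sketch: why sparse, per-item attributes fit a document model better than a
// fixed relational schema. A relational table would need columns for every
// possible attribute (ink_color, pen_type, screen_size, ...), mostly empty;
// a document stores only the attributes each product actually has.
public class ProductDoc {
    public static Map<String, String> pen() {
        return Map.of("type", "pen", "ink_color", "blue", "pen_type", "ballpoint");
    }

    public static Map<String, String> monitor() {
        return Map.of("type", "monitor", "screen_size", "27-inch");
    }
}
```

Neither document has empty fields, which is exactly the sparsity that would produce NULL-heavy rows in a single relational products table.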

    What precautions should we take when processing sensitive information through RESTful APIs? Yeah, sure. While processing sensitive information in RESTful APIs, we make sure the sensitive data is not passed as a request parameter; it is passed in the request body, and the tokens are passed in the headers. We also make sure encryption happens in the back end. For a practical use case, take a Social Security number, which is highly sensitive data: we encrypt it in the database, and while fetching it from the database we decrypt it and send it to the UI. Basically, we never send it as a request parameter; we send it in the request body so that the data stays more secure.
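The encrypt-in-the-back-end step for the SSN example could be sketched as below. This is a minimal illustration using AES-GCM from the JDK's javax.crypto; the class and method names are mine, and a real system would fetch the key from a secrets manager rather than generate it in code.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;

// Sketch: encrypt a sensitive field (e.g. an SSN) before persisting it and
// decrypt it when fetching for the UI. AES-GCM provides confidentiality and
// integrity; the random IV is stored prefixed to the ciphertext.
public class FieldCrypto {
    private static final int IV_LEN = 12;    // 96-bit IV, recommended for GCM
    private static final int TAG_BITS = 128; // authentication tag length

    public static String encrypt(SecretKey key, String plaintext) {
        try {
            byte[] iv = new byte[IV_LEN];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
            byte[] ct = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return Base64.getEncoder().encodeToString(out);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static String decrypt(SecretKey key, String encoded) {
        try {
            byte[] in = Base64.getDecoder().decode(encoded);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key,
                    new GCMParameterSpec(TAG_BITS, in, 0, IV_LEN));
            byte[] pt = cipher.doFinal(in, IV_LEN, in.length - IV_LEN);
            return new String(pt, StandardCharsets.UTF_8);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            return kg.generateKey();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The stored value is opaque Base64, so even a leaked database row does not expose the SSN without the key.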

    How can you optimize a SQL query that takes too long to fetch data in a Java-based application? There are multiple ways. Specific to a Java-based application, we have JPQL (Java Persistence Query Language), where we write the query against the entities and Hibernate in turn generates a more optimized query for the database. If you are writing a native query, there are optimization steps: remove unnecessary DISTINCT clauses, and use indexed columns and unique IDs in the conditions and join statements. If the data is huge, we go for partitioning the table and make sure the partitioning key is used in those queries.

    Here is a Kubernetes YAML configuration snippet for deploying an ML model application, and there seems to be an issue with slow model responses. What might be the cause, and which section of the configuration could be contributing to it? Looking through the apiVersion, kind, metadata, and spec sections, including the labels, template metadata, and the resource requests and limits: I see there might be limitations on the memory and CPU, most likely the cause, and I don't see any readiness or liveness probe checks happening here to verify whether the pod is ready. So I would go with updating the memory and CPU values and check whether the pod has sufficient memory to execute the application.

    Given this Dockerfile, the application is unable to connect to the MongoDB database. By examining the Dockerfile snippet, can you pinpoint the possible reason for the failure? In this Dockerfile I don't see any Mongo image or connection configuration being added; to be specific, it just runs spring-boot-app.jar. There might be multiple reasons. First, I don't see any environment parameters being passed: an application might have multiple environments, such as dev, stage, UAT, and prod, and the environment parameter isn't passed here, so that might be an issue. Also, without knowing which database the application is supposed to connect to, we might not be able to figure out the issue. Another possible cause is the network or firewall configuration; all of those would be investigated based on the logs we see.
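To make the missing-environment-parameter point concrete, the application could read its connection string from an environment variable so the same image runs in every environment. A minimal sketch; the variable name MONGO_URI and the default value are illustrative, and Docker would supply the value at run time with `docker run -e MONGO_URI=...`.

```java
// Sketch: resolve the database connection string from the environment so one
// image serves dev/stage/UAT/prod. MONGO_URI is an illustrative name.
public class DbConfig {
    static final String DEFAULT_URI = "mongodb://localhost:27017/app";

    public static String mongoUri() {
        return resolve(System.getenv("MONGO_URI"));
    }

    // Separated out so the fallback logic is easy to exercise directly.
    static String resolve(String fromEnv) {
        if (fromEnv == null || fromEnv.isBlank()) {
            // Inside a container, "localhost" is the container itself, not the
            // host machine -- a common cause of exactly this connection failure.
            return DEFAULT_URI;
        }
        return fromEnv;
    }
}
```

With this in place, a failing connection narrows to two suspects: the variable was never passed to the container, or the URI points at a host the container's network cannot reach.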

    How would you isolate and fix a memory leak in a Java application running in a Docker container? There are multiple ways. Technically, whenever there is a memory leak, the Java application can produce a heap dump (an .hprof file), and we can analyze that file to trace the leak. As a short-term fix, you can restart the pod or container to get the application going again. And if you want the application running in a clean container, you need to check and clean up the .hprof files the Java application creates when it leaks memory.
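The leak shape the heap dump usually reveals is a long-lived collection that only grows; a minimal sketch below (the class names are mine). For real diagnosis the JVM would be started with the standard `-XX:+HeapDumpOnOutOfMemoryError` flag, and the resulting .hprof inspected with a tool such as Eclipse MAT, or captured on demand with `jmap`/`jcmd`.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the most common Java leak pattern: a long-lived (here static)
// collection that is appended to on every request and never cleared. In a
// heap dump this shows up as one GC root retaining most of the heap.
public class LeakyCache {
    // Long-lived root: entries added here are never garbage collected.
    static final List<byte[]> CACHE = new ArrayList<>();

    public static void handleRequest(int payloadBytes) {
        CACHE.add(new byte[payloadBytes]); // grows forever -> eventual OOM
    }

    // The fix is bounding the collection (eviction policy, weak references)
    // or scoping the data to the request instead of a static field.
    public static void evictAll() {
        CACHE.clear();
    }
}
```

Restarting the container resets CACHE and buys time, which is why a restart "fixes" the symptom while the dump analysis finds the cause.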

    Propose a monitoring solution for JVM memory management for a Spring application. Sure. Grafana is one I have used. What Grafana gives us is the utilization percentage for the application: for example, if you have allocated 500 megabytes of RAM to an application, from the Grafana dashboard you can monitor what percentage of that memory is being used by the application. And with the help of Grafana we can also send notifications, for example send a mail once utilization crosses 80 percent. Those are the kinds of monitoring we can do through Grafana.

    Describe an approach to handle schema changes in SQL databases for a Java application following a microservices architecture. There are multiple ways. In a microservices architecture, if you connect the same database to all the microservices, we call that the shared-database anti-pattern; that is one of the design patterns we consider while designing microservices. If you want to handle schema changes, you typically define the schema in the properties file and utilize it from there, and the same applies if you are using native queries. I have seen scenarios where multiple schemas are used, or a different schema per environment. So proper configuration through properties, along with following the proper design pattern, should help solve the issue. Speaking of microservice database design patterns, we have the shared-database anti-pattern, where every service connects to one database, and the database-per-service pattern, where each microservice has its own database.