
Prateek Modi

Vetted Talent

With over seven years of experience as a software engineer, I have developed a passion for creating innovative and scalable solutions that address complex business and technical challenges. My field of expertise is identity management, where I have worked on various projects involving AWS Identity and Access Management (IAM), Docker Swarm, and Zuul API Gateway. I am currently seeking new opportunities to leverage my skills and knowledge in a dynamic and collaborative environment that values diversity, creativity, and excellence.


My most recent role was as a Senior Software Engineer at Lowe's India, where I was part of a team that developed an identity management application from scratch. I was mainly involved in designing, developing, and debugging the features that enabled communication between the application and the source systems, operating systems, and interfaces. I applied my proficiency in Golang, AWS IAM, Docker, and Zuul to deliver high-quality code, documentation, and test scripts. I also contributed to performance optimization by reducing the application's memory consumption and improving code efficiency. I enjoyed working with a talented and supportive team that shared my vision and goals for the project.

  • Role

    SDE-II (Java)

  • Years of Experience

    6.6 years


Skillsets

  • Algorithms
  • Angular - 1.5 Years
  • Ansible
  • Apache Camel
  • Apache Kafka - 3 Years
  • Couchbase
  • CSS
  • Data Structures
  • Docker - 3 Years
  • Elasticsearch - 2.5 Years
  • Gin
  • Git - 7 Years
  • Go
  • GoRPC
  • Gradle
  • Groovy
  • Hibernate
  • HTML
  • Java - 6.5 Years
  • JConsole
  • Jenkins
  • Kubernetes - 2.5 Years
  • Maven
  • Microservices
  • MySQL - 5 Years
  • NoSQL
  • OAuth
  • Postgres
  • REST
  • RHEL
  • Splunk
  • Spring Boot - 4.5 Years
  • SVN
  • TeamCity
  • VisualVM

Vetted For

8 Skills
  • Senior Software Engineer - (Onsite, Ahmedabad) AI Screening
  • Result: 56%
  • Skills assessed: .NET, Azure DevOps, C#, Git, SQL, Strategic Thinking, Leadership, Problem Solving Attitude
  • Score: 50/90

Professional Summary

6.6 Years
  • Apr, 2025 - Aug, 2025 (4 months)

    SDE-II (Java)

    Tesco
  • Mar, 2023 - Nov, 2023 (8 months)

    Senior Software Engineer

    Lowe's India
  • May, 2022 - Jan, 2023 (8 months)

    Senior Associate Technology

    Synechron Inc
  • Apr, 2020 - May, 2022 (2 yr 1 month)

    Senior Java Developer

    Reliance Jio Infocomm
  • Jun, 2019 - Apr, 2020 (10 months)

    Senior Java Developer

    Argus System
  • Nov, 2018 - May, 2019 (6 months)

    Senior Java Developer

    World Wide Technology
  • Jun, 2014 - Jul, 2015 (1 yr 1 month)

    Java/J2EE Developer

    Argus System

Applications & Tools Known

  • Zookeeper
  • API Gateway
  • MySQL
  • PostgreSQL
  • Splunk
  • Docker
  • AWS
  • Git
  • SVN
  • Maven
  • Gradle
  • Jenkins
  • Groovy
  • TeamCity
  • Apache Kafka
  • Gin
  • RabbitMQ
  • Spring Security
  • Redis
  • OAuth 2.0
  • Apache Camel
  • Kubernetes
  • Elasticsearch

Work History

6.6 Years

SDE-II (Java)

Tesco
Apr, 2025 - Aug, 2025 (4 months)
    Worked as an SDE-II (contract-to-hire) on the Stock Services team at Tesco, Bengaluru, as part of the Platform & Development team, the backbone of all Stock Services teams. The team owned the development and maintainability of about 40 stock applications, plus the infrastructure and development tooling behind Tesco's microservices-based stock availability, forecasting, and warehousing capabilities. Individually developed features such as pattern analysis for the stock-level APIs microservice: it analyzed query trends for new versus old products and tuned the caching mechanisms to match the observed trends, producing faster responses from the partner APIs. Designed a parent POM for maintainability across all 40 Java microservices, consolidating multi-module projects and internal dependencies into a single parent that each Spring Boot application inherits through its pom.xml. Built Splunk visualization dashboards for API and caching trends. Helped maintain the production Kafka clusters, adopting new ACL-related Kafka SASL security changes, and introduced Kafka MirrorMaker into the existing clusters. Set up RHEL VMs for Kafka, APIs, and applications, with Ansible scripting to automate the process. Maintained Couchbase cluster indexes, buckets, and scopes, comparing data across environments and fixing discrepancies found. Served on the disaster recovery team for migration to SAFE and a replica setup of the production infrastructure, guarding against disasters such as the loss of a whole region's data center, and helped develop and roll out a new dev environment across all projects.
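
As a rough illustration of that trend-driven caching idea, here is a minimal Java sketch using the Caffeine cache library. This is not Tesco's actual code: the library choice, the query-rate threshold, the TTLs, and every name here are assumptions made for the example.

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;

public class StockLevelCache {
    // Hot products (queried often) get a short TTL so stock levels stay fresh;
    // long-tail products get a longer TTL since their levels change rarely.
    private final Cache<String, Integer> hotCache = Caffeine.newBuilder()
            .maximumSize(50_000)
            .expireAfterWrite(Duration.ofSeconds(30))
            .build();
    private final Cache<String, Integer> coldCache = Caffeine.newBuilder()
            .maximumSize(500_000)
            .expireAfterWrite(Duration.ofMinutes(15))
            .build();

    /** Route the lookup to a cache tier based on the observed query trend. */
    public Integer stockLevel(String productId, long queriesPerHour) {
        Cache<String, Integer> tier = queriesPerHour > 1_000 ? hotCache : coldCache;
        return tier.get(productId, this::fetchFromPartnerApi);
    }

    private Integer fetchFromPartnerApi(String productId) {
        return 0; // placeholder for the real partner-API call
    }
}
```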

Senior Software Engineer

Lowe's India
Mar, 2023 - Nov, 2023 (8 months)
    Worked as a Senior Software Engineer (Golang) on an identity management application that tracked identities within the organization. Involved in designing, developing, and debugging features; mainly responsible for developing the drivers that communicate with the source systems. Handled deployments to Dev/Test/Prod using Docker containers and Kubernetes clusters. Implemented a pub-sub model between microservices using Kafka for event-driven callbacks and integration with other microservices. Worked in Golang with frameworks and libraries such as Go Micro, GoRPC, and Gin for the microservices-based architecture. Revised authentication and authorization with OAuth 2.0. Refactored the DDL for several DB schemas, applying database normalization techniques to make data retrieval more efficient.
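
The Lowe's work itself was in Go, but to keep all examples in this profile in one language, here is a minimal sketch of the same event-driven pub-sub idea using the Kafka Java client; the topic name, broker address, and payload are assumptions, not details from the project.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class IdentityEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish an identity-change event; downstream microservices consume
            // it and react via callbacks instead of being invoked synchronously.
            ProducerRecord<String, String> event = new ProducerRecord<>(
                    "identity-events", "user-42", "{\"action\":\"UPDATED\"}");
            producer.send(event, (metadata, e) -> {
                if (e != null) e.printStackTrace();
                else System.out.printf("published to %s@%d%n",
                        metadata.topic(), metadata.offset());
            });
        }
    }
}
```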

Senior Associate Technology

Synechron Inc
May, 2022 - Jan, 2023 (8 months)
    Data Services is the business unit within Morgan Stanley Wealth Management (WM) that runs the mutual funds orders platform. The goal of the project was to transform legacy mainframe systems into resiliency services built as Java microservices. Worked on UAT and production releases, spending roughly 40% of the time debugging UAT bugs and issues daily in coordination with other team stakeholders and QA. Implemented asynchronous communication between microservices using Kafka. Planned, documented, and executed tests to ensure code changes met requirements and specifications. Implemented and integrated REST APIs for newly introduced helper services. Moved several internal utility projects from Basic Auth to OAuth 2.0 authentication and authorization. Handled deployments using Docker and Kubernetes clusters. Used the Velocity template engine to integrate Java code with static HTML pages for health-check information pages. Created new Apache Camel routes and improved existing ones.
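
For a flavor of the Camel work, here is a minimal route sketch of the kind described above; the topic, broker address, and endpoint URIs are hypothetical, not the project's actual configuration.

```java
import org.apache.camel.builder.RouteBuilder;

public class OrderIntakeRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Consume order events from Kafka and hand them to a helper REST service.
        from("kafka:mutual-fund-orders?brokers=localhost:9092")
            .routeId("order-intake")
            .log("Received order: ${body}")
            .to("http://localhost:8080/orders");
    }
}
```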

Senior Java Developer

Reliance Jio Infocomm
Apr, 2020 - May, 2022 (2 yr 1 month)
    PEAG RAD is a software platform developed by Reliance Jio Infocomm Limited (RCP, Navi Mumbai): a broadband services user management portal through which internal users manage the services taken up by subscribers. The architecture was microservices-based. Excelled in rapid application development and the management of technical issues on assigned projects, earning the highest customer satisfaction rating for all software solutions delivered. Also worked on the Packet Collector Framework, a microservices-based framework designed and developed in-house for capturing network packets, and on OCS (Online Charging System), a provisioning application for customers and their metadata (plan, top-up, tariff, details, etc.) through which user billing and tariffs were managed across the whole organization. Analyzed and designed program changes; reviewed and updated requirements documentation; wrote design documents (SLA, HLD, LLD, etc.). Applied relevant technical skills to deliver specifications, program changes, unit test scripts, and documentation, and supported changes through quality assurance, user acceptance testing, and post-implementation. Achieved a 30% reduction in the app's memory consumption by using garbage collection logs to find and eliminate duplicate strings. Implemented asynchronous communication between microservices using Kafka in the OCS project. Facilitated customization of systems by encouraging the software engineering team to adopt emerging standards for application development architecture and tools.

Senior Java Developer

Argus System
Jun, 2019 - Apr, 2020 (10 months)
    The award-winning product ACADEMIA was conceptualized and developed to cater to the growing and varied needs of educational institutions. Enhanced code quality to improve application performance, leading to a 23% reduction in the number of bugs. Developed and presented findings and solutions to audiences including senior executives and stakeholders.

Senior Java Developer

World Wide Technology
Nov, 2018 - May, 2019 (6 months)
    ORCA (Order Orchestrator) is a supply chain management project. Helped upgrade the existing code and added SSO support. Evaluated existing portal applications and estimated the time needed to fix defects, improving customer satisfaction. Provided production support, analyzing and fixing defects, and implemented and updated application modules under the direction of senior software developers. Gained experience with Software AG webMethods, relating and mapping business rules both through the tool and via Java code. As a full-stack engineer, spent about 20% of the time on the UI, an Angular front end. Brought experience with client-server architectures, networking protocols, and databases.

Java/J2EE Developer

Argus System
Jun, 2014 - Jul, 2015 (1 yr 1 month)
    Developing and deploying web applications and multi-tier client-server applications using Java/J2EE technologies were the day-to-day tasks in this project. We worked on a dispute alert system focused mainly on fraudulent activity in healthcare, and developed web services for various internal applications related to the project, such as Regulation, EADs, Risk, KYC, and Fraud Recovery. Implemented new web services to improve functionality, refactored code and test suites to promote a code-reusable infrastructure, and contributed to design planning meetings and documentation.

Achievements

  • 5G Packet Collector Project
  • Microservices Architecture
  • GoRPC to HTTP Migration
  • Elasticsearch Scaling Optimization
  • Circuit Breaker Implementation for Morgan Stanley WM
  • Data Normalization in MySQL
  • Microservice Segregation for Critical Operations
  • 5G Packet Collector Inspection & Analyzer
  • Kafka Load Testing
  • Production Troubleshooting

Major Projects

10 Projects

5G Packet Collector Project

    Developed and implemented a 5G packet collector utilizing a modified open-source library and custom queue data structures (based on the Queue ADT). Achieved the 2 million TPS goal with minimal resource consumption, operating effectively on VMs with only 8 GB of RAM and roughly 1% vCPU utilization. The custom queue data structure significantly improved performance, doubling (2x) the service's throughput.
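
As a hedged sketch of what a custom array-backed queue of this kind can look like, here is a minimal single-producer/single-consumer ring buffer in Java; the project's actual structure and capacities aren't public, so every detail below is illustrative.

```java
import java.util.concurrent.atomic.AtomicLong;

/** Bounded FIFO over a fixed array: no per-element allocation on the hot path. */
public final class PacketRingBuffer {
    private final byte[][] slots;
    private final int mask;                           // capacity must be a power of two
    private final AtomicLong head = new AtomicLong(); // next slot to read
    private final AtomicLong tail = new AtomicLong(); // next slot to write

    public PacketRingBuffer(int capacityPowerOfTwo) {
        if (Integer.bitCount(capacityPowerOfTwo) != 1)
            throw new IllegalArgumentException("capacity must be a power of two");
        this.slots = new byte[capacityPowerOfTwo][];
        this.mask = capacityPowerOfTwo - 1;
    }

    /** Producer side: returns false instead of blocking when the buffer is full. */
    public boolean offer(byte[] packet) {
        long t = tail.get();
        if (t - head.get() >= slots.length) return false; // full
        slots[(int) (t & mask)] = packet;
        tail.lazySet(t + 1);
        return true;
    }

    /** Consumer side: returns null when the buffer is empty. */
    public byte[] poll() {
        long h = head.get();
        if (h >= tail.get()) return null;                 // empty
        byte[] packet = slots[(int) (h & mask)];
        slots[(int) (h & mask)] = null;                   // let GC reclaim the slot
        head.lazySet(h + 1);
        return packet;
    }
}
```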

Microservices Architecture

    Successfully implemented microservices architecture using Golang and Java, adhering to the Separation of Concerns principle. Resulted in a 30% reduction in project bottlenecks and significantly improved system scalability, enabling handling of larger transaction volumes.

GoRPC to HTTP Migration

    Successfully migrated inter-service communication from GoRPC to HTTP, leading to a significant performance improvement. Response times were reduced to less than 50 milliseconds for large input cases, eliminating timeouts previously experienced with GoRPC.

Elasticsearch Scaling Optimization

    Optimized Elasticsearch cluster configuration by adjusting shards, replicas, nodes, and cluster settings. Resulted in a 15% increase in query performance and 20% reduction in indexing time while effectively handling rapidly growing data volumes.

Circuit Breaker Implementation for Morgan Stanley WM

    Implemented and optimized a Circuit Breaker pattern for a critical migration project at Morgan Stanley Wealth Management. Resulted in a 90% reduction in service failures during peak load periods and improved system stability and resilience.
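
The write-up doesn't name the library behind the breaker; as a hedged sketch of the pattern in Java, here is what it might look like with Resilience4j (the thresholds, window size, and service name are assumptions):

```java
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;
import java.time.Duration;
import java.util.function.Supplier;

public class OrderServiceClient {
    public static void main(String[] args) {
        CircuitBreakerConfig config = CircuitBreakerConfig.custom()
                .failureRateThreshold(50)                        // open after 50% failures
                .waitDurationInOpenState(Duration.ofSeconds(30)) // probe again after 30s
                .slidingWindowSize(20)                           // judged over the last 20 calls
                .build();
        CircuitBreaker breaker =
                CircuitBreakerRegistry.of(config).circuitBreaker("orderService");

        Supplier<String> guarded =
                CircuitBreaker.decorateSupplier(breaker, OrderServiceClient::callDownstream);

        // While the breaker is open, calls fail fast instead of piling up
        // against an unhealthy downstream service.
        System.out.println(guarded.get());
    }

    private static String callDownstream() {
        return "order-status"; // placeholder for the real remote call
    }
}
```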

Data Normalization in MySQL

    Re-implemented data normalization within the MySQL database, sharply reducing query response times, bringing them down to the millisecond range and eliminating the timeouts previously caused by slow string matches and searches.

Microservice Segregation for Critical Operations

    Introduced and proposed the segregation of critical and complex operations into dedicated microservices. Resulted in increased system resiliency and improved performance by isolating potential failure points and enhancing overall system stability.

5G Packet Collector Inspection & Analyzer

    Developed and implemented a 5G packet collector inspection & analyzer project with a focus on minimal resource allocation for each VM in production. Conducted load testing, achieving 4 million TPS, while tuning G1 garbage collection and eliminating memory leaks.

Kafka Load Testing

    Conducted a Proof-of-Concept and load testing for Kafka, achieving a throughput of 10 million TPS, demonstrating the system's capacity to handle high-volume data streams.

Production Troubleshooting

    Troubleshot production service failures within a specific node by analyzing the code's handling and tuning the Linux server OS configuration, resolving the issue and preventing future occurrences.

Education

  • MS in Computer Science

    City University of Seattle (2018)
  • Bachelor's degree: Engineering

    Rajiv Gandhi Technical University (2014)
  • Apache Camel Framework with Spring Boot

    Udemy (2022)

Certifications

  • Apache Camel Framework with Spring Boot, Udemy

AI Interview Questions & Answers

I've been mostly in Java backend roles; throughout my professional career I have worked as a backend developer, though for a brief period in the USA I also worked as a full-stack developer. In one of those engagements I used AngularJS; we were using Groovy with AngularJS alongside the usual tech stack, on a retail domain project. Other than that, I have expertise in the telecom, finance, banking, and healthcare domains. I have worked extensively with Java 8, and also with Java 11 and with versions earlier than Java 8, plus the usual Java frameworks: Spring Boot, Spring MVC, and Hibernate as the ORM framework. I have worked with both SQL and NoSQL databases; in SQL, with MySQL and Postgres, and in NoSQL, extensively with Elasticsearch and a few times with MongoDB. I also have personal knowledge of Angular 14. For unit testing I have used the frameworks common in enterprise applications, such as Mockito. Beyond that: designing web services, Swagger, Apache Kafka, Zookeeper, and microservices as well as monolithic architectures; and I have worked in Agile.

The scenario you are describing is an enterprise-level application in which different microservices are involved, with one or more databases, and transactions flowing between the microservices into the DB. Concurrency plays a role because there may be millions of end users on the system, so a single record update can involve multi-threading and simultaneous access. In a project with a large number of microservices and that much concurrency, we should make sure the ACID properties are used when optimizing the DB, so that data integrity and consistency are maintained: the data should never become inconsistent, and the system stays robust.
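
A minimal sketch of that idea in Java with plain JDBC: the two updates below either commit together or roll back together, which is the atomicity side of ACID (the table, columns, and transfer scenario are hypothetical).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferService {
    public void transfer(String jdbcUrl, long fromId, long toId, long amount)
            throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
            conn.setAutoCommit(false); // start an explicit transaction
            try (PreparedStatement debit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                debit.setLong(1, amount);
                debit.setLong(2, fromId);
                debit.executeUpdate();

                credit.setLong(1, amount);
                credit.setLong(2, toId);
                credit.executeUpdate();

                conn.commit();   // both updates become visible together
            } catch (SQLException e) {
                conn.rollback(); // neither update survives a failure
                throw e;
            }
        }
    }
}
```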

I don't have much experience in C# beyond the college level, but I can explain the repository pattern itself. To give an example: before data is sent to the DB for persistence, it passes through a layer that acts as a repository, and the services are written against that layer. A repository service following this pattern is advantageous in a scenario where we want the data to be optimal before insertion into the DB: we collect as much data as possible, do the filtering and sanitization inside our own repository service, and only after all the checks and sanitization do we send it on to the DB for persistence.
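
For reference, here is a minimal repository-pattern sketch in Java rather than C# (the names are illustrative): callers depend on an interface, and the storage details, including any filtering or sanitization before persistence, stay behind it.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

record Employee(long id, String name) {}

// The abstraction the services depend on; swapping the real database for an
// in-memory store (or a mock in tests) never touches the business logic.
interface EmployeeRepository {
    Optional<Employee> findById(long id);
    void save(Employee employee);
}

class InMemoryEmployeeRepository implements EmployeeRepository {
    private final Map<Long, Employee> store = new HashMap<>();

    @Override
    public Optional<Employee> findById(long id) {
        return Optional.ofNullable(store.get(id));
    }

    @Override
    public void save(Employee employee) {
        // Sanitization/validation before persistence would live here.
        store.put(employee.id(), employee);
    }
}
```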

I would go for a NoSQL database over a relational database here. The first thing that comes to mind is that the data we want to process is document-based: it could be a complex JSON, and it's not relational, which is exactly why we wouldn't store it in SQL. For example, if we are maintaining a repository of all the employee resumes and their profile stats, a NoSQL database is best suited: we'd have a complex JSON of each candidate's profile, and whatever properties we have extracted we would store as a JSON (or XML) document. And if it's a very large application, we can introduce database sharding for scalability, which makes retrieval of the data from the database very fast; with clusters we also get availability, and with sharding within each cluster, database performance stays good.

I'm not sure about Azure specifically; my knowledge is stronger in AWS, but I'll explain it from the perspective of any public cloud service, since AWS's DevOps tooling does the same thing. We would have our own CI/CD pipeline for continuous integration and continuous deployment: as soon as we push to a given branch, it automatically builds, and if the build is successful it is automatically deployed to the location or server we configured. All of this configuration lives in the DevOps build and release pipelines. For failures, the first safeguard is a check in the configuration: if the build fails, just roll back to the previous deployment. But I think the question is asking what to do if production starts failing after a new deployment: how do we go back to the previously deployed stable state? For that we keep images of the recent builds, and we can mark a known-good build as the stable one. We can then configure the cloud's DevOps system to roll back to that default stable version whenever a failure occurs; this can be done automatically or manually.

Relating dependency injection to high throughput: the benefit shows up when high throughput is the priority rather than latency. If we aren't optimizing for latency, having object and instance creation happen whenever required is very helpful for the application. Dependency injection automatically manages the dependencies and objects needed at runtime and takes care of their life cycle: an object is instantiated only when it is actually used, and the system is loosely coupled rather than tightly coupled, so objects can be reused, and dereferenced objects and dependencies we no longer need are cleaned out automatically. All of this is taken care of by dependency injection if it is implemented correctly, and it results in high throughput when throughput, not latency, is the goal. It also helps in concurrency scenarios: with dependency injection correctly implemented alongside a concurrent model, object creation at runtime is handled for you, and it proves very good performance-wise in high-throughput systems.
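
As a small illustration (Spring is assumed here; the answer doesn't name a framework), constructor injection keeps the service loosely coupled and leaves object lifecycles to the container, which by default shares one instance across concurrent callers instead of re-creating it per request:

```java
import org.springframework.stereotype.Service;

// The collaborator's interface; the container decides which implementation to wire in.
interface RateLimiter {
    boolean tryAcquire(String clientId);
}

@Service
class OrderIntakeService {
    private final RateLimiter rateLimiter;

    // Constructor injection: the container supplies the dependency and manages
    // its life cycle, so the service never constructs its collaborators itself.
    OrderIntakeService(RateLimiter rateLimiter) {
        this.rateLimiter = rateLimiter;
    }

    boolean accept(String clientId) {
        return rateLimiter.tryAcquire(clientId);
    }
}
```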

As the note below the question says, the employee details view is a view that selects from another view. If there's a performance issue, I'll first look into the innermost view listed, the employee base view: run it individually, and then run the outer one, the employee details view. It's a step-by-step procedure: if the employee base view is the culprit, we fix that; otherwise we consider whether the employee details view is the bottleneck. As for the performance issue itself, instead of creating nested views with selects over selects, we should use aliasing and joins.

The issue is that the developer has written only one try/catch. There should be separate checks: first, whether the payment details are null or have any other issue, checked individually in its own try/catch, and we should only proceed once that first check clears. Then the validation result should be checked to confirm we actually received data and it is not null. There could be multiple try/catch blocks, and we can throw different exceptions from the catch blocks according to the different tries, or have multiple specific catch blocks for a single try. That is how to implement this correctly in line with the SOLID principles, instead of having just a single catch-all.
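
Here is a sketch of that granular handling in Java; PaymentDetails and the exception choices are hypothetical stand-ins for whatever the reviewed code actually used.

```java
import java.io.IOException;
import java.io.UncheckedIOException;

public class PaymentProcessor {

    record PaymentDetails(String cardToken, long amountCents) {}

    public void process(PaymentDetails details) {
        // Check 1: a guard clause, instead of letting a null surface somewhere
        // deep inside one catch-all try block.
        if (details == null || details.cardToken() == null) {
            throw new IllegalArgumentException("payment details are missing");
        }

        // Check 2: validation failures get their own try/catch and exception type.
        try {
            validate(details);
        } catch (IllegalStateException e) {
            throw new IllegalStateException("validation failed: " + e.getMessage(), e);
        }

        // Check 3: I/O failures while charging are handled separately, so callers
        // can tell "bad input" apart from "gateway problem, maybe retry".
        try {
            charge(details);
        } catch (IOException e) {
            throw new UncheckedIOException("payment gateway unreachable", e);
        }
    }

    private void validate(PaymentDetails details) {
        if (details.amountCents() <= 0) throw new IllegalStateException("non-positive amount");
    }

    private void charge(PaymentDetails details) throws IOException {
        // placeholder for the real gateway call
    }
}
```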

This depends on several things, such as whether the application is doing AI or machine learning, in which case you might be implementing or using something like TensorFlow. If it is handling large volumes of data in real time with complex transactions, we would first get the architecture right: the technologies, the system architecture, and the system design should all be very robust, and any bottleneck should be minimized. There will be some trade-offs in the system design or architecture, and some of them can be accepted, but the throughput and performance of the system should be very good, so the first step is getting the system design and architecture in place. Then, since it is handling large volumes of data, we need scaling of the deployed nodes or servers, either vertical or horizontal; in this case I would pick horizontal scaling. A load balancer should then distribute the load according to whatever strategy we have finalized, be it round-robin or priority-based scheduling, so that every node in the horizontally deployed infrastructure handles its share of the load and the whole infrastructure is utilized. And if the number of transactions grows even further, we scale the database as well, with automatic scaling, which is easily achieved through the DevOps features in Azure or AWS services: if the load balancers and proxy servers detect that the load is too high, they scale automatically.

While designing a .NET Core application architecture, I would make sure it is microservices-based; we can use whatever microservices design pattern suits our needs, be it service-oriented architecture, client-server architecture, or something else. Each microservice should be independent and fulfill its own contract: the responsibility of each microservice should be limited, and it should always be available. So availability, independence, and sole responsibility should hold for each microservice. For example, a payment gateway microservice should only look after payment-related transactions, and should only be reached when something payment-related is happening; otherwise it is not touched at all. Services should also be dynamically scalable. For maintainability, we would have a contract for each microservice: its domain is limited and listed out, it performs only those operations and is solely responsible for them, and it stays independent and loosely coupled with respect to the other microservices. Then, if we are fixing, developing, or enhancing one microservice, it should be independent enough that it still works with the old versions of the other microservices; that is where maintainability comes from. For scalability, as I said in the previous answer, it's a matter of whether we want vertical or horizontal scaling, and we can introduce load balancers so that each cluster in the deployed infrastructure receives the correct and optimal load and every node in the cluster performs well; the load balancer's task is to distribute the load, and the scaling can also be dynamically controlled.

The first advantage of asynchronous programming patterns in .NET applications that comes to mind is that the communication is not acknowledgement-based. If microservice A requests something from microservice B asynchronously instead of synchronously, A won't get into a stuck state if it receives no response from B, and the timeouts that happen in a synchronous architecture don't occur: whenever A requests something from B, there is no timeout in the asynchronous pattern. It is also loosely coupled rather than tightly coupled: in synchronous REST-based communication, the transaction can only proceed once we get the proper response, whereas in the asynchronous case the caller doesn't end up stuck waiting.
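
The question is about .NET, but the same idea can be illustrated in Java (the language of the other examples here) with CompletableFuture: the caller keeps working instead of blocking on the downstream service, and a timeout plus fallback keeps it from hanging; in practice, asynchronous calls still deserve explicit timeout bounds.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncCallDemo {
    public static void main(String[] args) {
        // Kick off the downstream request without blocking the calling thread.
        CompletableFuture<String> reply = CompletableFuture
                .supplyAsync(AsyncCallDemo::callServiceB)   // runs on a worker thread
                .orTimeout(2, TimeUnit.SECONDS)             // bound the wait anyway
                .exceptionally(e -> "fallback-response");   // degrade instead of hanging

        // The caller is free to do other work while service B responds.
        System.out.println("doing other work...");
        System.out.println("reply: " + reply.join());
    }

    private static String callServiceB() {
        return "data-from-B"; // placeholder for the real remote call
    }
}
```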