With over seven years of experience as a software engineer, I have developed a passion for creating innovative and scalable solutions that address complex business and technical challenges. My field of expertise is identity management, where I have worked on various projects involving AWS Identity and Access Management (IAM), Docker Swarm, and Zuul API Gateway. I am currently seeking new opportunities to leverage my skills and knowledge in a dynamic and collaborative environment that values diversity, creativity, and excellence.
My most recent role was as a Senior Software Engineer at Lowe's India, where I was part of a team that developed an identity management application from scratch. I was mainly involved in designing, developing, and debugging the features that enabled communication between the application and the source systems, operating systems, and interfaces. I applied my proficiency in Golang, AWS IAM, Docker, and Zuul to deliver high-quality code, documentation, and test scripts. I also contributed to the performance optimization of the application by reducing memory consumption and improving code efficiency. I enjoyed working with a talented and supportive team that shared my vision and goals for the project.
SDE-II (Java), Tesco
Senior Software Engineer, Lowes India
Senior Associate Technology, Synechron Inc
Senior Java Developer, World Wide Technology
Senior Java Developer, Argus System
Senior Java Developer, Reliance Jio Infocomm
Java/J2EE Developer, Argus System
Zookeeper
API Gateway
MySQL
PostgreSQL
Splunk
Docker
AWS
Git
SVN
Maven
Gradle
Jenkins
Groovy
TeamCity
Apache Kafka
Gin
RabbitMQ
Spring Security
Redis Cache
OAuth 2.0
Apache Camel
Redis
Kubernetes
Elasticsearch
I've mostly been in Java backend roles throughout my professional career, though for a brief period in the USA I also worked as a full-stack developer. In one of those engagements I used AngularJS; we were using Groovy with AngularJS alongside the usual tech stack on a retail-domain project. Beyond that, I have domain expertise in telecom, finance, banking, and healthcare. I have worked extensively with Java 8 and Java 11, and also on versions earlier than Java 8, along with the standard Java frameworks: Spring Boot, Spring MVC, and Hibernate for ORM. I have worked with both SQL and NoSQL databases. In SQL, I have worked with MySQL and Postgres; in NoSQL, I worked extensively with Elasticsearch and have occasionally used MongoDB. I also have personal knowledge of Angular 14. For unit testing I've used frameworks like Mockito, as is standard in enterprise applications. On top of that: REST web service design with Swagger, Apache Kafka, Zookeeper, and both microservices and monolithic architectures. I have worked in Agile as well.
I think the scenario you're asking about is an enterprise-level application in which multiple microservices are involved, possibly with more than one type of database, and transactions flow between the microservices as data gets stored in the DB. Concurrency plays a role here because there may be millions of end users on the system, so a record update in the DB can happen under multi-threading and concurrent access. In a project with a large number of microservices and concurrent writes, we should rely on the ACID properties of the database so that data integrity and consistency are preserved: the data should never become inconsistent, and the system stays robust by using and tuning the database around those ACID guarantees.
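To ground this, here is a minimal sketch of an ACID-backed concurrent update using Spring's declarative transaction management; the Account entity, the AccountRepository, and the transfer scenario are hypothetical, not from any project described here.

```java
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical Spring Data repository; Account is assumed to be a mapped JPA
// entity whose withdraw() throws if the balance would go negative.
interface AccountRepository extends JpaRepository<Account, Long> {}

@Service
class TransferService {
    private final AccountRepository accounts;

    TransferService(AccountRepository accounts) { this.accounts = accounts; }

    // Atomicity: the debit and the credit commit or roll back together.
    // Isolation: SERIALIZABLE stops two concurrent transfers from reading
    // the same stale balance under heavy multi-threaded load.
    @Transactional(isolation = Isolation.SERIALIZABLE)
    void transfer(long fromId, long toId, long amountCents) {
        Account from = accounts.findById(fromId).orElseThrow();
        Account to = accounts.findById(toId).orElseThrow();
        from.withdraw(amountCents); // any exception here rolls everything back
        to.deposit(amountCents);
    }
}
```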
I don't have much experience with C# beyond the college level, but I can explain the repository pattern itself. Suppose we have a project persisting data to a relational database. The repository sits between the services and the DB: before data is sent to the database for persistence, it passes through a repository layer, and the services are written against that layer. Following the repository pattern is advantageous in a scenario where we want the data to be in optimal shape before insertion into the DB: we collect the data, do the filtering and sanitization inside our own repository service, and only after all those checks pass do we send it on to the DB for persistence.
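Since the answer gestures at an example, here is a minimal repository-pattern sketch in Java rather than C#; the Candidate type, the JDBC backing, and the validate() step are assumptions for illustration only.

```java
import java.util.List;
import java.util.Optional;

// The repository interface is the abstraction the service layer codes against.
interface CandidateRepository {
    Optional<Candidate> findById(long id);
    List<Candidate> findAll();
    void save(Candidate candidate);
}

// Hypothetical implementation: checks and sanitizes the data before
// persistence, and hides the actual storage technology from the services.
class JdbcCandidateRepository implements CandidateRepository {
    private final javax.sql.DataSource dataSource;

    JdbcCandidateRepository(javax.sql.DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void save(Candidate candidate) {
        candidate.validate(); // hypothetical sanitization/sanity checks
        // ... JDBC insert/update against dataSource goes here ...
    }

    @Override
    public Optional<Candidate> findById(long id) {
        // ... JDBC select by primary key ...
        return Optional.empty();
    }

    @Override
    public List<Candidate> findAll() {
        // ... JDBC select all ...
        return List.of();
    }
}
```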
I would go for a NoSQL database over a relational database when the data we want to process is document-based. It could be a complex JSON, and it's not relational; that's exactly why we'd store it in NoSQL. For example, if we are maintaining a repository of all the employee resumes and their profile stats, I think the best fit is a NoSQL database: we'd have one complex JSON document per candidate profile, and whatever properties we've extracted we'd store as a JSON (or XML) document. For a very large application we can also introduce database sharding for scalability, so that retrieval from the database stays fast. If we maintain clusters, we get availability as well, and with sharding inside each cluster the database performance stays good.
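A short sketch of that document-per-candidate idea, using the MongoDB Java driver as one possible NoSQL store; the connection string, database, collection, and field names are all assumptions for illustration.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;

import java.util.List;

public class ResumeStore {
    public static void main(String[] args) {
        // Hypothetical local instance; in production this would point
        // at a sharded, replicated cluster for scalability/availability.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> resumes =
                    client.getDatabase("talent").getCollection("resumes");

            // One nested JSON document per candidate -- no relational schema.
            resumes.insertOne(new Document("name", "Jane Doe")
                    .append("skills", List.of("Java", "Spring Boot", "Kafka"))
                    .append("experience", List.of(
                            new Document("company", "Acme").append("years", 3))));

            // Retrieval stays a simple document query.
            resumes.find(Filters.eq("skills", "Java"))
                    .forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```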
I'm not sure about Azure specifically; my knowledge is more in AWS, but I'll explain it from the perspective of any public cloud's DevOps offering, since they do much the same thing. Basically, we'd have a CI/CD pipeline for continuous integration and continuous deployment: as soon as we push to a given branch, the build is triggered automatically, and if the build succeeds it is automatically deployed to the server or location we've configured. All of this configuration lives in the DevOps build and release pipelines. For failures, the first check we'd configure is that if the build fails, we simply roll back to the previous deployment. But I think the question is really asking: if something fails in production after a new deployment, how do we get back to the previously deployed stable state? For that, we keep images of the recent builds, mark a known-good build as stable, and configure the cloud's DevOps system to roll back to that stable version whenever a failure occurs. This can be done automatically or manually.
Relating dependency injection to high throughput: high throughput is what we get when we prioritize it over latency. If we're not too concerned about latency and just want throughput, then creating objects or instances only when they're required is very helpful for an application. Dependency injection automatically manages the dependencies and objects required at runtime and takes care of their lifecycle: an object is instantiated only when it is actually used, and dereferenced objects and dependencies we no longer need are cleaned up automatically. It also gives us a loosely coupled system rather than a tightly coupled one, so objects can be reused. All of this is handled by dependency injection when implemented correctly, and it results in high throughput when throughput, not latency, is the goal. It is also very helpful in concurrency scenarios: if dependency injection is implemented correctly alongside a concurrent model, it takes care of all the object creation at runtime and performs well in high-throughput systems.
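A minimal constructor-injection sketch in Spring, to ground the loose-coupling point; the NotificationSender interface and its implementation are invented for illustration.

```java
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

// The service depends on an abstraction, not a concrete class.
interface NotificationSender {
    void send(String message);
}

@Component
class EmailSender implements NotificationSender {
    public void send(String message) {
        // ... SMTP call would go here ...
    }
}

// Spring instantiates the dependency, injects it, and manages its lifecycle
// (singleton by default). Swapping implementations never touches this class:
// the loose coupling described above.
@Service
class OrderService {
    private final NotificationSender sender;

    OrderService(NotificationSender sender) {
        this.sender = sender;
    }

    void placeOrder(String orderId) {
        // ... business logic ...
        sender.send("Order placed: " + orderId);
    }
}
```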
As the note says, the EmployeeDetails view is a view that selects from another view. If there's a performance issue, I'd first look into the innermost view, the EmployeeBase view, and run it individually; after that I'd run the outer one, the EmployeeDetails view. It's a step-by-step procedure: if the EmployeeBase view is the culprit, we fix that; otherwise we check whether the EmployeeDetails view is the bottleneck. That's how I'd debug it. As for the performance issue itself, instead of creating nested views I think we should flatten the query, using aliasing and joins directly.
The answer here is that the developer has written only one try/catch, and there should be several separate checks instead. First, the paymentDetails variable should be checked individually, in its own try/catch, for null or any other issue; we proceed only if that first check clears. Then validationResult should be checked to confirm we actually got data back and it is not null. There can be multiple try/catch blocks, and we can throw different exceptions from the catch blocks according to what failed, or attach several specific exception catch blocks to a single try. That is how to implement this in line with the SOLID principles, rather than having a single catch-all block.
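The original snippet isn't reproduced here, so as a hedged sketch, this is one way the fix could look in Java; PaymentDetails, ValidationResult, the custom exception types, and the validator and gateway collaborators are all hypothetical stand-ins.

```java
public void processPayment(PaymentDetails paymentDetails) {
    // Guard clause: fail fast with a specific exception instead of letting
    // a blanket catch swallow a NullPointerException later.
    if (paymentDetails == null) {
        throw new IllegalArgumentException("paymentDetails must not be null");
    }

    // Step 1: validation gets its own try/catch and its own exception type.
    ValidationResult validationResult;
    try {
        validationResult = validator.validate(paymentDetails);
    } catch (RuntimeException e) {
        throw new PaymentValidationException("validation step failed", e);
    }
    if (validationResult == null || !validationResult.isValid()) {
        throw new PaymentValidationException("payment details failed validation");
    }

    // Step 2: the gateway call is attempted only after validation clears,
    // and infrastructure failures map to a distinct exception type.
    try {
        gateway.charge(paymentDetails);
    } catch (java.io.IOException e) {
        throw new PaymentGatewayException("gateway unreachable", e);
    }
}
```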
This question depends on several things, for example whether the application is doing AI or machine learning, in which case you might be implementing or using something like TensorFlow. But if it is handling large volumes of data in real time with complex transactions, the first step is to get the architecture right: the technologies, the system design, and the overall architecture should be robust, and any bottleneck should be minimized. There will be trade-offs in any system design; some can be tolerated, but the throughput and performance of the system must stay good. Since it is handling large volumes of data, once the application itself is stable we need scaling of the deployed nodes or servers, either vertical or horizontal. In this case I would pick horizontal scaling, with a load balancer in front that distributes the load according to whatever strategy we've finalized, be it round-robin or priority-based scheduling. The load balancer makes sure every node in the horizontally deployed infrastructure shares the load and the whole infrastructure is utilized. If the transaction volume grows even further, we scale the database as well, and we can set up automatic scaling, which is easily achieved through the DevOps features of Azure or AWS: if the load balancers and proxy servers see that the load is too high, they scale out automatically.
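As a toy illustration of the round-robin strategy mentioned above (a real deployment would rely on a managed load balancer rather than hand-rolled code; the node addresses here are made up):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin picker: each request goes to the next server in turn,
// spreading load evenly across the horizontally scaled nodes.
class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinBalancer(List<String> servers) {
        this.servers = List.copyOf(servers);
    }

    String pick() {
        // floorMod keeps the index valid even after integer overflow.
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }
}

class Demo {
    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("node-1:8080", "node-2:8080", "node-3:8080"));
        for (int i = 0; i < 6; i++) {
            System.out.println(lb.pick()); // node-1, node-2, node-3, node-1, ...
        }
    }
}
```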
While designing this .NET Core application architecture, I would make sure it is microservices-based; we can use whatever design style suits our needs, be it service-oriented architecture, client-server, or something else. Each microservice should be independent and fulfill only the contract it is responsible for, so availability, independence, and a single responsibility should hold for each one. For example, a payment-gateway microservice should only look after payment-related transactions and be reached only when something payment-related happens; otherwise it isn't touched at all. It should also be dynamically scalable. For maintainability, we'd have a contract for each microservice: its domain is limited and listed out, it performs only those operations, it is solely responsible for them, and it stays independent and loosely coupled with respect to the other microservices. That way, if we fix, develop, or enhance one microservice, it should still work with the old versions of the others, and that is where maintainability comes from. For scalability, as I said in the previous answer, we choose vertical or horizontal scaling and can introduce load balancers so that each cluster in the deployed infrastructure gets an optimal load and every node performs well; the load balancer's task is to distribute the load, and the scaling can also be controlled dynamically.
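A sketch of that narrow-contract idea in Spring Boot style (Java rather than .NET): a payments microservice exposing only payment endpoints. The paths, PaymentRequest, and PaymentService are invented for illustration.

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

// The service's public contract is deliberately small: payments only.
// Orders, users, inventory, etc. live in other, independently deployed services.
@RestController
@RequestMapping("/payments")
class PaymentController {

    private final PaymentService payments; // hypothetical domain service

    PaymentController(PaymentService payments) {
        this.payments = payments;
    }

    @PostMapping
    ResponseEntity<String> charge(@RequestBody PaymentRequest request) {
        String paymentId = payments.charge(request);
        return ResponseEntity.ok(paymentId);
    }

    @GetMapping("/{id}")
    ResponseEntity<String> status(@PathVariable String id) {
        return ResponseEntity.ok(payments.statusOf(id));
    }
}
```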
The advantage of asynchronous programming patterns in .NET applications, the first thing that comes to mind, is that communication is not blocking, acknowledgement-based. If microservice A requests something from microservice B asynchronously instead of synchronously, A won't get into a stuck state if it doesn't receive a response from B. The timeouts that stall a synchronous architecture don't hold up the caller: when microservice A requests something from microservice B asynchronously, A isn't left hanging waiting on a timeout. It is also loosely coupled rather than tightly coupled; with synchronous REST-based communication the transaction proceeds only once a proper response arrives, whereas with asynchronous communication the caller never gets stuck waiting.
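The same idea expressed in Java rather than .NET, since the pattern is language-agnostic: a non-blocking call that attaches a continuation instead of holding the calling thread. The service names and delay are made up.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncCallDemo {
    // Simulates microservice B: a slow remote call run off the caller's thread.
    static CompletableFuture<String> fetchPriceFromServiceB() {
        return CompletableFuture.supplyAsync(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(200);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "42.00";
        });
    }

    public static void main(String[] args) {
        // Microservice A attaches a continuation instead of blocking on B.
        CompletableFuture<Void> pending = fetchPriceFromServiceB()
                .orTimeout(1, TimeUnit.SECONDS) // bound the wait, don't hang
                .thenAccept(price -> System.out.println("price = " + price));

        System.out.println("caller thread is free to do other work");
        pending.join(); // demo only, so the JVM doesn't exit early
    }
}
```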