
To leverage my 12 years of IT industry experience, with a focus on mobile and distributed application development, to lead technical teams in delivering innovative solutions that enhance organizational efficiency and customer satisfaction.
Senior Technical Lead — MediaKind India
Senior Backend Engineer — Aviso Software India
Senior Technical Lead — MediaKind / GlobalLogic
Senior Software Engineer — Smart Chip
Software Engineer — Tavant
Senior Software Developer — Taritas Software Solutions
Git
GitLab
Jenkins
TFS
SonarQube
Visual Studio Code
Visual Studio
Hi, I'm Puneet. I'm a software engineer and have been working in the IT industry for the last twelve years, primarily with the Microsoft tech stack. Early in my career I worked on WinForms and WPF. After that I got the chance to do mobile development using Xamarin, which is a cross-platform framework. Then I moved into API development, where I was mainly part of building APIs that could scale to serve millions of requests and support that load. I am also part of the architecture team: my team and I re-architected a couple of modules in our product line so we could support additional customers whose data models were much larger than the ones we had been serving. I also migrated our .NET Framework projects, which could only run on Windows and could not be used on Linux or Kubernetes, to .NET Core, and since then our MediaKind products have been running on Kubernetes. For the last three years I have been working as a senior technical lead, where I play three roles: development; requirement analysis and gathering, deciding whether a feature fits the product line as-is or needs an architectural change, which also involves cross-team collaboration; and leading my team through grooming and feature delivery.
I am also part of performance testing, and whenever an issue occurs in the production environments I am known as the subject-matter expert, so I play a crucial role on those calls, working to mitigate or resolve the problems. Apart from that, I am a quick learner and adapt easily. For the last year I have also been supporting a Java project that came to me as a transition; two other people on my team and I supported it, and we were usually able to deliver three to four features within six months. So Java is something I am comfortable with, though not proficient in. I am also comfortable with front-end development, where I can build something using Angular or React and integrate APIs into those modules. So that's pretty much me. Thank you.
In our product line we are not using Azure Service Bus; we are using Kafka, but they are much the same, so I can relate to how Azure Service Bus works. The idea is that earlier we had a monolithic architecture: one service was deployed and all the load came to that single service. Scalability was a problem with that kind of architecture because you cannot scale such services horizontally, only vertically, and there is a limit to RAM and disk. That is where microservices, or decoupled components, came into the picture. To communicate with each other they need an asynchronous mechanism, and that is where Azure Service Bus plays a crucial role as a publish/subscribe system: when an event is published asynchronously, the dependent microservices can listen for it and perform their business logic. Topics are where messages are put; subscribers are the consumers. They first subscribe to a topic, and then they consume whatever messages arrive on that topic. For retries we use Polly. You should not keep calling an API unnecessarily when the service is not responding or is down; that is where retry mechanisms and backoff strategies come into the picture. Polly is a .NET library that can be used for retrying, with configurable policies for how many times you want to retry.
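To illustrate the topic/subscriber idea described above: our actual code is C# on Kafka, so this is only a language-agnostic Python sketch with hypothetical names (`Topic`, `broker`, the `"orders"` topic), not our real implementation.

```python
from collections import defaultdict

class Topic:
    """Minimal in-memory stand-in for a Kafka / Service Bus topic."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        # A consumer registers a callback; a real broker also tracks offsets.
        self.subscribers.append(handler)

    def publish(self, message):
        # Every subscriber of this topic receives every published message.
        for handler in self.subscribers:
            handler(message)

# The "broker" maps topic names to topics; publishers and consumers
# never reference each other directly, only the topic.
broker = defaultdict(Topic)

received = []
broker["orders"].subscribe(lambda msg: received.append(msg))
broker["orders"].publish({"order_id": 42})
```

The point of the sketch is the decoupling: the publisher only knows the topic name, so new consumers can be added without touching the publisher.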
After the retries are exhausted, you can short-circuit (the circuit breaker pattern) and stay idle for a while, so that your calling service is not impacted by the other service that is not responding or has broken down. So, to summarize: reliable communication between microservices or decoupled components can be done using Kafka, Redis, or Azure Service Bus, because a message is written to a topic, consumers subscribe to that topic, and they continuously receive and process those messages. In the microservices world this is called event-driven architecture.
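A minimal sketch of the retry-then-circuit-break behavior described above. In .NET we use Polly for this; the Python version below is only illustrative, and the names and thresholds (`call_with_retry`, `CircuitBreaker`, `threshold=3`) are hypothetical.

```python
import time

class CircuitOpenError(Exception):
    """Raised when the breaker is open and calls are short-circuited."""

def call_with_retry(fn, retries=3, base_delay=0.1):
    """Retry fn with exponential backoff, like a Polly retry policy."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # retries exhausted; let the caller (or breaker) decide
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

class CircuitBreaker:
    """Open the circuit after `threshold` consecutive failures."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn):
        if self.failures >= self.threshold:
            # Short-circuit: do not hit the broken downstream service at all.
            raise CircuitOpenError("downstream unavailable")
        try:
            result = fn()
            self.failures = 0  # a success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            raise
```

A real breaker would also re-close after a cool-down period (Polly's half-open state); that is omitted here to keep the sketch short.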
I do not have much exposure to Azure Functions, so I will not be able to answer this and will skip it for now, but it is something I plan to learn after Java. I have written a couple of Azure Functions in a sample project, just to practice and see how they work, but we are not using them in our product line, so I cannot answer this question.
Dependency injection is a design pattern used in modern applications to keep two dependent classes loosely coupled. Say a class needs an object of another class to be available when it is constructed; that dependency can be resolved using dependency injection. One way is to manually create the dependency and pass it to the class; the other is to use an inversion-of-control framework, which at runtime resolves whatever dependencies your components or classes require. .NET Core has a built-in dependency injection framework we could use, but we had developed our own dependency injection module in our .NET Framework application and carried it over during the .NET Core migration, so we are not using the built-in framework; they do pretty much the same thing. You can register a dependency as transient, scoped, or singleton. Services are registered in the service configuration using AddTransient, AddScoped, or AddSingleton, and when you request that dependency the framework resolves it, so you get an instance of the class you injected.
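A toy sketch of the registration-and-resolution idea described above. The real API is .NET's `IServiceCollection` (`AddTransient` / `AddSingleton`); this Python container and all its names (`Container`, `Logger`, `OrderService`) are hypothetical, shown only to make the lifetimes concrete.

```python
class Container:
    """Toy IoC container supporting transient and singleton lifetimes."""
    def __init__(self):
        self._registrations = {}  # key -> (factory, lifetime)
        self._singletons = {}

    def add_transient(self, key, factory):
        # Transient: a new instance is created on every resolve.
        self._registrations[key] = (factory, "transient")

    def add_singleton(self, key, factory):
        # Singleton: one instance is shared for the container's lifetime.
        self._registrations[key] = (factory, "singleton")

    def resolve(self, key):
        factory, lifetime = self._registrations[key]
        if lifetime == "singleton":
            if key not in self._singletons:
                self._singletons[key] = factory(self)
            return self._singletons[key]
        return factory(self)

class Logger:
    def log(self, msg):
        print(msg)

class OrderService:
    # Constructor injection: the dependency arrives ready-made.
    def __init__(self, logger):
        self.logger = logger

c = Container()
c.add_singleton("logger", lambda c: Logger())
c.add_transient("orders", lambda c: OrderService(c.resolve("logger")))
```

Resolving `"orders"` twice yields two distinct `OrderService` instances (transient) that share one `Logger` (singleton), which is exactly the lifetime distinction the answer describes.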
I don't have much hands-on experience with Azure Event Grid, but the basic concept is the same across messaging services, be it Kafka, Azure Service Bus, or Amazon's equivalents: you publish a message to a topic, and the subscribers or consumers that have subscribed to that topic receive those messages. In our application, which is a media-driven product line, we use this heavily with Kafka to spread work across more machines. In our image-processing module we use the publish/subscribe pattern: whenever an image has to be processed, we put it on a topic, and consumers pick up the message, process it, and perform the image resize. The same is true for our schedule processing, where we receive a hundred thousand schedules in a GLF; the idea is to split that across parallel machines, so again we use the publisher/subscriber pattern, putting the schedules on topics while consumers pick them up for processing. Work that would otherwise take, say, two hours we can do in twenty to twenty-five minutes just by splitting it across multiple machines. That is one place we use this, and our microservices also use publish/subscribe for inter-service communication.
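The parallel schedule and image processing described above is the competing-consumers variant of publish/subscribe: each message on the topic is taken by exactly one worker, rather than broadcast to all of them. The real system uses Kafka partitions and separate machines; this Python sketch with a shared queue and threads (hypothetical names `run_workers`, `consume`) only illustrates the work-splitting idea.

```python
import queue
import threading

def run_workers(jobs, worker_count, process):
    """Split `jobs` across `worker_count` competing consumers."""
    q = queue.Queue()
    for job in jobs:
        q.put(job)

    results = []
    lock = threading.Lock()

    def consume():
        while True:
            try:
                # Each job is taken by exactly one worker (competing consumers),
                # unlike fan-out pub/sub where every subscriber sees every message.
                job = q.get_nowait()
            except queue.Empty:
                return
            out = process(job)
            with lock:
                results.append(out)

    threads = [threading.Thread(target=consume) for _ in range(worker_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# e.g. "resize" 100 stand-in images on 4 workers
resized = run_workers(range(100), 4, lambda i: i * 2)
```

Because the workers compete for jobs, adding workers (or machines, in the Kafka case) shortens the wall-clock time roughly proportionally, which is the two-hours-to-twenty-minutes effect described above.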