Highly experienced Front End Developer with 12 years of experience using MVC, JavaScript, Node.js, Kubernetes, Azure, Kafka, Redux, CI/CD, REST APIs, SQL Server, QA, microservices, and Cosmos DB. Skilled in developing high-performance applications for web and mobile platforms using the latest technologies.
Senior IoT Consultant, Avery Dennison India
Consultant, Techneplus
Technical Lead, Expert Global IT Solutions
Software Engineer, Syntel Limited
Developer, Capgemini India

JavaScript
C#
Node.js
Apache Kafka
REST API
ReactJS
AWS (Amazon Web Services)
Azure
Visual Studio
SVN
Report Builder
MS SQL Server
Cosmos DB
MongoDB
NGINX
Docker
Terraform
GitHub Actions
Cassandra
PostgreSQL
AngularJS
Bootstrap
HTML5
CSS
ASP.Net
Entity Framework
TypeScript
SSRS
SSIS
Could you help me understand more about your background by giving a brief introduction of yourself? Sure. I have around 12 years of experience in IT. I have been part of several companies, including Capgemini and Expert Global, and I am currently working with Avery Dennison as a senior full stack developer. My core skills are React, Node.js, TypeScript, MongoDB, PostgreSQL, and NestJS. My background is in full stack development, my domain knowledge is in IoT, and I have mostly worked with Azure as the cloud solution.
How does container orchestration with Kubernetes enhance the development of microservices built with Node.js and gRPC? Kubernetes mainly helps with scaling and load balancing: it allows easy scaling of Node.js microservices. It also gives you consistent environments and deployments, version-controlled rollouts with rolling updates, and good resource management. And it provides isolation between services, which is critical in a microservice architecture for security and dependency management. So container orchestration helps with all of that.
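As an illustration, a minimal Deployment sketch showing the pieces mentioned above (service name, image, and numbers are hypothetical): replicas for scaling, a rolling-update strategy for zero-downtime rollouts, and resource requests/limits for isolation.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-grpc                # hypothetical Node.js gRPC microservice
spec:
  replicas: 3                      # easy horizontal scaling
  strategy:
    type: RollingUpdate            # versioned, zero-downtime rollouts
  selector:
    matchLabels:
      app: orders-grpc
  template:
    metadata:
      labels:
        app: orders-grpc
    spec:
      containers:
        - name: orders-grpc
          image: registry.example.com/orders-grpc:1.0.0
          ports:
            - containerPort: 50051         # gRPC port
          resources:
            requests: { cpu: 100m, memory: 128Mi }   # resource management
            limits:   { cpu: 500m, memory: 256Mi }   # isolation between services
```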
What methodology would you use to implement automated testing in a microservice environment focused on Kafka and Node.js? We would focus first on unit testing, using a framework like Jest, and mock external dependencies such as Kafka producers and consumers with Jest mocks. On top of that we can do integration testing, end-to-end testing, and load testing. For automation, a CI/CD setup with continuous integration and continuous delivery runs these suites on every change, which keeps the environment stable. That is the major thing; integration and end-to-end testing complete the picture.
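For instance, a minimal Jest sketch of the mocking approach described above; the module paths and the sendOrderEvent helper are hypothetical:

```js
// orderProducer.test.js — unit test that mocks the Kafka producer dependency.
jest.mock('../src/kafkaClient', () => ({
  producer: { send: jest.fn().mockResolvedValue(undefined) },
}));

const { producer } = require('../src/kafkaClient');
const { sendOrderEvent } = require('../src/orderProducer');

test('publishes an order event without touching a real broker', async () => {
  await sendOrderEvent({ id: 42, status: 'CREATED' });

  expect(producer.send).toHaveBeenCalledWith({
    topic: 'orders',
    messages: [{ key: '42', value: JSON.stringify({ id: 42, status: 'CREATED' }) }],
  });
});
```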
Suppose you have a monolithic architecture running in Node.js and you are planning to break it into microservices using Kafka and gRPC. How would you plan this transition while ensuring minimum downtime? For this we can use the strangler fig pattern, gradually migrating functionality from the monolith to microservices while the two run in parallel; that helps a lot. First, understand the monolith: map its functionality, dependencies, and data flows. Then identify the microservices by breaking the monolith into logical, independently deployable services, and prioritize them by ease of extraction, business value, and risk. Next, set up Kafka and the gRPC infrastructure accordingly, and do the migration iteratively rather than all at once. Once done, document the work and train the other team members. A clear roadmap is also important to define up front.
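A sketch of the strangler-fig routing layer in Node.js, using express and http-proxy-middleware (both real packages); the routes and service URLs are hypothetical:

```js
// strangler-proxy.js — route extracted functionality to the new service,
// let everything else fall through to the monolith.
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Already-extracted functionality goes to the new microservice first...
app.use('/api/orders', createProxyMiddleware({
  target: 'http://orders-service:3001',
  changeOrigin: true,
}));

// ...all remaining traffic still hits the legacy monolith, so nothing breaks.
app.use('/', createProxyMiddleware({
  target: 'http://legacy-monolith:3000',
  changeOrigin: true,
}));

app.listen(8080, () => console.log('Strangler proxy listening on 8080'));
```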
Imagine a use case where your Node.js application is facing slow startup and response times after being containerized and deployed via Kubernetes onto the Azure platform. How would you approach troubleshooting and optimizing this? We can start with the Kubernetes logs, looking for errors and warnings that might indicate issues during startup or at run time. We can analyze the startup process and perhaps optimize the container image. The most important check after that is whether CPU and memory have been properly allocated to the container, and we can try horizontal pod autoscaling to manage load. Then we can do performance profiling with profiling tools to see what is going wrong. If none of that works, we go through the Node.js code itself and try to optimize it: see how much time the database is taking, and look at network latency, since Azure network latency can have an impact. We can also review the Kubernetes Service and Ingress configuration. If all of this fails, we can definitely turn to Azure support and documentation.
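A few standard kubectl commands for the triage steps above (pod and namespace names are placeholders; kubectl top requires the metrics-server add-on):

```sh
kubectl get pods -n my-namespace                     # spot restarts / CrashLoopBackOff
kubectl describe pod my-app-pod -n my-namespace      # events: OOMKilled, failed probes, slow image pulls
kubectl logs my-app-pod -n my-namespace --previous   # logs from the last crashed container
kubectl top pod my-app-pod -n my-namespace           # live CPU/memory vs requests and limits
```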
When implementing a new feature, how do you ensure it integrates well with both Kafka event streaming and the existing Azure cloud infrastructure? We can do a series of things to ensure the performance and reliability of the application. First, understand the existing infrastructure: review the current Azure infrastructure and Kafka setup, and understand the existing configuration such as the network setup, security rules, Kafka topics, and so on. Then clearly define the requirements, review the design, and set up the development environment. If the new feature interacts with other services, we also need to do a feasibility test. If everything looks right, we ensure correct serialization of the Kafka messages and handle Kafka offsets and partitioning properly, and then implement the feature.
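A minimal kafkajs sketch of the serialization and partitioning point (broker address, topic, and event shape are hypothetical):

```js
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'feature-service', brokers: ['kafka:9092'] });
const producer = kafka.producer();

async function publishDeviceEvent(event) {
  await producer.connect();
  await producer.send({
    topic: 'device-events',
    messages: [{
      // Keying by deviceId routes all events for one device to one partition,
      // preserving per-device ordering for downstream consumers.
      key: String(event.deviceId),
      // Serialize consistently; consumers must use the matching deserializer.
      value: JSON.stringify(event),
    }],
  });
}
```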
Given a specific scenario where a fine-grained microservice architecture needs to be set up with Node.js, gRPC, and Kafka, how would you design the system for smooth intercommunication among the services and periodic data syncing with minimum latency? Assuming we have an event-driven architecture, the first step is to define the service boundaries. For smooth intercommunication, you can consider the communication channels and integration points, and add load balancing. A staging deployment environment also helps a lot: we deploy there first, gather feedback, and only then move to production, so there is always a staging stage sitting between development and production.
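For the gRPC side of the intercommunication, a minimal @grpc/grpc-js server sketch; the .proto file, package, and service names are hypothetical:

```js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

// device.proto is assumed to define package "device" with a DeviceService
// exposing GetStatus(DeviceRequest) returning DeviceStatus.
const packageDef = protoLoader.loadSync('device.proto');
const proto = grpc.loadPackageDefinition(packageDef).device;

const server = new grpc.Server();
server.addService(proto.DeviceService.service, {
  getStatus: (call, callback) => {
    // Unary handler: respond with the current status of the requested device.
    callback(null, { deviceId: call.request.deviceId, online: true });
  },
});

server.bindAsync('0.0.0.0:50051', grpc.ServerCredentials.createInsecure(), (err) => {
  if (err) throw err;
  // On older @grpc/grpc-js versions you would also call server.start() here.
  console.log('gRPC server listening on 50051');
});
```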
Lastly, what steps would you take to deploy a Node.js application on Azure utilizing both PaaS and IaaS offerings? First, prepare the Node.js application so it is ready for production, with source control and the required databases set up; ensure the code is on GitHub, Azure Repos, or Bitbucket. Then set up the Azure services: the PaaS offering is Azure App Service for hosting Node.js; for IaaS, consider Azure Virtual Machines, or Azure Kubernetes Service if you need a more controlled environment or specific configuration that PaaS does not offer. You also need database, network, and security services, and a configured VNet helps a lot. Then configure the Azure App Service and set up the IaaS environment; if you opt for AKS, configure the Kubernetes cluster according to your application's needs. Finally, set up CI/CD pipelines, and both the PaaS and IaaS offerings are taken care of.
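A sketch of the PaaS path with the Azure CLI (resource names are placeholders, and the runtime string format varies by CLI version):

```sh
az group create --name rg-node-demo --location eastus
az appservice plan create --name plan-node-demo --resource-group rg-node-demo \
  --sku B1 --is-linux
az webapp create --name app-node-demo --resource-group rg-node-demo \
  --plan plan-node-demo --runtime "NODE:18-lts"
```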
In a Kubernetes environment, how would you address pod scalability based on incoming traffic? You can generally use the Horizontal Pod Autoscaler, which automatically scales the number of pods in a deployment or replica set based on CPU utilization or other metrics. You also configure resource requests and limits, specifying how much CPU each pod requests and may consume; the autoscaler uses those figures to scale the environment.
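A minimal HorizontalPodAutoscaler sketch (the target deployment name is hypothetical; this requires resource requests to be set on the pods and metrics-server to be installed):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-grpc-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-grpc
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70% of requests
```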
How would you debug performance bottlenecks in a system that uses Kafka for real-time processing and Node.js services? Generally, we start by monitoring and collecting metrics, such as I/O operations and CPU utilization for both the Node.js services and the Kafka brokers, to determine where the high latency or low throughput is coming from, and we check the network bandwidth for issues. To profile the Node.js services we can use built-in tools like the --inspect flag, and clinic.js, which profiles the application to identify CPU-intensive operations or memory leaks; from there we review our algorithms and code. We can also analyze Kafka producer and consumer performance and check the batch size, linger time, and compression settings. In this way, using the logs and the metrics we collect, you can debug performance bottlenecks in a system that uses Kafka for real-time processing with Node.js services.
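The two Node.js profiling entry points mentioned above, as shell commands (server.js is a placeholder entry file):

```sh
# Enable the V8 inspector, then attach Chrome DevTools or VS Code
# for CPU profiles and heap snapshots.
node --inspect server.js

# clinic.js flame graph to surface CPU-intensive code paths.
npx clinic flame -- node server.js
```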
How would you ensure data consistency across distributed services when applying changes in an event-driven architecture using Kafka? We can use several strategies: event sourcing, Kafka's exactly-once semantics, or atomic transactions. We make operations idempotent, meaning that retrying the same operation does not change the state beyond its initial application. We can also use compensating transactions with the saga pattern, enforce event ordering, separate the read and write models (CQRS), and use change data capture. Good documentation and governance, plus graceful error handling, help ensure the distributed services apply changes consistently.
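A minimal sketch of the idempotency point; the in-memory store and the applyStateChange handler are hypothetical (in production the dedup check would live in the database, ideally in the same transaction as the state change):

```js
const processed = new Set(); // hypothetical; use a durable store in production

async function handleEvent(message) {
  const eventId = message.headers['event-id'].toString();
  if (processed.has(eventId)) return; // duplicate delivery: skip, state already applied

  await applyStateChange(JSON.parse(message.value.toString())); // hypothetical handler
  processed.add(eventId);
}
```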
Can you suggest a strategy for monitoring and handling Kafka message delivery failures in a robust manner? There are some comprehensive strategies that help here. You can configure Kafka for reliability: set appropriate values for acknowledgments, retries, and retry backoff in the producer configuration, and use min.insync.replicas and the replication factor in the broker settings. Then monitor the Kafka clusters; monitoring, proper logging, and proper alerting mechanisms are all key strategies. Set up dead letter queues to capture undelivered messages; this ensures problematic messages are set aside for analysis and don't block processing. You can also use idempotency techniques so duplicate messages don't get processed twice, keep end-to-end tracing of the application, and implement retry logic in consumers and error handling in producers. That is how you handle delivery failures in a robust manner.
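A kafkajs sketch of the dead-letter-queue idea; the topic names and the processOrder handler are hypothetical:

```js
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'orders-worker', brokers: ['kafka:9092'] });
const producer = kafka.producer();
const consumer = kafka.consumer({ groupId: 'orders-workers' });

async function run() {
  await producer.connect();
  await consumer.connect();
  await consumer.subscribe({ topic: 'orders' });

  await consumer.run({
    eachMessage: async ({ message }) => {
      try {
        await processOrder(JSON.parse(message.value.toString())); // hypothetical handler
      } catch (err) {
        // Park the failing message so it doesn't block the partition,
        // keeping the original payload plus the error for later analysis.
        await producer.send({
          topic: 'orders.dlq',
          messages: [{ key: message.key, value: message.value, headers: { error: err.message } }],
        });
      }
    },
  });
}
```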
In a case where a sudden surge in user requests causes your microservice architecture orchestrated via Kubernetes to fail, how would you debug, manage, and prevent such incidents in the future? For immediate response and mitigation, you can scale up the services, apply rate limiting, and divert traffic away from the hot path; that is how you debug and manage the situation. To prevent it in the future, identify the bottlenecks: monitor the Kubernetes metrics, analyze the logs, and do a root cause analysis to arrive at a solution, such as better load balancing. Capacity planning also helps you optimize for these scenarios. After a post-incident review, document how the incident was resolved so it can be handled faster next time; documenting your learnings is important.
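For the rate-limiting mitigation, a sketch with express and express-rate-limit (both real packages; the route and limits are illustrative):

```js
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

app.use(rateLimit({
  windowMs: 60 * 1000, // 1-minute window
  max: 100,            // per-client request budget; tune from observed traffic
}));

app.get('/api/orders', (req, res) => res.json([])); // hypothetical route

app.listen(3000);
```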
How do you approach database schema changes in a non-disruptive way while ensuring compatibility with the microservices? For handling database schema changes, first of all we put the schema under version control. We make sure every change is backward compatible, following the expand and contract pattern: add new elements without removing the old ones. Then we migrate the applications gradually. There are database refactoring tools that help us analyze the effect of this approach, and you can do a phased rollout; you should always have a fallback plan. This way you can change the schema non-disruptively while ensuring compatibility with the microservices.
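A sketch of the expand and contract pattern in SQL (table and column names are hypothetical):

```sql
-- Release 1 (expand): add the new nullable column; old and new service
-- versions can both run against this schema.
ALTER TABLE orders ADD customer_email VARCHAR(255) NULL;

-- Backfill while both versions are live.
UPDATE orders SET customer_email = legacy_contact WHERE customer_email IS NULL;

-- Release 2 (contract): only after no deployed service reads the old column.
ALTER TABLE orders DROP COLUMN legacy_contact;
```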
How would you recover and ensure minimum service disruption for a large-scale Kafka and Node.js system where a substantial number of Kafka brokers have failed? There are a few approaches. First, do an immediate impact assessment: quickly assess the impact of the broker failures on your system, and determine which Kafka topics and partitions are affected and how the services are affected. Then engage your disaster recovery plan; if you have one, now is the time to execute it, which may include switching to a standby Kafka cluster if available. Restart the failed Kafka brokers, and once they are back up, run data integrity checks. You should always have replication and a failover mechanism so you can redistribute the load; this reduces the load and keeps service disruption to a minimum. Beyond that, do a root cause analysis, review and improve the disaster recovery plan, and always document the incident.
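Two standard Kafka CLI checks for the impact assessment above (the bootstrap address is a placeholder):

```sh
# Which partitions have lost replicas or become unavailable?
kafka-topics.sh --bootstrap-server kafka:9092 --describe --under-replicated-partitions
kafka-topics.sh --bootstrap-server kafka:9092 --describe --unavailable-partitions
```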
In what ways have you used functional programming principles in Node.js to solve complex coding problems? The major principles are immutable data structures and pure functions; functional programming works best with pure functions that don't change external state, which means fewer side effects and fewer chances of failure. Higher-order functions like map, filter, and reduce are integral to functional programming and transform data without mutating the original array, which is again very helpful. I have also used currying and partial application, and recursion; I have used recursion multiple times, and it can express some problems more cleanly than the traditional iterative approach.
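A small illustration of those principles (the readings data is made up):

```js
const readings = [
  { deviceId: 'a', temp: 21 },
  { deviceId: 'b', temp: null },
  { deviceId: 'c', temp: 25 },
];

// map/filter/reduce transform the data without mutating the original array.
const temps = readings.filter(r => r.temp !== null).map(r => r.temp);
const avgTemp = temps.reduce((sum, t) => sum + t, 0) / temps.length;

// Currying / partial application: specialize a general predicate.
const byField = field => value => item => item[field] === value;
const isDeviceA = byField('deviceId')('a');

console.log(readings.filter(isDeviceA), avgTemp); // original readings unchanged
```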
How would you approach network security in an Azure cloud environment, especially when dealing with external APIs and services? We generally use Azure Virtual Networks (VNets); an Azure VNet isolates our resources from third parties. Then we create network security groups for the VMs so that nothing is exposed by default and only what the external APIs need is exposed. We also set up an Application Gateway with a web application firewall, which helps protect against attacks such as SQL injection. We can set up Azure Firewall, and use Azure API Management, which is an Azure service that helps you manage external APIs by implementing authentication policies, rate limiting, and IP filtering. Another approach is to use private endpoints, and you can also use Azure's identity management services to secure access.
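A sketch of the NSG setup with the Azure CLI (resource and rule names are placeholders):

```sh
az network nsg create --resource-group rg-node-demo --name nsg-app

# Expose only what the external APIs need; everything else stays blocked.
az network nsg rule create --resource-group rg-node-demo --nsg-name nsg-app \
  --name AllowHttpsInbound --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443
```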