
I have 8 years of experience in software development. During these 8 years I have worked with Angular, Node.js, React Native, PHP, and Python. I have developed good knowledge of AWS cloud computing services and can build solid solutions on top of them.
I have worked with overseas clients and also got the chance to work with FNBO (an NBFC from the USA), and I learned many things from that team.
Principal Software Engineer I, York IE APAC India
Senior Technical Lead, Solitera Software
Senior Software Engineer, Qacesoft Tech
Software Engineer, Wittybee
Tech. Lead, Eternal Web
MySQL
Git
Angular
REST API
Node.js
Apache
Slack
Visual Studio Code
Postman
Teamwork
Microsoft Teams
Lambda
API Gateway
SQS
SNS
EventBridge
Step Functions
EC2
Route53
ELB
AMI
Datadog
Jira
AWS
Jenkins
AWS CodePipeline
PHP
ReactJS
ExpressJS
Serverless
React Native
DynamoDB
Hi, I'm Sunil Pajabadi, and I have a total of 10 years of experience as a software engineer. I started my career with PHP and then moved to Node.js, where I have more than 8 years of experience. Within JavaScript technologies I have worked with different frameworks such as Next.js, Angular, and React, and with Express.js for backend development. I also have experience with AWS services, where I worked with serverless frameworks and built various serverless projects. I have also provided architecture solutions to clients: I designed event-driven architectures for several projects, proposed solutions for building data pipelines and for running long-running jobs as event-driven processes, and helped design Step Functions and workflow designs for complex scenarios. I have also worked as a team lead at different companies, managing teams of 6 to 12 people at different times. At Solitera I managed a 12-person team split across three parallel projects: two teams of five and one team of two. On one of those projects I worked as a developer, and on the other projects I provided solutions, reviewed code, and made the architectural decisions. Along with Node.js, I have experience with different kinds of DBs, such as MongoDB, DynamoDB, Postgres, and MySQL, and with writing code using different design patterns like the factory pattern, the singleton pattern, and dependency injection. That's a summary of my background, where I'm coming from, and my technical and professional skills.
Okay, for caching there are two approaches. First, we can cache at the API call level: we cache the API response and add a TTL, which reduces the number of DB calls for the same kind of request. If the same parameters are passed to that particular API, it returns the cached result, and after the TTL period the cache entry is deleted, so it gets refreshed automatically. The second solution is caching at the DB query level. Here we create the cache entry on read and delete it on update: the first time we query data for a particular set of filters, we cache the result, and if the data for that filter or condition is later updated in the DB, we delete the cached entry, so it is recreated automatically the next time a user requests it.
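To sketch that second solution (create on read, delete on update), here is a minimal cache-aside example, assuming node-redis v4; findUsersInDb and updateUserInDb are hypothetical helpers standing in for the real DB queries:

```js
// Minimal cache-aside sketch, assuming node-redis v4. findUsersInDb and
// updateUserInDb are hypothetical helpers standing in for real DB queries.
import { createClient } from 'redis';

const redis = createClient();
await redis.connect();

const TTL_SECONDS = 300; // entries auto-expire, so stale data refreshes itself

async function getUsers(filters) {
  const key = `users:${JSON.stringify(filters)}`; // one key per filter combination
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);          // cache hit: no DB call

  const users = await findUsersInDb(filters);     // cache miss: query the DB
  await redis.set(key, JSON.stringify(users), { EX: TTL_SECONDS });
  return users;
}

async function updateUser(id, changes, filters) {
  await updateUserInDb(id, changes);
  // Delete on update: the next read for this filter recreates the cache entry.
  await redis.del(`users:${JSON.stringify(filters)}`);
}
```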
Okay, describe a situation where you had to refactor a Node.js application for better performance and maintainability. This happened in my first project. We had implemented the same method in several different controllers, so I moved that code into middleware; that was the maintainability part (see the sketch below). In the same project we also faced delays in the data because of slow MySQL queries, so we optimized the queries using the MySQL query analyzer and, along with that, implemented caching with Redis (AWS ElastiCache). That improved API performance a lot. My second example is from a recent project where we started on MySQL but gradually found we were receiving very unstructured data from different services: we had integrated platforms like LinkedIn and Glassdoor into our project, and each provided a different structure for its data. So we switched from MySQL to MongoDB for better management of the project. The best part is that because we had used the Prisma ORM from the start, the DB switch took just one week. The client was amazed at how quickly we moved from MySQL to MongoDB; we did a successful demo and the client was very happy. The switch also improved performance, and we created various indexes in MongoDB for further performance gains. For caching in that project, we placed CloudFront in front of API Gateway, so we cache at the request level through CloudFront; we are not caching data manually there.
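A sketch of that middleware extraction, assuming Express; the subscription check is a hypothetical stand-in for the logic that was duplicated across controllers:

```js
// Sketch of the refactor: duplicated controller logic moved into one Express
// middleware. The subscription check is a hypothetical example of that logic.
import express from 'express';

function requireActiveSubscription(req, res, next) {
  // This check used to be copy-pasted into every controller.
  if (!req.user || !req.user.subscriptionActive) {
    return res.status(403).json({ error: 'Subscription required' });
  }
  next();
}

const app = express();

// Controllers now contain only their own logic.
app.get('/reports', requireActiveSubscription, (req, res) => {
  res.json({ reports: [] });
});
```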
Okay, data consistency when using MongoDB in a distributed system context. Here we can use cluster management: we can create MongoDB clusters in different regions with read replicas of the DB, and when we update a record it is automatically synced to the other nodes. A managed service like MongoDB Atlas can handle this kind of distributed DB. Second, we can also use transactions in MongoDB. To be honest, I never got the chance to work with MongoDB transactions, but I believe there is a transaction mechanism similar to SQL's. So the one solution I am sure about is creating clusters in different regions for a distributed system.
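On the transaction part: MongoDB does support multi-document transactions (since version 4.0, on replica sets). A minimal sketch with the official Node.js driver, where the account transfer and connection URI are illustrative:

```js
// Minimal multi-document transaction sketch with the official MongoDB Node.js
// driver (requires a replica set). The account transfer is illustrative.
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017'); // illustrative URI
await client.connect();
const accounts = client.db('bank').collection('accounts');

const session = client.startSession();
try {
  // withTransaction retries on transient errors and commits on success;
  // both updates become visible atomically, or neither does.
  await session.withTransaction(async () => {
    await accounts.updateOne({ _id: 'A' }, { $inc: { balance: -100 } }, { session });
    await accounts.updateOne({ _id: 'B' }, { $inc: { balance: 100 } }, { session });
  });
} finally {
  await session.endSession();
  await client.close();
}
```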
Okay, I'm sorry to say I have never worked with Azure's serverless offering, Azure Functions, but I have worked with AWS Lambda functions and GCP Cloud Functions. I think these are the same kind of service provided by different cloud providers, so I could work with Azure serverless as well.
Okay, considering the specifics of the JavaScript event loop, how do you prevent blocking operations in Node.js servers? Here we can use promises or callbacks to keep operations from blocking: by calling a function asynchronously as a promise, the work is scheduled through the event loop and the result is returned when it completes. If we want to utilize the server's other CPUs, we can also use child processes. That avoids blocking the event loop too, because we hand the call to a child process and wrap it in a promise to await the answer; once the child process finishes, we get the response. We can also maintain a counter mechanism to track whether all our CPUs are busy: only if a CPU is available do we send the call to a child process. Using this mechanism we avoid over-using the CPU, so the application doesn't slow down.
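A minimal sketch of that child-process offload with the CPU counter, using Node's built-in modules; worker.js is a hypothetical script that performs the heavy computation:

```js
// Sketch of offloading CPU-heavy work to child processes, with a counter to
// avoid oversubscribing CPUs. worker.js is a hypothetical worker script.
import { fork } from 'node:child_process';
import os from 'node:os';

const MAX_WORKERS = os.cpus().length;
let busyWorkers = 0; // counter: how many CPUs are currently in use

function runHeavyTask(payload) {
  if (busyWorkers >= MAX_WORKERS) {
    // All CPUs busy: refuse (or queue) instead of overloading the machine.
    return Promise.reject(new Error('All workers busy'));
  }
  busyWorkers += 1;
  return new Promise((resolve, reject) => {
    const child = fork('./worker.js'); // runs outside the main event loop
    child.send(payload);               // hand the work to the child
    child.once('message', (result) => {
      busyWorkers -= 1;
      child.kill();
      resolve(result);                 // promise settles when the child replies
    });
    child.once('error', (err) => {
      busyWorkers -= 1;
      reject(err);
    });
  });
}
```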
Okay. The purpose of this middleware function is to validate the request's Content-Type: we only allow application/json requests here, and if the Content-Type doesn't match, we respond with a 400 status code. As for identifying any issues: there is no functional issue with the middleware itself, but one improvement is to move the check into a separate named function, which makes it more readable. Also, it may be possible that the request carries no headers, so we should first check whether the header exists; we can add one more condition to verify that the request actually has that header.
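A sketch of how that middleware could be restructured, assuming Express; the separate named function and the header-existence guard are the two improvements mentioned:

```js
// Sketch of the restructured middleware, assuming Express: the check lives in
// a separate named function, and the header's existence is verified first.
function isJsonRequest(req) {
  const contentType = req.headers && req.headers['content-type'];
  return typeof contentType === 'string' &&
    contentType.toLowerCase().startsWith('application/json');
}

function requireJson(req, res, next) {
  if (!isJsonRequest(req)) {
    return res.status(400).json({ error: 'Content-Type must be application/json' });
  }
  next();
}
```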
Okay, the single responsibility principle. For the single responsibility principle, a function should have only one responsibility. If we create a function to create users, it should only create a user; it should not do an upsert-style operation, meaning it should not be used for both update and create. Looking at this code (resetPassword, sendWelcomeEmail, deleteUser), yes, each function does one thing, so this code addresses the single responsibility principle.
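A minimal sketch of the idea, where every method does exactly one thing; UserRepository and Mailer are hypothetical collaborators:

```js
// Sketch of the single-responsibility idea: every method does exactly one
// thing. UserRepository and Mailer are hypothetical collaborators.
class UserService {
  constructor(userRepository, mailer) {
    this.users = userRepository;
    this.mailer = mailer;
  }

  // Creates a user and nothing else: no upsert, no email side effects.
  async createUser(data) {
    return this.users.insert(data);
  }

  async resetPassword(userId, newPasswordHash) {
    return this.users.update(userId, { password: newPasswordHash });
  }

  async sendWelcomeEmail(user) {
    return this.mailer.send(user.email, 'Welcome!');
  }

  async deleteUser(userId) {
    return this.users.remove(userId);
  }
}
```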
Okay, can you explain a complex Node.js memory management scenario where garbage collection wasn't sufficient and how you handled it? I was working on a script that imported about 8 GB of data into a DB. The script read the file line by line and parsed every line into JSON. The complex memory management part was that, initially, we were not clearing the variables, and in that case we needed to make sure we cleared them on each event, that is, on each function call. We collected 1,000 products in a variable, and on reaching 1,000 products we dumped them into the DB and then cleared the local variable. So we did manual clearing of variables instead of depending on garbage collection. Garbage collection alone was not sufficient here because we kept reusing the same variable inside the inner functions, so we cleared it manually.
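A sketch of that batched streaming import, using Node's built-in fs and readline modules; insertProducts is a hypothetical bulk-insert helper:

```js
// Sketch of the batched streaming import: read line by line, dump every 1,000
// records, and release references explicitly instead of waiting on GC.
// insertProducts is a hypothetical bulk-insert helper.
import fs from 'node:fs';
import readline from 'node:readline';

const BATCH_SIZE = 1000;

async function importFile(path) {
  const rl = readline.createInterface({
    input: fs.createReadStream(path), // stream: never load the 8 GB file at once
    crlfDelay: Infinity,
  });

  let batch = [];
  for await (const line of rl) {
    batch.push(JSON.parse(line));     // parse one record at a time
    if (batch.length >= BATCH_SIZE) {
      await insertProducts(batch);    // dump 1,000 records into the DB
      batch = [];                     // clear the variable manually
    }
  }
  if (batch.length) await insertProducts(batch); // flush the final partial batch
}
```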
Okay, for Docker containers: I have recent experience with them, and I worked with Docker earlier as well. In one case we put Python code in a Docker container. Our main application is entirely in JavaScript, but we also generate user sentiment from the data, and for sentiment analysis we needed natural language processing tools, which have the best library support in Python. So we created a separate project written in Python, containerized that code, and deployed it through ECR. For Node.js deployment we can use the same kind of mechanism: create a Docker container and use the container image as the code provider for a serverless function or a batch job. The main benefit of Docker containers is that we get the same environment on different machines; we can work in the same environment even on a Windows machine. It also makes testing easy: we don't need to depend on deploying the function to the serverless cloud. We can test the container locally with commands, make sure it works, and then deploy it to ECR or any container repository service and point our function at that particular image.
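A minimal Dockerfile sketch for containerizing a Node.js service this way; the base image tag, exposed port, and entrypoint are illustrative assumptions:

```dockerfile
# Minimal Dockerfile sketch for a Node.js service; the base image tag, port,
# and entrypoint are illustrative assumptions.
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

EXPOSE 3000
CMD ["node", "index.js"]
```

The image can be built and run locally (docker build, docker run) to verify it works before pushing it to ECR or any other container registry.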
Can you describe a time when you efficiently utilized the observer pattern? Actually, I haven't had the chance to work with the observer pattern in a Node.js backend application.