
Principal Software Engineer — Zemoso Technologies
Sr. Node.js Developer, Team Lead — Rationarium
Sr. Fullstack Developer, Team Leader, Technical Consultant — Techuz Infoweb Pvt. Ltd.
Software Developer — Tatvasoft
Redis
Bitbucket
Trello
Jira
AWS Lambda
AWS S3
AWS EC2
AWS SES
AWS SQS
AWS SNS
AWS RDS
AWS EFS
GitHub
Apache
Nginx
Docker
Mailchimp
SVN
Git
Beanstalk
Stripe
PayPal
Currently working as a Principal Software Engineer.
Okay. So, I am a software engineer with 8 years of experience, out of which 6 years are in Node.js. I started my career as a PHP developer and then moved to Node.js. For the last 6 years I have been working in Node.js, mostly with the Express framework, in both JavaScript and TypeScript. I have worked in different domains, like an education portal and a job portal, and currently I am working on a blockchain platform. My major role there is not on the blockchain operations side, though; I am working on the Node.js side.
So, when you are working with Node.js and AWS services, we have different options to manage and rotate our secret keys. There is one AWS service for this, which is AWS Secrets Manager. There we can define our secrets, and there is already a rotation policy provided by AWS, with various customization options available as well. So we can either use the default rotation policy or customize it as per our needs. In both ways, we can manage our credentials and also rotate them automatically.
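One practical detail on the Node.js side is not calling Secrets Manager on every request. A minimal sketch of in-memory secret caching with a TTL is below; the `fetchSecret` function is injected here as an assumption — in a real app it would wrap `GetSecretValue` from `@aws-sdk/client-secrets-manager`:

```javascript
// Minimal sketch: cache a secret in memory with a TTL so that every request
// does not hit Secrets Manager. `fetchSecret` is injected — in a real app it
// would wrap GetSecretValue from @aws-sdk/client-secrets-manager.
function createSecretCache(fetchSecret, ttlMs = 5 * 60 * 1000) {
  const cache = new Map(); // secretId -> { value, expiresAt }

  return async function getSecret(secretId) {
    const entry = cache.get(secretId);
    if (entry && entry.expiresAt > Date.now()) {
      return entry.value; // still fresh — no network call
    }
    const value = await fetchSecret(secretId); // e.g. SDK GetSecretValue call
    cache.set(secretId, { value, expiresAt: Date.now() + ttlMs });
    return value;
  };
}
```

Keeping the TTL short (a few minutes) means a rotated secret is picked up automatically without restarting the service.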
Okay. So for error logging and monitoring in a Docker container environment, we can use the Node.js logging libraries that are available. When we are using Docker, I generally prefer centralized logging: whether we have a single-container or a multi-container architecture, we can have centralized logs. Apart from that, we can export our logs to one particular place so we can review all of them from there, instead of checking in different places. We can do the same thing in a multi-container environment as well.
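The usual pattern for this in Docker is to emit structured JSON lines to stdout and let the logging driver or a collector ship them to the central place. A minimal sketch (the `service` field name is an assumption, just to identify which container emitted the line):

```javascript
// Minimal sketch: structured JSON logging to stdout. In Docker, writing JSON
// lines to stdout lets the logging driver or a log collector ship every
// container's output to one central place for review.
function makeLogger(service) {
  return function log(level, message, extra = {}) {
    const entry = {
      time: new Date().toISOString(),
      service, // identifies which container emitted the line
      level,
      message,
      ...extra, // e.g. requestId, userId — anything useful for correlation
    };
    process.stdout.write(JSON.stringify(entry) + '\n');
    return entry; // returned so callers/tests can inspect what was logged
  };
}
```

Because each line is valid JSON, the central system can filter by `level` or `service` without any per-container parsing rules.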
So when we think about a highly available Node.js application, we need a failover solution for the database. First of all, we can take the master-slave approach: whenever your master goes down, a slave can take over, or a slave can be promoted to master. In some databases — for MySQL, for example, Aurora is the best option — whenever your master fails, it will automatically promote a replica to master, which is the biggest benefit for failover. Also, we can use multiple read replicas, so if any one instance fails, another will be in use. And we can have the database in multiple regions, so if an entire region is down, we can serve the database from a different region. So there are different options available for that.
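On the application side, the same idea can be sketched as client-side failover across an ordered list of endpoints. The `connect` function is injected here as an assumption — in a real app it would open the actual MySQL/Aurora connection:

```javascript
// Minimal sketch: client-side failover across database endpoints. `connect`
// is injected (in a real app it would open a MySQL/Aurora connection); we try
// the primary first and fall back to replicas/regions in order.
async function connectWithFailover(endpoints, connect) {
  const errors = [];
  for (const endpoint of endpoints) {
    try {
      return await connect(endpoint); // first healthy endpoint wins
    } catch (err) {
      errors.push(`${endpoint}: ${err.message}`); // remember it, try the next
    }
  }
  throw new Error('all endpoints failed: ' + errors.join('; '));
}
```

With Aurora this logic is mostly handled for you by the cluster endpoint, but the same ordering idea applies when you list endpoints across regions.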
So when we come to horizontal scaling — basically, horizontal scaling means we are increasing the number of instances, or that kind of thing. I prefer doing it with AWS, and yes, we can do it utilizing Docker as well. What we can do is use an Application Load Balancer, define a minimum number of instances, and put the group into auto-scaling mode. When the demand is higher, it will add more and more instances, and when the demand is lower, it will decrease the instances again. In this way we can also be more cost-efficient. The same thing works with Docker as well — Docker is just the container, nothing else.
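The scaling decision itself can be sketched in a few lines, mirroring what a target-tracking auto-scaling policy does: pick an instance count so the average load per instance stays near a target, clamped between a minimum and maximum (the parameter names here are assumptions for illustration):

```javascript
// Minimal sketch of an auto-scaling decision, mirroring a target-tracking
// policy: choose an instance count so that average load per instance stays
// near `targetPerInstance`, clamped to the configured min/max group size.
function desiredInstances(totalLoad, targetPerInstance, min, max) {
  const needed = Math.ceil(totalLoad / targetPerInstance);
  return Math.min(max, Math.max(min, needed)); // clamp to [min, max]
}
```

The `min` floor is what keeps the service available at low traffic, and the `max` ceiling is what keeps the bill bounded at high traffic.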
So when we are dealing with Node.js and we want caching, the best and easiest way to implement caching in Node.js is Redis. We can use the AWS managed service, ElastiCache (which offers Redis and Memcached), as well; and if we don't want to use that, we can simply run Redis ourselves. Using that, let's say we have very frequently accessed data: we can store it in Redis and serve it from there instead of fetching it from the database each time. For example, if we have dashboards where we need to show rankings or similar information, we can fetch all of that from Redis; we don't need to go directly to the database and fire queries. Apart from that, there are other options as well: if we are serving static content, we can use CloudFront to cache that content, which is also helpful.
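The usual shape of this is the cache-aside pattern. A minimal sketch is below; a plain `Map` stands in for Redis here, and `loadFromDb` is an injected assumption — with a real Redis client the `get`/`set` calls would be `client.get`/`client.set` with a TTL:

```javascript
// Minimal sketch of the cache-aside pattern. A Map stands in for Redis;
// with a real Redis client these would be client.get / client.set (with a
// TTL so stale dashboard data eventually expires).
function createCacheAside(cache, loadFromDb) {
  return async function get(key) {
    if (cache.has(key)) return cache.get(key); // cache hit — skip the DB
    const value = await loadFromDb(key);       // cache miss — query the DB
    cache.set(key, value);                     // populate for next time
    return value;
  };
}
```

The important property is that the database is only hit on a miss; repeated reads of hot keys (rankings, dashboard counters) are served entirely from the cache.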
Okay, let me see the code: `Product.findById(productId)`, then a check on the product. Users reported they are often getting a "product not found" message even for existing products — what might be the issue in this snippet? The code looks good at first glance, but we just need to check once whether `findById` is returning an array or an object, because if it's an array, the truthiness check won't behave as expected. So I just need to check once what response we are actually getting for the product — that is the only place this can go wrong; otherwise all looks good. We need to add a debug point on the product: put a breakpoint before the `if` condition, check what we are getting in `product`, and go from there.
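Since the original snippet is not shown, here is a hypothetical reconstruction of the array-vs-object point: a query like `Model.find(...)` resolves to an array, and an empty array is truthy in JavaScript, while `findById` resolves to a document or `null`, so a single `if (!product)` check cannot serve both shapes:

```javascript
// Hypothetical reconstruction (the original snippet is not shown): a common
// cause of a wrong "not found" branch is checking the wrong return shape.
// find(...) resolves to an ARRAY (empty array is truthy!), while
// findById(...) resolves to a single document or null.
function isFound(result) {
  if (Array.isArray(result)) return result.length > 0; // array: check length
  return result !== null && result !== undefined;      // object: check null
}
```

The quick way to confirm which case you are in is exactly the debug point described above: log `product` (or `Array.isArray(product)`) right before the `if`.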
I have a basic Dockerfile for Node — can you find the potential issues? It has an EXPOSE instruction and a CMD. Yes, we can improve the COPY part: for the `COPY . .` part, we should check once whether we are in the right directory. Otherwise, the exposed port is fine. We can also bump the Node base image to a more recent version — that is one improvement. And one of the COPY lines doesn't seem required, so I don't think we need it; we can remove it.
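Since the original Dockerfile is not shown, here is a minimal sketch of what the improved version might look like, with the points above applied (the file names, port, and entry point are assumptions):

```dockerfile
# Pin a recent LTS base image instead of an old or floating tag
FROM node:20-alpine

WORKDIR /app

# Copy manifests first so the npm ci layer is cached when only source changes
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the source once (a .dockerignore should exclude
# node_modules, .git, etc. so COPY . . stays small and deterministic)
COPY . .

EXPOSE 3000
CMD ["node", "index.js"]
```

Splitting the manifest copy from the source copy is what removes the redundant COPY while also making rebuilds faster.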
So, basically, to automate the deployment, we can simply use CI/CD, and we can use blue-green deployment as well, so we can reduce the downtime. Using Docker we can also do the deployments: we are using Docker together with GitHub Actions, so whenever code is pushed, we redeploy to our deployment server. The same thing can be done with Docker on its own as well. And if you want to deploy to a different environment, the script is the same — you just need to rerun the script — and the configuration is environment-based.
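A hypothetical GitHub Actions workflow along those lines might look like this (the workflow name, script path, and secret names are assumptions — the original setup is not shown):

```yaml
# Hypothetical workflow: on every push to main, build the Docker image
# and rerun the same deploy script; environment-specific settings come
# from per-environment secrets rather than from the script itself.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Deploy
        run: ./scripts/deploy.sh myapp:${{ github.sha }}
        env:
          DEPLOY_HOST: ${{ secrets.DEPLOY_HOST }}
```

Because the deploy step only reads environment-based configuration, pointing the same workflow at staging versus production is a matter of changing secrets, not the script.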