I’m a backend engineer with expertise in Golang, Node.js, cloud platforms, CouchDB and distributed systems, building reliable, scalable products across fintech, SaaS, and AI domains.
Senior Backend Engineer (SDE-III), Weave Communications
Senior Software Engineer (IC), Channel19
Technology Lead / Principal Architect, Pipli Technologies
Software Development Intern, Pharos Solutions
Software Development Engineer, Pharos Softtech
Innovation Lead, Pharos Softtech
Blockchain Application Development, Bookingjini Labs
Medical Records Management on Blockchain, Lyfscience MedTech
Software Development Engineer, Tata Consultancy Services
Application Development Intern, Vodafone
.NET Framework
Angular
Power BI
Node.js
NGINX
Docker
Kubernetes
Google Cloud Platform (GCP)
AWS
Hyperledger Fabric
Python
OpenCV
MySQL
Swift
Go
C
C++
R
MS Access
Cassandra
CouchDB
Ethereum
Solidity
PyTorch
scikit-learn
YOLO
Docker Swarm
Microsoft Azure
OutSystems
Mendix
Express.js
React Native
React.js
Cognos
Tableau
Jaspersoft
Arduino
Apache Kafka
Apache ZooKeeper
Apache Spark
Jenkins
MongoDB
Terraform
I am a software engineer with more than five and a half years of experience. I have worked with multiple startups across several domains, including hospitality, healthcare, and, briefly, fintech. I have had the chance to explore many languages and tools across multiple technologies, including blockchain, low-code application development, BPM tools, and of course native application development on the Node.js stack, both MEAN (MongoDB, Express, Angular, Node.js) and MERN (MongoDB, Express, React, Node.js). I have built scalable solutions used by more than ten thousand users, handling tens of GBs of data every single day. The peak usage for one application I built was close to 100,000 users within a span of 3 hours, which translates to a few thousand users every minute. Most of my experience is on the backend with Node.js, and I have a fair bit of capability in Python as well. On the frontend, I have worked briefly with Angular as well as React. On the DevOps and deployment side, I have worked with all three major clouds (AWS, Azure, and GCP), primarily GCP and Azure. I have a fair understanding of Kubernetes and deploying infrastructure as code, and I strongly believe in system design; I have been doing high-level and low-level design for quite a while now. I also have a fair understanding of microservices and have been developing all my applications from scratch using a microservice architecture. Thank you.
A Node.js backend application can be divided into three or four parts. Let's start with the data layer: we create data access objects (DAOs), using an ORM/ODM such as Mongoose to connect to MongoDB. The DAOs are helper classes that encapsulate all the logic for accessing the database. On top of that sits the service layer, which contains all the business logic and related details. Above that are the handlers, which wire the DAO interfaces together with the business logic. Finally, on top of everything, the routes use the handlers to expose the API endpoints. As for how I would structure things: for MongoDB I would create a singleton for the connection, to be used globally. I apply OOP concepts on a daily basis and follow proper design patterns, so using an ORM and creating models for validation is how I would handle the Node.js and MongoDB combination. All of my business logic would have utilities and helpers associated with the corresponding service logic. That is how I would design the application.
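The layering described above (DAO, service, handler, route) can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: all names are hypothetical, and an in-memory Map stands in for Mongoose/MongoDB so the example is self-contained.

```javascript
// Data layer: a DAO that encapsulates all database access.
class UserDao {
  constructor(db) { this.db = db; }           // db injected (e.g. a Mongoose model)
  findById(id) { return this.db.get(id) ?? null; }
  insert(id, doc) { this.db.set(id, doc); return doc; }
}

// Service layer: business logic only, no HTTP or DB details.
class UserService {
  constructor(dao) { this.dao = dao; }
  register(id, name) {
    if (this.dao.findById(id)) throw new Error('user exists');
    return this.dao.insert(id, { id, name });
  }
}

// Handler layer: adapts requests to service calls.
function makeRegisterHandler(service) {
  return (req, res) => {
    try {
      res.status = 201;
      res.body = service.register(req.body.id, req.body.name);
    } catch (err) {
      res.status = 409;
      res.body = { error: err.message };
    }
  };
}

// "Route" wiring (in Express this would be app.post('/users', handler)).
const db = new Map();
const handler = makeRegisterHandler(new UserService(new UserDao(db)));

const res = {};
handler({ body: { id: 'u1', name: 'Asha' } }, res);
console.log(res.status, res.body.name); // → 201 Asha
```

In a real application each layer would live in its own module, and the DAO would receive a Mongoose model instead of a Map; the point is that each layer depends only on the one below it.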
For securing sensitive data in a MongoDB application, there are a few ways to address this. First, all connections between Node.js and the database should use TLS/SSL certificates to prevent any kind of attack in transit. Second, on the MongoDB side we should have proper access controls to prevent unauthorized access to the database. Initially, we could store the data in clear text in MongoDB and, on the application side, mask it before serving it to the end user. If that is not sufficient, we can store the data encrypted in MongoDB and decrypt it every time we fetch it. That is one approach that I understand and know of; there may be other approaches, but this is what I know at this point.
Which Node.js tools and libraries? I am not sure about this one.
To integrate error tracking and monitoring tools into a Node.js backend deployed on Azure, there are multiple ways to do it. If I am using Azure's logging stack, I can either use the Azure SDKs to add the monitoring, have an endpoint that is called every time an error happens, or write a utility that pushes the data to Azure whenever something goes wrong. If I am not using an external logging service, say I store my log data in MongoDB instead, I would create a utility that appends the logs to the MongoDB database every single time something happens. If that solution is not viable, I could always write logs to the file system and then pick up and process those files with something like an ELK stack. The problem with the file system, though, is that in a distributed deployment environment those files could become a bottleneck.
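The "write a utility that pushes errors somewhere" idea can be sketched as an error-logging middleware with a pluggable sink. In production the sink might call the Azure SDK or insert into MongoDB; here an in-memory sink keeps the example runnable, and all names are hypothetical.

```javascript
// Factory that builds an Express-style error middleware around any sink
// exposing write(entry).
function createErrorLogger(sink) {
  return {
    // Express error-middleware signature: (err, req, res, next)
    middleware(err, req, res, next) {
      sink.write({
        level: 'error',
        message: err.message,
        path: req?.url,
        at: new Date().toISOString(),
      });
      res.status = 500;
      res.body = { error: 'internal error' };
    },
  };
}

// In-memory sink standing in for Azure Monitor / a MongoDB log collection.
const memorySink = { entries: [], write(e) { this.entries.push(e); } };

const logger = createErrorLogger(memorySink);
const res = {};
logger.middleware(new Error('db timeout'), { url: '/orders' }, res, () => {});

console.log(memorySink.entries[0].message, res.status); // → db timeout 500
```

Because the sink is injected, switching from MongoDB to an Azure-backed sink only means passing a different object; the middleware itself does not change.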
While designing a distributed system, what I would do to ensure consistency of my data is follow patterns like the saga pattern, in either an orchestrated or a choreographed form, which lets me track everything that happens across services. This would be combined with event sourcing, which helps us audit the entire flow and prevent any kind of data-consistency issue. If we have to make MongoDB ACID-compliant, there is a feature called sessions, or transactions, which allows you to write to multiple collections in MongoDB and then commit the session or transaction; if any failure occurs, it rolls back all the previously performed operations. That is what helps ensure consistency of data in MongoDB itself. To ensure consistency of data models, we could use an existing ORM or some kind of validation library where we define the models in Node.js. I don't remember a particular library for Node.js, but in Python, for example, there is Pydantic, which helps you define a model that you can then integrate with PyMongo or something similar.
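The orchestrated saga mentioned above can be sketched like this: each step pairs a forward action with a compensating action, and on failure the completed steps are compensated in reverse order. This is an illustrative sketch only; the "services" are in-memory stand-ins and all names are hypothetical (real steps would be async calls to other services).

```javascript
// Run steps in order; if one throws, undo the completed ones in reverse.
function runSaga(steps) {
  const done = [];
  try {
    for (const step of steps) {
      step.action();
      done.push(step);
    }
    return 'committed';
  } catch {
    for (const step of done.reverse()) step.compensate();
    return 'rolled back';
  }
}

// Example: reserve inventory, then charge payment (which fails here).
const state = { reserved: 0 };
const saga = [
  {
    action: () => { state.reserved += 1; },
    compensate: () => { state.reserved -= 1; },  // undo the reservation
  },
  {
    action: () => { throw new Error('payment declined'); },
    compensate: () => {},
  },
];

const outcome = runSaga(saga);
console.log(outcome, state.reserved); // → rolled back 0
```

A choreographed saga distributes the same logic across event handlers instead of one orchestrator; the compensation idea is identical.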
This particular application creates an Express app. The only thing missing right now is that it is not exposed yet: no server is created. What this code does is define a middleware that validates the authentication token. Actually, right now the middleware only checks whether the token is present. If it is, it proceeds to validation, where there could be further returns; but the return value for the failed-validation case is missing. If the token is not there at all, it sends back a 403 response asking for an authentication token. Finally, if all those requirements are satisfied, the request proceeds to the endpoint, which sends back "hello world". To ensure better error handling and performance, I'm not entirely sure, but we could add try/catch blocks in case things fail or don't go the way we would like, and there could be other potential problems on this API as well.
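A corrected version of the middleware being described might look like this, with the missing return for the failed-validation branch filled in. It is written as a plain function so it runs without Express (in a real app it would be passed to `app.use()`), and `validateToken` is a hypothetical stand-in for a real JWT or session check.

```javascript
function validateToken(token) {
  return token === 'secret-token'; // placeholder for real JWT/session verification
}

function authMiddleware(req, res, next) {
  const token = req.headers['authorization'];
  if (!token) {
    res.status = 403;
    res.body = { error: 'authentication token required' };
    return; // stop the chain: no token at all
  }
  if (!validateToken(token)) {
    res.status = 401;
    res.body = { error: 'invalid authentication token' };
    return; // the branch the original code was missing
  }
  next(); // token present and valid: continue to the endpoint
}

// Simulated request flow with mock req/res objects.
let reached = false;
const okRes = {};
authMiddleware({ headers: { authorization: 'secret-token' } }, okRes, () => { reached = true; });
console.log(reached); // → true

const badRes = {};
authMiddleware({ headers: {} }, badRes, () => {});
console.log(badRes.status); // → 403
```

Every branch either calls `next()` or returns after writing a response, so no request can fall through the middleware silently.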
This has an entire repository class with getAll, getById, and the DB context. This part is following dependency injection: when the repository class is initialized, it takes the DB context as a dependency, so the pattern being followed here is dependency injection. We also have the abstract class, or interface, of the repository, which declares the insert, update, and delete operations, and the repository class provides the implementation for each of those abstract methods. The repository is initialized by passing in our database helper, or database context, and then we call whatever other functions we need on it. I don't have a strong suggestion beyond that; ideally, the DB context should follow the singleton pattern. That is the only improvement I can think of right now.
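The combination being described, a repository that receives its DB context via dependency injection, with the context exposed as a singleton, can be sketched like this. Names are hypothetical and a Map stands in for the real database.

```javascript
// Singleton DB context: every call to getInstance() returns the same object.
let dbInstance = null;
class DbContext {
  static getInstance() {
    if (!dbInstance) dbInstance = new DbContext();
    return dbInstance;
  }
  constructor() { this.store = new Map(); }
}

// Repository receives the context through its constructor (dependency
// injection), so tests can inject a fake context instead of the singleton.
class Repository {
  constructor(context) { this.context = context; }
  insert(id, doc) { this.context.store.set(id, doc); }
  getById(id) { return this.context.store.get(id) ?? null; }
  getAll() { return [...this.context.store.values()]; }
}

const repo = new Repository(DbContext.getInstance());
repo.insert(1, { name: 'order-1' });

// A second repository sees the same data because the context is a singleton.
const repo2 = new Repository(DbContext.getInstance());
console.log(repo2.getById(1).name); // → order-1
```

The singleton keeps connection state shared, while the constructor injection keeps the repository decoupled from how that state is obtained.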
How would I architect the system if we want millions of operations flowing through our Node.js application into MongoDB? The application should be horizontally scalable. We can't vertically scale it, because vertical scaling has hard limits and is potentially very expensive. Instead we horizontally scale the whole thing so that many concurrent requests can reach the Node.js application. Similarly, we need to scale MongoDB horizontally, with multiple instances in the cluster, and of course we need to enable sharding so that we can store a lot of data across those individual instances. Finally, we need purge mechanisms or archiving processes in place so that we don't exceed the database limits at any given point, or we need auto-scaling enabled on MongoDB. All of this should sit on an orchestration layer so that auto-scaling can happen easily without manual intervention: as load increases on Node.js or MongoDB, it should scale automatically. That is how I would go about handling a lot of concurrent requests. The exact solution might require some additional processes, but that would be my starting point.
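The sharding step mentioned above could be sketched in mongosh against a sharded cluster. The database, collection, and shard-key names here are hypothetical; a hashed shard key is one common choice for spreading heavy write traffic evenly across shards.

```javascript
// Run in mongosh against a sharded cluster (requires a running mongos).
sh.enableSharding("appdb");
sh.shardCollection("appdb.events", { userId: "hashed" });
sh.status(); // inspect how chunks are distributed across shards
```

Choosing the shard key is the critical decision: a hashed key balances writes, while a ranged key keeps related documents together for range queries.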
Environment configuration could store things like the database URI, usernames, passwords, any third-party endpoints, and any other variables you would like to be configurable. This can be set separately for each environment the application runs in, say development, pre-prod or staging, and finally production. That is how I think environment configuration helps manage the different stages of a Node.js application's life cycle, but I don't know exactly how this works.
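A typical dotenv-style setup reads this per-environment configuration from `process.env`. The sketch below shows the usual shape; the variable names and defaults are hypothetical.

```javascript
// Build a config object from environment variables, with safe defaults for
// non-secret values and a hard failure for missing secrets.
function loadConfig(env = process.env) {
  const stage = env.NODE_ENV ?? 'development';
  return {
    stage,
    mongoUri: env.MONGO_URI ?? 'mongodb://localhost:27017/app-dev',
    port: Number(env.PORT ?? 3000),
    // Fail fast on secrets: there is no safe default for them.
    apiKey: env.API_KEY ?? (() => { throw new Error('API_KEY is required'); })(),
  };
}

// Each deployment stage supplies its own values (via .env files, CI secrets,
// or the platform's app settings).
const cfg = loadConfig({
  NODE_ENV: 'staging',
  MONGO_URI: 'mongodb://db-staging/app',
  API_KEY: 'k',
});
console.log(cfg.stage, cfg.port); // → staging 3000
```

The same code then runs unchanged in development, staging, and production; only the injected environment differs.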
Yeah, one of the major things I did was implement caching. I created a pass-through cache for our applications, specifically for transactions being written to a MongoDB database. It wrote the data to Redis first, and then from that data we created events that wrote it on to the MongoDB database. This removed a huge amount of latency from the application: it took the response time of one of our APIs, one that wrote a lot of data to the database, down from around 950 milliseconds to 1.25 seconds to roughly 200 to 300 milliseconds. That is what I did to enhance the application's performance profile. Apart from that, at a very high level, I tried to use multithreading, adding worker threads, and also ran the application as multiple processes on multiple cores. Both of these can improve application performance quite a bit.
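The write-behind cache described above can be sketched like this: writes land in the fast cache immediately (so the API can return), and queued events are later flushed to the database by a background worker. In-memory Maps stand in for Redis and MongoDB, and all names are hypothetical.

```javascript
class WriteBehindCache {
  constructor(db) {
    this.cache = new Map();   // stands in for Redis
    this.queue = [];          // stands in for an event stream
    this.db = db;             // stands in for MongoDB
  }

  // Fast path: the API call returns as soon as the cache is updated.
  write(key, value) {
    this.cache.set(key, value);
    this.queue.push({ key, value });
  }

  // Reads hit the cache first, falling back to the database.
  read(key) {
    return this.cache.get(key) ?? this.db.get(key) ?? null;
  }

  // Background worker drains queued events into the database.
  flush() {
    while (this.queue.length) {
      const { key, value } = this.queue.shift();
      this.db.set(key, value);
    }
  }
}

const db = new Map();
const cache = new WriteBehindCache(db);
cache.write('txn:1', { amount: 250 });
console.log(cache.read('txn:1').amount, db.has('txn:1')); // → 250 false
cache.flush();
console.log(db.get('txn:1').amount); // → 250
```

The trade-off is a window where the cache and database disagree, which is why this suits latency-sensitive write paths where eventual persistence is acceptable.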