I architect and lead high-performance systems that don't just handle growth—they accelerate it. At Crickpe, I designed a distributed architecture that seamlessly manages 100k requests/minute, leveraging Redis and RabbitMQ to ensure lightning-fast responsiveness. My database optimizations kept user experience smooth even as traffic surged.
Currently, as the Solution Architect at Zerope, I'm orchestrating a cloud-native future with Kubernetes. Our microservices architecture isn't just a buzzword—it's a strategic choice that enhances our agility and scalability. We're not just building a product; we're building an adaptable tech ecosystem.
Tech Lead
Third Unicorn Pvt Ltd

Technical Lead
Third Unicorn Pvt Ltd

Team Lead
Codebrew Labs Pvt Ltd

Software Engineer
Brucode Technologies Pvt Ltd

MEAN Stack Developer
Innovation Pvt Ltd

MEAN Stack Developer
Innovation Pvt Ltd

Skills: Node.js, MySQL, NoSQL, Angular, React, Nest.js, Redis, Express, MongoDB, Postgres, Next.js, SignalR
I'm working as a Tech Lead at Zerope, where my team handles two products, Zerope and Crickpe. We are a product-based company, so I lead the tech team and deal with a lot of data. We design the system to be scalable for our users, and mostly we aim for horizontal scaling. Our tech stack is Node.js: the backend is built with NestJS and TypeScript, which we chose for stronger type safety. We use Postgres as the database and RabbitMQ for queue management. On the server side, we run on Google-managed Kubernetes on GCP, so the servers scale horizontally and can handle however much traffic comes in. We have load-tested the system with a high number of users; the pods scale up when needed and handle the load well, and we try to write optimized code so it uses fewer server resources.
We can implement role-based access control in a Node.js API using Passport.js and JWT. When we generate the JWT, we include the user ID and the role in the token payload itself, so we can see the user's role on every request. We then check in the database whether the user actually has that role, and we cache the role information in Redis to improve performance: instead of hitting the database on every request from the user, we can simply check Redis. This approach lets us implement role-based access control easily.
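The role check described above can be sketched roughly as follows. This is a minimal sketch, not the real implementation: the helper names (requireRole, resolveRole), the token payload shape, and the in-memory Map standing in for Redis are all assumptions; in the real stack a Passport JWT strategy would decode the token and Redis would back the cache.

```typescript
// Shape of the decoded JWT payload (an assumption for this sketch).
interface JwtPayload {
  userId: string;
  role: string;
}

// In-memory stand-in for the Redis role cache (key: userId, value: role).
const roleCache = new Map<string, string>();

// Stand-in for the database lookup of the user's current role.
async function fetchRoleFromDb(userId: string): Promise<string> {
  const roles: Record<string, string> = { u1: "admin", u2: "viewer" };
  return roles[userId] ?? "guest";
}

// Resolve the role: check the cache first, fall back to the database on a
// miss, and cache the result so later requests skip the database entirely.
async function resolveRole(payload: JwtPayload): Promise<string> {
  const cached = roleCache.get(payload.userId);
  if (cached !== undefined) return cached;
  const role = await fetchRoleFromDb(payload.userId);
  roleCache.set(payload.userId, role);
  return role;
}

// Route guard: allow the request only if the resolved role is permitted.
async function requireRole(
  payload: JwtPayload,
  allowed: string[],
): Promise<boolean> {
  const role = await resolveRole(payload);
  return allowed.includes(role);
}
```

A route handler would call requireRole with the decoded payload and its allowed roles, returning 403 when it resolves to false.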
How would you use TypeORM to manage database schema migrations in a production environment? Currently, in the NestJS product, we are using TypeORM. To manage schema migrations, we use TypeORM's commands to generate the migrations. In TypeORM we define our entities, including the columns we need, and then we run the generate command. It compares the entities against the database, identifies missing columns or schema changes, and writes the necessary ALTER statements into the migration file. We then run that migration on prod. We also use TypeORM for seeding: we use the seeder functionality to load prefilled data into our system.
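The generated migration described above looks roughly like this. It is a hedged sketch: the local MigrationInterface and QueryRunner types only mirror the shape of TypeORM's real interfaces (normally imported from "typeorm") so the example is self-contained, and the table, column, and class names are made up.

```typescript
// Local stand-ins mirroring the shape of TypeORM's MigrationInterface and
// QueryRunner, so this sketch runs without the typeorm package installed.
interface QueryRunner {
  query(sql: string): Promise<void>;
}
interface MigrationInterface {
  up(queryRunner: QueryRunner): Promise<void>;
  down(queryRunner: QueryRunner): Promise<void>;
}

// Roughly what `typeorm migration:generate` emits after a nullable "phone"
// column is added to a hypothetical "user" entity: up applies the change,
// down reverts it.
class AddPhoneToUser1700000000000 implements MigrationInterface {
  async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`ALTER TABLE "user" ADD "phone" character varying`);
  }
  async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`ALTER TABLE "user" DROP COLUMN "phone"`);
  }
}
```

In production the migration would be executed with `migration:run`, which records it in the migrations table so it only ever applies once.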
In which cases would I prefer TypeORM over other ORMs? We chose TypeORM mainly because it is much lighter than the other options we were considering. We did research and development on both Sequelize and TypeORM, and found that Sequelize suits cases where you need a heavy, feature-rich ORM, while TypeORM is a lightweight ORM with built-in TypeScript support. That works fine for us, so I chose TypeORM.
When implementing pagination of records from a MySQL database, one concern that must be taken into account is the total count. If you have a large number of records and the response includes a total count, the database has to scan the whole table to produce that count. When the query gets complex, with lots of filters, computing the total count becomes heavy, so you have to add indexes on the filtered fields; for example, if you filter by name, you should index the name column as well. In the app, I normally use infinite scrolling for pagination, so we don't have to report the total number of pages. What I do is query 11 records when the page size is 10 and check whether that extra record exists. I slice it off before sending the response, but its presence tells me there is more data, so I send a flag, "hasMore": true. If the client gets "hasMore": true, it hits our API again. This is how I implement app-side pagination: I only ever fetch 11 records and never count the whole table, which matters when you have, say, 1,000,000 records, because counting them all would be a challenge.
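The "fetch one extra row" trick above can be sketched as a small function. This is a sketch under assumptions: the fetchPage name is hypothetical, and the in-memory array stands in for a MySQL query issued with LIMIT pageSize + 1.

```typescript
interface Page<T> {
  items: T[];
  hasMore: boolean;
}

// Return one page without ever counting the full table: fetch
// pageSize + 1 rows, slice off the extra one, and use its presence
// as the hasMore flag for the client.
function fetchPage<T>(rows: T[], offset: number, pageSize: number): Page<T> {
  // Stands in for: SELECT ... LIMIT pageSize + 1 OFFSET offset
  const fetched = rows.slice(offset, offset + pageSize + 1);
  return {
    items: fetched.slice(0, pageSize),
    hasMore: fetched.length > pageSize,
  };
}
```

With a page size of 10 this fetches at most 11 rows per request regardless of table size, which is the whole point of the technique.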
How can we leverage TypeORM to enforce referential integrity across table relationships in a MySQL database? For referential integrity we implement foreign keys, and we define them in the entity classes. Suppose there is a user table and an OTP table, and the OTP table has a userId column. In the entities we declare that a user has many OTPs (a one-to-many relation) and that each OTP belongs to one user (a many-to-one relation holding the user ID), and TypeORM creates the foreign key constraint for us. So we use these kinds of relationship definitions, has-one, has-many, belongs-to, and define them in our NestJS entities.
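What that foreign key enforces can be shown in miniature. This is purely an illustrative sketch: the in-memory arrays and the insertOtp check below are hypothetical stand-ins for the FOREIGN KEY constraint that a TypeORM many-to-one relation generates, where MySQL itself rejects an OTP row whose userId matches no user.

```typescript
interface User {
  id: number;
}
interface Otp {
  code: string;
  userId: number; // references User.id, like a FOREIGN KEY column
}

const users: User[] = [{ id: 1 }];
const otps: Otp[] = [];

// Mimics the referential-integrity check the database performs: the
// insert succeeds only if the referenced user row exists.
function insertOtp(otp: Otp): boolean {
  if (!users.some((u) => u.id === otp.userId)) return false; // FK violation
  otps.push(otp);
  return true;
}
```

In the real schema this check lives in the database, so it holds even when rows are written outside the application.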
Given this JavaScript client code that requires Socket.IO, can you spot any potential performance issues? I think the code looks good. I don't have much idea about it, but yes, we could use connection pooling so that we can limit the number of connections. I think that is my suggestion.
We ensure that API endpoints developed with Express are strongly typed by using TypeScript in the Express project itself, so the compiler reports type errors for us. For each endpoint we can define types for the request parameters, body, and response, and TypeScript checks everything against those types.
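A minimal sketch of that idea, assuming hypothetical names throughout: Express would normally supply typed Request/Response objects via @types/express generics, but here small local types keep the example self-contained, with a validator narrowing the untyped JSON payload before the handler sees it.

```typescript
// Typed request body for a hypothetical "create user" endpoint.
interface CreateUserBody {
  name: string;
  email: string;
}

// Minimal stand-in for an HTTP response (Express would provide this).
interface ApiResponse<T> {
  status: number;
  body: T | { error: string };
}

// Narrow the untyped JSON payload into the typed body, so the handler
// below only ever works with a well-formed CreateUserBody.
function parseCreateUserBody(raw: unknown): CreateUserBody | null {
  if (typeof raw !== "object" || raw === null) return null;
  const r = raw as Record<string, unknown>;
  if (typeof r.name !== "string" || typeof r.email !== "string") return null;
  return { name: r.name, email: r.email };
}

// The handler itself: the compiler guarantees body.name exists after the
// null check, so typos like body.nmae fail at build time, not runtime.
function createUserHandler(
  raw: unknown,
): ApiResponse<{ id: number; name: string }> {
  const body = parseCreateUserBody(raw);
  if (body === null) return { status: 400, body: { error: "invalid payload" } };
  return { status: 201, body: { id: 1, name: body.name } };
}
```

In a real Express project the same pattern appears as `Request<Params, ResBody, ReqBody>` generics plus a validation step in middleware.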
What approach would you take to migrate a complex database schema without downtime using TypeORM and Node.js? For a complex schema migration, I don't think Node.js or TypeORM itself will give any error while the migration runs, so applying the migration should work fine. The main thing is to break the change into small, backward-compatible steps so that the old and new application code both keep working while the migration is applied.
What method do you recommend for implementing custom validation logic in TypeORM when it is not supported out of the box? We can create our own validator class and use it.
Detail how you would manage asynchronous programming in Node.js to optimize performance when integrating multiple third-party APIs. To optimize performance while integrating multiple third-party APIs, we can use several techniques in Node.js. First, when the APIs are independent, we can wrap the calls in promises and use Promise.all to run all the third-party calls in parallel. If the APIs depend on each other, we can use the async library, which gives us a waterfall model as well as the option to run everything at once. We can also use async/await: if the calls are not dependent on each other, we can start all the functions without awaiting each one, and then await and store the results together. If the APIs are dependent on each other, we use await so each function completes before the next one is called. This is how we can manage and optimize the process.
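The two patterns above, parallel Promise.all for independent calls and an awaited waterfall for dependent ones, can be sketched like this. The fetchPrice, fetchStock, and fetchRate functions are hypothetical stubs standing in for real third-party API calls.

```typescript
// Stubbed third-party calls (assumptions for this sketch).
const fetchPrice = async (): Promise<number> => 100;
const fetchStock = async (): Promise<number> => 5;
const fetchRate = async (base: number): Promise<number> => base * 2;

// Independent APIs: fire both calls at once and wait for them together,
// so total latency is the slowest call rather than the sum of both.
async function independentCalls(): Promise<[number, number]> {
  return Promise.all([fetchPrice(), fetchStock()]);
}

// Dependent APIs (a waterfall): each await feeds the next call, because
// the second request needs the first result as its input.
async function dependentCalls(): Promise<number> {
  const price = await fetchPrice(); // first call
  return fetchRate(price); // second call depends on the first result
}
```

The rule of thumb is simply: await sequentially only when a call genuinely needs an earlier result; otherwise start everything and Promise.all the lot.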