
As a dedicated Node.js developer, I excel at building scalable solutions and tackling complex challenges. Always eager to explore new technologies, I bring a fresh approach to every project. Let's create something amazing!
Senior Backend Developer, BLDON
Software Developer (Node.js), Saffron Stays
Jr. Node.js Developer, Aqlut

Node.js

Express.js

MySQL

PostgreSQL

MongoDB

Sequelize

Mongoose

TypeScript

Git
Azure

AWS

TypeORM
Docker

RabbitMQ

Apache Kafka

Redis Stack

NestJs

Redis

Microservices

Hi, I'm Raman Thakur, and I'm from Himachal. I have a total of 3.4 years of experience in Node.js, where I have worked with the Express.js and NestJS frameworks as well as TypeScript. For databases I have worked with Postgres, MySQL, and MongoDB, and I have also worked with microservices architecture, primarily with KafkaJS, RabbitMQ, and Redis. As for projects, I have worked on various applications, such as netting applications and property-related applications. My last project was a social media agency platform where someone can hire a creator to publish the content he wants, a reel, a YouTube Short, or a YouTube video based on his product. Creators bid for a particular project, and if the buyer accepts a bid, our platform deducts a 2% charge on the total exchange between the buyer and the creator.
Basically, TypeORM is more advanced than other ORMs such as Sequelize, and it is mostly used with the NestJS framework because it provides built-in support for TypeScript, so our models and schemas are properly, statically typed. There is no equivalent in Sequelize: Sequelize is good with plain JavaScript (CommonJS), but when it comes to TypeScript, it does not give us the leverage that TypeORM does. TypeORM also enforces properly defined tables and structures that match our database.
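The static-typing benefit described above can be illustrated with a plain TypeScript sketch. This is not TypeORM's actual API; the `User` model and the toy in-memory `Repository` are hypothetical, but they show the compile-time safety a typed ORM gives you.

```typescript
// Illustrative sketch (not TypeORM's real API): a statically typed model
// plus a tiny in-memory repository that only accepts that shape.

interface User {
  id: number;
  email: string;
  createdAt: Date;
}

class Repository<T extends { id: number }> {
  private rows: T[] = [];

  save(row: T): T {
    this.rows.push(row);
    return row;
  }

  findOneBy(pred: (row: T) => boolean): T | undefined {
    return this.rows.find(pred);
  }
}

const users = new Repository<User>();
users.save({ id: 1, email: "a@example.com", createdAt: new Date() });
// users.save({ id: 2, mail: "typo" }); // would not compile: no "mail" field

const found = users.findOneBy((u) => u.email === "a@example.com");
```

With an untyped ORM the misspelled `mail` field would only fail at runtime; here the compiler rejects it before the query ever runs.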
So, in Express.js we have a number of ways to handle an error, but the most common are these. First, the try/catch mechanism: we can add try/catch in the particular file where we want to catch an exception, and in the catch block we can handle the error however we want, for example by defining a custom error based on the exception we encountered, whether from our codebase or our database. Second, there are packages such as await-to-js, with which we do not need explicit try/catch. For example, for a database query like findOne, we can destructure two variables: the first holds any exception or error that occurs, and if there is none, the second holds the data returned from the database. Third is global error handling: we can define a global error-handling file, for example with a switch case over every known exception, so that if any error comes up in a service or controller file, it is handled directly by that global error handler.
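The second pattern above can be sketched with a hand-rolled `to` helper. This is a minimal equivalent of what the `await-to-js` package provides, not its exact implementation, and `findOne` here is a hypothetical stand-in for a database call.

```typescript
// Minimal version of the await-to-js pattern: resolve to an [error, data]
// tuple instead of throwing, so callers skip explicit try/catch.
async function to<T>(promise: Promise<T>): Promise<[Error | null, T | null]> {
  try {
    const data = await promise;
    return [null, data];
  } catch (err) {
    return [err instanceof Error ? err : new Error(String(err)), null];
  }
}

// Hypothetical stand-in for a database query such as findOne.
async function findOne(id: number): Promise<{ id: number }> {
  if (id <= 0) throw new Error("not found");
  return { id };
}

async function demo() {
  const [err, user] = await to(findOne(1));    // err is null, user holds the row
  const [err2, user2] = await to(findOne(-1)); // err2 holds the error, user2 is null
  return { err, user, err2, user2 };
}
```

Each call site then checks the first tuple element, which keeps the happy path flat instead of nesting try/catch blocks.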
The main consideration is the page and limit (offset) parameters that we have to pass to our MySQL database in order to implement pagination. From the front-end we will be getting the page number and page size, so we have to make sure those parameters are actually coming from the front-end, and in case they haven't been sent, we have to define default values for them, so that an undefined or otherwise null value never reaches our database, which would produce an error in our application. Another consideration is that when implementing pagination, we must return metadata to the front-end, such as the overall number of records in our database, so that they can properly maintain the pagination on their end, for example showing the number of pages and the current cursor position.
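The defaults-plus-metadata logic above can be sketched as a small helper. The parameter names (`page`, `limit`) and the default values are assumptions for illustration, not a fixed API.

```typescript
// Sketch: sanitize front-end paging input and build SQL-ready values plus
// the metadata the front-end needs to render its pager.
interface PageQuery { page?: number; limit?: number }
interface PageMeta {
  page: number;
  limit: number;       // value for SQL LIMIT
  offset: number;      // value for SQL OFFSET
  totalRecords: number;
  totalPages: number;
}

function buildPagination(query: PageQuery, totalRecords: number): PageMeta {
  // Fall back to safe defaults when values are missing or invalid, so no
  // undefined/NaN ever reaches the LIMIT/OFFSET clause.
  const page = Number.isInteger(query.page) && query.page! > 0 ? query.page! : 1;
  const limit = Number.isInteger(query.limit) && query.limit! > 0 ? query.limit! : 10;
  return {
    page,
    limit,
    offset: (page - 1) * limit,
    totalRecords,
    totalPages: Math.ceil(totalRecords / limit),
  };
}
```

The query itself would then be something like `SELECT ... LIMIT ? OFFSET ?` with `meta.limit` and `meta.offset`, while the full `meta` object goes back in the response body.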
In Express.js, we don't get any built-in support for TypeScript; we have to add libraries manually to use TypeScript with Express. For API endpoints, we can define types for our functions: we have to declare what a handler's response will be. As we know, it returns a promise by default, but what kind of data will be there after the promise resolves must be declared on the controller, so that in the router file we have proper context about the value being returned. Also, endpoints are generally strings, the route path or the base URL of a page, so we can declare route constants typed as string, keep them in a constants file, and define all our routes there. In that way we handle the typing of our API endpoints and make sure that only string values are passed as endpoints and that a properly typed promise is returned from the handler attached to each endpoint.
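As a sketch of the approach above, the route constants and the handler's resolved type can both be declared in plain TypeScript. Express itself is not needed for the illustration; `ROUTES`, `UserResponse`, and `register` are hypothetical names.

```typescript
// Typed route constants: only these exact strings are valid routes.
const ROUTES = {
  users: "/api/users",
  userById: "/api/users/:id",
} as const;

type Route = (typeof ROUTES)[keyof typeof ROUTES];

interface UserResponse { id: number; name: string }

// The resolved type of the promise is declared, so the router file knows
// exactly what this handler yields.
async function getUserHandler(id: number): Promise<UserResponse> {
  return { id, name: "example" };
}

// Stand-in for app.get(route, handler): accepts only a typed Route string
// and a handler returning Promise<UserResponse>.
function register(route: Route, handler: (id: number) => Promise<UserResponse>): string {
  return route;
}
```

Passing an arbitrary string like `"/typo"` to `register` would fail to compile, which is exactly the guarantee described above.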
When I am working with TypeScript or NestJS, I prefer TypeORM because its built-in TypeScript support makes it easier to work with than other ORMs. It also has a more flexible query builder than ORMs like Sequelize. In Sequelize we have a disadvantage with nested queries: in some cases we have to fall back on raw literals. In TypeORM it is more like a mixture of builder functions and raw queries, so we can easily define nested queries and even sub-queries compared to Sequelize. So if I have an application with a large number of joins or a lot of tables that must be statically typed, I would prefer TypeORM over any other ORM.
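To make the query-builder flexibility concrete, here is a toy fluent builder, emphatically not TypeORM's real API, showing how chainable methods compose SQL text and how one builder can be nested inside another as a sub-query.

```typescript
// Toy fluent query builder: each method appends a SQL fragment and returns
// `this` for chaining; from() accepts another builder as a sub-query.
class QB {
  private parts: string[] = [];

  select(cols: string): QB { this.parts.push(`SELECT ${cols}`); return this; }

  from(src: string | QB, alias?: string): QB {
    const s = src instanceof QB ? `(${src.build()})` : src;
    this.parts.push(`FROM ${s}${alias ? ` AS ${alias}` : ""}`);
    return this;
  }

  where(cond: string): QB { this.parts.push(`WHERE ${cond}`); return this; }

  build(): string { return this.parts.join(" "); }
}

// A sub-query nested inside an outer query, composed instead of hand-written:
const sub = new QB().select("user_id").from("orders").where("total > 100");
const sql = new QB().select("*").from(sub, "big_spenders").build();
// sql === "SELECT * FROM (SELECT user_id FROM orders WHERE total > 100) AS big_spenders"
```

Real builders add parameter binding and escaping on top of this, but the composition idea is the same: sub-queries stay reusable objects instead of string fragments.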
So, one performance issue that I can identify in this code is that the event is not unique per user; it is global. If I have a large user base, the event or the socket may get overwhelmed here, because the event will be triggered n times depending on our user load, and that creates a performance issue.
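The difference between a global event and per-user events can be sketched with Node's built-in `EventEmitter` (the `"message"` event name and user ids are hypothetical): with a single global event, every listener fires on every emit, while per-user event names fire only the intended listener.

```typescript
import { EventEmitter } from "node:events";

const bus = new EventEmitter();
bus.setMaxListeners(0); // allow many users in this sketch

let globalCalls = 0;
let scopedCalls = 0;

// Global event: both users' handlers run on every single emit.
bus.on("message", () => { globalCalls++; });
bus.on("message", () => { globalCalls++; });
bus.emit("message"); // triggers 2 handlers

// Per-user events: only user 42's handler runs.
bus.on("message:41", () => { scopedCalls++; });
bus.on("message:42", () => { scopedCalls++; });
bus.emit("message:42"); // triggers 1 handler
```

With n connected users, the global variant does O(n) handler work per message; scoping the event name (or using rooms/namespaces in a socket library) keeps it O(1).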
Rate limiting is a concept where we limit the number of hits our API can handle in a defined window of seconds. In Node.js we can build a throttling mechanism using npm packages, or if we are working with the NestJS framework, we have a predefined throttle decorator where we can define the number of hits an endpoint can take within a window of seconds. For example, if I want my create-user endpoint to be triggered only once every 5 seconds, I can define there the number of hits it will take in 5 seconds. In Express.js we have to use an npm package for rate limiting, applied in our main file or on a particular route, so that it limits the number of hits on that endpoint.
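In practice you would reach for express-rate-limit or NestJS's @nestjs/throttler, but the underlying idea can be shown with a hand-rolled fixed-window limiter (the class name and key scheme here are illustrative).

```typescript
// Fixed-window rate limiter sketch: at most maxHits per key per windowMs.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private maxHits: number, private windowMs: number) {}

  // Returns true if the request is allowed, false if it should be
  // rejected (typically with HTTP 429 Too Many Requests).
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // New key or expired window: start a fresh window.
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count++;
    return entry.count <= this.maxHits;
  }
}

// Example: 1 hit per 5 seconds per user, like the create-user case above.
const limiter = new RateLimiter(1, 5000);
```

In middleware form, `key` would typically be the client IP or user id, and a `false` result would short-circuit the request with a 429 response.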
First of all, I will make sure there are no conflicting database schemas waiting to be migrated: I will migrate all the other schemas to the database first, then handle this particular schema individually. Then I will look for changes that reduce its complexity in the code. For example, if a field holds an array of objects, I would break it into multiple tables or schemas, so that the complexity is reduced and the main functionality is still achieved using fragmented, or distributed, tables.
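The normalization step above, splitting an embedded array of objects into its own table, can be sketched as a transform from a document shape to parent and child rows. The `Order`/`OrderItem` names are hypothetical.

```typescript
// Before: one document with a complex embedded array.
interface OrderDoc {
  id: number;
  customer: string;
  items: { sku: string; qty: number }[];
}

// After: a parent row plus one child row per array element.
interface OrderRow { id: number; customer: string }
interface OrderItemRow { orderId: number; sku: string; qty: number }

function normalize(doc: OrderDoc): { order: OrderRow; items: OrderItemRow[] } {
  return {
    order: { id: doc.id, customer: doc.customer },
    // Each embedded item becomes its own row, keyed back to the parent.
    items: doc.items.map((i) => ({ orderId: doc.id, sku: i.sku, qty: i.qty })),
  };
}
```

Migration-wise, each output shape maps to its own `CREATE TABLE`, with `orderId` as a foreign key, so the complex field disappears from the parent schema entirely.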
The most effective way to manage state in a real-time application is to subscribe to an event only when it is actually needed. I don't want my event to be subscribed all the time at the front-end or the back-end. We have libraries like Pusher, which uses WebSockets and is lighter, and I would only connect to a particular event when data is coming from it. I will not allow that event to stay subscribed all the time; it will be a just-in-time subscription, where the event is subscribed at the moment data arrives from it rather than being subscribed permanently at the front-end.
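The just-in-time idea can be sketched with Node's built-in `EventEmitter`: attach a listener only while data is expected, and hand the caller an unsubscribe function to drop it afterwards. The `"price-update"` event name is hypothetical.

```typescript
import { EventEmitter } from "node:events";

const feed = new EventEmitter();

// Subscribe on demand and return an unsubscribe function, so the listener
// lives only as long as the caller actually needs the data.
function subscribeJustInTime(
  event: string,
  handler: (data?: unknown) => void,
): () => void {
  feed.on(event, handler);
  return () => feed.off(event, handler);
}

let received = 0;
const unsubscribe = subscribeJustInTime("price-update", () => { received++; });
feed.emit("price-update"); // handled while subscribed
unsubscribe();
feed.emit("price-update"); // ignored: listener already removed
```

The same shape applies to WebSocket channels: bind on open/need, unbind on idle, so idle clients cost the server nothing.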
While dealing with multiple third-party APIs, I manage my codebase so that I have helper functions for each concern. For example, if I'm integrating Twilio or any other third party, I make sure there is a separate function for its connection and separate functions for sending and receiving messages. Inside those functions I use try/catch, and in the catch block I handle all the exceptions: Twilio and other third parties document their error codes and error messages for bad requests and other failures, so I try to handle all of those cases there. To handle the calls asynchronously, I define all the helper functions as async functions and await them in the main API or service where I call them. So the main approach is a separated design, one function for the connection and different functions for sending messages or any other functionality, all of them using async/await.
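The helper-per-concern layout above can be sketched as follows. The `SmsClient` here is a hypothetical injected stub, not the real Twilio SDK, and the error message it throws is only an example of a provider-style error code.

```typescript
// Abstract interface so the helper doesn't depend on any concrete SDK.
interface SmsClient {
  send(to: string, body: string): Promise<{ sid: string }>;
}

// Separate async helper just for sending: every provider exception is
// caught here and mapped into our own result shape.
async function sendMessage(
  client: SmsClient,
  to: string,
  body: string,
): Promise<{ ok: boolean; sid?: string; reason?: string }> {
  try {
    const res = await client.send(to, body);
    return { ok: true, sid: res.sid };
  } catch (err) {
    // Map the provider's documented error into something our services understand.
    return { ok: false, reason: err instanceof Error ? err.message : "unknown provider error" };
  }
}

// Stub client for the example: rejects empty recipients with a
// provider-style error code (invented for illustration).
const stubClient: SmsClient = {
  async send(to, _body) {
    if (!to) throw new Error("21211: invalid 'to' number");
    return { sid: "SM_fake" };
  },
};
```

The main service then just awaits `sendMessage(...)` and branches on `ok`, keeping all provider-specific error knowledge inside the helper.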