
Senior Software Engineer, SmartifAI
Consultant, BAssure Solutions
Software Engineer, Edureka
Bitbucket

RabbitMQ

AWS Lambda

Amazon S3

Amazon CloudFront

Redis
Yeah. Myself, Michelle Kumar. I have around 4 years of experience. I graduated in 2019, and then I started working with Edureka. During my stay at Edureka, I worked mostly with PHP and JavaScript. Coming to PHP, it was mostly the CakePHP framework, and sometimes core PHP as well. And coming to JavaScript, it was vanilla JS and then Node.js. I worked on projects like maintaining their ecommerce portal and building a few APIs for the smooth functioning of that portal, and then some backend functionality for Edureka's admin panel. I also worked on an allocation system and the invoicing system. The major work I did was optimizing their landing pages so they get a better Core Web Vitals score, which improves their Google search page results. I was with them for 3 years and 9 months. After that, I worked at another company for 3 months, where I worked with PHP and MySQL. Throughout my career, the database I have mostly worked with is MySQL. So I have around 4 years of experience with PHP and JavaScript, 2 years with Node.js, and 3-plus years with MySQL. So, yes, this is about my past experience.
So, yes, to implement rate limiting in a Node.js Express application, the first thing is to identify where the traffic is coming from. If the user is logged in, you can identify him by his user ID; or if a client is authenticated with a client ID, you can identify which client the traffic is coming from and implement rate limiting based on that client ID: within some particular window of time, this client can hit this API only this many times. And if it is not a logged-in client, we could use the IP address of the machine to implement rate limiting for him.

Coming to the technical implementation, there are libraries, npm packages, which can take care of it. You install the package and, based on the IP address, it takes care of the rate limit for each endpoint. You implement it as middleware, because Express.js (or Koa.js) has this middleware concept. So you can implement the middleware using the npm package, or you can write custom code which does these things: tracking the client ID and how many times it hit a particular endpoint, or checking the client machine's IP and how many times it hit that endpoint. Technically, you do this in the middleware of your application. And to store how many times a request was made to a particular endpoint by a client or by a particular IP address, you need to store this data somewhere.

You could either store it in your DB, or you could store it in an in-memory database like Redis, where you create an entry per IP or per client for a particular endpoint, counting how many times it was hit. Using that data, you can enforce the rate limit. And, yes, apart from this, there is one more option: instead of handling it at the application level, if you are using a cloud provider like AWS, you can implement rate limiting using WAF. AWS provides a WAF, and using that you could do the rate limiting for your endpoints. Yes.
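The middleware approach described above can be sketched as a minimal fixed-window rate limiter. Everything here (the window size, the limit, the key choice) is illustrative; in production you would typically reach for a package like express-rate-limit, or back the counters with Redis instead of an in-process Map.

```javascript
// In-memory counters: key -> { count, windowStart }. With Redis you
// would store these entries there instead, so they survive restarts
// and are shared across processes.
const hits = new Map();

// Factory returning Express-style middleware (req, res, next).
function rateLimit({ windowMs = 60_000, max = 100 } = {}) {
  return function (req, res, next) {
    // Prefer an authenticated identity; fall back to the client IP.
    const key = (req.user && req.user.id) || req.ip;
    const now = Date.now();
    const entry = hits.get(key);

    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now }); // start a new window
      return next();
    }
    entry.count += 1;
    if (entry.count > max) {
      // Too many requests in this window: reject with 429.
      return res.status(429).json({ error: 'Too many requests' });
    }
    next();
  };
}
```

With Express this would be registered as, for example, `app.use(rateLimit({ windowMs: 60_000, max: 100 }))` before the routes it should protect.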
Mhmm, event loop blocking in compute-heavy applications. Yeah. So, basically, first, the event loop gets blocked when there is a synchronous operation that runs for a long time, because only synchronous operations run on the event loop; asynchronous operations get pushed to the callback queue. So if there is an operation which is computed synchronously, like incrementing a variable in an infinite while loop, if you write while-loop code which keeps on incrementing the variable, this is synchronous code, right? So it will block the event loop.

So, to reduce the load for compute-heavy applications, the first thing we could consider is creating workers. Since Node.js JavaScript is single threaded, everything runs in a single thread; what we can do is create workers, because a worker runs on a different thread and then passes the results back to the main thread. The second thing is clustering, where you fork the existing process to create child processes, and these child processes can share the load: you send your requests, or these compute-heavy operations, to a child process and then get the result from it. Or else you can spawn new threads; Node.js has support for this as well. Even though it is single threaded, there is support to spawn a new thread, and workers are basically the same thing, right? Spawn a new thread and pass the heavy computations to it.

And once the entire computation is done, the result comes back to the main thread, so Node.js appears as if it is not getting blocked. So, basically, spawning new workers or new threads, and using clustering to have child processes in place, these things will help in preventing the blocking of the event loop.
So this one will be about building a REST API. Using Express to build an API for the CRUD operations on Postgres: the first thing, we need to connect to the Postgres database. There are npm packages to do so; you would use packages like Sequelize, I believe, to connect with the database. And the next thing is creating the endpoints, or the routes. Using Express, you create a route for each CRUD operation: create can be one operation, read can be one operation, delete can be one operation, update can be one operation. For insert you could use a POST request; for update, a PUT (or POST) request; to retrieve the data, a GET request; and for delete, a DELETE request. So you create the endpoints using Express, and then you connect to the PostgreSQL database using an SQL-based package like Sequelize.
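The CRUD routes described above can be sketched as plain handler functions. The `users` resource and field names are illustrative, and `db` here is a tiny in-memory stand-in for a real Postgres client so the sketch is self-contained; with the `pg` package each branch would instead run a parameterized query such as `pool.query('SELECT * FROM users WHERE id = $1', [id])`.

```javascript
// In-memory stand-in for the database layer.
const db = { rows: new Map(), nextId: 1 };

const users = {
  // POST /users -> create a row from the request body
  create(req, res) {
    const row = { id: db.nextId++, ...req.body };
    db.rows.set(row.id, row);
    res.status(201).json(row);
  },
  // GET /users/:id -> read one row
  read(req, res) {
    const row = db.rows.get(Number(req.params.id));
    if (!row) return res.status(404).json({ error: 'not found' });
    res.status(200).json(row);
  },
  // PUT /users/:id -> merge the body into the existing row
  update(req, res) {
    const id = Number(req.params.id);
    if (!db.rows.has(id)) return res.status(404).json({ error: 'not found' });
    const row = { ...db.rows.get(id), ...req.body };
    db.rows.set(id, row);
    res.status(200).json(row);
  },
  // DELETE /users/:id -> remove the row
  remove(req, res) {
    db.rows.delete(Number(req.params.id));
    res.status(204).end();
  },
};
```

With Express these would be wired up as `app.post('/users', users.create)`, `app.get('/users/:id', users.read)`, `app.put('/users/:id', users.update)`, and `app.delete('/users/:id', users.remove)`.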
So, database schema migrations in Node.js. I think you can use the Sequelize library, the Sequelize ORM, whichever is present, right? And the Sequelize ORM can be used for migrating the database schemas. You define a migration, and then you run it so that whichever database schema you have, that gets migrated to the new database. So, using the Sequelize ORM with Node.js, you can migrate the database schema.
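A Sequelize migration is a file with `up` and `down` functions that sequelize-cli runs against the database. The table and column names below are illustrative; this is a config-style fragment, not a standalone program.

```javascript
// migrations/20240101000000-create-users.js (hypothetical filename)
// Run with: npx sequelize-cli db:migrate
// Roll back: npx sequelize-cli db:migrate:undo
module.exports = {
  async up(queryInterface, Sequelize) {
    // Apply the schema change: create the table.
    await queryInterface.createTable('Users', {
      id: { type: Sequelize.INTEGER, primaryKey: true, autoIncrement: true },
      name: { type: Sequelize.STRING, allowNull: false },
      email: { type: Sequelize.STRING, unique: true },
      createdAt: { type: Sequelize.DATE, allowNull: false },
      updatedAt: { type: Sequelize.DATE, allowNull: false },
    });
  },
  async down(queryInterface) {
    // Revert the change on rollback.
    await queryInterface.dropTable('Users');
  },
};
```

Each migration is recorded in a bookkeeping table, so running the migrations again only applies the ones the target database has not seen yet.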
To do any file-related operations in Node.js, we would be using the fs module. The fs module is responsible for reading, creating, updating, or deleting files, I believe. So we could use the fs module to do this kind of work in Node.js.
Yes. Assuming this code is used to check the presence of an API key in an Express middleware, the issue I could find is that this API key is a hard-coded key, because you are reading it from an environment file. When you read it from an environment file, the API key is effectively hard coded, so the same key needs to be shared by all the clients, or all the users, hitting this particular endpoint. If all users are using the same key for your endpoint, there is a chance this key might get leaked, and if someone gets hold of it, they will be able to access this endpoint forever, because the key is not generated using any logic; it is hard coded. So that is the issue: the API key is hard coded, and it is shared by all users of this endpoint. If someone gets hold of this key, they can use this endpoint with no issues unless and until you change it in the environment file. And again, to change something in the environment file, you either need to make the change and redeploy your application, or log in to the server and edit the environment file. So this is not a good practice: you should be generating your API keys and then invalidating them at a timely interval so that your API stays safe.
Yeah. So, basically, whenever the user hits this particular endpoint, you are trying to read something from the DB: getUserFromDB. The first issue is there is no exception handling here. Let's say this getUserFromDB function throws an error. Since there is no exception handling, this is going to break your Node.js application, because your server (app.listen, which is your web server) and the route are coupled. So your application will break: if some error is not handled, it is going to crash the entire application because of one request. And if someone tries again after the application has crashed, they are going to get an error, and the application is not going to be accessible to them. So the first thing is there is no exception handling in place.

And again, suppose the user data is not found: you are throwing a new error. Throwing an error is like throwing an exception, and it is not being handled anywhere. So instead of throwing a new Error('user not found'), it is better to send a response here, with a status code like 404 and a message like "user not found". Now what happens is, if there is no user data, instead of throwing an error and breaking your entire application, you just send a 404 response to the user saying the user is not found. So, yes, there needs to be exception handling in place, first thing. And the second thing: if you throw an error at the route level, it needs to be caught somewhere, right? If there were an error-handling middleware, we could use that, but since nothing is here, it will break the application. So instead of throwing an error when the user data is not found, you send a response to the user with status code 404 and the message "user not found". That would avoid breaking the application.
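The corrected route described above can be sketched like this. `getUserFromDB` is the hypothetical lookup from the snippet under review; it is faked here so the example is self-contained.

```javascript
// Fake DB lookup standing in for the real getUserFromDB from the
// reviewed snippet: returns a row for id '1', null otherwise.
async function getUserFromDB(id) {
  if (id === '1') return { id: '1', name: 'Ada' };
  return null;
}

// Route handler with both fixes: a try/catch around the DB call, and a
// 404 response instead of a thrown error when the user is missing.
async function getUserHandler(req, res) {
  try {
    const user = await getUserFromDB(req.params.id);
    if (!user) {
      // Respond rather than throw: the process keeps serving requests.
      return res.status(404).json({ error: 'user not found' });
    }
    res.status(200).json(user);
  } catch (err) {
    // Any unexpected DB failure becomes a 500 instead of a crash.
    res.status(500).json({ error: 'internal server error' });
  }
}
```

With Express this would be registered as `app.get('/users/:id', getUserHandler)`.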
Okay. So, if you are asking about implementing a caching layer: when you say caching layer, if you mean caching the results of queries from the Postgres database, let's say there is an employee table in the Postgres database, and you are going to retrieve data from that table. If you want to cache the resulting data from a query, you could use Redis, and you could integrate a Redis cache with your Express application. There are npm packages for Redis integration, so you could use those packages and integrate Redis with your Express application, which would cache the results of the queries that run on your Postgres database. And the queries you are caching should not change often; there should be at least some one-to-two-minute interval. Let's say the data is not going to change often, but it is going to be requested often; in that case, caching such data will be beneficial to reduce the load on your database servers.

And if you are asking about caching the response of a request, let's say it is requesting some HTML pages, and you want to cache these responses instead of computing the result every time: you could use edge caching, something at the CDN level; you could use CloudFront. And if you are thinking about something at the server level, we could use something like Varnish Cache, or we could implement something ourselves: you store the generated HTML response as a file somewhere on the server, the name of the file being some hash you create; you store that hash against the particular GET request in the Postgres database, and when a similar request comes in, you get the location of the file and serve the same file instead of regenerating it again and again. That you could do for web-page kinds of requests. But if it is mostly about caching the results of a query, you can have Redis in place to store, to cache, the results of the queries.
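The query-result caching described above can be sketched as a small cache-aside helper with a TTL. The in-memory Map stands in for Redis so the sketch is self-contained; with Redis (via a client such as ioredis) the shape is the same, with GET on the key first and SET with an expiry on a miss.

```javascript
// Illustrative in-memory cache: key -> { value, expiresAt }.
const cache = new Map();

// Cache-aside: return the cached value if it is still fresh, otherwise
// run the query, store the result with a TTL, and return it.
async function cachedQuery(key, ttlMs, runQuery) {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit
  const value = await runQuery(); // miss: go to the database
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
```

Usage against Postgres would look like `cachedQuery('employees:all', 120_000, () => pool.query('SELECT * FROM employees'))`, caching the employee rows for the one-to-two-minute interval mentioned above.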