With over a decade of experience in full-stack web and mobile development, I have consistently architected and delivered end-to-end solutions tailored to diverse business needs. Proficient in technologies including React.js, React Native, JavaScript, TypeScript, Node.js, MongoDB, AWS, and Python, I have contributed to numerous projects, from e-commerce platforms to AI-backed talent clouds. My career demonstrates the ability to lead development teams, manage project lifecycles, and ensure seamless integration of complex technologies across web and mobile platforms. My commitment to staying abreast of industry trends and my passion for leveraging technology to solve real-world problems make me a dynamic and impactful technical professional.
Full Stack Web Developer
Meltwater – Full Stack Web Developer
Ibexlabs – Full Stack Web Developer
Mobikasa – Full Stack Web Developer
Bidchat – Full Stack Web Developer
Grassdoor – Full Stack Web Developer
Turing – Full Stack Web Developer
Wealthshare – Full Stack Web Developer
Heim – Full Stack Web Developer
Reactive Burger
Node.js
React
MongoDB
HTML5
CSS3
JavaScript
Python
AWS (Amazon Web Services)
Express.js
MySQL
REST API
Next.js
Git
GitHub
GitLab
GraphQL
Swagger
Postman
AWS
EC2
RDS
S3
Lambda
DynamoDB
API Gateway
Google Cloud Platform
Heroku
Azure
Bitbucket
This is to certify that Aditya Singh worked with Mobikasa as a Web Wizard.
During his tenure, we found him to be an honest and intelligent professional. His consistent attendance and
his performance on assigned tasks were praiseworthy, and his character and conduct were satisfactory. We are confident
that he has the capabilities to be an asset to any organization.
We at Mobikasa wish him all the best in his future endeavors.
Marshall Badri here. I have been a full-stack web developer for quite some time now. I've had the privilege of working with some of the best web technologies out there and building web applications across a multitude of categories. I have worked in the capacity of a technical lead, a DevOps engineer, an architect, an individual contributor, and a developer, building end-to-end applications on both the front end and the back end. That's a bit about myself.
Yeah. So I can give you my approach. Let's say I have a MongoDB cluster and a Node.js server already running; if we're starting from scratch, I'll first install MongoDB on each instance. Once it is up and running, I'll configure the replica set itself, since we're dealing with data replication: I set the replSetName in the configuration, run rs.initiate() with the replica set ID and the list of members, and verify that replication is actually happening. Then I'll configure the Node.js application to connect to this replica set. In the MongoDB connection string, the URI, you list the member hosts and ports, and you can pass the replicaSet option with the replica set's name as its value. From there I can use options such as retryWrites, or reconnectTries and reconnectInterval in older Mongoose versions, so that if there are connection drops the driver retries and the connection is maintained. This setup provides high availability pretty much inherently.
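A minimal sketch of the connection-string step described above. The host addresses, database name, and replica-set name are illustrative assumptions; with the official driver, the resulting URI would be handed to MongoClient.connect().

```javascript
// Build a MongoDB replica-set URI of the shape discussed above.
// Listing every member lets the driver fail over if the primary drops.
function buildReplicaSetUri({ hosts, db, replicaSet }) {
  const hostList = hosts.join(',');
  // retryWrites asks the driver to retry once on transient connection errors.
  return `mongodb://${hostList}/${db}?replicaSet=${replicaSet}&retryWrites=true`;
}

// Hypothetical three-member replica set.
const uri = buildReplicaSetUri({
  hosts: ['10.0.0.1:27017', '10.0.0.2:27017', '10.0.0.3:27017'],
  db: 'appdb',
  replicaSet: 'rs0',
});
```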
Yeah. So what I would do in this case: let's say we have a React application and a server-side Node.js back end, and I'm using Axios for client-server communication; the client sends the request, the server sends back a response. I would set up an Axios instance with default headers and a base URL, the route to which requests are sent, and ensure we're sending the required authorization headers where needed, such as bearer tokens. This gives you secure API calls in a detached manner, because REST APIs are stateless. On the back end, the token can be extracted and validated, and the request can move forward. There are more headers that can be set, and this also depends on the back end, because the back end has to support those headers. But this is one example, for a React front end and a Node.js back end, say an Express API; there are many ways to do it, and this is one of them.
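A plain-JavaScript sketch of that client setup, mirroring what axios.create would do, so it runs without the Axios dependency. The base URL and token getter are assumptions for illustration.

```javascript
// Factory producing request options with a base URL and bearer-token header,
// analogous to configuring an Axios instance with defaults.
function createApiClient({ baseURL, getToken }) {
  return {
    buildRequest(path, { headers = {}, ...rest } = {}) {
      return {
        url: `${baseURL}${path}`,
        headers: {
          'Content-Type': 'application/json',
          // The bearer token lets the stateless REST back end validate each call.
          Authorization: `Bearer ${getToken()}`,
          ...headers,
        },
        ...rest,
      };
    },
  };
}

// Hypothetical usage.
const api = createApiClient({
  baseURL: 'https://api.example.com',
  getToken: () => 'demo-token',
});
const profileReq = api.buildRequest('/profile');
```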
There are various ways to deal with it. One is spawning child processes, because that's where you utilize the CPU's parallelism to the maximum. Another is the built-in cluster module, so you basically use clusterization. It also depends on the kind of situation you're dealing with. You can also use the os module to find the number of CPU cores you have and utilize them fully: with the cluster module you can use properties like isMaster, or isPrimary in newer Node versions, to identify whether you're in the master process, then fork worker processes and run work in parallel. That's certainly a way to do it.
I would say both approaches have their own pros and cons; I don't think there is one size fits all. In many cases, Mongoose would be a very good option, the simple reason being that you get a declarative and really robust approach out of the box. Mongoose is written by a team of very good developers, you get a great deal out of the box, and you will almost always be safe using it for schema validation. In other cases you might not be using Mongoose at all across an existing code base, and bringing it in just for schema validation might not make a lot of sense. So if you want automatic, out-of-the-box validation, go ahead with Mongoose; as an ODM it's a really good choice for the majority of cases. But if you need a certain flexibility, a kind of validation that Mongoose or other libraries don't provide, or third-party integrations with those libraries, then yes, a custom middleware might make sense.
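A minimal sketch of the custom-middleware alternative: an Express-style middleware that checks a request body against a tiny hand-rolled schema. The field names and rules are assumptions for illustration, not a specific library's API.

```javascript
// Returns Express-style middleware (req, res, next) that validates req.body
// against a map of field -> predicate, the "custom middleware" option above.
function validateBody(schema) {
  return (req, res, next) => {
    const errors = [];
    for (const [field, check] of Object.entries(schema)) {
      if (!check(req.body[field])) errors.push(`invalid ${field}`);
    }
    if (errors.length) return res.status(400).json({ errors });
    next();
  };
}

// Hypothetical user schema.
const userSchema = {
  email: (v) => typeof v === 'string' && /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(v),
  age: (v) => Number.isInteger(v) && v >= 0,
};
```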
Yeah, so horizontal scaling can be done with a lot of methods. One is clustering, which we discussed in one of the previous questions: you utilize multiple CPU cores and run processes in parallel, with a master process distributing work to child processes and collecting the results. One pitfall is that you'll want connection pooling here, because if you use clustering with MongoDB without pooling, you may overwhelm the database with many simultaneous connections. The second is sharding, which is a very good example of horizontal scaling; the potential pitfall there is the configuration of the shards, otherwise you might overwhelm individual instances. The third is load balancing, which is a pretty good way to achieve horizontal scaling: the load balancer takes incoming traffic, checks which instances are available based on its configuration, and distributes load accordingly, and there are very good options out there. Another way is containerization, though containers can grow very large, so you have to be careful about that. So there are a couple of things you can do, I would say.
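The load-balancing idea above can be reduced to a tiny round-robin selector of the kind a balancer uses to spread traffic across instances. The instance names are hypothetical.

```javascript
// Round-robin picker: each call returns the next instance, wrapping around.
function roundRobin(instances) {
  let i = 0;
  return () => instances[i++ % instances.length];
}

// Hypothetical pool of three app servers.
const pick = roundRobin(['app-1', 'app-2', 'app-3']);
```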
So I have this example in which the Mongoose library is imported, and a model's findOne method is used to find one person by email, with a callback checking whether the user is found. When it comes to anti-patterns, yes, there are a few potential anti-patterns and areas of improvement that I see, even though this piece of code is not very intricate. One is that you're using callbacks: you can very quickly reach callback hell, and I would advise against that; if you can use promises, you should. Error handling could be more robust and comprehensive: for instance, in User.findOne, if you get an error you're just returning the callback with the error, whereas you might want to do a little more and log it. There is also a lack of input validation, so what kind of email you're getting is never checked, and no error types are distinguished. If you bring in promises, try/catch, input validation, and so on, I can see this particular piece of code improving.
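The improvements suggested above, sketched with async/await: input validation, promises instead of callbacks, and try/catch with logging. The model is injected so the sketch runs without a real Mongoose connection; the function name is a hypothetical one.

```javascript
// Promise-based rewrite of the callback-style findOne example discussed above.
async function findUserByEmail(model, email) {
  // Validate input before touching the database.
  if (typeof email !== 'string' || !email.includes('@')) {
    throw new TypeError('a valid email is required');
  }
  try {
    // With Mongoose, model would be e.g. the User model; await replaces the callback.
    return await model.findOne({ email });
  } catch (err) {
    // Log as well as propagate, rather than silently passing the error along.
    console.error('findUserByEmail failed:', err.message);
    throw err;
  }
}
```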
Yep. So you can do a couple of things. If you are unmounting the component, you need to clear the state. You might want to call another API that deactivates the user's session. You might want to clear temporary storage, like localStorage, if it's being used: for instance, you fetched the user, but now you're unmounting, the user profile should no longer be available, and it's a protected route, so you might want to clear out those credentials and the cookies. It depends on the situation you're in, but you basically want to perform these cleanup steps in componentWillUnmount, so that your components and your functionality stay consistent with the expected results, meaning the actual outcome matches the expected outcome. And, yeah, I think that should help a lot.
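The cleanup steps above, framed as a plain function so the sketch runs outside React; in a component it would be called from componentWillUnmount. The storage object stands in for localStorage, and the key names are assumptions.

```javascript
// Drop session artifacts on unmount so a protected route cannot be re-entered
// with stale credentials. Non-session keys (e.g. UI preferences) are kept.
function cleanupSession(storage, cookies) {
  delete storage.authToken;
  delete storage.userProfile;
  cookies.clear(); // e.g. expire session cookies
}
```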
So I would say this is another scenario that really depends on the situation; again, it's not one size fits all. You have these different encryption and hashing algorithms: SHA-256, AES-256, and so on. I would propose AES for encryption, with SHA-256 for hashing. As for how to implement it, Node.js has the built-in crypto module, through which you can build hashes and ciphers; if you don't want to use crypto, you can use other well-established libraries such as sodium-native or libsodium-wrappers. It also depends on what kind of data you have; sometimes you want to hash the incoming payload, or certain parts of the incoming event. Then you can use approaches like the Diffie-Hellman key exchange so that only the source and the destination know the key and can decrypt the payload: the source encrypts it and sends it to the destination, and only the destination should be able to decrypt it; if there are listeners in between, they should not be able to, which is the whole idea behind end-to-end encryption. The key exchange should happen over a secure connection, SSL/TLS, so I'm referring to HTTPS. For key generation and distribution, you can use HSMs, hardware security modules. Now it depends on what kind of compliance you're under, whether HIPAA, GDPR, and so on, but I think you can pretty much maintain that through the methodology I shared here, and in a really well-defined manner at that.
Yeah. So, best practices to follow if you want to secure REST API endpoints: let's say you have a Lambda mapped to an API Gateway, and API Gateway gives you an endpoint. First of all, you might want to set it up so that only certain types of requests are allowed. Then you may want to expect some kind of token, say a JWT bearer token, with the secret key set up, secured, and ideally rotated. The headers should be in a particular shape, and the request should only go through if they are all present and valid; the token needs to be validated for each request, and you may have a middleware for that. Then you may put up a shield such as a WAF to ensure the Lambda is not being overwhelmed by DDoS attacks and so on. Also, the Lambda's execution role should follow the IAM principle of least privilege. There are various other ways to do it, and I can certainly elaborate, but these are some of them.
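A sketch of the per-request token-validation middleware described above. The verify function is injected; in practice it would wrap something like a JWT library's verification against the rotated secret.

```javascript
// Express-style middleware: reject the request unless a valid Bearer token is present.
function requireBearer(verify) {
  return (req, res, next) => {
    const header = req.headers.authorization || '';
    const [scheme, token] = header.split(' ');
    if (scheme !== 'Bearer' || !token || !verify(token)) {
      return res.status(401).json({ error: 'unauthorized' });
    }
    next(); // token valid: let the request move forward
  };
}
```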
Okay. So we have rate limiting, and we want to make sure we're considering caching and retry policies. That's interesting. What I would do: first of all, caching is a really good strategy here. You can use packages like React Native AsyncStorage, and there are other caching libraries for React Native for caching the data. It's important to understand that caching should only happen if the data is not time-critical; if the data really needs to stay fresh, caching may not be the best option. So whenever an API request is made, the cache is checked first; if the output is available, it is returned. If the cache is empty or has been invalidated after a particular timeout, and that TTL for the cache should be really carefully chosen, then the API endpoint is accessed, the request is made, and the returned data is cached. The other piece is exponential backoff: if someone is hitting the APIs too frequently, you increase the delay between subsequent requests, and you may make that apply only to failed requests, because when requests are going through you don't want to disturb them. So this is basically one way to combine rate limiting, exponential backoff, and retries within a particular duration. There are other ways to do it as well, but these are a few of them.
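The two pieces discussed above sketched as small helpers: a TTL cache check and exponential backoff delays for retrying failed, rate-limited requests. The base delay, cap, and TTL values are illustrative assumptions.

```javascript
// Exponential backoff: delay doubles per failed attempt, capped to avoid unbounded waits.
function backoffDelay(attempt, baseMs = 500, capMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// Check the cache first; only hit the network (fetcher) when the entry is
// missing or older than the TTL. `now` is injectable to keep the sketch testable.
function cachedFetch(cache, key, ttlMs, fetcher, now = Date.now()) {
  const hit = cache.get(key);
  if (hit && now - hit.at < ttlMs) return hit.value; // fresh: skip the request
  const value = fetcher();
  cache.set(key, { value, at: now });
  return value;
}
```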