Highly skilled, results-driven Full-Stack Engineer with 3+ years of production experience building robust backend servers in programming languages such as Node.js and Python, smooth and responsive frontends with React.js and Next.js, and cross-platform Android/iOS applications with React Native.
Proven expertise in JavaScript and TypeScript, collaborating seamlessly with teams to develop scalable APIs. Adept at database schema design, leveraging SQL and NoSQL databases.
Seeking to contribute technical proficiency and innovative problem-solving skills to a dynamic IT team.
SDE-II
TatvaCare, SDE-II (Full-Stack Software Development Engineer)
FamilyHiveZ, SDE-I, Full-Stack Software Engineer
Jasper Colin, Associate Software Engineer
Alltrak
VS Code
JetBrains
Azure
JavaScript
TypeScript
Figma
Okay. So I have been working in the software development field for approximately four years now. Over my work experience, I have worked across multiple industries, like healthcare, entertainment, and hospitality. In my most recent job, I built property-onboarding and property-booking web, Android, and iOS applications from scratch, using Node.js on the backend, Next.js on the frontend, and AWS as the cloud hosting environment. We also put up a couple of microservices for different concerns, and we handled the payment integration. And, yeah, that should be all.
How would you handle distributed transactions in a microservices architecture? To solve the distributed-transaction problem in a microservices architecture, I would use event-driven communication and the saga pattern. Instead of a single ACID transaction, the saga pattern breaks the transaction into a series of local transactions; each service executes its local transaction on its own and publishes an event. If any failure occurs, compensating transactions are triggered to undo the previous steps. There are two variants: choreography, which is event-based, and orchestration, which uses a central coordinator. Idempotent operations and a retry mechanism would be implemented to ensure safety during retries, so every API should behave idempotently. I would also use a messaging tool such as Kafka or RabbitMQ, based on the requirements, to persist events and guarantee delivery with retries. And I would use distributed tracing and logging, with tools like Zipkin and OpenTelemetry, which will help me track a transaction across services for debugging and root-cause analysis. So in brief, by combining the saga pattern, reliable messaging, and observability, I can handle distributed transactions reliably in microservices without compromising availability.
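The orchestration variant described above can be sketched in a few lines of TypeScript; the step names (reserveStock, chargePayment) are illustrative, not tied to any real service or library:

```typescript
// Minimal orchestration-style saga: each step has a local action and a
// compensating action; on failure, completed steps are undone in reverse.
type SagaStep = {
  name: string;
  execute: () => boolean;  // returns false to signal failure
  compensate: () => void;  // undoes the local transaction
};

function runSaga(steps: SagaStep[]): { ok: boolean; compensated: string[] } {
  const done: SagaStep[] = [];
  for (const step of steps) {
    if (step.execute()) {
      done.push(step);
    } else {
      // Failure: trigger compensating transactions in reverse order.
      const compensated = done.reverse().map((s) => {
        s.compensate();
        return s.name;
      });
      return { ok: false, compensated };
    }
  }
  return { ok: true, compensated: [] };
}

// Hypothetical order flow: reserving stock succeeds, charging payment fails,
// so the stock reservation is released again.
let stockReserved = false;
const result = runSaga([
  {
    name: "reserveStock",
    execute: () => { stockReserved = true; return true; },
    compensate: () => { stockReserved = false; },
  },
  { name: "chargePayment", execute: () => false, compensate: () => {} },
]);
```

In a real system each `execute` would be a network call and the events would flow through the message broker; the compensation bookkeeping is the part the sketch shows.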
What would you do to monitor and optimize the performance of a Node.js application? To monitor and optimize a Node.js application, I would follow a structured approach. I would use monitoring tools like Datadog, Prometheus, or Grafana to watch CPU utilization, memory utilization, event-loop lag, and request throughput in real time. I would integrate Winston or Pino with a centralized logging system like CloudWatch Logs, and for alerts I would set thresholds for memory leaks or high latency using tools like Grafana alerting or PagerDuty. For code-level optimization, I would use Node's built-in tools, like the inspector, and look for blocking code (for example, readFileSync), unhandled async operations, memory leaks, and missing error handling. For database optimization, I would reduce query times by indexing MongoDB collections properly and using lean queries, and I would use connection pooling and batched lookups to avoid the N+1 problem with MongoDB. For frequently accessed data that does not change very often, I would use an in-memory store like Redis to reduce the load on the server and the database. For load testing, I would use tools like JMeter to simulate real-world traffic and identify performance bottlenecks. So by combining these practices, I would ensure that my Node.js application is scalable and optimized, and it would help me keep track of its performance.
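The in-memory caching idea for frequently read, rarely changing data can be sketched like this; the Map stands in for Redis, and the names are hypothetical:

```typescript
// Tiny TTL cache: callers check the cache first and only fall through to the
// (simulated) database on a miss, mimicking a Redis read-through pattern.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();
  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const hit = this.store.get(key);
    if (!hit) return undefined;
    if (Date.now() > hit.expiresAt) {
      this.store.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return hit.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

let dbReads = 0; // counts round trips to the "database"
function loadUserFromDb(id: string): string {
  dbReads++;
  return `user:${id}`;
}

const cache = new TtlCache<string>(60_000); // 60s TTL, arbitrary for the sketch
function getUser(id: string): string {
  const cached = cache.get(id);
  if (cached !== undefined) return cached;
  const fresh = loadUserFromDb(id);
  cache.set(id, fresh);
  return fresh;
}

getUser("42"); // miss: hits the database
getUser("42"); // hit: served from cache
```

The TTL is the knob that bounds staleness: the shorter it is, the fresher the data but the less load it absorbs.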
Illustrate a method to manage state consistency across distributed MongoDB instances. To manage state consistency across MongoDB instances, I would typically use MongoDB replica sets combined with appropriate read and write concerns. To talk a bit about replica sets: MongoDB supports automatic replication through replica sets, which consist of a primary node, where we perform the write operations, and secondary nodes, which are replicas of the primary and which we would mostly use for read operations. The secondary nodes sync through the oplog, which gives eventual consistency. For the write concern, to guarantee consistency, I would ensure that writes are acknowledged only after a majority of nodes confirm the operation (w: "majority"). For the read concern, I would ensure that reads reflect the most recently committed data (readConcern: "majority") and not stale data. This is a stricter setup for a distributed system, but it is required to prevent reading stale state after a failover or a network partition. Beyond that, we can also use sharding if we want: for a large-scale MongoDB deployment, I would shard the data. In that case, each shard would itself be a replica set, the mongos router would handle query routing and the config servers the metadata, and consistency within each shard would still be guaranteed by its replica set.
Now, for the application-level strategy, to handle edge cases like network partitioning or failover, I would implement retry logic for transient write errors: if an error occurs during a write, there would be a retry queue with exponential backoff and a maximum limit on retries. I would use idempotency keys to prevent duplicate writes during those retries. And for distributed updates, I would use versioning, i.e. a version field with optimistic concurrency control. So if I summarize, by combining MongoDB's native replication features with proper read/write concern configuration and application logic, I can ensure consistent state even across distributed MongoDB deployments.
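The version-field-with-retry idea can be sketched like this; the in-memory Map stands in for a MongoDB collection, and compareAndSet mimics a conditional updateOne on the version field:

```typescript
// Optimistic concurrency: an update only applies if the document's version
// still matches what the caller read; otherwise the caller re-reads and retries.
type Doc = { _id: string; balance: number; version: number };

const collection = new Map<string, Doc>();
collection.set("acct-1", { _id: "acct-1", balance: 100, version: 1 });

// Simulates `updateOne({_id, version}, {$set: {balance}, $inc: {version: 1}})`.
function compareAndSet(id: string, expectedVersion: number, newBalance: number): boolean {
  const doc = collection.get(id);
  if (!doc || doc.version !== expectedVersion) return false; // stale read: lost the race
  collection.set(id, { ...doc, balance: newBalance, version: doc.version + 1 });
  return true;
}

function withdrawWithRetry(id: string, amount: number, maxRetries = 3): boolean {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const doc = collection.get(id);
    if (!doc || doc.balance < amount) return false;
    if (compareAndSet(id, doc.version, doc.balance - amount)) return true;
    // In real code we would sleep ~2**attempt * baseDelay here (exponential backoff).
  }
  return false; // gave up after maxRetries conflicting writes
}

const ok = withdrawWithRetry("acct-1", 30);
```

Because the version check and the write happen atomically on the database side, concurrent writers cannot silently overwrite each other; the loser simply retries with fresh data.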
How would you implement a distributed locking mechanism across microservices to coordinate tasks? To implement a distributed locking mechanism in a microservices architecture for coordination tasks, the most common approach that comes to mind is utilizing a distributed data store like Redis and leveraging its atomic operations and expiry features to manage the locks. Additionally, the Redlock algorithm can be implemented for enhanced resilience against network partitions. We need to choose a proper distributed locking service: besides Redis there are other options, like ZooKeeper or etcd. I would prefer simplicity and speed, so I would use Redis in combination with the Redlock algorithm, which ensures safety and fault tolerance across multiple Redis nodes. How it would work in my mind: each microservice tries to acquire a lock using a unique value with a set time-to-live, i.e. SET key value NX PX ttl, and the lock is considered acquired only if a majority of Redis nodes confirm it. It also handles auto-expiry to avoid deadlocks, plus retries with jitter. Now, why Redis? Because it is fast, simple, and widely supported with Node.js clients like node-redis or ioredis. It works well for short-lived locks, for leader election, task scheduling, et cetera. So in summary, my go-to approach would be Redis with the Redlock algorithm due to its balance of simplicity, performance, and resilience. For more complex coordination, I would choose ZooKeeper, and for a quick internal solution, even database-level locks work as a lightweight option.
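The single-node primitive behind that scheme (SET key token NX PX ttl, then release only if the token matches) can be sketched like this; a Map stands in for Redis, so this shows the semantics rather than a real client call:

```typescript
// Sketch of the locking primitive Redlock builds on: acquire with NX + TTL,
// release only if the caller's unique token still matches.
type Entry = { token: string; expiresAt: number };
const store = new Map<string, Entry>();

// Equivalent of `SET key token NX PX ttlMs` — acquire only if free or expired.
function acquire(key: string, token: string, ttlMs: number, now = Date.now()): boolean {
  const cur = store.get(key);
  if (cur && cur.expiresAt > now) return false; // lock held by another client
  store.set(key, { token, expiresAt: now + ttlMs });
  return true;
}

// Release is safe only when the token matches: this avoids deleting a lock
// that expired and was already re-acquired by another client. In real Redis
// this check-and-delete must be a Lua script so it runs atomically.
function release(key: string, token: string): boolean {
  const cur = store.get(key);
  if (!cur || cur.token !== token) return false;
  store.delete(key);
  return true;
}
```

Redlock then runs this acquire against several independent Redis nodes and only treats the lock as held when a majority succeed within a time budget.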
Apply the SOLID principles in TypeScript while creating a new service layer. So, while creating a new service layer in my TypeScript application, applying the SOLID principles would definitely help me write clean, scalable, and maintainable code. How would I do it? I would follow the Single Responsibility Principle: a class should have only one reason to change. So in TypeScript, I would split the logic into focused service classes; for example, a UserService would handle only user operations, while a UserValidator would handle only validation. I would use the Open/Closed Principle: software entities should be open for extension but closed for modification. For example, I would use interfaces and dependency injection to allow extensibility. I would also apply the Liskov Substitution Principle: derived classes must be substitutable for their base classes, so I would ensure that child classes implement the base behavior correctly. And I would use the Interface Segregation Principle: clients should not be forced to depend on methods they do not use, so I would create small, purpose-specific interfaces, and services can implement only what they need. Finally, the Dependency Inversion Principle: high-level modules should not depend on low-level modules, but on abstractions instead. So I would use abstractions for all the dependencies and inject them via the constructor; then I can easily swap, let's say, a MongoUserRepo with a PostgresUserRepo in tests or in production if I need to. So if I summarize, applying SOLID in a TypeScript service layer means using interfaces, classes, and composition to ensure separation of concerns, testability, and extensibility. It makes the services robust and easier to evolve and rework over time whenever needed.
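The dependency-inversion point can be sketched like this; the repository names and fields are illustrative:

```typescript
// Dependency inversion: UserService depends on the UserRepository abstraction,
// so storage backends can be swapped without touching the service.
interface UserRepository {
  findById(id: string): string | undefined;
}

// One concrete backend; a MongoUserRepo or PostgresUserRepo would implement
// the same interface and be injected the same way.
class InMemoryUserRepo implements UserRepository {
  private users = new Map<string, string>([["1", "Alice"]]);
  findById(id: string): string | undefined {
    return this.users.get(id);
  }
}

class UserService {
  // The dependency is injected via the constructor, not constructed inside.
  constructor(private repo: UserRepository) {}

  greet(id: string): string {
    const name = this.repo.findById(id);
    return name ? `Hello, ${name}` : "User not found";
  }
}

const service = new UserService(new InMemoryUserRepo());
```

In tests, the same constructor takes a stub repository, which is exactly what makes the service layer testable without a database.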
Provide an approach to design a high-throughput event-driven system using AWS technologies. Okay, so to design a high-throughput event-driven system on AWS, I would follow a layered architecture to ensure scalability, reliability, and fault tolerance. First, an ingestion layer: Amazon API Gateway would receive the REST events and can handle millions of requests. For buffering and decoupling, I would use an event broker or message queue; since we are talking about AWS, I can use SNS for broadcasting to multiple consumers and Amazon SQS for decoupling the microservices at a high throughput rate. Basically, SQS can handle a nearly unlimited number of transactions per second with at-least-once delivery. For processing after that, I would use AWS Lambda for lightweight serverless event processing. For higher throughput I could consider something like ECS if the tasks are CPU-intensive, or, for streaming to destinations like S3 or Redshift, I could use Kinesis Data Firehose. For the storage layer specifically, I would use Amazon S3 for blob data, like images and so on, possibly with a CDN in front for quick delivery; Amazon DynamoDB for low-latency access if we are using a NoSQL database; or Amazon RDS for a relational database. And for monitoring, I would definitely use CloudWatch for real-time metrics and alerts, a DLQ on SQS and Lambda to capture failed events, and auto-scaling for ECS containers or parallel Lambda invocations. So let's take an example scenario here: consider an IoT system, where the devices publish data and the events are sent to Lambda or ECS for parsing.
The data would be stored in S3 or DynamoDB; we can use SNS for alerting admins when readings cross a threshold, and Redshift for querying the data later on. So all these things combined would allow millions of incoming events to be processed efficiently and reliably.
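The Lambda step in that IoT example could look roughly like this; the event types are declared locally as an assumption rather than imported from aws-lambda, and the threshold value is made up:

```typescript
// Shape of the Lambda entry point for the IoT example: the handler receives a
// batch of SQS records, parses each reading, and collects threshold breaches
// that would be fanned out via SNS.
type SqsRecord = { messageId: string; body: string };
type SqsEvent = { Records: SqsRecord[] };

const ALERT_THRESHOLD = 80; // hypothetical temperature limit

function handler(event: SqsEvent): { processed: number; alerts: string[] } {
  const alerts: string[] = [];
  for (const record of event.Records) {
    const reading = JSON.parse(record.body) as { deviceId: string; temp: number };
    // In the real system: persist to DynamoDB/S3 here, publish to SNS on breach.
    if (reading.temp > ALERT_THRESHOLD) alerts.push(reading.deviceId);
  }
  return { processed: event.Records.length, alerts };
}

const out = handler({
  Records: [
    { messageId: "1", body: JSON.stringify({ deviceId: "sensor-a", temp: 85 }) },
    { messageId: "2", body: JSON.stringify({ deviceId: "sensor-b", temp: 40 }) },
  ],
});
```

Keeping the handler a pure function of the event, with side effects isolated, is also what makes this layer easy to unit test.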
What measures would you implement to secure sensitive data in MongoDB? To secure sensitive data in MongoDB, there are a couple of things I would implement: encryption, authentication, and access control. For encryption at rest, I would have two options here: use MongoDB Enterprise's encrypted storage engine or file-system-level encryption, or simply embed the encryption logic in my application's code base. I would also enforce SSL/TLS encryption in transit between the clients and the MongoDB instances. I would then enable authentication and role-based access control, so only particular people can access a particular type of data. Field-level and application-level encryption would also be added for highly sensitive fields, for example passwords; encrypting at the application layer before the data is even written always gives me another layer of security. I would secure network access by running my MongoDB instance inside a private network, or I can restrict access to specific IPs using firewall security rules. I would use auditing tools to log any access attempts, including illegal or repeated failed attempts, and I would set up alerts for any suspicious activity. Also, I would take full backups of the encrypted data and store them securely in a separate place.
So if I summarize: to secure MongoDB, I would combine encryption with authentication and role-based access control, and I would isolate the network. I would layer my encryption one level at a time: the application layer, the network layer, then the data (or client-side) layer for that matter. These paired defenses would protect sensitive data both in transit and at rest by ensuring only authorized users can access it.
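The application-layer field encryption can be sketched with Node's built-in crypto module; key handling (a KMS, rotation) is out of scope here, so the inline key is purely illustrative:

```typescript
// Application-level field encryption sketch: encrypt a sensitive field with
// AES-256-GCM before it is written, so the database only ever sees ciphertext.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = randomBytes(32); // in production: loaded from a KMS, never generated inline

function encryptField(plaintext: string): string {
  const iv = randomBytes(12); // GCM-recommended 96-bit nonce, unique per encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Pack iv + auth tag + ciphertext so the stored field is self-describing.
  return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString("base64");
}

function decryptField(encoded: string): string {
  const buf = Buffer.from(encoded, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28); // GCM auth tag is 16 bytes
  const ct = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // tampered ciphertext will fail authentication
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}

const stored = encryptField("4111-1111-1111-1111"); // what the database would persist
```

GCM gives authenticated encryption, so a tampered field fails to decrypt instead of silently returning garbage.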
Your application experiences a spike in traffic, causing performance degradation. How would you identify and resolve the issue? Okay, so to identify the bottleneck, I would use something like Datadog or PM2 to monitor response times, memory usage, CPU load, and throughput. I would add structured logs using Winston or something like that, and distributed tracing using something like OpenTelemetry to identify slow endpoints and database calls. I would use Node's built-in inspect and prof flags to profile the application and identify any synchronous blocking code or CPU-heavy tasks. Then I would resolve the bottleneck, starting by optimizing the code: I would look for any blocking calls, like large loops, readFileSync, or synchronous database work, and I would replace them with async versions or batch processing. I would use clustering: as Node.js runs on a single thread, I would use the cluster module or a tool like PM2 to spawn multiple instances across the CPU cores. I would implement load balancing to distribute traffic, using something like AWS ELB in front of multiple instances. I would ensure the right indexes exist for frequently accessed data, for example user data, and batch queries to avoid any N+1 query problems. I would offload expensive operations to job queues, for example putting them in a queue and returning a quick response. And to plan for scalability, I would make sure the app is stateless and uses shared sessions to allow scaling across multiple machines, basically implementing horizontal scaling or auto-scaling on a cloud platform like AWS EC2 to scale instances based on the traffic spikes.
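The batching fix for the N+1 problem can be sketched like this; the Maps stand in for a database, and the names are hypothetical:

```typescript
// Batching to avoid N+1 lookups: instead of one query per order's user,
// collect the distinct user IDs and resolve them in a single $in-style query.
type Order = { id: string; userId: string };

const usersTable = new Map<string, string>([["u1", "Alice"], ["u2", "Bob"]]);
let queryCount = 0; // counts round trips to the "database"

// Simulates `users.find({_id: {$in: ids}})` — one query for many IDs.
function findUsersByIds(ids: string[]): Map<string, string> {
  queryCount++;
  return new Map(ids.map((id) => [id, usersTable.get(id) ?? "unknown"]));
}

function enrichOrders(orders: Order[]): { id: string; userName: string }[] {
  const ids = Array.from(new Set(orders.map((o) => o.userId)));
  const users = findUsersByIds(ids); // single round trip, not one per order
  return orders.map((o) => ({ id: o.id, userName: users.get(o.userId)! }));
}

const enriched = enrichOrders([
  { id: "o1", userId: "u1" },
  { id: "o2", userId: "u2" },
  { id: "o3", userId: "u1" },
]);
```

Under a traffic spike, this turns N database round trips per request into one, which is usually the single biggest win for query-bound endpoints.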
What is your process for designing a resilient and scalable messaging system with AWS SQS for microservice communication? Okay. So SQS helps decouple microservices by simply enabling asynchronous message passing, and it provides durability and availability out of the box. To design for resilience, I would use a dead-letter queue to handle any failed messages: I would set a maximum receive count and redirect failed messages to my DLQ for later analysis. And I would enable retries with exponential backoff to avoid overwhelming downstream services during a spike of failures. Idempotency in the consumers would ensure that redelivered messages do not cause duplicate processing, and I would tune the visibility timeout to allow consumers enough time to process a message without it being redelivered. For scalability, I would use horizontal scaling: I would spin up multiple consumer instances behind an auto-scaling group, or go serverless, for example with AWS Lambda, to scale based on the queue volume and not unnecessarily. I would use FIFO queues, first in, first out, where ordering and deduplication are guaranteed, and standard queues for high throughput without ordering needs. Batch processing, pulling multiple messages at once, would simply reduce the overhead. To monitor and maintain, I would use CloudWatch, logging and tracing, and alerting on things like DLQ depth or queue backlog. For security, I would use IAM policies to lock down the producer and consumer access strictly, I would enable server-side encryption for the message data, and I would use message filtering: when using an SNS fan-out to multiple queues, I would apply filters to reduce any unnecessary traffic.
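The idempotency-plus-DLQ behavior can be sketched like this; it mimics SQS semantics in memory rather than calling the real API, and the names are illustrative:

```typescript
// Idempotent consumer with a DLQ sketch: a message is processed at most once
// (dedup by messageId), and after maxReceives failed attempts it is moved to
// a dead-letter list instead of being retried forever.
type Message = { messageId: string; body: string };

function makeConsumer(maxReceives: number, process: (body: string) => void) {
  const seen = new Set<string>();             // idempotency: processed IDs
  const receives = new Map<string, number>(); // per-message receive count
  const dlq: Message[] = [];

  function receive(msg: Message): "ok" | "duplicate" | "retry" | "dlq" {
    if (seen.has(msg.messageId)) return "duplicate"; // at-least-once redelivery
    const count = (receives.get(msg.messageId) ?? 0) + 1;
    receives.set(msg.messageId, count);
    try {
      process(msg.body);
      seen.add(msg.messageId);
      return "ok";
    } catch {
      if (count >= maxReceives) {
        dlq.push(msg); // poison message: park it for later analysis
        return "dlq";
      }
      return "retry"; // in SQS, the message reappears after the visibility timeout
    }
  }
  return { receive, dlq };
}

const consumer = makeConsumer(2, (body) => {
  if (body === "poison") throw new Error("cannot process");
});
```

In the real setup, the receive count and DLQ redirect are handled by SQS itself via the queue's redrive policy; only the dedup set lives in the consumer.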
How would you optimize document schema design in MongoDB, and what would that look like for a rapidly evolving application? Okay, so if I already know that the application is going to evolve rapidly, I would definitely take some precautions from the design stage itself. I would embrace the schema flexibility, but without letting it become a mess: MongoDB is basically schemaless, which is great for evolving applications, but I would use schema validation tools to enforce a baseline structure without being rigid, and I would add a versioning field, like schemaVersion, inside the documents to track schema changes over time. For smooth iterations, I would design for access patterns: I would structure my documents based on how they are queried, denormalizing where it helps; as you know, sometimes denormalization is better than normalization, and we should not always go by the book. For example, if an order is always queried with customer data, I would embed the relevant customer fields inside the order document. I would use embedding for one-to-few relationships, and referencing, with $lookup for manual joins, for one-to-many or many-to-many. I would keep documents within a limit: I would not let a document grow huge, staying well under MongoDB's 16 MB document cap. For frequently updated sub-documents, like comments or logs, I would consider separating them into their own collection to reduce write amplification. I would use indexing for performance, adding compound or partial indexes based on the query patterns, and TTL indexes for expiring data automatically. And since wildcard and multikey indexes can cause index bloat, I would keep an eye on those as well.
For schema evolution, I would use migration scripts with Mongoose to make the data transitions smooth. I would prefer backward-compatible changes: add optional fields, and do not remove or rename existing ones abruptly, as that can cause unknown errors or issues. And at last, I would use monitoring: tools like MongoDB Atlas or mongotop to monitor query and schema performance, and I would automate schema-drift detection with schema-testing tools. All of that would help me keep a MongoDB database optimized when I know it is going to evolve rapidly over time.
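The schemaVersion idea can be sketched as a lazy, on-read migration; the document shapes here are illustrative:

```typescript
// Lazy migration sketch: documents carry a schemaVersion field, and a chain of
// upgrade functions brings any old document up to the current shape on read.
type DocV1 = { schemaVersion: 1; name: string };
type DocV2 = { schemaVersion: 2; firstName: string; lastName: string };

// Each migration upgrades exactly one version step, so new versions only
// require appending one more function to the chain.
function v1ToV2(doc: DocV1): DocV2 {
  const [firstName, ...rest] = doc.name.split(" ");
  return { schemaVersion: 2, firstName, lastName: rest.join(" ") };
}

function migrate(doc: DocV1 | DocV2): DocV2 {
  // Old documents are upgraded the first time they are read; the upgraded
  // shape can then be written back so the collection converges over time.
  return doc.schemaVersion === 1 ? v1ToV2(doc) : doc;
}

const legacy: DocV1 = { schemaVersion: 1, name: "Ada Lovelace" };
const current = migrate(legacy);
```

This is the backward-compatible path: readers handle both shapes during the transition, and no big-bang migration script has to lock the collection.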