
Backend software engineering professional with 5+ years of hands-on experience developing back-end frameworks for highly scalable applications.
Engineering Lead, FlexiLoans
Senior Software Engineer - Backend, Fitelo
Senior Software Engineer - Backend Consultant, Freelance
Software Engineer - Backend, SearchUnify
Senior Software Engineer - Backend, FreeCharge (Axis Bank)
Software Engineer - Backend, FreeCharge
Backend Developer (NodeJS), Code Brew Labs
Software Engineer - Backend, Code Brew Labs

Skills: Node.js, Express.js, MongoDB, Redis, Kafka, Elasticsearch, Microservices, TypeScript, TypeORM, PostgreSQL
Since graduating in 2018, I have been working as a backend developer. I have experience with backend frameworks like Spring Boot in Java, and also with Node.js and Express.js; I have worked with Nest.js as well, using TypeScript. My jobs have been mainly backend-heavy, and I have used this skill set to contribute to organizations in service-based domains, B2B product domains, a FinTech environment, and the wellness domain.
How would you tackle connection pooling in a PostgreSQL-backed Java application at scale? For a Java application using PostgreSQL at scale, I would use a pooling library like HikariCP. Configuration parameters such as maximum pool size, connection timeout, and idle timeout should be optimized based on the application's requirements and workload. Additionally, we can monitor connection usage, tune pool settings dynamically, and implement a retry mechanism for database connections, which helps manage scalability efficiently.
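A minimal sketch of that HikariCP setup; the JDBC URL, credentials, and pool numbers are placeholder assumptions to be tuned for the actual workload:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public final class PoolFactory {
        public static HikariDataSource createPool() {
            HikariConfig config = new HikariConfig();
            // Placeholder connection details; adjust for your environment.
            config.setJdbcUrl("jdbc:postgresql://localhost:5432/appdb");
            config.setUsername("app_user");
            config.setPassword("secret");
            // Size the pool for the workload rather than "as large as possible".
            config.setMaximumPoolSize(20);
            config.setMinimumIdle(5);
            config.setConnectionTimeout(30_000); // ms to wait for a free connection
            config.setIdleTimeout(600_000);      // ms before an idle connection is retired
            config.setMaxLifetime(1_800_000);    // ms before a connection is recycled
            return new HikariDataSource(config);
        }
    }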
How can you create a change history of a table that is written to by many systems, some of which are not even known? Creating a change history for such a table is challenging. One approach is to implement a database trigger that captures changes made to the table and logs them into a separate audit table. The trigger can record the timestamp, the user making the change, and the nature of the change. Additionally, we can employ metadata management tools and data lineage tracking to trace changes back to their source, even when some sources are unknown.
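A minimal sketch of the trigger approach over JDBC, assuming PostgreSQL 11+ and a hypothetical orders table; in practice this DDL would usually live in a migration script rather than application code:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public final class AuditSetup {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/appdb", "app_user", "secret");
                 Statement st = conn.createStatement()) {
                // Audit table: one row per change, whichever system wrote it.
                st.execute("CREATE TABLE IF NOT EXISTS orders_audit ("
                        + " audit_id BIGSERIAL PRIMARY KEY,"
                        + " changed_at TIMESTAMPTZ NOT NULL DEFAULT now(),"
                        + " changed_by TEXT NOT NULL DEFAULT current_user,"
                        + " operation TEXT NOT NULL,"
                        + " old_row JSONB,"
                        + " new_row JSONB)");
                // Trigger function records the operation and row images; it fires
                // for every writer, including systems we do not know about.
                st.execute("CREATE OR REPLACE FUNCTION audit_orders() RETURNS trigger AS $$"
                        + " BEGIN"
                        + "   IF TG_OP = 'INSERT' THEN"
                        + "     INSERT INTO orders_audit(operation, new_row)"
                        + "       VALUES (TG_OP, to_jsonb(NEW)); RETURN NEW;"
                        + "   ELSIF TG_OP = 'UPDATE' THEN"
                        + "     INSERT INTO orders_audit(operation, old_row, new_row)"
                        + "       VALUES (TG_OP, to_jsonb(OLD), to_jsonb(NEW)); RETURN NEW;"
                        + "   ELSE"
                        + "     INSERT INTO orders_audit(operation, old_row)"
                        + "       VALUES (TG_OP, to_jsonb(OLD)); RETURN OLD;"
                        + "   END IF;"
                        + " END $$ LANGUAGE plpgsql");
                st.execute("DROP TRIGGER IF EXISTS orders_audit_trg ON orders");
                st.execute("CREATE TRIGGER orders_audit_trg"
                        + " AFTER INSERT OR UPDATE OR DELETE ON orders"
                        + " FOR EACH ROW EXECUTE FUNCTION audit_orders()");
            }
        }
    }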
What are RANK and ROW_NUMBER in window functions? RANK and ROW_NUMBER are SQL window functions. RANK assigns a rank to each row within the partition; rows with the same value get the same rank, and the next rank is incremented by the number of tied rows, so ranks can have gaps. ROW_NUMBER assigns a unique sequential integer to each row within the partition, regardless of duplicate values.
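A small sketch illustrating the difference, assuming a hypothetical scores table and an already open JDBC Connection:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public final class WindowDemo {
        // With scores 100, 90, 90, 80 ordered descending:
        //   RANK()       -> 1, 2, 2, 4  (ties share a rank, leaving a gap)
        //   ROW_NUMBER() -> 1, 2, 3, 4  (always unique and sequential)
        static void printRanks(Connection conn) throws Exception {
            String sql = "SELECT player, score,"
                    + " RANK() OVER (ORDER BY score DESC) AS rnk,"
                    + " ROW_NUMBER() OVER (ORDER BY score DESC) AS row_num"
                    + " FROM scores";
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("%s score=%d rank=%d row_number=%d%n",
                            rs.getString("player"), rs.getInt("score"),
                            rs.getInt("rnk"), rs.getInt("row_num"));
                }
            }
        }
    }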
How would you monitor SQS or any other queue in production? To monitor SQS or any other queue in production, we can make use of CloudWatch metrics and CloudWatch logs. We can monitor queue length, tracking queue size and depth over time to ensure the backlog doesn't grow too large, and use a dead-letter queue to hold messages that could not be processed, monitoring its activity separately. We can also track consumer metrics to monitor the health of the queue's consumers, and use external third-party tools for queue monitoring.
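A minimal sketch of the queue-depth check, using the AWS SDK for Java v2; the queue URL is a placeholder, and in production the numbers would feed a CloudWatch alarm or dashboard rather than stdout:

    import software.amazon.awssdk.services.sqs.SqsClient;
    import software.amazon.awssdk.services.sqs.model.GetQueueAttributesRequest;
    import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

    import java.util.Map;

    public final class QueueDepthCheck {
        public static void main(String[] args) {
            String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue"; // placeholder
            try (SqsClient sqs = SqsClient.create()) {
                Map<QueueAttributeName, String> attrs = sqs.getQueueAttributes(
                        GetQueueAttributesRequest.builder()
                                .queueUrl(queueUrl)
                                .attributeNames(
                                        QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES,
                                        QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES_NOT_VISIBLE)
                                .build())
                        .attributes();
                int visible = Integer.parseInt(
                        attrs.get(QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES));
                int inFlight = Integer.parseInt(
                        attrs.get(QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES_NOT_VISIBLE));
                // A steadily growing backlog means consumers are falling behind.
                System.out.printf("backlog=%d in-flight=%d%n", visible, inFlight);
            }
        }
    }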
What is VACUUM and when is it used? In PostgreSQL, VACUUM is an operation that marks the space occupied by deleted or obsolete rows as available for reuse. It updates internal data structures to reflect the changes, reclaims disk space (VACUUM FULL compacts storage by removing obsolete data), and, when run with ANALYZE, updates statistics that help the query planner make better decisions. It is needed after heavy UPDATE or DELETE activity; the autovacuum daemon normally runs it automatically in the background.
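A small sketch of running it manually over JDBC against a placeholder orders table; note that VACUUM cannot run inside a transaction block, so autocommit must be on:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public final class VacuumJob {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/appdb", "app_user", "secret")) {
                conn.setAutoCommit(true); // VACUUM is not allowed inside a transaction
                try (Statement st = conn.createStatement()) {
                    // Reclaims dead-row space for reuse and refreshes planner statistics.
                    st.execute("VACUUM (ANALYZE) orders");
                }
            }
        }
    }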
Given Java code, identify what it does. The code uses maxEvents as the parameter for the maximum number of events that can occur, and maxEvents is set to 10. In cases where the number of sample events entered exceeds the limit, only events up to the limit are processed, so maxEvents acts as the upper bound that the code can reach.
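The code under discussion is not included in the transcript; the following is a purely hypothetical reconstruction of the kind of cap being described, with all names (MAX_EVENTS, processEvents) invented for illustration:

    public final class EventProcessor {
        private static final int MAX_EVENTS = 10; // upper bound on events per batch

        // Processes at most MAX_EVENTS events; any excess input is ignored.
        static int processEvents(int requestedEvents) {
            int accepted = Math.min(requestedEvents, MAX_EVENTS);
            for (int i = 0; i < accepted; i++) {
                // handle event i ...
            }
            return accepted; // MAX_EVENTS is the highest value reachable here
        }

        public static void main(String[] args) {
            System.out.println(processEvents(100)); // prints 10: input above the cap is clamped
        }
    }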
In a Spring Boot microservice architecture, how do you handle shared domain models? We can handle shared domain models by using shared libraries: creating libraries or modules containing the shared domain models, enums, and interfaces, which can be versioned. We can also define API contracts, use event-driven architecture, and use a service mesh. Applying domain-driven design to identify bounded contexts helps define boundaries around the domain models. Testing and validation also help: unit testing and integration testing, and contract tests can verify interactions between services and prevent regressions.
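A minimal sketch of the shared-library idea: a versioned contract class published as its own artifact. The names and the Maven coordinates in the comment are illustrative assumptions:

    // Published from a separate artifact, e.g. com.example:shared-domain:1.2.0,
    // that both the producer and consumer services depend on.
    package com.example.shared.domain;

    import java.math.BigDecimal;
    import java.time.Instant;

    // Immutable event shared across services; evolving it means cutting a new
    // library version so each consumer can upgrade on its own schedule.
    public record OrderCreatedEvent(
            String orderId,
            String customerId,
            BigDecimal amount,
            Instant createdAt) {
    }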
What approach would you take to refactor a monolithic Java application into microservices without causing system downtime? First, identify the boundaries: analyze the monolithic application, based on domain-driven design principles, to find components that can be decoupled and encapsulated into separate services. Then refactor the monolith into smaller modules, isolating functionality that can be extracted as microservices without affecting the overall behaviour of the system. Introduce an API gateway to route requests between the monolith and the newly created microservices. Use the strangler-fig pattern: deploy each new microservice alongside the monolith and gradually shift traffic, using techniques like blue-green deployment to minimize the risk of downtime. Finally, monitor and test, iterate and refine the microservice architecture based on feedback, and continuously evaluate the performance and maintainability of the system.
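A minimal sketch of the gateway routing for a strangler-fig migration, using Spring Cloud Gateway; the paths and host names are placeholder assumptions:

    import org.springframework.cloud.gateway.route.RouteLocator;
    import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class StranglerRoutes {

        @Bean
        public RouteLocator routes(RouteLocatorBuilder builder) {
            return builder.routes()
                    // Traffic for the already-extracted orders slice goes to the
                    // new microservice...
                    .route("orders-service", r -> r.path("/api/orders/**")
                            .uri("http://orders-service:8080"))
                    // ...everything else still falls through to the monolith
                    // until each slice has been strangled out.
                    .route("legacy-monolith", r -> r.path("/**")
                            .uri("http://legacy-monolith:8080"))
                    .build();
        }
    }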
How would you optimize the usage of AWS DynamoDB for a microservice with highly variable access patterns? Start with partition key design: choose an appropriate partition key that distributes the workload evenly across partitions. Use on-demand capacity mode, which automatically adjusts read and write capacity based on actual usage. Use DynamoDB Accelerator (DAX) to improve read performance by caching frequently accessed items, reducing the number of reads that hit the table directly. Use the time-to-live (TTL) feature to automatically delete expired items. Finally, continuously monitor DynamoDB performance, such as consumed capacity, throttling events, and latency, and use AWS CloudWatch alarms to trigger auto scaling or to optimize index configuration for the workload pattern.
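A small sketch of enabling the TTL feature mentioned above via the AWS SDK for Java v2; the table and attribute names are placeholders:

    import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
    import software.amazon.awssdk.services.dynamodb.model.TimeToLiveSpecification;
    import software.amazon.awssdk.services.dynamodb.model.UpdateTimeToLiveRequest;

    public final class EnableTtl {
        public static void main(String[] args) {
            try (DynamoDbClient dynamo = DynamoDbClient.create()) {
                // Items whose "expiresAt" attribute (epoch seconds) is in the past
                // are deleted automatically, so stale data stops consuming storage
                // and read capacity.
                dynamo.updateTimeToLive(UpdateTimeToLiveRequest.builder()
                        .tableName("sessions") // placeholder table name
                        .timeToLiveSpecification(TimeToLiveSpecification.builder()
                                .attributeName("expiresAt")
                                .enabled(true)
                                .build())
                        .build());
            }
        }
    }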
What approach would you take to diagnose and troubleshoot a memory leak in a Java-based microservice?