Vetted Talent

Samsul Hoque Choudhury

With over 12 years of experience in various roles, I have developed a versatile skill set that enables me to excel in a wide range of responsibilities. Throughout my career, I have honed my abilities in leadership, communication, problem-solving, and project management. My experience has equipped me with the expertise needed to navigate complex challenges, drive strategic initiatives, and deliver results that exceed expectations. I am passionate about continuous learning and growth, and I am committed to leveraging my skills to make a positive impact in any role I undertake.
  • Role

    Senior Software Engineer

  • Years of Experience

    12.6 years

  • Professional Portfolio

    View here

Skillsets

  • React
  • Load Testing
  • Microservices
  • Modular Monolith
  • MongoDB
  • MySQL
  • Node.js
  • OpenAPI
  • PostgreSQL
  • Postman
  • pytest
  • Python
  • RabbitMQ
  • Lambda
  • Redis
  • RESTful APIs
  • S3
  • Serverless
  • SNS
  • SQL
  • SQS
  • Swagger
  • TypeScript
  • Vue.js
  • XUnit
  • Entity Framework
  • Angular
  • API Gateway
  • ASP.NET
  • AWS
  • AWS DynamoDB
  • Bitbucket Pipelines
  • C#
  • CI/CD
  • CloudWatch
  • Dapper
  • Django REST Framework
  • Docker
  • .NET Core
  • Event-Driven Design
  • EventBridge
  • Express.js
  • FastAPI
  • Flask
  • Git
  • GitHub Actions
  • Integration Testing
  • Java
  • JavaScript
  • Jira

Vetted For

12 Skills
  • Roles & Skills
  • Results
  • Details
  • Technical Lead (AI Screening)
  • 76%
  • Skills assessed: Node.js, TypeScript, Jest, Cloud Services, APIs, Git, Jenkins, Java, Spring Boot, Python, third-party APIs, payment systems
  • Score: 68/90

Professional Summary

12.6 Years
  • Feb, 2025 - Present (1 yr 1 month)

    Senior Software Engineer

    Unient India
  • Mar, 2024 - Feb, 2025 (11 months)

    Senior Software Engineer (Backend)

    Blue Blaze Earth
  • Mar, 2021 - Mar, 2024 (3 yr)

    Senior Software Engineer

    rithmXO
  • Nov, 2017 - Mar, 2021 (3 yr 4 months)

    Technical Lead

    Microexcel
  • Jun, 2014 - Oct, 2017 (3 yr 4 months)

    Senior Full Stack Developer

    Innosolv Consultancy Services
  • Jun, 2013 - Jun, 2014 (1 yr)

    PHP Web Developer

    Aleoy Software
  • Sep, 2011 - May, 2013 (1 yr 8 months)

    Faculty (Dept of IT)

    R.K. Degree College

Applications & Tools Known

  • Visual Studio Code
  • WampServer
  • Jira
  • Postman
  • Git
  • Docker
  • Jenkins
  • Redis
  • .NET Framework
  • Flask
  • Express.js
  • MySQL
  • MSSQL
  • MongoDB
  • DynamoDB
  • PostgreSQL
  • GraphQL
  • Docker Swarm
  • SVN
  • ReactJS

Work History

12.6 Years

Senior Software Engineer

Unient India
Feb, 2025 - Present (1 yr 1 month)
    • Led the modernization of a logistics platform by refactoring a legacy .NET monolith into a cloud-ready, modular system using C#, .NET 6/8, and Python (Flask), enabling a more scalable and maintainable architecture.
    • Implemented event-driven microservices with serverless patterns, improving API response times by 40% and enabling asynchronous processing for port operations.
    • Delivered SaaS features as part of a product modernization project on AWS using CI/CD pipelines and Docker.
    • Designed PostgreSQL and MySQL schemas and optimized complex queries using indexes, materialized views, and stored procedures, improving data retrieval performance by 25%.
    • Migrated legacy SOAP services to RESTful APIs, enabling seamless third-party integrations across the logistics ecosystem.
    • Designed and deployed RESTful APIs and business logic integrating with third-party platforms and legacy infrastructure.
    • Created and documented OpenAPI-compliant (OAS3) API specifications for external and internal service integration.
    • Collaborated with cross-functional teams to gather API requirements and deliver cloud-native backend solutions under tight deadlines.
    • Integrated observability through AWS CloudWatch, DataDog, and custom log parsers in Python, reducing incident response time by 35%.
    • Applied SOLID principles and introduced domain-driven design and clean architecture practices, improving code maintainability and reusability.
    • Adhered to secure coding practices and actively participated in vulnerability reviews and security patching cycles.

Senior Software Engineer (Backend)

Blue Blaze Earth
Mar, 2024 - Feb, 2025 (11 months)
    • Led backend initiatives using Node.js (Express.js) to build performant microservices supporting sustainability analytics.
    • Contributed to transitioning backend components from Python/.NET to Node.js for performance-critical modules.
    • Improved backend performance by 35% by migrating critical components to .NET Core and optimizing service orchestration.
    • Delivered backend APIs using Node.js and FastAPI, orchestrating integrations with external APIs to ingest and normalize sustainability data.
    • Containerized services using Docker, streamlining development, testing, and deployment across environments.
    • Implemented event-driven workflows using AWS Lambda and EventBridge, supporting real-time processing and decoupled service interactions.
    • Built and deployed RESTful APIs and asynchronous queues to deliver sustainability insights and carbon tracking tools for enterprise clients.
    • Maintained a clean, well-documented codebase, applying best practices in modularization, testing, and fault tolerance.
    • Tuned PostgreSQL queries and schema designs to optimize performance in high-volume data environments, including optimized schemas for complex geospatial and carbon footprint data.
    • Collaborated on frontend integration using React, ensuring clean API contracts and seamless UI workflows without owning full-stack responsibilities.
    • Participated in backend incident handling and resolution, applying structured testing and root cause analysis before raising PRs.
    • Conducted internal API design reviews and ensured secure, performant endpoints for internal and partner-facing systems.

Senior Software Engineer

rithmXO
Mar, 2021 - Mar, 2024 (3 yr)
    • Led backend architecture discussions and implementation for a greenfield microservices project using C#/.NET, Node.js, and TypeScript hosted on AWS.
    • Built backend services using Node.js and TypeScript for a multi-tenant SaaS platform, ensuring extensibility and multi-region support.
    • Built custom internal tools and client-specific applications, accelerating delivery of bespoke solutions.
    • Wrote unit and integration tests, ensuring stable, low-defect deployments.
    • Designed scalable MySQL schemas and implemented query optimization for latency-sensitive endpoints.
    • Participated in frontend integration using Angular, ensuring smooth interaction between the UI and backend APIs without extensive involvement in UI logic.
    • Performed code reviews and mentored junior developers, improving team efficiency and knowledge sharing.
    • Led a team of 5 engineers, set engineering best practices, and facilitated sprint planning.
    • Drove backend code quality through debugging and detailed documentation processes.
    • Acted as the go-to person for architectural decisions and backend design discussions.
    • Interfaced with stakeholders to gather requirements and ensure product alignment with business needs.
    • Coordinated across North American time zones as part of distributed agile teams.
    • Created and maintained modular backend services with RESTful API endpoints using Flask and SQLAlchemy.
    • Optimized PostgreSQL database operations with custom indexing and performance profiling techniques.
    • Collaborated in security audits and incorporated threat-mitigation practices during service design and deployment.
    • Created monitoring tools in Java, upgraded earlier tool versions, and refactored code.

Technical Lead

Microexcel
Nov, 2017 - Mar, 2021 (3 yr 4 months)
    • Owned architecture and low-level design for healthcare-focused custom software projects from the ground up.
    • Led legacy-to-modern application migration, including full-stack redesign and data migration.
    • Developed RESTful APIs and backend services adhering to SOLID principles and design patterns.
    • Integrated SAP APIs with CRM systems, automating key support operations.
    • Built Angular and Vue.js frontends integrated with backend APIs to deliver responsive, modular user interfaces tailored for healthcare workflows.
    • Built internal business tools to streamline workflows and data reporting.
    • Mentored junior developers and conducted regular code reviews for continuous codebase improvement.

Senior Full Stack Developer

Innosolv Consultancy Services
Jun, 2014 - Oct, 2017 (3 yr 4 months)
    • Owned architecture and low-level design for healthcare-focused custom software projects from the ground up.
    • Led legacy-to-modern application migration, including full-stack redesign and data migration.
    • Developed RESTful APIs and backend services adhering to SOLID principles and design patterns.
    • Integrated SAP APIs with CRM systems, automating key support operations.
    • Built internal business tools to streamline workflows and data reporting.
    • Mentored junior developers and conducted regular code reviews for continuous codebase improvement.

PHP Web Developer

Aleoy Software
Jun, 2013 - Jun, 2014 (1 yr)
    Developed product features post-training in web frameworks and AWS environments. Wrote unit tests with PHPUnit and integration tests using Selenium. Handled deployment of features to EC2 via FileZilla. Created documentation to support maintainability and team onboarding.

Faculty (Dept Of IT)

R.K. Degree College
Sep, 2011 - May, 2013 (1 yr 8 months)
    Taught BCA and MCA students, conducting lectures, labs, and project mentorship. Facilitated curriculum development and organized technical seminars to foster innovation. Guided final semester projects and served as exam invigilator, maintaining academic integrity.

Testimonial

rithmXO

Andrés Sánchez

I have worked with Sammsul in the recent past and I can only recommend him as the great developer, leader and colleague he is. He was always supportive and helpful, bringing new useful ideas to the team but being also open to embrace new ones from other co-workers.

March 27, 2024, Andrés worked with Sammsul Hoque on the same team

rithmXO

Dave Watson

I have had the pleasure of working with Sam for 5 years now. We have worked together on applications of varying sizes and complexity. Sam is full of great ideas and not afraid to share them. Sam is driven and always willing to take on the hard tasks. When given a task, Sam can be trusted to get the job done in a timely manner. On top of being a hard worker, Sam has been an excellent mentor to me and to the other members of the team. I would recommend Sam to anyone looking for a trustworthy, hardworking individual.

March 21, 2024, Dave was senior to Sammsul Hoque

rithmXO

Jeff Stockett

I have had the pleasure of working with Sam on multiple projects at multiple companies. I have always been impressed with Sam's problem solving skills. He is someone who is not afraid to jump in and make things happen. I've seen him come up with elegant and efficient solutions in a number of languages and technologies. I've seen him solve problems and build features across the full stack. Whether it's front end, API, data layer, you name it, he has the skills to get the features across the finish line. It's been an honor to work with Sam and I hope I have the opportunity to work with him again in future endeavors.

March 19, 2024, Jeff managed Sammsul Hoque directly

Major Projects

12 Projects

Kompass 2.0

Innosolv Consultancy Services Pvt. Ltd.
    An online portal used internally by the client to manage its candidate-verification process.

Logistics

Innosolv Consultancy Services Pvt. Ltd.
    An online portal where the client manages the business operations and can also track the packages/fleets to ensure that their business runs smoothly.

Impact Index

Innosolv Consultancy Services Pvt Ltd
    An online portal that generates reports used by analysts for data analysis; it is designed to calculate performance metrics for players in the game of Cricket.

Aleoy

Aleoy Software Pvt. Ltd.
    Developed for real estate agencies to showcase properties and services, this project included a module for scraping content from other real estate websites. I designed tables, relations, and implemented MVC structure, writing test scripts. Regular code refactoring preceded merging and deployment to the UAT server, ensuring robust functionality.

Global Trip Manager

Microexcel Inc
    An online portal used by the client to provide services to customers who opt for chartered flights in the Middle East and Europe.

BlueBlaze.Earth

BlueBlaze.Earth
    A SaaS platform which provides methods for sourcing ESG data, calculating emissions, managing emissions targets, connecting to carbon offset markets, producing corporate and regulatory reports, offering ESG insights for investments, performing risk modeling, and ensuring compliance with regulatory obligations.

1-STOP

    Provides a suite of online SaaS products for the shipping industry, handling booking, tracking, invoicing, billing, and other commercial operations for clients.

ORION(Freeus)

    A SaaS platform which provides dealers with tools to manage customer devices, view key metrics, and access marketing materials, thereby supporting business operations and enhancing customer service.

iDINE

    A POS application widely used in restaurants in major cities, covering bookings, reservations, restaurant management, and inventory and personnel management.

rithmXO

    A SaaS platform to enhance business operations. The company's expertise includes M&A services for seamless transitions, IT infrastructure design, fractional CTO services for strategic leadership, integration solutions to automate processes, and collaboration services to improve communication.

Rithm

RithmXO Software
Jan, 2021 - Mar, 2024 (3 yr 2 months)

    Project Details-

    A leading SaaS product that revolutionizes business orchestration with its microservices architecture, optimizing operations and project management efficiency for organizations.

    Responsibilities:

    • Participated in creating the system design for the product.
    • Wrote code for the backend services, implemented using an event-driven microservices architecture.
    • Performed code reviews for other developers.
    • Created two internal tools automating the company's evaluation process.
    • Tech Stack - C#, .NET, ExpressJS, MySQL, MSSQL, MongoDB, Docker, AWS, ReactJS, Event Store, Microservices (Event Driven)

1-Click Cure

Microexcel Inc.
Dec, 2017 - May, 2019 (1 yr 5 months)

    Project Details:

    The web app unifies patients and doctors, enabling patient registration, family member addition, and document uploads.

    Patients request consultations online/offline, search for doctors via name, specialty, or location, and make online payments.

    Prescriptions are stored in patient profiles. Doctors manage consultation requests and access previous prescriptions, utilizing patient intake forms.

    Responsibilities:

    • Participated in creating the architecture for the application.
    • Wrote both frontend and backend code for the application.
    • Gathered requirements from clients and delivered the application in a phased manner by deploying it to their cloud space.
    • Tech Stack: C#, .NET 4.5, AWS, VueJS

Education

  • Master's degree, Computer Science

    Assam University, Silchar (2011)
  • Bachelor's degree, Computer Science

    Assam University, Silchar (2009)

Interests

  • Travelling
  • Reading
  • Photography
  • Cooking

AI-Interview Questions & Answers

    Hi, my name is Samsul Choudhury. I completed my graduation and post-graduation in Computer Science, and after that I taught graduate and postgraduate students at a college for almost two years. Then I moved to Bangalore and started working with a real-estate startup, where I got my initial training in web development, testing, AWS infrastructure, and the other essentials of software development. I worked there for a year; unfortunately the company closed, and I moved to a consultancy where I worked on both products and projects. That gave me the opportunity to work on different kinds of projects and pick up new tech stacks: I started with PHP, then learned .NET, and later Node.js with Express. From 2017 I worked with Microexcel, and since then I have been working remotely, collaborating with teams based in the United States, the UK, and elsewhere. Initially I worked in their time zones; later we arranged things so they could collaborate in mine. Along the way I also picked up Python, and I gradually shifted my focus towards the backend, because that is where I want to deepen my expertise, though I am still comfortable with frontend work: I have worked with Angular (last with Angular 6), with Vue, and, about six months ago, with React. I have used MySQL, MSSQL, and MongoDB; in my current role I am also working with AWS DynamoDB and PostgreSQL. Beyond that, I have experience with microservices and serverless architectures, with client interactions, and with designing applications from scratch, especially SaaS products: at my previous employer I built an entire SaaS product from scratch with seven other engineers, laying out the architecture, iterating on it, and building it in .NET with a couple of modules in Python. My main motivation now is to join a company where I can stay for a long time and grow both personally and professionally. I am also interested in learning new things; I am currently learning AI/ML and have started on blockchain as well, which should help me propose better solutions in the future. Thank you.

    I have not worked on automating this exact process with Jenkins end to end, but I can describe the approach. Whatever the project is, TypeScript, Python, or .NET, the first step is to create a Docker image that encapsulates the whole project and its dependencies in one place. Then the supporting dependencies, such as the database, internal networking, and volumes, go into a docker-compose file. Once those changes are in, we create a CI/CD pipeline that pulls the latest changes, builds the Docker container, runs the tests, and validates everything. Once validated, Jenkins publishes to the respective servers. Jenkins acts as the orchestrator: it watches what has changed, checks whether any actions have been triggered, and coordinates these steps. I can do these things manually in the Jenkins application, but I do not remember exactly how to automate each individual step. Thank you.
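
The compose file mentioned in the answer above could be sketched roughly as follows. This is an illustrative fragment, not from any actual project: the service names, port, image tag, and the choice of Postgres are all assumptions.

```yaml
# Hypothetical docker-compose sketch: one app container plus its database,
# so a CI pipeline (e.g. Jenkins) can build, test, and publish one unit.
version: "3.8"
services:
  app:
    build: .               # the Dockerfile encapsulates the project and its dependencies
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://app:app@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=app
      - POSTGRES_DB=app
    volumes:
      - dbdata:/var/lib/postgresql/data   # named volume declared below
volumes:
  dbdata:
```

A pipeline would then run something like `docker compose up --build` followed by the test suite before publishing the image.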

    There are several dimensions to cover when securing any RESTful API. First, every API ultimately accesses some backend resource, whether data, files, or a cloud resource, so we have to be specific about what the API is for and what versioning we follow, so that security measures can be applied across all versions. Next, communication should always run over a secure layer: HTTPS rather than plain HTTP. HTTP may be fine for development, but for production HTTPS is the first requirement. Then there should be authentication and authorization. Authentication can be done in a number of ways, for example JWT, and it can be enforced at an API gateway or load balancer. On top of that, authorization should apply role-based access control, checking whether a particular call is permitted. We should also implement rate limiting and throttling; otherwise the API becomes a bottleneck and is exposed to denial-of-service traffic, so limiting adds another layer of protection. Whatever data we fetch, we should not return everything: we should be specific about which resource is being accessed, with what intent, with which filters, and what data is actually needed, so nothing unnecessary is exposed to the outside world. Finally, we need a CORS policy. If we know an API will only be accessed from certain domains, that should be encoded in the backend's CORS policy, and any access that does not come from an identified frontend or other approved source should be rejected. Together these measures keep the APIs secure while they are consumed from mobile or frontend applications. Thank you.
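
Two of the layers mentioned above, the token check and the rate limiter, can be sketched framework-free so the ideas stand alone; in a real service these would sit in Express middleware or at the API gateway, and the token check would verify a signed JWT rather than compare against a set. All names here are illustrative.

```javascript
// Hypothetical middleware-style checks operating on plain request objects.
// Each check returns an error response, or null meaning "continue".

// Simplified bearer-token gate; a real service would verify a signed JWT
// (signature, expiry, audience) instead of membership in a set.
function makeAuthCheck(validTokens) {
  return (req) => {
    const header = req.headers.authorization || "";
    const token = header.startsWith("Bearer ") ? header.slice(7) : null;
    if (!token || !validTokens.has(token)) {
      return { status: 401, body: "unauthorized" };
    }
    return null;
  };
}

// Fixed-window rate limiter keyed by client id: at most `limit`
// requests per `windowMs` window. `now` is injectable for testing.
function makeRateLimiter(limit, windowMs, now = Date.now) {
  const windows = new Map(); // clientId -> { start, count }
  return (req) => {
    const t = now();
    const w = windows.get(req.clientId);
    if (!w || t - w.start >= windowMs) {
      windows.set(req.clientId, { start: t, count: 1 }); // new window
      return null;
    }
    if (w.count >= limit) return { status: 429, body: "too many requests" };
    w.count += 1;
    return null;
  };
}
```

The null-or-response contract mirrors what Express middleware expresses with `next()` versus sending an error.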

    For sessions there are a few options. One is cookies, which can carry session state. Another is JWT: along with the access token we issue a refresh token. The access token lets the client access a resource for a limited window; when that window expires, the frontend uses the refresh token to ask the backend for a fresh access token, and the backend renews it for the next interval. These tokens are stored server-side, in the database or in Redis or another cache for speed, and because they are stored somewhere we always have the option to kill a session by deleting it from that store. There are constraints to keep in mind: Redis is much faster, but a single Redis instance holding every session becomes a bottleneck, while multiple instances improve throughput at the cost of some coordination overhead. So it depends on the size of the project, the expected traffic, how frequently sessions change, and how long they persist; all of these factor into the session design for a particular application. In my previous and current projects we use a single Redis instance to track all the JWT sessions as key-value pairs, with the refresh token as the key and the access token as the value. When a token expires, the request goes back to the backend, which runs the renewal process, updates the refresh and access tokens, sends back the new access token, and updates the Redis cache; from then on, checks hit Redis rather than the backend. Thank you.

    For branching, we create an individual task for each feature and for each bug fix. The convention I follow is: the environment first (dev, UAT, or production), then a hyphen, then the ticket number, then a hyphen, then a short description. That lets us work in parallel, since every issue and feature has its own Jira ticket and therefore its own branch. Hotfixes follow the same pattern: when a fix targets production, we mark it in the branch name and also apply labels on the pull request in Git that indicate priority. The reviewer of that branch then knows from the label that it is high priority and reviews it first; I do the same when reviewing or peer-reviewing, taking high-priority PRs before normal or low-priority ones. So the structure is environment, ticket number, and description, and if it is a hotfix, that is flagged at the end in capitals, marking it as a fix, a bug, or a hotfix. Thank you.
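
The naming convention described above can be captured in a small helper. The function name, option shape, and example ticket numbers are all illustrative assumptions.

```javascript
// Hypothetical helper implementing the convention from the answer:
// environment-ticket-description, with an optional capitalized
// HOTFIX suffix for fixes that target production.
function branchName(env, ticket, description, { hotfix = false } = {}) {
  const slug = description
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")  // spaces/punctuation -> hyphens
    .replace(/^-+|-+$/g, "");     // trim stray leading/trailing hyphens
  const parts = [env, ticket, slug];
  if (hotfix) parts.push("HOTFIX");
  return parts.join("-");
}
```

For example, `branchName("uat", "PROJ-123", "Fix login redirect")` yields `uat-PROJ-123-fix-login-redirect`.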

    With a tight deadline, the first thing is to clarify what "tight" means: hours or days. My practice is test-driven development. For the feature or bug in question, I write the tests first, covering the main aspects of the feature. Then, based on the changes and refactoring I make and any new methods I create, I add a few more secondary tests. Shipping without those secondary tests reduces code coverage, but that can be improved later; the tests written against the feature's requirements are non-negotiable, which is why they come first, and I write the feature to satisfy them. Because those tests cover all the scenarios in the requirement, there are no loopholes, and even as the deadline closes in I am not missing the crucial parts: the functionality and its tests are already in place. Tests for helper functions created during refactoring can slip to the next release or cycle if needed; I note them and pick them up later, since they are not critical to the feature itself. The priority is always the feature's functionality and its tests, and those are met first. I also check code coverage every couple of weeks to keep it around 75 to 80 percent, depending on the standard the team follows; if it drops below that, something has been missed and needs to be addressed in the retrospective meeting. Thank you.

    The code first fetches all the users and then searches within that collection. The line doing `await User.find()` returns every record in the collection, which could be 10,000 or 100,000 rows, and then the `.find()` call iterates over that entire returned list again; those two steps are the bottleneck. Instead, we should call `findById` and pass the specific ID to return exactly the record we want, rather than re-scanning the whole dataset. The supporting fix is to ensure there is an index on the ID field, creating one if it does not exist. With the index in place, `findById(id)` returns just the record we are interested in, the turnaround time drops, and the bottleneck goes away.
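
The difference can be illustrated with an in-memory stand-in: fetching everything and scanning (like `User.find()` followed by `Array.find`) versus a keyed lookup (like an indexed `User.findById`). The data and function names are invented for the demonstration.

```javascript
// In-memory illustration of the fix: full scan vs. indexed lookup.
const users = Array.from({ length: 100000 }, (_, i) => ({ id: i, name: `user${i}` }));

// Anti-pattern: materialize every record, then scan the whole list
// (the in-memory analogue of `await User.find()` plus a second pass).
function slowLookup(id) {
  const all = users.slice();            // pull every row
  return all.find((u) => u.id === id);  // then scan them all again
}

// Indexed lookup: build the index once, then each query is O(1),
// the way the database uses an index on the id field for findById.
const byId = new Map(users.map((u) => [u.id, u]));
function fastLookup(id) {
  return byId.get(id); // the analogue of `await User.findById(id)`
}
```

Both return the same record; only the second avoids touching the other 99,999 rows on every call.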

    So what I can see here is that this captures just a general exception. If something goes wrong while reaching the resource, the catch will swallow the whole thing, but we are not checking what the response code from that particular API is, which is very important. We should only return data if the status code is acceptable, whether that's 200, 201, 202, or whatever we decided when designing the API contract. Anything else, like a 400, 500, or 503, should produce a proper error message. And if we want to handle it in such a way that it always goes to the catch part, then whenever we encounter a status code outside the success range, we can throw, and the catch will receive it. So, yeah, I think that should handle everything there.
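A minimal sketch of that pattern, assuming an injected `fetchFn` so the helper can be exercised without a live endpoint (the helper name and stubs are hypothetical):

```javascript
// Check the status code and throw, so non-2xx responses land in the same
// catch block as network errors instead of being silently returned.
async function getJson(url, fetchFn) {
  try {
    const res = await fetchFn(url);
    // Only accept the success range agreed in the API contract.
    if (res.status < 200 || res.status >= 300) {
      throw new Error(`Request failed with status ${res.status}`);
    }
    return await res.json();
  } catch (err) {
    // Network failures AND bad status codes both arrive here.
    console.error('API call failed:', err.message);
    throw err; // re-throw so callers can react; don't swallow it
  }
}

// Stubbed transports for demonstration.
const ok = async () => ({ status: 200, json: async () => ({ id: 1 }) });
const broken = async () => ({ status: 503, json: async () => ({}) });

getJson('/users/1', ok).then((data) => console.log(data)); // { id: 1 }
getJson('/users/1', broken).catch(() => {});               // logs the 503
```

With the real Fetch API, the `res.status` range check can be shortened to `if (!res.ok)`.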

    Now, I'm not very sure about cloud-native Node.js applications specifically; I've never handled exactly that kind of thing. But as I understand the problem, there are predictable and unpredictable failure scenarios, and this comes down to how we design the whole system. So taking it from that perspective, not specific to a cloud-native Node.js application, and just giving a rough idea based on my understanding: the predictable failure scenarios should already be listed when we design the system, so that we know how to deal with them. Maybe it's a failure of data consistency; maybe it's a failure of data availability, for which we can create replicas and multiple instances and scale them, so that is already in place. For unpredictable failure scenarios, it's largely about availability. We should always make sure there is more than one instance running of whatever application we're talking about. If there's only a single instance, then high network traffic or high load is exactly the kind of thing we may not have predicted, and it will take the service down. So we can always scale horizontally: have at least three instances, say, whenever there's a possibility our application will be hosted in an environment exposed to that kind of high traffic. In that case we can be ready, be cautious about the whole scenario, and have a few more resources already running, and that can be done with Kubernetes and a load balancer on AWS and so on. So I think these are the general measures that can be used for handling those scenarios.
Apart from that, for the predictable scenarios, we are already aware of the exceptions and errors that might happen and could break the solution and take it down. Those should be dealt with in a very graceful way; I'm not saying we should take the application down gracefully, I'm saying we should handle the errors and exceptions gracefully rather than exiting the whole application. That should never happen. For the unpredictable scenarios, if something goes wrong, there should be another instance running that keeps the application available. Now, in both cases, one of the key aspects is that we should have logs. Without logs, we cannot tell what went wrong. Even if we have listed every scenario and error, when something still fails, the logs are the go-to place to check what went wrong and how we can mitigate it immediately. That can be done in many ways: there are different log levels and different types of logging, and it's up to how we design the system and what measures we take to ensure logging happens without failure. So, yeah, that's how I would handle this whole scenario.
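One way to sketch "handle predictable failures gracefully, with structured logs, instead of exiting" is a small retry wrapper; the names and the transient-failure demo below are invented for illustration:

```javascript
// Retry a flaky operation a few times, log each failure in a structured
// way, and only surface the error once retries are exhausted.
async function withRetry(operation, { retries = 3, label = 'op' } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await operation();
    } catch (err) {
      // Structured log line: what failed, which attempt, why.
      console.error(JSON.stringify(
        { level: 'error', label, attempt, message: err.message }
      ));
      if (attempt === retries) throw err; // fail gracefully, no process.exit()
    }
  }
}

// Demo: an operation that fails twice, then succeeds on the third attempt.
let calls = 0;
const flaky = async () => {
  calls += 1;
  if (calls < 3) throw new Error('transient failure');
  return 'ok';
};

withRetry(flaky, { label: 'demo' }).then((result) => console.log(result));
```

The caller decides what "graceful" means at the edge (return a 503, serve stale cache, etc.); the process itself keeps running and the logs record every attempt.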

    So when we're talking about middleware, we are basically trying to delegate a couple of redundant tasks, things that would otherwise be done separately by every process, and once those are done correctly, we grant access to the backend resource, whatever API or microservice we have written. What middleware can really help us with is authentication and authorization, which I already explained in the previous questions. All the cross-cutting concerns, like authentication, authorization, security, and caching, can go into the middleware layer, where the general, redundant things are handled. They don't have to be specific to any particular service or backend resource; they are general steps that have to happen on every interaction. That's why we always implement OAuth2 in the middleware, along with all the error handling for the different scenarios of what OAuth2 might return, because we are integrating a couple of third-party providers. It could be Google, it could be LinkedIn, it could be Facebook, anything. Each has its own signature, its own request/response cycle, its own status codes, its own response structure. So based on the integration we're doing, all of that can live in the middleware. That also ensures that tomorrow, if we want to change, introduce, or remove anything, it's done in that one single place rather than touching every service, microservice, or backend server we're running.
So the services themselves remain unaffected, and all the security-level concerns are handled in the middleware. The changes pertaining to that always stay in one place: one change, one place, and that reduces complexity. It also ensures that whatever changes are made are followed by all our services, because every request goes through that common layer. Thank you.
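An Express-style middleware sketch of the idea (the function runs standalone here with mock objects; the bearer-token check is a placeholder, not a real OAuth2 validation):

```javascript
// Cross-cutting auth lives in one place instead of in every handler.
// Shape follows the Express (req, res, next) middleware convention.
function authMiddleware(req, res, next) {
  const token = req.headers['authorization'];
  if (!token || !token.startsWith('Bearer ')) {
    res.statusCode = 401;
    res.body = { error: 'missing or malformed token' };
    return; // stop the chain; the protected handler never runs
  }
  req.user = { token: token.slice(7) }; // downstream handlers see the identity
  next();
}

// Exercise it with mock request/response objects.
const res = { statusCode: 200 };
let reachedHandler = false;
authMiddleware(
  { headers: { authorization: 'Bearer abc123' } },
  res,
  () => { reachedHandler = true; }
);
console.log(reachedHandler); // true: valid token, request continues
```

In a real Express app this would be registered once with `app.use(authMiddleware)`, which is exactly the "one change, one place" property: swapping the token check for a full OAuth2 introspection touches only this function.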

    How do I see securing payment system integrations? I have no practical knowledge of payment system integrations specifically. I have done integrations with ERP APIs, SAP APIs to be precise, so I can speak in those terms; I'm not sure how much of that applies to payment integration, because payments are a totally different domain. But based on my experience integrating with SAP APIs: first, we have to understand the documentation, go through it, and see the request and response architecture they have given, the contracts, and the limitations in terms of request patterns and response structure. Based on that, we can define our own APIs, because we cannot go beyond the scope of the API design they have given; whatever they expose, we keep within that boundary. We can add our own level of security, which is obviously feasible, but we also have to comply with the security measures the third party mandates. I believe these are common guidelines we can follow for any third-party integration, be it payments, SAP, or some other application, so I think they apply here as well. Now, payment integration will have a few more aspects, because monetary transactions and compliance requirements are involved, so it can go to different levels. Again, I would have to study how the whole system works, and maybe in the near future I can give you a better answer on this one. Thank you.
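The "stay within the vendor's contract" idea can be sketched as a thin adapter that wraps the third-party API, so its request/response shapes live in one place. Everything below (the `VendorClient` class, field names like `resultCode` and `ORDER_ID`, and the stub transport) is hypothetical, not a real SAP or payment API:

```javascript
// Wrap the third-party API behind an adapter so the rest of the codebase
// never depends on the vendor's field names or status conventions.
class VendorClient {
  constructor(transport) {
    this.transport = transport; // injected, so the adapter is testable offline
  }

  async getOrder(orderId) {
    const raw = await this.transport('GET', `/orders/${orderId}`);
    // Enforce the vendor's contract at the boundary...
    if (raw.resultCode !== 'OK') {
      throw new Error(`vendor error: ${raw.resultCode}`);
    }
    // ...and translate their response structure into our own domain shape.
    return { id: raw.payload.ORDER_ID, total: raw.payload.TOTAL_AMT };
  }
}

// Stub transport imitating the (hypothetical) vendor response format.
const stub = async () => ({
  resultCode: 'OK',
  payload: { ORDER_ID: 'A-42', TOTAL_AMT: 199.5 },
});

new VendorClient(stub).getOrder('A-42').then((o) => console.log(o));
// { id: 'A-42', total: 199.5 }
```

For a real payment provider, this boundary is also where their mandated security measures (signing, idempotency keys, webhook verification) would be applied, without leaking into the rest of the system.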