Vetted Talent

Brahmabit Mahapatra

I’m a backend engineer with expertise in Golang, Node.js, cloud platforms, CouchDB, and distributed systems, building reliable, scalable products across the fintech, SaaS, and AI domains.

  • Role

    Senior Backend Engineer (SDE-III)

  • Years of Experience

    5.7 years

Skillsets

  • Microsoft Azure
  • Atmel Studio
  • BPMN
  • Cassandra
  • Cmf
  • Cognos
  • CouchDB
  • Design patterns
  • Detectron2
  • Docker Swarm
  • Eagle
  • Express.js
  • Jaspersoft
  • Jenkins
  • LevelDB
  • Messaging queues
  • Arduino
  • MongoDB Charts
  • MS Access
  • Multitenancy architecture
  • Pub/Sub
  • React Native
  • Socket.IO
  • Terraform
  • Unix scripting
  • Weka
  • YOLO
  • QuickSight
  • Mendix
  • Oracle APEX
  • OutSystems
  • MongoDB
  • C
  • C++
  • Go
  • Java - 1 year
  • Python
  • R
  • Swift
  • .NET Framework
  • Angular
  • Docker
  • Ethereum
  • Google Cloud Platform
  • Hyperledger Fabric
  • Kubernetes
  • Microservices architecture
  • AWS - 2 years
  • MySQL
  • Node.js
  • OpenCV
  • Power BI
  • PyTorch
  • React.js
  • scikit-learn
  • Solidity
  • Tableau
  • TensorFlow
  • WebRTC
  • Apache Kafka
  • Apache Spark
  • Apache ZooKeeper

Vetted For

8 Skills
  • Back-End Developer - Node (Remote) - AI Screening
  • Result: 62%
  • Skills assessed: Excellent Communication, RESTful APIs, Node.js, Azure, Git, JavaScript, Leadership, MongoDB
  • Score: 56/90

Professional Summary

5.7 Years
  • Feb 2025 - Present (1 yr)

    Senior Backend Engineer (SDE-III)

    Weave Communications
  • Aug 2024 - Feb 2025 (6 months)

    Senior Software Engineer (IC)

    Channel19
  • Jan 2021 - Sep 2024 (3 yr 8 months)

    Technology Lead/Principal Architect

    Pipli Technologies
  • Dec 2018 - Jan 2019 (1 month)

    Software Development Intern

    Pharos Solutions
  • May 2019 - Jun 2020 (1 yr 1 month)

    Software Development Engineer

    Pharos Softtech
  • Jul 2020 - Dec 2020 (5 months)

    Innovation Lead

    Pharos Softtech
  • Aug 2018 - Dec 2018 (4 months)

    Blockchain Application Development

    Bookingjini Labs
  • Jul 2018 - Nov 2018 (4 months)

    Medical Records Management on Blockchain

    Lyfscience MedTech
  • May 2018 - Jul 2018 (2 months)

    Software Development Engineer

    Tata Consultancy Services
  • May 2017 - Jul 2017 (2 months)

    Application Development Intern

    Vodafone

Applications & Tools Known

  • .NET Framework
  • Angular
  • Power BI
  • Node.js
  • NGINX
  • Docker
  • Kubernetes
  • GCP (Google Cloud Platform)
  • AWS
  • Hyperledger Fabric
  • Python
  • OpenCV
  • MySQL
  • Swift
  • Go
  • C
  • C++
  • R
  • MS Access
  • Cassandra
  • CouchDB
  • Ethereum
  • Solidity
  • PyTorch
  • scikit-learn
  • YOLO
  • Docker Swarm
  • Microsoft Azure
  • OutSystems
  • Mendix
  • Express.js
  • React Native
  • React.js
  • Cognos
  • Tableau
  • Jaspersoft
  • Arduino
  • Apache Kafka
  • Apache ZooKeeper
  • Apache Spark
  • Jenkins
  • MongoDB
  • Terraform

Work History

5.7 Years

Senior Backend Engineer (SDE-III)

Weave Communications
Feb 2025 - Present (1 yr)
    Contributed to new-feature development and bug fixes on the Call Intel Platform, which processes over 1 million calls per day. Monitored and assessed observability tooling daily to ensure the platform's high availability and performance. Integrated AI-driven analysis tools to extract actionable insights from call recordings, improving decision-making and operational efficiency. Supported the Insurance Verification team, focusing on integrating new service providers into the platform. Drove enhancements in AI workflows by refining LLM prompts and optimizing prompt-engineering strategies. Tech stack: Golang, GCP, Kubernetes, Prometheus, Grafana, etc.

Senior Software Engineer (IC)

Channel19
Aug 2024 - Feb 2025 (6 months)
    Developed new features (new exception types) for the exception management system and improved its interfaces, abstractions, and design. Built the feature management system, working toward a control plane for the platform. Implemented a rule engine where appropriate so that features scale with minimal or no code change. Built a task management system following the case management framework, linked to the different entities/exceptions in the system. Worked on integrations with multiple third-party systems (TMS, ELDs). Built an AI voice agent PoC, optimizing latency, voice quality, and prompts.

Technology Lead/Principal Architect

Pipli Technologies
Jan 2021 - Sep 2024 (3 yr 8 months)
    Developed core data collection software on the .NET Framework, supporting daily operations across 5,000+ stores and handling 50,000+ transactions. Defined the technology suite, architecture, and deployment for the applications in the e-billing solution, following industry practices. Designed the microservices architecture (12+ services) and led application development and deployment on the cloud. Built the related web applications in Angular, and used MongoDB Charts and Power BI to develop the apps' dashboards. Led a team of five to further optimize the entire platform (both frontend and backend) and move it to production. Initiated and led the development of a document system from scratch, focused on machine learning and image processing for invoice and receipt processing, leading two associate developers on the project. Built a case management framework to enhance the feedback and reputation management system. Designed the high-level and low-level architecture for the platform (unifying its three-plus products), and built plugins and app designs for third-party applications. Designed and led development of an invoice management system for automating invoice processing for organizations.

Innovation Lead

Pharos Softtech
Jul 2020 - Dec 2020 (5 months)
    Spearheaded diverse application development projects using both traditional and low-code tools; responsibilities included designing, solutioning, and leading the development of PoCs for emerging technologies tailored to client needs. Architected and led the backend development and cloud deployment of Chambre, a comprehensive hospitality guest management system. Developed the proof of concept for Cruzo, an e-commerce platform akin to Shopify with enhanced features, successfully deployed and tested with three retailers and 1,000+ end users. Initiated and developed an IoT proof of concept for smart home automation. Engineered video streaming solutions on AWS infrastructure and developed web conferencing systems using various backend technologies and WebRTC for real-time communication.

Software Development Engineer

Pharos Softtech
May 2019 - Jun 2020 (1 yr 1 month)
    Contributed to various Node.js application developments, scaled RESTful API servers using NGINX and multithreading, and developed hardware products, programming home automation devices in embedded C. Deployed applications on GCP, and created Docker images deployed to Kubernetes.

Software Development Intern

Pharos Solutions
Dec 2018 - Jan 2019 (1 month)
    Built the backend (process engine) for various Node.js applications, integrated external services, created RESTful APIs, and designed the architecture and MongoDB databases.

Blockchain Application Development

Bookingjini Labs
Aug 2018 - Dec 2018 (4 months)
    Developed a blockchain application on Hyperledger Fabric for record management and a KYC service for their booking engine.

Medical Records Management on Blockchain

Lyfscience MedTech
Jul 2018 - Nov 2018 (4 months)
    Developed a blockchain application using Hyperledger Fabric, integrated into the existing Lyflink system. Built a proof of concept (PoC), laid the groundwork for production, and expanded functionality. Leveraged Node.js for application development and secured the REST API endpoints.

Software Development Engineer

Tata Consultancy Services
May 2018 - Jul 2018 (2 months)
    During an internship at TCS, Bangalore, worked on a blockchain-based application for tokenizing credit/debit cards. The role covered market research, technology platform selection, and designing the application's architecture and workflow, including backend, middleware, and frontend components. The application was developed on Hyperledger Fabric, with Node.js as the primary programming language. Contributed insights to ongoing client projects around loyalty services and KYC processes on blockchain, and explored other blockchain technologies such as Quorum, BlockApps STRATO, and Hyperledger Sawtooth.

Application Development Intern

Vodafone
May 2017 - Jul 2017 (2 months)
    Developed a comprehensive application package for the HR team at Vodafone, Bhubaneswar: a test interface and an analytics application that processed results and generated candidate reports in PDF format. Built in Visual C# on the .NET Framework, with MS Access for database storage.

Major Projects

2 Projects

WSN

    Deployed wireless sensors near mines to record the intensity of vibrations from blasting activities, using an Arduino Uno, accelerometers, SIM800 modules, and IoT/cloud technology. Data collected by the sensors was transmitted to a cloud server for analysis.

Quick Attendance System

    Application for capturing class attendance from photographs, developed using image processing techniques in OpenCV and machine learning algorithms in TensorFlow. Work included server management, Android app development, Python sockets, MySQL, and Python scripting.

Education

  • B.Tech & M.Tech (Dual), Computer Science Engineering

    National Institute of Technology, Rourkela (2020)
  • Science: Physics, Chemistry, Mathematics, Biology

    Mothers Public School (2015)
  • Physics, Chemistry, Mathematics, Biology, Computer Science

    Delhi Public School, Kalinga (2013)

AI Interview Questions & Answers

I am a software engineer with more than five and a half years of experience, working with multiple startups across several domains, including hospitality, healthcare, and, briefly, fintech. I have had the chance to explore many languages and tools across multiple technologies, including blockchain, low-code application development, BPM tools, and, of course, application development on the Node.js stack (MongoDB, Angular or React, Node.js, Express). I have built scalable solutions used by more than ten thousand users and handling tens of GBs of data every single day; at peak, one application I built served around 100,000 users within a three-hour window, which translates to a few thousand users every minute. Most of my experience is on the backend with Node.js, and I have a fair bit of capability in Python as well. On the frontend I have worked briefly with Angular and with React. On the DevOps and deployment side, I have worked with all three major clouds (AWS, Azure, and GCP), primarily GCP and Azure. I have a fair understanding of Kubernetes and of deploying infrastructure as code, and I strongly believe in system design, both high-level and low-level. I also have a solid understanding of microservices and have been developing my applications from scratch using microservice architecture. Thank you.

A Node.js backend application can be divided into three or four layers. Start with the data layer: we create data access objects (DAOs), perhaps using an ORM such as Mongoose to connect to MongoDB, with helper classes so that all database access logic lives in one place. On top of that sits a service layer containing the business logic and related details. Above that are the handlers, which connect the DAO interfaces with the business logic. Finally, the routes use those handlers to create the API endpoints. As for structuring things properly: for MongoDB I would create a singleton for the connection and use it globally. I use OOP concepts daily and follow proper design patterns, so using an ORM and creating models for validation is how I would handle the Node.js and MongoDB combination, and all my business logic would have utilities and helpers associated with its service. That is how I would design the application.
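The layering described above can be sketched in plain JavaScript. This is a minimal illustration, not code from any real project: the class names are invented, and an in-memory array stands in for MongoDB/Mongoose.

```javascript
// Data access layer (DAO): all storage logic lives here.
class UserDao {
  constructor(collection) { this.collection = collection; }
  insert(user) { this.collection.push(user); return user; }
  findByEmail(email) { return this.collection.find(u => u.email === email) || null; }
}

// Service layer: business rules only, no transport or storage details.
class UserService {
  constructor(dao) { this.dao = dao; }
  register(email) {
    if (this.dao.findByEmail(email)) throw new Error('email already registered');
    return this.dao.insert({ email, createdAt: new Date().toISOString() });
  }
}

// Handler layer: adapts the service to an HTTP-shaped request/response;
// the routes would then mount this handler on an endpoint.
function registerHandler(service) {
  return (req) => {
    try {
      return { status: 201, body: service.register(req.body.email) };
    } catch (err) {
      return { status: 409, body: { error: err.message } };
    }
  };
}

const service = new UserService(new UserDao([]));
const handler = registerHandler(service);
const ok = handler({ body: { email: 'a@b.com' } });
const dup = handler({ body: { email: 'a@b.com' } });
```

Each layer only talks to the one directly below it, which is what makes swapping the in-memory DAO for a Mongoose-backed one a local change.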

There are a few ways to secure sensitive data in a MongoDB-backed application. First, all connections between Node.js and the database service should use SSL certificates to prevent attacks in transit. Second, on the MongoDB side, we should have proper access controls to prevent unauthorized access to the database. Initially, the data could be stored in the clear in MongoDB and masked on the application side before being served to the end user. If that does not suffice, we can store the data encrypted in MongoDB and decrypt it every time we fetch it. That is the approach I understand and know of; there may be a few other approaches, but this is what I know at this point.

Which Node.js tools and libraries? I am not sure about this one.

There are multiple ways to integrate error tracking and monitoring tools into a Node.js backend deployed on Azure. If I am using Azure's logging, I can either use the Azure SDKs to add the monitoring tools, or expose an endpoint that is called every time an error happens, or write a utility that pushes the data to Azure whenever something occurs. If I am not using an external logging service, say I store my log data in MongoDB instead, I would write a utility that appends the logs to the MongoDB database every time something happens. If that is not viable, I could write the logs to the file system and then pick up those files and process them with an ELK stack. The problem with the file system approach is that in a distributed deployment environment, those file systems could become a bottleneck.
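The "write a utility that forwards errors to a sink" idea can be sketched as a small buffered tracker. The sink here is a plain function; in practice it might wrap an Azure SDK client or a MongoDB writer, both of which are hypothetical in this sketch.

```javascript
// Buffered error tracker: captures errors with context and flushes them to a
// sink in batches, so the hot path never blocks on the logging backend.
function createErrorTracker(sink, { flushAt = 3 } = {}) {
  const buffer = [];
  return {
    capture(err, context = {}) {
      buffer.push({ message: err.message, context, at: new Date().toISOString() });
      if (buffer.length >= flushAt) this.flush();
    },
    flush() {
      if (buffer.length) sink(buffer.splice(0, buffer.length));
    },
    pending: () => buffer.length,
  };
}

const batches = [];
const tracker = createErrorTracker(batch => batches.push(batch), { flushAt: 2 });
tracker.capture(new Error('db timeout'), { route: '/orders' });
tracker.capture(new Error('bad payload'), { route: '/users' });
```

Batching is also what mitigates the bottleneck concern raised above: the sink is called once per batch rather than once per error.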

When designing a distributed system, to ensure data consistency I would follow patterns like the saga pattern, either orchestrated or choreographed, which lets me track everything that happens. Everything would be paired with event sourcing, which helps us audit the entire flow and prevent consistency issues. If we need MongoDB itself to be ACID-compliant, there is something called sessions, or transactions, which lets you write to multiple collections and then complete the session; if any failure occurs, it rolls back all previously completed operations. That is what ensures consistency of data within MongoDB. To ensure consistency of data models, we could use an existing ORM, or a validation library where we define the models in Node.js. I do not remember a particular library for Node.js, but in Python, for example, there is Pydantic, which helps you define a model that can then be integrated with PyMongo.
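The orchestrated saga mentioned above can be shown in miniature: each step pairs an action with a compensation, and on failure the completed steps are rolled back in reverse order. The steps here are synchronous and in-memory purely for illustration; real sagas coordinate remote services.

```javascript
// Run saga steps in order; on any failure, compensate completed steps in reverse.
function runSaga(steps) {
  const done = [];
  try {
    for (const step of steps) { step.action(); done.push(step); }
    return { ok: true };
  } catch (err) {
    for (const step of done.reverse()) step.compensate();
    return { ok: false, reason: err.message };
  }
}

const ledger = { reserved: 0 };
const result = runSaga([
  { action: () => { ledger.reserved += 1; },          // reserve inventory
    compensate: () => { ledger.reserved -= 1; } },    // release on rollback
  { action: () => { throw new Error('payment declined'); }, // payment fails
    compensate: () => {} },
]);
```

After the failed payment, the earlier reservation is compensated, leaving the system consistent without a distributed transaction.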

This particular application creates an Express app. The only thing missing right now is that it is not exposed yet: no server is created. What this part does is define a middleware that validates the authentication token. If the token is valid, it proceeds and allows you to use the endpoints. Currently the middleware only checks whether the token is present: if the token is missing, it sends back a 403 response asking for an authentication token; if it is present, it proceeds to validation, though the return value for a failed validation is not there yet, and there could be additional returns at that point. Finally, if all the requirements are satisfied, it proceeds to the endpoint, which sends back "hello world". To ensure better error handling and performance, I am not entirely sure, but we could add try/catch blocks where things might fail, and there could be other potential problems on this API as well.
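The snippet under review is not reproduced in this document, so the following is a hypothetical reconstruction of the middleware being described: reject a missing token with 403, otherwise continue. It is written as a plain function with a tiny mock response so it can be exercised without starting a server.

```javascript
// Express-style middleware: block requests without an Authorization header.
function requireAuth(req, res, next) {
  const token = req.headers && req.headers.authorization;
  if (!token) {
    return res.status(403).json({ error: 'authentication token required' });
  }
  // A real check would verify a JWT or look the token up; omitted here.
  next();
}

// Minimal mock response, just enough to observe what the middleware did.
function mockRes() {
  const res = { code: null, body: null };
  res.status = (c) => { res.code = c; return res; };
  res.json = (b) => { res.body = b; return res; };
  return res;
}

let nextCalled = false;
const denied = mockRes();
requireAuth({ headers: {} }, denied, () => {});
const allowed = mockRes();
requireAuth({ headers: { authorization: 'Bearer abc' } }, allowed, () => { nextCalled = true; });
```

Wrapping the token check in try/catch, as suggested above, would belong inside the `// A real check...` branch, where verification can throw.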

This is a repository class with getAll, getById, and a DB context. This part follows dependency injection: when the repository class is initialized, it takes the DB context as a dependency, so the pattern being followed here is dependency injection. We also have the abstract class, or interface, for the repository, which declares the insert, update, and delete operations, and the repository class provides the implementation for each of them. The repository would be initialized by passing in our database helper or database context, and then we call whatever other functions we need on it. I do not have a strong suggestion beyond that, except that ideally the DB context should follow the singleton pattern. That is the main thing I am able to think of right now.
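The two patterns identified in that answer, constructor-injected context and a singleton context, can be sketched together. The "context" here is an in-memory map of tables standing in for a real DbContext; the names are illustrative.

```javascript
// Singleton accessor: one shared context per process.
let sharedContext = null;
function getDbContext() {
  if (!sharedContext) sharedContext = { tables: new Map() };
  return sharedContext;
}

// Repository receives its context via the constructor (dependency injection),
// so tests can pass a fake context instead of the shared one.
class Repository {
  constructor(context, table) {
    this.context = context;
    if (!context.tables.has(table)) context.tables.set(table, new Map());
    this.rows = context.tables.get(table);
  }
  insert(id, row) { this.rows.set(id, row); }
  getById(id) { return this.rows.get(id) || null; }
  update(id, patch) { this.rows.set(id, { ...this.rows.get(id), ...patch }); }
  remove(id) { this.rows.delete(id); }
}

const repo = new Repository(getDbContext(), 'users');
repo.insert(1, { name: 'Ada' });
repo.update(1, { role: 'admin' });
const row = repo.getById(1);
const sameContext = getDbContext() === getDbContext();
```

Injection and the singleton are complementary: production code injects `getDbContext()`, while tests inject a fresh throwaway context.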

How would I architect the system if we want millions of operations flowing through our Node.js application into MongoDB? The application should be horizontally scalable. We cannot simply scale vertically, since that has hard limits and is potentially very expensive; instead, we horizontally scale the Node.js tier so it can handle many concurrent requests. Similarly, we scale MongoDB horizontally, with multiple instances in the cluster, and enable sharding so that large volumes of data are spread across the individual instances. We also need purge mechanisms or archiving processes in place so we do not exceed the database's limits at any point in time, or auto-scaling enabled on MongoDB. All of this should sit on an orchestration layer so that auto-scaling happens easily, without manual intervention: as load increases on Node.js or MongoDB, it should scale automatically. That would be my starting point for handling that many concurrent requests; the exact solution might require some additional processes on top of this.

Environment configuration typically stores things like the database URI, usernames and passwords, third-party endpoints, and any other variables you would like to be configurable. These can be set separately for each environment an application runs in, say development, pre-prod or staging, and finally production. That is how I think environment configuration helps manage the different stages of a Node.js application's life cycle, though I do not know exactly how this works internally.
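The per-environment configuration described above can be sketched as a small loader over `process.env`: defaults for optional values, a hard failure for required ones. The variable names are illustrative, not from any real app.

```javascript
// Build a config object from environment variables, with defaults and
// required-variable validation. `env` is injectable for testing.
function loadConfig(env = process.env) {
  const required = (name) => {
    if (!env[name]) throw new Error(`missing required env var: ${name}`);
    return env[name];
  };
  return {
    nodeEnv: env.NODE_ENV || 'development',
    port: Number(env.PORT || 3000),
    mongoUri: required('MONGO_URI'), // no safe default for a database URI
  };
}

const cfg = loadConfig({ NODE_ENV: 'staging', MONGO_URI: 'mongodb://db/app' });
let missing = null;
try { loadConfig({}); } catch (err) { missing = err.message; }
```

Failing fast on a missing `MONGO_URI` at startup is usually preferable to discovering it on the first query in production.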

One of the major things I did was implement caching. I created a pass-through cache for our applications: while transactions were being written to a MongoDB database, the data was first written to Redis, and from there we created events to write it through to MongoDB. This greatly reduced latency on our application: it took the response time of one write-heavy API down from roughly 950 ms to 1.25 s to around 200-300 ms. Apart from that, I looked at application performance at a high level and used multithreading, adding worker threads and also running the application as multiple processes on multiple cores; both of these can improve performance considerably.
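The write-behind ("pass-through") cache described above can be sketched in-memory: writes land in a fast store immediately and are flushed to the slow database in batches. The Map stands in for Redis and the `persist` function for the MongoDB writer; both are illustrative.

```javascript
// Write-behind cache: fast synchronous writes, deferred batched persistence.
function createWriteBehindCache(persist) {
  const cache = new Map();
  const queue = [];
  return {
    write(key, value) {          // fast path: acknowledge from memory
      cache.set(key, value);
      queue.push({ key, value });
    },
    read(key) { return cache.get(key); },
    flush() {                    // slow path: drain queued writes to the DB
      const batch = queue.splice(0, queue.length);
      if (batch.length) persist(batch);
      return batch.length;
    },
  };
}

const persisted = [];
const cache = createWriteBehindCache(batch => persisted.push(...batch));
cache.write('txn:1', { amount: 40 });
cache.write('txn:2', { amount: 75 });
const fast = cache.read('txn:1');
const flushed = cache.flush();
```

The latency win comes from the API responding after the Redis-speed `write`, while the MongoDB write happens later via `flush`; the trade-off is a window where data exists only in the cache.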