Vetted Talent

Akash Priyadarshi

A subject matter expert (SME) and versatile Full Stack Developer, Data Engineer, Data Scientist, and AI/ML Engineer with 4+ years of experience building scalable, efficient web applications and deploying advanced AI/ML solutions.
  • Role

    AI Engineer

  • Years of Experience

    4 years

  • Professional Portfolio

    View here

Skillsets

  • Java
  • Next.js
  • NestJS
  • MongoDB
  • Microservices
  • Logging
  • LangChain
  • Lambda
  • Kotlin
  • OpenAI
  • Integration Testing
  • HTML5
  • gRPC
  • GraphQL
  • Go
  • GitHub Actions
  • Flask
  • FastAPI
  • Pinecone
  • Playwright
  • Pub/Sub
  • pytest
  • RDS
  • Redux Toolkit
  • REST
  • Semantic Search
  • System Design
  • Unit Testing
  • Vector DBs
  • Vertex AI
  • WebSockets
  • Code Health Monitoring
  • Binary I/O
  • RAG pipelines
  • Azure
  • TensorFlow
  • Python
  • React.js
  • Node.js
  • TypeScript
  • AWS
  • Docker
  • GCP
  • Hugging Face
  • NumPy
  • OpenCV
  • PostgreSQL
  • Redis
  • Scikit-learn
  • Tailwind CSS
  • Angular.js
  • API Design
  • BigQuery
  • Chart.js
  • CI/CD Pipelines
  • Cloud Monitoring
  • Cloud Run
  • CloudWatch
  • Distributed Systems
  • EC2
  • Event-Driven Architecture
  • Express

Vetted For

10 Skills
  • Roles & Skills
  • Results
  • Details
  • Full Stack Engineer - React JS and Node JS (Remote), AI Screening
  • 83%
  • Skills assessed: MeteorJS, AWS, Git, JavaScript, Jenkins, MongoDB, Node.js, Problem Solving Attitude, React.js, React Native
  • Score: 83/100

Professional Summary

4 Years
  • Feb, 2025 - Present (1 yr 2 months)

    Full-Stack AI Engineer

    Mercor
  • Jul, 2020 - Sep, 2023 (3 yr 2 months)

    Senior Software Developer & Technical Lead

    Bihari Construction & Infra

Applications & Tools Known

  • Linux
  • Windows
  • Git
  • Jira
  • Selenium
  • Cypress
  • Postman
  • Apache Kafka
  • Airflow
  • Power BI
  • Craft CMS
  • Testing Framework
  • Docker
  • Kubernetes
  • AWS
  • GCP
  • Azure

Work History

4 Years

Full-Stack AI Engineer

Mercor
Feb, 2025 - Present (1 yr 2 months)
    Spearheaded the development and long-term maintenance of LLM-integrated internal tools, with production deployments using Python (FastAPI), LangChain agents, Claude, and GPT-4. Built and monitored agentic workflows leveraging Retrieval-Augmented Generation (RAG), vector databases (Pinecone), and real-time data pipelines for semantic search. Implemented robust integration and unit testing frameworks; contributed to automated deployment, logging, and monitoring using Docker and GitHub Actions. Participated in product incubation cycles, delivering sprint-based prototypes aligned with GTM tooling needs for cross-functional teams. Reviewed and improved legacy systems, optimized API performance, and ensured long-term maintainability through observability and error tracking practices.

Senior Software Developer & Technical Lead

Bihari Construction & Infra
Jul, 2020 - Sep, 2023 (3 yr 2 months)
    Led the design and development of AI-enabled full-stack enterprise platforms using MERN stack, Python, and OpenCV. Built an AI-powered car parking management system using computer vision to detect real-time parking availability. Integrated ML inference pipelines with a user booking system and internal ERP dashboard, enabling automated parking allocation. Architected scalable backend services using Node.js and FastAPI, supporting real-time updates via REST APIs and WebSockets. Designed distributed microservices and asynchronous processing pipelines for camera feeds, bookings, and ERP workflows. Deployed and managed production systems on AWS (EC2, S3, Lambda, RDS) with Docker-based CI/CD pipelines. Mentored developers on system design, AI integration, cloud deployments, and production debugging.

Achievements

  • Media Coverages
  • DRDO Patent Letter
  • AI innovation campaign
  • System efficiency boost
  • Patent shortlisted by DRDO

Major Projects

4 Projects

Machine Learning Model to Forecast Company Stock Prices using LSTM Model

    Developed a cloud-based web application using React.js, Python (Flask), and AWS to analyze financial market trends. Built RESTful APIs in Python (Flask) and Node.js to handle real-time data ingestion and processing. Integrated automated test cases for model performance evaluation, improving model accuracy by 20%. Deployed on AWS (EC2, S3, RDS, Lambda), improving performance by 3x through load balancing and optimized queries. Implemented dynamic dashboards using React.js and Chart.js, visualizing financial analytics in real-time.

Core Contribution to Astropy - FITS Logical VLA Storage Fix

    Diagnosed and resolved FITS standard compliance bug in Astropy's binary table I/O module. Implemented writer/reader conversion logic for logical variable-length arrays to store ASCII T/F/NULL bytes per FITS spec. Added comprehensive unit tests and collaborated with core maintainers; contribution integrated into v8.0.0 milestone.

AI Car Parking Management System

    Built an AI-powered parking management system with React.js, Node.js, Express.js, and WebSockets for real-time slot updates. Developed predictive parking allocation models using Python, OpenCV, Scikit-learn, and TensorFlow. Designed automated test cases for AI behavior and accuracy testing, leading to a reduction in performance errors.

Open-Source LLM Developer Chatbot SDK & Platform

    Built a developer SDK and chatbot platform using Node.js, OpenAI GPT-4, and Pinecone. Implemented frontend and backend for an AI chatbot using Next.js, GPT-4, Pinecone, and file ingestion with real-time chat. Structured the repo as an open-source developer tool with semantic search, embedded logging, and CLI-based configuration.

Education

  • Master of Science Computer Science (Distinction)

    University of Liverpool (2024)
  • Bachelor of & Instrumentation Engineering (Merit)

    Galgotias College of Engineering and Technology (2020)

Certifications

  • Microsoft Certified: Azure AI Fundamentals

AI-interview Questions & Answers

Background: give a brief introduction of yourself. Hi, my name is Akash Priyadarshi. I'm a full stack developer with 4 years of professional experience building scalable web applications and AI-driven solutions. My technical expertise spans the MERN stack (MongoDB, Express, React.js, and Node.js), and I'm also proficient in Python for back-end API development and machine learning workflows. I've built various projects during my job, my master's, and my B.Tech. The first project I built on the job was an AI car parking management system: a real-time parking slot allocation system using React.js, Node.js, and WebSockets, integrated with a predictive AI model built with TensorFlow. The next project, during my master's, was a chatbot for contextual customer interaction: an advanced chatbot system powered by a fine-tuned GPT model and integrated with Socket.IO for real-time communication. Right now I'm also working on a personal research project, a dark matter mapping AI model: a Python-based deep learning pipeline for cosmic shear analysis combining PyTorch, NumPy, and interactive visualization. My master's dissertation was a stock market price forecasting bot: I created a machine learning model using an LSTM network in Python, with an API developed in Node.js for real-time data fetching, and I used Azure AutoML to train and compare models to find the best-performing one. As for my technical strengths: on the front end I have used React.js, Redux, HTML, CSS, MUI, and Tailwind; on the back end, Node.js, Express.js, Python, Flask, Django, and RESTful API development.
For databases, I have worked with both NoSQL and SQL, namely MongoDB and MySQL. For cloud and DevOps, I have experience with AWS (Lambda, EC2), Docker, Kubernetes, Azure, and Google Cloud. And for AI and ML, I have experience with TensorFlow, PyTorch, and scikit-learn.

Okay. In React, what method is essential for properly unmounting components that involve ongoing API requests? In React, properly unmounting a component is crucial to avoid potential memory leaks or errors, and the useEffect cleanup function is the essential method for handling this. The key approach: we can use the AbortController API to cancel any ongoing API request when the component unmounts, which is particularly useful for fetch-based API calls. React's useEffect hook lets you return a cleanup function; this function is executed when the component unmounts or when the effect's dependencies change. It is essential for preventing memory leaks by stopping API calls that are no longer relevant, and it also cleans up event listeners, subscriptions, or any intervals associated with the component. If you are using Axios for API requests, you can use cancel tokens to manage request cancellation. So the important points for unmounting components: always use a cleanup function in useEffect to cancel ongoing API calls when the component unmounts; use AbortController wherever possible for fetch-based requests, and cancel tokens for Axios. This approach ensures proper resource management and prevents unexpected errors after unmounting.
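The pattern described above can be sketched as follows. The React wiring is shown in comments since it needs a rendered component; the endpoint name is an assumption for the example, and the AbortController mechanics themselves run in plain Node:

```javascript
// Minimal sketch of useEffect cleanup with AbortController.
//
//   useEffect(() => {
//     const controller = new AbortController();
//     fetch('/api/user', { signal: controller.signal })   // hypothetical endpoint
//       .then((res) => res.json())
//       .then(setUser)
//       .catch((err) => {
//         if (err.name !== 'AbortError') throw err;  // aborts are expected here
//       });
//     return () => controller.abort();  // cleanup: runs on unmount
//   }, []);

const controller = new AbortController();
console.log(controller.signal.aborted); // false while the request is live
controller.abort();                     // what the cleanup function does
console.log(controller.signal.aborted); // true: any pending fetch now rejects
```

Any in-flight fetch tied to that signal rejects with an AbortError the moment the cleanup runs, so no state update lands on an unmounted component.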

Okay. So: you are building a dashboard that needs to display a large dataset, for example 10,000 rows. How would you ensure smooth rendering and efficient performance in React? When building a React dashboard that needs to display a large dataset, efficient performance and smooth rendering are critical to a good user experience. The strategies we can use: first, virtualization, also known as windowing. Rendering 10,000 rows at once can cause significant performance issues; virtualization ensures that only the rows currently visible in the viewport are rendered, and virtualization libraries let you render just that small portion of the data, which greatly improves performance. Next, pagination: we can break the dataset into smaller chunks that are fetched and rendered on demand. For example, use a component library like Material UI or Ant Design to paginate the data already loaded in the browser, and fetch rows in chunks from the server using the back-end API. We can also use lazy loading or infinite scrolling: instead of fetching and rendering all rows at once, load more rows as the user scrolls down, fetching and appending new data as the user nears the bottom of the list. Then memoization: React components re-render by default, which can hurt performance with such a large dataset, so we can use React.memo to prevent unnecessary re-renders of table rows, and useMemo to memoize expensive calculations or derived data. Finally, efficient state management: if the data updates frequently, make sure state changes are efficient and localized. We can use the React Context API or libraries like Redux or Zustand to manage the dataset without causing unnecessary re-renders, updating only the specific rows that change rather than the entire table. There are more techniques, but for now, these.
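The windowing idea above can be made concrete with a small sketch of the arithmetic a virtualization library performs internally (the row height, viewport size, and overscan values are arbitrary assumptions for illustration):

```javascript
// Given the scroll offset, compute the only rows worth rendering.
// `overscan` adds a few extra rows above/below to smooth fast scrolling.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last };
}

// 10,000 rows of 30px each, in a 600px viewport scrolled 3,000px down:
const { first, last } = visibleRange(3000, 600, 30, 10000);
console.log(first, last); // 97 123: a few dozen rows rendered instead of 10,000
```

The component then renders only rows `first` through `last`, positioned inside a spacer element whose height is `totalRows * rowHeight`, so the scrollbar still behaves as if all 10,000 rows existed.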

Okay. Given the React component state management approach shown, what are the potential issues? The potential issues with the code: state updates may be asynchronous, and there is a binding issue. The incrementCount function is not bound to the component instance, so when it is passed as a callback or event handler, `this` will be undefined, leading to a runtime error. Also, React's setState function is asynchronous, meaning multiple calls to setState may be batched together; this causes problems when the next state depends on the previous state, because this.state.count might not always refer to the latest state. The code also uses a class component, which is less common in modern React development compared to functional components with hooks; this approach increases boilerplate and may not align with current best practices. How can we improve it? Use the callback (updater) version of setState to ensure state is updated based on the latest state. To fix the binding issue, bind incrementCount in the constructor, or, preferably, define the method as an arrow function. We can also modernize: rewrite the component as a functional component using the useState hook. For event handling best practices, ensure handlers like onClick are bound or defined inline correctly; with the modern functional approach this issue is inherently resolved.
Finally, in modern React we would write it as: import useState from React, declare the function MyComponent with no parameters, then const [count, setCount] = useState(0), define const increment = () => setCount((prevCount) => prevCount + 1), and at the end export default MyComponent.
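A reconstruction of the fixes dictated above, with React's batching behaviour simulated so the sketch runs standalone (the component and handler names follow the transcript and are illustrative):

```javascript
// Class-component fix (arrow function solves the `this` binding; the
// updater form solves the stale-state problem):
//   incrementCount = () => this.setState((prev) => ({ count: prev.count + 1 }));
//
// Hook version, as dictated in the answer:
//   const [count, setCount] = useState(0);
//   const increment = () => setCount((prevCount) => prevCount + 1);

// Why the updater form matters: React may batch several setState calls.
// Functional updaters are applied in order, each seeing the previous result.
function applyBatched(initialState, updaters) {
  return updaters.reduce((state, fn) => fn(state), initialState);
}

const increment = (prevCount) => prevCount + 1;
console.log(applyBatched(0, [increment, increment])); // 2 (both increments land)
// With the object form, both calls would have captured count = 0,
// and the final value would be 1.
```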

Okay, we have to examine this JavaScript function that is intended to return a new array where each element of the input array is squared. While the logic is mostly correct, there are areas for improvement. First, an out-of-bounds error: the for loop condition uses i <= array.length, which will cause an out-of-bounds read on the last iteration, since array indices range from 0 to array.length - 1. The condition should be i < array.length instead. Second, the variable declarations: the var keyword is used for the squared array and the loop variable i; it is better practice to use let or const to avoid variable scoping issues in modern JavaScript. Finally, the function can be rewritten using modern JavaScript features like Array.prototype.map for better readability and a more functional style. That's it.
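Writing the fixes out, assuming the original was the usual var-and-<= loop the answer describes:

```javascript
// Corrected loop: i < array.length (not <=), const/let instead of var.
function squareAll(array) {
  const squared = [];
  for (let i = 0; i < array.length; i++) {
    squared.push(array[i] * array[i]);
  }
  return squared;
}

// Idiomatic rewrite with Array.prototype.map:
const squareAllMap = (array) => array.map((n) => n * n);

console.log(squareAll([1, 2, 3]));    // [ 1, 4, 9 ]
console.log(squareAllMap([1, 2, 3])); // [ 1, 4, 9 ]
```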

So: explain a method to efficiently execute complex queries in MongoDB that need to read and write data in a Node.js application. To efficiently execute complex queries in MongoDB while handling read and write operations in a Node.js application, there are many methods. First, optimized query design: ensure proper indexing is in place to speed up query execution, analyzing query patterns and creating indexes on the fields frequently used for filtering, sorting, and joining. Second, the aggregation framework: use MongoDB's powerful aggregation pipeline for complex queries; it lets you filter, group, project, and transform data very efficiently. Third, connection pooling: reuse existing database connections instead of creating a new connection for every request; this reduces overhead and improves performance. Fourth, bulkWrite for batch operations: for write-intensive workloads, use bulkWrite to perform multiple insert, update, or delete operations in a single command, reducing the number of database calls. Asynchronous query execution with async/await or promises handles queries in a non-blocking way, which keeps the application responsive. We can also optimize the aggregation pipeline itself: limit the number of documents processed early in the pipeline using $match or $limit, and use $project to include only the necessary fields, reducing the amount of data passed through the pipeline. There is also sharding for scalability, caching results, transaction management for consistency of writes, and monitoring query performance.
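A sketch of the filter-early, project-small advice as a concrete pipeline (the orders collection and its fields are hypothetical):

```javascript
// Filter early, trim fields, then group, so later stages see fewer documents.
const pipeline = [
  { $match: { status: 'completed' } },              // filter first
  { $project: { userId: 1, amount: 1, _id: 0 } },   // pass only needed fields
  { $group: { _id: '$userId', total: { $sum: '$amount' } } },
  { $sort: { total: -1 } },
  { $limit: 10 },                                   // top 10 spenders
];

// With the Node.js driver this would run as:
//   const top = await db.collection('orders').aggregate(pipeline).toArray();
console.log(pipeline.length); // 5 stages
```

Because $match comes first, an index on `status` can serve it, and every later stage operates on a fraction of the collection.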

Okay, this one's hard. Your React app's performance has degraded significantly as data volume grew; what steps would you take, using React tools and MongoDB profiling, to identify and solve the issue? To identify and solve performance degradation in a React app as data volume grows, a combination of React DevTools and MongoDB profiling can be used. On the React side: first, identify component re-renders. Enable the React DevTools Profiler and look for wasted renders, i.e. check whether components are re-rendering due to unchanged props or state. Inspect state management: analyze how state is being managed in the app. Is the global state being updated too frequently, or are large amounts of data being stored unnecessarily or passed down via props? Analyze the component hierarchy: look for deeply nested components that trigger renders across the tree; if found, split components into smaller ones and use lazy loading or code splitting to reduce rendering overhead. Also check virtual DOM updates: use the "highlight updates" option in React DevTools to see which parts of the app re-render frequently. On the MongoDB side: enable profiling and set the profiling level to capture slow queries, for example db.setProfilingLevel(1, { slowms: 100 }), so any query taking longer than 100 milliseconds is logged, helping identify slow database operations. Then analyze the query logs: check MongoDB for slow or inefficient operations such as missing indexes, full collection scans, or fetching excess data from large collections.
Finally, optimize the queries themselves: tune the aggregations and profile execution using query plans.

Okay. Describe a scenario where an atomic operation in MongoDB is critical within a Node.js application, and how would you achieve it? Suppose you are building an e-commerce application where inventory stock levels are managed in a MongoDB database. When a user places an order, the system needs to, first, verify the product's stock and, second, deduct the product quantity atomically to prevent overselling. Here, atomic operations are critical to ensure that simultaneous transactions, for example multiple users purchasing the same product, do not result in race conditions where more stock is sold than is available. In this situation atomicity matters because it prevents overselling, ensures stock levels reflect all transactions accurately even under high concurrency, and prevents partial updates or data corruption in case of system failure. So how do we achieve atomic operations in MongoDB? MongoDB provides atomicity at the document level; for more complex operations across multiple documents or collections, you can use transactions or atomic operators like $inc. I have two approaches. Approach 1: use $inc on a single document. If all inventory data for a product is stored in a single document, we can use the $inc operator to atomically decrement the stock; $inc atomically increments or decrements a field's value, so it is the best option here. The findOneAndUpdate operation ensures that only documents meeting the condition, for example stock greater than or equal to the quantity to buy, are updated. This approach is well suited to managing stock decrements when all the data resides in a single document. Approach 2: use MongoDB transactions.
If the inventory data spans multiple collections or needs to update multiple documents, we can use a MongoDB transaction. Why does it work? Atomicity: it ensures that either both operations succeed or neither does. Consistency: it prevents partial updates if the application crashes or a step fails. That's why.
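Approach 1 can be sketched like this. The collection and field names are assumptions; the point is that the filter and the $inc together form one atomic check-and-decrement:

```javascript
// Atomic stock decrement: the filter admits the document only while enough
// stock remains, and $inc applies the change in the same atomic operation.
const quantityToBuy = 2;

const filter = { _id: 'product-42', stock: { $gte: quantityToBuy } };
const update = { $inc: { stock: -quantityToBuy } };

// With the Node.js driver:
//   const res = await db.collection('inventory')
//     .findOneAndUpdate(filter, update, { returnDocument: 'after' });
//   if (res === null) throw new Error('insufficient stock'); // lost the race
console.log(update.$inc.stock); // -2
```

If two buyers race for the last unit, MongoDB serializes the two updates; the second one's filter no longer matches, so it returns null instead of driving stock negative.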

Okay, a hard one: demonstrate how you would design a fault-tolerant system using Node.js and AWS for a critical application. Basically, a fault-tolerant system ensures availability, reliability, and scalability even when parts of the system fail. To design one: for availability, use load balancers, i.e. AWS Elastic Load Balancer (ELB), to distribute traffic across multiple Node.js instances, so the service stays available even under very high traffic. Use Auto Scaling groups to ensure new instances are launched automatically if current instances fail. Use a fault-tolerant database: Amazon RDS with Multi-AZ, or DynamoDB with replication. Make the Node.js instances stateless, maintaining a stateless architecture so that any instance can handle an incoming request. Add caching, using Amazon ElastiCache or Redis alongside Node.js, to reduce database load. Use message queues such as Amazon SQS for asynchronous processing and service decoupling. Use Amazon CloudWatch for logs, alarms, and performance monitoring, so if something goes wrong we find out quickly. Use Route 53 DNS failover to redirect traffic in case of a regional failure, and regularly back up snapshots of the database and file storage using Amazon S3 with lifecycle policies. For the fault-tolerant design itself: design the Node.js server instances to be stateless and use external storage, for example S3, for sessions and other stateful data; this ensures there is no dependency on any individual instance, so we can easily replace or scale them. Use the load balancer to distribute traffic across multiple Node.js instances.
The ELB also performs health checks and routes traffic away from unhealthy instances. And auto scaling: configure Auto Scaling groups so that new Node.js instances are launched if an existing one fails; auto scaling can also handle sudden spikes in traffic by automatically provisioning resources. For relational databases, for example MySQL or PostgreSQL, use Amazon RDS with Multi-AZ deployment; for NoSQL, use DynamoDB with global tables for multi-region fault tolerance. That's it.

Okay. If you have a memory leak in your Node.js application, how would you go about diagnosing and solving it? Before jumping into diagnosis, recognizing the typical signs is the best start: steadily increasing memory usage, where the memory footprint keeps growing over time without stabilizing; the application slowing down as memory consumption increases; the garbage collector running more often without memory being reclaimed efficiently; and eventually the process running out of memory. Those are the symptoms of a memory leak. After confirming them, we diagnose. First, measure memory usage: use process.memoryUsage() to check the application's memory usage; if it grows steadily over time, that indicates a potential leak. Use monitoring tools for a real-time view of the application's memory: the Node.js inspector with Chrome DevTools, or third-party tools like New Relic, Datadog, or Prometheus. Next, capture heap snapshots to identify memory usage patterns and the objects retained in memory: attach Chrome DevTools via the Node.js inspector, navigate to the Memory tab, and capture a heap snapshot. Then look for objects that persist across snapshots but should have been garbage collected. We can also use the heapdump package to generate snapshots programmatically.
Analyze the heap snapshots to identify detached DOM nodes, objects still referencing nodes no longer in use, functions retaining references to objects unnecessarily, or unremoved event listeners that prevent objects from being garbage collected. The common culprits here are unclosed database connections or file streams, global variables unintentionally holding large objects, and caches growing indefinitely without cleanup. Profiling tools provide insight into memory allocation and object retention; Clinic.js is a powerful suite for diagnosing performance issues. To fix a leak: remove event listeners once they are no longer needed, manage caches properly and limit data retention, clean up global variables, fix database connection leaks if there are any, and throttle or debounce requests. These steps prevent memory leaks.