Vetted Talent

Krupal Kothadiya

Senior Full Stack Developer with 6+ years of experience building scalable web applications using ReactJS, Next.js, Elixir, and Node.js. Proven expertise in leading cross-functional teams, optimizing performance, and delivering high-quality solutions. Skilled in Agile project management, with experience as both Scrum Master and technical lead, coordinating with technical and non-technical teams to deliver complex features on schedule. Passionate about creating seamless user experiences and driving technical innovation.
  • Role

    Senior Elixir Software Engineer (Full-Stack) - Frontend Lead

  • Years of Experience

    6 years 10 months

Skillsets

  • GitHub Actions
  • Technical Roadmaps
  • Tailwind CSS
  • Redis
  • ReactJS
  • PostgreSQL
  • Performance Optimization
  • Node.js
  • Next.js
  • MongoDB
  • Mentoring
  • Low-latency architectures
  • Google Datastore
  • Ruby on Rails - 2 Years
  • GCP
  • Frontend
  • Docker
  • Distributed Systems
  • Developer Productivity
  • CI/CD
  • Backend
  • AWS
  • Agile/Scrum
  • Elixir/Phoenix
  • Redux

Vetted For

9 Skills

  • Full Stack Software Engineer (Remote) - AI Screening
  • Skills assessed: Communication Skills, Data-driven Decision Making, Problem Solving, API, Architecture Design, React, Testing and Debugging, AWS, Node.js
  • Score: 49/90 (54%)

Professional Summary

6 Years 10 Months
  • Nov 2020 - Present (5 yr 1 month)

    Senior Software Engineer (Full-Stack) - Frontend Lead

    TagMango Pvt. Ltd.
  • Nov 2018 - Nov 2020 (2 yr)

    Software Engineer (Full-Stack)

    IDFY

Applications & Tools Known

  • ReactJS

  • Redux-Saga

  • Node.js

  • Redis

  • AWS (Amazon Web Services)

  • Juspay

  • Stripe

  • Antd

  • RabbitMQ

  • Postgres

  • Elixir

Work History

6 Years 10 Months

Senior Software Engineer (Full-Stack) - Frontend Lead

TagMango Pvt. Ltd.
Nov 2020 - Present (5 yr 1 month)
    Led frontend for creator platform (10K+ MAU), optimized dynamic payment pages with NextJS to reduce TTI by 30% (Lighthouse). Acted as Frontend Team Lead, managing project execution, fostering Agile practices, and providing technical guidance, resulting in a 10% improvement in team productivity. Collaborated with technical and non-technical teams to conceptualize and deliver features based on creator feedback, ensuring timely delivery of high-quality solutions. Built Admin Dashboard automating 15+ revenue workflows, reducing manual reporting by 20 hours/week and improving data accuracy by 30%.

Software Engineer (Full-Stack)

IDFY
Nov 2018 - Nov 2020 (2 yr)
    Designed and implemented a monolithic API gateway in Elixir, achieving 25% faster response times compared to the legacy Ruby on Rails system. Modernized Elixir stack via RabbitMQ microservices, cutting downtime 30% and saving $20K/year on GCP.

Achievements

  • Single-handedly built and maintained a responsive ReactJS web application, including its production and testing environments, enabling fast development and quick improvements
  • Optimized dynamic pages with NextJS
  • Led web team and acted as Scrum Master
  • Implemented multiple payment gateways integration

Major Projects

5 Projects

Creator Platform

    Built a course platform with module tracking, drip logic, quizzes, Q&A and certificates. Integrated Zoom SDK for video conferencing with scheduling, reminders, and cloud recording capabilities. Implemented real-time analytics for engagement metrics, earnings tracking, like/comment counts, and payment success rates. Developed white-label SaaS solution enabling brands to launch custom-branded learning platforms on their own domains.

Checkout Page

    Enhanced checkout speed with Next.js Static Optimization + MongoDB-triggered cache invalidation, cutting API load. Scaled payment infrastructure to handle $50M+ annual transactions.

API - Gateway

    Deployed Redis for high-performance caching and implemented workflow orchestration, improving efficiency and system scalability.
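
The caching approach named here is essentially the cache-aside pattern. A minimal sketch in JavaScript, with an in-memory `Map` standing in for Redis (a real deployment would use Redis GET/SET with a TTL) and a hypothetical `fetchFromDb` loader:

```javascript
// Sketch of the cache-aside pattern: check the cache first, fall back to the
// database, and remember the result. A Map stands in for Redis here;
// `fetchFromDb` is a hypothetical loader, not part of the original project.
const cache = new Map();

async function getWithCache(key, fetchFromDb) {
  if (cache.has(key)) return cache.get(key); // hit: skip the database entirely
  const value = await fetchFromDb(key);      // miss: load once...
  cache.set(key, value);                     // ...and remember it
  return value;
}

// Usage: the second read is served from the cache, so the loader runs once.
let dbCalls = 0;
const loadUser = async (id) => { dbCalls += 1; return { id }; };
getWithCache('u1', loadUser)
  .then(() => getWithCache('u1', loadUser))
  .then(() => console.log(dbCalls)); // 1
```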

KYC Service

    Developed WebRTC-based realtime verification system adopted by leading Indian banks, processing 50K+ KYCs/month. Built an Agent Dashboard for KYC processing with intelligent UX and analytics to streamline agent workflows.

PDF Generator

    Optimized job processing using Redis-backed Bull queues, reducing infrastructure costs by 25% and eliminating manual workflows.

Education

  • M.Tech in CAD/CAM

    Nirma University (2017)
  • B.E. in Mechanical Engineering

    Dr. Subhash Technical Campus (2014)

Certifications

  • GATE 2015 qualified

Interests

  • Travelling
  • Learning new technologies
  • Badminton
  • Hollywood movies
  • Trekking

AI-Interview Questions & Answers

    Hi, I'm Krupal. I'm currently working with TagMango as a web lead, and I've been with this company for more than 3 years. I have more than 5 years of experience in the IT industry. In my current company I mainly handle the frontend, but I also work with backend technology. Before this I worked with IDfy, where my major experience was on the backend side. My tech stack is JavaScript, React, Elixir, and Node.js. That's it, thank you.

    In an application there are many places where a memory-heavy process can be opened and never closed. To find a leak, we check which heavy operations our platform performs and whether a memory spike matches a particular pattern, for example after a particular API is hit. We can add logs around that API and watch the dashboard to see whether memory keeps climbing after a certain API or pattern; if it does, we can narrow the leak down to that code path. Basically, a memory leak usually happens because we opened some memory-heavy resource, such as a file read, image capture, or recording, and forgot to close it.
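
The per-route memory logging described in this answer can be sketched in Node.js with `process.memoryUsage()`. The wrapper name and route are hypothetical illustrations, not anything from a real codebase:

```javascript
// Sketch: wrap a suspected heavy handler and log the heap delta per call.
// `withMemoryLog` and the wrapped handler are hypothetical.
function withMemoryLog(label, handler) {
  return (...args) => {
    const before = process.memoryUsage().heapUsed;
    const result = handler(...args);
    const after = process.memoryUsage().heapUsed;
    // A route whose delta keeps growing call after call is a leak suspect.
    console.log(`${label}: heap delta ${Math.round((after - before) / 1024)} KiB`);
    return result;
  };
}

// Usage: wrap the handler that does the heavy work (file reads, image capture, ...).
const processImage = withMemoryLog('POST /capture', (buf) => buf.length);
processImage(Buffer.alloc(1024));
```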

    For refactoring, the goal is to restructure the code while impacting existing features as little as possible. One enhancement is code splitting: instead of one monolithic component, we split it into smaller components, so a component is only loaded and rendered when it is actually needed. The same applies to state: if a piece of state is required only by a child, we move that state down into the child, so that updating it re-renders only the child instead of the whole parent. The features stay the same, but we reduce re-rendering and end up with a cleaner, refactored code base.
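
Code splitting ultimately rests on dynamic `import()`, the mechanism tools like Webpack and `React.lazy` build on. A framework-free sketch, using an inline `data:` URL module so it stays self-contained (a real app would import a file path):

```javascript
// Sketch: load a heavy module only on first use instead of at startup.
let heavy = null;

async function loadChartModule() {
  if (!heavy) {
    // A real app would do: heavy = await import('./charting.js') (hypothetical path).
    // An inline data: URL module keeps this sketch self-contained.
    heavy = await import('data:text/javascript,export const render=()=>42');
  }
  return heavy; // cached after the first load
}

loadChartModule().then((m) => console.log(m.render()));
```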

    We can add authentication to the APIs. There are several mechanisms for this, such as JWT tokens and API keys. On every API we can apply a middleware that checks whether the supplied token is valid; we mostly use JWT on our platform. There are mainly two kinds of APIs: authenticated and non-authenticated. Non-authenticated APIs serve open pages; for example, a checkout page does not require the user to be logged in when it loads, so no authentication is needed there. But after login, every API that returns user data needs authentication. For those APIs the middleware runs before the handler, reads the token sent as a bearer token in the header of every request, decodes it, and verifies it. If the token is valid, we return a 200 response with the data the client requested; if not, we return 401 Unauthorized. For the unauthenticated APIs we can also add IP restrictions: if the same API is hit more than a certain number of times from one source, which suggests an attack on our platform, we can block that caller for a while.

    On AWS we use EC2 instances for the backend, and on top of them a Kubernetes cluster running Docker containers. We create a Dockerfile for the service, and in the deployment configuration we define the minimum capacity, say two small-CPU instances to start with. As load rises past a threshold, the autoscaling logic creates a new instance, so a third instance comes up; as load falls again, that third instance is gradually drained and shut down. In front of the instances we put a load balancer, which routes requests so that no single instance is overloaded. And if one instance crashes under load, Kubernetes automatically creates a replacement, which gives us dynamic scaling. Based on our traffic patterns we can also scale down on a schedule: if nobody uses the platform between, say, 2 AM and 4 AM, we can shrink the deployment during that window and let the servers rest in a low-power mode, so very little CPU and memory is consumed.
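
On Kubernetes, the scale-out/scale-in behaviour described here is usually expressed declaratively as a HorizontalPodAutoscaler rather than a hand-rolled script. A sketch with hypothetical names and thresholds:

```yaml
# Sketch (hypothetical names): a HorizontalPodAutoscaler that adds replicas
# as CPU load rises and removes them as it falls, as described above.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-server-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-server          # the Docker-based deployment
  minReplicas: 2              # the "two small instances" baseline
  maxReplicas: 3              # third replica created under load
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out past ~70% average CPU
```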

    I don't know about this one.

    I'm not exactly sure, but I think the line with the split should use a colon rather than an equals sign. I'm not certain, but that's where I think the mistake would be.

    For a microservice architecture, we can create small EC2 instances for each part of the system. For communication between them there are several microservice communication tools; AWS has its own messaging services that let instances talk to each other, and beyond that we can use RabbitMQ. With RabbitMQ, as soon as one service completes its step, the logic hands the next step off to another service. In front of everything we put a reverse proxy and then a load balancer, which covers security and any further routing we need. Behind that sits a main request-handler service, which is connected to the individual microservices. If a request only needs process A, the request handler forwards it to service A, gets the response, and returns the answer. If a request needs both processes A and B, the request handler passes it to service A; A does not return to the handler but forwards the request directly to service B, and B completes the work and gives the result back to the request handler. To manage all of this, when the request handler receives a request it saves it in the database and creates a small ID. With that ID we can track which service is currently processing the request and what its status is, and every service updates the status in the database. If the microservice call is synchronous, we hold the request and return the answer directly. If it is asynchronous, the request handler returns the ID immediately and completes the process in the background; when the user wants to see the result, they fetch it from our server using that request ID. We can even create one dedicated microservice just for fetching results, which takes care of authentication, and when someone hits it with a valid request ID it returns the proper response. That is how we can do it with a microservice architecture.

    To secure the APIs we implement authentication and authorization. On the server side we implement JWT: we generate an access token with a short lifetime and a refresh token with a longer lifetime. When a user logs in, both tokens are issued and returned to the client. The access token is kept in client-side state, in a global store such as Redux or context, and the refresh token can be saved in local storage. The access token is sent with every API request, and on the Node.js side it is verified; only if it is valid does the request proceed, otherwise the server responds with 401. On the frontend, if the access token has expired, we use the refresh token to ask for a new access token; if the refresh token is still within its validity period, the server issues a new access token, gives it back to the client, and the client continues its session smoothly. If the refresh token has also expired, we log the user out, wipe the local storage so the tokens are removed, and make sure the user lands on the landing page to log in again. On top of that we can use a rate limiter: if the same IP hits the server more than a certain number of times, we can assume someone is trying to scrape or attack the platform and might harm it, so we restrict that client to a certain number of requests. That is how we can build secure API communication.
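
The rate-limiting idea at the end can be sketched as a fixed-window counter per IP. The window size and quota below are hypothetical:

```javascript
// Sketch: fixed-window rate limiting per IP. Quota and window are hypothetical.
const WINDOW_MS = 60_000; // 1-minute window
const MAX_HITS = 100;     // allowed requests per window
const hits = new Map();   // ip -> { count, windowStart }

function allowRequest(ip, now = Date.now()) {
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now }); // start a new window for this IP
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_HITS; // block once the window's quota is spent
}

// 100 requests pass; the 101st inside the same window is rejected.
for (let i = 0; i < 100; i++) allowRequest('203.0.113.5', 0);
console.log(allowRequest('203.0.113.5', 1)); // false
```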

    Nothing much to flag here; it looks fine to me. I don't think there is an issue.

    Same with this one.