Vetted Talent

Amit Kumar

With 13+ years of experience in software design, development, and leadership, I am looking to apply my technical and management skills to innovative projects and to lead technology teams to success.
  • Role

    Sr. Architect

  • Years of Experience

    14 years

Skillsets

  • Software Architecture - 13 Years
  • MySQL
  • Architecture Design
  • REST API
  • Kafka
  • microservice architecture
  • Java - 10 Years
  • Amazon AWS
  • Azure DevOps
  • Golang

Vetted For

15 Skills
  • Roles & Skills
  • Results
  • Details
  • Staff Software Engineer - Payments Economics (AI Screening)
  • 68%
  • Skills assessed: Collaboration, Communication, Payments systems, service-to-service communication, Stakeholder Management, Architectural Patterns, Architecture, Coding, HLD, LLD, Problem Solving, Product Strategy, SOA, Team Handling, Technical Management
  • Score: 61/90

Professional Summary

14 Years
  • Jan, 2024 - Present (1 yr 8 months)

    Sr Staff Engineer

    Elpha Secure
  • Jan, 2020 - Dec, 2023 (3 yr 11 months)

    Software Architect

    Reliance Jio Digital Health
  • Jan, 2018 - Dec, 2020 (2 yr 11 months)

    Senior Software Engineer

    Uber
  • Jan, 2014 - Dec, 2015 (1 yr 11 months)

    Software Architect

    Picsean Media
  • Jan, 2015 - Dec, 2017 (2 yr 11 months)

    Lead Fullstack SDE

    Bankbazaar
  • Jan, 2017 - Dec, 2018 (1 yr 11 months)

    Senior Software Development Engineer

    Microsoft
  • Jan, 2012 - Dec, 2014 (2 yr 11 months)

    Lead Software Engineer

    Samsung
  • Jan, 2010 - Dec, 2011 (1 yr 11 months)

    SDE Intern

    Intel India Pvt Ltd

Applications & Tools Known

  • Kafka
  • Github
  • Hive
  • Spark
  • HDFS
  • Docker
  • Golang
  • Java
  • Jenkins
  • Amazon S3
  • MS Azure
  • Redis Cache
  • Azure DevOps
  • Azure Datafactory
  • Azure Synapse
  • PowerBI
  • Git
  • JSON
  • YAML
  • Azure
  • Spring Boot
  • Hibernate
  • MySQL
  • AWS
  • Bitbucket
  • ffmpeg
  • Valgrind
  • Coverity
  • Perl
  • Unix
  • HTML
  • CSS
  • Cassandra

Work History

14 Years

Sr Staff Engineer

Elpha Secure
Jan, 2024 - Present (1 yr 8 months)
    Designed and built event-driven microservices, eliminating monolithic dependencies and enhancing system performance. Architecturally improved existing systems to address scalability and availability challenges. Identified performance bottlenecks, implementing innovative solutions to enhance efficiency. Mentored team members, sharing technical expertise and fostering a culture of innovation by encouraging ideas and suggestions for improvement. Designed and implemented custom APIs to enable seamless data exchange between disparate systems and partners, improving functionality and driving scalable business integration. Led cross-functional teams to deliver robust software solutions, driving architectural enhancements. Established engineering processes that increased efficiency, reduced costs, and minimized manual efforts.

Software Architect

Reliance Jio Digital Health
Jan, 2020 - Dec, 2023 (3 yr 11 months)
    Designed system architectures and delivered high-quality engineering products, components, and platforms to achieve strategic goals. Collaborated with management and stakeholders to align engineering efforts with business goals, defining strategic roadmaps, consolidating requirements, and translating them into actionable engineering tasks. Built and mentored technical teams to meet objectives, fostering growth and skill development. Facilitated cross-team communication to manage dependencies, resolve conflicts, and unblock teams. Made key decisions on tech stacks, assigned teams to projects, and drove performance improvements.

Senior Software Engineer

Uber
Jan, 2018 - Dec, 2020 (2 yr 11 months)
    Designed and built a finance data accounting platform for all Uber businesses. Developed security and audit compliance features for the RCP platform, enhanced its performance, and upgraded the platform to onboard different Uber businesses worldwide. Collaborated with team members to identify issues, implement effective resolutions, and apply best practices, improving system stability and performance. Conducted code reviews and guided engineers to uphold high coding standards, improve code quality, and support their career growth.

Senior Software Development Engineer

Microsoft
Jan, 2017 - Dec, 2018 (1 yr 11 months)
    Collaborated with the Microsoft Office 365 team to enhance a cross-platform graphics rendering platform supporting Android, iOS, macOS, Windows, and Office Online. Developed multiple features, including the Safe-Link security feature, improving platform security and user protection. Mentored junior developers, enhancing their technical skills and fostering a collaborative work environment.

Lead Fullstack SDE

Bankbazaar
Jan, 2015 - Dec, 2017 (2 yr 11 months)
    Optimized application performance by writing efficient database queries and streamlining code, enhancing system speed and responsiveness. Improved system reliability by proactively identifying and resolving potential issues during development stages. Collaborated with cross-functional teams including project managers, designers, and testers to ensure successful product launches.

Software Architect

Picsean Media
Jan, 2014 - Dec, 2015 (1 yr 11 months)
    Designed and developed a content digitization framework, improving data accessibility and operational efficiency. Communicated software architecture strategies to senior leadership and third-party stakeholders, aligning technical decisions with business goals.

Lead Software Engineer

Samsung
Jan, 2012 - Dec, 2014 (2 yr 11 months)
    Designed and developed the Tizen platform multimedia framework, introducing key features like the audio pool and smart framework. Contributed to architectural decisions that improved scalability, maintainability, and security. Built high-performance applications using modern technologies and best practices, while collaborating with stakeholders to gather requirements and drive successful project outcomes.

SDE Intern

Intel India Pvt Ltd
Jan, 2010 - Dec, 2011 (1 yr 11 months)
    Developed a backend tool for unified access to distributed databases. Collaborated with design teams to enhance the front-end experience, improved performance by resolving issues and upgrading interfaces, and delivered tailored software solutions for consumers.

Achievements

  • Won best designer awards from IIT-Delhi
  • Won best designer awards from IIT-Kanpur in Robotics competitions
  • Won the 1st prize in Robotics competition at IIT-Guwahati
  • Won consolation prizes from IIT-Roorkee, IIT-Kanpur and IIT-BHU in different Robotic competitions

Major Projects

5 Projects

Health Data Platform

Reliance Jio Digital Health
    Built to scale to 1 billion users; it currently supports health data for 20 million users. A completely event-driven architecture platform with zero-data-loss capability and full fault tolerance. Data extraction from user health records (PDF/images) and reconciliation of the extracted data. Single source of truth for all user health data. Highly secure platform with end-to-end encryption of the data. A health data marketplace to support health research and analytics. Compliant with FHIR, HL7, HIPAA, and ABDM.

Health Data Provider OnBoarding Platform

    To onboard any health data provider within 24/48 hours, depending on the health data type. To onboard all pharmacies, clinics, hospitals, and labs available in the country. To sync all health data from data providers. To enable providers' business workflows from the system, such as searching/booking consultations and medicines.

Jio Health Wellness Platform

    Platform through which health wellness programs are configured for users and corporates. Tracks users' health vitals, activities, and nutrition intake. Capability to set individual user health goals and reconcile them against the enrolled program. Nudges users about their health and scheduled/missed consultations. Provides specialised doctor consultations for each program to help enrolled users achieve their health goals.

Corporate Health Wellness Platform

Reliance Jio Digital Health
Jun, 2022 - Oct, 2023 (1 yr 4 months)

    Vision:

    Platform through which health wellness programs are configured for users and corporates.

    Track users' health vitals, activities, and nutrition intake.

    Capability to set individual user health goals and reconcile them against the enrolled program.

    Nudge users about their health conditions/performance and scheduled/missed consultations.

    Provide specialised doctor consultations for each program to help enrolled users achieve their health goals.

    Complexities Identified:

    User nutrition intake computation is a complex problem to solve, with many SME dependencies.

    Specialised doctor consultation support for each program is another complex problem to solve.

Provider OnBoarding Platform

Reliance Jio Digital Health
Dec, 2021 - May, 2022 (5 months)

    Health Data Provider OnBoarding Platform Vision:

    To onboard any health data provider within 24/48 hours, depending on the health data type.

    To onboard all pharmacies, clinics, hospitals, and labs available in the country.

    To sync all health data from data providers.

    To enable providers' business workflows from the system, such as searching/booking consultations and medicines.

    Complexities Identified:

    Integration of providers' business workflows into our system.

    Integration pipeline to support health data ingestion from different health data providers, and data sharing/migration between providers.

Education

  • M.Tech, Computer Science

    Birla Institute of Technology, Ranchi (2011)
  • B.Tech, Computer Science and Engineering

    UP Technical University (2009)

AI-interview Questions & Answers

Hi, I'm Amit Kumar, and I have 13+ years of experience in software development and design. Most recently I have been with Reliance Jio Digital Health for more than three years, where I designed three major platforms. My key responsibilities are high-level system design, interacting with stakeholders to gather requirements and refine them into engineering tasks and features, and mentoring engineers, including software engineers and senior software engineers, to help them grow. I also work on LLD and code reviews. On the tech side, I work with Java, Golang, and microservices. The major platforms I designed from scratch use a completely event-driven architecture; we use Kafka heavily for asynchronous communication between services and to scale the services and systems. We also use MongoDB as a NoSQL store, because the data does not have a fixed schema and is quite variable. So overall, my background is 13+ years of experience with strong system design knowledge, having designed and completed three major platforms from scratch at my previous company. That's all about my background.
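The event-driven decoupling described above can be illustrated with a toy sketch. This is not Kafka's API — just a minimal in-process publish/subscribe bus (all names here are hypothetical) showing how producers and consumers stay decoupled behind topics:

```python
from collections import defaultdict

class EventBus:
    """Toy in-process stand-in for a broker like Kafka: producers publish
    to a topic, subscribers react to it, and neither knows about the other.
    (A real broker would also give durability, partitioning, and replay.)"""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # deliver the event to every handler registered for this topic
        for handler in self.subscribers[topic]:
            handler(event)

bus = EventBus()
received = []
bus.subscribe("health.record.created", received.append)
bus.publish("health.record.created", {"record_id": "r-1"})
```

The point of the pattern is that a new consumer can be added by subscribing to the topic, without touching the producer.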

To predict payment fraud in real time, we can build a data streaming pipeline; Apache Flink is a good framework for identifying and capturing payment-specific event patterns. We need real-time processing of payment information at scale, applying stream processing to identify fraud in near real time based on pattern matching or known fraud signals. The processed data can also be used to train ML/AI models that predict fraud. I am assuming here that fraud detection should happen in near real time, rather than identifying what happened only after the fraud is done; the best system captures suspicious activity at an early stage so that immediate action can be taken. On top of the payment stream, we can layer user profiling data, matching transactions against the user's payment and credit history as a reference, so that any abnormality in the incoming payment data relative to that history can be flagged as potential fraud.
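A streaming rule of the kind described would normally be a stateful operator in Flink (a JVM framework). Purely as an illustration, here is a minimal Python sketch of one such pattern check — a velocity rule that flags a burst of transactions on a single card; the class name and thresholds are hypothetical:

```python
from collections import deque

class VelocityFraudDetector:
    """Flag a card when it makes more than max_txns transactions
    inside a sliding window of window_secs seconds."""

    def __init__(self, max_txns=3, window_secs=60):
        self.max_txns = max_txns
        self.window_secs = window_secs
        self.history = {}  # card_id -> deque of recent timestamps

    def observe(self, card_id, timestamp):
        q = self.history.setdefault(card_id, deque())
        # evict events that have fallen out of the sliding window
        while q and timestamp - q[0] > self.window_secs:
            q.popleft()
        q.append(timestamp)
        return len(q) > self.max_txns  # True => suspicious burst

detector = VelocityFraudDetector(max_txns=3, window_secs=60)
# four transactions on the same card within ten seconds: the fourth trips the rule
flags = [detector.observe("card-1", t) for t in (0, 3, 6, 9)]
```

In a real pipeline this per-key state would be managed by the stream processor, and flagged events would feed both an alerting path and the model-training dataset.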

How can you design a secure communication protocol between microservices? Between microservices we can use secure connections such as HTTPS; that is the first level of security. On top of that, for an additional layer, we can encrypt the data itself, so encrypted payloads travel over the HTTPS connection. The encryption keys should be kept in secure key storage — Azure Key Vault is one of the best solutions for that. HTTPS gives us TLS with SSL certificates, which secures the connection itself. End-to-end encryption should be applied not only between microservices but also for any client-facing application sending data to the backend services, with the encryption and decryption keys stored in a secure cloud vault. That is how we can make communication between services secure in a payment ecosystem.
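As a rough illustration of application-level message protection layered on top of HTTPS: the stdlib sketch below signs each payload with a shared key so the receiving service can verify integrity and origin. A real deployment would use TLS (or mTLS) for transport and a proper encryption scheme such as AES-GCM via a crypto library, with keys fetched from a vault; the key value and field names here are made up:

```python
import hashlib
import hmac
import json

def sign_message(payload: dict, shared_key: bytes) -> str:
    # canonical JSON so both services compute the MAC over identical bytes
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(shared_key, body, hashlib.sha256).hexdigest()

def verify_message(payload: dict, signature: str, shared_key: bytes) -> bool:
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(sign_message(payload, shared_key), signature)

key = b"key-loaded-from-a-vault"  # in practice fetched from e.g. Azure Key Vault
msg = {"order_id": "A123", "amount_cents": 4999}
sig = sign_message(msg, key)
```

A tampered payload (say, a changed amount) fails verification, so the receiver can reject it before processing.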

In what ways would you manage team knowledge sharing, branching, and consistent code quality across various payment systems? I would definitely set up a CI/CD pipeline, with continuous integration on the build side and continuous deployment. For consistent code quality, we can define standards: a proper naming convention, clear protocols for writing functions, variables, and methods, and adequate commenting so the code is readable and each function's purpose is documented. Code should be modular rather than living in very long files; common components should be extracted and exported as shared libraries, and if a function is already exposed through a standard library we should use it rather than rewrite it, which also speeds up development. We should avoid code repetition and write proper test cases, targeting code coverage above 80%. We can share these standards across the team to keep code quality consistent. I would also use a monorepo, so that all code and services live in a single repository with their dependencies rather than hanging separately; shared components are then easily reused across teams, which is one of the best ways to maintain code consistency.

What steps would you take to ensure a new feature in a payments product aligns with both technical and business objectives? First, get a proper sign-off from the product team and the stakeholders on the expected outcome of the feature, with a clear test plan describing the expected output. On the business side, we reconcile the delivered outcome against that objective. On the technical side, we make sure proper unit tests, integration tests, and UAT are in place; if there is a testing team, the stakeholder-defined scenarios should be validated by them so any unexpected output is identified. We should also have proper infrastructure for deploying the product to production through a CI/CD pipeline: as soon as code is submitted, an automated build — Jenkins is one option — compiles it, runs the unit and integration test cases, and checks that the expected outputs are met. Only then does the code go to production. These are the steps I would take.

Imagine consolidating several payment services into a unified platform. To minimize risk, every service should have proper authentication and authorization in place for all service-to-service communication. We should never log PII: payment-specific information, passwords, and usernames must not appear in the console, in logs, or in configuration, and PII should not be easily trackable or guessable in the system. All communication should follow secure, encrypted protocols such as HTTPS, with encryption between the services, and we should never log the encryption keys or secure keys anywhere. Everything should sit behind a secure API gateway: no API endpoint should be exposed directly from the services, APIs and requests should be properly whitelisted, and proper rate limiting should be applied. These are the steps I would follow.
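One of the gateway controls mentioned above, rate limiting, is commonly implemented as a token bucket. A minimal sketch, with illustrative capacity and refill parameters (a production gateway would of course use its built-in limiter):

```python
class TokenBucket:
    """Token-bucket rate limiter: at most `capacity` tokens, refilled at
    `refill_per_sec` tokens per second; each allowed request costs one token."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # top up tokens for the time elapsed since the previous call
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=2, refill_per_sec=1.0)
```

Bursts up to the capacity pass immediately; sustained traffic is capped at the refill rate, protecting the downstream payment services.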

Can you spot the technical error in the program that would prevent it from correctly determining whether a number is prime — for example, when the number is 2?

Here the logger is initialized as null, and nothing guarantees it gets set before processTransaction runs — we never pass a logger in, and the code just assumes it exists. So when the code executes, the null logger will throw an exception; it should be initialized properly before use.
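The fix described — initialize the logger instead of leaving it null — might look like this sketch. The Java original would throw a NullPointerException; in Python the analogue is an AttributeError on None. Class and method names here are hypothetical:

```python
import logging

class PaymentProcessor:
    def __init__(self, logger=None):
        # The bug pattern from the question: leaving the logger field null/None
        # and calling it later blows up on the first use. The fix is to fall
        # back to a real logger up front instead of assuming one was injected.
        self.logger = logger if logger is not None else logging.getLogger("payments")

    def process_transaction(self, amount_cents: int) -> bool:
        self.logger.debug("processing %d cents", amount_cents)
        return amount_cents > 0
```

With the fallback in place, constructing the processor without a logger no longer crashes on the first logging call.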

To dynamically adapt payment processing thresholds based on real-time analytics, we definitely need a proper real-time streaming data pipeline in place. I would suggest Apache Flink, one of the best real-time data processing frameworks; it is distributed and works well at scale. With Flink we can set conditions, capture and analyze patterns in real time, and take action on patterns that have already been defined — and those patterns can change over time. The thresholds derived from the real-time analytics data are then applied inside the Flink processing pipeline itself, so payment processing adapts as the analytics change.
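Again, the real pipeline would live in Flink; purely as an illustration of an adaptive threshold, here is a Python sketch that flags amounts far above a rolling mean — the window size, warmup count, and k multiplier are all assumptions:

```python
from collections import deque
import statistics

class AdaptiveThreshold:
    """Flag an amount that exceeds the rolling mean of recent transactions
    by more than k standard deviations; the threshold adapts as data arrives."""

    def __init__(self, window: int = 100, k: float = 3.0, warmup: int = 10):
        self.amounts = deque(maxlen=window)  # rolling window of recent amounts
        self.k = k
        self.warmup = warmup                 # don't flag until we have history

    def check(self, amount: float) -> bool:
        flagged = False
        if len(self.amounts) >= self.warmup:
            mean = statistics.fmean(self.amounts)
            stdev = statistics.pstdev(self.amounts)
            flagged = amount > mean + self.k * stdev
        self.amounts.append(amount)  # the new amount updates the baseline
        return flagged
```

Because the baseline is recomputed over the window, the effective threshold moves with recent traffic rather than staying fixed.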

To handle backward compatibility, we can put API versioning in place. We can also configure proxies, or the API gateway itself, so that in a failure scenario on the new or updated APIs the request is routed to the older APIs. So: versioning, proxies, and a gateway configured to route failed requests to the older system or the older API. That is how we can handle backward compatibility.
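The fallback routing idea can be sketched as follows: try the new handler first and route to the legacy one when it rejects the request. This is a toy illustration — in practice the gateway or proxy configuration does this, and the handlers and field names here are hypothetical:

```python
def handle_v2(request: dict) -> dict:
    # the newer API is stricter: it requires an explicit currency field
    if "currency" not in request:
        raise ValueError("v2 requires a 'currency' field")
    return {"version": 2, "status": "ok"}

def handle_v1(request: dict) -> dict:
    # legacy behaviour: no currency field required
    return {"version": 1, "status": "ok"}

def route(request: dict) -> dict:
    """Gateway-style routing: try the new handler first and fall back
    to the legacy one when the new handler rejects the request."""
    try:
        return handle_v2(request)
    except ValueError:
        return handle_v1(request)
```

Old clients that don't send the new field keep working against v1 behaviour while new clients get v2, which is the essence of the backward-compatibility strategy described.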

What would be your design? I would suggest a source-of-truth system: as soon as data enters the platform, we log it there. After processing, the final transactions go into a separate final-transaction store. We can then build a platform that reconciles the two on a schedule, using buckets — per minute, hourly, daily, weekly, or monthly — and compares the final transaction storage against the source-of-truth storage bucket by bucket. That way we can implement a scalable payment reconciliation system.
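The bucketed reconciliation described above can be sketched as: sum both stores per time bucket and report the buckets that disagree. Function names and the hourly bucket size are illustrative; a real system would shard this work and page through each store:

```python
from collections import defaultdict

def bucket_totals(transactions, bucket_secs=3600):
    """Sum transaction amounts per time bucket (hourly by default).
    `transactions` is an iterable of (unix_timestamp, amount_cents) pairs."""
    totals = defaultdict(int)
    for ts, amount_cents in transactions:
        totals[ts // bucket_secs] += amount_cents
    return dict(totals)

def reconcile(source_of_truth, final_store, bucket_secs=3600):
    """Return {bucket: (truth_total, final_total)} for every bucket
    where the two stores disagree; an empty dict means fully reconciled."""
    truth = bucket_totals(source_of_truth, bucket_secs)
    final = bucket_totals(final_store, bucket_secs)
    return {b: (truth.get(b, 0), final.get(b, 0))
            for b in truth.keys() | final.keys()
            if truth.get(b, 0) != final.get(b, 0)}
```

Comparing per-bucket totals first keeps the check cheap; only mismatched buckets then need a transaction-level drill-down.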