Vetted Talent

Satyendra Kumar

Dedicated Backend Software Engineer with 6.8 years of experience in developing robust and scalable backend solutions. Eager to leverage expertise in software architecture, system design, and problem-solving skills to contribute to innovative projects and drive impactful results.
  • Role

    SDE2 - Tech Lead

  • Years of Experience

    7 years

Skillsets

  • CI/CD
  • Spring Data JPA
  • S3
  • Ruby on Rails
  • Postman
  • PostgreSQL
  • MySQL
  • MongoDB
  • Kubernetes
  • IntelliJ
  • Elasticsearch
  • Docker
  • Java - 7 Years
  • AWS
  • Object-Oriented Programming - 8 Years
  • Distributed Systems - 5 Years
  • Kafka
  • TestNG
  • JUnit
  • Hibernate
  • Spring Boot
  • System Design - 6 Years
  • Python

Vetted For

12 Skills
  • Software Engineer II - ML Platform (AI Screening)
  • 53%
  • Skills assessed: ArgoCD, CI/CD automation, Postgres, Apache Spark, AWS, Docker, Java, Kubernetes, Machine Learning, Problem Solving Attitude, Python, Redis
  • Score: 48/90

Professional Summary

7 Years
  • Jun, 2022 - Present (3 yr 3 months)

    SDE2 - Tech Lead

    Flexcar Pvt Ltd
  • Mar, 2021 - Jun, 2022 (1 yr 3 months)

    Senior Software Engineer

    Paytm Money
  • Oct, 2019 - Feb, 2021 (1 yr 4 months)

    Senior Analyst (Software Engineer)

    Goldman Sachs
  • Jun, 2017 - Sep, 2019 (2 yr 3 months)

    Software Engineer

    Samsung Research and Development

Applications & Tools Known

  • Java
  • Spring Boot
  • Hibernate
  • JPA
  • Postgres
  • MySQL
  • MongoDB
  • Gradle
  • Maven
  • CI/CD
  • IntelliJ
  • Android Studio
  • MySQL Workbench
  • DBeaver
  • Flyway
  • Postman
  • AWS
  • S3
  • Docker
  • Kubernetes
  • Lens

Work History

7 Years

SDE2 - Tech Lead

Flexcar Pvt Ltd
Jun, 2022 - Present (3 yr 3 months)
    Joined as a founding engineer to develop a car rental product; led inventory system development, managed the migration of microservices, and oversaw development of the defect management system.

Senior Software Engineer

Paytm Money
Mar, 2021 - Jun, 2022 (1 yr 3 months)
    Designed and developed end-to-end solutions for a mutual funds platform, and developed EdTech software supporting reading, listening, and live webinars.

Senior Analyst (Software Engineer)

Goldman Sachs
Oct, 2019 - Feb, 2021 (1 yr 4 months)
    Managed data for global entity data management ensuring compliance, optimized software for regulatory processes, developed a streaming data pipeline.

Software Engineer

Samsung Research and Development
Jun, 2017 - Sep, 2019 (2 yr 3 months)
    Developed camera features for Samsung devices, improved camera performance during OS upgrades, collaborated with teams for integration.

Achievements

  • Secured 1st rank in 12th standard in college
  • Secured 18,771st rank in JEE-Main 2013
  • Business trip to South Korea to co-work with Samsung HQ

Education

  • Bachelor of Technology

    National Institute of Technology Jamshedpur (2017)

AI-interview Questions & Answers

Could you please give a brief introduction about yourself so I can understand more about your background?

I am Satyendra Kumar, and I work as a Software Engineer 2 at Flexcar. I have around 7 years of total experience in backend development. Flexcar is a long-term car rental company: customers take a car on a subscription basis, paying weekly or monthly, and can swap or return the car at any time with no extra cost. That is the main business. I initially worked on the inventory management system for cars. Since we deal with second-hand cars, we manage how vehicles are onboarded into the system, how each car's status is tracked and changed, and how every activity of a car is recorded; there are multiple things we manage on the inventory side. Next I worked on the defect tracker and checklist features. The defect tracker records all defects of a vehicle, such as a required oil change, a scratch, or a punctured tire. The checklist is used whenever a car goes out to or comes back from a customer: we inspect the entire car, find and log all defects in the system, calculate how much to charge the customer for damage they caused, and accordingly define the next available date for that car.

In the last year I also led my team of four or five members on projects like the checklist and the defect tracker. I handled the design, gathered requirements from the product manager, wrote code, and managed the work. Overall, I have solid experience in backend development: how to design and plan, how to break a bigger problem into smaller ones that can be delegated to juniors or colleagues, and how to design a high-level system from scratch so that it is scalable, maintainable, and low-latency. Thank you.

How would you ensure atomic transactions across microservices communicating via Apache Kafka in an AWS environment?

This is essentially a distributed-systems problem, and Kafka is built for distributed systems. Suppose multiple microservices, or multiple pods of one service, are each pushing messages into Kafka. We create a topic, and Kafka maintains partitions for each topic, sized based on traffic. If producers do not supply a partition ID or key, messages go to any partition of the topic, and consumers in a consumer group are spread across partitions (each partition has at most one consumer within a group). When we talk about atomic transactions, strict ordering at that level is difficult, so we would route all related messages through one partition: we pass a partition key, Kafka identifies that key, and every message with the same key is pushed to the same single partition, stored in the order it was produced.

When the consumer reads from that one partition, everything arrives in order. So by keeping the related messages in one place and maintaining their order, we can achieve transaction-like, atomic processing. Thank you.
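The "same key, same partition" routing described above can be sketched without the real Kafka client. This is a minimal stand-in, not Kafka's actual partitioner (which hashes keys with murmur2); it uses `hashCode()` only to illustrate why all events sharing a key stay in order on one partition. The class, topic, and event names are illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch (not the real Kafka client) of keyed partitioning:
// every message with the same key lands on the same partition,
// preserving per-key ordering for the consumer.
public class KeyedPartitioner {
    private final int numPartitions;
    private final Map<Integer, List<String>> partitions = new HashMap<>();

    public KeyedPartitioner(int numPartitions) {
        this.numPartitions = numPartitions;
        for (int p = 0; p < numPartitions; p++) partitions.put(p, new ArrayList<>());
    }

    // Kafka's default partitioner hashes the key; hashCode() stands in here.
    public int send(String key, String value) {
        int partition = Math.floorMod(key.hashCode(), numPartitions);
        partitions.get(partition).add(value);
        return partition;
    }

    public List<String> partition(int p) { return partitions.get(p); }

    public static void main(String[] args) {
        KeyedPartitioner producer = new KeyedPartitioner(3);
        int p1 = producer.send("order-42", "created");
        int p2 = producer.send("order-42", "paid");
        int p3 = producer.send("order-42", "shipped");
        // All three events share the key, so they land on one partition, in order.
        System.out.println(p1 == p2 && p2 == p3);
        System.out.println(producer.partition(p1));
    }
}
```

With a real producer the same effect comes from setting the record key; consumers reading that partition then see the events in production order.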

What are some best practices you would follow to secure sensitive data in a Dockerized Python application on AWS?

I do not have much experience with Python, but in general, if we have sensitive data such as PII, we should not push it into the Docker image itself. It should be stored somewhere else. AWS provides a service that stores key-value pairs in a secure way (I don't remember its exact name), and we would store all the sensitive data, such as API keys or user credentials, there. So AWS provides a tool for holding secure information like credentials, and using it will help us.
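One common way to keep secrets out of the image, hinted at above, is to inject them into the container at runtime as environment variables (populated by the orchestrator, for example from a secrets store). This is a minimal sketch; the variable name `FLEXCAR_DB_PASSWORD` and the fallback marker are illustrative assumptions, not part of any real deployment.

```java
// Sketch: read a secret from the environment at runtime instead of
// hard-coding it in the Docker image. The variable name below is a
// hypothetical example; in production it would be injected by the
// orchestrator from a secrets store.
public class SecretLookup {
    static String secretOr(String name, String fallback) {
        String v = System.getenv(name);
        return (v == null || v.isEmpty()) ? fallback : v;
    }

    public static void main(String[] args) {
        // Never print the secret itself; log only whether it was found.
        String secret = secretOr("FLEXCAR_DB_PASSWORD", "<missing>");
        System.out.println(secret.equals("<missing>") ? "secret not set" : "secret loaded");
    }
}
```

Run without the variable set, this reports the secret as missing, which is also a useful startup check before the application touches the database.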

How would you leverage MLflow in a CI/CD pipeline to automate the deployment of machine learning models to AWS SageMaker?

MLflow — I haven't worked on this. Sorry.

What techniques would you use to monitor and troubleshoot a Java service that is experiencing intermittent latency spikes on Kubernetes?

There are multiple techniques for monitoring a Java service. With intermittent latency, we would first check whether many threads are running, so that CPU and memory are busy serving other requests while new requests sit waiting; in that case a new task takes longer to process and consumes memory, and if memory fills up, the pod may restart. So first we can take a heap dump and analyze CPU usage to see where the waiting is happening. We should also cross-check database connections and third-party calls: are they taking too much time, and how many requests are queued? If calls to the database or to third parties (anything outside the service) are slow, or if some part of the program is looping and memory is not being released properly, those would be the major causes. If requests queue up faster than we complete them, we should increase the number of pods or their memory, reduce the latency of the database and third-party calls, or make those calls asynchronous. The key things to analyze are the request queue, memory usage, and CPU usage, using dumps taken in the Kubernetes pod. We can also record when each request arrives and completes, for example with a logging interceptor, to see the time taken, and enable query logging on the database to see which queries are running slowly. Thank you.
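The "log when a request comes in and when it completes" idea above can be sketched as a tiny timing wrapper that flags slow calls. This is illustrative, not a real interceptor framework; the 200 ms threshold and the endpoint name are assumptions.

```java
import java.util.function.Supplier;

// Sketch of a latency-logging wrapper: times any handler and flags
// calls slower than a threshold, so intermittent spikes show up in logs.
public class LatencyLogger {
    static final long SLOW_MS = 200; // illustrative threshold

    static <T> T timed(String name, Supplier<T> handler) {
        long start = System.nanoTime();
        try {
            return handler.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (elapsedMs >= SLOW_MS) {
                System.out.println("SLOW " + name + " took " + elapsedMs + " ms");
            } else {
                System.out.println("ok " + name);
            }
        }
    }

    public static void main(String[] args) {
        // Hypothetical endpoint; a fast handler logs "ok", a slow one "SLOW".
        String result = timed("GET /cars", () -> "inventory payload");
        System.out.println(result);
    }
}
```

In a Spring service the same measurement would usually live in a servlet filter or interceptor so every endpoint is covered without touching handler code.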

What is a performant way to serialize and deserialize large datasets in a Java-based Apache Spark application?

I haven't worked on Spark, so I can't answer this.

In this Kubernetes YAML configuration file there is a subtle mistake that could result in a service disruption. Can you point out what's wrong and explain the potential impact on the service?

Reading it: apiVersion v1, kind Service, metadata my-service, selector my-app, protocol TCP, port 80, a targetPort, and a nodePort. I believe we don't require the nodePort, because there may be multiple services, and nodePort is not compulsory here. If you pin a fixed nodePort, it becomes difficult to scale this out. The targetPort is fine, but I think we don't need the nodePort here.
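The fix described above can be sketched as a minimal Service manifest with the fixed `nodePort` line removed. Since the original file isn't shown, the name, selector, and `targetPort` values below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # placeholder name
spec:
  type: NodePort
  selector:
    app: my-app             # placeholder selector
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080      # placeholder; the original value isn't shown
      # nodePort omitted on purpose: Kubernetes then auto-assigns a free
      # port from the node-port range, so a hard-coded port can't collide
      # with another service and block the deployment.
```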

Looking at the following Python function prototype, can you pinpoint the potential issue that might arise if multiple callers invoke my_class.add_item simultaneously? How do you recommend remedying this situation, considering concurrency?

With a prototype setup it will create multiple instances, and if the add_item method is called on shared state from multiple instances at the same time, the ordering may not be correct, or one caller can overwrite another's write; the result can end up in any order. So the problems are ordering and lost updates. In that case we can use synchronization in the add_item method: put the body of add_item, the critical section, inside a synchronized block, so that calls are handled one by one. Thank you.
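The synchronized-block fix described above can be sketched in Java (the answer's own terms). Many threads call `addItem` concurrently; the lock makes each add a critical section so no update is lost. The class and method names are illustrative stand-ins for the unseen prototype.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of guarding a shared list with a synchronized critical section:
// concurrent addItem calls are serialized, so every add is applied.
public class SafeInbox {
    private final List<String> items = new ArrayList<>();
    private final Object lock = new Object();

    public void addItem(String item) {
        synchronized (lock) {          // critical section: one thread at a time
            items.add(item);
        }
    }

    public int size() {
        synchronized (lock) { return items.size(); }
    }

    public static void main(String[] args) throws InterruptedException {
        SafeInbox inbox = new SafeInbox();
        int threads = 8, perThread = 1000;
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) inbox.addItem("item");
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println(inbox.size()); // 8 * 1000 adds, none lost
    }
}
```

Without the `synchronized` blocks, concurrent `ArrayList.add` calls can interleave and silently drop elements, which is exactly the lost-update problem the answer names.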

How would you optimize Redis for the low-latency, high-throughput operation needed by a real-time machine-learning inference service?

We can use an eviction strategy in Redis, like LRU, MRU, or LFU (least recently used, most recently used, least frequently used). With such a strategy, Redis holds only a bounded amount of hot data, and when it holds less data it can serve requests better and throughput increases. We also don't need to go to the real database every time; we first try to serve each request from Redis. If some request finds no data in Redis because the eviction policy, say LRU, removed it, that is fine: items that are not recently or frequently used are deleted from the cache automatically, the cache stays small, and we can serve faster that way.

Can you detail your strategy to optimize a Java application running in a Docker container for CPU and memory efficiency?

Yes, we can do it. First of all, how do we make the system take less CPU and less memory? Whatever task we have completed, the memory allocated for that task should be freed. The CPU is doing multiple tasks in parallel, and in a multithreading environment we handle multiple requests at a time in the same method, which keeps more memory blocked. One option, which I wouldn't generally suggest, is not to use multithreading: complete one request, then submit the next, entering the critical section each time; that uses less CPU and less memory. If we don't want to do that, then we have to improve the garbage-collector strategy in the Java application so that it can easily free memory based on unreferenced objects; when something is no longer used, that memory can be reclaimed. That, I think, is the way to improve it.
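A complementary, container-specific angle worth adding: modern JVMs are container-aware, so heap and thread-pool sizing can be tied to the container's limits with flags such as `-XX:MaxRAMPercentage=75.0` (heap as a fraction of the container memory limit) or an explicit `-XX:ActiveProcessorCount`; the values here are illustrative, not a recommendation. The snippet below just inspects what the JVM sees inside its container.

```java
// Sketch: print the resources the JVM believes it has. Inside a
// memory/CPU-limited container, these reflect the container limits,
// which is what flags like -XX:MaxRAMPercentage size the heap against.
public class ContainerFootprint {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxHeapMb = rt.maxMemory() / (1024 * 1024);
        int cpus = rt.availableProcessors();
        System.out.println("visible CPUs >= 1: " + (cpus >= 1));
        System.out.println("max heap > 0 MB: " + (maxHeapMb > 0));
    }
}
```

Comparing these numbers against the container's requested limits is a quick sanity check that the JVM is not sized for the whole host.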

What are some techniques to optimize your Postgres database for a large-scale, read-intensive machine-learning workload?

We can use a master-slave (primary-replica) architecture: the replica instances sync from the master and try to stay close to real time, although for this workload I don't think we need strictly real-time syncing between the two Postgres instances. We can create multiple replicas, each fetching from the master instance, and anyone who wants to read the data refers to a replica. Replicas are easy to read from because they are not busy taking updates. We can also pin replicas to consumers, saying this replica instance connects to this particular service or pod and that one to another, so each database instance serves a particular set of pods and performs better. We can also improve indexing: we have to understand which columns the queries mostly fetch by, and put indexes on them. Since writes are synced in the background on the primary and our priority is making reads fast, indexing the right columns properly makes the reads faster. Okay, thank you.
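The primary/replica split described above can be sketched as a small connection router: writes always go to the primary, reads rotate round-robin across replicas. The connection strings are illustrative placeholders, not real hosts, and real routing would also have to account for replication lag.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of read/write splitting: forWrite() always returns the primary,
// forRead() spreads load round-robin over the replicas.
public class ReplicaRouter {
    private final String primary;
    private final List<String> replicas;
    private final AtomicInteger next = new AtomicInteger();

    public ReplicaRouter(String primary, List<String> replicas) {
        this.primary = primary;
        this.replicas = replicas;
    }

    public String forWrite() { return primary; }

    public String forRead() {
        int i = Math.floorMod(next.getAndIncrement(), replicas.size());
        return replicas.get(i);
    }

    public static void main(String[] args) {
        // Placeholder DSNs for a primary and two read replicas.
        ReplicaRouter router = new ReplicaRouter(
            "postgres://primary:5432/app",
            List.of("postgres://replica-1:5432/app",
                    "postgres://replica-2:5432/app"));
        System.out.println(router.forWrite());
        System.out.println(router.forRead());
        System.out.println(router.forRead());
        System.out.println(router.forRead()); // wraps back to replica-1
    }
}
```

The `AtomicInteger` keeps the round-robin counter safe when many request threads pick a replica concurrently.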