Vetted Talent

Deepak Kumar

Professional Engineer with 3+ years of hands-on experience in the field, who likes to take initiative and loves finding innovative solutions to complex problems, reducing manual interventions, and optimizing processes to achieve efficient, scalable, and highly available infrastructure.
  • Role

    MLOps, DevOps and Release Engineer

  • Years of Experience

    5 years

Skillsets

  • Lambda
  • Django
  • GitHub
  • Agile
  • AI/ML model development
  • Cloud Infrastructure Management
  • DevOps
  • EC2
  • IAM
  • Gerrit
  • RDS
  • Release
  • Route 53
  • S3
  • SNS
  • SQS
  • VPC
  • Prometheus
  • AWS
  • Docker
  • Flask
  • Grafana
  • Jenkins
  • MySQL
  • Perforce
  • Akamai
  • Python - 5.0 Years
  • Terraform
  • Git
  • Kubernetes
  • Ansible
  • Bash
  • FastAPI

Vetted For

12 Skills
  • Roles & Skills
  • Results
  • Details
  • Senior AWS DevOps Engineer (AI Screening)
  • 46%
  • Skills assessed: Agile principles, Jira, CI/CD Tools, Terraform, AWS, Docker, Java, Jenkins, Kubernetes, Embedded Linux, Python, Ruby on Rails
  • Score: 46/100

Professional Summary

5 Years
  • Apr, 2024 - Present (1 yr 11 months)

    MLOps, DevOps and Release Engineer

    Qualcomm
  • Jan, 2022 - Apr, 2024 (2 yr 3 months)

    R&D Engineer 2

    Keysight Technologies
  • Jan, 2021 - Jan, 2022 (1 yr)

    R&D Engineer 1

    Keysight Technologies
  • Jan, 2019 - Jan, 2020 (1 yr)

    Software Developer Intern

    Keysight Technologies

Applications & Tools Known

  • Jenkins
  • Perforce
  • MySQL
  • AWS RDS
  • Prometheus
  • Grafana
  • Kubernetes
  • Terraform
  • Akamai
  • AWS
  • Docker
  • Vault

Work History

5 Years

MLOps, DevOps and Release Engineer

Qualcomm
Apr, 2024 - Present (1 yr 11 months)
    Migrated CI/CD pipelines from EC to Jenkins, reducing EC load and enabling faster, more scalable automation. Developed an MCP-powered meta build analyzer that processes release builds, analyzes CRs, and automatically generates rich, actionable release insights for PMs/PEs, project and development teams, and leads. Leveraged AI/ML to develop intelligent Change-Based Testing that runs selective test cases based on change lists, reducing test time by 80% and increasing efficiency. Developed a live dashboard for real-time monitoring of build servers by project and software product, offering early insight into setup statuses such as active, inactive, busy, and restore.
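The change-based selection idea above can be sketched roughly as follows. The directory-to-suite mapping and suite names here are hypothetical stand-ins for the internal change-list analysis, not the actual Qualcomm tooling:

```python
# Minimal sketch of change-based test selection (mapping is illustrative).
from pathlib import PurePosixPath

# Assumed mapping from source areas to the test suites that cover them.
TEST_MAP = {
    "networking": ["test_routing", "test_sockets"],
    "storage": ["test_fs", "test_cache"],
    "ui": ["test_widgets"],
}

def select_tests(changed_files):
    """Return the minimal set of suites covering the changed files."""
    selected = set()
    for path in changed_files:
        top = PurePosixPath(path).parts[0]
        # Unknown areas fall back to running everything, to stay safe.
        if top not in TEST_MAP:
            return sorted(s for suites in TEST_MAP.values() for s in suites)
        selected.update(TEST_MAP[top])
    return sorted(selected)

print(select_tests(["networking/tcp.c", "storage/cache.c"]))
# → ['test_cache', 'test_fs', 'test_routing', 'test_sockets']
```

The safety fallback is the key design choice: when coverage of a change is unknown, the selector degrades to a full run rather than silently skipping tests.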

R&D Engineer 2

Keysight Technologies
Jan, 2022 - Apr, 2024 (2 yr 3 months)
    Spearheaded a release automation initiative that streamlined deployment processes, reducing manual errors and cutting release cycle time by 95%, leading to a Value Award (2023). Managed and supported build/release activities for globally distributed teams (India, Romania, US), maintaining high uptime for critical build systems. Automated Akamai system workflows, leading to a SPOT Award (2022) for enhancing team efficiency and reducing manual configuration time. Administered AWS cloud infrastructure and designed process automation tools, identifying and implementing technical solutions that enhanced overall team efficiency. Provided technical mentorship to junior team members, fostering a culture of knowledge sharing and collaborative problem-solving.

R&D Engineer 1

Keysight Technologies
Jan, 2021 - Jan, 2022 (1 yr)
    Owned and resolved complex issues within automated build and test environments, delivering end-to-end solutions that improved system stability. Defined and established standardized development, test, and release processes for DevOps operations, improving consistency across multiple projects. Collaborated with Product Owners and Project Managers to effectively manage release schedules and dependencies for multiple software products.

Software Developer Intern

Keysight Technologies
Jan, 2019 - Jan, 2020 (1 yr)
    Automated the Hotfix Installer workflow using Python and Jenkins master-slave configurations, reducing manual deployment time from hours to minutes. Developed a proof-of-concept (POC) to dynamically load Perforce branches into Jenkins via APIs and AWS Lambda, improving pipeline flexibility. Authored Python scripts to validate and report on build locations in AWS RDS, supporting strategic decisions on data archiving and purging.

Achievements

  • Value Award (2023)
  • SPOT Award (2 times in 2022)

Major Projects

1 Project

Stock Market Analysis using ML

    Analyzed a 9-year historical stock market dataset using various ML algorithms in Python to predict future returns. Compared algorithm performance and explored hybrid models to improve prediction accuracy, demonstrating proficiency in data analysis and ML model evaluation.
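As a rough illustration of the kind of comparison the project performed — evaluating predictors on a held-out tail — here is a stdlib-only sketch with synthetic returns and two simple baselines, not the project's actual models or data:

```python
# Illustrative comparison of two naive return predictors (synthetic data).
from statistics import mean

returns = [0.01, -0.02, 0.015, 0.005, -0.01, 0.02, 0.0, 0.01]
split = 5  # first 5 points are "history", last 3 are held out

def mae(predict):
    """Mean absolute error of one-step-ahead predictions on the held-out tail."""
    errors = [abs(predict(returns[:i]) - returns[i])
              for i in range(split, len(returns))]
    return mean(errors)

def naive(hist):
    return hist[-1]          # predict the last observed return

def moving(hist):
    return mean(hist[-3:])   # 3-step moving average

print(f"naive MAE:  {mae(naive):.4f}")
print(f"moving MAE: {mae(moving):.4f}")
```

A real evaluation would use walk-forward splits over the 9-year dataset and ML models rather than these baselines, but the MAE-on-held-out-tail skeleton is the same.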

Education

  • B-Tech in Computer Science & Engineering

    Institute of Engineering & Management (2020)

Certifications

  • AWS Concepts from Udemy Kolkata


  • Industrial Training from BSNL, Kolkata

AI-interview Questions & Answers

Hi, I'm Deepak Kumar. I have more than 3 years of overall experience in DevOps. My technical skills include AWS as the cloud platform, and Docker and Kubernetes for containers. Apart from this, I also manage CI/CD pipelines, primarily with Jenkins. Beyond that, I work on many automation projects and also handle release engineering. So I have quite relevant experience in DevOps, along with scripting skills, with Python as my preferred language. That's it.

To create a secure architecture for a new application, we first have to gather all the application's requirements and think about who is going to use it. For example, the dev teams may want to make modifications or add new features to the application, and there will be an admin role, so we have to provision admin users accordingly. With all of that in mind, we create our VPC, and within it we configure the Identity and Access Management (IAM) service in AWS: define groups such as testing and end users, and add the users who will have only the permissions needed to use the application. After that, we can also add a firewall for additional security, along with Vault.

I would suggest using Terraform when you have multiple servers, or when you are running applications that need to be deployed onto additional machines every time demand increases. In that scenario, Terraform lets you define the infrastructure once and provision it on demand.

We can use AWS CloudWatch to see what is happening and how many deployments are in progress. We can also use a service like EKS, where we configure Kubernetes and can see which monitoring tools are running there.

I created one project where I initiated the idea and then implemented a pipeline using several AWS services: AWS Fargate on EKS for building the Docker containers and pushing the Docker images to the registry. After that, we use an S3 bucket to host files for managing the release. Once the staging area is prepared in S3, we use Akamai as the CDN, generate pre-signed URLs from there, and update the Keysight product page to provide the software to the end users and customers. In this flow, the pipeline first fetches the credentials from Vault, based on roles, to gain access.

Handling a large database and automating the process of purging old data — yes, we could do that. Based on a retention timeline, we can write a Python script that queries the database and pulls out only the data that has exceeded that timeline. We can build a pipeline with several stages, such as a pre-build stage that checks which data needs to be purged. Based on that, we should also keep backup files, so that if any mishap occurs we can first check the backup, and only after processing that backup trigger the script that performs the deletion.
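The purge flow described above might be sketched like this, using an in-memory SQLite table as a stand-in for the real database; the table and column names are illustrative:

```python
# Hedged sketch of a retention-based purge: back up first, then delete.
import sqlite3

RETENTION_DAYS = 90

def purge_old_builds(conn, retention_days=RETENTION_DAYS):
    """Back up rows past the retention window, then delete them."""
    cur = conn.cursor()
    # 1. Pre-build check: find rows exceeding the retention timeline.
    cur.execute(
        "SELECT id, path FROM builds WHERE created_at < date('now', ?)",
        (f"-{retention_days} days",),
    )
    expired = cur.fetchall()
    # 2. Back them up first, so a mishap can be recovered from.
    cur.executemany("INSERT INTO builds_archive VALUES (?, ?)", expired)
    # 3. Only then delete from the live table.
    cur.executemany("DELETE FROM builds WHERE id = ?",
                    [(row[0],) for row in expired])
    conn.commit()
    return len(expired)

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE builds (id INTEGER, path TEXT, created_at TEXT);
    CREATE TABLE builds_archive (id INTEGER, path TEXT);
    INSERT INTO builds VALUES (1, '/b/old', date('now', '-120 days'));
    INSERT INTO builds VALUES (2, '/b/new', date('now', '-5 days'));
""")
print(purge_old_builds(conn))  # 1 expired row archived and deleted
```

Archiving before deleting, in that order, is what makes the mishap-recovery step the answer describes possible.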

Since it is a private class, and the string `str` is not initialized here, we are getting a NullPointerException.

Under the selector, the app label is nginx, but under the template it is app: nginx-one. We have to make them the same, so that the template's labels match the selector for the replicas. I think this is the bug we need to fix for a successful deployment.
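The label mismatch described above can also be caught programmatically. A minimal sketch, assuming a Deployment-like dict, mirroring the Kubernetes rule that every selector.matchLabels entry must appear in the pod template's labels:

```python
# Check that a Deployment's selector matches its pod template labels.
def selector_matches_template(deployment):
    selector = deployment["spec"]["selector"]["matchLabels"]
    template_labels = deployment["spec"]["template"]["metadata"]["labels"]
    # Every selector key/value must be present in the template's labels.
    return all(template_labels.get(k) == v for k, v in selector.items())

broken = {
    "spec": {
        "selector": {"matchLabels": {"app": "nginx"}},
        "template": {"metadata": {"labels": {"app": "nginx-one"}}},
    }
}
print(selector_matches_template(broken))  # → False: labels do not match
```

The real API server rejects such a Deployment at admission time; a check like this is useful earlier, e.g. in a CI lint step over rendered manifests.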

The project I worked on was the automation of some release processes. I worked with several Linux operating systems and used several AWS services. In this project I wrote Python scripts as well as Groovy scripts, and wrote the pipelines in a declarative way. All of these combinations were very challenging for me, and it was a brand-new project: initially it was a manual process. After some investigation, and after finding which AWS services were best to use, I successfully implemented the release automation project, which increased efficiency and decreased release time by almost 90%. Even the dev teams can now trigger those builds on their own, which also reduced some of the workload on the DevOps team. Through this I gained a lot of knowledge of Linux, the AWS cloud services, and scripting.

We can achieve zero downtime when releasing a new version of an application. First, we keep the previous version of the application running. Once the new application is configured, we create a new Docker image, pull containers from that image, and deploy those containers alongside the old ones. We first take the new version to pre-production and check whether all the features of the new application are working correctly. If there are issues, we hand it back to the dev teams, or we check whether it is a dev issue or an issue on the configuration-management side, and take care of it. Once it is fixed, we proceed with the production deployment and switch traffic from the existing containers to the similar new containers. This also does not affect any job that was triggered at the time: we keep a backup of the running job serving all the previous applications, and we can then include the new application in it as well.
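The switch-over step above can be shown with a toy blue-green simulation; the router dict and function names here are illustrative, not a real load-balancer API:

```python
# Toy blue-green switch: traffic only moves to "green" after its health
# check passes, so a bad release never takes live requests.
def deploy_new_version(router, green_healthy):
    if not green_healthy:
        # Health check failed in pre-production: keep serving "blue"
        # and hand the release back to the dev team.
        return router
    # Repoint traffic; "blue" stays up as an instant rollback target.
    return {**router, "active": "green"}

router = {"active": "blue"}
print(deploy_new_version(router, green_healthy=False))  # {'active': 'blue'}
print(deploy_new_version(router, green_healthy=True))   # {'active': 'green'}
```

Keeping the old ("blue") containers running until after the switch is what gives the zero-downtime property and the instant rollback path described in the answer.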