Jitendra Daya

Vetted Talent
DevOps and Cloud Engineer with 6.7 years of experience building and managing cloud infrastructure and automation. Skilled in AWS, Terraform, Kubernetes, Docker, OpenShift, OpenStack, Consul, Nomad, Linux, and CI/CD tooling, applied to improve deployment efficiency and system reliability.

Passionate about driving automation, optimizing processes, and implementing scalable solutions. A proactive problem-solver who enjoys collaborating with teams to deliver innovative, high-performance infrastructure that supports business goals. Eager to tackle new challenges and continuously learn and grow in the DevOps space.

  • Role

    DevOps Engineer

  • Years of Experience

    7 years

Skillsets

  • Test Automation
  • Grafana - 2 Years
  • Linux Server - 7 Years
  • AWS EC2 - 3 Years
  • Deployment Pipelines
  • Azure - 2 Years
  • Docker - 4 Years
  • Infrastructure as Code (IaC) - 2 Years
  • Continuous Deployment (CD)
  • Continuous Integration (CI)
  • AWS - 3 Years
  • Python Scripting
  • OpenShift
  • Containerization
  • Configuration Management
  • Python - 2 Years
  • Kubernetes - 4 Years
  • Terraform - 3 Years

Vetted For

15 Skills
  • Roles & Skills
  • Results
  • Details
  • icon-skill_image
  • Senior Software Engineer, DevOps (AI Screening)
  • Result: 60%
  • Skills assessed: infrastructure as code, Terraform, AWS, Azure, Docker, Kubernetes, embedded Linux, Python, AWS (SageMaker), GCP Vertex, Google Cloud, Kubeflow, ML architectures and lifecycle, Pulumi, Seldon
  • Score: 54/90

Professional Summary

7 Years
  • Nov 2022 - Present (2 yr 10 months)

    DevOps Engineer

    Infobeans
  • Apr 2021 - Nov 2022 (1 yr 7 months)

    System Engineer (Level 3)

    Cybage Software Pvt. Ltd.
  • Feb 2019 - Apr 2021 (2 yr 2 months)

    Linux Administrator (Level 2)

    VSN International
  • Jun 2017 - Dec 2018 (1 yr 6 months)

    Linux and Windows System Administrator (Level 1)

    Exclusive Securities Ltd.
  • Jul 2014 - Nov 2015 (1 yr 4 months)

    Hardware Engineer

    R.D. Computers.

Applications & Tools Known

  • Kubernetes
  • Docker
  • Jenkins
  • Git
  • AWS
  • Azure
  • Terraform
  • Prometheus
  • Grafana
  • vCenter
  • GitHub
  • Bitbucket
  • Chef
  • Rancher
  • Nginx
  • VMware
  • Windows Server
  • WordPress

Work History

7 Years

DevOps Engineer

Infobeans
Nov 2022 - Present (2 yr 10 months)
    • Deploying and managing containerized applications using Kubernetes, Docker, and similar tools.
    • Developing Jenkins jobs and Jenkins Pipelines while ensuring the successful execution of existing jobs.
    • Provisioning new virtual machines on the OpenStack platform.
    • Creating, deploying, and resolving issues related to VMware configurations.
    • Performing Ubuntu server upgrades and verifying proper functionality post-upgrade.
    • Establishing vCenter clusters and promptly addressing any issues encountered.
    • Modifying existing code, pushing changes to GitHub and Bitbucket when necessary, and raising pull requests.
    • Managing upgrades and troubleshooting for Kubernetes clusters.
    • Performing basic OpenStack tasks and troubleshooting VM-level issues as needed.
    • Configuring Jenkins master and slave nodes and creating new jobs when the need arises.
    • Maintaining and creating documentation for infrastructure, processes, and configurations.
    • Integrating security practices into the DevOps workflow, including vulnerability scanning and code analysis.
    • Driving GitOps practices to manage infrastructure and applications from Git repositories.

System Engineer (Level 3)

Cybage Software Pvt. Ltd.
Apr 2021 - Nov 2022 (1 yr 7 months)
    • Managed storage volumes (LUNs), creating, deleting, and assigning them to servers from SAN storage.
    • Installed and configured Rancher clusters and Kubernetes clusters.
    • Planned and executed upgrades of Kubernetes clusters in both production and non-production environments.
    • Conducted Windows and Linux server patching; managed vCenter upgrades, VMware host patching, and vCenter cluster deployment.
    • Configured Storage Classes with Pure Storage on Kubernetes clusters and redeployed them as needed.
    • Added and removed Kubernetes hosts as required and troubleshot Kubernetes master and worker node issues.
    • Identified and troubleshot Docker host issues.
    • Built and configured new physical and virtual servers in the infrastructure as needed.
    • Installed and configured Kubernetes clusters on Rancher, creating multiple Kubernetes clusters within it.

Linux Administrator (Level 2)

VSN International
Feb 2019 - Apr 2021 (2 yr 2 months)
    • Created, deleted, and assigned storage volumes (LUNs) to servers from SAN storage.
    • Built two-node clusters on Linux, with good working knowledge of fencing, shared storage, resources, and resource groups.
    • Configured two-node clustering on Windows Server 2016 and created SQL clustering for the data warehouse.
    • Took full backups of the organization to a tape library and managed the library, modifying and scheduling backups with CA ARCserve software.
    • Managed and configured an Nginx reverse proxy server and a small AWS infrastructure.
    • Installed and configured Red Hat Virtualization.
    • Provisioned multiple AWS EC2 instances when required.
    • Created, live-migrated, and took snapshots of VMs in Red Hat Virtualization Manager.
    • Added new hosts to Red Hat Virtualization and integrated Red Hat Virtualization Manager with Windows Active Directory.
    • Installed and configured cPanel and WordPress on CentOS 7.
    • Migrated MSSQL disks from one disk to another within Windows clustering.
    • Set up GFS2 filesystems within a clustered environment and configured the cluster's logical volume manager on Linux.
    • Moved and exported physical volumes, volume groups, and logical volumes in a Linux environment.
    • Migrated SAN storage volumes from Fujitsu SAN to IBM SAN storage while configuring virtualization between them.
    • Led end-to-end server administration for Linux platforms, ensuring stability, availability, reliability, and service capacity, including provisioning new Linux servers on diverse hardware systems.
    • Configured SCSI target and initiator settings and implemented multipathing on Linux.
    • Established and maintained core infrastructure components, technology standards, processes, and policies.
    • Identified and analyzed issues impacting system performance, collaborating closely with various teams to recommend solutions.
    • Detected system discrepancies, assessed associated risks, and implemented solutions while adhering to security standards.

Linux and Windows System Administrator (Level 1)

Exclusive Securities Ltd.
Jun 2017 - Dec 2018 (1 yr 6 months)
    • Certifications earned along the way: Red Hat certification (2018), Hardware & Networking course from Jetking (2017), AWS Certified Solutions Architect Associate (2021).
    • Worked with DevOps tools and technologies such as Jenkins, Terraform, Docker, and Kubernetes.
    • Deployed containerized applications using Kubernetes and Docker.
    • Developed Jenkins jobs and pipelines.
    • Used Terraform and Ansible for infrastructure provisioning and configuration management.
    • Managed Kubernetes clusters and OpenShift environments.
    • Implemented monitoring and alerting using Prometheus and Grafana.
    • Managed cloud environments on AWS and Azure.
    • Integrated security practices into the DevOps workflow.
    • Developed and implemented CI/CD pipelines.
    • Handled operating system management tasks such as server upgrades, VM provisioning, and troubleshooting.
    • Wrote and maintained scripts in Bash and Python.
    • Handled code changes using Git and raised pull requests.
    • Conducted training and mentoring sessions for junior engineers.
    • Documented processes, infrastructure, and configurations.
    • Collaborated with cross-functional teams to streamline workflows.
    • Ensured compliance with enterprise security policies and procedures.
    • Identified and addressed complex technical challenges, driving continuous improvement and staying current with industry trends. Proactive problem-solving and collaboration contributed to operational excellence and the reliable delivery of high-quality software releases.

Hardware Engineer

R.D. Computers.
Jul 2014 - Nov 2015 (1 yr 4 months)

Achievements

  • Led storage volume (LUN) creation
  • Developed Jenkins jobs and pipelines
  • Provisioned virtual machines on OpenStack
  • Managed upgrades and troubleshooting for Kubernetes clusters
  • Administered server patches and upgrades
  • Integrated security practices into DevOps workflow
  • Managed large-scale server environments

Education

  • BCA (Computer Application)

    DAVV University, Indore (2014)
  • Intermediate

    Vimal Higher Secondary School, Bhopal (2010)
  • Matriculation

    Vimal Higher Secondary School, Bhopal (2008)

Certifications

  • Red Hat Certified Engineer (RHCE) in 2018

  • Red Hat Certified System Administrator (RHCSA) in 2018

  • Red Hat Certified Specialist in Containers and Kubernetes (OpenShift I) in 2022

  • Hardware & Networking course from Jetking, Indore, in 2017

  • AWS Certified Solutions Architect Associate in 2021

  • Red Hat Certified Specialist in Containers and Kubernetes (OpenShift II) in 2023

AI Interview Questions & Answers

Hi, my name is Jitendra. I have around 6 to 7 years of total experience in the IT field, and I have worked with multiple tools and technologies over those years, including AWS, Azure, and Terraform. Recently I moved to a new project built around Nomad and Consul. I successfully upgraded the Nomad cluster, which has around 120 clients across two data centers, and I also upgraded the Consul cluster, which runs multiple services that are critical for us. Beyond that, I have working experience on the AWS and Azure side: our project uses both clouds, and to create or change anything on the AWS or Azure side we use Terraform code. I also have working experience with storage; in past roles I worked with multiple types of storage, SAN and NAS, from providers such as NetApp, Dell, and IBM. I work with Kubernetes as well: our infrastructure has around 30 to 40 Kubernetes clusters, we manage them and deploy our applications onto them, and whenever an issue comes up on the Kubernetes side we resolve it. The deployment part is handled by the deployment team, but on the infrastructure side we manage the Kubernetes clusters and the services they run, and we handle any issue that occurs on a cluster or a client node. I have also planned and performed Kubernetes cluster upgrades. In addition, I have worked with Ansible on both Linux and Windows, including Windows AD, and I have some working experience with LDAP. That's all about me. Thank you.

To deploy a stateful application, for example a database cluster for our organization, on a Kubernetes cluster, we can write a StatefulSet manifest for the database application and create a Service for it. At deployment time we also need to provide a volume, a PV and PVC, for storing the data, and then create the application on the cluster. For backing up and restoring the database's data, the volume sits on backend storage, so backups can be handled automatically by the storage services; we can also set up a cron job, or configure snapshots on the storage side, so the volume mounted by the stateful application is backed up on a regular basis.
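To make the shape of this concrete, here is a minimal sketch of a StatefulSet with a headless Service and a volume claim. Everything here is illustrative: the name db, the postgres:16 image, the port, and the 10Gi size are assumptions, not details from the answer above.

    # Hypothetical sketch only: a single-replica database StatefulSet.
    apiVersion: v1
    kind: Service
    metadata:
      name: db                     # headless Service the StatefulSet pods register under
    spec:
      clusterIP: None
      selector:
        app: db
      ports:
        - port: 5432
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: db
    spec:
      serviceName: db
      replicas: 1
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          containers:
            - name: db
              image: postgres:16   # example image, not from the original answer
              env:
                - name: POSTGRES_PASSWORD
                  value: example   # illustration only; use a Secret in practice
              ports:
                - containerPort: 5432
              volumeMounts:
                - name: data
                  mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:        # one PVC per replica, bound to backend storage
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi

The volumeClaimTemplates block is what ties each pod to a PV/PVC on the backend storage, which is also where the snapshot-based backups described above would be taken.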

How would you set up a multi-stage Docker build process to optimize a Python application? To create a multi-stage Docker build, we write a Dockerfile. In the first stage we start from a Python base image, copy our Python code into it, and install whatever packages the code requires, for example from a requirements.txt file. In the second stage we start from a smaller image and copy in only the built code and installed dependencies from the first stage. Building the Dockerfile this way produces a multi-stage image and reduces the final image size.
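A minimal sketch of such a Dockerfile, assuming (purely for illustration) a main.py entrypoint and a requirements.txt in the project root:

    # Hypothetical sketch: multi-stage build to shrink the final Python image.
    # Stage 1: install dependencies with the full toolchain available.
    FROM python:3.12 AS builder
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --prefix=/install -r requirements.txt
    COPY . .

    # Stage 2: slim runtime image; copy only what the app needs to run.
    FROM python:3.12-slim
    WORKDIR /app
    COPY --from=builder /install /usr/local
    COPY --from=builder /app .
    CMD ["python", "main.py"]      # assumed entrypoint

Only the slim second stage ships, so build-time tooling never reaches the final image.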

For using secrets in a Linux environment, we can store them in HashiCorp Vault and fetch them whenever we need them. On the AWS side we can also use AWS Secrets Manager, where we can store secret data such as passwords, keys, or certificates, and our application can retrieve them from there whenever it needs them. The same applies in a containerized setup: we store the key in HashiCorp Vault, and when we deploy the container to the Linux environment, it fetches the key from Vault before starting the service.
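As one concrete illustration of the Secrets Manager route, here is a small Python sketch using boto3; the secret name prod/db-password is an assumption, not something from the answer:

    # Hypothetical sketch: read a secret from AWS Secrets Manager at runtime.
    import boto3

    def get_db_password(secret_id: str = "prod/db-password") -> str:
        # Fetch the secret instead of hardcoding it in code or config.
        client = boto3.client("secretsmanager")
        response = client.get_secret_value(SecretId=secret_id)
        return response["SecretString"]

The Vault equivalent would be a lookup such as vault kv get at container start; either way the secret never lives in the image or the repository.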

Compare using the AWS SDK versus Terraform, with a focus on a specific use case like network provisioning. Terraform is infrastructure as code: we write Terraform code to build infrastructure on AWS or any other cloud provider. For example, to create a VPC on AWS, we describe in Terraform code which VPC we want, the region to create it in, what its subnets will be, and whether a given subnet allows public access; whatever configuration we want goes into the code, and Terraform creates the infrastructure from it. With the AWS SDK or CLI, by contrast, to create a VPC we have to run the AWS command for creating it and supply every piece of configuration as arguments at the time we run that command.
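A minimal sketch of the declarative side in Terraform HCL; the CIDR ranges and availability zone are illustrative values:

    # Hypothetical sketch: a VPC and one public subnet, declared rather than scripted.
    resource "aws_vpc" "main" {
      cidr_block = "10.0.0.0/16"
    }

    resource "aws_subnet" "public" {
      vpc_id                  = aws_vpc.main.id
      cidr_block              = "10.0.1.0/24"
      availability_zone       = "us-east-1a"   # illustrative AZ
      map_public_ip_on_launch = true           # the "allow public access" knob
    }

The imperative equivalent starts with something like aws ec2 create-vpc --cidr-block 10.0.0.0/16, with every option passed as a flag at invocation time and no state tracked between runs.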

Suggest an automated approach to scale a Kubernetes Deployment in response to increased web traffic loads. For automatic scaling of a Kubernetes Deployment we can use the Horizontal Pod Autoscaler (HPA). It automatically increases the number of pods in the Deployment when traffic grows, based on metrics such as CPU and memory: whenever utilization rises above a threshold like 85, 90, or 95 percent, it adds pods, two or three at a time, depending on how it is configured. So the automated approach is the Kubernetes Horizontal Pod Autoscaler.
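A minimal HPA sketch; the Deployment name web and the replica bounds are assumptions for illustration:

    # Hypothetical sketch: scale the "web" Deployment on average CPU utilization.
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web                    # assumed Deployment name
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 85   # scale out past 85% average CPU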

Here is a snippet from a Python CI/CD pipeline script which utilizes Docker; what is wrong with this code that might fail the build process? From my reading of this code I can't tell exactly what error it would produce. To troubleshoot I would need to run the code, see what error comes back, and fix the code where it is failing based on that. Just by looking at it, I can't identify the point at which this particular code will fail.

Assuming you are reviewing this Terraform module for deploying an AWS system, can you point out a potential security risk here? The potential risk in this configuration is the key: we are using a default key, stored in a variable inside the file, so anyone who can read the file can see the key. If someone with that key also has the IP address or name of the server deployed on AWS, they can access the server, which is a big risk for us. Instead of this kind of configuration, we can store the key in HashiCorp Vault, so it stays secure and only authenticated people can access it.
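A sketch of the safer pattern in Terraform; the variable name and secret id are illustrative, and the Secrets Manager data source is one possible stand-in for Vault:

    # Hypothetical sketch: no hardcoded default, and the value is redacted in output.
    variable "ssh_key" {
      type      = string
      sensitive = true    # Terraform hides this value in plans and logs
      # deliberately no "default": a hardcoded key in the file was the risk
    }

    # Alternatively, pull the key from a secrets store at plan time.
    data "aws_secretsmanager_secret_version" "ssh_key" {
      secret_id = "prod/ssh-key"   # assumed secret name
    }
    # ...referenced as data.aws_secretsmanager_secret_version.ssh_key.secret_string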

How would you deploy a multi-tier application using Terraform, ensuring high availability? For deploying a multi-tier application with Terraform, we first need to write the Terraform code, and it depends on the scenario whether we build on EC2 instances or go with a serverless architecture; in this case I would take the instance-based approach rather than serverless. First I would write code to create a VPC, and inside it three subnets: two private and one public. In one private subnet I would deploy the database, in the other private subnet the REST API, and in the public subnet the web server, so the web tier is public while the other two tiers stay private and are reached through it. To make the application highly available, I would deploy each tier across multiple availability zones, for example web servers in two or three AZs with one server in each, and spread the database and the API across different AZs in the same way. Then if one availability zone has an issue, the application is not affected and users can still access it.
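A sketch of the multi-AZ subnet layout in Terraform; the AZ names and CIDR scheme are illustrative assumptions:

    # Hypothetical sketch: one private and one public subnet per availability zone.
    variable "azs" {
      default = ["us-east-1a", "us-east-1b", "us-east-1c"]   # assumed AZs
    }

    resource "aws_vpc" "main" {
      cidr_block = "10.0.0.0/16"
    }

    resource "aws_subnet" "private" {          # database and API tiers
      count             = length(var.azs)
      vpc_id            = aws_vpc.main.id
      cidr_block        = "10.0.${count.index}.0/24"
      availability_zone = var.azs[count.index]
    }

    resource "aws_subnet" "public" {           # web tier
      count                   = length(var.azs)
      vpc_id                  = aws_vpc.main.id
      cidr_block              = "10.0.${count.index + 100}.0/24"
      availability_zone       = var.azs[count.index]
      map_public_ip_on_launch = true
    }

Instances (or an autoscaling group behind a load balancer) would then be spread across these subnets so the loss of one AZ leaves the other tiers serving.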

What approach would you take to build a code-to-production pipeline for an AI-driven application using containerized technology? My approach is to create a pipeline that deploys the code using containers. First, once the developers have written the code, the pipeline builds it and produces artifacts. In the next stage it stores the artifacts on an artifact server, then copies them into an image, installs the required packages, and uploads the image to Docker Hub or another registry. After the image is uploaded, the pipeline uses it to deploy a container in the test environment, where QA tests whether everything works. Once testing is complete, the pipeline triggers a notification, an email for example, for the production deployment; once a manager approves it, the application is deployed to the production environment automatically. So I would build a pipeline with these stages and deploy the application through it to production.
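A compact declarative Jenkinsfile sketch of those stages, chosen because the profile's tooling centers on Jenkins; the registry, image name, and deployment targets are all assumptions:

    // Hypothetical sketch: build, push, test deploy, approval gate, prod deploy.
    pipeline {
      agent any
      stages {
        stage('Build image') {
          steps { sh 'docker build -t registry.example.com/app:${BUILD_NUMBER} .' }
        }
        stage('Push image') {
          steps { sh 'docker push registry.example.com/app:${BUILD_NUMBER}' }
        }
        stage('Deploy to test') {
          steps { sh 'kubectl set image deployment/app app=registry.example.com/app:${BUILD_NUMBER} -n test' }
        }
        stage('Approve production') {
          // Manual gate mirroring the manager-approval step described above.
          steps { input message: 'Deploy to production?' }
        }
        stage('Deploy to production') {
          steps { sh 'kubectl set image deployment/app app=registry.example.com/app:${BUILD_NUMBER} -n prod' }
        }
      }
    }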

Explain your approach to optimizing a Kubernetes cluster for deploying computer vision models developed in Python. For optimizing a Kubernetes cluster, we can use Prometheus and Grafana for monitoring: how much load the cluster is under, and which nodes are using more or fewer resources such as CPU and memory. Based on that, if the cluster is under heavy load, we can add a new worker node to it. As for the computer-vision-models-in-Python part, I'm not sure exactly what we are trying to achieve in that scenario specifically on the Kubernetes side, so I don't fully understand the question. Sorry about this.
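For the monitoring piece, a standard starting point is per-node CPU usage from node_exporter metrics in Prometheus (assuming node_exporter is scraped, which the answer implies but does not state):

    # Percentage CPU usage per node, derived from the idle-time counter.
    100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

Nodes that sit persistently high on this query would be the signal, in the approach above, for adding worker capacity.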