
Anmol Shukla

Vetted Talent
I am a seasoned DevOps Engineer with over 4.3 years of hands-on experience collaborating closely with cross-functional teams, including application development and data engineering. My role extends beyond day-to-day tasks, encompassing comprehensive project planning and seamless implementation. I have a proactive approach to adopting emerging technologies, ensuring that my solutions are consistently at the forefront of innovation.
  • Role

    Senior Cloud Applications Reliability Engineer

  • Years of Experience

    5 years

Skillsets

  • pgAdmin
  • CI/CD
  • Kubernetes
  • Linux
  • Jenkins
  • AWS
  • Alarm monitoring
  • DevOps
  • Terraform
  • Shell Scripting
  • Python - 3 Years
  • Robo 3T
  • Visual Studio
  • Groovy scripting
  • Docker
  • AWS CLI
  • MongoDB Atlas
  • Web Development
  • Ansible
  • Bitbucket

Vetted For

12 Skills
  • Roles & Skills
  • Results
  • Details
  • Senior AWS DevOps Engineer - AI Screening
  • 60%
  • Skills assessed: Agile principles, Jira, CI/CD Tools, Terraform, AWS, Docker, Java, Jenkins, Kubernetes, Embedded Linux, Python, Ruby on Rails
  • Score: 60/100

Professional Summary

5 Years
  • Aug, 2022 - Present (3 yr 2 months)

    DevOps Engineer

    Nium Pvt Ltd
  • Aug, 2019 - Aug, 2022 (3 yr)

    Systems Engineer

    Tata Consultancy Services
  • Jun, 2019 - Jul, 2019 (1 month)

    Software Developer

    DishTv Pvt Ltd

Applications & Tools Known

  • Jenkins
  • Terraform
  • Ansible
  • Docker
  • ECS
  • BitBucket
  • Visual Studio
  • Kubernetes

Work History

5 Years

DevOps Engineer

Nium Pvt Ltd
Aug, 2022 - Present (3 yr 2 months)
    Developed CI/CD pipelines, automated manual tasks, implemented notification systems, designed and managed IAC solutions using Terraform, configured ELK stack, and architected diverse AWS cloud solutions.

Systems Engineer

Tata Consultancy Services
Aug, 2019 - Aug, 2022 (3 yr)
    Migrated on-site solutions to AWS cloud, implemented DevOps pipelines, utilized Terraform and AWS CLI, conducted internal knowledge transfer sessions, and implemented advanced monitoring solutions.

Software Developer

DishTv Pvt Ltd
Jun, 2019 - Jul, 2019 (1 month)
    Developed content management system and created in-house tools.

Achievements

  • Successfully reported acknowledged bugs
  • Star of the Month Award - Tata Consultancy Service Ltd (21-December-2021)

Education

  • Bachelor of Technology, Computer Science

    GLA University (2019)

AI Interview Questions & Answers

So I am working as a DevOps engineer, with 4.8 years of experience, and I love to solve problems. I have experience in programming languages as well, in Python and Bash scripting. Coming to the cloud platform, I have extensively worked on the AWS cloud, and in that I have worked on DevOps tools too. For the infrastructure part I have worked on Terraform, and from a monitoring perspective I have worked on the ELK stack, Datadog, Grafana, as well as CloudWatch. For setting alarms, I have also worked with CloudWatch alerts and Datadog. I have worked on both types of applications, monolithic and microservice-based. For microservices I have worked with Elastic Container Service as well as Elastic Kubernetes Service, and for monolithic applications I set up servers through automation using user data. For any deployment we are using Jenkins pipelines; it is all automated deployment configured through webhooks, and I have worked on shared libraries written for Jenkins. I also do automation to convert tasks from manual to automated, using Python, Bash, Ansible, and Groovy scripting in Jenkins. That's pretty much all from my side. Thank you.

Given the below Python code, I would explain what the issue might be. Basically, this code is dividing x by y. If it is divided by zero, it will raise the exception called ZeroDivisionError, and that is handled, so it will print the message "division by zero". If anything else comes, it will go to the generic except and print a message, and finally it will print in the finally block for cleanup. The only issue I can see in this code is that x and y should always be converted to integers when we are taking them as input.
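
A reconstructed sketch of the snippet this answer seems to describe, assuming the original divides x by y inside a try/except/finally (the exact code is not shown in the transcript, and the printed messages are taken from the answer):

    def divide(x, y):
        try:
            # Raises ZeroDivisionError when y == 0
            print(x / y)
        except ZeroDivisionError:
            print("division by zero")
        except Exception:
            # Generic handler mentioned in the answer
            print("cannot access")
        finally:
            # Cleanup step that always runs
            print("cleanup")

    # The fix the answer points out: cast the inputs to int before dividing.
    x = int(input("x: "))
    y = int(input("y: "))
    divide(x, y)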

So, when we are defining str, we have to assign it a value right where we are defining it; because it is used without ever being initialized, that's why it is giving a NullPointerException.
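
The snippet under discussion is not shown in the transcript; as a rough Python analogue of the uninitialized-variable issue the answer describes (the variable name s is hypothetical):

    # Hypothetical: s is declared but left as None, so calling a string
    # method on it fails at runtime (Python's analogue of a null pointer).
    s = None
    # s.upper()  # AttributeError: 'NoneType' object has no attribute 'upper'

    # Fix per the answer: assign the value at the point of definition.
    s = "hello"
    print(s.upper())  # prints HELLO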

order.append(name) — so basically we are adding the name and the quantity at the same time to the list, so it will come in as a key pair and be returned as a list. And when we are using a mutable default, we need to add into a new list every time instead of reusing the shared one.
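
This reads like Python's shared mutable default argument pitfall; a minimal sketch under that assumption (the names add_order, name, and quantity are hypothetical):

    # Buggy: the default list is created once, at definition time, and is
    # shared by every call that omits the orders argument.
    def add_order(name, quantity, orders=[]):
        orders.append((name, quantity))
        return orders

    print(add_order("apple", 2))  # [('apple', 2)]
    print(add_order("pear", 1))   # [('apple', 2), ('pear', 1)] - state leaks

    # Fix per the answer: build a new list on every call.
    def add_order_fixed(name, quantity, orders=None):
        if orders is None:
            orders = []
        orders.append((name, quantity))
        return orders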

Let me see. In the label it should be nginx only: in the metadata labels, app should be nginx, because when the selector is matching on nginx, the metadata label app must be nginx exactly. In the template metadata labels, app should be nginx, not nginx1.
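
A minimal manifest sketch of the fix being described, assuming a standard Deployment whose selector must match the pod template labels exactly (the original manifest is not shown in the transcript):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx        # must equal the template label below
      template:
        metadata:
          labels:
            app: nginx      # per the answer, this was "nginx1" in the broken manifest
        spec:
          containers:
            - name: nginx
              image: nginx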

How do you ensure the system is secure against cybersecurity threats? To make sure the infrastructure stays secure: all databases and EC2 instances should be in a private subnet, and every load balancer we are using should have a WAF attached. All the traffic we are whitelisting should be very specific IPs, say a /32, or specific to the service we need to attach, and we should use VPC endpoint services to route traffic between the different services. For security purposes we also have to restrict IAM role access. Apart from that, no one should be able to access any resource from outside our network, so there should be a jump box to access any resource, and RDS and similar services should only be in restricted subnets. There should also always be a few automated checks we run, for example filtering out security groups with a wide-open CIDR block like 0.0.0.0/0 enabled on an inbound rule, and we need to run GuardDuty alerts as well. We can also set up alerts so that if multiple users have tried and failed to log in repeatedly, then definitely we get notified.
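
A minimal sketch of the automated check mentioned above, flagging security groups that allow inbound traffic from 0.0.0.0/0 (assumes boto3 is installed and AWS credentials are already configured):

    import boto3

    def find_open_ingress():
        # Flag security groups whose inbound rules allow 0.0.0.0/0.
        ec2 = boto3.client("ec2")
        for sg in ec2.describe_security_groups()["SecurityGroups"]:
            for rule in sg["IpPermissions"]:
                for ip_range in rule.get("IpRanges", []):
                    if ip_range.get("CidrIp") == "0.0.0.0/0":
                        print(f"Open ingress: {sg['GroupId']} ({sg['GroupName']})")

    if __name__ == "__main__":
        find_open_ingress()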

So, how Kubernetes helps with allocation and application scaling on AWS: basically, when we are using the autoscaler, it helps us to scale the pods in AWS, and it depends upon the method we are going to select. On the allocation part, we have labels which we assign to the nodes; once a pod is tied to that label, it will be scheduled onto those nodes. In the pods we can also set up selectors, or node affinity, where we can define which node group and which node a pod will go to; basically we can define that using node affinity. For application scaling we can use the Horizontal Pod Autoscaler, which we can drive on metrics like CPU and memory.
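
A minimal HorizontalPodAutoscaler sketch for the CPU-driven scaling described above (the Deployment name web and the thresholds are illustrative, not from the transcript):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70    # scale out above 70% average CPU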

Every time a new release is happening, we will make sure we have a lower environment where we can test and understand how the system is behaving. If there is any issue, we can have a dynamic Java version in CI/CD, which can be managed as an environment variable in the Jenkins pipeline and will run with the new version of Java. Then, once it builds, it can be deployed to the application. Everything would be processed through a Jenkins CI/CD pipeline: let's say it should first compile, then run unit testing, then a SonarQube scan for any vulnerability, which we need to test out, and after that we can deploy it.
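
A minimal declarative Jenkinsfile sketch of the flow described above, with the Java version passed in as a parameter (the JDK tool name and the deploy script are assumptions, not taken from the transcript):

    pipeline {
        agent any
        parameters {
            // Dynamic Java version, managed through the pipeline
            string(name: 'JAVA_VERSION', defaultValue: '17', description: 'JDK to build with')
        }
        tools {
            jdk "jdk-${params.JAVA_VERSION}"  // assumes this JDK is configured in Jenkins
        }
        stages {
            stage('Compile')        { steps { sh 'mvn -B compile' } }
            stage('Unit tests')     { steps { sh 'mvn -B test' } }
            stage('SonarQube scan') { steps { sh 'mvn -B sonar:sonar' } }
            stage('Deploy')         { steps { sh './deploy.sh' } }  // hypothetical script
        }
    }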

So basically, in a DevOps project, we created Terraform for creating all the resources on the AWS cloud. That Terraform is managed for all the teams; they can create and use it, and we put it on an approval basis, not default approval. This is what helps us centralize and manage infrastructure in an automated way. Apart from that, on Linux we can use user data and Bash scripting, which can help us configure servers automatically. And for deploying any application and doing any automation apart from that, we can have Jenkins CI/CD, and we can have Lambda as well to do these things.

So for zero downtime, we have deployment strategies we can use. We basically have a rolling strategy, with rollback, and we have blue-green deployment, which helps us achieve zero downtime when we are deploying an application into Docker.