Vetted Talent

Vijay Kumar

An innovative and passionate DevOps expert with 16 years of overall experience, including 10 years in CI/CD and DevOps solution planning, design, and implementation, and 6 years in project management using Agile and hybrid methodologies. Experienced in application development and maintenance across on-premises and cloud environments. Establishes and manages DevOps solutions, with expertise in automation strategies, and manages infrastructure operations in AWS and Azure cloud environments.

  • Role

    DevOps Manager

  • Years of Experience

    16 years

Skillsets

  • AWS - 5 Years
  • CI/CD - 10 Years
  • Python - 2 Years
  • Bash - 6 Years
  • PowerShell - 1 Year
  • Cloud Infrastructure - 3 Years
  • DevOps - 5 Years

Vetted For

20 Skills
  • AWS Cloud Implementation Specialist (Hybrid, Bangalore) - AI Screening
  • Result: 50%
  • Skills assessed: Microsoft Azure, Team Collaboration, Vendor Management, API Gateway, AWS GuardDuty, AWS Inspector, AWS Lambda, AWS Security Hub, CI/CD Pipelines, Cloud Automation, Cloud Infrastructure Design, Cloud Security, Identity and Access Management (IAM), AWS, Docker, Good Team Player, Google Cloud Platform, Kubernetes, Problem Solving Attitude, Python
  • Score: 45/90

Professional Summary

16 Years
  • May, 2022 - Present (3 yr 7 months)

    DevOps Engineering Manager

    Smiths Detection (Smiths Group)
  • Mar, 2017 - May, 2022 (5 yr 2 months)

    DevOps Manager

    MFX InfoTech (Quess Corp Subsidiary)
  • Aug, 2009 - Mar, 2017 (7 yr 7 months)

    Project Lead (Release Manager)

    Amadeus Software Labs India Pvt Ltd
  • Oct, 2006 - Aug, 2009 (2 yr 10 months)

    Software Engineer

    Wipro Technologies

Applications & Tools Known

  • CentOS
  • Ubuntu
  • Windows Server
  • Shell Script
  • Python
  • PowerShell
  • Terraform
  • Ansible
  • Jenkins
  • SVN
  • BitBucket
  • Maven
  • ANT
  • Make
  • SonarQube
  • Microsoft SQL Server
  • MySQL
  • PostgreSQL
  • AWS
  • Azure
  • Jira
  • BMC Remedy
  • Docker
  • Kubernetes
  • Prometheus
  • Grafana
  • ELK Stack
  • Nagios
  • Nexus
  • RabbitMQ
  • Nginx
  • Confluence
  • Excel
  • MS Project
  • EC2
  • VPC
  • S3
  • ELB
  • EKS
  • ECR
  • Lambda
  • IAM
  • RDS
  • ClearCase
  • Artifactory
  • API Gateway

Work History

16 Years

DevOps Engineering Manager

Smiths Detection (Smiths Group)
May, 2022 - Present (3 yr 7 months)

    Roles and Responsibilities:

    • Leading a team to develop automation processes that enable cross-functional teams to deploy, manage, configure, scale, and monitor applications.
    • Collaborate with application development and infrastructure teams to help bridge functional gaps and determine the DevOps function set that optimizes the technical environment.
    • Implementation of complete DevOps pipeline setups using Git, Jenkins, Maven, Ansible, Docker, and Kubernetes for multiple projects across the organization.
    • Lead the design, build, and operational management of highly secure and scalable software for the business.
    • Hands-on experience with AWS services (EC2, EBS, Monitoring, EFS, S3, Lambda, IAM, VPC).
    • Collaborate with Dev, Infra, Quality Assurance, and engineering teams from India, the UK, and Europe to deliver integrated solutions.
    • Review and set up CloudWatch alarms and thresholds.
    • Remove bottlenecks and inefficiencies in DevOps practices.
    • Automate provisioning and management of cloud resources using infrastructure-as-code (IaC) frameworks (Terraform, Ansible) on AWS.
    • Lead DevOps transformation initiatives with transparency and visibility into team engagement and stakeholder involvement.
    • Manage on-premises & cloud infrastructure for various project environments.
    • Set up Kubernetes clusters for multiple containers and implement auto scaling.
    • Application deployments using cloud compute platforms (AWS EKS), Helm, containerization (Docker), and automation process implementations using shell and Perl scripting.
    • Create architecture layouts in Visio and share them with project stakeholders.
    • Drive process improvements and operational stability to reduce risks and time to market.
    • Anticipate & manage changes effectively in a rapidly evolving global business environment.
    • Work in an agile development environment, collaborating with application teams through daily scrums & project review meetings to ensure timely completion of project releases and to improve system performance & productivity.
    • Lead a team on initiatives to support migrations from on-premises to AWS and ensure knowledge build-up and performance monitoring of team members.
    • Excellent communication, collaboration, interpersonal, and leadership skills, with the ability to work efficiently both independently and in teams.
    • Implemented event-driven architecture with AWS Lambda to automate data processing, reducing processing time.
    • Collaborate with development, quality assurance, cyber security, and product teams & program managers to develop deployment strategies and ensure successful deployment of applications.

DevOps Manager

MFX InfoTech (Quess Corp Subsidiary)
Mar, 2017 - May, 2022 (5 yr 2 months)

    Roles and Responsibilities:

    • Deployment of applications to on-premises and Amazon cloud infrastructure.
    • Implemented an end-to-end continuous deployment system, along with a user interface for capturing end-user inputs per environment, using Ansible, Jenkins, and ServiceNow.
    • Interface and coordinate with diverse teams while advocating best practices for delivering low-risk, more reliable software products across the organization.
    • Deployment of applications using Git, Jenkins, Maven, MSBuild, Docker, Kubernetes, Ansible, SonarQube, Artifactory, Amazon EC2, EKS & ECR.
    • Developed project release plans, rollouts, and automation tools using Perl and shell scripting.
    • Implemented Docker container deployments and handled multiple large DevOps engagements across multiple divisions in the organization.
    • Hands-on experience creating EC2 and EBS infrastructure using Terraform and Ansible.
    • Experience in presales and due diligence (RFPs), gathering requirements from customers.
    • Implementation of post-production (SRE) best practices and service-level indicators (e.g., Mean Time To Restore).
    • Identify and explore chaotic situations, conduct formalized experiments, and implement solutions to ensure high availability of applications, services, and servers after production deployments.
    • Review IAM security settings, VPC security groups, and inbound/outbound traffic.
    • Expert at troubleshooting issues and bugs, communicating and negotiating with multiple teams to accomplish common project goals.
    • Developed deployment automation across multiple environments and implemented best practices; set up various automations using Bash and Perl scripting.
    • Implement security controls and measures within the AWS environment, including IAM policies.
    • Set up EC2 instances; monitor and validate them regularly, manage volumes/snapshots, restore from backups, and use the networking options available in AWS.
    • Implementation of AWS Lambda and other related AWS services such as S3 and API Gateway.
    • Coordinate with cross-functional teams on setup and management of containerized applications using Docker and Kubernetes.
    • Integration of CI/CD with a continuous test automation process, uploading test result documents to a SharePoint site using Ansible, Jenkins, and PowerShell.
    • Implemented performance optimization of MS SQL based applications, improving end-user satisfaction.
    • Expertise in microservice load balancing using Nginx for highly consumed, high-volume microservices.
    • Set up auto scaling using Kubernetes and server monitoring using Grafana.

Project Lead (Release Manager)

Amadeus Software Labs India Pvt Ltd
Aug, 2009 - Mar, 2017 (7 yr 7 months)

    Roles and Responsibilities:

    • Led and managed project teams, set clear goals, and offered guidance via monthly one-to-ones.
    • Implementation of continuous integration and continuous delivery (CI/CD) using ClearCase and Jenkins.
    • Coordinated with customers to gather requirements for setting up release plans and processes and to deliver on project expectations.
    • Drove process improvements and operational stability to reduce risks and deliver quality products using various automations and best practices.
    • Established and maintained close interpersonal working relationships with multiple teams across geographies (Asia, Europe, and the US) for effective & efficient project deliverables.
    • Developed project release plans, ensuring on-time completion of product releases and proper stakeholder engagement throughout the project life cycle.
    • Worked efficiently with the operations team to scale up applications and infrastructure across the globe.
    • Served as the point of escalation for addressing and providing solutions to critical production issues.
    • Involved in appraisals and career development programs.
    • Led the offshore team and collaborated with onsite coordinators to deliver quality solutions within the boundaries of scope, schedule, and cost for multiple project releases & migrations.
    • Proven ability to prioritize tasks effectively per market conditions and customer needs.
    • Took up initiatives on continuous improvement, nurturing innovation.
    • Built and managed the team through timely direction, coaching, and delegation of project tasks.
    • Maintained the staging environment and performed pre-production log monitoring & analysis.

Software Engineer

Wipro Technologies
Oct, 2006 - Aug, 2009 (2 yr 10 months)

    Roles and Responsibilities:

    • Worked with key users to facilitate effective application use across business groups.
    • Enhanced project processes by implementing improvements via automation scripts.
    • Managed code integrations for multiple releases, code coverage, and maintenance of source code tools (ClearCase & SVN).
    • Led source code and build teams for code integration activities and releases.
    • Monitored daily builds and tracked closure of issues related to code, servers, and middleware.
    • Planned migration activities to move different applications to the version control system.
    • Planned, designed, documented, and implemented in-house tools integrated with existing systems.
    • Managed source code using SCM tools (ClearCase, SVN) and handled build management.
    • Performed migration activities for application code to support internet browser versions.

Achievements

  • Led the design, build, and operational management of secure and scalable software
  • Managed infrastructure operations in cloud computing environments (AWS, Azure)
  • Implemented event-driven architecture with AWS Lambda

Major Projects

3 Projects

Project: OS Migration of core products

May, 2022 - Present (3 yr 7 months)

    Roles and Responsibilities:

    • Collaborate with Dev, Infra, Quality Assurance, and engineering teams from India, the UK, and Europe to deliver integrated solutions.
    • Application deployments using cloud compute platforms (AWS EKS), Helm, containerization (Docker), and automation process implementations using shell and Perl scripting.
    • Working in an agile development environment, collaborating with application teams through daily scrums & project review meetings to ensure timely completion of project releases and to improve system performance & productivity.
    • Automation of provisioning and managing cloud resources using infrastructure-as-code (IaC) frameworks (Terraform, Ansible) on AWS.
    • Leading a team with initiatives to support migrations from on-premises to AWS, ensuring knowledge build-up and performance monitoring of team members.

Project: S-Lane A

Nov, 2022 - Oct, 2023 (11 months)

    Roles and Responsibilities:

    • Manage on-premises & cloud infrastructure for various project environments.
    • Setup Kubernetes cluster for multiple containers and implementation of auto scaling.
    • Review and setup of CloudWatch alarms and setting up thresholds.
    • Removal of bottlenecks and inefficiencies in DevOps practices.
    • Collaborate with development, quality assurance, cyber security and product teams & program managers to develop deployment strategies and to ensure successful deployment of applications.

Project: iCMORE

May, 2022 - Mar, 2023 (10 months)

    Roles and Responsibilities:

    • Implementation of complete DevOps pipeline setups using Git, Jenkins, Maven, Ansible, Docker, and Kubernetes for multiple projects across the organization.
    • Lead the design, build, and operational management of highly secure and scalable software for the business.
    • Implemented event-driven architecture with AWS Lambda to automate data processing, reducing processing time.
    • Drive process improvements and operational stability to reduce risks and time to market.
    • Anticipate & manage changes effectively in a rapidly evolving global business environment.

Education

  • B.E. (Electronics & Communications)

    VTU, Belgaum (2005)

Certifications

  • PMP (Project Management Professional)

  • CSM (Certified Scrum Master)

AI-interview Questions & Answers

Hi, I am Vijay Kumar, and I'm currently working for Smiths Detection as a DevOps Engineering Manager. I have 16 years of experience, out of which more than 10 years are in DevOps and cloud computing. As part of my roles and responsibilities, I lead various projects, hold discussions with customer teams to understand their requirements around AWS and DevOps, and provide and implement solutions. I also manage the team: for the resources reporting to me, I provide guidance and keep track of their activities. We follow a two-week sprint cycle, working along with the dev and QA teams and other cross-functional teams across the SDLC, with roughly three-month release cycles. Thank you.

Okay, so managing secrets in Kubernetes. There are multiple ways. One is to define them in YAML manifests: with kubectl we can create Secrets, and ConfigMaps for values like port numbers that are not very confidential. The second way is environment variables, but there we need to take one more precaution: if you run a command and someone later reads the shell history, describes the deployment, or inspects etcd, they can discover the username and password. To avoid that, you can go with encryption methods. If you do use environment variables, then instead of plainly running the export, give a space before the command, something like " export USERNAME=vijaykumar"; with that, even if someone gets access to the server and runs the history command to fetch the username and password, it won't show, which is a handy feature we get from Linux. And one more thing: since we are using AWS, we can definitely go with Secrets Manager, where we store the key-value pairs and fetch them for use within our deployments.
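
A minimal sketch of the Secrets Manager approach mentioned at the end of that answer, using boto3; the secret name and region are hypothetical, and the secret is assumed to be stored as a JSON key-value document:

```python
import json

import boto3


def get_app_credentials(secret_name: str = "prod/app/credentials",
                        region: str = "ap-south-1") -> dict:
    """Fetch a key-value secret from AWS Secrets Manager instead of
    exporting credentials as shell environment variables."""
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_name)
    # SecretString holds the JSON document entered via the console/CLI
    return json.loads(response["SecretString"])


if __name__ == "__main__":
    creds = get_app_credentials()
    print(sorted(creds))  # print only the key names, never the values
```

Fetched this way, the credentials never land in shell history or in a Deployment spec, which addresses the exposure paths the answer describes.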

To automate IAM roles: IAM is identity and access management, all about giving access when different services in AWS need to talk to each other, so it handles both authentication and authorization into a service. For example, if an EC2 instance needs access to an S3 bucket, the instance should have an IAM role through which it can talk to the bucket directly, without supplying an access key and secret key every time it communicates with S3. CloudFormation helps us automate this with templates: we can create EC2 instances, IAM roles, and so on. In the CloudFormation template you select what you want to automate, go with the IAM role, and choose the policies that need to be part of that specific role. With that you have a template, and the next time you want to create the same thing you can simply reuse the existing CloudFormation template. That's how it works.
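
The answer describes CloudFormation templates; as a rough equivalent sketch, the same EC2-to-S3 role can be scripted with boto3 (the role, profile, and policy names here are hypothetical):

```python
import json

import boto3

iam = boto3.client("iam")

# Trust policy: let EC2 instances assume this role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="Ec2S3ReadRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach an AWS managed policy so the instance can read from S3
iam.attach_role_policy(
    RoleName="Ec2S3ReadRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# EC2 consumes the role through an instance profile
iam.create_instance_profile(InstanceProfileName="Ec2S3ReadProfile")
iam.add_role_to_instance_profile(
    InstanceProfileName="Ec2S3ReadProfile",
    RoleName="Ec2S3ReadRole",
)
```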

On Kubernetes service deployment and scaling: if we consider AWS, we have EKS, the Elastic Kubernetes Service, where we can tune scaling based on the load, that is, how much traffic is coming in and going out of the services from end users. Based on the usage, the number of hits, and the traffic, we can go with ELBs (elastic load balancers) along with auto scaling groups. For the auto scaling groups we can choose different triggers: the number of requests per second, the number of users hitting the service, or memory and CPU utilization. If utilization exceeds, say, 70% or 80%, we can have auto scaling kick in. That's it.
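
A sketch of the 70-80% CPU threshold described in that answer, expressed as a HorizontalPodAutoscaler created with the official kubernetes Python client (assumes a recent client version exposing the autoscaling/v2 API; the deployment name and namespace are hypothetical):

```python
from kubernetes import client, config

config.load_kube_config()  # assumes kubeconfig already points at the EKS cluster

hpa = client.V2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa", namespace="default"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web",
        ),
        min_replicas=2,
        max_replicas=10,
        # Scale out when average CPU utilization crosses 70%
        metrics=[client.V2MetricSpec(
            type="Resource",
            resource=client.V2ResourceMetricSource(
                name="cpu",
                target=client.V2MetricTarget(
                    type="Utilization", average_utilization=70,
                ),
            ),
        )],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa,
)
```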

Okay, a Python function to manage cross-region replication of sensitive data. There are different ways we can replicate data and make use of it across regions. For example, we might have implemented a Lambda function using Python scripts that processes or stores the sensitive data and handles its encryption and decryption. If we want to make it available across regions, so that a service in another region can talk to that Lambda function, we can make use of VPC endpoints: within the VPC we create the endpoints, and then using those VPC endpoints the consumer in the other region can access that information.
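
One hedged reading of the question is replicating encrypted objects between regions; a minimal Lambda-style sketch with boto3 (the bucket names, regions, and KMS alias are all hypothetical):

```python
import boto3


def replicate_object(key: str,
                     src_bucket: str = "app-data-us-east-1",
                     dst_bucket: str = "app-data-eu-west-1",
                     kms_key_id: str = "alias/app-data-eu") -> None:
    """Copy one object into the destination region, re-encrypting it
    with a KMS key that lives in that region."""
    s3 = boto3.client("s3", region_name="eu-west-1")
    s3.copy_object(
        Bucket=dst_bucket,
        Key=key,
        CopySource={"Bucket": src_bucket, "Key": key},
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId=kms_key_id,
    )


def handler(event, context):
    """Hypothetical Lambda entry point, triggered by S3 put events."""
    for record in event["Records"]:
        replicate_object(record["s3"]["object"]["key"])
```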

When integrating AWS Inspector findings with AWS Security Hub: Security Hub deals with how security is set up on the account. When we are dealing with the integration of Inspector findings, we need to make sure what kinds of resources we are going to put under inspection, whether it is across regions or only one region, and what level of automation we want. We should also consider whether we really need to enable the security-standards content along with the inspection, and take that decision at the admin level, because if you enable inspection across all the security-level checks, there is a chance of accumulating findings data that we are then expected to maintain.
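
Once the integration is enabled, the findings can be reviewed programmatically; a small boto3 sketch that filters Security Hub for Inspector-sourced findings (the filter values reflect the standard product integration and are assumptions):

```python
import boto3

securityhub = boto3.client("securityhub")

# Page through active, high-severity findings that came from Amazon Inspector
paginator = securityhub.get_paginator("get_findings")
pages = paginator.paginate(Filters={
    "ProductName": [{"Value": "Inspector", "Comparison": "EQUALS"}],
    "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
    "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
})

for page in pages:
    for finding in page["Findings"]:
        print(finding["Title"], "->", finding["Resources"][0]["Id"])
```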

Reading the AWS CLI command used to modify an IAM policy, and what is wrong or missing that may cause issues when executing it: the command is "aws iam put-role-policy" with a role name of MySampleRole, a policy name, and a policy document given as a JSON file. Looking at the syntax, I can see the file name is given as "file://my-policy"; the path to the file is not an absolute path, so that is wrong. That's one observation. Apart from the non-absolute file path, I don't see any other mistakes; the rest of it looks good to me.
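
For comparison, the same operation via boto3, which sidesteps the file:// path question by reading the JSON explicitly (the absolute path, role, and policy names are hypothetical stand-ins for the ones in the question):

```python
import json

import boto3

iam = boto3.client("iam")

# Read the policy document from an absolute path, avoiding the
# relative file:// reference flagged in the CLI command
with open("/home/user/policies/my-policy.json") as f:
    policy_document = f.read()

json.loads(policy_document)  # fail fast if the JSON is malformed

iam.put_role_policy(
    RoleName="MySampleRole",
    PolicyName="MySamplePolicy",
    PolicyDocument=policy_document,
)
```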

What process would I use to isolate and resolve a performance bottleneck in a Dockerized application deployed on AWS: the first and foremost thing is to enable CloudWatch and look at the metrics, for example the CPU utilization while the specific Docker application is running, the network traffic, and the utilization of resources, and see whether it is invoking any other dependent services from which there is a delay. Based on those observations we can enable auto scaling, and after that we can also go with an ELB (elastic load balancer). If there are a large number of requests from all over the world hitting a single node, we can use DNS failover, keep checking health, and route the traffic to the healthy nodes. With an application load balancer we can also route the traffic based on path patterns; for example, in a ticket-booking application, bookings, cancellations, and ticket changes can each be routed based on the path. Based on any of these we can isolate the issue. To isolate further, we can deploy the container on a dedicated instance and see how it performs there compared with running alongside other Docker applications on a shared instance.
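
A sketch of the first diagnostic step in that answer, pulling CPU metrics from CloudWatch for the instance hosting the container (the instance ID is a placeholder):

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                        # 5-minute buckets
    Statistics=["Average", "Maximum"],
)

# A sustained gap between Average and Maximum hints at bursty load
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```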

The workflow for a Python-based microservice to interface with an AWS managed database while ensuring ACID: an AWS managed database means RDS, which offers different engines, such as SQL Server and MySQL. When we interface from the Python-based microservice, we connect to the managed database with all the ACID properties preserved, and we make sure it is not talking to the database in the clear: the communication should be encrypted so the data is not compromised in any manner, whether in transit or at rest. That's how we can do that.
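
A minimal sketch of the transactional part, assuming a PostgreSQL engine on RDS and the psycopg2 driver, with TLS required so data is encrypted in transit (endpoint, tables, and credential handling are placeholders):

```python
import psycopg2

conn = psycopg2.connect(
    host="mydb.xxxxxxxx.ap-south-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="orders",
    user="app",
    password="change-me",  # in practice, fetch from Secrets Manager
    sslmode="require",     # encrypt the connection in transit
)


def record_payment(order_id: int, amount: float) -> None:
    """Both statements commit together or not at all: an exception
    anywhere in the block rolls the whole transaction back (atomicity)."""
    with conn, conn.cursor() as cur:
        cur.execute(
            "UPDATE accounts SET balance = balance - %s WHERE id = %s",
            (amount, order_id),
        )
        cur.execute(
            "INSERT INTO ledger (order_id, amount) VALUES (%s, %s)",
            (order_id, amount),
        )
```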

On container security best practices when orchestrating deployments with Kubernetes on AWS: we can go with security policies at the container level. We can create the policies and attach them by adding the relevant section, that is, create a manifest file defining the security policy and attach it to the workload. That's how we can handle container security.
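
A sketch of that container-level hardening expressed through the kubernetes Python client instead of a YAML manifest (the image and names are hypothetical; a pod securityContext is one common way to apply such policies):

```python
from kubernetes import client, config

config.load_kube_config()

secure_container = client.V1Container(
    name="app",
    image="registry.example.com/app:1.0",  # hypothetical image
    security_context=client.V1SecurityContext(
        run_as_non_root=True,              # refuse to start as root
        read_only_root_filesystem=True,    # immutable container filesystem
        allow_privilege_escalation=False,
        capabilities=client.V1Capabilities(drop=["ALL"]),
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="app", namespace="default"),
    spec=client.V1PodSpec(containers=[secure_container]),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```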

An example where AWS KMS is used as the key management system within Python scripts to manage encryption keys for cloud services: when we use AWS KMS with Python scripts, we make use of the KMS endpoint and the key-value pair, which we pass to the script at runtime rather than hard-coding the key. So even when running the Python script as a service, the key is fetched at runtime and is not exposed.
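
A minimal sketch of calling KMS from a Python script so keys are resolved at runtime rather than hard-coded, as the answer suggests (the key alias is hypothetical):

```python
import boto3

kms = boto3.client("kms")


def encrypt(plaintext: bytes, key_alias: str = "alias/app-key") -> bytes:
    """Encrypt a small payload (up to 4 KB) directly with a KMS key."""
    response = kms.encrypt(KeyId=key_alias, Plaintext=plaintext)
    return response["CiphertextBlob"]


def decrypt(ciphertext: bytes) -> bytes:
    """KMS resolves the key from metadata embedded in the ciphertext."""
    response = kms.decrypt(CiphertextBlob=ciphertext)
    return response["Plaintext"]


if __name__ == "__main__":
    token = encrypt(b"database-password")
    assert decrypt(token) == b"database-password"
```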