Vetted Talent

Vijay Kumar

An innovative and passionate DevOps expert with 16 years of overall experience, including 10 years in CI/CD and DevOps solution planning, design and implementation, and 6 years in project management using Agile and hybrid methodologies. Experienced in application development and maintenance in on-premise and cloud environments. Establishes and manages DevOps solutions, with expertise in automation strategies, and manages infrastructure operations in AWS and Azure cloud computing environments.

  • Role

    DevOps Manager

  • Years of Experience

    16 years

Skillsets

  • AWS - 5 Years
  • CI/CD - 10 Years
  • Python - 2 Years
  • Bash - 6 Years
  • PowerShell - 1 Year
  • Cloud Infrastructure - 3 Years
  • DevOps - 5 Years

Vetted For

20 Skills

  • AWS Cloud Implementation Specialist (Hybrid, Bangalore) - AI Screening
  • Result: 50%
  • Skills assessed: Microsoft Azure, Team Collaboration, Vendor Management, API Gateway, AWS GuardDuty, AWS Inspector, AWS Lambda, AWS Security Hub, CI/CD Pipelines, Cloud Automation, Cloud Infrastructure Design, Cloud Security, Identity and Access Management (IAM), AWS, Docker, Good Team Player, Google Cloud Platform, Kubernetes, Problem Solving Attitude, Python
  • Score: 45/90

Professional Summary

16 Years
  • May, 2022 - Present (4 yr)

    DevOps Engineering Manager

    Smiths Detection (Smiths Group)
  • Mar, 2017 - May, 2022 (5 yr 2 months)

    DevOps Manager

    MFX InfoTech (Quess Corp Subsidiary)
  • Aug, 2009 - Mar, 2017 (7 yr 7 months)

    Project Lead (Release Manager)

    Amadeus Software Labs India Pvt Ltd
  • Oct, 2006 - Aug, 2009 (2 yr 10 months)

    Software Engineer

    Wipro Technologies

Applications & Tools Known

  • CentOS
  • Ubuntu
  • Windows Server
  • Shell Script
  • Python
  • PowerShell
  • Terraform
  • Ansible
  • Jenkins
  • SVN
  • BitBucket
  • Maven
  • ANT
  • Make
  • SonarQube
  • Microsoft SQL Server
  • MySQL
  • PostgreSQL
  • AWS
  • Azure
  • Jira
  • BMC Remedy
  • Docker
  • Kubernetes
  • Prometheus
  • Grafana
  • ELK Stack
  • Nagios
  • Nexus
  • RabbitMQ
  • Nginx
  • Confluence
  • Excel
  • MS Project
  • EC2
  • VPC
  • S3
  • ELB
  • EKS
  • ECR
  • Lambda
  • IAM
  • RDS
  • ClearCase
  • Artifactory
  • API Gateway

Work History

16 Years

DevOps Engineering Manager

Smiths Detection (Smiths Group)
May, 2022 - Present (4 yr)

    Roles and Responsibilities:

    • Leading a team to develop automation processes that enable cross-functional teams to deploy, manage, configure, scale, and monitor applications.
    • Collaborate with application development and infrastructure teams to bridge functional gaps and determine the DevOps function set that optimizes the technical environment.
    • Implementation of complete DevOps pipelines setup using Git, Jenkins, Maven, Ansible, Docker, Kubernetes for multiple projects across the organization.
    • Lead the design, build, and operational management of highly secure and scalable software for the business.
    • Hands-on experience with AWS services (EC2, EBS, Monitoring, EFS, S3, Lambda, IAM, VPC).
    • Collaborate with Dev, Infra, Quality assurance and engineering teams from IND, UK & Europe to deliver integrated solutions.
    • Review and setup of CloudWatch alarms and setting up thresholds.
    • Removal of bottlenecks and inefficiencies in DevOps practices.
    • Automation of provisioning and management of cloud resources using infrastructure-as-code (IaC) frameworks Terraform and Ansible on AWS.
    • Lead DevOps transformation initiatives with transparency and visibility of team engagement & stakeholder involvement.
    • Manage on-premise & cloud infrastructures for various projects environments.
    • Setup Kubernetes cluster for multiple containers and implementation of auto scaling.
    • Application deployments using cloud compute platforms (AWS EKS), Helm, containerization (Docker), and automation process implementations using shell and Perl scripting.
    • Create architecture layout in Visio and share it across with project stakeholders.
    • Drive process improvements and operational stability to reduce risks and time to market.
    • Anticipate & manage changes effectively in rapidly evolving global business environment.
    • Working in an agile development environment, collaborating with Application teams by involving in daily scrum & project review meetings to ensure timely completion of project releases and to improve system performance & productivity.
    • Leading a team on initiatives to support migrations from on-premise to AWS, and ensuring knowledge build-up of team members & performance monitoring.
    • Excellent communication and collaborative skills, interpersonal skills and leadership quality with ability to work efficiently in both independent and teamwork environments.
    • Implemented event-driven architecture with AWS Lambda to automate data processing, reducing processing time.
    • Collaborate with development, quality assurance, cyber security and product teams & program managers to develop deployment strategies and to ensure successful deployment of applications.

DevOps Manager

MFX InfoTech (Quess Corp Subsidiary)
Mar, 2017 - May, 2022 (5 yr 2 months)

    Roles and Responsibilities:

    • Deployment of applications to on-premise and Amazon cloud infrastructure.
    • Implemented an end-to-end continuous deployment system, including a user interface for collecting end-user inputs per environment, using Ansible, Jenkins and ServiceNow.
    • Interface and co-ordinate with diverse teams while advocating the best practices for implementation of low risk, more reliable software products across the organization.
    • Deployment of applications using Git, Jenkins, Maven, MS-Build, Docker, Kubernetes, Ansible, Sonarqube, Artifactory, Amazon EC2, EKS & ECR.
    • Developed project release plans, rollouts and automation tools using Perl and shell scripting.
    • Implementation of docker containers deployment and handled multiple & large DevOps engagements across multiple divisions in the organization.
    • Hands on experience on creating EC2, EBS/Infrastructure using Terraform, Ansible.
    • Experience in presales and due diligence (RFP), requirements gathering from customers.
    • Implementation of post-production (SRE) best practices and service-level indicators (e.g., Mean Time To Restore).
    • Identify and explore chaotic situations, conduct formalized experiments, and implement solutions to ensure high availability of applications, services and servers after production deployments.
    • Review IAM security settings, VPC security groups, and inbound/outbound traffic.
    • Expert in troubleshooting issues and bugs; communicate and negotiate with multiple teams to accomplish common project goals.
    • Developed deployment automation across multiple environments and implemented best practices. Setup of various automations using Bash and Perl scripting.
    • Implement security controls and measures within the AWS environment, IAM policies.
    • Setup of EC2 instances, monitor and validate on regular basis, volume/snapshots, restore from backups and networking options available in AWS.
    • Implementation of AWS Lambda and other related AWS services such as S3 and API Gateway.
    • Coordinate with cross functional teams on setup and management of containerized applications using docker and Kubernetes.
    • Integration of CI/CD with a continuous test automation process, uploading test result documents to a SharePoint site using Ansible, Jenkins and PowerShell.
    • Implemented performance optimization of MS SQL based applications for end-user satisfaction.
    • Expertise in load balancing high-volume microservices using Nginx.
    • Setup of auto scaling using Kubernetes and server monitoring using Grafana.

Project Lead (Release Manager)

Amadeus Software Labs India Pvt Ltd
Aug, 2009 - Mar, 2017 (7 yr 7 months)

    Roles and Responsibilities:

    • Led and managed project teams, set clear goals and offered guidance via monthly one-to-ones.
    • Implementation of continuous integration and continuous delivery (CI/CD) using ClearCase and Jenkins.
    • Coordinated with customers to gather requirements for setting up release plans and processes, deliver project expectations.
    • Drove process improvements and operational stability to reduce risks and deliver quality products using various automations and best practices.
    • Established and maintained close, interpersonal working relationships with multiple teams across geographies Asia, Europe and US for effective & efficient project deliverables.
    • Developed project release plans and ensure successful completion of product releases on time and proper stakeholder engagement throughout the project life cycle.
    • Worked with operations team efficiently for scale up applications and infrastructure across the globe.
    • Point of escalation for addressing and providing solutions for critical production issues.
    • Involved in appraisals and career development programs.
    • Led the offshore team and collaborated with the onsite coordinator to deliver quality solutions within the boundaries of scope, schedule and cost for multiple project releases & migrations.
    • Proven ability to prioritize tasks effectively as per market conditions and customer needs.
    • Took up initiatives on continuous improvement, nurturing innovation.
    • Built and managed the team through timely direction, coaching and delegation of project tasks.
    • Maintained stage environment and pre-production logs monitoring & analysis.

Software Engineer

Wipro Technologies
Oct, 2006 - Aug, 2009 (2 yr 10 months)

    Roles and Responsibilities:

    • Worked with key users to facilitate effective application use across business groups.
    • Enhanced project processes by implementing improvements using automation scripts.
    • Managed code integrations for multiple releases, code coverage, and maintenance of the source code tools ClearCase & SVN.
    • Led source code and build teams for code integration activities and releases.
    • Monitor daily builds and track for closure of issues related to code, server and middleware.
    • Plan migration activities for different applications to version control system.
    • Plan, design & documentation and implementation of in-house tools with existing systems.
    • Source code management using SCM tools (ClearCase, SVN) and build management.
    • Performed migration activities for application code to support internet browser versions.

Achievements

  • Lead design, build, and operational management of secure and scalable software
  • Managed infrastructure operations in cloud computing environments AWS, Azure
  • Implemented event-driven architecture with AWS Lambda

Major Projects

3 Projects

Project: OS Migration of core products

May, 2022 - Present (4 yr)

    Roles and Responsibilities:

    • Collaborate with Dev, Infra, Quality assurance and engineering teams from IND, UK & Europe to deliver integrated solutions.
    • Application deployments using cloud compute platforms (AWS EKS), Helm, containerization (Docker), and automation process implementations using shell and Perl scripting.
    • Working in an agile development environment, collaborating with Application teams by involving in daily scrum & project review meetings to ensure timely completion of project releases and to improve system performance & productivity.
    • Automation of provisioning and management of cloud resources using infrastructure-as-code (IaC) frameworks Terraform and Ansible on AWS.
    • Leading a team on initiatives to support migrations from on-premise to AWS, and ensuring knowledge build-up of team members & performance monitoring.

Project: S-Lane A

Nov, 2022 - Oct, 2023 (11 months)

    Roles and Responsibilities:

    • Manage on-premise & cloud infrastructures for various projects environments.
    • Setup Kubernetes cluster for multiple containers and implementation of auto scaling.
    • Review and setup of CloudWatch alarms and setting up thresholds.
    • Removal of bottlenecks and inefficiencies in DevOps practices.
    • Collaborate with development, quality assurance, cyber security and product teams & program managers to develop deployment strategies and to ensure successful deployment of applications.

Project: iCMORE

May, 2022 - Mar, 2023 (10 months)

    Roles and Responsibilities:

    • Implementation of complete DevOps pipelines setup using Git, Jenkins, Maven, Ansible, Docker, Kubernetes for multiple projects across the organization.
    • Lead the design, build, and operational management of highly secure and scalable software for the business.
    • Implemented event-driven architecture with AWS Lambda to automate data processing, reducing processing time.
    • Drive process improvements and operational stability to reduce risks and time to market.
    • Anticipate & manage changes effectively in rapidly evolving global business environment.

Education

  • B.E. (Electronics & Communications)

    VTU, Belgaum (2005)

Certifications

  • PMP (Project Management Professional)

  • CSM (Certified Scrum Master)

AI Interview Questions & Answers

Hi, I am Vijay Kumar, and I currently work for Smiths Detection as a DevOps Engineering Manager. I have 16 years of experience, of which more than 10 years are in DevOps and cloud computing. As part of my roles and responsibilities, I lead various projects, discuss requirements around AWS and DevOps with customer teams, and provide and implement solutions. I also manage a team: for the resources reporting to me, I provide guidance and track their activities, and we follow a 2-week sprint cycle. We work along with the dev, QA and other cross-functional teams through the entire product life cycle. Thank you.

For managing secrets in Kubernetes, there are multiple approaches. One is to define them declaratively in YAML manifests: with kubectl we can create Secrets, and ConfigMaps for values that are not confidential, such as port numbers. Another is environment variables, but those need a precaution: if someone gets access to the server and runs the history command, or describes the deployment of a service, the username and password can be exposed. In a shell, one mitigation is to put a space before the export command, for example ` export USERNAME=vijaykumar`, so the command is not recorded in shell history. Beyond that, we should use encryption, and since we are using AWS we can go with Secrets Manager, where credentials are stored as key-value pairs and the application fetches them at runtime.
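The manifest approach described above can be sketched as follows. This is a minimal illustration (the `db-credentials` name and values are hypothetical): Kubernetes stores Secret values base64-encoded in the `data` field, so a script generating a manifest must encode them first.

```python
import base64
import json

def make_secret_manifest(name, data):
    """Build a Kubernetes Secret manifest dict; the `data` field must hold
    base64-encoded values, which is what `kubectl apply` expects."""
    encoded = {k: base64.b64encode(v.encode()).decode() for k, v in data.items()}
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": "Opaque",
        "data": encoded,
    }

# Hypothetical credentials, for illustration only.
manifest = make_secret_manifest("db-credentials",
                                {"username": "vijay", "password": "s3cret"})
print(json.dumps(manifest, indent=2))
```

Writing this dict out as YAML or JSON and applying it with `kubectl apply -f` would create the Secret; a ConfigMap differs only in kind and in storing plain (non-encoded) values.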

Automating IAM roles is about giving services access to other services in AWS. Identity and access management provides authentication and authorization for getting into a service. For example, if an EC2 instance needs to talk to an S3 bucket, the instance should have an IAM role through which it can communicate directly, without supplying an access key and secret key every time it talks to the bucket. CloudFormation helps us automate this with templates: we can create EC2 instances, IAM roles and so on. In the template we declare the IAM role and choose the policies that need to be part of it. Once the template exists, the next time we need the same setup we simply reuse the existing CloudFormation template. That is how it works.
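The EC2-to-S3 scenario above can be sketched as a CloudFormation template. This is a hedged illustration built as a Python dict (the logical resource names are assumptions): an IAM role trusted by EC2, given read access to S3 via an AWS managed policy, wrapped in an instance profile so it can be attached to an instance.

```python
import json

# Illustrative CloudFormation template: an IAM role that lets an EC2 instance
# read S3 without embedded access keys. Resource names are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "Ec2S3ReadRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                # Trust policy: only the EC2 service may assume this role.
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"Service": "ec2.amazonaws.com"},
                        "Action": "sts:AssumeRole",
                    }],
                },
                "ManagedPolicyArns": ["arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"],
            },
        },
        # Instance profile: the wrapper EC2 needs to carry the role.
        "Ec2InstanceProfile": {
            "Type": "AWS::IAM::InstanceProfile",
            "Properties": {"Roles": [{"Ref": "Ec2S3ReadRole"}]},
        },
    },
}
print(json.dumps(template, indent=2))
```

Saved as JSON, this could be deployed with `aws cloudformation deploy` and reused whenever the same role setup is needed.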

On Kubernetes deployment and scaling: if we consider AWS, we have EKS, the Elastic Kubernetes Service. We can enable auto scaling based on load, that is, the traffic coming in and out of the services from end users. Based on usage, number of hits and other traffic signals, we can put an ELB in front with auto scaling groups. The auto scaling groups can be driven by various factors, such as the number of requests per second and the number of users hitting the service. Scaling can also be based on memory and CPU utilization: for example, if utilization exceeds 70% or 80%, we enable scale-out.
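The CPU-threshold scaling described above can be sketched with the proportional rule the Kubernetes Horizontal Pod Autoscaler documents: desired replicas = ceil(current replicas x current metric / target metric). The numbers below are illustrative, not from the resume.

```python
import math

def desired_replicas(current, cpu_utilization, target=70.0, max_replicas=10):
    """Proportional scaling rule (the formula the Kubernetes HPA uses):
    desired = ceil(current * current_metric / target_metric),
    clamped to the range [1, max_replicas]."""
    desired = math.ceil(current * cpu_utilization / target)
    return max(1, min(desired, max_replicas))

# Load at twice the 70% target doubles the replica count: 2 -> 4.
print(desired_replicas(2, 140.0))
```

At exactly the target utilization the replica count is unchanged, and well below it the count shrinks, which is why a single target threshold is enough to drive both scale-out and scale-in.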

On a Python function to manage cross-region replication of sensitive data: on AWS we can implement this as a Lambda function written in Python that replicates the data to other regions so it can be used across them. Since the data is sensitive and we may not know where it will be consumed, it should be kept encrypted, with encryption and decryption handled around the replication. If services in another region need to reach the data privately, we can create VPC endpoints in their VPCs and access the information through those endpoints.
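A minimal sketch of the replication step above, assuming hypothetical S3 buckets and a boto3-style client interface. The clients are passed in (one per region) rather than created inside the function, so the logic can be exercised without AWS credentials; server-side KMS encryption keeps the copy encrypted at rest, as the answer suggests.

```python
def replicate_object(src_s3, dst_s3, src_bucket, dst_bucket, key):
    """Copy one object from a source-region bucket to a destination-region
    bucket. src_s3 and dst_s3 are S3 clients created for their respective
    regions (e.g. boto3.client('s3', region_name=...)). Returns bytes copied."""
    body = src_s3.get_object(Bucket=src_bucket, Key=key)["Body"].read()
    dst_s3.put_object(
        Bucket=dst_bucket,
        Key=key,
        Body=body,
        ServerSideEncryption="aws:kms",  # keep the replicated copy encrypted
    )
    return len(body)
```

Wrapped in a Lambda handler triggered by S3 events, this would replicate each new object as it lands; for production use, S3's built-in cross-region replication rules are often the simpler choice.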

When integrating AWS Inspector findings with AWS Security Hub, considerations include how security is set up on the account, the scope of inspection, and the level of automation. We need to determine what is placed under inspection, whether it covers multiple regions or just one, and how much of the response is automated. We also need to consider whether Security Hub controls should be enabled alongside Inspector findings so that additional actions can be taken at the admin level. Enabling the complete security suite reduces the chance of falling short of our data-protection goals.

Reviewing the AWS CLI command used to modify an IAM policy, and what is wrong or missing that may cause issues when executing it: the command is `aws iam put-role-policy` with a role name (something like "my sample role"), a policy name, and a policy document pointing at a JSON file. Looking at the syntax, the policy document shows just `/mypolicy`: the path to the file is not an absolute `file://` path, so the file path is incorrect. That is one observation. Apart from that incorrect file path, I don't see any other issues; the rest looks good to me.
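A hedged illustration of the corrected invocation discussed above, assembled as an argument list (the role name, policy name and path are placeholders, not the ones from the screening). The key fixes are that `--policy-document` takes a `file://` URI with a full path and that the document itself must be valid JSON.

```python
import json
import shlex

# Placeholder inline policy; the CLI rejects the call if this JSON is invalid.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"}],
}
json.loads(json.dumps(policy_document))  # sanity-check that the document parses

# Corrected command: note the file:// URI with an absolute path.
cmd = [
    "aws", "iam", "put-role-policy",
    "--role-name", "my-sample-role",
    "--policy-name", "my-policy",
    "--policy-document", "file:///tmp/my-policy.json",
]
print(shlex.join(cmd))
```

Passing a bare relative path like `mypolicy.json` without the `file://` scheme makes the CLI treat the argument as the literal policy text, which then fails JSON validation.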

What process would you use to isolate and resolve a performance bottleneck in a Docker application on AWS? For a containerized application on AWS, the first thing is to enable CloudWatch and look at CPU utilization while the application is running, along with network traffic and overall resource consumption, and check whether the application is invoking dependent services that introduce delay. Based on those observations, we decide whether to enable auto scaling. We can also put an ELB in front: if requests are coming from all over the world to a single node, the load balancer can health-check nodes and route traffic only to healthy ones. ELB can also route traffic based on path patterns; for a ticket-booking application, for example, cancellation and booking-change requests can be routed to different targets. Finally, to isolate the problem, we can deploy the application on a separate instance, away from the other Docker applications sharing the host, and observe how it behaves there before resolving the issue.
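The first triage step above, checking CPU utilization in CloudWatch, can be sketched as a small pure function. The datapoint shape mirrors what CloudWatch's `get_metric_statistics` returns for `CPUUtilization`; the 80% threshold is illustrative.

```python
def flag_cpu_bottleneck(datapoints, threshold=80.0):
    """Given CloudWatch-style datapoints ([{'Average': <percent>}, ...]) for
    the CPUUtilization metric over some window, report whether the average
    exceeds the threshold, i.e. whether CPU looks like the bottleneck."""
    if not datapoints:
        return False  # no data: nothing to conclude yet
    avg = sum(p["Average"] for p in datapoints) / len(datapoints)
    return avg > threshold

# A sustained-high-CPU window trips the flag; a quiet one does not.
print(flag_cpu_bottleneck([{"Average": 92.0}, {"Average": 88.0}]))
```

In practice the datapoints would come from a boto3 CloudWatch client; keeping the decision logic separate from the API call makes it easy to test and to reuse for other metrics such as network throughput.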

We're trying to interface with an AWS managed database, ensuring ACID properties, through a Python-based microservice. The AWS managed database is RDS, which supports engines such as SQL Server and MySQL, and those engines give us the ACID properties. The Python microservice should do its reads and writes inside transactions rather than issuing ad-hoc statements against the database. We should also ensure the data is encrypted both in transit and at rest to prevent any unauthorized access.
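The transactional behaviour described above can be demonstrated with a self-contained sketch. It uses stdlib `sqlite3` so it runs locally without an RDS instance; the RDS engines named in the answer expose the same commit/rollback semantics through their Python drivers. The account data is invented for the demo.

```python
import sqlite3

# In-memory database standing in for an RDS instance in this demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
conn.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 0)")
conn.commit()

try:
    # `with conn` runs both updates in one atomic transaction:
    # either both commit, or an exception rolls both back.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'a'")
        raise ValueError("simulated failure mid-transaction")
except ValueError:
    pass  # the transaction was rolled back

balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'a'"
).fetchone()[0]
print(balance)  # still 100: the failed debit never took effect
```

This is the atomicity half of ACID; the same pattern with a MySQL or SQL Server driver keeps a microservice's multi-statement updates consistent under failure.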

When it comes to container security, we can implement security policies at the container level. We can create and add policies by defining them in a file dedicated to the security policy and attaching it, adding the security section within the container definition. That is how we can implement security in our containers.

AWS KMS is the key management service, and it is used within Python scripts to manage encryption keys for cloud services. In a Python script we make use of the AWS SDK (boto3) to call KMS, passing the key ID when the script runs. That way the script can encrypt and decrypt data through KMS without the key material ever appearing in the code itself.
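A minimal sketch of the KMS usage described above. The wrappers take the KMS client as a parameter (normally `boto3.client("kms")`) so the logic can be exercised without AWS credentials; the key ID would be a real KMS key ARN or alias in practice.

```python
def encrypt_secret(kms, key_id, plaintext):
    """Encrypt a string with AWS KMS. `kms` is a KMS client
    (e.g. boto3.client('kms')); `key_id` is a KMS key ID, ARN or alias.
    Returns the opaque ciphertext blob KMS produces."""
    resp = kms.encrypt(KeyId=key_id, Plaintext=plaintext.encode())
    return resp["CiphertextBlob"]

def decrypt_secret(kms, blob):
    """Decrypt a KMS ciphertext blob back to the original string.
    KMS infers the key from metadata embedded in the blob."""
    resp = kms.decrypt(CiphertextBlob=blob)
    return resp["Plaintext"].decode()
```

Note that `decrypt` needs no key ID: KMS stores the key reference inside the ciphertext blob, which is what lets scripts handle secrets without carrying key material.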