
An innovative and passionate DevOps expert with 16 years of overall experience, including 10 years in planning, designing, and implementing CI/CD and DevOps solutions and 6 years in project management using Agile and hybrid methodologies. Experienced in application development and maintenance across on-premise and cloud environments. Establishes and manages DevOps solutions, with deep expertise in automation strategies, and manages infrastructure operations in the AWS and Azure cloud computing environments.
DevOps Engineering Manager
Smiths Detection (Smiths Group) - DevOps Manager
MFX InfoTech (Quess Corp Subsidiary) - Project Lead (Release Manager)
Amadeus Software Labs India Pvt Ltd - Software Engineer
Wipro Technologies
CentOS

Ubuntu

Windows Server

Shell Script

Python

PowerShell

Terraform

Ansible
Jenkins

SVN

BitBucket

Maven

ANT

Make

SonarQube

Microsoft SQL Server

MySQL

PostgreSQL

AWS
Azure
Jira

BMC Remedy
Docker

Kubernetes

Prometheus
Grafana
ELK Stack

Nagios

Nexus

RabbitMQ

Nginx

Confluence

Excel

MS Project

EC2

VPC

S3

ELB

EKS

ECR

Lambda

IAM

RDS

Clearcase

Artifactory

API Gateway
Roles and Responsibilities:
Hi, I am Vijay Kumar Dota, and currently I'm working for Smiths Detection as a DevOps Engineering Manager. I have 16 years of experience, out of which more than 10 years are in DevOps and cloud computing. As part of my roles and responsibilities, I lead various projects: I hold discussions with customer-facing teams to understand their requirements around AWS and DevOps, provide the solutions, and also implement them. I also work with the team; for the resources reporting to me, I provide guidance and keep track of their activities. We follow a two-week sprint cycle, working along with the dev and QA teams and other cross-functional teams through the SDLC life cycle, with three-month release cycles. Thank you.
Okay, so, managing secrets on the Kubernetes side. There are multiple ways. One is to define them in the env section of the YAML files. We can also create Secrets and ConfigMaps with kubectl; ConfigMaps are for values like port numbers that are not very confidential. With plain environment variables we have to take one more precaution: if someone checks the shell history, or describes the deployment of a Kubernetes service, they can come to know the username and password, and the values also sit unencrypted in etcd. To avoid that, we can go with encryption methods, for example encrypting Secret data at rest. If you do want to set an environment variable on a server, prefix the command with a space: instead of `export USERNAME=VijayKumar`, give a space first and then `export USERNAME=VijayKumar`. Then even if someone gets access to the server and runs the history command to fetch the username and password, ideally it won't show up; that is one of the nice features we get from Linux (it depends on the shell's HISTCONTROL setting). And since we are using AWS, we can definitely go with Secrets Manager, where we store key-value pairs; we can fetch those and use them within our application.
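A minimal sketch of that last option, assuming boto3 and a hypothetical secret named prod/app/db stored as a JSON key-value pair:

```python
import json
import boto3

def get_db_credentials(secret_id="prod/app/db", region="us-east-1"):
    """Fetch a key-value secret from AWS Secrets Manager (hypothetical name)."""
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_id)
    # Key-value secrets come back as a JSON string in SecretString.
    return json.loads(response["SecretString"])

creds = get_db_credentials()
# e.g. creds["username"], creds["password"] -- never exported, never in shell history
```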
To automate an IAM role: IAM, identity and access management, is all about giving access to different services, providing the authentication and authorization to get into a service. For example, if an EC2 instance needs to talk to an S3 bucket, the instance should have an IAM role attached, through which it can talk directly without supplying an access key and secret key every time it communicates with S3. CloudFormation helps us automate this with templates: in the template we declare the IAM role and choose the policies that need to be part of that specific role. So with that, we create the template once, and the next time we want the same setup we simply reuse the existing CloudFormation template. That's how it works.
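A minimal sketch of such a template, kept in Python and deployed with boto3 (the role and stack names are hypothetical; CAPABILITY_IAM is required for stacks that create IAM resources):

```python
import json
import boto3

# CloudFormation template declaring an IAM role that lets EC2 read from S3.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "Ec2S3ReadRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"Service": "ec2.amazonaws.com"},
                        "Action": "sts:AssumeRole",
                    }],
                },
                "ManagedPolicyArns": [
                    "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
                ],
            },
        }
    },
}

boto3.client("cloudformation").create_stack(
    StackName="ec2-s3-read-role",
    TemplateBody=json.dumps(template),
    Capabilities=["CAPABILITY_IAM"],
)
```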
Kubernetes operators... see, when it comes to Kubernetes and service deployment and scaling: if we consider the AWS cloud, we have EKS, the Elastic Kubernetes Service, where we can tune scaling based on the load, that is, how much traffic is coming in and going out of the services from the end users. Based on the usage, the number of hits, and the traffic from end users, we can go with ELB, Elastic Load Balancing, along with Auto Scaling groups. For Auto Scaling groups we can choose different triggers: the number of requests per second, the number of users hitting the service, or memory and CPU utilization. If utilization exceeds 70% or 80%, we can have auto scaling kick in. So that's it.
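On the Kubernetes side, the same CPU-threshold idea can be sketched as a HorizontalPodAutoscaler, here via the official kubernetes Python client against a hypothetical Deployment named web:

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # the 70% threshold mentioned above
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```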
Okay, just a second. A Python function to manage cross-region replication of sensitive data. So there are different ways we can replicate data and make it usable across regions. Say, for example, we have implemented a Lambda function using Python that stores the sensitive data and handles the encryption and decryption. If some other service in a different region needs to talk with that Lambda function, we can make use of VPC endpoints: within the VPC we create the endpoint, and from the other region we can access the information through it.
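One possible shape for such a function, sketched as a Lambda handler that copies newly written S3 objects into a bucket in another region and re-encrypts them with a KMS key there (the bucket names, key alias, and regions are all hypothetical):

```python
import boto3

REPLICA_BUCKET = "my-data-replica-eu-west-1"  # hypothetical replica bucket
REPLICA_KMS_KEY = "alias/replica-data-key"    # hypothetical KMS key in that region
s3 = boto3.client("s3", region_name="eu-west-1")

def handler(event, context):
    # Triggered by S3 put events in the source region.
    for record in event["Records"]:
        src_bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        s3.copy_object(
            Bucket=REPLICA_BUCKET,
            Key=key,
            CopySource={"Bucket": src_bucket, "Key": key},
            ServerSideEncryption="aws:kms",  # re-encrypt at rest in the target region
            SSEKMSKeyId=REPLICA_KMS_KEY,
        )
```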
What considerations would you take into account when integrating AWS Inspector findings with AWS Security Hub? Security Hub deals with how security is set up on the account, in a consolidated way. When we are integrating the Inspector findings into it, we need to make sure what kinds of resources we are going to put under inspection mode: whether it is across regions or only one region, and whether it is at the complete organization level. We also need to decide whether we really need to enable the Security Hub content along with the inspection; that decision should be taken at the admin level. And if you enable inspection across all the security-level things, there is also the compliance around the data which we are supposed to maintain.
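Operationally, the subscription itself is one call per region; a minimal sketch with boto3, assuming Security Hub is already enabled in that region:

```python
import boto3

region = "us-east-1"
securityhub = boto3.client("securityhub", region_name=region)

# Subscribe Security Hub to Amazon Inspector findings in this one region;
# repeat per region if the integration must span regions.
securityhub.enable_import_findings_for_product(
    ProductArn=f"arn:aws:securityhub:{region}::product/aws/inspector"
)
```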
Review the AWS CLI command used to modify an IAM policy, and say what is wrong or missing from the command that may cause issues when trying to execute it. So: aws iam put-role-policy, --role-name MySampleRole, --policy-name, and --policy-document with a JSON file specified for the IAM role. I can see the syntax here is file://. Notice that it's showing file://my-policy: the path to the file is not an absolute path, so that is wrong; that's one observation. Apart from that, I don't see any other issues with the sample policy or the sample role. Other than the file path not being absolute, the rest of it looks good to me.
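For comparison, a minimal boto3 equivalent of the same put-role-policy call, with hypothetical names and the policy document passed inline instead of via a file path:

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"}],
}
boto3.client("iam").put_role_policy(
    RoleName="MySampleRole",
    PolicyName="MySamplePolicy",
    PolicyDocument=json.dumps(policy),  # inline document, so no file path to get wrong
)
```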
What process would you use to isolate and resolve a performance bottleneck in a Dockerized application deployed on AWS? So, when we're talking about a Dockerized, containerized application on AWS: the first and foremost thing is to enable CloudWatch and check the metrics, the CPU utilization when the specific Docker application is running, the network traffic, the utilization of resources, and whether it is invoking any other dependent services from which there is a delay. Based on our observations we can enable auto scaling, and after that we can also go with ELB, Elastic Load Balancing. In case we see a large number of requests from all over the world failing against a single node's DNS, we can go with DNS failover, keeping health checks and routing the traffic only to healthy nodes. Apart from that, with an ALB we can route the traffic based on path patterns; for example, in a ticket-booking application, cancellation, issuing a ticket, and changing a ticket can each be routed separately. Based on any of these we can isolate the bottleneck. And to narrow it down further, we can deploy the application on an isolated instance and see how it performs there, compared with how it performs running alongside other Docker applications on a shared instance.
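The first diagnostic step above can be sketched in a few lines of boto3, pulling CPU utilization for one containerized service from CloudWatch (the cluster and service names are hypothetical):

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "prod-cluster"},
        {"Name": "ServiceName", "Value": "booking-service"},
    ],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                       # 5-minute buckets
    Statistics=["Average", "Maximum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```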
Describe the workflow for a Python-based microservice to interface with an AWS managed database, ensuring ACID properties. So, when we say AWS managed database, that is RDS, where we have different database engines, such as SQL Server and MySQL. When we interface from the Python-based microservice, we'll be working with the RDS database while preserving all the ACID properties within the microservice, and we make sure it is nowhere talking with the database in the open: the connection should be in an encrypted manner so that the data is not compromised, whether it is in transit or at rest. That's how we can do that.
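A minimal sketch of the ACID and in-transit-encryption points, assuming a MySQL-engine RDS instance and the pymysql driver (the endpoint, credentials, and tables are hypothetical):

```python
import pymysql

conn = pymysql.connect(
    host="mydb.abc123.us-east-1.rds.amazonaws.com",  # hypothetical RDS endpoint
    user="app",
    password="...",                                  # fetch from Secrets Manager
    database="bookings",
    ssl={"ca": "/etc/pki/rds-ca-bundle.pem"},        # TLS: encrypted in transit
)
try:
    with conn.cursor() as cur:
        # Both statements succeed or fail together (atomicity, consistency).
        cur.execute("UPDATE seats SET held = held + 1 WHERE flight_id = %s", (42,))
        cur.execute("INSERT INTO orders (flight_id) VALUES (%s)", (42,))
    conn.commit()       # durable once committed
except Exception:
    conn.rollback()     # leave the database consistent on any failure
    raise
finally:
    conn.close()
```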
Demonstrate knowledge of container security best practices when orchestrating deployment with Kubernetes on AWS. See, when it comes to container security, we can go with security policies at the container level. We create the policies and attach them by adding the relevant section within the deployment: we define the security policy in a YAML file and attach it within that. That's how we can enforce container security.
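One concrete form of that per-container policy section, sketched with the kubernetes Python client (the container name and image are hypothetical):

```python
from kubernetes import client

container = client.V1Container(
    name="app",
    image="myregistry/app:1.0",                # hypothetical image
    security_context=client.V1SecurityContext(
        run_as_non_root=True,                  # refuse to start as root
        read_only_root_filesystem=True,        # immutable root filesystem
        allow_privilege_escalation=False,      # block setuid-style escalation
        capabilities=client.V1Capabilities(drop=["ALL"]),
    ),
)
```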
Craft an example where AWS KMS, the key management service, is used within Python scripts to manage encryption keys for cloud services. So when we use AWS KMS within Python scripts, we make use of the KMS key by its reference, a key ID or alias, and we pass that reference through our script while we're running that Python script. So even when I run any Python script, the key material itself never needs to be exposed. Yeah.
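A minimal sketch of that pattern with boto3, referencing the key only by a hypothetical alias so no key material ever appears in the script:

```python
import boto3

kms = boto3.client("kms")
KEY_ID = "alias/app-data-key"   # hypothetical alias; key material stays inside KMS

# Encrypt a small payload (e.g. a database password) under the KMS key.
ciphertext = kms.encrypt(KeyId=KEY_ID, Plaintext=b"db-password")["CiphertextBlob"]

# Decrypt it later; KMS resolves the key from metadata in the ciphertext.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"db-password"
```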