
An innovative and passionate DevOps expert with 16 years of overall experience, including 10 years in CI/CD and DevOps solution planning, design, and implementation, and 6 years in project management using Agile and hybrid methodologies. Experienced in application development and maintenance in on-premise and cloud environments. Establishes and manages DevOps solutions, with expertise in automation strategies, and manages infrastructure operations in AWS and Azure cloud environments.
DevOps Engineering Manager
Smiths Detection (Smiths Group) - DevOps Manager
MFX InfoTech (Quess Corp Subsidiary) - Project Lead (Release Manager)
Amadeus Software Labs India Pvt Ltd - Software Engineer
Wipro Technologies
CentOS

Ubuntu

Windows Server

Shell Script

Python

PowerShell

Terraform

Ansible
Jenkins

SVN

BitBucket

Maven

ANT

Make

SonarQube

Microsoft SQL Server

MySQL

PostgreSQL

AWS
Azure
Jira

BMC Remedy
Docker

Kubernetes

Prometheus
Grafana
ELK Stack

Nagios

Nexus

RabbitMQ

Nginx

Confluence

Excel

MS Project

EC2

VPC

S3

ELB

EKS

ECR

Lambda

IAM

RDS

Clearcase

Artifactory

API Gateway
Hi, I am Vijay Kumar Dota, and I currently work for Smiths Detection as a DevOps Engineering Manager. I have 16 years of experience, of which more than 10 years are in DevOps and cloud computing. As part of my roles and responsibilities, I lead various projects, hold discussions with customer teams to understand their requirements around AWS and DevOps, and provide and implement solutions. I also work with the team: I have resources reporting to me, I provide guidance and track their activities, and we follow a two-week sprint cycle. We work along with the dev, QA, and other cross-functional teams through the entire product life cycle. We are three months into the new release cycle. Thank you.
On managing secrets in Kubernetes: there are multiple ways to do this. One way is to define values in the environment section of the YAML manifests. We can also create Secrets and ConfigMaps with kubectl; ConfigMaps are fine for values such as port numbers that are not confidential, while Secrets are meant for sensitive data. If we go with environment variables, we should take one more precaution: if you run a command such as export USERNAME=vijaykumar on a server, anyone who runs the history command, or who describes the deployment of that Kubernetes service, can see the username and password. To avoid that, put a space before the export command; then even if someone gets access to the server and checks the history, the value will not show up (this relies on the shell's ignorespace history setting). Beyond that, we should use encryption for secret values. And since we are using AWS, we can also go with Secrets Manager, where the credentials are stored as key-value pairs, and the application fetches them at runtime.
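As one illustration of the Secrets Manager option, here is a minimal boto3 sketch; the secret name my-app/db-credentials, its JSON keys, and the region are hypothetical.

```python
import json
import boto3

def get_db_credentials(secret_name="my-app/db-credentials", region="us-east-1"):
    """Fetch credentials from AWS Secrets Manager instead of hard-coding them."""
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_name)
    # Secrets are commonly stored as a JSON string of key-value pairs.
    secret = json.loads(response["SecretString"])
    return secret["username"], secret["password"]

if __name__ == "__main__":
    username, password = get_db_credentials()
    print(f"Fetched credentials for user: {username}")
```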
To automate IAM roles: IAM is about giving access when different services in AWS need to talk to each other. Identity and Access Management gives us authentication and authorization for a service. For example, if an EC2 instance needs to talk to an S3 bucket, the instance should have an IAM role attached, so it can communicate with S3 directly without passing an access key and secret key on every call. CloudFormation helps us automate this with templates: in a template we define the resources we want to create, such as EC2 instances and IAM roles, and for an IAM role we choose the policies that need to be part of it. Once the template exists, the next time we need the same setup we can simply reuse that CloudFormation template. That is how it works.
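As a minimal sketch of this idea, the following boto3 snippet creates a CloudFormation stack containing one IAM role that EC2 can assume with S3 read-only access; the stack name and role logical ID are hypothetical.

```python
import json
import boto3

# Minimal CloudFormation template: an IAM role that EC2 instances can assume,
# with the AmazonS3ReadOnlyAccess managed policy attached.
TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "Ec2S3ReadRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"Service": "ec2.amazonaws.com"},
                        "Action": "sts:AssumeRole",
                    }],
                },
                "ManagedPolicyArns": [
                    "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
                ],
            },
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="ec2-s3-read-role",        # hypothetical stack name
    TemplateBody=json.dumps(TEMPLATE),
    Capabilities=["CAPABILITY_IAM"],     # required when the stack creates IAM resources
)
```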
For Kubernetes service deployment and scaling, consider AWS as the cloud: we have EKS, the Elastic Kubernetes Service, and we can choose how to scale based on the load, that is, how much traffic is coming in and going out of the services from end users. Based on usage, the number of hits, and other traffic patterns, we can put an ELB (load balancer) in front and use auto scaling groups. Auto scaling can be driven by various factors, such as the number of requests per second or the number of users hitting the service, and by memory and CPU utilization: if it exceeds 70% or 80%, we scale out automatically.
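A minimal sketch of the CPU-based scaling piece, assuming an EC2 auto scaling group backing the EKS node group (the ASG and policy names are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy: keep the group's average CPU around 70%, so new
# instances are launched automatically when utilization stays above that.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="eks-node-group-asg",   # hypothetical ASG name
    PolicyName="cpu-target-70",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 70.0,
    },
)
```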
A Python function to manage cross-region replication of sensitive data: if we are using AWS, this is about making sensitive data available across regions in a controlled way. We can implement it as a Lambda function written in Python. Because the data is sensitive and we don't always know where it will be consumed, it should be encrypted when it is replicated. And if other services need to reach the data or the function privately, we can create VPC endpoints; with the endpoint in place, a service in a different region can access that information without going over the public internet.
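As a hedged sketch of such a Lambda function, the handler below copies newly created S3 objects into a bucket in another region and re-encrypts them with KMS; the bucket names, region, and key alias are hypothetical.

```python
import boto3

# Hypothetical destination bucket, region, and KMS key for the replicated copies.
DEST_BUCKET = "my-app-data-replica-eu"
DEST_REGION = "eu-west-1"
DEST_KMS_KEY_ID = "alias/replica-data-key"

s3_dest = boto3.client("s3", region_name=DEST_REGION)

def lambda_handler(event, context):
    """Triggered by S3 object-created events; copies each new object to a
    bucket in another region, re-encrypting it with the destination KMS key."""
    for record in event.get("Records", []):
        src_bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        s3_dest.copy_object(
            Bucket=DEST_BUCKET,
            Key=key,
            CopySource={"Bucket": src_bucket, "Key": key},
            ServerSideEncryption="aws:kms",
            SSEKMSKeyId=DEST_KMS_KEY_ID,
        )
    return {"status": "replicated"}
```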
When integrating AWS Inspector findings with AWS Security Hub, the considerations include how security is set up on the account, the scope of the inspection, and the level of automation. We need to determine what resources we will put under inspection, whether the integration spans multiple regions or just one, and how much of the response is automated. We also need to consider whether to enable Security Hub controls alongside the Inspector findings so that additional actions can be taken from the administrator level. If we enable the full set of security checks, the risk of missing our data protection goals goes down.
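As a small hedged sketch of that integration from the Python side, the snippet below queries Security Hub for findings that originated from Inspector (the severity filter and region are assumptions):

```python
import boto3

# Security Hub must already be enabled in this account/region,
# e.g. via a one-time call to securityhub.enable_security_hub().
securityhub = boto3.client("securityhub", region_name="us-east-1")

# Pull high-severity findings that came from Amazon Inspector for central triage.
response = securityhub.get_findings(
    Filters={
        "ProductName": [{"Value": "Inspector", "Comparison": "EQUALS"}],
        "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
    },
    MaxResults=50,
)
for finding in response["Findings"]:
    print(finding["Title"], finding["Severity"]["Label"])
```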
Reviewing the AWS CLI command used to modify an IAM policy, and what is wrong or missing that may cause issues when executing it: the command is aws iam put-role-policy with --role-name MySampleRole, --policy-name MyPolicy, and --policy-document pointing at a JSON file. Looking at the syntax, the policy document is given as a plain /mypolicy path; that is not the correct absolute path to the file (and the CLI expects a file:// prefix when the document is read from a file), so that part is incorrect. Apart from the incorrect file path reference, I don't see any other mistakes; the rest of the command looks good to me.
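For reference, a hedged Python equivalent of the corrected call using boto3; the role name and policy name come from the transcript, and the absolute file path is hypothetical.

```python
import json
import boto3

iam = boto3.client("iam")

# Load the inline policy document from an absolute path on disk.
with open("/home/user/policies/mypolicy.json") as f:   # hypothetical absolute path
    policy_document = json.load(f)

# Equivalent of: aws iam put-role-policy --role-name MySampleRole \
#   --policy-name MyPolicy --policy-document file:///home/user/policies/mypolicy.json
iam.put_role_policy(
    RoleName="MySampleRole",
    PolicyName="MyPolicy",
    PolicyDocument=json.dumps(policy_document),
)
```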
What process would you use to isolate and resolve a performance bottleneck in a Dockerized application on AWS? When we are talking about a containerized application on AWS and we see a performance issue, the first thing is to enable CloudWatch and look at the CPU utilization while that specific application is running. We also need to look at the network traffic and resource utilization, and check whether the application is invoking dependent services that are introducing the delay. Based on those observations, we decide whether to enable auto scaling. After that, we can also put an ELB (Elastic Load Balancer) in front: if a large number of requests from all over the world is hitting a single node, we can use DNS-based load balancing, keep checking health, and route traffic only to healthy nodes. The ELB can also route traffic based on path patterns; for example, in a ticket-booking application, cancellation requests and ticket-change requests can be routed to different back ends. To isolate the problem, we can also deploy the container onto a separate instance and see how it behaves on its own, instead of running it alongside other Docker applications on the same instance, and then resolve from there.
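As a minimal sketch of the first step, pulling CPU utilization from CloudWatch for one instance (the instance ID and region are hypothetical):

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Average CPU utilization of one instance over the last hour, in 5-minute buckets.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical instance
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```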
We are interfacing with an AWS managed database while ensuring ACID properties through a Python-based microservice. An AWS managed database here means RDS, which supports engines such as SQL Server and MySQL. For the Python-based microservice, we rely on the engine's ACID guarantees and wrap the database operations in transactions inside a data access layer, so the rest of the microservice is not talking to the database directly. We should also ensure that the data is encrypted both in transit and at rest to prevent unauthorized access.
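A minimal sketch of the transactional part, assuming a PostgreSQL engine on RDS and psycopg2; the endpoint, table, and credentials are hypothetical and would normally come from Secrets Manager.

```python
import psycopg2

# Hypothetical RDS PostgreSQL endpoint and credentials (fetch from Secrets Manager in practice).
conn = psycopg2.connect(
    host="mydb.abc123xyz.us-east-1.rds.amazonaws.com",
    dbname="orders",
    user="app_user",
    password="not-a-real-password",
    sslmode="require",            # encrypt the connection in transit
)

def transfer_stock(from_item: int, to_item: int, qty: int) -> None:
    """Both updates commit together or not at all, preserving atomicity."""
    try:
        with conn.cursor() as cur:
            cur.execute("UPDATE stock SET qty = qty - %s WHERE item_id = %s", (qty, from_item))
            cur.execute("UPDATE stock SET qty = qty + %s WHERE item_id = %s", (qty, to_item))
        conn.commit()
    except Exception:
        conn.rollback()           # roll back the whole transaction on any failure
        raise
```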
When it comes to container security, we can implement security policies at the container level. We can define the policy settings in a file dedicated to security and then attach them within the container definition, adding a security section to the container spec. That is how we can enforce security on our containers.
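On Kubernetes, one way this looks in practice is a securityContext on the container; a minimal sketch with the kubernetes Python client follows (the container name and image are hypothetical).

```python
from kubernetes import client

# Container definition with a restricted security context: no root user,
# read-only root filesystem, no privilege escalation, and all capabilities dropped.
secure_container = client.V1Container(
    name="my-app",                               # hypothetical container name
    image="myrepo/my-app:1.0",
    security_context=client.V1SecurityContext(
        run_as_non_root=True,
        read_only_root_filesystem=True,
        allow_privilege_escalation=False,
        capabilities=client.V1Capabilities(drop=["ALL"]),
    ),
)

pod_spec = client.V1PodSpec(containers=[secure_container])
```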
AWS KMS is the key management service, and it can be used within Python scripts to manage encryption keys for cloud services. In the Python scripts we make use of the AWS SDK (boto3): we reference the KMS key and pass in the data to be encrypted or decrypted while running the script, so plaintext values never need to be hard-coded.
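A minimal sketch of that flow with boto3; the key alias and the sample plaintext are hypothetical.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")
KEY_ID = "alias/my-app-key"   # hypothetical KMS key alias

# Encrypt a sensitive value with the KMS key.
encrypted = kms.encrypt(KeyId=KEY_ID, Plaintext=b"db-password-123")
ciphertext = encrypted["CiphertextBlob"]

# Decrypt it later; KMS resolves the key from the ciphertext metadata.
decrypted = kms.decrypt(CiphertextBlob=ciphertext)
print(decrypted["Plaintext"].decode())
```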