
12 years of industry experience as a Cloud DevOps Engineer, with specialization in AWS, Kubernetes, Terraform, Docker, Jenkins, Python, and Linux.
Senior DevOps Engineer
Nomis Solutions - Senior DevOps Engineer
Smatbee Network - DevOps Engineer
Xoriant Solutions Pvt Ltd - Senior Software Developer
Ebusinessware - Perl Developer
Cognizant Technology Solutions (CTS) - Perl Developer
Scan IT & Shipco IT - Software Engineer
Artech Infosystem
Jenkins

Git

Docker

GitHub

Bitbucket

Maven

Nexus

SonarQube

Ansible

AWS ECS

Terraform

Datadog

SumoLogic

EC2

VPC

S3

Auto Scaling

CloudWatch

Route53

ECR

CodeDeploy

TeamCity

uDeploy

Splunk

Perforce

Sybase

Tableau

PagerDuty

Perl
Hi, I'm a senior DevOps engineer. I have 12 years of experience in IT, of which 5 years are as a DevOps engineer, along with experience in cloud computing and automation engineering. So far I have worked across different industry domains, mainly fintech and investment banking. In my last project, with Nomis Solutions, I worked as a senior DevOps engineer and was part of the cloud engineering team. There I took care of the AWS cloud and its automation, and did a lot of automation using Terraform and Python. I have also worked on Red Hat Linux, Perl, Python, and shell scripting. Along with these tools, I have experience with Oracle databases and SQL query optimization, as well as MongoDB. I also have experience with CI/CD pipelines in Jenkins: taking care of the pipelines, optimizing them, and understanding some critical pipelines for build and deployment activities. Also, I have experience with monitoring tools like Datadog and Splunk, and on the AWS side, CloudWatch. Yeah, that's all about me. Thank you.
Yep. So when we talk about automating AWS S3 (for example, behind CloudFront), we have to share the S3 bucket name, and we need AWS credentials, that is, an access key ID and a secret access key. The IAM user is supposed to have access to the S3 bucket, including list/read access and download access as well. Then we can check whether the bucket has data in it or not.
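A minimal boto3 sketch of that kind of check, assuming a hypothetical bucket name and that credentials come from the environment, ~/.aws/credentials, or an attached IAM role:

```python
import boto3

# Hypothetical bucket name; credentials are picked up from the environment,
# a shared credentials file, or an IAM role.
BUCKET = "example-data-bucket"

s3 = boto3.client("s3")

# List up to 10 objects to check whether the bucket has any data.
resp = s3.list_objects_v2(Bucket=BUCKET, MaxKeys=10)
if resp.get("KeyCount", 0) == 0:
    print(f"{BUCKET} is empty")
else:
    for obj in resp["Contents"]:
        print(obj["Key"], obj["Size"])
    # Downloading requires s3:GetObject permission on the bucket/key.
    s3.download_file(BUCKET, resp["Contents"][0]["Key"], "/tmp/sample-object")
```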
I can say here, whenever we think about using Python, we have to import the library. If the microservice architecture is dockerized, then we have to create a Dockerfile, and as we know, in the Dockerfile we write step by step how the image is constructed. Then we can run the docker build command, which builds the image file and stores it locally. All of these steps we can also automate together in Python. For that, we need to import the docker library; using it, we can create a client object, and from that object we can call the different methods.
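A minimal sketch with the Docker SDK for Python (the docker package), assuming a hypothetical ./app directory that contains a Dockerfile:

```python
import docker

# Connect to the local Docker daemon (the same one the docker CLI talks to).
client = docker.from_env()

# Build an image from the hypothetical ./app build context.
image, build_logs = client.images.build(path="./app", tag="myservice:latest")
for line in build_logs:
    print(line.get("stream", "").strip())

# A push could follow, e.g. client.images.push("myservice", tag="latest"),
# assuming the tag points at a registry the daemon is logged in to.
```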
Okay. So if we want to optimize the Docker image size, then in the Dockerfile we have to use the light version of the base image, so that the image file does not get bloated. There are two types of base images: a light version and a full library version. For example, with Alpine we can write FROM the Alpine-based variant, and the image can shrink from around 100 MB to just a few MB. In that way, we can optimize the Docker image size.
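A small illustrative Dockerfile along those lines, assuming a hypothetical Python microservice made of app.py plus requirements.txt:

```dockerfile
# Alpine-based image instead of the full one keeps the final size small.
FROM python:3.12-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds;
# --no-cache-dir avoids shipping pip's download cache inside the image.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

CMD ["python", "app.py"]
```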
In this case, we think about how to monitor the Python application. For example, the HTTP or HTTPS URL needs to be monitored, and the application logs need to be monitored, using different monitoring tools: for the logs, something like Sumo Logic, Datadog, or Splunk, with dashboards in Grafana, and for the URL, an uptime checker such as Uptrends. In that way, we can monitor the application, observe any downtime, and keep that downtime to a minimum.
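As a rough sketch of the URL side of that monitoring, a small Python probe (the URL is hypothetical); in practice an uptime tool such as Uptrends or a synthetic check runs this kind of probe on a schedule and raises the alert:

```python
import time
import requests

# Hypothetical health endpoint to probe.
URL = "https://myapp.example.com/health"

def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers 200 within the timeout."""
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=timeout)
        latency = time.monotonic() - start
        print(f"{url} -> {resp.status_code} in {latency:.2f}s")
        return resp.status_code == 200
    except requests.RequestException as exc:
        print(f"{url} -> FAILED: {exc}")
        return False

if __name__ == "__main__":
    if not check(URL):
        # Hook an alert here (PagerDuty, Slack webhook, etc.).
        raise SystemExit(1)
```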
So if we talk about Docker containers in a CI/CD pipeline, then when the Docker image is getting built in the pipeline, we need to take care of security. Docker has the capability to deal with passwords and other secrets in a secure, encrypted way, rather than baking them into the image, and that is what we should take care of.
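One hedged sketch of what that can look like with BuildKit build secrets (the secret id, file, and install command are hypothetical); the secret is mounted only for the RUN step and never lands in an image layer:

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12-alpine

# The secret is only available while this RUN step executes; it is not
# written into any image layer or into the build history.
RUN --mount=type=secret,id=pip_token \
    PIP_INDEX_URL="https://__token__:$(cat /run/secrets/pip_token)@pypi.example.com/simple" \
    pip install --no-cache-dir myapp

# Built (e.g. from the Jenkins pipeline) with:
#   DOCKER_BUILDKIT=1 docker build --secret id=pip_token,src=./pip_token.txt .
```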
I don't have any idea on this.
So, from this transcription, I see there is an if block and an else block, and the if block tests a .txt file: if it is a regular file type, then it says the file exists. Actually, instead of -f we could also use -s, to check whether the file is zero-sized or not; -f checks only that it is a regular file. So only when the file passes that test will the condition succeed and the rest of the script run. Otherwise, yeah, the condition may be wrong here.
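A small bash sketch of the kind of check being described (the filename is a placeholder for whatever the original script used):

```bash
#!/usr/bin/env bash

file="data.txt"   # placeholder for the file the original script tests

if [ -f "$file" ]; then            # -f: the path exists and is a regular file
    echo "file exists"
    if [ -s "$file" ]; then        # -s: additionally, the file is not empty
        echo "file is not empty"
    fi
else
    echo "file does not exist"
fi
```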
In this case, we think about the load balancer together with auto scaling. The load balancer connects to a target group, and the target group points to EC2 instances. We need to have a number of EC2 instances in the target group, and the load balancer should span two availability zones, with the EC2 instances placed in different availability zones. So if one availability zone or one instance goes down, the application still serves the same data from an instance in the other zone/subnet. In this way, we can perform a zero-downtime deployment.
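A minimal boto3 sketch of that layout, assuming hypothetical subnet/VPC IDs (in two different AZs) and an existing launch template named "app-lt":

```python
import boto3

# Hypothetical IDs; the two subnets sit in different availability zones.
SUBNET_A = "subnet-aaaa1111"   # e.g. us-east-1a
SUBNET_B = "subnet-bbbb2222"   # e.g. us-east-1b
VPC_ID   = "vpc-0123456789"

elb = boto3.client("elbv2")
asg = boto3.client("autoscaling")

# Application Load Balancer spanning both availability zones.
lb = elb.create_load_balancer(
    Name="app-alb", Type="application", Subnets=[SUBNET_A, SUBNET_B]
)["LoadBalancers"][0]

# Target group the load balancer forwards to, with a health check path.
tg = elb.create_target_group(
    Name="app-tg", Protocol="HTTP", Port=80, VpcId=VPC_ID,
    HealthCheckPath="/health",
)["TargetGroups"][0]

elb.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"], Protocol="HTTP", Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)

# Auto Scaling group spread across both subnets/AZs and attached to the
# target group, so the ALB only routes traffic to healthy instances.
asg.create_auto_scaling_group(
    AutoScalingGroupName="app-asg",
    MinSize=2, MaxSize=4, DesiredCapacity=2,
    VPCZoneIdentifier=f"{SUBNET_A},{SUBNET_B}",
    TargetGroupARNs=[tg["TargetGroupArn"]],
    LaunchTemplate={"LaunchTemplateName": "app-lt", "Version": "$Latest"},
)
```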
No idea how to do this.
For any application, we can think of a disaster recovery plan, where we will have a similar application stack in another subnet or availability zone. The data should also get backed up onto the DR server, and we should test that DR setup. We have to take backups of the data, say weekly or monthly, and keep the DR side up to date. In this way, we can make a strategy for recovery and backup of the application.
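One way the periodic data backup described here could be scripted, as a hedged sketch assuming the data lives in an RDS database and the DR copy goes to a second region (all identifiers, including the account ID, are placeholders):

```python
import datetime
import boto3

# Hypothetical identifiers; the DR copy of each snapshot lives in a second region.
PRIMARY_REGION = "us-east-1"
DR_REGION      = "us-west-2"
DB_INSTANCE    = "myapp-db"
ACCOUNT_ID     = "123456789012"   # placeholder account ID

stamp = datetime.date.today().isoformat()
snapshot_id = f"{DB_INSTANCE}-weekly-{stamp}"

rds = boto3.client("rds", region_name=PRIMARY_REGION)
rds_dr = boto3.client("rds", region_name=DR_REGION)

# Take the weekly snapshot in the primary region and wait for it to finish...
rds.create_db_snapshot(
    DBSnapshotIdentifier=snapshot_id,
    DBInstanceIdentifier=DB_INSTANCE,
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=snapshot_id)

# ...then copy it into the DR region so it can be restored there if needed.
snapshot_arn = f"arn:aws:rds:{PRIMARY_REGION}:{ACCOUNT_ID}:snapshot:{snapshot_id}"
rds_dr.copy_db_snapshot(
    SourceDBSnapshotIdentifier=snapshot_arn,
    TargetDBSnapshotIdentifier=f"{snapshot_id}-dr",
    SourceRegion=PRIMARY_REGION,
)
```

A job like this would typically be scheduled weekly (cron, Jenkins, or an EventBridge rule), matching the weekly/monthly cadence mentioned above.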