
MLOps, DevOps and Release Engineer

Qualcomm - R&D Engineer 2
Keysight Technologies - R&D Engineer 1
Keysight Technologies - Software Developer Intern
Jenkins
Perforce
MySQL
AWS RDS
Prometheus
Grafana
Kubernetes
Terraform
Akamai
AWS
Docker
Vault
Hi, I'm Deepak Kumar. I have more than three years of experience in DevOps. My technical skills include AWS on the cloud side and Docker and Kubernetes for containers. Apart from this, I also manage CI/CD pipelines; the tool we use for that is Jenkins. Beyond that, I work on many automation projects and also handle release engineering tasks here. So I have very relevant experience in DevOps, and on the scripting side Python is my preferred language. That's it.
To create a secure architecture for a new application, we first have to take all the requirements of the new application and think about who is going to use it. For example, the dev teams will want to make modifications, push updates, or add new features to the application, and there will also be an admin role, so we have to provision admin users accordingly. Based on all of this, we create our VPC. Within that VPC we also use the Identity and Access Management (IAM) service in AWS, where we configure groups such as testing and end users, and add multiple users to them with the permissions appropriate for that application. After that, we add a firewall for additional security, and Vault for managing the secrets.
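The group-based permission model described above can be sketched in Python. This is a minimal, illustrative model of IAM-style policy evaluation; the group names, actions, and resources here are hypothetical, not taken from a real AWS account.

```python
import fnmatch

# Hypothetical IAM-style policy documents for the groups described above.
POLICIES = {
    "dev-team": {
        "Statement": [
            {"Effect": "Allow", "Action": ["app:Deploy", "app:Update"], "Resource": "app/*"}
        ]
    },
    "end-users": {
        "Statement": [
            {"Effect": "Allow", "Action": ["app:Read"], "Resource": "app/*"}
        ]
    },
}

def is_allowed(group: str, action: str, resource: str) -> bool:
    """Return True if the group's policy grants `action` on `resource`."""
    policy = POLICIES.get(group, {"Statement": []})
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        # Wildcard matching, loosely mimicking how IAM patterns behave.
        if any(fnmatch.fnmatch(action, a) for a in stmt["Action"]) and \
           fnmatch.fnmatch(resource, stmt["Resource"]):
            return True
    return False
```

The point of the sketch is the least-privilege default: any group or action not explicitly allowed is denied.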
I would suggest using Terraform when you have multiple servers, or when you are running applications that need to be deployed onto additional machines every time demand increases. In that scenario, Terraform lets you define the infrastructure as code and provision the same setup on demand.
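As a rough sketch of that idea, and assuming AWS with placeholder values for the region, AMI, and instance count, a Terraform configuration for stamping out identical servers might look like:

```hcl
# Hypothetical values: the region, AMI ID, and count are placeholders.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "app" {
  count         = 3                       # raise this as demand increases
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"

  tags = {
    Name = "app-server-${count.index}"
  }
}
```

Scaling out then becomes a one-line change to `count` followed by `terraform apply`, rather than a manual deployment to each new machine.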
We can use AWS CloudWatch to see what is happening and how many deployments are going on. We can also use a service like EKS, where we configure Kubernetes and attach whatever monitoring tools are going to be there, such as Prometheus and Grafana.
Here I created one project where I initiated the idea and then implemented a pipeline. In it we use several AWS services: AWS Fargate with EKS for building the Docker containers, and we push the Docker images to ECR. After that, we use an S3 bucket to host files for managing the release. Once we prepare the staging area in S3, we serve it through a CDN, which for us is Akamai. From there we create presigned URLs, and then we update our Keysight product page to deliver the software to the end users, our customers. In this pipeline, the first step is fetching the credentials from Vault, on the basis of roles, to access the AWS services.
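The stages described above can be sketched as a Jenkins declarative pipeline. This is an illustrative outline only: the stage names, registry, bucket, and credential handling are assumptions, not the actual Jenkinsfile from the project.

```groovy
// Sketch of the release pipeline described above; all names are placeholders.
pipeline {
    agent any
    stages {
        stage('Fetch credentials') {
            steps {
                // e.g. pull role-based AWS secrets from Vault here
                echo 'Fetching AWS credentials from Vault'
            }
        }
        stage('Build and push image') {
            steps {
                sh 'docker build -t myapp:$BUILD_NUMBER .'
                sh 'docker push "$REGISTRY/myapp:$BUILD_NUMBER"'  // REGISTRY is a placeholder
            }
        }
        stage('Stage release in S3') {
            steps {
                sh 'aws s3 cp release/ s3://my-release-bucket/staging/ --recursive'
            }
        }
        stage('Publish via CDN') {
            steps {
                echo 'Generate presigned URLs and update the product page'
            }
        }
    }
}
```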
Handling a large database with an automated process to purge old data: yes, we could do that. On the basis of some retention timeline, we can write a Python script that queries the database and takes out only those rows that have exceeded the timeline. We can create a pipeline for this with several stages, such as a pre-build stage where we check which data needs to be purged. Based on that, we also keep backup files, so that if any mishap occurs we can first check those backups and restore from them. Then we trigger one script that selects and deletes the expired rows.
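A minimal sketch of such a retention script, using an in-memory SQLite database for illustration; the table and column names (`events`, `created_at`) and the 90-day window are hypothetical:

```python
import sqlite3
from datetime import datetime, timedelta

RETENTION_DAYS = 90  # hypothetical retention timeline

def purge_old_rows(conn: sqlite3.Connection, now: datetime) -> int:
    """Back up rows older than the retention window, then delete them."""
    cutoff = (now - timedelta(days=RETENTION_DAYS)).isoformat()
    cur = conn.cursor()
    # Backup stage: copy expiring rows into an archive table first,
    # so a mishap can be recovered from before the purge is final.
    cur.execute("CREATE TABLE IF NOT EXISTS events_archive AS SELECT * FROM events WHERE 0")
    cur.execute("INSERT INTO events_archive SELECT * FROM events WHERE created_at < ?", (cutoff,))
    # Purge stage: delete exactly what was archived.
    cur.execute("DELETE FROM events WHERE created_at < ?", (cutoff,))
    conn.commit()
    return cur.rowcount

# Usage example with one expired row and one recent row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, created_at TEXT)")
now = datetime(2024, 6, 1)
conn.execute("INSERT INTO events VALUES (1, ?)", ((now - timedelta(days=200)).isoformat(),))
conn.execute("INSERT INTO events VALUES (2, ?)", ((now - timedelta(days=10)).isoformat(),))
purged = purge_old_rows(conn, now)
```

In a real pipeline the same script would point at the production database (for example MySQL on AWS RDS) and run on a schedule, with the archive written out as the backup files mentioned above.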
Since it is a private class, we have to account for that, and the String `str` is also not initialized here, so we are getting a NullPointerException.
Under the selector, `app` is `nginx`, but under the template it is `app: nginx-1`. We have to make the names the same, so that the selector matches the template labels for the replicas. I think this is the bug that we need to fix for a successful deployment.
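Assuming this was a Deployment manifest (the original YAML is not shown), the corrected labels would look roughly like this, with the template label changed from `nginx-1` to match the selector:

```yaml
# Sketch of the fix described above; other manifest details are assumed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx          # selector expects "nginx"
  template:
    metadata:
      labels:
        app: nginx        # was "nginx-1"; must match the selector above
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
```

If the selector and template labels do not match, the API server rejects the Deployment, since it would manage no pods.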
The project that I worked on is the automation of some release processes. In it I worked with several Linux operating systems and used several AWS services. I wrote Python scripts as well as Groovy scripts, and wrote the pipelines in a declarative way. All of these combinations were very challenging for me, and it was a brand-new project: initially it was a manual process. After some investigation and some findings about which AWS services were best to use, I successfully implemented the release automation project, which increased efficiency and decreased the release time by up to 90%. Also, even the dev teams can now trigger those builds by themselves, which were the benefits, and it reduced some of the workload on the DevOps team's side. These were the things I accomplished, and I gained a lot of knowledge of Linux, the AWS cloud services, and scripting.
We can achieve zero downtime when releasing a new version of an application. First, we keep the previous version of the application running. Once the new application is configured, we create one new Docker image, pull a container from that image, and deploy that container alongside the existing one at the same time. We first take it to pre-production and check whether all the features of the new application are working fine or not. If there are issues, we give it back to the dev teams, or we look at whether it is a dev issue or an issue from the configuration management side, and take care of it. Once it is fixed, we proceed with the production deployment and swap the existing container for the equivalent new container. This also does not affect any job that was triggered at that time: we keep a backup of the running job and of all the previous applications we have, and then we include the new application as well.
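In Kubernetes, the zero-downtime swap described above can be expressed as a rolling-update strategy. The values here are illustrative, not from a real deployment:

```yaml
# Illustrative rolling-update settings: old pods stay up until new ones are ready.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take an old pod down before a replacement is ready
      maxSurge: 1         # bring up one extra pod at a time with the new image
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:2.0   # the new Docker image
          readinessProbe:    # gates traffic until the new version passes checks
            httpGet:
              path: /healthz
              port: 8080
```

The readiness probe plays the role of the pre-production check: traffic only shifts to a new container once it reports healthy, and a failed rollout can be reverted with `kubectl rollout undo`.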