A passionate DevOps Engineer with 4+ years of relevant experience. I aim to leverage my expertise to optimize and automate infrastructure management, streamline CI/CD pipelines, and ensure robust monitoring and security practices. With a particular focus on Kubernetes, I seek to enhance container orchestration, deployment scalability, and cluster management within a dynamic and forward-thinking organization.
AWS DevOps Consultant
Minfy Technologies, Consultant
Minfy Technologies, DevOps Engineer
Globallogic, Support Engineer
Docker

Kubernetes

ECS

Jenkins

AWS CodePipeline

Terraform

CloudFormation

EC2

EBS

VPC

IAM

Lambda

API Gateway

EKS

SQS

SNS

CodeDeploy

S3

Grafana

Prometheus

ELK

CloudWatch

Dynatrace

Rapid7

OWASP ZAP

Maven

Gradle

NPM

Windows

Linux

Jira

GitHub

ServiceNow
Hi. First of all, thanks for the opportunity. This is Shubhinde, basically belonging to Odisha and currently staying in Hyderabad. I hold around four and a half years of experience in AWS and DevOps tools. That covers various AWS services such as EC2, EBS, S3 buckets, API Gateway, load balancers, ECS, and EKS. Apart from that, on the DevOps side I have good experience in Jenkins, Docker, and Kubernetes, and there are a few DevSecOps tools as well, such as OWASP ZAP, SonarQube, Hadolint, and kube-linter; these are the few tools I have worked on. And for monitoring and logging, ELK, CloudWatch, and OpenSearch are the tools I have worked with. I am currently working in Minfy Technologies; it has been one and a half years, 1.7 years to be precise, that I have been working there. So that's pretty much it. Thank you.
Okay, sounds good. Docker, Python, and AWS services. We can automate the infrastructure using Terraform or CDK. We can build Python-based Docker images and run them as Lambda functions, or simply run them as an ECS service, or on Kubernetes if we get the option of EKS. My thought is: if the application is communicating with other CNCF-certified tooling, we can go with EKS; otherwise, ECS would be the preferred option from my point of view. Then we need to create a robust CI/CD pipeline. If we want AWS services, we do have AWS CodePipeline; using CodePipeline with CodeBuild and CodeDeploy agents, we can deploy our code. And the same thing if we want to replicate the production environment: the same Terraform will be used to create a similar kind of infrastructure in the higher environments as well. For deployment, the CI/CD is there anyway, so we can just create an approval stage and promote to the further environment. That's how we can design this workflow using Docker, Python, and AWS services. Suppose we are communicating with RDS or something: if it is ECS, it is easier to establish the communication. In EKS also, previously there used to be limitations; it used to be difficult creating IAM roles using OIDC and annotating that role onto the service account. But now, as the Pod Identity agent came, it becomes easier there also to communicate.
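As a hedged sketch of one option mentioned above, the Terraform below runs a Python Docker image from ECR as a container-image Lambda function. The names ("py-worker") and the assumption that a :latest image is already pushed are for illustration only.

```hcl
# Hedged sketch: a Python Docker image in ECR, run as a container-image Lambda.
# All names here are illustrative assumptions.
resource "aws_ecr_repository" "app" {
  name = "py-worker"
}

resource "aws_iam_role" "lambda_exec" {
  name = "py-worker-exec"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

resource "aws_lambda_function" "app" {
  function_name = "py-worker"
  package_type  = "Image"                                   # container image, not a zip
  image_uri     = "${aws_ecr_repository.app.repository_url}:latest"
  role          = aws_iam_role.lambda_exec.arn
  timeout       = 60
}
```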
Compare using AWS CDK with Terraform for infrastructure as code, with a focus on a specific use case like network provisioning. Okay. We can use AWS CDK. To be honest, I have never worked with AWS CDK, but I have a good idea: using TypeScript or Python with CDK, we can provision the AWS resources. The same thing with Terraform if we are using it as the infrastructure-as-code tool. If we want to focus on a specific use case like network provisioning, in that case both of them work fine. With CDK we can create one artifact, and that artifact we can deploy, if I'm not wrong. Other than that, Terraform will have the state file; using that, we can create our network: the VPCs, the internet gateway, the NAT gateway, the DHCP things, the routes and route table associations, the security groups, those kinds of things. All of those we can create using both of them.
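For the network provisioning case, a minimal Terraform sketch of a slice of those pieces (VPC, one public subnet, internet gateway, routing) might look like this; the CIDR ranges and names are illustrative assumptions.

```hcl
# Hedged sketch of basic network provisioning; CIDRs are assumptions.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"                    # default route out via the IGW
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}
```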
Suggest an automated approach to scale a Kubernetes deployment in response to increased load. We can set up HPA; that's something we can do. Based on the requests coming in, it can auto-scale our number of pods. For that, again, we have to account for CPU and memory: along with the requests we define, we have to keep resource limits as well, so that under heavy load we do not over-provision the resources. That's one thing we can do. We can create a deployment, and in the deployment we can keep the number of replicas, while the HPA drives the desired number of replicas between the minimum and the maximum. And again, at the infrastructure level, the node groups will also sit behind an auto-scaling group; there is a managed node group option as well, so we can go with that option.
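A minimal HPA sketch of that setup, assuming a deployment named "web", bounds of 2-10 replicas, and a 70% CPU target (all assumptions for illustration):

```yaml
# Hedged HPA sketch; target deployment, bounds, and CPU target are assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

For the utilization math to work, the pod spec must set CPU and memory requests, and the limits are what keep one deployment from over-provisioning the node, as mentioned above.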
Outline the steps to implement a zero-downtime deployment strategy for a Kubernetes workload. Okay. For a zero-downtime deployment strategy we have various options, like blue-green deployment and canary deployment, and there are other deployment methods as well, like rolling updates and the A/B method. There are various methods, but mostly what I have seen is people using blue-green or canary. In blue-green, what happens is there will be another replica of our existing application: suppose v1 is existing, then v2 will be created and the traffic will be shifted to v2; after that, v1 will be getting deprovisioned. We can go with that approach, but in that one the resource allocation will be more. Other than that, we can do canary deployment, where the traffic moves slowly; using Istio, the service mesh, we can do that. In that, what will happen is the traffic shifts gradually: 90% of the traffic keeps going to v1 and 10% of the traffic goes to v2, and if everything is working fine, we can slowly move forward, like 80/20, then 70/30. Eventually v1 will be getting deprovisioned and v2 will be fully active.
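A hedged Istio sketch of that initial 90/10 split; the in-mesh service host "app" and the v1/v2 subsets (which would be defined in a matching DestinationRule) are assumptions for illustration.

```yaml
# Hedged sketch of a weighted canary split via an Istio VirtualService.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app
spec:
  hosts:
    - app
  http:
    - route:
        - destination:
            host: app
            subset: v1    # existing version keeps most traffic
          weight: 90
        - destination:
            host: app
            subset: v2    # canary receives a small share
          weight: 10
```

Promotion is then just a matter of editing the weights (80/20, 70/30, and so on down to 0/100) until v1 can be deprovisioned.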
How would you include a Terraform module? Yes, we can define Terraform modules, and when we want a Terraform module, in our main.tf we just need to call that module. And if we want to reuse them across environments, we will have .tfvars files, so we can keep multiple .tfvars files, or else we can use Terragrunt in that case to reuse this logic in multiple stages. For multi-cloud infrastructure components, in the sense that we need to have multiple provider blocks, and the modules are wired up based on the provider. I might be wrong at this point, but this is what I understand.
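A hedged sketch of calling a reusable module from main.tf, with multiple provider blocks for the multi-cloud case; the module path and variable names are illustrative assumptions.

```hcl
# Hedged sketch: consuming a local module; source path and variables are assumptions.
variable "environment" { type = string }
variable "cidr_block"  { type = string }

module "network" {
  source      = "./modules/network"
  environment = var.environment
  cidr_block  = var.cidr_block
}

# For multi-cloud components, each cloud gets its own provider block
# (the azurerm provider would also need a required_providers entry).
provider "aws" {
  region = "ap-south-1"
}

provider "azurerm" {
  features {}
}
```

Per-environment reuse is then a matter of terraform apply -var-file=dev.tfvars versus -var-file=prod.tfvars, or a Terragrunt folder per stage.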
First of all, we should not be using the default key. We need to create a key pair each time we launch an instance, and then, using a data block, we have to import that; we just have to keep an output of that key into a file, and we need to reference that file here. And the security group IDs we should not hardcode. The AMI is fine, the instance type is fine; it is the key name that carries the risk here, and the security group ID should be referenced instead. Apart from this, the SSH key which it is creating: that SSH key also needs to be kept secure. That's one thing.
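A hedged Terraform sketch of those fixes: a generated key pair instead of the default key, an AMI looked up via a data block, and a security group referenced by resource rather than a hardcoded ID. All names here are illustrative assumptions.

```hcl
# Hedged sketch; names and AMI filter are assumptions for illustration.
data "aws_ami" "al2023" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}

resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "instance" {
  key_name   = "app-key"                                # assumed name
  public_key = tls_private_key.ssh.public_key_openssh
}

resource "aws_security_group" "app" {
  name        = "app-sg"
  description = "Instance security group"
}

resource "aws_instance" "app" {
  ami                    = data.aws_ami.al2023.id       # looked up, not hardcoded
  instance_type          = "t3.micro"
  key_name               = aws_key_pair.instance.key_name
  vpc_security_group_ids = [aws_security_group.app.id]  # reference, not a literal sg-... ID
}
```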
I'm not able to find an error. Maybe the FROM statement which has been written, I'm not sure whether that might be one issue. Other than that, there will be docker build -t and then the name; the tag needs to be valid in this case, I think. Apart from that, it looks fine.
First of all, if we are going for hybrid cloud, we need to do all these steps: a Direct Connect setup for the network connectivity from on-prem, or it might be some other cloud, to the AWS cloud. Once the network setup has been done, using Kubernetes in the sense that, suppose there are a few models, then the front end that talks to the models we can run here. And for the data side, using Glue, and we will be able to work with tools like Airflow, and maybe SageMaker is something we might use from AWS; those things will be communicating with each other. How to design the system to auto-scale? The normal approach is deployments with HPA enabled. The secrets should be kept in Secrets Manager, and the configuration files should be in ConfigMaps. Coming on from there, the volumes we can mount as EFS or EBS. That's how the EKS part works.
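A minimal sketch of those EKS pieces, assuming a deployment named model-frontend, a ConfigMap model-frontend-config, and a PVC efs-claim backed by the EFS CSI driver (all assumed names):

```yaml
# Hedged sketch; deployment name, image, ConfigMap, and PVC are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: model-frontend
  template:
    metadata:
      labels:
        app: model-frontend
    spec:
      containers:
        - name: app
          image: registry.example.com/model-frontend:latest
          envFrom:
            - configMapRef:
                name: model-frontend-config   # non-secret configuration
          volumeMounts:
            - name: shared-data
              mountPath: /data
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: efs-claim              # assumed to be provisioned via the EFS CSI driver
```

Secrets themselves would come from Secrets Manager rather than being inlined here, as mentioned above, and the HPA shown earlier would sit on top of this deployment.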
What methodologies would you apply in the DevOps lifecycle? Yes. First of all, the methodology would be the Agile methodology, so there should be sprint plans; there should be some planning initially. Then, once the code is there, we can check it using GitGuardian or Checkmarx; these tools will basically check if there is any credential or any vulnerability in our code. Then we can scan the code, the SAST, which we can do using SonarQube. Once the SAST is done, we can build it. During the build the dependencies will also be there; we can use Dependency-Track to get an SBOM, the bill of materials. Then, once the build is done, the Dockerfile will be there; to scan the Dockerfile we will be using Hadolint. Once the Dockerfile is scanned, the Docker image will be created, and to scan the Docker image we will have tools like Snyk and a few others to scan the container. Once the Docker container is running fine and there are no vulnerabilities, we can deploy it to Kubernetes. Before deploying, kube-linter would be used to check the YAML files, whether they are following the compliance or not. Then, once the deployment has been done, we need to make sure no secrets are exposed. We need to check that, in terms of networking, no ports are unnecessarily open to the internet. The application should be in private subnets and should be exposed via a load balancer or the API Gateway. If it is API Gateway, we need to make sure that the authorization step is in place. These are a few things which we need to keep in consideration while deploying an application to meet the compliance guidelines. That's it. Thank you.
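The stages above could be wired into a Jenkins pipeline roughly like this hedged sketch; the exact CLI invocations depend on how each scanner is installed in the build environment, so treat every command here as an illustrative assumption.

```groovy
// Hedged sketch of the scanning stages described above as a Jenkins pipeline.
// Each sh command assumes the corresponding tool's CLI is on the agent's PATH.
pipeline {
  agent any
  stages {
    stage('Secret scan')     { steps { sh 'ggshield secret scan repo .' } }            // GitGuardian
    stage('SAST')            { steps { sh 'sonar-scanner' } }                          // SonarQube
    stage('Lint Dockerfile') { steps { sh 'hadolint Dockerfile' } }
    stage('Build image')     { steps { sh 'docker build -t app:${BUILD_NUMBER} .' } }
    stage('Scan image')      { steps { sh 'snyk container test app:${BUILD_NUMBER}' } }
    stage('Lint manifests')  { steps { sh 'kube-linter lint k8s/' } }                  // compliance checks on YAML
    stage('Deploy')          { steps { sh 'kubectl apply -f k8s/' } }
  }
}
```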