
DevOps Engineer — Relevance Lab
DevOps Engineer Intern — Tivona Global

AWS
Azure
Docker
Kubernetes
Jenkins
Azure DevOps
Terraform
Ansible
Grafana
Prometheus
I hold a degree in computer science engineering with a specialization in cloud computing and information security. During my academic journey I developed a passion for AWS and other cloud and DevOps tools, which eventually led me to pursue a career as a DevOps engineer. I started my corporate life with an internship at Tivona Global Private Limited, where I learned the basics of GCP, learned Docker, and even delivered Docker training there. My first full-time job was with Relevance Lab Private Limited, Bangalore, where I worked on multiple projects. The tool stack I use includes AWS, Terraform, Ansible, Jenkins, the ELK stack, Datadog, Sumo Logic, and more. As for certifications, I am an AWS Certified Solutions Architect; apart from that, I have cleared the Red Hat Ansible exam (RH294), and I am certified in Azure Fundamentals (AZ-900) and Azure Architect Technologies (AZ-303). Talking about my skills: in cloud I am comfortable with AWS and Azure, and also with GCP if I get the opportunity. For provisioning I use Terraform and Ansible; for CI/CD, Jenkins or Azure DevOps can be used. For programming I prefer Python, and Bash for scripting. I also have knowledge of HTML and MySQL, and I am comfortable with Linux administration. From an orchestration perspective I am good with Podman, Docker, and Kubernetes. Apart from that, my hobbies are travelling and playing football.
How do you achieve traceability for configuration applied using Terraform when multiple developers are involved? For that, what we can use is a tool from the Terraform ecosystem called Terragrunt, which can help achieve traceability for configuration applied using Terraform. Also, since multiple developers will be working on this, we can push our code to a GitHub repository — but remember not to push the state files there, because they may contain sensitive data; for secrets we should use Vault. For the state files themselves, what we can do is use S3 as a remote backend for storing the Terraform state. These are a few ways we can achieve this.
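The remote-state idea above can be sketched as a minimal backend block; bucket, key, and table names here are hypothetical:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-team-tf-state"              # hypothetical bucket name
    key            = "prod/network/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                            # keep state encrypted at rest
    dynamodb_table = "tf-state-locks"                # optional: locking for concurrent runs
  }
}
```

With S3 versioning enabled on the bucket, every apply also leaves a traceable history of state changes.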
How would you approach setting up a multistage pipeline using Jenkins that includes quality gates between stages? What we can do is have the Jenkins setup within the same VPC in AWS, and define a separate stage for each quality gate; only after that stage succeeds do we proceed with the rest of the Jenkins pipeline. The term "quality gate" can mean many things, so it would help to know which kind of quality gate is intended. But from the AWS side, we have CodePipeline and CodeBuild, and those can also be used to implement quality gates.
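A minimal declarative Jenkinsfile sketch of that idea, assuming a test/lint check as the gate (stage names and `make` targets are hypothetical):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }      // hypothetical build command
        }
        stage('Quality Gate') {
            steps {
                // hypothetical gate: a non-zero exit code fails the pipeline here
                sh 'make test lint'
            }
        }
        stage('Deploy') {
            // runs only if the gate stage above succeeded
            steps { sh 'make deploy' }
        }
    }
}
```

Because declarative stages run sequentially and stop on failure, the Deploy stage is never reached unless the gate passes.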
Describe how you would containerize a legacy Linux application. It is simple: first we create a Dockerfile. For the Dockerfile we can use the same base OS — suppose CentOS 7 is being used by the legacy Linux application, then we use that same kind of base image. After that, we ensure all the packages required to bring the application up are installed. Then we build from that Dockerfile so it produces an image, and we push the image to a repository — Docker Hub or AWS ECR, it is up to us. Then we can use it in our EKS cluster as a Deployment: in the deployment YAML file itself we specify the location of our image, and it automatically gets pulled from that location. That is how we can do it.
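The steps above can be sketched as a minimal Dockerfile; the package, paths, and command are hypothetical placeholders for whatever the legacy app actually needs:

```dockerfile
# Hypothetical sketch: containerizing a legacy app that ran on CentOS 7
FROM centos:7

# install the packages the legacy app depends on (names are placeholders)
RUN yum install -y httpd && yum clean all

# copy the application files into the image
COPY ./app /var/www/html/

EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
```

The image would then be built with `docker build`, tagged for Docker Hub or ECR, pushed, and referenced by that tag in the Kubernetes Deployment manifest.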
Can you walk through a rollback strategy in Kubernetes or Jenkins for deployments that fail in production? It depends on which type of deployment it is — whether it is Kubernetes related, Terraform related, or some other sort of rollback. For Kubernetes, if a deployment fails we can roll back to the previous version: by default Kubernetes stores a history of previous revisions (up to 10 by default), and out of those we can select the one we need. Similarly, in Jenkins, if a deployment fails we can build a rollback strategy into the pipeline: on failure, take the previous version from GitHub and deploy that instead.
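The Kubernetes side of this can be sketched with the standard rollout commands (the deployment name is hypothetical, and these require access to a running cluster):

```shell
kubectl rollout history deployment/my-app                  # list stored revisions
kubectl rollout undo deployment/my-app                     # roll back to the previous revision
kubectl rollout undo deployment/my-app --to-revision=3     # or pick a specific revision
kubectl rollout status deployment/my-app                   # watch the rollback complete
```

The number of revisions kept is controlled by the Deployment's `revisionHistoryLimit` field.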
Discuss cost optimization for an EKS cluster that experiences variable workloads. For variable workloads, first of all we can enable autoscaling — in EKS we have the option to enable autoscaling, and this helps us when the load is variable. If that is not enough, we can go with Fargate, where AWS automatically brings instances up and down whenever needed; nothing has to be done from our side. But let us start from the beginning. First, we choose a region with a lower price, because EKS pricing depends on the region — one that suits us and gives low latency. The second step is the node type: compute optimized, storage optimized, memory optimized, or higher IOPS — comparing the options in the AWS pricing calculator, we decide which node type to use. After that we have two options: if we selected managed nodes, then autoscaling is the option; or we can use Fargate directly, and it will select the instance type and deploy automatically. We can also set up monitors that show us the utilization, and depending on that we can again reduce or increase the size or type of the cluster. And if we really have to reduce cost, one more thing we can do is run our own Kubernetes cluster on EC2 instances — that is also a good option where the cost is much lower.
The only downside is that we ourselves have to manage that cluster instead of AWS. So if anything is not working, we have to take care of it — that is the only downside.
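The autoscaling idea can be sketched at the pod level with a HorizontalPodAutoscaler (names and thresholds are hypothetical); node-level scaling would then be handled by Cluster Autoscaler or Fargate as described above:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # hypothetical workload
  minReplicas: 2              # floor during quiet periods
  maxReplicas: 10             # ceiling during spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```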
In this Python code for S3 integration, a mistake will prevent the function from working correctly — identify and explain the bug. The snippet imports boto3, defines a Lambda handler taking event and context, creates an S3 client with boto3.client, calls client.list_buckets, and returns the response — that part seems correct. For me, the issue is the stray forward slash inside the parameters of the boto3.client call, and similarly in the client.list_objects call where the bucket is taken from the event — there is a forward slash there too. Those two forward slashes, I would say, are the main reason the function fails, and that is what we have to fix.
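A hypothetical reconstruction of the snippet with the stray slashes removed — the exact original code is not shown in the transcript, so the event key and return shape here are assumptions:

```python
import json


def lambda_handler(event, context):
    # boto3 is imported lazily so the module loads even where the AWS SDK
    # is not installed; at runtime in Lambda it is always available
    import boto3

    client = boto3.client("s3")  # no stray "/" inside the parentheses
    # list the objects in the bucket named by the invoking event
    response = client.list_objects_v2(Bucket=event["bucket"])
    keys = [obj["Key"] for obj in response.get("Contents", [])]
    return {"statusCode": 200, "body": json.dumps(keys)}
```

The fix is purely syntactic: a `/` inside a call's argument list is not valid Python, so the original would fail to parse before the handler ever ran.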
In this Terraform snippet, what will happen if you modify the instance type and apply the change, and what principle does it reflect? Whenever we change an AWS instance type, there are four or five things to take into consideration. First of all, not every AMI supports every instance type, so we have to check whether the new type — t2.micro in this case — is supported. The other thing is that when we initially created the instance with the other instance type, a state file was created; when we run again with the changed version, the credentials we run this Terraform code with need the specific permission to modify the instance, so permissions are required. As for what happens on apply: Terraform will modify the instance type, but the type cannot be changed directly while the instance is in a running state — the instance has to be stopped first, then the modification can be made. That is the main thing to keep in mind. And the principle it reflects is Terraform's declarative, desired-state model: the real infrastructure is reconciled to match whatever the code says.
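The scenario can be sketched as follows (resource name and AMI id are hypothetical):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"  # hypothetical AMI id
  instance_type = "t2.micro"               # changed from, say, "t3.large"
}
```

Running `terraform plan` would show `~ update in-place` against `instance_type`; on apply, the AWS provider stops the instance, changes the type, and starts it again, which means a short outage for that instance.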
How do you manage sensitive information such as secrets? There are three or four tools or methods we can use. The first is AWS Secrets Manager — a managed service provided by AWS that we can use. The second is HashiCorp Vault, one of the best and most widely used secret-management technologies in the real world, so we can also use that. Also, Jenkins itself has an option to store secrets (credentials), so we can use that option as well — but the concern there is that we then have to make Jenkins itself secure, because if Jenkins gets compromised, the secrets stored in it are compromised too. So storing secrets in Jenkins is my last option; otherwise I would use HashiCorp Vault, or AWS Secrets Manager, since it is a native AWS tool.
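The Jenkins option above can be sketched with the Credentials Binding plugin's `withCredentials` step (the credential id and script name are hypothetical):

```groovy
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // binds the stored secret to an env variable for this block only;
                // Jenkins masks the value in the console log
                withCredentials([string(credentialsId: 'db-password', variable: 'DB_PASS')]) {
                    sh './deploy.sh'   // hypothetical script reading DB_PASS from the environment
                }
            }
        }
    }
}
```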
Propose a method for integrating network security and compliance standard checks within AWS infrastructure deployment. For network security, what we can do is create an alert whenever somebody exposes a particular port, and we can also implement a firewall (a WAF) on the load balancer. Also, there are tools already available that can help us maintain network security and compliance standards — the tool is called Device, which helps us find exposed ports and also monitors any network-related security checks. That is one of the tools we can use.
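The "alert on exposed ports" idea pairs naturally with locking ports down at provisioning time. A hedged Terraform sketch (security group name, variable, and CIDRs are hypothetical):

```hcl
resource "aws_security_group" "web" {
  name   = "web-sg"             # hypothetical name
  vpc_id = var.vpc_id           # hypothetical variable

  # allow HTTPS from anywhere
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # SSH only from inside the VPC, never from the internet
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]   # hypothetical VPC CIDR
  }
}
```

Keeping rules like this in code means a compliance check can be as simple as reviewing or linting the Terraform before apply.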
Elaborate on what you would leverage for automatic configuration of a newly launched instance. The first question is which tool to use. If you ask me, my preference is Ansible. The reason Ansible is my preferred tool is that it is agentless — we do not have to run some sort of agent inside the instance. The other reason is that it is idempotent in nature: if a thing is already done, it will not be run again — that is the meaning of idempotent. And it is very easy to understand, since playbooks are written in YAML format. For a newly launched instance, what we can do is use a bastion instance in the same VPC, from where we run our Ansible playbooks to perform the configuration-related changes on that instance. It works like this: we have a bastion instance, from which we connect to the newly created EC2 instance using a key, or maybe a username and password, depending on what was selected. If that connection is successful, we provide those details in the Ansible inventory (the hosts file), and then Ansible will be able to connect without any issues. One thing to remember is that we have to allow port 22 within the VPC so the bastion can connect to that instance — that is the main thing.
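The flow above can be sketched with a minimal playbook run from the bastion; the host alias, package, and service are hypothetical:

```yaml
# playbook.yml — idempotent base configuration for the new instance
- hosts: new-ec2            # defined in the inventory with the instance's
  become: true              # private IP, SSH user, and key file
  tasks:
    - name: Install nginx (hypothetical package)
      ansible.builtin.yum:
        name: nginx
        state: present      # idempotent: skipped if already installed

    - name: Ensure the service is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

It would be invoked from the bastion as `ansible-playbook -i inventory playbook.yml`, with `new-ec2` resolving over port 22 inside the VPC.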