DevOps and Cloud Engineer with over 6 years of experience building and managing cloud infrastructure and automation. Skilled in AWS, Terraform, Kubernetes, Docker, OpenShift, OpenStack, Consul, Nomad, Linux, and CI/CD tools, applied to improve deployment efficiency and system reliability.
Passionate about driving automation, optimizing processes, and implementing scalable solutions. A proactive problem-solver who enjoys collaborating with teams to deliver innovative, high-performance infrastructure that supports business goals. Eager to tackle new challenges and continuously learn and grow in the DevOps space.
DevOps Engineer
Infobeans

System Engineer (Level 3)
Cybage Software Pvt. Ltd.

Linux Administrator (Level 2)
VSN International

Hardware Engineer
R.D. Computers

Linux and Windows System Administrator (Level 1)
Exclusive Securities Ltd.

Kubernetes
Docker
Jenkins
Git
AWS
Azure
Terraform
Prometheus
Grafana
VCenter
Github
Bitbucket
Chef
Rancher
Nginx
VMware
Windows server
WordPress
Hi, my name is [name]. I have around 6 to 7 years of total experience in the IT field, and in that time I have worked with many tools and technologies, including AWS, Azure, and Terraform. Recently I moved to a new project built around Nomad and Consul. I successfully upgraded our Nomad cluster, which has around 120 clients spread across 2 data centers. I also upgraded our Consul cluster, which hosts several services that are critical for us. Beyond that, I have hands-on experience on the AWS and Azure side: our project uses both clouds, and we use Terraform code to create resources or make any changes on the infrastructure side of either of them. I also have storage experience. In my past roles I worked with multiple types of storage, such as SAN and NAS, from vendors like NetApp, Dell, and IBM. On the Kubernetes side, our infrastructure has roughly 30 to 40 nodes. We manage those clusters and deploy our applications onto them, so whenever an issue comes up on the Kubernetes side, I am there to resolve it; the deployments themselves are handled by the deployment team.
On the infrastructure side, we manage the Kubernetes clusters and take care of the services running on them, and if any issue occurs on a cluster or on a client node, we resolve it. I have also planned and performed Kubernetes cluster upgrades. In addition, I have worked with Ansible on both Linux and Windows (including Windows AD), and I have some working experience with LDAP. That's all about me. Thank you.
To deploy a stateful application on a Kubernetes cluster, for example one of the database clusters we run in our organization, we can use a StatefulSet. (At first I mixed it up with a DaemonSet, but StatefulSet is the right object for a database.) We write the StatefulSet manifest for the database application and create a Service for it. At deployment time we also need to provide storage, a PV and PVC, so the data is persisted. For backup and restore of the database's data, the volume sits on backend storage, so backups can be handled automatically by the storage layer itself; alternatively, we can set up a cron job, or configure snapshots on the storage side, to take regular backups of the volume mounted by the stateful database application.
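The steps above can be sketched as a minimal manifest. The names, image, port, and storage size are illustrative placeholders, not values from the original answer:

```yaml
# Headless Service giving each StatefulSet pod a stable DNS identity.
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  clusterIP: None
  selector:
    app: mydb
  ports:
    - port: 5432
---
# StatefulSet for the database. volumeClaimTemplates creates one PVC
# per replica, which binds a PV from the backend storage class.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mydb
spec:
  serviceName: mydb
  replicas: 2
  selector:
    matchLabels:
      app: mydb
  template:
    metadata:
      labels:
        app: mydb
    spec:
      containers:
        - name: db
          image: postgres:16   # example database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Because the PVCs map to volumes on the backend storage, snapshot-based backups can then be scheduled on the storage side as described.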
How would you set up a multi-stage Docker build process to optimize a Python application? To create a multi-stage Docker build, we write a Dockerfile with more than one stage. In the first stage, we start from a Python base image, copy our Python code into it, and install whatever packages the code needs, for example from a requirements.txt file, and build it. In the second stage, we start from a slimmer image and copy in only the built application and its installed dependencies from the first stage. Building the Dockerfile this way produces a multi-stage image and reduces the final image size.
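A minimal sketch of such a Dockerfile; the file names (`requirements.txt`, `app.py`) and Python version are assumptions for illustration:

```dockerfile
# Stage 1: build stage — full Python image with build tooling.
FROM python:3.12 AS builder
WORKDIR /app
COPY requirements.txt .
# Install packages into an isolated prefix so they can be copied out.
RUN pip install --prefix=/install -r requirements.txt

# Stage 2: runtime stage — slim image, only code plus installed packages.
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
CMD ["python", "app.py"]
```

Only the second stage ends up in the final image, which is what keeps the image size down.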
To use secrets in a Linux environment, we can use HashiCorp Vault to store them, and fetch them whenever we need them. On the AWS side, we can also use AWS Secrets Manager, where we can store secret data such as passwords, keys, or certificates, and our application can retrieve them from there whenever it needs them. In a containerized setup, we can keep the key in HashiCorp Vault, and whenever we deploy the container on the Linux environment, the container fetches the key from Vault at startup and then the service starts.
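As a sketch of the Vault flow described above, assuming a reachable Vault server (with `VAULT_ADDR`/`VAULT_TOKEN` set) and the KV v2 secrets engine mounted at `secret/`; the path and key names are placeholders:

```shell
# Store the application's database password in Vault.
vault kv put secret/myapp db_password='example-password'

# At container/service start-up, fetch the secret into an environment
# variable instead of baking it into the image or a config file.
export DB_PASSWORD="$(vault kv get -field=db_password secret/myapp)"
```

The same pattern applies with AWS Secrets Manager, with the CLI or SDK call swapped in for the `vault` commands.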
Compare using the AWS SDK versus Terraform, with a focus on a specific use case like network provisioning. Terraform is infrastructure as code: we write Terraform code to build infrastructure on AWS or any other cloud provider. For example, to create a VPC on AWS with Terraform, we describe in the code which VPC we want, which region to create it in, what the subnets will be, and whether a given subnet should allow public access or stay private; whatever configuration we want goes into the Terraform code, and Terraform creates the infrastructure from it. With the AWS SDK or CLI, by contrast, to create a VPC we call the create commands directly and must pass every configuration value as arguments each time we run a command, and we have to manage ordering and state ourselves.
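The contrast can be sketched as follows; region and CIDR values are illustrative placeholders:

```hcl
# Terraform: declare the desired VPC and subnet; Terraform tracks state
# and ordering for us.
provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true # allow public access for this subnet
}

# Rough AWS CLI equivalent: every value is an argument per call, and
# ordering and state must be handled by hand:
#   aws ec2 create-vpc --cidr-block 10.0.0.0/16
#   aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.1.0/24
```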
Suggest an automated approach to scale a Kubernetes deployment in response to increased web traffic loads. For automatic scaling of a Kubernetes deployment, we can use an HPA, the Horizontal Pod Autoscaler. It works on metrics: whenever CPU or memory usage goes above a threshold such as 85, 90, or 95 percent, it automatically increases the deployment's pod count by two or three, depending on how we configure it. So for an automated approach, the Horizontal Pod Autoscaler is the tool to use.
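A minimal HPA manifest for the scenario above; the deployment name and replica bounds are placeholders:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web          # the web deployment to scale (placeholder name)
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 85   # scale out above 85% average CPU
```

Kubernetes then adds or removes pods between `minReplicas` and `maxReplicas` as average CPU crosses the target.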
Here is a snippet from a Python CI/CD pipeline script that utilizes Docker. What is wrong with this code that might cause the build process to fail ("local build failed" is raised)? From my reading of this code, I am not able to tell which error it will produce. To troubleshoot the issue, I would run the code, look at the actual error, and fix the failing part based on that. Just by looking at the snippet, I cannot pinpoint where it will fail.
Assuming you are reviewing this Terraform module for deploying an AWS system, can you point out a potential security risk here? The potential risk in this configuration is the key: we are using a default value for the key, stored directly inside a variable. The risk is that anyone who can read the file can see the key, and if someone also has the IP address or name of the server deployed on AWS, they can access that server. So it is a big risk for us to use this type of configuration. Instead, we can use HashiCorp Vault to store the key, so it stays secure and only authenticated people can access it.
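A sketch of the safer pattern described above; the variable name and Vault path are hypothetical, and the Vault data source assumes the Vault provider is configured:

```hcl
# No "default": the value must come from the environment (TF_VAR_api_key)
# or a secrets backend, never from the .tf file itself. "sensitive"
# makes Terraform redact it from plan/apply output.
variable "api_key" {
  type      = string
  sensitive = true
}

# Alternatively, read the secret from HashiCorp Vault at plan time
# (illustrative mount and path):
data "vault_kv_secret_v2" "app" {
  mount = "secret"
  name  = "myapp"
}
```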
How would you deploy a multi-tier application using Terraform, ensuring high availability in the production environment? For deploying a multi-tier application with Terraform, we first write the Terraform code for the infrastructure. It depends on the scenario whether we build on EC2 instances or go with a serverless architecture; in this case I will use the instance-based approach rather than serverless. First, I will write code to create a VPC, and inside the VPC I will create three subnets: two private and one public. In one private subnet I will deploy the database tier; in the second private subnet, the application tier (the REST API); and in the public subnet, the web server tier. The web servers sit in the public subnet while the other two tiers stay in the private subnets, reachable only through the web tier. One more thing: to make the application highly available, I will deploy each tier across multiple AZs, for example the web servers across two or three availability zones with one server in each, and the same for the database and the API.

That way, if one availability zone has an issue, our application is not impacted and users can still access it.
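The multi-AZ subnet layout can be sketched in Terraform like this; AZ names and CIDRs are illustrative placeholders:

```hcl
# One private subnet per availability zone, so each tier can run in at
# least two AZs behind its load balancer.
variable "azs" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

resource "aws_vpc" "app" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "private_app" {
  count             = length(var.azs)
  vpc_id            = aws_vpc.app.id
  availability_zone = var.azs[count.index]
  # Carve a /24 out of the VPC range for each AZ.
  cidr_block        = cidrsubnet(aws_vpc.app.cidr_block, 8, count.index)
}
```

The same `count`-over-AZs pattern would repeat for the public web subnets and the private database subnets.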
What approach would you take to build a code-to-production pipeline for an AI-driven application using containerized technology? For this, I would create a pipeline that deploys the code using containers. First, once the developer has committed the code, the pipeline builds it and creates an artifact. In the second stage, we store the artifact on the artifact server, copy it into an image, unzip it, install the required packages inside, and upload the image to Docker Hub or another registry. After uploading, I use that image to create a container in the test environment, where QA verifies that everything works. Once testing is complete, the pipeline triggers a message, an email, for production deployment; once a manager approves it, the pipeline automatically deploys the code to the production environment. So with this approach, I would create a pipeline and deploy the application through to production.
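The stages above can be sketched as a declarative Jenkins pipeline (Jenkins is listed in the skills above); registry, image, deployment, and approver names are placeholders:

```groovy
// Sketch: build → push → deploy to test → manual approval → production.
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps { sh 'docker build -t registry.example.com/myapp:${BUILD_NUMBER} .' }
        }
        stage('Push image') {
            steps { sh 'docker push registry.example.com/myapp:${BUILD_NUMBER}' }
        }
        stage('Deploy to test') {
            steps { sh 'kubectl set image deployment/myapp app=registry.example.com/myapp:${BUILD_NUMBER} -n test' }
        }
        stage('Approve production') {
            // Pauses the pipeline until a manager approves, as described.
            steps { input message: 'Deploy to production?', submitter: 'managers' }
        }
        stage('Deploy to production') {
            steps { sh 'kubectl set image deployment/myapp app=registry.example.com/myapp:${BUILD_NUMBER} -n prod' }
        }
    }
}
```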
Explain your approach to optimizing a Kubernetes cluster for deploying a compute-intensive model developed in Python. For optimizing a Kubernetes cluster, we can use Prometheus and Grafana for monitoring: how much load we have on the cluster, and which nodes are using fewer or more resources such as CPU and memory. Based on that, if the load on the cluster is high, we can add a new worker node to the cluster. As for the "compute model developed in Python" part, I am not sure what exactly we want to achieve in that scenario specifically on the Kubernetes side, so I am not fully understanding that portion of the question. Sorry about that.
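The monitoring-driven approach above could be backed by a Prometheus alerting rule like the following sketch, which assumes node_exporter metrics are being scraped; the threshold and names are placeholders:

```yaml
# Flag nodes whose CPU stays high, as a signal that the cluster may need
# an additional worker node.
groups:
  - name: node-capacity
    rules:
      - alert: NodeHighCpu
        expr: |
          100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 85
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Node {{ $labels.instance }} CPU above 85% for 15m"
```

Grafana dashboards over the same metrics then show which nodes are under- or over-utilized.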