
At Blazeclan Technologies, I apply my expertise in AWS and Terraform to architect robust cloud solutions. With a Bachelor of Engineering in Computer Engineering, I bring rigorous analytical skills to optimizing cloud operations.
Certified in AWS and Terraform, I contribute to seamless infrastructure management. My focus is on delivering high-quality, scalable cloud environments that give our clients efficiency and room to innovate.
Sr. Cloud Operations Engineer
Blazeclan Technology Pvt Ltd
System Administrator
SEED Infotech Ltd
AWS

Terraform
Jenkins

Linux

Windows

CloudWatch

Redshift

EMR

EKS

Active Directory

CLI

Firewall
Yeah, first of all, thanks for asking me this question and giving me the opportunity to introduce myself. I have completed my bachelor's degree in computer engineering, and currently I am working as a Senior Cloud Operations Engineer with expertise in both streams: I am an AWS Certified Solutions Architect Professional and a HashiCorp Certified Terraform Associate. I have been in this industry for more than seven years, and if I count my internship it would be around eight. Moving on to my roles and responsibilities at Blazeclan Technologies: at Blazeclan we work in a shared model, which means we serve multiple clients simultaneously, and I currently work with four to five clients at a time. My primary responsibility is to communicate and engage with the clients, understand what they want to implement and what their use case is, and after understanding the use case, provide them a solution. Once the solution has been approved by the client, my responsibility is to make sure it is implemented in the respective account. Since I work on multiple projects, during implementation I have gotten exposure to creating many AWS services like EC2, RDS, VPC, subnets, EKS, and more, both manually and by leveraging the infrastructure-as-code tool Terraform, so I have hands-on experience with both approaches. Going ahead, since this is a mid-scale company, I am also responsible for cost optimization and for IAM-related work, that is, user permissions and role creation, along with the security aspects. For security, we use CIS benchmark reports, Trusted Advisor reports, and Amazon Inspector reports to make sure the whole AWS environment is configured properly. Besides this, one of my clients has a big architecture with more than a hundred client accounts, all managed with Terraform with the help of the AWS Control Tower service, and they deal with multiple vendors because the infrastructure is huge. My primary responsibility there is the troubleshooting side, and I am responsible for all the troubleshooting calls as well. That's it from my side, thank you very much.
How do you plan disaster recovery for cloud infrastructure spanning multiple regions in an AWS environment? Okay, so as per my understanding this question is related to disaster recovery. It completely depends on the client's requirements: how much time they allow for the recovery and up to what point in time the data must be available during the disaster. That is the RTO and the RPO, and it is entirely driven by client requirements. In the disaster recovery plan we use scenarios like active-active or active-passive. I have created disaster recovery plans mostly for AWS services like EC2, RDS, EKS, EFS, and more, and I built them in AWS infrastructure where my client's primary region is Mumbai and the disaster recovery region is Hyderabad. Let me give the example of RDS. For that purpose we created the DR architecture like this: we are using Postgres RDS, the primary node is in the Mumbai region, and we created a read replica of that RDS in the Hyderabad region. If the primary region goes down, Mumbai in my case, a Lambda script I wrote is triggered, and it automatically promotes the Hyderabad read replica to a primary instance. Once Mumbai is back up, we replicate from the Hyderabad primary back to Mumbai, and during the switchback plan we make the RDS in the Mumbai region primary again; we do the same thing in reverse. That is one AWS service I am explaining here. Going ahead, I have also created an EC2 and VPC architecture for disaster recovery. One of my clients' requirements was that all the infrastructure present in Mumbai must be replicated to Hyderabad, so I wrote Terraform code as per the disaster recovery setup. Through the backup service I create AMIs and copy them to the Hyderabad region, and with a single run of the Terraform code I pick up the latest AMI for each EC2 instance and launch the full DR setup in the Hyderabad region within about ten minutes of downtime. I think that's it from my side.
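To make the RDS part of that setup concrete, here is a minimal Terraform sketch of a cross-region read replica along the lines described above; the region names, identifiers, instance sizes, and the db_password variable are illustrative assumptions, and the Lambda-based promotion itself is not shown.

```hcl
# Minimal sketch of the Mumbai-primary / Hyderabad-replica RDS layout.
# All names, sizes, and the password variable are illustrative assumptions.

variable "db_password" {
  type      = string
  sensitive = true
}

provider "aws" {
  alias  = "primary"
  region = "ap-south-1" # Mumbai
}

provider "aws" {
  alias  = "dr"
  region = "ap-south-2" # Hyderabad
}

# Primary Postgres instance in the Mumbai region
resource "aws_db_instance" "primary" {
  provider                = aws.primary
  identifier              = "app-postgres-primary"
  engine                  = "postgres"
  instance_class          = "db.t3.medium"
  allocated_storage       = 100
  username                = "appadmin"
  password                = var.db_password
  backup_retention_period = 7 # backups must be enabled so a replica can be created
  skip_final_snapshot     = true
}

# Cross-region read replica in Hyderabad; promoted to primary during failover
resource "aws_db_instance" "dr_replica" {
  provider            = aws.dr
  identifier          = "app-postgres-dr"
  replicate_source_db = aws_db_instance.primary.arn # ARN is required for cross-region replication
  instance_class      = "db.t3.medium"
  skip_final_snapshot = true
}
```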
What methods would you use to optimize resource consumption? So I am talking about general resource consumption, and since I primarily work on AWS, I will go with that. To reduce the consumption of any resource, we first have to enable monitoring on it. Say I have ten EC2 instances: I enable monitoring on those instances, which tracks CPU and memory utilization. If the CPU and memory are not being used as expected, we can downsize the instance type; that is one way to reduce consumption. The second is to use Spot Instances or Auto Scaling. Going with Auto Scaling: if you are running, say, an e-commerce website and your requests increase, Auto Scaling will automatically create instances, and when the pressure reduces it will automatically remove them. We can also use Spot Instances to reduce consumption as well as cost, since Spot Instances come at a minimal price. I am currently using the Spot Instance dashboard to request Spot capacity through Auto Scaling, and that combination of Spot Instances with Auto Scaling works well. Thank you very much.
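As a rough illustration of combining Auto Scaling with Spot capacity and a CPU-based scaling policy, here is a minimal Terraform sketch; the AMI ID, subnet IDs, instance types, and the 50% CPU target are placeholder assumptions.

```hcl
# Minimal sketch of an Auto Scaling group mixing On-Demand and Spot capacity.
# AMI, subnets, and instance types are placeholder assumptions.

resource "aws_launch_template" "web" {
  name_prefix   = "web-"
  image_id      = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.medium"
}

resource "aws_autoscaling_group" "web" {
  name                = "web-asg"
  min_size            = 2
  max_size            = 10
  desired_capacity    = 2
  vpc_zone_identifier = ["subnet-aaaa1111", "subnet-bbbb2222"] # placeholder subnets

  mixed_instances_policy {
    instances_distribution {
      on_demand_base_capacity                  = 1
      on_demand_percentage_above_base_capacity = 25 # remaining capacity comes from Spot
      spot_allocation_strategy                 = "lowest-price"
    }
    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.web.id
        version            = "$Latest"
      }
      override {
        instance_type = "t3.medium"
      }
      override {
        instance_type = "t3a.medium"
      }
    }
  }
}

# Scale in and out based on average CPU utilization of the group
resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "cpu-target-50"
  autoscaling_group_name = aws_autoscaling_group.web.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50
  }
}
```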
How does containerization with Docker enhance application deployment in a cloud environment? So for containerized application deployment, I am using a CI/CD pipeline integrated with Jenkins. Docker is good for packaging and spinning up multiple applications, and Docker in combination with Kubernetes is even stronger, because Kubernetes brings scaling and rolling updates with it, so Kubernetes is well suited to handle Docker workloads. The combination of Docker and Kubernetes is good enough to make containerization work reliably. Related to deployment, I currently use a CI/CD setup with Jenkins, and all the Docker images are stored in the AWS ECR service.
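As a small illustration of where those Docker images are stored, here is a minimal Terraform sketch of an ECR repository with scan-on-push and a simple lifecycle rule; the repository name and retention period are assumptions.

```hcl
# Minimal sketch of the ECR repository the CI/CD pipeline pushes images to.
# Repository name and retention window are illustrative assumptions.
resource "aws_ecr_repository" "app" {
  name                 = "web-app"
  image_tag_mutability = "IMMUTABLE"

  image_scanning_configuration {
    scan_on_push = true # scan each pushed image for known vulnerabilities
  }
}

# Expire stale untagged images to keep storage cost down
resource "aws_ecr_lifecycle_policy" "app" {
  repository = aws_ecr_repository.app.name
  policy = jsonencode({
    rules = [{
      rulePriority = 1
      description  = "Expire untagged images older than 14 days"
      selection = {
        tagStatus   = "untagged"
        countType   = "sinceImagePushed"
        countUnit   = "days"
        countNumber = 14
      }
      action = { type = "expire" }
    }]
  })
}
```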
Can you suggest a strategy to migrate an on-premises application to AWS using Docker containers? To be really honest, I have never worked with the migration services. In my organization we have a separate migration team for those tasks, migrations and database work, so this is not something I have done previously. But I will take a guess at a strategy to migrate an on-premises application to AWS: I am aware that to connect on-premises to AWS we can use the Direct Connect service in AWS. For the part that is specific to Docker containers, I am not pretty sure.
How do you manage state in a distributed application using Kubernetes for orchestration? Okay, see, to manage state we can use the combination of Kubernetes with Terraform. Terraform is good enough to manage state files, and Terraform also has options to store its state file in a particular location: for example an S3 backend, a Kubernetes backend, a Postgres backend, local, remote, and many more. So I think to manage state with Kubernetes for orchestration, we can use Terraform.
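A minimal sketch of the remote state storage being described, assuming illustrative bucket, key, and DynamoDB table names:

```hcl
# Minimal sketch of keeping Terraform state in a remote backend with locking.
# Bucket, key, region, and table names are illustrative assumptions.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "eks/prod/terraform.tfstate"
    region         = "ap-south-1"
    encrypt        = true
    dynamodb_table = "terraform-state-locks" # state locking to avoid concurrent applies
  }
}
```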
In this Dockerfile snippet, review the commands and identify what potential issue may arise when building the image, and explain how you would resolve it. My first observation is on the second line, RUN apt-get update and apt-get install -y git: in my view apt-get does not have the -y option, yum does, so it will give an error like the -y option is not valid. My second observation is the last two lines, RUN mkdir /code, then change into /code and git clone a particular repo: the git clone comes on one line and the repo directory comes on the next line, so git clone may not find the specific repo, and it will give an error like the repository is not defined or the git URL is not specified.
Given this Terraform snippet that initializes a new AWS EC2 instance, identify what is wrong with the variable interpolation and how it could affect the deployment. Okay, see, var.environment is the variable being referenced, and you want to know what is wrong with the interpolation. In the tags, Name is set in single quotes, something like 'instance-${var.environment}'. If you put anything in single quotes, it will be printed as-is. If you want the combination of the literal instance- prefix and the environment variable, it should be in double quotes with the ${} interpolation syntax so the value gets resolved. As written, I don't think the value will be interpolated. Also, the environment variable is not defined anywhere in the current code, as far as I can see on review.
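A minimal sketch of the corrected interpolation being described, with the environment variable declared; the resource name, AMI ID, and instance type are placeholder assumptions.

```hcl
# Declare the environment variable that the tag interpolates
variable "environment" {
  type    = string
  default = "dev"
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"

  tags = {
    # Double quotes with ${} so the variable is actually resolved,
    # producing e.g. "instance-dev" instead of a literal string
    Name = "instance-${var.environment}"
  }
}
```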
Design a Terraform module to deploy a multi-tier web application using AWS services. Is there anything to write down? I don't think so. Okay, so for a multi-tier application there are two approaches: one is to deal with modules, and the other is to deal with workspaces. According to the question, we have to create a module to deploy a multi-tier web application. The syntax is simple: you write a module block with a local name and the source of that module, and inside the module you define whatever AWS resources you want to create. Say you have created an EC2 instance: you create a variable for everything, so instead of hardcoding the instance type you reference something like var.instance_type, and you pass all the values in through variables. Whenever you want to consume that module's details, during declaration you just refer to module.<local name>, with the specific source path if the module is maintained in a separate folder, and then module.<local name>.<output> for its values. That is basically how a module works. For a multi-tier web application, the resources you require are EC2 for the server tier, a load balancer, routing and security group settings, and an RDS setup. So you create everything via modules and call those modules in the declaration part wherever they are required, as in the sketch below.
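A minimal sketch of the module layout being described; the module source paths, variable names, and outputs are illustrative assumptions.

```hcl
# Root configuration calling the per-tier modules.
# Module paths, variable names, and values are illustrative assumptions.

module "web" {
  source        = "./modules/web" # EC2 / Auto Scaling tier
  instance_type = "t3.medium"
  subnet_ids    = ["subnet-aaaa1111", "subnet-bbbb2222"]
}

module "alb" {
  source     = "./modules/alb" # load balancer tier
  subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"]
  target_ids = module.web.instance_ids # consume the web module's output
}

module "db" {
  source         = "./modules/rds" # database tier
  engine         = "postgres"
  instance_class = "db.t3.medium"
}

# Inside ./modules/web, values arrive as variables and results leave as outputs:
#   variable "instance_type" { type = string }
#   variable "subnet_ids"    { type = list(string) }
#   output   "instance_ids"  { value = aws_instance.this[*].id }
```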
If you were required to build a CI/CD pipeline, which tools in the Kubernetes ecosystem would you incorporate? So for a CI/CD pipeline there are multiple options; it is a theoretical question. Primarily, or in general, people go with Jenkins for CI/CD pipelines. Nowadays we also have the CodePipeline service in AWS, and apart from this there is a newer tool for CI/CD called Argo CD. The primary difference, comparing a Jenkins setup and an Argo CD setup: to build a CI/CD pipeline with Jenkins, the Jenkins server has to be maintained outside the Kubernetes cluster, and you have to provide connectivity between the cluster and Jenkins with specific credentials. With Argo CD, you deploy the Argo CD application inside the Kubernetes cluster itself, and in the application file you provide your centralized repository information and your Kubernetes cluster information. There are no manual steps there and no need to grab keys and put them somewhere: whenever a user or developer pushes their code to the centralized repository, Argo CD automatically syncs everything to the Kubernetes cluster.
See, I am not pretty good with Azure, but if I get a chance to work with the Azure cloud, I will definitely learn it. Currently I am specialized as an AWS person, so I will tell you about the monitoring tools there. In any cloud there is something called logging; I am not sure exactly what Azure has, but I am aware GCP has its logging service, and it captures whatever logs are present. In AWS, CloudWatch is the monitoring tool, where we can set up metrics, add log groups, and also set alarms using those metrics, so all the monitoring in AWS happens via the CloudWatch service. I am not strong on Azure right away, so I am not sure about the exact Azure monitoring tool, but we can also use third-party tools: in my current organization we are using New Relic, and that kind of third-party tool can be used as well.
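As a small illustration of a CloudWatch alarm built on a metric, here is a minimal Terraform sketch; the instance ID, threshold, and SNS topic name are assumptions.

```hcl
# Minimal sketch of a CloudWatch alarm on EC2 CPU utilization with an SNS notification.
# Instance ID, threshold, and topic name are illustrative assumptions.

resource "aws_sns_topic" "alerts" {
  name = "ops-alerts"
}

resource "aws_cloudwatch_metric_alarm" "high_cpu" {
  alarm_name          = "ec2-high-cpu"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80  # percent
  period              = 300 # seconds per evaluation window
  evaluation_periods  = 2   # two consecutive breaching windows trigger the alarm

  dimensions = {
    InstanceId = "i-0123456789abcdef0" # placeholder instance
  }

  alarm_actions = [aws_sns_topic.alerts.arn] # notify the on-call topic
}
```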