
Amit Kumar is an experienced Cloud/Linux Engineer with a robust background in various aspects of system administration, cloud infrastructure management, and DevOps practices. With proficiency in a wide range of technologies including RedHat Linux, Debian, CentOS, Ubuntu, AWS, Azure, GCP, Docker, Ansible, and Jenkins, he has demonstrated expertise in provisioning servers, ensuring cloud security, and resolving customer issues promptly. Amit has a Bachelor's degree in Commerce with Computer Science and has worked at Grras Solutions and Ninehertz India, where he contributed to designing, developing, and maintaining cloud-based solutions, providing technical direction to implementation teams, and ensuring optimal performance and uptime of applications. With excellent problem-solving skills, strong technical acumen, and effective communication abilities, Amit is a valuable asset in any team environment, adept at meeting tight deadlines and working under pressure.
Platform Engineer, Verticurl
Platform Engineer, WPP
Site Reliability Engineer, Wolfram
Cloud/Infrastructure Engineer, Grras Solution Pvt. Ltd
DevOps Engineer, The NineHertz
AWS (Amazon Web Services)
Azure DevOps Server
Jenkins
Docker

Linux Admin
Terraform

Kubernetes
Debian

CentOS

Oracle VM

Ubuntu

Ansible

Amazon Web Services
Azure

GCP

AWS EC2

VPC

CLI

S3

IAM

GitHub

Apache

Nginx

LAMP

MERN

MEAN

AWS

Oracle

Terraform

EC2

SNS

Route53

IAM

EKS

ECS

CloudWatch

GitLab

OTRS
Jira
ELK Stack

MEAN stack

Oracle Cloud

IAM
Could you help me understand more about your background, with a brief introduction of yourself?
So, my name is Amit, and I'm working as a production engineer at Manors, India. Currently we are working on multiple projects where we need to deploy applications on Kubernetes with auto scaling and high scalability. After deployment, we also take care of monitoring, security, and continuous integration. I have more than five years of experience now, and I keep learning new tools and skills.
Suppose we have multiple tech stacks and we are not depending only on AWS; in that scenario we should consider Terraform. There are dependencies to think about: if your Kubernetes cluster and everything else runs on a different cloud, you can't reuse AWS CloudFormation, because CloudFormation is AWS-only. But if you are working only with AWS, you can certainly go with CloudFormation. That's my view on it.
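To illustrate the point about multi-cloud support, here is a minimal Terraform sketch that manages resources on two clouds from one configuration, which CloudFormation cannot do. The regions, bucket names, and project id are placeholder assumptions, not values from the interview.

```hcl
# One Terraform configuration, two clouds (AWS-only CloudFormation can't do this).
provider "aws" {
  region = "us-east-1" # assumed region
}

provider "google" {
  project = "my-example-project" # hypothetical project id
  region  = "us-central1"
}

resource "aws_s3_bucket" "logs" {
  bucket = "example-app-logs" # hypothetical bucket name
}

resource "google_storage_bucket" "logs" {
  name     = "example-app-logs"
  location = "US"
}
```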
I would prefer Python among the given options. If there are multiple Linux servers and a regular task that has to run on all of them every time, I would first write the task in Python, then describe it in YAML and use Ansible to automate the script, so it runs on all the servers in one click.
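As a sketch of that Python-plus-YAML-plus-Ansible approach, the playbook below runs the same routine task on every host in a group. The host group name and the script path are assumptions for illustration.

```yaml
# Hedged sketch: an Ansible playbook that runs one routine task on all servers.
- name: Run routine maintenance on all servers
  hosts: webservers # assumed inventory group
  become: true
  tasks:
    - name: Ensure logrotate is installed
      ansible.builtin.package:
        name: logrotate
        state: present

    - name: Run a custom Python maintenance script on each host
      ansible.builtin.script: ./cleanup.py # hypothetical script
```

Running `ansible-playbook maintenance.yml` then executes the task across every server in the inventory in a single command.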
Can you outline the steps to create a secure AWS architecture for a new app?
The exact architecture isn't specified, so this is my approach. First, create a separate, secure VPC and launch a set of machines inside it with auto scaling and a load balancer; the load balancer is the only public-facing component. For the code side, store the source in CodeCommit, and use CodePipeline to automate building, testing, and deploying it. For the security layer, we can add tools for DDoS protection and put Cloudflare in front as the first line of interaction: Cloudflare handles the client traffic and routes it to the load balancer, which then forwards it to the instances. In this way we get a secure setup for the new application.
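The VPC layer of that design can be sketched in Terraform. This is not a complete, applyable configuration (a real ALB needs subnets in at least two availability zones, security groups, and target groups); CIDRs and names are assumptions.

```hcl
# Hedged sketch of the VPC + public load balancer layer described above.
resource "aws_vpc" "app" {
  cidr_block = "10.0.0.0/16" # assumed CIDR
}

resource "aws_subnet" "app" {
  vpc_id     = aws_vpc.app.id
  cidr_block = "10.0.1.0/24"
}

# The load balancer is the only public-facing component.
resource "aws_lb" "public" {
  name               = "app-public-lb" # hypothetical name
  load_balancer_type = "application"
  subnets            = [aws_subnet.app.id]
}
```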
How do you handle version control and deployment in a multi-cloud environment to ensure consistency across production and non-production setups?
With Git we can create one branch per environment. Say production runs on one cloud while development and staging run on others: we map each environment to its own branch. Whenever a developer pushes code to a specific branch, that code is deployed to the corresponding environment. For example, if the development environment is on AWS, a push to the development branch deploys to AWS; if production is on Azure, a push to the production branch deploys to Azure. In that way we keep the multiple setups consistent. Thank you.
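One way to wire up that branch-to-environment mapping is a CI workflow; the sketch below assumes GitHub Actions, and the branch names, `deploy.sh` script, and target names are hypothetical.

```yaml
# Hedged sketch (assuming GitHub Actions): each branch deploys to its own cloud.
on:
  push:
    branches: [develop, staging, production]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to the environment matching the pushed branch
        run: |
          case "${GITHUB_REF_NAME}" in
            develop)    ./deploy.sh aws-dev ;;      # development lives on AWS
            staging)    ./deploy.sh aws-staging ;;
            production) ./deploy.sh azure-prod ;;   # production lives on Azure
          esac
```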
In my experience, implementing high-level system design with an emphasis on planning results in fewer production errors. Following an agile approach, we first need to understand the requirements and be clear about the tech stack we are using. Once we have that clear picture, we need to think about the users who will be using our application in the coming months and build proper scalability into the application accordingly. We also need proper solutions for monitoring and security; as a DevOps engineer, I can take ownership of the infrastructure, monitoring, security, and scalability. If we implement all of this with a clear view of the current tech stack, choose the right tools and technologies alongside it, and estimate how many users we are going to face in the release phase, we can prepare for it properly.
(The question showed a Java code snippet to debug.)
Wait a minute. I don't have deep Java knowledge because I'm not from a coding background, but in this scenario I would reach out to a colleague who works on Java. If they aren't busy with their own tasks, I'd sit with them, discuss the issue, and ask for help, and together we would get it fixed and running. I may not have the Java knowledge directly, but with my team I'll definitely make it run. Thanks.
The question shows a Kubernetes YAML configuration for a Deployment that contains errors preventing a successful rollout: 3 replicas of nginx version 1.7.9 on port 8080.
Okay. The first thing I can see is a syntax error in the container port field: "containerport" is misspelled in lower case, while Kubernetes expects containerPort. The second issue is a label mismatch: in the Pod template metadata the label is nginx1, but the selector's matchLabels says nginx. Whenever a new replica is created, the Deployment's selector checks the Pod labels, and since the template label and the match label differ, the selector will never match its own Pods. Apart from that I don't see anything else; fixing the matchLabels to match the template label (or the other way around) and correcting containerPort should make the deployment work. Thank you.
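A corrected manifest consistent with the details given in the answer (3 replicas, nginx 1.7.9, port 8080) might look like this; the Deployment name and label key are assumptions, since the original YAML isn't reproduced in the transcript.

```yaml
# Hedged reconstruction of the fixed Deployment discussed above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment # assumed name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1 # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 8080 # camelCase; "containerport" is invalid
```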
Describe a challenging DevOps project where depth of knowledge in Linux, cloud services, and scripting languages played a key role in your problem-solving approach.
Okay. There was one project where the client was in-house at the time, and we needed to take a specific feature live in our application. The issue was that the feature worked on the staging side but not on the production side. We checked what was missing: I compared the Linux versions, and staging had just been updated while production was on a slightly older version. Everything else, including the DevOps tools and technologies, looked perfect; the one small difference was the Linux version, and with that older version some tools were not working properly. We had to go through the Linux setup and check everything to find what was missing, and that's how we resolved it. It took around 6 to 7 hours.
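When staging and production behave differently, the quickest first check is to compare distribution and kernel versions on each host, for example:

```shell
# Print the distro and kernel version; run on both staging and production
# hosts and diff the output. /etc/os-release is standard on modern distros.
. /etc/os-release
echo "Distro: $NAME $VERSION_ID"
echo "Kernel: $(uname -r)"
```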