As an enthusiastic IT professional with 8 years in tech, including 5+ years in DevOps, I am proficient in deployment, automation, and CI/CD, with a strong command of Bash, Docker, Kubernetes, Terraform, and Ansible, and I am keen to apply my skills to advance your team's objectives.
SENIOR DEVOPS SRE - Helpshift
SITE RELIABILITY ENGINEER - Nvizion Solutions
DEVOPS + DEVELOPER - Ecolife Engineers Limited
IT ENGINEER - Maharashtra Police Academy
OWNER - Vikshan Technology
Google Kubernetes Engine
Jenkins
Docker
Kubernetes
Terraform
Helm
Ansible
Python
GitHub Actions
AWS
Azure
GCP
Grafana
Kibana
Elasticsearch
Prometheus
Maven
Android
Flutter
LMS
Puppet
Chef
Airflow
MLflow
MySQL
CrowdStrike
Centralized Information Hub for Customers, Agents, and Bots
Consolidate all essential information into a single, centralized hub to provide instant access to FAQs, resources, and critical details, empowering users with unified knowledge.
The system is provisioned and automated with Terraform, Ansible, and Jenkins CI/CD pipelines, and runs on AWS and Azure cloud services to support its AI-related features. Monitoring is handled through Kibana, Grafana, and Elasticsearch.
Kept 125 clients up and running at 2,000 requests per second through ongoing modifications to the Kubernetes infrastructure
Prepared complete CI/CD pipelines end to end
Created and deployed mobile applications on the Play Store
With Gather AI, I would get to learn a lot from scratch about emerging technologies, across DevOps, machine learning, and artificial intelligence. I heard about Gather AI on the internet and got to know that the company's profile is very employee-friendly, and I would love to work in an environment where employees are valued that way. That's it.
With respect to hosting an application on Kubernetes versus a VM, I would not commit to either approach directly. The better approach is to first learn what the requirement is: whether the application suits a monolithic or a microservice-based architecture, and how much CPU and load the virtual machines would need to handle. If the application can be built as microservices, I would definitely go with Kubernetes rather than a virtual machine. If it is a monolithic architecture, the application is very large, and it needs a dedicated virtual environment, a VM is the better fit. In my organization we run a lot of virtual machines as well as Kubernetes infrastructure, so it basically depends on the organization's requirement: for a microservice-based architecture, go with Kubernetes; otherwise, use a VM.
Why do you need a route table, and when do you need one? When we build a private VPC, it contains subnets, and each subnet is associated with a route table. The route table decides where traffic for a given destination goes: we define rules over CIDR blocks that allow routes toward internet-facing or internal network addresses, which controls the paths ingress and egress traffic can take. (Port-level allow/deny rules belong to security groups and network ACLs rather than the route table itself, which routes by destination CIDR.) The route table is attached to the subnets, the subnets belong to the VPC, and that is how the private network comes together as infrastructure as code. So, why do you need a route table? To control whether a particular application is reachable from the external world or only internally, for example via an API or a database endpoint, based on the CIDR blocks you provide. And when do you need one? Whenever you want to allow or deny access to particular routes for a particular CIDR range.
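As a minimal sketch of that wiring, here is the same idea in Python with boto3 instead of Terraform; the VPC, internet gateway, and subnet IDs are hypothetical placeholders:

    # Sketch: a public route table wired to a subnet with boto3.
    # All resource IDs below are hypothetical.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a route table inside the VPC.
    rt = ec2.create_route_table(VpcId="vpc-0123456789abcdef0")
    rt_id = rt["RouteTable"]["RouteTableId"]

    # Send all outbound traffic (0.0.0.0/0) through the internet gateway;
    # this is what makes the associated subnet internet-facing.
    ec2.create_route(
        RouteTableId=rt_id,
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId="igw-0123456789abcdef0",
    )

    # Associate the route table with a subnet; without this association
    # the subnet falls back to the VPC's main route table.
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId="subnet-0123456789abcdef0")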
How does database indexing work? In my organization we run MongoDB, PostgreSQL, and Redis, among other databases, and our applications use indexes. An index gives records a lookup structure in the database, so a particular row or column of a particular table can be retrieved quickly without scanning everything; in a NoSQL store like MongoDB, each document in a collection carries an _id field, and that _id acts as the index identifying the record for a given use case. Indexing also matters for CRUD operations: when you create, read, update, or delete a record, the database uses the index (for example the _id) to find exactly which record to operate on. That's it. Thanks.
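A small self-contained sketch of the effect, using Python's built-in sqlite3 as a stand-in for the databases above; the users table and its columns are made up:

    # Sketch: what an index changes, shown with Python's built-in sqlite3.
    # The table and column names are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.executemany(
        "INSERT INTO users (email) VALUES (?)",
        [(f"user{i}@example.com",) for i in range(10_000)],
    )

    query = "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?"

    # Without an index on email, the lookup scans the whole table.
    print(conn.execute(query, ("user42@example.com",)).fetchall())  # SCAN users

    # With an index, the same lookup becomes a direct index search.
    conn.execute("CREATE INDEX idx_users_email ON users (email)")
    print(conn.execute(query, ("user42@example.com",)).fetchall())  # SEARCH ... USING INDEX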
For the top things I care about to keep a system secure: first, I avoid external internet access to the instances; second, SSH should not be exposed on the default port 22; third, I enforce two-factor authentication, for example MFA on the AWS account; fourth, I deny access to, or disable, any ports we are not using; and fifth, the application should not communicate with external networks it does not need. That's it.
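As an illustration of the SSH point, a hedged boto3 sketch that opens SSH only on a non-default port and only to an internal admin range; the group ID, port, and CIDR are example values:

    # Sketch: restricting SSH via a security group rule with boto3.
    # The group ID, port, and CIDR below are hypothetical.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Allow SSH on a non-default port (2222) from an admin range only,
    # instead of leaving port 22 open to 0.0.0.0/0.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 2222,
            "ToPort": 2222,
            "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "admin VPN only"}],
        }],
    )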
What are some challenges you face when trying to horizontally autoscale a Kubernetes cluster, VMs, or instances? With horizontal autoscaling (HPA) on Kubernetes, the challenges we faced were storage issues, where the storage attached to the database was not enough for the application to scale out, and the CPU and memory allocated to particular pods being insufficient; we fixed those by attaching or increasing the instance storage and raising the resource allocations. With EC2 instances, AWS autoscaling creates separate instances for the scale-out, and that takes time, which is bad for application performance and for the user during a spike; for that reason I prefer a custom autoscaler. So horizontal autoscaling with EC2 instances or VMs is challenging with respect to performance, though it does work if you run more instances than you strictly need.
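For the Kubernetes side, a minimal sketch of setting up an HPA by driving kubectl from Python; the deployment name, thresholds, and replica bounds are hypothetical:

    # Sketch: creating a Horizontal Pod Autoscaler with kubectl from Python.
    # The deployment name and numbers are hypothetical examples.
    import subprocess

    subprocess.run(
        [
            "kubectl", "autoscale", "deployment", "web",
            "--cpu-percent=70",  # scale out when average CPU crosses 70%
            "--min=2",           # keep at least two replicas warm
            "--max=10",          # cap the scale-out
        ],
        check=True,
    )

    # Watch the HPA react; replicas stuck at max or pods stuck Pending
    # usually point at the storage or CPU/memory limits described above.
    subprocess.run(["kubectl", "get", "hpa", "web", "--watch"], check=True)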
How did you manage the DB changes that developers want to deploy to production? We manage them through a planned CM, that is, change management. A change management request is raised, we plan it over seven days, and the plan collects approvals from the various engineering managers. Once those managers approve the ticket and the CM, we proceed to upgrade or update the databases, database instances, or database servers and the VMs involved. So the answer is: planned change management. Thanks.
Here is a snippet for a Python CI/CD pipeline script which utilizes Docker; what in the code might fail the building process? As read out: the script imports subprocess; there is a function build_docker_image that takes a tag argument; inside a try block it calls subprocess.run with ["docker", "build", "-t", tag, "."] and check=True; the except clause catches subprocess.CalledProcessError and prints "docker build failed"; and finally build_docker_image(tag) is called. With check=True, a failing docker build raises CalledProcessError and the except clause handles it, so I think this code will work. If anything, the problem would be with that exception handling: printing the message and swallowing the error means the script still exits 0, so the pipeline stage would not actually fail on a broken build.
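A plausible reconstruction of that snippet from the description above, with the swallowed-error caveat fixed by re-raising; the example tag is hypothetical:

    # Reconstruction of the snippet as described; the tag value is made up.
    import subprocess

    def build_docker_image(tag):
        try:
            # check=True turns a non-zero exit from `docker build` into an exception.
            subprocess.run(["docker", "build", "-t", tag, "."], check=True)
        except subprocess.CalledProcessError:
            print("docker build failed")
            # Re-raise so the CI stage exits non-zero and the pipeline stops;
            # without this, the failure is printed but the build "succeeds".
            raise

    build_docker_image("myapp:latest")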
How would you deploy a multi-tier application using Terraform with high availability in both AWS and Azure environments? With respect to high availability, the existing infrastructure should already be up and running, and we create a parallel infrastructure alongside it. A multi-tier application can take many shapes; I will consider three tiers: the application level, the database level, and the user level. When deploying a multi-tier application with Terraform, we should use separate workspaces: the database is kept in its own workspace, and the application code or artifacts are deployed to separate instances in another. Communication between the tiers is handled through the cluster's internal networking, for example an ingress backed by NGINX ingress controllers. Concretely, on AWS the Terraform code first creates the database, using RDS with PostgreSQL or MySQL: we create the RDS instance, generate the credentials, and store them in a secrets store. Later, in the separate workspace, we create the Kubernetes infrastructure: that workspace contains the Kubernetes deployments, the deployments are connected to a headless service, and the headless service's pods connect directly to the database. Requests arriving at the ingress are forwarded to the container and pod level, through the deployment, and on to the database endpoint. That is how it works on AWS; for Azure, we can use Azure Pipelines to deploy the application end to end, from the database to the application, for whichever tiers we want. That's it. Thanks.
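To keep this document's examples in one language, here is the database step of that stack sketched with boto3 rather than Terraform; every identifier, size, and credential below is a made-up example:

    # Sketch: the highly available database tier, provisioned with boto3.
    # All identifiers, sizes, and credentials are hypothetical examples.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="app-db",
        Engine="postgres",                  # or "mysql", per the answer above
        DBInstanceClass="db.t3.medium",
        AllocatedStorage=100,
        MasterUsername="appadmin",
        MasterUserPassword="example-only",  # real credentials go to a secrets store
        MultiAZ=True,                       # standby replica in a second AZ for HA
    )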
Your web application on Kubernetes repeatedly crashes due to memory leaks; how would you diagnose and resolve the issue, and which metrics would be vital for monitoring? The crashing unit here is the Kubernetes pod. We can add sidecar containers that expose monitoring data, Prometheus metrics or application logs, so we have a monitoring mechanism: Prometheus with Grafana for the graphical representation of the metrics, and Kibana with Elasticsearch to show the actual application logs. To diagnose the memory leak, we need to check how much memory and CPU the pod has been allocated and what limits it is permitted to grow to: kubectl get pod -o yaml gives the complete details of the pod, and kubectl describe pod (or describing the owning deployment, since the deployment contains the pod) shows what is happening. To resolve it, we can raise the limits, increase the number of pods, use a Horizontal Pod Autoscaler or a Vertical Pod Autoscaler, or, as a third option, a custom autoscaler such as KEDA, the Kubernetes event-driven autoscaler, to scale the pods and containers. For diagnosis we also need the logs, via kubectl logs on the particular deployment or pod. The vital metrics for monitoring are the pod's memory and CPU usage against its limits and the application's own info, warning, and error logs; these are collected and sent to Prometheus, which feeds the metrics to Grafana for visualization. We can use Loki as well if we want, or Datadog as another monitoring solution; it depends on the application's requirements. That's it, thanks.
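A quick triage sketch for such a pod, driving kubectl from Python; the pod and namespace names are hypothetical, and kubectl top needs metrics-server running in the cluster:

    # Sketch: OOM triage for a crashing pod via kubectl.
    # Pod and namespace names are hypothetical.
    import subprocess

    def sh(*args):
        return subprocess.run(args, capture_output=True, text=True, check=True).stdout

    pod, ns = "web-7d5b9c6f4-abcde", "prod"

    # Why did the container last terminate? "OOMKilled" confirms a memory-limit kill.
    print(sh("kubectl", "get", "pod", pod, "-n", ns, "-o",
             "jsonpath={.status.containerStatuses[0].lastState.terminated.reason}"))

    # How often has it restarted?
    print(sh("kubectl", "get", "pod", pod, "-n", ns, "-o",
             "jsonpath={.status.containerStatuses[0].restartCount}"))

    # Current usage versus the configured requests/limits, plus event history.
    print(sh("kubectl", "top", "pod", pod, "-n", ns))
    print(sh("kubectl", "describe", "pod", pod, "-n", ns))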
What methodology would you apply in the day-to-day lifecycle to meet compliance standards like SOC 2 or GDPR when deploying an application? To be frank, in my organization we maintain both GDPR and SOC 2 compliance for the application. For GDPR compliance, the application should have monitoring in place, logging in place, backup and recovery of the databases in place, and security controls in place, and everything is monitored and controlled through audits of the application, which should be carried out by a proper auditing organization. Those are the things to consider while developing any application end to end. In the day-to-day lifecycle we run continuous integration and continuous deployment and delivery, and in the delivery part we apply a lifecycle policy aligned with GDPR. We also make agreements such as SLAs and SLOs, and whether those agreements are being met should be documented, with that documentation available to the audits and the company's processes. That's it.
What approaches would you take to build code-to-production pipelines for AI-driven applications using Kubernetes? In my current organization we run many AI-driven applications built on Python libraries and managed with Kubernetes, for example a lang-detect module and other AI tools the company has designed. We have a pipeline in Jenkins that triggers when a new tag of the application is released. We are not on GitHub; our internal repository manager is Gerrit, which is Git-based, and it sends the triggers to Jenkins. The Jenkins release pipeline contains our Terraform and Ansible code: the Terraform code provisions the infrastructure, and the Ansible code configures it, covering the installation and configuration of the containerized environment, Docker or Kubernetes. Once the infrastructure is ready, we deploy the application in containers onto the newly created nodes; that release/deployment pipeline is the workflow we use end to end for AI-driven applications. On the containerization side, we do not use Docker directly; we use Podman. Podman commands build the images and push them to ECR, and once production is ready, the containers are deployed from those ECR images onto the production machines. That's it, thanks.
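A hedged sketch of the build-and-push step of that pipeline as described, using Podman and ECR; the registry URL, repository, and tag are made-up examples:

    # Sketch: build an image with Podman and push it to ECR.
    # Registry, repository, and tag values are hypothetical.
    import subprocess

    REGISTRY = "123456789012.dkr.ecr.us-east-1.amazonaws.com"
    IMAGE = f"{REGISTRY}/ai-app:v1.2.3"

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    # Authenticate Podman against ECR with a short-lived token.
    sh("aws ecr get-login-password --region us-east-1 "
       f"| podman login --username AWS --password-stdin {REGISTRY}")

    # Build from the repo's Containerfile/Dockerfile, then push.
    sh(f"podman build -t {IMAGE} .")
    sh(f"podman push {IMAGE}")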