5.7 years of backend development and deployment experience.
Worked on multiple projects in the cloud-native domain.
Working experience in microservice development and deployment on Kubernetes platforms using Helm charts and DevSecOps practices.
Hands-on with multiple technologies related to back-end development and cloud DevOps.
Result-oriented, self-motivated, and learning-focused Senior Software Engineer with 5+ years of experience in the design, development, and deployment of enterprise-level cloud-native applications.
Great Software Laboratory: MCMP Turbonomics Integration
Great Software Laboratory: CP4MCM-MCMP Integration
Great Software Laboratory

Golang
Kubernetes
Docker
Terraform
AWS
Azure
NodeJS
Python
Git Actions
Helm
MongoDB
Linux
IAM
MySQL
Grafana
Prometheus
Tekton
ArgoCD
Monitoring Tools
Okta
IBM Cloud Pak
ReactJS
DevSecOps
Snyk
Trivy
SAST
DAST
DevOps
SonarQube
I want to join Gather AI because it is a growing company. In my current organization my work is roughly 30% back-end development and 70% DevOps, and I want to move fully into the DevOps field. This role and its required skill set match my own skills closely, which is why I want to join Gather AI.
How does database indexing work? Indexing is a technique used to optimize query performance by reducing the amount of data scanned during retrieval. An index works like the index of a book: it lets the database locate a particular row quickly without scanning the entire data set. When we create an index on a column, the database sorts the values in that column and stores them in a search-friendly data structure. Without an index, the database performs a full table scan; with one, it can jump directly to the matching entries and return the rows, which is how the query speeds up. Under the hood the index typically uses a B-tree (the most common case) or a hash-based lookup, both much faster than a linear scan. There are different types of indexes: single-column indexes, multi-column (composite) indexes, and unique indexes. That is how database indexing works.
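To illustrate the full-scan-versus-index difference, here is a minimal sketch using SQLite from Python's standard library; the table and column names are hypothetical, chosen only for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(10_000)])

# Without an index the planner falls back to a full table scan.
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
                    ("user42@example.com",)).fetchall()
print(plan)  # detail column reads: SCAN users

# A single-column index stores the emails in a B-tree.
conn.execute("CREATE INDEX idx_users_email ON users(email)")
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
                    ("user42@example.com",)).fetchall()
print(plan)  # detail column reads: SEARCH users USING INDEX idx_users_email (email=?)
```

The same query switches from a scan of all 10,000 rows to a direct B-tree lookup once the index exists.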
To keep a system secure, we first apply authentication and authorization over it, so that only authorized users can access the system and, within the system, each user can reach only the components their role allows. Second, we can put a gateway or load balancer in front, as one layer between external traffic and internal services. Third, we should serve the system over HTTPS so that all communication between client and server is encrypted, using standard encryption algorithms. If the system is containerized, we can scan every container image with DevSecOps tools such as Trivy or Black Duck and deploy an image only after it passes, so the system stays free of known vulnerabilities. These vulnerability scans and related tools can all be integrated into the existing CI/CD pipeline so the system remains secure.
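As a minimal sketch of the authentication layer mentioned above, using only the Python standard library (the token source and port are assumptions for illustration; real services would sit behind TLS termination and a proper identity provider such as Okta):

```python
import hmac
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumed: the shared token arrives via an environment variable,
# injected from a secret store rather than hard-coded.
API_TOKEN = os.environ.get("API_TOKEN", "")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        supplied = self.headers.get("Authorization", "").removeprefix("Bearer ")
        # Constant-time comparison avoids timing side channels.
        if not (API_TOKEN and hmac.compare_digest(supplied, API_TOKEN)):
            self.send_response(401)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"authorized\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```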
What challenges have I faced when autoscaling Kubernetes clusters, VMs, or EC2 instances horizontally? First, what horizontal scaling means: adding more VMs to the cluster. I faced several challenges with it. One is networking and load balancing: ensuring proper network operation and reliable load balancing across all the VMs. Another is distributed data: managing data consistency across multiple VMs, especially for stateful applications. Orchestration is also a challenge: dynamically adding or removing a VM from the cluster requires automation and coordination. There are scaling bottlenecks too: horizontal scaling does not eliminate every bottleneck, particularly in components that cannot themselves be scaled horizontally, so the VMs scale out but the bottleneck remains. Related to that are performance and latency: distributing the workload across multiple VMs can introduce overhead and latency. Finally, autoscaling itself is tricky: tuning efficient automatic scale-up and scale-down policies takes real effort.
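For the Kubernetes side of that tuning problem, the Horizontal Pod Autoscaler's documented scaling rule is simple enough to state as a worked example; the numbers below are hypothetical:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Kubernetes HPA rule: desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target scale out to 6 pods.
print(desired_replicas(4, 90, 60))  # 6
```

The tricky part in practice is not the formula but choosing the target metric and stabilization windows so the cluster does not thrash between scale-up and scale-down.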
How do you manage DB changes that developers want to deploy to production? Managing changes that developers want to push to production is a critical aspect of DevOps, and it is typically handled through a change-management process plus a continuous integration and continuous delivery pipeline, along with other best practices. Step by step: first, establish a change-management process, for example a change-request workflow in Jira or ServiceNow with an approval step. Second, keep everything in a version control system such as Git. Third, test every change in a non-production environment before it goes further. Fourth, implement the CI/CD pipeline: the CI part automatically builds and tests the application whenever changes are committed, and the CD part automatically deploys changes to staging and then, once validated, to production, targeting whichever environment the request is for. For the deployment itself we can use different strategies such as blue-green or canary; we mostly use blue-green. In blue-green there are two identical environments. Say blue is live: all changes are deployed to green first, then traffic is switched from blue to green, so green becomes the live environment; on the next release the roles swap and blue gets updated.
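For the database part specifically, the usual pattern is versioned migrations that run exactly once per environment (what tools like Flyway or Liquibase do). A minimal sketch of the idea, with a hypothetical file layout and SQLite standing in for the real database:

```python
import pathlib
import sqlite3

def apply_migrations(db_path: str, migrations_dir: str) -> None:
    """Apply numbered .sql files exactly once, recording them in schema_migrations."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}
    # Files named like 001_create_users.sql, 002_add_index.sql sort into order.
    for path in sorted(pathlib.Path(migrations_dir).glob("*.sql")):
        if path.name in applied:
            continue  # already ran in an earlier deploy
        conn.executescript(path.read_text())
        conn.execute("INSERT INTO schema_migrations (version) VALUES (?)",
                     (path.name,))
        conn.commit()

apply_migrations("app.db", "migrations")
```

The CD pipeline runs the same migration step against staging first, and only promotes the release to production after it succeeds there.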
How do I choose between hosting an application in Kubernetes versus on a VM? I think about a few parameters. First, does the application fit a microservice architecture or is it a monolith? If it fits microservices, I break it into separate small functional services and deploy it on Kubernetes; otherwise I deploy it on a VM. Even a complex application, if it has only one functional service, can reasonably go on a VM.
Why do you need a route table, and in which context? In the networking context, routing happens on the basis of a route table: it maps destination addresses to next hops, and a router or load balancer consults it to decide where to send each packet. In load balancing, for example, NGINX directs incoming traffic to particular nodes based on its routing rules, redirecting each request according to that table.
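The lookup itself is longest-prefix match: the most specific route containing the destination wins, with 0.0.0.0/0 as the default. A small sketch with a hypothetical route table, using Python's stdlib ipaddress module:

```python
import ipaddress

# Hypothetical route table: destination prefix -> next hop.
ROUTES = {
    "10.0.0.0/16": "local",
    "10.0.1.0/24": "eni-frontend",
    "0.0.0.0/0":   "igw-internet-gateway",
}

def next_hop(dst: str) -> str:
    """Pick the most specific (longest-prefix) route that contains dst."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop)
               for net, hop in ((ipaddress.ip_network(p), h) for p, h in ROUTES.items())
               if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.1.7"))  # eni-frontend (most specific match wins)
print(next_hop("8.8.8.8"))   # igw-internet-gateway (falls through to default)
```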
In this example, if you look at vpc_security_group_ids, they have hard-coded the VPC security group ID, which is potentially a security risk; we should not hard-code that kind of value. Instead of embedding the ID directly in the Terraform template, we can store it in a variables (.tfvars) file, or provide it through HashiCorp Vault or a cloud-native secret store such as Parameter Store, and reference the value from there instead of writing it inline. That is what I identified.
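To show the Parameter Store option concretely, here is a sketch in Python with boto3; the parameter name and region are assumptions for illustration, and the same value could equally be wired into Terraform via a data source or a -var flag:

```python
import boto3

# Assumed parameter path: the security group ID lives in SSM Parameter Store
# instead of being hard-coded in the template or in scripts.
ssm = boto3.client("ssm", region_name="us-east-1")
resp = ssm.get_parameter(
    Name="/myapp/prod/vpc_security_group_id",
    WithDecryption=True,  # no-op for plain String parameters, required for SecureString
)
sg_id = resp["Parameter"]["Value"]
print(sg_id)
```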
Explain how to deploy a multi-tier application using Terraform, ensuring high availability in both AWS and Azure environments. Deploying a multi-tier application with Terraform on AWS and Azure with high availability involves a few steps. First, architect the multi-tier application: the front-end tier is the web app, the back-end tier holds the business logic, and then there is the database tier. For the front-end we can introduce an NGINX load balancer to handle user traffic; the back-end exposes APIs that are accessible only to the front-end, not to the public; and for the database tier we can use managed cloud database services such as Amazon RDS or Azure Database. To achieve high availability, we deploy the front-end and back-end tiers across availability zones, and the managed database layer can be replicated across availability zones as well. With Terraform the infrastructure is managed as code: Terraform supports both providers, AWS and Azure, and we can create separate reusable modules for each environment. We can manage the two environments through Terraform workspaces; Terraform Cloud is an option too, but it is costly, so workspaces are the practical choice. The back-end tier can be deployed as microservices behind an API, and we set up the VPC on AWS (and the equivalent virtual network on Azure). The final step is to run the Terraform commands against a particular workspace, which brings that environment up.
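One way to drive the two workspaces is a small wrapper around the real Terraform CLI; the workspace names and var files below are hypothetical, and `workspace select -or-create` needs Terraform 1.4 or newer:

```python
import subprocess

# Assumed layout: one workspace per cloud, each with its own variable file.
ENVIRONMENTS = {
    "aws-prod":   "aws.tfvars",
    "azure-prod": "azure.tfvars",
}

for workspace, var_file in ENVIRONMENTS.items():
    # Switch to (or create) the workspace, then apply its variables.
    subprocess.run(
        ["terraform", "workspace", "select", "-or-create", workspace],
        check=True)
    subprocess.run(
        ["terraform", "apply", f"-var-file={var_file}", "-auto-approve"],
        check=True)
```

In practice the same loop usually lives in the CI pipeline rather than a local script, with `-auto-approve` replaced by a plan-and-review gate for production.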
Your team needs to deploy an Azure-based application with high availability and minimal latency across the globe. Describe the deployment architecture you would use and the key metrics to ensure performance. To deploy an Azure-based application with high availability and minimal latency, we design the architecture to distribute application components across multiple Azure regions and availability zones; this ensures the application can handle failures and maintain performance under high load or adverse conditions. For the deployment architecture: global traffic distribution comes first, using Azure Front Door or Traffic Manager for global routing and load balancing. For regional redundancy, we deploy across regions and zones. Autoscaling applies at the compute layer, whether Azure VMs or Azure App Service. For the database we create multiple replicas, and the same idea applies to caching: Azure Cache for Redis reduces database load and improves performance. High traffic is absorbed by that global distribution and load-balancing layer, which Azure Front Door itself provides. Last but not least is monitoring and alerting: Azure Monitor and Application Insights, attached to the particular cluster or VM, provide metrics such as response time, latency, availability, error rate, and CPU and memory usage, and based on those we can set up alerting and reporting. Since we need to deploy across the globe, the global traffic distributor Azure provides is the key piece. That is my thinking.
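As a sketch of pulling those key metrics programmatically, here is what a check against Azure Monitor could look like with the azure-monitor-query SDK; the resource ID placeholders and the exact metric names are assumptions that would need to match the target resource type:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Hypothetical App Service resource ID; fill in your own subscription and names.
resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Web/sites/<app-name>"
)

# Metric names vary by resource type; these two are App Service examples.
response = client.query_resource(
    resource_id,
    metric_names=["Http5xx", "HttpResponseTime"],
    timespan=timedelta(hours=1),
)
for metric in response.metrics:
    points = [pt.average for ts in metric.timeseries for pt in ts.data]
    print(metric.name, points)
```

Alert rules on the same metrics (error rate above a threshold, latency percentiles degrading) then drive paging and reporting.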
Regarding GDPR: while deploying any application, we need to follow standard compliance rules. We cannot reveal any information that is sensitive, organization-internal, or that could potentially expose a client; that kind of information must never leave the organization. Secrets and tokens likewise must not be exposed: we supply them as environment variables, store them in a cloud-native secret store, or, in the case of Kubernetes, store them in the Kubernetes object called a Secret. And again, no client information is ever revealed outside. That's it.
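A minimal sketch of reading a secret the way described above, first from an environment variable and then from a mounted Kubernetes Secret file; the mount path and variable name are hypothetical:

```python
import os
import pathlib

def load_secret(name: str) -> str:
    """Read a secret from an env var, falling back to a mounted Secret file."""
    value = os.environ.get(name)
    if value:
        return value
    # Kubernetes Secrets mounted as volumes appear as files; the mount point
    # here is an assumption chosen for illustration.
    secret_file = pathlib.Path("/etc/secrets") / name.lower()
    if secret_file.exists():
        return secret_file.read_text().strip()
    raise RuntimeError(f"secret {name!r} not configured")

db_password = load_secret("DB_PASSWORD")
```

Either way, nothing sensitive ever appears in source control or container images.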
Describe your experience with setting up distributed ML infrastructure on platforms like AWS SageMaker. Setting up distributed machine learning on a platform like SageMaker means leveraging cloud infrastructure to train large models, or on large datasets, across multiple instances. Based on my experience with AWS SageMaker, here is the step-by-step process. SageMaker provides a managed platform to build, train, and deploy machine learning models. When it comes to distributed training, SageMaker lets you scale training horizontally, and it offers built-in algorithms as well as support for frameworks like TensorFlow and PyTorch. The steps: first, create a SageMaker notebook instance, which holds the computation logic to run and execute the model. Then prepare the data for distributed training and choose a distributed training approach. Then set up a training job; the job configuration decides which algorithm, instance count, and settings to use. Finally, optimize the job for distributed training.
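A sketch of what that training-job setup can look like with the SageMaker Python SDK's PyTorch estimator and its distributed data parallel option; the role ARN, S3 path, script name, and version strings are assumptions for illustration:

```python
from sagemaker.pytorch import PyTorch

# Hypothetical execution role; in practice this comes from your AWS account.
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

estimator = PyTorch(
    entry_point="train.py",           # your training script
    role=role,
    instance_count=2,                 # scale horizontally across instances
    instance_type="ml.p3.16xlarge",   # a GPU type supported by SMDataParallel
    framework_version="1.13",
    py_version="py39",
    # SageMaker's built-in distributed data parallelism.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

# Channel name -> S3 location of the prepared training data.
estimator.fit({"training": "s3://my-bucket/training-data/"})
```

SageMaker provisions the instances, wires up the inter-node communication, runs the script on every node, and tears the cluster down when the job finishes.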