
8+ years of professional experience in IT. DevOps Engineer with strong hands-on experience in containerization, automation, and Kubernetes orchestration. Well versed with public clouds such as Azure as well as a range of cloud-native tools. Good working experience in Agile environments.
Sr. Software Engineer, Chubb Business Services Ltd.
Consultant, Infosys Ltd.
Systems Engineer, TCS
Integration Engineer/Deployment Engineer, TCS
Docker
Kubernetes
Helm
Git
Jenkins
Shell Scripting
Python
Azure
Linux
Terraform
Could you help me understand more about your background?
Hello, I'm Wasim, from Adoni, Andhra Pradesh. I'm a software professional with 8+ years of experience. I started my career with TCS, then moved on to Infosys, and currently I'm working with Chubb Business Services as a Sr. Software Engineer. I began as a DevOps integration engineer and then moved into a full-time DevOps role, so I have about 5 years working purely as a DevOps engineer. I have good experience creating and managing end-to-end CI/CD pipelines, as well as automating day-to-day tasks with scripting languages such as Bash, Python, and Groovy. I have very good experience with Linux and with tools such as Docker and, especially, Kubernetes. I have created and managed Kubernetes clusters both on-prem and in the cloud, provisioned various cloud-native add-ons on Kubernetes such as logging and monitoring stacks, and managed on-prem clusters with a solid backup-and-restore strategy as well. That's my basic summary.
Can you detail the security measures you would implement in Kubernetes to prevent unauthorized access?
Yeah. To prevent unauthorized access, the main entry point to Kubernetes is the API server, so the first thing is to secure it: keep it as private as possible, make sure it is not freely accessible, and put hardened authentication in front of it. Authentication itself is not part of Kubernetes, so you should integrate an external authenticator to ensure that only authenticated and authorized users can reach the API server. Once the API server is hardened, the second step is role-based access control: don't hand out cluster-admin to everyone, grant only the least access each user needs. If it's a shared cluster used by multiple app teams, give each team access only at the namespace level, which also prevents unauthorized access to other teams' resources. Third, add network policies for the next level of isolation, so one team's workloads cannot interfere with another's. Those are the three basic steps. Beyond them, enforce TLS for the API server and the machines that host it. On the workload side, make sure applications don't run as root containers, avoid unvetted public Docker images in favor of your own, and use image scanners such as Prisma Cloud. Finally, use OPA Gatekeeper policies to secure your applications and keep your policies enforced. That's the kind of stuff you could implement.
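As a minimal sketch of the namespace-scoped RBAC and network-policy steps above (the namespace `team-a`, the role names, and the user are illustrative placeholders, not from any real cluster):

```yaml
# Role: least-privilege access to common workload resources in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-developer
  namespace: team-a
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
# RoleBinding: grant that Role to one user instead of handing out cluster-admin
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-developer-binding
  namespace: team-a
subjects:
- kind: User
  name: dev-user@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-developer
  apiGroup: rbac.authorization.k8s.io
---
# NetworkPolicy: isolate the namespace so other teams' pods cannot reach it
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: team-a
spec:
  podSelector: {}          # applies to every pod in team-a
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector: {}      # only pods from this same namespace
```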
Describe a blue-green deployment strategy using Kubernetes.
Yeah, a blue-green deployment strategy basically means having two versions of your application up and running, and switching between them whenever you wish. One way to do it is to have two Deployments load-balanced by a single Service, and simply alternate the Service's selector so that at any time it routes to only one set of pods, the set belonging to one Deployment. Another way is path-based routing: you have two Deployments and two separate Services sitting behind a reverse proxy such as the NGINX ingress controller, and you implement the blue-green switch through path-based routing rules. That's how it goes.
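A minimal sketch of the first approach, the single-Service selector switch (the `myapp` names and image tags are hypothetical):

```yaml
# Blue Deployment: the currently live version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-blue
spec:
  replicas: 3
  selector:
    matchLabels: {app: myapp, version: blue}
  template:
    metadata:
      labels: {app: myapp, version: blue}
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
---
# Green Deployment: the new version, warmed up but receiving no traffic yet
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-green
spec:
  replicas: 3
  selector:
    matchLabels: {app: myapp, version: green}
  template:
    metadata:
      labels: {app: myapp, version: green}
    spec:
      containers:
      - name: myapp
        image: myapp:2.0
---
# One Service fronts both; flipping version from blue to green cuts traffic over
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector: {app: myapp, version: blue}
  ports:
  - port: 80
    targetPort: 8080
```

The cut-over is then a one-line `kubectl patch` (or re-apply) of the Service selector, and rolling back is just flipping it to blue again.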
How would you implement the Horizontal Pod Autoscaler based on custom metrics?
Kubernetes natively provides the Horizontal Pod Autoscaler, but based on CPU and memory. If you need custom metrics, you need an additional tool such as KEDA, the Kubernetes Event-Driven Autoscaler. With KEDA you can customize the events that trigger autoscaling: the number of hits on the app, the number of messages sitting in a queue, the number of incoming requests to an Azure database, and so on. Another example would be GPU metrics; there again you need a third-party component, and I believe Azure provides its own GPU metrics tooling that takes care of that. So for this case you need a third-party tool, and KEDA is one great open-source option for autoscaling based on custom metrics.
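A sketch of a KEDA ScaledObject scaling on queue depth, one of the custom events mentioned above; the queue name, connection environment variable, and thresholds are illustrative assumptions:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: myapp-scaler
spec:
  scaleTargetRef:
    name: myapp              # the Deployment KEDA will scale
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
  - type: azure-servicebus   # event source: Azure Service Bus queue depth
    metadata:
      queueName: orders
      messageCount: "50"     # target messages per replica
      connectionFromEnv: SERVICEBUS_CONNECTION
```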
What is a Kubernetes operator?
A Kubernetes operator is a component that manages custom resources, meaning the whole life cycle of custom resources. Custom resource definitions, CRDs, went GA in Kubernetes 1.16. When you create a custom resource, say with a kubectl create command, it is the operator that watches that resource through its creation and its entire life cycle. Whenever there is a change event on the custom resource, the operator comes into play and takes the appropriate action. It keeps watching the resource the whole time and makes sure it stays in its defined state: if there's any manual intervention from outside the operator's scope, the operator restores the resource to the state it originally created. cert-manager is a great example of a Kubernetes operator: it creates and manages custom resources such as Certificate and Issuer. Every time you create a Certificate or an Issuer, it is cert-manager that watches, takes control of those custom resources, and manages them. Operators also help you deploy, in a Kubernetes-native way, applications that are not primitive to Kubernetes. That's my answer.
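A minimal CRD sketch to illustrate the pattern; the `Backup` kind and `example.com` group are hypothetical, and the operator would be the controller watching objects of this kind:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Backup
    plural: backups
    singular: backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string   # e.g. a cron expression the operator acts on
```

Once this CRD is applied, `kubectl create` can produce Backup objects, and the operator's reconcile loop keeps each one in its defined state, exactly as described above.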
What are the benefits of using Helm charts in Kubernetes, and how would you manage dependencies in a Helm chart?
Yeah. A Kubernetes deployment is basically a bunch of YAML. If the application is small, that isn't an issue, but when applications are large, managing every resource and filling out every YAML file by hand gets difficult. That's where Helm comes into play. Helm is a deployment-management tool for Kubernetes with rich templating: you can template your Kubernetes definition files and create multiple versions of the same resource just by supplying different sets of values, so the definitions become reusable. It also provides life-cycle management of your deployments: creation, upgrade, rollback, and deletion. Helm even maintains revisions, so you can roll back to whichever version you want. As for dependency management, it is done via Chart.yaml, where you list the dependencies in the dependencies section; it's not difficult at all. Helm is a great tool for managing Kubernetes and everything around it.
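A minimal Chart.yaml sketch showing the dependencies section; the chart name, version range, and repository URL are illustrative:

```yaml
apiVersion: v2
name: myapp
description: Example chart with a managed dependency
version: 1.2.0
dependencies:
- name: postgresql
  version: "12.x.x"
  repository: https://charts.bitnami.com/bitnami
  condition: postgresql.enabled   # toggle the dependency from values.yaml
```

Running `helm dependency update` then pulls the listed charts into the chart's charts/ directory.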
Describe how you would handle persistent storage in a stateless application deployment using Kubernetes.
Yeah. For a stateless application to handle persistent storage, you can use a PersistentVolume and a PersistentVolumeClaim and embed the claim in your deployment.yaml, so that every time a pod gets created it refers back to the same persistent volume and the state is maintained. It's simple when there's a single replica and a single PVC, a one-to-one mapping. But with multiple replicas you may not have the same number of persistent volumes, so in that case you can use a ReadWriteMany type of persistent volume and mount the same volume onto all the pods so the state is shared; it's then up to the application to handle concurrent or duplicate writes. So by using a persistent volume and volume claim you can give a stateless application persistent storage. Also, on the persistent volume side you should use the Retain reclaim policy, so that when the deployment gets scaled down to zero, the persistent volume doesn't get deleted: even with zero replicas, your state is still stored in the volume. That's my answer.
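A sketch of the shared ReadWriteMany PVC approach; the storage class and sizes are assumptions (azurefile is one RWX-capable class on Azure):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes: ["ReadWriteMany"]  # many pods may mount the same volume
  storageClassName: azurefile     # assumption: an RWX-capable storage class
  resources:
    requests:
      storage: 10Gi
---
# Every replica of the Deployment mounts the same claim, so state is shared
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels: {app: myapp}
  template:
    metadata:
      labels: {app: myapp}
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: shared-data
```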
How would you handle disaster recovery and backup strategies for a stateful application running on Kubernetes in Azure?
Yeah, there are multiple options. If Kubernetes is a PaaS offering such as AKS in Azure, your PVCs are basically stored in Azure storage, where you can keep multiple replicas and take storage-account and container backups. That's one way, the Azure way. The Kubernetes-native way is to use a CNCF tool such as Velero, which backs up the state of your applications along with their persistent volumes and volume claims. Velero runs on the cluster alongside the other system applications and takes backups periodically; you can configure it to back up the whole cluster or at the namespace level, and namespace-level backups are generally recommended. The backups are stored in a remote backend independent of Kubernetes, so even if something happens to the Kubernetes infra, your backups are safe in that remote store. When a disaster happens, you spin up another Kubernetes cluster, and once it is successfully up, you just run a Velero restore from the latest backup, and all your applications come back from the last backup point, including the persistent volumes. I have very good experience setting up production-grade backup and restore: we run an on-prem cluster on bare metal with kubeadm, and last month we had a production outage where the environment went down. All we had to do was create another Kubernetes cluster on different infrastructure and restore, and the restore was completely successful.
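A sketch of the Velero setup described above, using Velero's own custom resources; the schedule, namespace, and TTL values are illustrative, and Velero is assumed to be installed with a remote object-store backend:

```yaml
# Periodic namespace-level backup, stored in the configured remote backend
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: team-a-daily
  namespace: velero
spec:
  schedule: "0 2 * * *"       # cron: every day at 02:00
  template:
    includedNamespaces: ["team-a"]
    ttl: 168h0m0s             # keep each backup for 7 days
---
# After a disaster, on the rebuilt cluster, restore from a chosen backup
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: team-a-recovery
  namespace: velero
spec:
  backupName: team-a-daily-20240101020000   # hypothetical backup name
```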
Describe the advantages of implementing a service mesh in a Kubernetes environment and the considerations for choosing one.
Yeah. A Kubernetes environment hosts a lot of microservices, and peer-to-peer authentication and identification between them is a complex problem in Kubernetes; trusting your peers is a very important part of security. That's where a service mesh comes into play: it gives you peer-to-peer communication that is secured, authenticated, and trusted. That's the first advantage. The second is traffic handling: you can use the mesh to enforce network-level traffic rules, and even for load balancing at the TCP/IP level, L3/L4, so as to restrict traffic from specific ports or specific namespaces. A service mesh provides a very flexible way to manage communication between pods, and it comes with a control plane from which you can manage and visualize everything. Istio is one such example, and I have good hands-on experience with it. That's my answer.
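As a sketch of mesh-enforced peer trust, assuming Istio as the mesh (the answer above does not pin one down), this mesh-wide policy requires mutual TLS for all pod-to-pod traffic:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace, so it applies mesh-wide
spec:
  mtls:
    mode: STRICT            # reject any plaintext pod-to-pod traffic
```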
How do you approach performance testing for deployments in Kubernetes?
I would say it's strongly recommended, because performance testing your applications and deployments gives you a good picture of their resource utilization, both during a normal run and under stress. With that data you can make a sound calculation of all your resources and configure your deployment.yaml with correct requests and limits, which helps the application in the long run and lessens the risk of pods getting OOM-killed or evicted from nodes because of high memory or resource usage. So performance testing definitely helps you plan your resource utilization and cost, as well as plan your Kubernetes cluster as a whole. That's my answer.
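A sketch of the requests and limits that performance testing feeds into; the numbers are illustrative placeholders, to be replaced by what the load tests actually measure:

```yaml
# Container fragment of deployment.yaml, sized from load-test observations
containers:
- name: myapp
  image: myapp:1.0
  resources:
    requests:
      cpu: 250m          # steady-state usage seen under normal load
      memory: 256Mi
    limits:
      cpu: "1"           # peak seen under stress, with some headroom
      memory: 512Mi      # cap to avoid OOM-kill surprises on the node
```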