Komirisetty Nageswara Rao

Vetted Talent

In my role, I focus on software development and analysis, contributing to projects that enhance business operations and foster technological innovation. My journey in software engineering reflects my dedication to excellence and a passion for driving impactful solutions in the industry.

  • Role

    Senior DevOps Engineer

  • Years of Experience

    5 years

Skillsets

  • AWS
  • Jenkins
  • GitHub
  • Azure
  • Kubernetes
  • Ansible
  • Terraform
  • Docker
  • Grafana
  • Prometheus
  • Debugging
  • DevOps
  • Shell
  • TeamCity

Vetted For

14 Skills

Senior Kubernetes Support Engineer (Remote) - AI Screening

  • Result: 61%
  • Skills assessed: CI/CD Pipelines, Excellent problem-solving skills, Kubernetes architecture, Strong communication skills, Ansible, Azure Kubernetes Service, Grafana, Prometheus, Tanzu, Tanzu Kubernetes Grid, Terraform, Azure, Docker, Kubernetes
  • Score: 55/90

Professional Summary

5 Years
  • Apr, 2022 - Present (3 yr 8 months)

    Senior DevOps Engineer

    DXC Technology
  • Jun, 2019 - Apr, 2022 (2 yr 10 months)

    DevOps Engineer

    NeuAlto Technologies Private Limited

Applications & Tools Known

  • Kubernetes
  • Docker
  • AWS
  • Azure
  • Jenkins
  • Ansible
  • Terraform
  • CloudFormation
  • GitHub Actions
  • Argo CD
  • TeamCity
  • Octopus
  • Azure DevOps
  • Shell script
  • Prometheus
  • Grafana
  • Loki
  • Azure Monitor

Work History

5 Years

Senior DevOps Engineer

DXC Technology
Apr, 2022 - Present (3 yr 8 months)
    Worked on Kubernetes and on CI/CD tools such as Azure DevOps, TeamCity, and Octopus; integrated and automated processes and monitored environments.

DevOps Engineer

NeuAlto Technologies Private Limited
Jun, 2019 - Apr, 2022 (2 yr 10 months)
    Set up automation pipelines, created infrastructure as code on AWS and Azure, managed container orchestration with Docker and Kubernetes, among other responsibilities.

Achievements

  • Completed complex and challenging tasks on time
  • Recovered and restored completely crashed Kubernetes clusters from backups
  • Upgraded clusters from old, challenging Kubernetes versions to the latest versions without affecting existing applications
  • Debugged and resolved complex, challenging issues
  • Received multiple champion and team-collaboration awards over my professional career

Major Projects

5 Projects

Morpheus

Apr, 2022 - Present (3 yr 8 months)
    Involved in Kubernetes management, operations, and upgrades using kOps.

DevCloud

Apr, 2022 - Present (3 yr 8 months)
    Handled AKS architecture, build, and deployment operations using TeamCity and Octopus.

Nirmata

Jun, 2019 - Apr, 2022 (2 yr 10 months)
    Set up automation infrastructure and processes on AWS and Azure, among other contributions.

FoodyHive

Jun, 2019 - Apr, 2022 (2 yr 10 months)
    Contributed to the project's automation and cloud infrastructure.

SmartJoules

Jun, 2019 - Apr, 2022 (2 yr 10 months)
    Involved in creating infrastructure as code and managing container orchestration for the project.

Education

  • Master's (M.Tech)

    Gayatri Vidya Parishad College of Engineering (JNTU-K)

Certifications

  • CKA (Certified Kubernetes Administrator) - 2024

  • AZ-900 - 2022

  • DCA (Docker Certified Associate) - 2019

AI Interview Questions & Answers

I have built up 5 years of experience in DevOps, working across multiple technologies and multiple projects. My main skill set is containerization, especially Kubernetes and Docker, and I also work on multi-cloud environments: AWS, Azure, and Google Cloud. That is a high-level introduction; at present I am working with DXC Technology.

Coming to the CI/CD pipeline, there are basic steps involved. First we need to build the source code, and how we build depends on the programming language and the libraries chosen: .NET, Java, or Python. For example, in my present project we use .NET. We can take any CI/CD tool, Jenkins, TeamCity, GitHub Actions, or anything else; we invoke the dotnet CLI and pass it the inputs, the target we need to build and the solution file name. After building, we pass through the testing phases, unit tests and module tests. Once the unit tests and module tests have passed, the actual containerization starts: we create a Docker image from our source code and upload it to a private container registry such as ACR or JFrog Artifactory, all integrated into the pipeline. After uploading, the actual deployment to Kubernetes starts.

To deploy to Kubernetes we can use the built-in tasks. For example, in Azure DevOps we create a service connection to authorize against the Kubernetes cluster, and with the help of that service connection we configure the deployment, filtering which YAML files and locations we want to deploy. That is the higher-level way. If we are not comfortable with those modules, we can go with a plain script and use kubectl, or, if there are many packages, use Helm; with a Helm chart or direct kubectl commands we can deploy into Kubernetes, against any kind of cluster: kOps, AKS, EKS, or on-premises. Once it is deployed, we go and validate that all our configuration has come through. If the manifest kind is something like a ReplicaSet, we may need to restart the pods for the new changes to take effect; if the kind is a Deployment, we don't need to restart, the changes take effect automatically. That is the general CI/CD deployment including Kubernetes; based on the exact requirements we can change the flow: the number of stages, how many steps and conditions in each stage, and the approvals can all be included in the pipeline.
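
As a rough sketch of the flow just described, and assuming a .NET project with hypothetical image, registry, and manifest names, the stages could look like this in shell; a real pipeline would run these as separate stages in Jenkins, TeamCity, or GitHub Actions:

    #!/usr/bin/env bash
    set -euo pipefail

    # 1. Build the source code (the solution file name is a placeholder)
    dotnet build MyApp.sln --configuration Release

    # 2. Run the testing phases before containerizing
    dotnet test MyApp.sln --configuration Release

    # 3. Containerize and push to a private registry (ACR name is hypothetical)
    docker build -t myregistry.azurecr.io/myapp:1.0.0 .
    docker push myregistry.azurecr.io/myapp:1.0.0

    # 4. Deploy the YAML manifests (kubectl directly, or Helm for larger packages)
    kubectl apply -f k8s/deployment.yaml
    # helm upgrade --install myapp ./chart --set image.tag=1.0.0

    # 5. Validate that the configuration rolled out
    kubectl rollout status deployment/myapp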

To expose a Kubernetes service to the internet we have different options. One is the load balancer; the higher-level option is the ingress controller, for example the NGINX ingress controller or the Istio ingress controller, which takes care of the exposure. The ingress controller comes with an application load balancer, and that load balancer communicates with the cluster's worker nodes, so through it we are able to reach the cluster. In the Ingress definition we specify the routing: when a request for a particular host comes in, which backend service it should be redirected to. That is the higher-level, advanced setup. For individual objects, single services, we don't need to go to the ingress level; we can simply open one load balancer, and if it needs to be reachable from the internet, a public load balancer. There are different service types: ClusterIP, NodePort, and LoadBalancer. If we choose a LoadBalancer service, it gets a public IP and we can easily access our application from outside. If our worker nodes are assigned public IPs, we can also use a NodePort service. But the best recommended way is to go with ingress: it handles hundreds of applications very easily, routing each incoming request to where it needs to go. For an individual application we can go with a LoadBalancer service, or sometimes, for testing purposes, we can port-forward to expose a port temporarily and test access from outside.
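
A minimal sketch of the ingress-based routing described above, assuming an NGINX ingress controller is already installed in the cluster; the host and service names are hypothetical:

    # Route requests for a given host to a backend service via the ingress
    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp-ingress
    spec:
      ingressClassName: nginx
      rules:
      - host: myapp.example.com          # hypothetical host
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp              # hypothetical backend service
                port:
                  number: 80
    EOF

    # For a single service, a public LoadBalancer is the simpler option:
    kubectl expose deployment myapp --type=LoadBalancer --port=80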

Coming to scaling policies, we have different types of scaling, HPA and VPA, at the node level and the pod level. This question is at the pod level: the Horizontal Pod Autoscaler, HPA. If a particular application takes more load at particular times, we don't want to go and adjust the number of replicas by hand each time; better, we create an HPA. We create the HPA based on target metrics such as CPU utilization or memory utilization. Take memory utilization as the example: in normal hours two replicas are enough to balance the complete load, but in peak hours, as more customers come in, we require more resources, more memory and more CPU. In that case I make memory the target metric and set, for example, a memory utilization target of 70 percent. If the load increases beyond the target value, the HPA automatically increases the number of replicas, within the minimum and maximum values we configure. So if I set the minimum to 2 replicas, in normal hours two are enough; in peak hours, when the memory is insufficient, it automatically launches more pods, three, four, five, scaling out to maintain the target value. If there is no load, it decreases again. It is simply a kind of load-based scaling at the pod level with HPA.
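
A minimal sketch of such an HPA, assuming the metrics server is running in the cluster and using hypothetical names plus the example thresholds from above (70 percent memory utilization, 2 to 5 replicas):

    kubectl apply -f - <<'EOF'
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: myapp-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: myapp          # hypothetical deployment
      minReplicas: 2         # enough for normal hours
      maxReplicas: 5         # ceiling for peak hours
      metrics:
      - type: Resource
        resource:
          name: memory
          target:
            type: Utilization
            averageUtilization: 70   # scale out above 70% memory utilization
    EOF

    # Or, for a CPU-based policy, the one-liner:
    # kubectl autoscale deployment myapp --cpu-percent=70 --min=2 --max=5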

Coming to zero downtime: Kubernetes already provides this beautiful feature. When we make changes to an application, it follows a strategy, the rolling update, with percentages controlling how many pods change at a time. For example, say we have an application with three pods. The three pods are not all destroyed at once; migrating simply means deleting a pod here and creating one on another worker node, or in another cluster. Following the update strategy, it first terminates just one pod, and only once that pod has been successfully terminated and recreated does it go to the next pod. So the remaining pods are always available: out of the three, two pods remain available and keep serving requests, so there is zero downtime. This is what the Deployment kind gives us; it is a feature provided by Kubernetes itself, so there is no downtime when following this update strategy.
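
A minimal sketch of those rolling-update settings on a hypothetical three-replica Deployment; maxUnavailable and maxSurge control how many pods may be replaced at a time:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp            # hypothetical name
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1  # at most one pod down at a time; two keep serving
          maxSurge: 1        # one extra pod may be created during the rollout
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: myregistry.azurecr.io/myapp:1.0.0   # placeholder image
    EOF

    # Trigger a zero-downtime rollout by updating the image, then watch it:
    kubectl set image deployment/myapp myapp=myregistry.azurecr.io/myapp:1.0.1
    kubectl rollout status deployment/myapp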

Coming to infrastructure, and mainly Kubernetes infrastructure: first we need to create a number of master nodes. It is always better to go for high availability, so multiple control-plane nodes, and based on the usual rule we choose an odd number of nodes, like three or five. So first we launch the control-plane nodes. What comes next depends on the kind of cluster. If it is AKS, we don't need to worry about all these things: when launching the AKS cluster we set everything at once, the Kubernetes version, the worker-node information, how many worker nodes we require and their sizes, and the CNI (container network interface) plugin to use for the networking; all of this, including the CIDR ranges, can be configured very easily. If we come to on-premises, it is a little more difficult: first we create the master nodes, then use the fundamental tools like kubeadm and kubectl, and with their help we form the cluster. We choose our nodes, some become control-plane nodes and some become workers, then we join them and configure the networking as well, for example the kubenet CNI or the Azure CNI.

This whole process we can do using Terraform. Especially for the clouds, AWS, Azure, and Google Cloud, if we want to provision clusters in their infrastructure we don't need to manually open the portal and select the options in the UI; from a simple Terraform script we can provision all these resources. In Terraform we define things in order: first we create the Kubernetes cluster, and within the cluster resource's properties we mention everything, the network plugin, the CIDR ranges, the node pools. Once the cluster is created, if we want to integrate it with other resources, we add the integration details; for example, to integrate our AKS cluster with Azure Container Registry, we add one more integration resource in Terraform. Everything is infrastructure as code using Terraform, including the Kubernetes setup.
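
As a minimal sketch of provisioning AKS as code, here is a shell wrapper that writes a bare-bones Terraform configuration and applies it. The resource names, node count, and VM size are hypothetical placeholders, and a real setup would also pin the azurerm provider version and wire up the ACR integration mentioned above:

    #!/usr/bin/env bash
    set -euo pipefail

    # Write a minimal AKS definition (azurerm provider; names are placeholders)
    cat > main.tf <<'EOF'
    provider "azurerm" {
      features {}
    }

    resource "azurerm_resource_group" "demo" {
      name     = "demo-rg"
      location = "eastus"
    }

    resource "azurerm_kubernetes_cluster" "demo" {
      name                = "demo-aks"
      location            = azurerm_resource_group.demo.location
      resource_group_name = azurerm_resource_group.demo.name
      dns_prefix          = "demoaks"

      default_node_pool {
        name       = "default"
        node_count = 3                  # worker-node count
        vm_size    = "Standard_DS2_v2"  # worker-node size
      }

      identity {
        type = "SystemAssigned"
      }

      network_profile {
        network_plugin = "azure"        # Azure CNI; "kubenet" is the alternative
      }
    }
    EOF

    terraform init
    terraform plan -out=tfplan
    terraform apply tfplan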

Coming to the pod lifecycle: a pod has a number of lifecycle stages. Before anything there is the Pending stage: the pod first needs to be allocated to a particular worker node based on health checks and conditions, and while it still cannot be assigned to any worker node, that state is the Pending state. Once it is allocated to a worker node there are different stages again; the first process is initialization. Once initialization completes and everything goes smoothly, it comes into the Running state, or if anything fails, it can come into the CrashLoopBackOff state. CrashLoopBackOff has different reasons: maybe the allocated resources are not sufficient, or maybe there are errors in our application. So we have the Pending state, initialization, the CrashLoopBackOff state, and the Running state; if it is a Job, there is one more stage called the Completed state, and finally there is the Terminating state. These are the pod lifecycle states. The lifecycle behaves mainly according to the restart policy: by default the restart policy is Always, so if anything happens the pod restarts automatically, or we can set OnFailure so it restarts the pod only when something fails. We can also control the lifecycle using probes, the liveness probe, the readiness probe, and the startup probe: for example, if we want a particular check to execute only while the pod is running, and to restart the pod when it fails, the liveness probe covers that condition. We can apply these probes in the pod lifecycle and it will act based on them. So mainly these are the lifecycle stages, simply the container creation and termination process.
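
A minimal sketch of attaching the probes mentioned above to a hypothetical pod, plus a couple of commands for inspecting the lifecycle phase:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: probe-demo       # hypothetical pod
    spec:
      restartPolicy: Always  # the default: restart on any container exit
      containers:
      - name: web
        image: nginx:1.25
        startupProbe:        # gate the other probes until startup succeeds
          httpGet: {path: /, port: 80}
          failureThreshold: 30
          periodSeconds: 2
        readinessProbe:      # remove from Service endpoints while failing
          httpGet: {path: /, port: 80}
        livenessProbe:       # restart the container when this keeps failing
          httpGet: {path: /, port: 80}
    EOF

    # Observe the lifecycle: phase moves Pending -> Running (or CrashLoopBackOff)
    kubectl get pod probe-demo -o jsonpath='{.status.phase}'
    kubectl describe pod probe-demo   # shows events, probe failures, restarts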

Coming to StatefulSet applications: we mainly use StatefulSets for database applications, to maintain persistence. Disaster recovery and backup here simply means maintaining the backups. We create persistent volumes; take a database like RavenDB as the example, where all the data is stored on the RavenDB server. If the data is stored on internal space, meaning the node's own disk with no external storage, then if the pod is deleted we lose everything, and if the cluster is affected we likewise lose everything. But if we maintain external persistent volumes, the main thing is the storage class's reclaim policy: we need to set the reclaim policy to Retain. If the reclaim policy is Retain, then even if the cluster completely crashes, the persistent volume is not affected. Through the storage class, in Azure we can take an Azure Disk or Azure Files volume; even if we delete or crash the complete cluster, that volume won't be deleted, because the reclaim policy is Retain. That is one case.

In worst cases even the storage resources may be deleted, so we can enable one more backup option, snapshots: take snapshots of those resources and maintain them in a different region. The geo-replication concept is also suitable here: for a RavenDB StatefulSet we maintain multiple replicas, some in one region and the others in another region, so if one region is affected we are able to recover from the other. So one method is the reclaim policy on persistent volumes, and the other is snapshot backup and restore. That is for a single application; for the cluster itself we also have a very good feature, etcd backup and restore. There are different types of etcd backups, full backups and incremental backups; if we take full and incremental backups of our etcd cluster regularly, then even if the cluster is 100 percent crashed we can retrieve it. The etcd backups cover the Kubernetes objects, while the real data is stored on the persistent volumes, so with the help of the etcd backups plus the database backups, even a total crash can be recovered easily.
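
A minimal sketch of the two mechanisms above: a Retain storage class (Azure Disk CSI, with a hypothetical name) and an etcd snapshot taken with etcdctl, assuming direct access to an on-premises etcd member and its certificates:

    # Storage class whose volumes survive PVC (and cluster) deletion
    kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-retain      # hypothetical name
    provisioner: disk.csi.azure.com
    reclaimPolicy: Retain       # PVs are kept when claims are deleted
    allowVolumeExpansion: true
    EOF

    # etcd backup and restore for the cluster state (certificate paths are
    # the usual kubeadm defaults and may differ in your environment)
    ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key

    ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
      --data-dir=/var/lib/etcd-restored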

Coming to logging architecture: it depends on the kind of Kubernetes cluster we use. If we use any cloud-provided Kubernetes cluster, like EKS or AKS, it is better to go with the logging system the provider supplies. For AKS that is Azure Monitor: we just enable Container Insights, and then we can completely monitor our logs, metrics, and everything. Because it sits outside our cluster, even if our cluster or an application is affected we are still able to check the history of the logs. That is one option. If it is on-premises, we can use our own tools: Prometheus is the best for metrics, providing more metrics than most tools, and we can also configure Loki, which is only for logging, for log monitoring. With the help of Loki and Grafana we can build our own system, and there are more log-monitoring options like the ELK stack. Those are the internal and provider-supplied options; we can also use external services like Dynatrace, which is very good. In the present project we are using Dynatrace, and it provides all metrics in great detail, so that is also a good logging architecture. In previous projects I implemented Prometheus, Grafana, and Loki, which likewise give very detailed information. In Azure, and in AWS as well, we can enable Container Insights and rely directly on the cloud-provided systems. Which one is better depends on the target cluster, where it runs and how it is configured; based on that we choose a combination that covers the logging information completely, node metrics and node logs if any problems are there, at both the cluster level and the application level.

Coming to the debugging side, we can debug easily; on-premises we can also debug the control-plane pods. A key point in Kubernetes is debugging the system applications: in the kube-system namespace there are critical applications like CoreDNS, the DaemonSets, and kube-proxy, and whatever runs in the kube-system namespace belongs to the system-critical applications. With the help of those we can easily debug even very challenging applications in Kubernetes.
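
A minimal sketch of the two paths described: enabling Container Insights on an AKS cluster with the Azure CLI (resource names are placeholders), and the basic kubectl commands for inspecting the kube-system components directly:

    # Cloud-provided path: turn on Azure Monitor Container Insights for AKS
    az aks enable-addons \
      --resource-group demo-rg \
      --name demo-aks \
      --addons monitoring

    # Debugging path: inspect the system-critical components in kube-system
    kubectl get pods -n kube-system
    kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50   # CoreDNS logs
    kubectl describe daemonset kube-proxy -n kube-system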

Coming to sensitive information: Kubernetes already provides a full feature for this, called Secrets. We can put all our sensitive information in the form of a Secret. The data in a Secret is stored in an encoded form, and encryption at rest can also be enabled, so it is kept securely; we can easily integrate Secrets with our applications and access the sensitive information from them, maintaining it very securely. That is the Kubernetes-provided option. Coming to external tools, there are options like Vault and Azure Key Vault: we can maintain our sensitive information in Key Vault and integrate Key Vault with our Kubernetes applications. So that is the external service for keeping sensitive information secret, and the default built-in feature is the Secret object in Kubernetes. Coming to security more broadly, we can also provide image security, Docker image security, by using private registries such as JFrog. Mainly, I would simply say the best options are the Kubernetes-provided Secret and, for external services, Vault or Key Vault. And if we manage this from the CI/CD pipeline, there is also a credentials section where we can store such data, again in an encrypted format.
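
A minimal sketch of the built-in Secret flow described above, with hypothetical names: creating a Secret and exposing it to a pod as environment variables:

    # Create a Secret from literal values (hypothetical credentials)
    kubectl create secret generic db-credentials \
      --from-literal=username=appuser \
      --from-literal=password='S3cureP@ss'

    # Consume it in a pod as environment variables
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-demo
    spec:
      containers:
      - name: app
        image: nginx:1.25
        env:
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef: {name: db-credentials, key: username}
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef: {name: db-credentials, key: password}
    EOF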