
Pendyala Chaitanya

Vetted Talent
An experienced IT professional who applies strong communication and interpersonal skills to work towards organisational goals.
  • Role

    Senior DevOps Engineer

  • Years of Experience

    6.1 years

Skillsets

  • GitHub
  • New Relic
  • JFrog
  • GitHub Actions
  • Azure Application Insights
  • Azure OpenAI
  • Azure Monitoring
  • AKS
  • Veracode
  • Linux
  • Jenkins
  • DevOps - 4.5 Years
  • Git
  • Checkmarx
  • Bitbucket
  • YAML - 4 Years
  • SonarQube
  • Jira
  • Docker - 3 Years
  • Kubernetes - 3 Years
  • Azure - 2 Years
  • Terraform - 3 Years

Vetted For

10 Skills

  • Role: Azure DevOps Engineer (Remote) (AI Screening)
  • Result: 50%
  • Skills assessed: GitLab CI, CI/CD Pipelines, PowerShell, Terraform, AWS, Azure, Azure DevOps, Jenkins, JSON, Python
  • Score: 45/90

Professional Summary

6.1 Years
  • Sep, 2024 - Present (1 yr 6 months)

    Senior DevOps Engineer

    Qualitest Group Of Technologies
  • Aug, 2022 - Sep, 2024 (2 yr 1 month)

    DevOps Engineer

    Hexaware Technologies
  • Feb, 2020 - May, 2022 (2 yr 3 months)

    Associate Engineer

    Atos

Applications & Tools Known

  • Azure DevOps
  • Azure
  • YAML
  • Terraform
  • Docker
  • Kubernetes
  • SonarQube
  • Linux
  • Jira
  • Veracode
  • X-ray

Work History

6.1 Years

Senior DevOps Engineer

Qualitest Group Of Technologies
Sep, 2024 - Present (1 yr 6 months)
    Design and manage CI/CD pipelines to automate build, test, and deployment processes. Containerize applications using Docker and orchestrate them with Kubernetes. Collaborate with development teams to streamline release management processes. Work in depth on the containerization of database products. Automate the build and deployment process using Azure DevOps.

DevOps Engineer

Hexaware Technologies
Aug, 2022 - Sep, 2024 (2 yr 1 month)
    Build management experience with tools such as Azure DevOps (ADO) and Jenkins. Implemented and managed CI/CD pipelines using ADO and Jenkins. Configured multi-stage YAML pipelines in Azure DevOps for scalable release management. Managed on-prem and Azure Kubernetes Service clusters for DB pipelines. Implemented Infrastructure as Code using Terraform in Azure environments. Automated Docker image builds and pushes to Azure Container Registry (ACR). Created and maintained Android and iOS pipelines. Orchestrated containerization using Docker. Integrated SonarQube for code quality and Veracode for security vulnerability scanning.

Associate Engineer

Atos
Feb, 2020 - May, 2022 (2 yr 3 months)
    Deep knowledge of CI and CD methodologies. Designed and managed CI/CD pipelines using Azure DevOps for automated build, test, and deployment workflows. Created Azure resources using Terraform. Improved deployment reliability and reduced downtime using automated validation gates. Maintained version control and branching strategies using Git. Created Docker images and integrated them into pipelines to automate building, testing, and deploying. Managed build and release pipelines for different applications. Maintained CI/CD pipelines to automate deployment of apps to AKS. Automated build and deployment tasks using Azure DevOps. Configured Azure Virtual Networks, Subnets, and Virtual Machines using Terraform. Worked on continuous integration builds and deployments across Dev, QA, and Production environments. Implemented CI/CD using YAML scripting. Configured and managed Windows Virtual Machines.

Achievements

  • First place at the 6th International Conference on Emerging Research in Computing Information (07/2018).

Major Projects

6 Projects

Ascott-Capitaland Migration

Feb, 2024 - Present (2 yr 1 month)
    Created and maintained Android and iOS pipelines. Integrated SonarQube for code quality and Veracode for security vulnerability. Orchestrated containerization using Docker.

AXA GO_France (CI-CD Automation)

Aug, 2023 - Feb, 2024 (6 months)
    Created CI-CD pipelines for Python applications and integrated SonarQube implementation.

Wolters Kluwer (DNA_Montana Project Migration)

Aug, 2022 - Jul, 2023 (11 months)
    Managed build and release pipelines for different applications. Created Docker images for different applications and automated build, testing, and deployment.

American Express

May, 2021 - May, 2022 (1 yr)
    Automated build and deployment tasks using Azure DevOps. Worked on automated continuous integration builds and deployments on web app. Configured Azure Virtual Networks, Subnets, and Virtual Machines using Terraform.

Statestreet

Jun, 2020 - Apr, 2021 (10 months)
    Worked on continuous integration builds and deployments across different environments. CI/CD using Groovy scripting. Configured and managed virtual machines using Windows.

My-Atos-Syntel

Apr, 2020 - Jun, 2020 (2 months)
    Worked on creating a dashboard using JavaScript. Developed services to populate data and acknowledgements.

Education

  • Master Of Computer Applications

    Vignan's Institute of Information Technology (2019)
  • Bachelor of Computer Science

    M.V.N Degree College (2016)

AI Interview Questions & Answers

I am currently working at Hexaware Technologies, and I have around four years of overall experience in the IT industry. For the past four years I have been working on Microsoft Azure DevOps, maintaining CI/CD pipelines and creating resources. My cloud experience is entirely on Microsoft Azure. I also have knowledge of Terraform for creating infrastructure, and of Docker for creating images, running containers, and deploying to Azure Kubernetes Service clusters. My day-to-day project activities are maintaining CI/CD pipelines, creating new pipelines for applications, and building, deploying, and managing those pipelines for different projects and clients.

For conditional operators, we can define the operators in the YAML file for different deployment workflows, so that we can deploy into different environments such as prod, UAT, and development. Take a service connection as an example: we first check the condition, and if the service connection is already set, we go ahead and continue with the connection; otherwise the condition fails. Another example: suppose we have pre-deployment and post-deployment gates; only if the condition is satisfied can we deploy the pipeline. That is how we can use conditional operators in deployment workflows.
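
A minimal sketch of how such conditions can be expressed in an Azure DevOps multi-stage YAML pipeline (the stage names and the deployToProd variable are illustrative, not taken from the original project):

```yaml
# azure-pipelines.yml: each deployment stage is gated by a condition.
trigger:
  - main

stages:
  - stage: Build
    jobs:
      - job: BuildJob
        steps:
          - script: echo "Building the application"

  - stage: DeployUAT
    dependsOn: Build
    # Run only when Build succeeded and the source branch is main
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - job: DeployUATJob
        steps:
          - script: echo "Deploying to UAT"

  - stage: DeployProd
    dependsOn: DeployUAT
    # Gate production on an explicit variable set at queue time
    condition: and(succeeded(), eq(variables['deployToProd'], 'true'))
    jobs:
      - job: DeployProdJob
        steps:
          - script: echo "Deploying to production"
```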

Python we can use for the automated deployment of other resources, for example to automate provisioning tasks.
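
A minimal sketch of what that automation could look like (the subscription ID and resource group name are placeholders; this assumes the azure-identity and azure-mgmt-resource packages):

```python
# Create a resource group programmatically via the Azure SDK for Python.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, subscription_id)

# Create (or update) a resource group in one call
rg = client.resource_groups.create_or_update(
    "demo-rg",                 # hypothetical resource group name
    {"location": "eastus"},
)
print(f"Provisioned resource group: {rg.name}")
```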

Idempotency means that when we are writing a PowerShell or Python script for different users, taking Azure as an example, the script should run without introducing redundancy: if the resource already matches the current configuration, the script should not try to create it again, otherwise we may face errors.
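
A minimal sketch of that check-before-create pattern, using the same assumed SDK packages and placeholder names as above, so re-running the script is safe:

```python
# Idempotent create: only create the resource group if it is absent.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

name = "demo-rg"  # hypothetical name
if not client.resource_groups.check_existence(name):
    client.resource_groups.create_or_update(name, {"location": "eastus"})
    print(f"Created {name}")
else:
    print(f"{name} already exists; nothing to do")
```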

For the Terraform approach, the question is about programmatically upgrading Terraform modules without causing downtime. Terraform modules are used for code reusability: once we have created a module, we can use it across different environments, like dev or whatever environments we are using in our project. With modules it is easy to deploy the same kind of environments and the same kind of resources. To avoid downtime, we add the Terraform tasks to the CI/CD pipelines, and through the CI/CD pipelines we deploy the Terraform code that creates the Azure resources.
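
A minimal sketch, assuming a hypothetical local ./modules/network module, of reusing one module across environments, plus a create_before_destroy lifecycle so a replacement resource is built before the old one is destroyed during an upgrade:

```hcl
# Root configuration: one module reused per environment, so a module
# upgrade is an explicit, reviewable change rolled out env by env.
module "network_dev" {
  source              = "./modules/network"   # hypothetical local module
  env                 = "dev"
  location            = "eastus"
  resource_group_name = "rg-network-dev"
  address_space       = ["10.0.0.0/16"]
}

module "network_prod" {
  source              = "./modules/network"
  env                 = "prod"
  location            = "eastus"
  resource_group_name = "rg-network-prod"
  address_space       = ["10.1.0.0/16"]
}

# Inside ./modules/network (variables declared there):
# create_before_destroy builds the replacement before destroying the
# old resource, which helps avoid downtime on upgrades.
resource "azurerm_public_ip" "this" {
  name                = "pip-${var.env}"
  location            = var.location
  resource_group_name = var.resource_group_name
  allocation_method   = "Static"

  lifecycle {
    create_before_destroy = true
  }
}
```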

In Terraform, how would you safely manage secrets required for cloud resource deployment? For secrets, we can store the keys in a storage account, and for managing secrets we can also use Azure Key Vault. When writing Terraform code, secrets should not be hard-coded: we should use variables and a separate .tfvars file. That way nothing is hard-coded anywhere; we write the plain resource code for whatever resources we need, reference the values through variables, and keep the values in the variables file and the .tfvars file. Coming to the secrets themselves, we can manage them in Azure Key Vault as well as in a storage account; a storage account gives us blobs, queues, and tables, so data in the form of tables or queues can be stored there. The storage account is also where we store the Terraform state file: to manage the Terraform state file we use a storage account.
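
A minimal sketch of those three pieces together (the resource group, storage account, and Key Vault names are illustrative): remote state in a storage account, a sensitive variable supplied via terraform.tfvars, and a secret read from Azure Key Vault instead of being hard-coded.

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatedemo"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}

variable "db_admin_password" {
  type      = string
  sensitive = true   # value comes from terraform.tfvars, not the code
}

# Read an existing secret from Key Vault rather than embedding it
data "azurerm_key_vault" "main" {
  name                = "demo-kv"
  resource_group_name = "security-rg"
}

data "azurerm_key_vault_secret" "db_password" {
  name         = "db-admin-password"
  key_vault_id = data.azurerm_key_vault.main.id
}
```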

Explain why an Azure RM policy assignment might fail. In the snippet given, I think we have a spelling issue in the resource creating the azurerm policy assignment, where we give the name and the scope (azurerm_subscription.primary.id). The policy assignment might fail because we are not giving the proper resource name: when creating a resource there is a particular naming convention, and in the Terraform registry the resource exists with a correctly spelled name. I think "azurerm" is not spelled correctly in the snippet for the policy assignment, and that is why the name is not recognized here.
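
Since the original snippet is not shown, here is a minimal sketch of a correctly named assignment; in recent azurerm provider versions the scope-specific resource azurerm_subscription_policy_assignment is used, and the policy rule below is illustrative:

```hcl
data "azurerm_subscription" "primary" {}

# A custom definition the assignment can point at
resource "azurerm_policy_definition" "example" {
  name         = "only-allowed-locations"
  policy_type  = "Custom"
  mode         = "All"
  display_name = "Only allowed locations"
  policy_rule = jsonencode({
    "if"   = { "field" = "location", "notIn" = ["eastus"] }
    "then" = { "effect" = "deny" }
  })
}

# Assign the definition at subscription scope
resource "azurerm_subscription_policy_assignment" "example" {
  name                 = "only-allowed-locations"
  subscription_id      = data.azurerm_subscription.primary.id
  policy_definition_id = azurerm_policy_definition.example.id
}
```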

Looking at this CLI command, we have az group create, where the name is given as resource-group along with the location. But the task asks to create a resource group named resource-world-group, and that name is not given in the command, so it will fail: the resource group will be created with the name resource-group instead. If we are using a variable for the name, we can define one, but here no variable is referenced. So resource-world-group is not created, because the name mentioned is only resource-group.
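
A minimal sketch of the corrected command (group name and location are illustrative): the intended name must be passed explicitly, or through a variable that is actually referenced.

```bash
# Define the intended name once, then reference it in the command
RG_NAME="resource-world-group"

az group create --name "$RG_NAME" --location eastus
```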

Design a Python script to interact with the Terraform CLI for the purpose of automating custom infrastructure reports. Frankly, I don't have that much experience with Python scripting to interact with the Terraform CLI.
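
For reference, a minimal sketch of what such a script could look like, assuming terraform is on PATH and terraform init has been run in the working directory; the field names follow the documented terraform show -json output:

```python
# Read the current Terraform state as JSON and print a small
# per-resource-type count report.
import json
import subprocess
from collections import Counter

result = subprocess.run(
    ["terraform", "show", "-json"],
    capture_output=True, text=True, check=True,
)
state = json.loads(result.stdout)

# Resources of the root module live under values.root_module.resources
resources = (
    state.get("values", {})
         .get("root_module", {})
         .get("resources", [])
)
counts = Counter(r["type"] for r in resources)

for rtype, n in sorted(counts.items()):
    print(f"{rtype}: {n}")
```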

Okay, the approach we follow for sensitive information in Terraform scripts: a Terraform project has different files. First we have main.tf, where the entire code lives, and we should not hard-code secrets there. We keep that sensitive information in the variables file and the .tfvars file. For the most sensitive cases, like passwords and similar values, we can store them in Azure Key Vault or in a storage account.

How would you optimize your Terraform project structure for a multi-cloud environment? Terraform supports multi-cloud environments: it can support Azure as well as AWS and GCP, and each has its own documentation, with specific arguments for Azure and for AWS. To optimize for both, we should have modules written for both clouds and only call in the values: we can create a separate file for the values, and the overall module structure can serve both Azure and AWS. Whichever script we create should be usable for both Azure and AWS, with the values supplied while deploying the resources.
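
A minimal sketch of such a multi-cloud layout (the module paths, region, and names are illustrative): one root configuration wiring per-cloud modules, each isolated behind its own provider.

```hcl
terraform {
  required_providers {
    azurerm = { source = "hashicorp/azurerm" }
    aws     = { source = "hashicorp/aws" }
  }
}

provider "azurerm" {
  features {}
}

provider "aws" {
  region = var.aws_region
}

variable "aws_region" {
  type    = string
  default = "us-east-1"
}

# Per-cloud modules keep provider-specific arguments isolated;
# the root module only passes environment values.
module "azure_network" {
  source   = "./modules/azure/network"   # hypothetical path
  env      = "prod"
  location = "eastus"
}

module "aws_network" {
  source = "./modules/aws/network"       # hypothetical path
  env    = "prod"
  region = var.aws_region
}
```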