
Senior Engineer, Azure DevOps (Qualitest Group of Technologies)
DevOps Engineer (Hexaware Technologies)
DevOps Engineer (Atos)

Skills: Azure DevOps, Azure, YAML, Terraform, Docker, Kubernetes, SonarQube, Linux, Jira, Veracode, X-ray
So, about myself: I'm currently working at Hexaware Technologies, and I have around 4 years of overall experience in the IT industry. For the past 4 years I have been working on Microsoft Azure DevOps, maintaining the CI/CD pipelines and creating resources. Coming to the cloud, my experience is entirely on Microsoft Azure. I have knowledge of Terraform for creating infrastructure, and of Docker for building images, running containers, and deploying to the Azure Kubernetes cluster. My day-to-day activities on projects are maintaining the CI/CD pipelines, creating new pipelines for applications, and building, deploying, and managing those pipelines for different projects and different clients.
For conditional operators, we can define the conditions in the YAML file for different deployment workflows, for example to deploy into different environments like prod, UAT, and dev. Let's take an example: say you want to use a service connection. First you check the condition: if the service connection is already set, then go ahead and continue with the connection; otherwise the condition fails and that step does not run. Let's take another example: suppose we have pre-deployment and post-deployment gates. Only if the condition is satisfied are we able to deploy through the pipeline. So we can use conditional operators in that way for deployment workflows.
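As a minimal sketch of what that looks like in an Azure Pipelines YAML file (the stage names and branch are illustrative, not from a real project):

```yaml
# Hypothetical pipeline: the deploy stage runs only when the build
# succeeded and the run is from the main branch.
stages:
  - stage: Build
    jobs:
      - job: BuildJob
        steps:
          - script: echo "Building..."

  - stage: DeployProd
    dependsOn: Build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - job: DeployJob
        steps:
          - script: echo "Deploying to prod..."
```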
Python we can use for the automated deployment of resources, to automate tasks around them.
So what idempotency means: when we are writing a PowerShell or Python script, say for example creating users in Azure, we need to check whether the resource already exists, so that re-running the script does not introduce redundancy. An idempotent script that runs again does not change the current configuration; otherwise we can face errors.
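A minimal sketch of an idempotent operation in Python (the in-memory dict here just stands in for a real user store, so the names are illustrative):

```python
# Idempotent "ensure" operation: creating the same user twice
# leaves the system in the same state and raises no error.
users = {}  # stand-in for a real user store (e.g. a cloud directory)

def ensure_user(name, role):
    """Create the user only if it does not already exist."""
    if name in users:
        # Already present: do nothing, return the existing record.
        return users[name]
    users[name] = {"name": name, "role": role}
    return users[name]

first = ensure_user("alice", "reader")
second = ensure_user("alice", "reader")  # re-run: no duplicate, no error
```

Running `ensure_user` any number of times with the same arguments leaves exactly one record, which is the property the answer describes.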
For the Terraform approach, programmatically upgrading Terraform modules without causing downtime: Terraform modules we use for code reusability. If we have created a module, we can use that module in different environments, like dev or whatever environments we are using in our project. It will be easy to deploy the same kind of environments, the same kind of resources. And without causing downtime means we can add the Terraform tasks to the CI/CD pipelines, and through the CI/CD pipelines we can deploy the Terraform code for creating the Azure resources.
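As a sketch of that reuse (the module path, variable names, and VM sizes are made up for illustration), one module called twice with environment-specific inputs:

```hcl
# Hypothetical reusable module: same code, different inputs per environment.
module "app_dev" {
  source      = "./modules/app"
  environment = "dev"
  vm_size     = "Standard_B2s"
}

module "app_prod" {
  source      = "./modules/app"
  environment = "prod"
  vm_size     = "Standard_D4s_v3"
}
```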
So, in Terraform, how would you safely manage secrets required for cloud resource deployment? For secrets, we can store the keys in a storage account, and for managing secrets we can also use Azure Key Vault. When we are writing the Terraform code, secrets should not be hard-coded; we should use variables and separate .tfvars files. So we don't need to hard-code anything: we simply write the plain resource code for whatever resources we need, reference the values through variables, and keep the values in the variables file and the .tfvars file. Coming to the secrets, we can manage them in Azure Key Vault as well as in the storage account. In the storage account we have blobs, queues, and tables, so data in the form of tables or queues we can store there. Actually, in the storage account we also store the Terraform state file: to manage the Terraform state file we use the storage account.
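A minimal sketch of the two options described (the vault, resource group, and secret names are illustrative): a sensitive input variable, and reading an existing secret from Key Vault with the azurerm provider:

```hcl
# Sensitive input variable: value supplied via a .tfvars file,
# never hard-coded in main.tf.
variable "db_password" {
  type      = string
  sensitive = true
}

# Alternatively, read an existing secret from Azure Key Vault.
data "azurerm_key_vault" "main" {
  name                = "example-kv"   # illustrative names
  resource_group_name = "example-rg"
}

data "azurerm_key_vault_secret" "db_password" {
  name         = "db-password"
  key_vault_id = data.azurerm_key_vault.main.id
}
```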
Okay, explain why an Azure RM policy assignment might fail. In the snippet given, I think we have a spelling issue in the resource type when creating the azurerm policy assignment resource, where we are giving the name and the scope as azurerm_subscription.primary.id. The policy assignment might fail because we are not giving the proper resource name: while creating a resource, there is a particular naming convention for that, and by default in the Terraform registry the resource is defined with the correctly spelled name. I think "azurerm" is not correctly spelled here in the policy assignment resource type; that registered name is not what is called here.
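The original snippet isn't shown here, so as an assumption about what was intended, a correctly spelled subscription-scope policy assignment in the azurerm provider (v3 naming) would look roughly like this, with the definition ID supplied from elsewhere rather than invented:

```hcl
# Resource type spelled exactly as registered in the Terraform registry.
data "azurerm_subscription" "primary" {}

resource "azurerm_subscription_policy_assignment" "example" {
  name                 = "example-policy-assignment"            # illustrative
  subscription_id      = data.azurerm_subscription.primary.id
  policy_definition_id = var.policy_definition_id               # supplied elsewhere
}
```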
Okay, looking at this, you're asked to create a resource group and explain the syntax error. Looking at this CLI command, we have az group create, where the name is given as "resource-group" and the location is given. But the question asks to create a resource group named "resource-world-group". The name "resource-world-group" is not given in the above command, so it will fail to do what was asked: the resource group will be created with the name "resource-group" instead. If we were using a variable, we could define a variable for the name, but here I think no variable is referenced. So "resource-world-group" is not created, because the name mentioned is only "resource-group".
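As an assumption about the intended command (the location here is illustrative), the fix would be to pass the requested name explicitly:

```shell
# Creates the resource group with the name the question asked for.
az group create --name resource-world-group --location eastus
```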
Design a Python script to interact with the Terraform CLI for the purpose of automating custom infrastructure reports. Frankly, I don't have that much Python experience with scripting to interact with the Terraform CLI for that kind of script.
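Since the question went unanswered, here is a minimal sketch of one possible approach (the report shape is an assumption; the state layout follows the documented `terraform show -json` output format): drive the Terraform CLI with `subprocess` and summarize the parsed state into a per-type resource count.

```python
import json
import subprocess

def read_state_json(workdir="."):
    """Run `terraform show -json` and return the parsed state.

    Requires the terraform binary on PATH and an initialized workdir.
    """
    out = subprocess.run(
        ["terraform", "show", "-json"],
        cwd=workdir, capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def summarize_state(state):
    """Count managed resources by type from a parsed state document."""
    counts = {}
    root = state.get("values", {}).get("root_module", {})
    for res in root.get("resources", []):
        counts[res["type"]] = counts.get(res["type"], 0) + 1
    return counts

# Example with a hand-written state fragment (no terraform binary needed):
sample = {"values": {"root_module": {"resources": [
    {"type": "azurerm_resource_group", "name": "rg1"},
    {"type": "azurerm_storage_account", "name": "sa1"},
    {"type": "azurerm_storage_account", "name": "sa2"},
]}}}
report = summarize_state(sample)
```

Keeping the parsing separate from the CLI call means the report logic can be tested without cloud access.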
Okay, so the approach we follow for sensitive information in Terraform scripts: for a Terraform script we have three different files. First we have the main.tf file, where the entire resource code will be, and we should not hard-code secrets there. We can reference the sensitive information through the variables file and the .tfvars file. And for the most sensitive cases, like passwords and such, we can store them in Azure Key Vault or in the storage account.
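A sketch of that three-file split (the resource, names, and login are illustrative):

```hcl
# variables.tf: declare the input, marked sensitive.
variable "admin_password" {
  type      = string
  sensitive = true
}

# terraform.tfvars (kept out of version control): supplies the value.
# admin_password = "..."   # the value lives here, not in main.tf

# main.tf: reference the variable; no secret is hard-coded.
resource "azurerm_mssql_server" "example" {
  name                         = "example-sqlserver"
  resource_group_name          = "example-rg"
  location                     = "eastus"
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = var.admin_password
}
```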
How would you optimize your Terraform project structure for a multi-cloud environment? Yeah, Terraform supports multi-cloud environments: it can support Azure as well as AWS and GCP. Accordingly, we have provider documentation for each: for Azure we have certain instructions and arguments, and for AWS we have others. So to optimize for both, we should have modules written for both clouds, while only calling in the values. We can create a separate file for that, and an overall module structure can be created and used for both Azure and AWS. So whatever code we create, it should be usable for both Azure and AWS, and the values should be supplied while deploying the resources.
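One possible layout under those assumptions (all directory and file names here are illustrative):

```
# Hypothetical multi-cloud layout: shared variables, per-cloud modules.
project/
├── modules/
│   ├── network-azure/     # azurerm resources
│   └── network-aws/       # aws resources
├── envs/
│   ├── dev/
│   │   ├── main.tf        # calls both modules with dev values
│   │   └── dev.tfvars
│   └── prod/
│       ├── main.tf
│       └── prod.tfvars
└── variables.tf           # shared input definitions
```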