
As a Build and Release Engineer, strong knowledge of setting up build automation and version control mechanisms to support multiple parallel development streams.
Building and deploying applications by adopting DevOps practices such as Continuous Integration (CI) and Continuous Deployment (CD) with tools like Git, Jenkins, Terraform, Docker, and Kubernetes, and managing cloud services on AWS and Azure.
Experienced in implementing organization-wide DevOps strategy across Linux and Windows server environments, along with cloud strategies on Amazon Web Services and Microsoft Azure.
Expertise in Continuous Integration and pipeline jobs using Jenkins, as well as deployment automation and Infrastructure as Code using Terraform.
Handled build and deployment processes by automating CI/CD pipelines for various Java-based applications.
Extensively worked with version control systems like Git; also handled administration activities in Subversion.
Experienced in implementing Continuous Integration and Continuous Delivery using tools like Jenkins, AWS CodePipeline, and Azure DevOps with Kubernetes clusters.
Automated deployment processes by writing shell scripts and YAML.
Containerized applications with Docker and orchestrated them using Kubernetes clusters.
Created quality gates in the SonarQube dashboard and enforced them in the pipeline to fail the build when the quality gate fails.
Experienced in AWS services such as EC2, Templates, AMIs, Volumes, Snapshots, Security Groups, Elastic IPs, Auto Scaling Groups, VPC, S3, IAM, Load Balancers, and EKS.
Experienced with Azure IaaS services such as Virtual Networks, Virtual Machines, Resource Groups, Key Vaults, Subscriptions, Private Endpoints, VPN, Application Gateways, AKS, and Azure Backup.
Major focus on configuration, build/release management, Infrastructure as Code (IaC), and cloud DevOps operations on EKS clusters.
Executed weekly release cycles; managed software source code, change control, configuration management, and build and deployment activities; set up build and release mechanisms for new product lines.
Worked closely with Development and QA teams to maintain and enhance staging and production environments to meet uptime, performance, and security goals.
Sr. DevOps Engineer — Annalect India
DevOps Cloud Engineer — Bourntec Solutions Pvt. Ltd.
Software Engineer — Capgemini Technologies
DevOps Engineer — Innovative Minds
DXC Technologies
Jenkins
Docker
Kubernetes
Azure
AWS
Terraform
CloudWatch
AppDynamics
SonarQube
Hi. My name is …. I have a total of six-plus years of experience in containers and automation: scheduling containers using Kubernetes, troubleshooting issues, and networking concepts such as connecting two pods across different networks using Kubernetes network policies. I have also worked with Kubernetes components, handling the controller manager for scheduling the pods. I have experience creating Kubernetes clusters on both AKS and EKS, provisioning the infrastructure as code using Terraform scripts. Using Terraform, I have created IaaS platforms, upgraded infrastructure, deployed services into new environments, and maintained the entire environment, using conditional expressions and Terraform state files, including locking the state file so that only a single user can modify it at a time and other users cannot access it. These are my overall experience and responsibilities. Thank you.
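The state-locking setup described above can be sketched with a remote backend block. This is a minimal example assuming an Azure storage backend; the resource group, storage account, and container names are placeholders:

```hcl
terraform {
  # Remote backend: the state file is stored centrally, and a lock (lease)
  # is taken on it during writes, so only a single user can modify the
  # state at a time; other users are blocked until the lock is released.
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"          # hypothetical names
    storage_account_name = "sttfstate001"
    container_name       = "tfstate"
    key                  = "aks/terraform.tfstate"
  }
}
```

With an S3 backend on AWS, the same locking behavior would typically use a DynamoDB table instead of a blob lease.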
To integrate Kubernetes as infrastructure as code using Terraform: first we create a main.tf file, and we also manage the Terraform state files. We declare variables and then reference those variables inside the main.tf file. In locals.tf we define local values, and in the provider block we configure the provider along with the state backend. We keep the Terraform scripts in a parent directory and call them as modules; in the configuration we also pin the Terraform and provider versions to use for the account. In this way, whatever Kubernetes cluster we have created with the script can be managed through Terraform. Then, for access, we configure RBAC (role-based access control): we grant users permissions on a particular AKS cluster by assigning Azure RBAC roles, such as read/write access, execution access, or contributor access for the particular service, along with the subscription-level permissions, which we provide through IAM for that particular role.
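The pieces named above — variables, version pinning, a parent-directory module, and an Azure RBAC role assignment — can be sketched like this. The module path, cluster name, and user object ID are hypothetical:

```hcl
# variables.tf — declared once, referenced from main.tf as var.*
variable "cluster_name" {
  type    = string
  default = "demo-aks"            # hypothetical cluster name
}

variable "user_object_id" {
  type = string                   # Azure AD object ID of the user to authorize
}

# main.tf — pin Terraform and provider versions, call the module
terraform {
  required_version = ">= 1.5"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

module "aks" {
  source       = "../modules/aks" # module kept in the parent directory (hypothetical path)
  cluster_name = var.cluster_name
}

# Azure RBAC: grant the user contributor-level access scoped to the cluster
resource "azurerm_role_assignment" "aks_contributor" {
  scope                = module.aks.cluster_id
  role_definition_name = "Azure Kubernetes Service Contributor Role"
  principal_id         = var.user_object_id
}
```

The `module.aks.cluster_id` output is assumed to be exposed by the hypothetical module.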
For security, we can go for a network policy with the Kubernetes cluster, which works something like a firewall. In the policy we mention the pod selectors and IP addresses, and to which port each pod can be connected, using the network plugin we have configured; there is the Calico network and also the Flannel network. Pod-to-pod communication between different nodes is handled inside the overlay network (the CNI), for example Flannel, so that a pod on one network can communicate with a pod on another, and we can control which services an application can hit from outside the network. The overlay network acts like a virtual private cloud: it takes the CIDR block that has been provided and distributes it across the different subnets the pods run in, so pod-to-pod communication works across networks. On top of that, we can apply RBAC as well: whatever access we have granted different users to a particular application, we can restrict or audit that access through the particular role.
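A minimal NetworkPolicy of the kind described above might look like this; the namespace, labels, and port are hypothetical. It allows only pods labelled `app: frontend` to reach the backend pods on port 8080, and denies all other ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo                 # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: backend                # policy applies to the backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend       # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

Note that NetworkPolicy objects are only enforced when the cluster's CNI plugin (e.g., Calico) supports them.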
One second. OK. For pods, we have different kinds of workloads, like stateless Deployments and StatefulSets, and we configure liveness probes and readiness probes. While creating a Deployment, the pods get created, and the control-plane components on the master, like the controller manager, control the pods: on which node a pod has to be deployed, what state it is in, and whether the pod is ready or not. For readiness, we mention an initial delay in seconds; only after that many seconds does the probe start checking, so the condition decides whether the pod is considered started and ready to receive traffic. For liveness, we mention a condition so that if the container stops being live, it is restarted immediately. We can also use sidecar containers: as the application generates logs, they are written to a shared volume at the path provided, and the sidecar reads and displays them using whatever script we have mentioned.
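The readiness/liveness configuration described above can be sketched in a pod spec; the image, paths, and timings are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: nginx:1.25             # placeholder image
      readinessProbe:
        httpGet:
          path: /                   # hypothetical health endpoint
          port: 80
        initialDelaySeconds: 10     # probe starts only after this delay
        periodSeconds: 5
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 15
        failureThreshold: 3         # restart the container after 3 consecutive failures
```

A failed readiness probe removes the pod from service endpoints without restarting it; a failed liveness probe restarts the container.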
After the pod is running, we can check the logs that have been generated, for the last day or for a particular time window, using those helpers — what you call init containers and sidecar containers — which we mention inside the pod spec. So these are the different pod types and lifecycle pieces that we integrate with Kubernetes.
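A sketch of the log-tailing sidecar pattern mentioned above, with hypothetical names throughout — the app writes to a shared `emptyDir` volume and the sidecar tails it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}                  # shared scratch volume for log files
  containers:
    - name: app
      image: busybox:1.36
      # placeholder app: appends a timestamp to the log every 5 seconds
      command: ["sh", "-c", "while true; do date >> /var/log/app.log; sleep 5; done"]
      volumeMounts:
        - name: logs
          mountPath: /var/log
    - name: log-tailer              # sidecar that surfaces the logs
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /var/log/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log
```

The logs then appear via `kubectl logs web-with-sidecar -c log-tailer`.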
I have not worked much on Tanzu Kubernetes; mostly on EKS. But I have an idea about it: with Tanzu, the deployment uses a host-based configuration, headless with respect to the Kubernetes cluster. That means we do not connect with an IP address directly; we connect using the host name. Whatever IP address has been assigned is not displayed; only the backend is connected to it. The load balancer we attach to the service is integrated with that IP address on the backend, but it is not exposed. Only the host configuration — the URL path we have mentioned — is displayed to the client or user, and using that URL the user is able to log in to the application. That is how Tanzu Kubernetes is used to deploy into an AKS cluster.
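A headless service in the sense described above can be sketched as follows; the names are hypothetical. Setting `clusterIP: None` means no virtual IP is allocated, and clients resolve the backing pods through the service's DNS host name instead:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless               # hypothetical service name
spec:
  clusterIP: None                 # headless: DNS returns pod IPs directly
  selector:
    app: db
  ports:
    - port: 3306
```

Clients then address pods by host name, e.g. `db-0.db-headless.<namespace>.svc.cluster.local`, rather than by a cluster IP.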
In a CI/CD pipeline, the first stage is to get the source code from the repository, from a particular branch. The second stage is to build the code: the code has a Dockerfile, so we build using the Dockerfile, and an image gets created. The next stage is to push that image into a container registry. A container registry has private and public repositories, so we have to grant permissions to pull the image from a particular private repository into our environment. After giving those permissions comes the deploy stage, where we deploy into the Kubernetes cluster. We provide access to the cluster using the kubeconfig file; using that config file location we connect, and we write the manifest file for the deployment. In deployment.yml, the image has to reference the repository name, and from there it gets deployed. We also create a service, like Ingress, LoadBalancer, or ClusterIP: ClusterIP is for connecting internally only, while to expose our website we can use an Ingress with a load balancer and host-path configuration. Then we apply those manifest files, creating them and running the deployment activity. This entire CI/CD pipeline setup takes us from a Dockerfile to the Kubernetes cluster: when the client hits the URL through the Ingress load balancer host, the application becomes accessible to the particular user. These are the stages of a deployment to a Kubernetes cluster.
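The stages above can be sketched as a Jenkins declarative pipeline. The repository URL, registry, image name, credentials ID, and manifest path are all placeholders:

```groovy
pipeline {
    agent any
    environment {
        IMAGE = "registry.example.com/demo/app:${env.BUILD_NUMBER}"  // hypothetical registry/image
    }
    stages {
        stage('Checkout') {
            // stage 1: get the source code from a particular branch
            steps { git branch: 'main', url: 'https://github.com/example/app.git' }
        }
        stage('Build') {
            // stage 2: build the image from the Dockerfile in the repo
            steps { sh 'docker build -t $IMAGE .' }
        }
        stage('Push') {
            // stage 3: push to the private container registry
            steps {
                withCredentials([usernamePassword(credentialsId: 'registry-creds',
                        usernameVariable: 'USER', passwordVariable: 'PASS')]) {
                    sh 'docker login -u $USER -p $PASS registry.example.com'
                    sh 'docker push $IMAGE'
                }
            }
        }
        stage('Deploy') {
            // stage 4: apply the manifests; assumes a kubeconfig on the agent
            steps { sh 'kubectl apply -f k8s/deployment.yml' }
        }
    }
}
```

Registry authentication for the cluster itself is typically handled separately with an `imagePullSecrets` entry in the pod spec.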
Using a Helm chart, suppose we have to deploy to different environments at the same time — say, a prod environment; we can do that with a Helm chart, and we can also customize the values file for the deployments or the database. Say we want to deploy a three-tier application: front end, back end, and middleware, all at the same time. To tie those together, we create Helm charts with dependencies: a parent chart and child charts. So the three charts — for example, the front-end application, the back-end application, and the middleware application — can be deployed into production, dev, and QA in a single shot using the parent chart. The dependency files are Chart.yaml and values.yaml. In Chart.yaml we declare the applications deployed inside that particular chart as dependencies. In values.yaml we customize whatever the requirements are, including the database. We can also use Secrets for the database dependencies that the application server connects to, ConfigMaps for the different environments, and persistent volume claims, all declared as per the product requirement. This is how we use dependencies in Helm.
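The parent/child dependency layout described above might look like this in the parent chart's Chart.yaml; the chart names and paths are hypothetical:

```yaml
# Chart.yaml of a hypothetical parent chart declaring the three tiers
apiVersion: v2
name: shop-parent
version: 0.1.0
dependencies:
  - name: frontend
    version: 0.1.0
    repository: file://../frontend     # child charts referenced by local path
  - name: backend
    version: 0.1.0
    repository: file://../backend
  - name: middleware
    version: 0.1.0
    repository: file://../middleware
```

A single `helm install shop ./shop-parent -f values-prod.yaml` then deploys all three tiers in one shot, with per-environment overrides supplied through the values file.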
Blue-green deployment is one of the deployment strategies. Whenever a new tag has to be deployed, a parallel set of pods is brought up with the new changes while all the existing pods keep serving with the old version. Until those changes have been verified, we keep the existing configuration running — along with the HPA autoscaling behind the load balancer — so there is no downtime and no pods have to be reconfigured. The load balancer decides whether traffic goes to the old version or the new one, depending on which environment it is pointed at, so we can control the cutover. With blue-green deployment, there is no downtime in this strategy.
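A minimal sketch of the cutover mechanism described above, assuming two Deployments running side by side with labels `track: blue` (old tag) and `track: green` (new tag) — all names are hypothetical. The cutover is just re-pointing the Service selector:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: myapp
    track: blue        # change to "green" to shift all traffic to the new version
  ports:
    - port: 80
      targetPort: 8080
```

Because both versions are fully running before the selector changes, the switch (and any rollback) is instantaneous, with no downtime.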
For StatefulSets: we can use StatefulSets in Kubernetes to run pods for a MySQL database. If you want to create a MySQL database, you create a StatefulSet for it, because when the pods are created, the master of the database is created first, and the replicas of that master — the slaves — have to start only after the master has started. For every pod in the StatefulSet there is a stable identity and its own volume, so the whole thing is controlled in an ordered way. The master is kept in sync with the slaves of the database, so whenever there is downtime on the master, we can recover the data from a slave as well. That is how we take backups of the database using the master-slave concept with the Kubernetes StatefulSet.
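A minimal StatefulSet sketch for the setup described above — ordered startup (`mysql-0` first, then the replicas) and one PersistentVolumeClaim per pod. The names, secret, and sizes are placeholders, and real MySQL replication would need additional configuration:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql-headless      # hypothetical headless service for stable DNS
  replicas: 3                      # mysql-0 (master) starts before mysql-1, mysql-2
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:       # password comes from a Secret, not plain text
                  name: mysql-secret
                  key: root-password
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:             # each pod gets its own persistent volume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

On scale-down, pods are removed in reverse ordinal order, which preserves the master-first invariant.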
We can use Secrets, which are managed as base64: whatever data we put in is stored in encoded form. From there, we reference a secret key from the secret values, so we can protect that data using a secrets.yaml file. Suppose we have configured environment variables in the pod's deployment.yml; by integrating the secret, the backend gets connected with the database of the particular application. For more security, if we are on Azure, we can use Key Vault: we store the values in Key Vault and call them into the secret values, and we can use ConfigMaps as well. From the ConfigMaps and Secrets, we call that data inside the deployment file. So while deploying, if we store sensitive database values — like the database name, username, and password — inside Key Vault and reference them through ConfigMaps and Secrets into the deployment's pod spec, then other users cannot see them; if we check the logs during deployment, the username and password are not displayed. This is how we secure sensitive data inside the Kubernetes cluster using Secrets and ConfigMaps.
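The base64-encoded Secret and the `secretKeyRef` wiring described above can be sketched like this; the names and values are hypothetical (the data fields are base64 of "appdb" and "admin"):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  db-name: YXBwZGI=               # base64("appdb")
  db-user: YWRtaW4=               # base64("admin")
---
# Referencing the secret from a container's environment
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
    - name: app
      image: busybox:1.36          # placeholder image
      env:
        - name: DB_NAME
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: db-name
```

Note that base64 is encoding, not encryption; for stronger protection the values can live in an external vault (such as Azure Key Vault) and be synced into the Secret.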
On the login architecture: I have used the kubeconfig file we configured, along with the secrets and key values. Using that, we log in to the Kubernetes cluster. In the .kube/config file we have the key, the authorization token, and the CA certificates, and using those we can authenticate to the Kubernetes cluster.
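The kubeconfig pieces named above — cluster CA certificate plus client key/certificate for the user — fit together roughly like this; every name here is a placeholder and the `<...>` fields stand for base64-encoded material:

```yaml
# Skeleton of a ~/.kube/config file
apiVersion: v1
kind: Config
clusters:
  - name: demo-cluster                         # hypothetical names throughout
    cluster:
      server: https://10.0.0.1:6443
      certificate-authority-data: <base64 CA cert>    # verifies the API server
users:
  - name: demo-user
    user:
      client-certificate-data: <base64 client cert>   # authenticates the user
      client-key-data: <base64 client key>
contexts:
  - name: demo
    context:
      cluster: demo-cluster
      user: demo-user
current-context: demo
```

Token-based authentication would replace the client certificate fields with a `token:` entry under the user.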