Senior Cloud Architect, ATOS Global IT Solutions and Services Private Limited
Senior DevOps Architect, Persistent Systems Ltd
Cloud Ops Architect, Accenture Solutions Ltd
VMware Administrator, CMS IT Services Ltd
Senior VMware Administrator, Wystek Systems Ltd
Cloud Architect, Wipro Technologies Ltd
Wintel Administrator, HCL Infosystems Ltd
Remote Support Engineer, Capgemini Ltd
Ansible
PowerShell
Eclipse
Selenium
Python
vSAN
VPC
EC2
IAM
S3
AWS EKS
Terraform
DHCP
DNS
Infoblox
vSphere Replication
Jira
Confluence
Jenkins
CircleCI
VMware
Nutanix
Prometheus
Grafana
Terraform
ARM
Azure Kubernetes Service
Azure DevOps
ArgoCD
AWS EC2
AWS S3
AWS RDS
AWS Lambda
CloudFormation
CodeDeploy
Abhishek Sawant is a highly skilled, goal-oriented, and collaborative IT professional with over 11 years
of expertise in cloud architecture, solution design and build, virtualization, cloud automation, cloud design,
systems engineering, consulting, IT operations, disaster recovery, system administration, and infrastructure
support. Dedicated to excellence and service, he continually pursues challenges to learn more and enhance the client
experience, and brings excellent communication, problem-solving, and decision-making skills.
Migrating servers from VMware infrastructure to AWS
Deployment of stacks using CloudFormation templates
Currently, I'm working as a Cloud Architect at Atos Global IT Services, where I lead an engineering team from India. We do automation on both private and public cloud: for private cloud automation we use Ansible, and for public cloud automation on AWS and Azure we use Terraform as infrastructure as code. We also use Bicep for Azure and CloudFormation for AWS. I'm part of the infrastructure-as-code practice, so I handle infrastructure deployments across different customer environments: I gather requirements from the customer, and based on those requirements we develop the code, design the architecture, and deploy it to the customer's environment.
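As a minimal sketch of the CloudFormation side of this workflow (the resource and stack names here are illustrative placeholders, not from an actual customer deployment), a template like the following deploys a single versioned S3 bucket as a stack:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example stack - a single versioned S3 bucket.

Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled

Outputs:
  BucketName:
    Description: Name of the created bucket
    Value: !Ref ExampleBucket
```

Such a template would typically be deployed with `aws cloudformation deploy --template-file stack.yaml --stack-name example-stack`.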
How will you build an AWS Lambda function to process and respond to API Gateway requests? First of all, you have to create the AWS Lambda function, then configure API Gateway to invoke that function, and then deploy the API Gateway stage. Only then will API Gateway be able to route requests to the Lambda function and return its response.
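A minimal sketch of such a handler, assuming the common API Gateway proxy integration (the `name` query parameter and greeting message are illustrative assumptions): API Gateway passes the HTTP request as the `event` dict and expects a dict with `statusCode`, `headers`, and `body` in return.

```python
import json

def lambda_handler(event, context):
    # API Gateway (proxy integration) delivers the HTTP request as `event`;
    # queryStringParameters is None when the request has no query string.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Once the function and the API Gateway integration are deployed, a GET request with `?name=Abhishek` would return the JSON greeting.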
How would you implement a monitoring solution using AWS CloudWatch once the application is implemented? If you have EC2 instances, or any resources created through your CloudFormation template or landing zone, you can integrate them with Amazon CloudWatch. You can also instrument the API Gateway and application servers with CloudWatch, and monitor the whole environment by using CloudWatch as a service.
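One way to sketch this from Python: build the alarm definition as a plain dict and pass it to a boto3 CloudWatch client's `put_metric_alarm` call. The instance ID, alarm name, and 80% threshold below are placeholder assumptions.

```python
def cpu_alarm_params(instance_id: str, threshold: float = 80.0) -> dict:
    """Build keyword arguments for cloudwatch.put_metric_alarm() that
    alert when an EC2 instance's average CPU exceeds `threshold`."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,           # evaluate 5-minute averages
        "EvaluationPeriods": 2,  # alarm after two consecutive breaches
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }

# In a real AWS environment this would be submitted with:
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(**cpu_alarm_params("i-0abc123"))
```

Keeping the parameters in a helper like this makes the alarm definition reusable across the instances created by a CloudFormation stack or landing zone.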
What approach will you take to handle state persistence in Lambda? In a stateless environment like AWS Lambda, the process typically involves an external solution such as a database, for example Amazon DynamoDB, Amazon RDS, or Amazon Aurora. Lambda functions are designed to be stateless, so they don't retain memory between invocations; any data that needs to persist between invocations should be stored externally.
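A minimal sketch of the pattern, with an in-memory stand-in for the external store (in production this would be a DynamoDB table or an RDS database; the `user_id` counter is an illustrative example): each invocation loads state from the store, updates it, and writes it back, so the function itself stays stateless.

```python
class InMemoryStore:
    """Stand-in for an external store such as a DynamoDB table."""
    def __init__(self):
        self._items = {}

    def get(self, key):
        return self._items.get(key, 0)

    def put(self, key, value):
        self._items[key] = value

def counter_handler(event, context, store):
    # The handler keeps no state of its own between invocations:
    # it reads from the external store, updates, and writes back.
    user = event["user_id"]
    count = store.get(user) + 1
    store.put(user, count)
    return {"user_id": user, "invocations": count}
```

With a real DynamoDB table, `get`/`put` would map onto the table's `get_item`/`put_item` operations.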
In a Python-based cloud automation task, how will you ensure your code adheres to the SOLID principles? I don't have much expertise in Python, so this is a difficult question for me, but at a high level: the single-responsibility principle means each class has a single responsibility, for example a Rectangle class that only calculates its area and perimeter. The open/closed principle means the code is open for extension but closed for modification: you add new shapes by creating new classes that inherit from a Shape base class, rather than modifying existing code. There is also the interface-segregation principle, which is what I recall right now.
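The two principles mentioned above can be sketched together (the Rectangle/Circle classes are the standard textbook illustration, not from any particular project):

```python
from abc import ABC, abstractmethod
import math

class Shape(ABC):
    """Open/closed principle: new shapes extend this base class
    instead of modifying existing code."""
    @abstractmethod
    def area(self) -> float: ...

class Rectangle(Shape):
    """Single-responsibility principle: this class only knows
    its own geometry (area and perimeter)."""
    def __init__(self, width: float, height: float):
        self.width, self.height = width, height

    def area(self) -> float:
        return self.width * self.height

    def perimeter(self) -> float:
        return 2 * (self.width + self.height)

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:
        return math.pi * self.radius ** 2

def total_area(shapes):
    # Works unchanged for any future Shape subclass (open for
    # extension, closed for modification).
    return sum(s.area() for s in shapes)
```

Adding a Triangle later means writing one new subclass; `total_area` and the existing shapes are never touched.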
What methods would you use to manage Kubernetes secrets in a CI/CD pipeline? There can be multiple methods. You can use a secrets-management tool, such as Docker secrets or a vault, and store the values as secret environment variables in the CI/CD tool's configuration, where they can be encrypted or masked to prevent exposure. If you have encrypted files, you can use a tool like Ansible Vault or the CI tool's built-in encryption mechanism and decrypt the data during pipeline execution. You can also set access controls: limit access to the secrets by implementing role-based access control, issue temporary credentials with limited permissions for the CI/CD process, and rotate those credentials regularly to enhance security. For audit purposes, you can enable the logging and auditing features of the CI/CD tool.
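As one concrete sketch of the Kubernetes side (the names and the placeholder value below are illustrative): a Secret manifest, with the container consuming it through an environment variable rather than hard-coding the credential. In a CI/CD pipeline the secret value would be injected from a masked pipeline variable, never committed to the repository.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                      # stored base64-encoded by Kubernetes
  DB_PASSWORD: changeme-placeholder
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:1.0
      env:
        - name: DB_PASSWORD     # injected at runtime, not baked into the image
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: DB_PASSWORD
```

Access to the Secret object itself can then be restricted with Kubernetes RBAC, as described above.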
Review the AWS command used to modify an IAM policy; identify what is wrong or missing from the command that may cause issues while executing it. The command is `aws iam put-role-policy` with a role name, a policy name, and a policy document file. I think the file name as mentioned is not written the right way: when you reference a local JSON policy document, you typically need to specify the file path with the correct prefix relative to your local session. For example, if your file is my-policy.json in the current directory, you have to reference it as file://my-policy.json. The way the file is referenced in the given command is not the right way to supply the policy document, so the command will fail to read it.
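A small sketch of that point (the role, policy, and file names here are illustrative, since the original command is not fully shown): the AWS CLI reads a local policy document only when it is passed with the `file://` prefix, and the document itself must be valid JSON.

```python
import json

def put_role_policy_command(role_name: str, policy_name: str, policy_path: str) -> str:
    """Build a corrected `aws iam put-role-policy` invocation.

    The common mistake is passing a bare or malformed path to
    --policy-document; the CLI expects file://<path> for local files.
    """
    return (
        "aws iam put-role-policy"
        f" --role-name {role_name}"
        f" --policy-name {policy_name}"
        f" --policy-document file://{policy_path}"
    )

# The policy document must also be well-formed JSON, e.g.:
policy = json.loads(
    '{"Version": "2012-10-17",'
    ' "Statement": [{"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"}]}'
)
```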
What are the best practices for managing Docker images within AWS cloud infrastructure? For Docker images, you can use Amazon EKS, a managed Kubernetes capability that basically helps you simplify the deployment, management, and scaling of Kubernetes applications. If you want to deploy on EC2 (Elastic Compute Cloud), you can run Docker containers directly on EC2 instances, which gives more control over the infrastructure, but you will need to manage tasks such as provisioning, scaling, and monitoring the EC2 instances yourself. You can also go with AWS Fargate, a serverless compute engine for containers that allows you to run Docker containers without managing the underlying infrastructure: you only specify the CPU and memory requirements for your container, and AWS handles the rest.
Illustrate a workflow for deploying Python-based microservices to AWS. This is a difficult one, but I can describe it like this. First of all, develop the microservice. Then containerize it by creating a Dockerfile, installing the necessary dependencies and libraries in the image, and exposing the appropriate port for communication with the microservice. Test the code locally to ensure the microservice is working as expected. Then set up the AWS account, and you need the AWS Command Line Interface for interacting with AWS services. Choose a deployment option, such as AWS Lambda, Amazon ECS, AWS Fargate, or an EC2 instance, and deploy to the AWS environment with that option. Then configure your networking, scaling, and monitoring; create your CI/CD pipelines; implement any security and access controls that are needed; and handle testing and maintenance. In this way, basically, you can deploy Python-based microservices to the AWS environment with different settings.
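The containerization step above might look like the following Dockerfile for a Python microservice (app.py, requirements.txt, and port 8080 are assumptions for illustration):

```dockerfile
# Small base image for a Python microservice
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Port the microservice listens on
EXPOSE 8080

CMD ["python", "app.py"]
```

The resulting image would then be pushed to a registry such as Amazon ECR before being deployed via ECS, Fargate, or EKS.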
Demonstrate your knowledge of container security for deployments on AWS. There are different types of security you can set up: image security, runtime security and isolation, network security, access control, secrets management, logging and monitoring, patch management, and security auditing and compliance. These are the security controls you can set up in your AWS environment.
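One concrete example of the runtime-security and isolation items in that list (the pod and image names are placeholders): a Kubernetes securityContext that runs the container as a non-root user with a read-only filesystem and all Linux capabilities dropped.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
    - name: app
      image: example/app:1.0
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```

Image security and patch management would complement this with a scanned, regularly rebuilt base image.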
Craft an example where AWS KMS is used with Python to manage encryption keys for cloud resources. I have never used AWS KMS from Python, but I think the approach would be: first of all, Python needs to be installed so that it can interact with the KMS service. Then I would configure the credentials and set up Python to interact with KMS. Then I would write the code, making sure that the key IDs or aliases of the KMS keys are stored securely, for example in a secrets tool. In this way, basically, you can use KMS from Python to manage encryption for your cloud resources.
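A hedged sketch of what that code might look like using the boto3 SDK (assumed installed and configured with AWS credentials; the key-reference check is a rough local helper, not an AWS API):

```python
def is_valid_key_ref(key_ref: str) -> bool:
    """Rough local check that a KMS key reference looks like an
    alias (alias/...), a full key ARN, or a bare key ID (UUID form)."""
    return (
        key_ref.startswith("alias/")
        or key_ref.startswith("arn:aws:kms:")
        or len(key_ref) == 36
    )

def encrypt_with_kms(key_ref: str, plaintext: bytes) -> bytes:
    """Encrypt data with an AWS KMS key via boto3."""
    import boto3  # imported lazily so the sketch loads without AWS access
    if not is_valid_key_ref(key_ref):
        raise ValueError(f"unrecognized KMS key reference: {key_ref}")
    kms = boto3.client("kms")
    return kms.encrypt(KeyId=key_ref, Plaintext=plaintext)["CiphertextBlob"]

def decrypt_with_kms(ciphertext: bytes) -> bytes:
    """Decrypt KMS-encrypted data; KMS identifies the key from the blob."""
    import boto3
    return boto3.client("kms").decrypt(CiphertextBlob=ciphertext)["Plaintext"]
```

As noted above, the key ID or alias itself should come from configuration or a secrets tool rather than being hard-coded.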