
6 years of hands-on experience designing, implementing, and maintaining secure, scalable AWS infrastructure; leveraged Terraform to streamline resource provisioning and maintain automated infrastructure deployments; experienced in migrating applications to the cloud, containerizing applications, and implementing CI/CD pipelines
Over 24 years of IT experience spanning cloud infrastructure design, architecture, and automation; project management; and development of .NET-based web applications. Experienced in managing clients across industries and geographies
Excellent interpersonal, analytical, and negotiation skills, with the ability to relate to people at any level of a business
AWS & Terraform Consultant, Incepta Solutions Inc.
Cloud and DevOps Consultant, NEC Corporation
Consulting AWS Solutions Architect, The Cherry Hill Company
Software Programmer, Allion Technologies Ltd.
Sr. Software Engineer, Illusions Online India Pvt. Ltd.
Technical Project Manager, Capgemini India Pvt. Ltd.
Software Programmer and Ecommerce Faculty, V to U.Com India Pvt. Ltd.
AWS ECS

ECR

EC2

S3

VPC

WAF

API Gateway

Lambda

Beanstalk

EBS

RDS

DynamoDB

CloudFormation

SNS

CloudWatch

AWS CLI

Azure VM

SQL Server

Terraform

Terragrunt

Packer

GitHub Actions

Chef

GitHub

Bitbucket

AWS CodeCommit

Shell Scripting

PowerShell

Linux

Ubuntu

CentOS

Windows Server 2016

Docker

Kubernetes
Vikas is highly skilled technically, while also being an excellent communicator - a rare combination. He is also very diligent, flexible, hard working and proactive, whether working on a team or on his own. In short, he's a pleasure to work with.
Vikas Arora worked in my team for around 4 years at Rave/NEC Software Solutions. He is a talented AWS/DevOps architect with extensive experience, which is always a value-add. He has been instrumental in designing the architecture and in resolving operational issues. He is a mature individual, always keen to learn and to support team members whenever required.
Hi, my name is Vikas Arora. I have over 6 years of experience in cloud computing, including AWS, some experience with Azure, Terraform, and containerization using Docker and Kubernetes. Overall, I have 24 years of IT experience, including 14 years at Capgemini in the capacity of technical lead and technical manager. That's my brief.
What method would you implement to automate the backup of ETLs?
For example, it could be serverless as well, such as Lambda or API Gateway. We can then use the failover mechanism of Route 53 to switch to another region, but that happens automatically only in an active-active setup. In an active-passive setup, we have to bring up the passive region manually and then change Route 53 to point to the other region.
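The active-passive behaviour described above can be illustrated with a toy Python sketch. This is not a real Route 53 API call; the region endpoints and the health flag are invented for the example. It just shows the routing decision that a Route 53 failover record automates: serve the primary region while it is healthy, and fall back to the secondary when it is not.

```python
# Toy illustration of active-passive failover routing.
# In practice, Route 53 health checks and a failover routing
# policy make this decision; the endpoints here are hypothetical.

PRIMARY = "https://app.us-east-1.example.com"    # hypothetical primary-region endpoint
SECONDARY = "https://app.us-west-2.example.com"  # hypothetical secondary-region endpoint

def resolve(primary_healthy: bool) -> str:
    """Return the endpoint traffic should be sent to, given primary health."""
    return PRIMARY if primary_healthy else SECONDARY
```

With active-active, both endpoints would serve traffic at all times; with active-passive, the secondary only receives traffic after the health check fails.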
The role of the load balancer in a Kubernetes cluster is to route traffic across all the pods. There can be multiple pods behind it, and the load balancer points to all of them. With auto-scaling, as more pods come up, the load balancer starts routing traffic to them; on scale-in, as the number of pods reduces and pods are destroyed, the load balancer stops routing to them.
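The scale-out/scale-in behaviour described above can be sketched with a toy model. This is not the real Kubernetes API; the pod names and labels are invented. It shows the key mechanism: a Service-style load balancer selects its backend pods by label, so pods that appear or disappear during auto-scaling are picked up or dropped automatically.

```python
# Toy model of a Kubernetes Service selecting backend pods by label.
# Pod names and labels are hypothetical; a real Service relies on the
# API server's endpoint controller to track matching pods.

def backend_pods(pods: list[dict], selector: dict) -> list[str]:
    """Return names of pods whose labels match every key/value in selector."""
    return [
        p["name"]
        for p in pods
        if all(p["labels"].get(k) == v for k, v in selector.items())
    ]

pods = [
    {"name": "web-1", "labels": {"app": "web"}},
    {"name": "web-2", "labels": {"app": "web"}},
    {"name": "db-1", "labels": {"app": "db"}},
]
# Scale out: a new pod with a matching label joins the backend set
# without any change to the load balancer's configuration.
pods.append({"name": "web-3", "labels": {"app": "web"}})
```

Removing a pod from the list (scale-in) likewise drops it from the backend set on the next selection.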
Can you suggest a strategy to migrate on-premises applications? Assuming these on-premises applications are not microservice-based or containerized, we first have to come up with a strategy to modularize the application into microservices. After dividing the application into microservices, we write the required Dockerfiles for each of them. Then, either through CI/CD or manually, we build Docker images for these microservices and push them to ECR, assuming it's an AWS solution. From there, they can be deployed on ECS or EKS, depending on the size of the application and the amount of traffic expected on the website.
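The build-and-push-to-ECR step described above can be sketched as follows. The account ID, region, repository name, and tag are placeholders, and this helper only assembles the standard docker/ECR CLI commands as strings rather than running them against a real registry.

```python
# Assemble the standard docker build/tag/push commands for pushing a
# microservice image to ECR. Account, region, repo, and tag values are
# placeholders for illustration only.

def ecr_push_commands(account: str, region: str, repo: str, tag: str) -> list[str]:
    registry = f"{account}.dkr.ecr.{region}.amazonaws.com"
    image = f"{registry}/{repo}:{tag}"
    return [
        # Authenticate the local docker client against the ECR registry.
        f"aws ecr get-login-password --region {region} "
        f"| docker login --username AWS --password-stdin {registry}",
        # Build the image from the microservice's Dockerfile.
        f"docker build -t {repo}:{tag} .",
        # Tag it with the full registry path and push.
        f"docker tag {repo}:{tag} {image}",
        f"docker push {image}",
    ]
```

In a CI/CD pipeline these same commands would typically run as a job step, one repository per microservice.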
There are a couple of ways I am aware of: one is to use HashiCorp Vault to secure the sensitive data; the other is to use environment variables.
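A minimal sketch of the environment-variable approach mentioned above; the variable name `DB_PASSWORD` is an example, not from the source. The secret is injected into the process environment at deploy time (for example by the CI/CD system or the container runtime) and never hard-coded in the source.

```python
import os

def get_secret(name: str) -> str:
    """Read a secret from the environment; fail loudly if it is missing.

    The caller supplies the variable name, e.g. "DB_PASSWORD" (example name).
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} not set in environment")
    return value
```

Failing loudly on a missing variable is a deliberate choice: a silent empty default tends to surface much later as an opaque authentication error.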
The only issue I see in this particular Dockerfile is the command RUN mkdir /code, which may give an error. Rather than using /code, we would just use code, so it creates a directory named code, and similarly we would then use && cd code. The rest looks okay to me.
you need a security code set for instance in instances this cancels the CPU it's just a mistake not too bad let's go and change the code to set for instance and let's install this it will do the rest of the instructions it could probably be related to multiple instances and how this could be how this list will be
I have no idea about this question.
Though we can use any CI/CD tool with Kubernetes, I have personally used GitHub Actions to create a CI/CD pipeline for Kubernetes; among the popular ones, Argo CD is also there.
Sorry, I have no idea about this question.