A multi-cloud-certified cloud expert with 9 years of experience in Google Cloud, AWS, Kubernetes, DevOps, and Terraform. Experienced across verticals and industries, migrating applications and databases from on-premise environments or other clouds (AWS and GCP). Builds CI/CD pipelines and helps clients adopt a DevOps culture. Refactors, replatforms, and rearchitects infrastructure to adopt and leverage cloud capabilities. Leads teams of cloud engineers to deliver on business and client expectations.
IT Tutor, Aspire2 International
Consultant, Atos Syntel
Sr. Consultant, Virtusa Consulting
Quality Engineer, Serco
Cloud Engineer, Searce Cosourcing

Git
AWS
Google Cloud Platform
Kubernetes
Docker
GitHub
GitHub Actions
Jenkins
Ansible
Terraform
Helm
BigQuery
Cloud Composer
Cloud SQL
OpenShift
VPC
Cloud Functions
Jira
Confluence
Bitbucket
ECS
GKE
RBAC
AWS
Azure
Google Cloud
Windows
Ubuntu
CentOS
Debian
Maven
Gradle
Groovy
ELK
Helm Charts
Azure DevOps
Asana
Shell
PowerShell
Python
CloudFormation
Bicep
Go
Hi. I have done my engineering in computer science and have around 9 years of experience in cloud and DevOps. I have worked with very big clients like Google PSO, Al Jazeera Media Network, Lloyds Banking Group, and Health Corporation of America, among many other projects. In my overall tenure I have worked primarily with AWS and Google Cloud, and a bit with Azure. I have worked on projects involving migration of applications from on-premise data centers to the cloud, on migrating databases and a Teradata warehouse to Google Cloud BigQuery, and on creating CI/CD pipelines and implementing DevOps automation for data projects. I have good hands-on experience of around 6+ years in cloud and DevOps, and my overall experience is 9 years. That's a quick introduction from me.
For data we have in AWS, say stored in an S3 bucket, we have the option of enabling encryption at rest using either AWS-managed keys or customer-managed keys. We can create our own KMS keys within AWS, or import our own key material into KMS, and use those keys to encrypt that particular data.
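A minimal boto3 sketch of enabling default encryption at rest on a bucket with a customer-managed KMS key; the bucket name and key ARN are placeholder assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Enable default SSE-KMS encryption on the bucket so every new object
# is encrypted at rest. Bucket name and key ARN are placeholders.
s3.put_bucket_encryption(
    Bucket="my-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
                },
                "BucketKeyEnabled": True,  # reduces per-request KMS cost
            }
        ]
    },
)
```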
What strategy would you use to monitor user activity? We can enable CloudTrail on the AWS account so that we can monitor the logs and see which user has performed which activities on the account.
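As a sketch, a trail can be created with boto3; the trail and bucket names below are placeholders, and the bucket would need an appropriate bucket policy allowing CloudTrail to write to it:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Create a multi-region trail that delivers API-activity logs to S3,
# then start logging. Names are placeholders.
cloudtrail.create_trail(
    Name="account-activity-trail",
    S3BucketName="my-cloudtrail-logs",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="account-activity-trail")
```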
How would you securely manage secrets and sensitive information? We can use something called Secrets Manager. Say, for example, we have a database and we want to store the username and password or any other sensitive credentials; we can keep them in Secrets Manager and read that information through API calls without exposing the actual content.
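A minimal sketch of reading such a secret with boto3; the secret name and the JSON keys inside it are placeholder assumptions:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch database credentials at runtime instead of hard-coding them.
# "prod/db/credentials" is a placeholder secret name.
response = secrets.get_secret_value(SecretId="prod/db/credentials")
credentials = json.loads(response["SecretString"])

username = credentials["username"]
password = credentials["password"]
```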
I haven't worked much with AWS Config, though. But whatever CloudTrail logs are generated can be stored in an S3 bucket, and we can then analyze that log data using the different analytics tools available.
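For illustration, the log files CloudTrail delivers to S3 are gzipped JSON documents with a "Records" array, so they can be read back for ad-hoc analysis; the bucket name, account ID, and prefix below are placeholders:

```python
import gzip
import json
import boto3

s3 = boto3.client("s3")

# Walk the CloudTrail delivery prefix and print who did what.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(
    Bucket="my-cloudtrail-logs",
    Prefix="AWSLogs/123456789012/CloudTrail/",
):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket="my-cloudtrail-logs", Key=obj["Key"])["Body"].read()
        records = json.loads(gzip.decompress(body))["Records"]
        for record in records:
            print(record["eventName"], record.get("userIdentity", {}).get("arn"))
```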
What design would you suggest to build a fault-tolerant connection between an on-premise data center and AWS? When we set up a VPN, we can use a site-to-site VPN, which has two tunnels working in an active/passive setup, ensuring that even if one of the tunnels goes down there is still connectivity to the on-premise servers. And if we are dealing with something like a database server, we can set up a primary/standby arrangement at the application level to ensure it is highly available, so that if any failure event happens we can switch either of the servers to primary.
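A boto3 sketch of wiring up such a site-to-site VPN; AWS provisions the two redundant tunnels automatically per VPN connection. The public IP, ASN, and VPC ID are placeholder assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# Represent the on-premise router (public IP and ASN are placeholders).
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",
    BgpAsn=65000,
)

# Create a virtual private gateway and attach it to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)

# The VPN connection itself comes with two tunnels for fault tolerance.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)
```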
What change would you recommend to ensure that an EC2 instance is not unintentionally terminated? We need to enable the termination protection attribute, which prevents the resource from accidental deletion.
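A one-call boto3 sketch of turning this on; the instance ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# Enable termination protection: API-initiated terminations will fail
# until this attribute is set back to False.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    DisableApiTermination={"Value": True},
)
```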
Given the Lambda function snippet written in Python, can you identify the errors it might throw during execution and explain why? The handler takes an event and a context and uses a boto3 S3 client. We would probably get an error at the get_object call: I'm not sure that the S3 client has the permissions to read the bucket. To call get_object we need certain permissions, so we would need credentials, such as an access key ID and secret access key, or an appropriate IAM role, in order to access that particular bucket and get the data.
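The original snippet is not preserved in the transcript; a hypothetical reconstruction of the kind of handler being discussed, with the likely failure points noted, might look like this (bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Raises botocore.exceptions.ClientError (AccessDenied) if the
    # Lambda execution role lacks s3:GetObject on this bucket, and
    # NoSuchBucket/NoSuchKey if the bucket or object does not exist.
    response = s3.get_object(Bucket="my-bucket", Key="data.json")
    return response["Body"].read().decode("utf-8")
```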
How would you optimize cost when scaling an application using EC2 Auto Scaling and Spot Instances? When you create an Auto Scaling group in EC2, there are certain policies we can set up saying that if the load on the application servers goes above, say, 80% or 90%, it should scale out more VMs, and whenever there is a drop in traffic or load on the application, it should scale the VMs back down to a smaller number, maybe keeping only a minimum of 2 or 3.
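A minimal boto3 sketch of such a policy; the group name is a placeholder, and a target-tracking policy handles both the scale-out and scale-in sides automatically:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU around 80%: the group adds instances when load is
# above the target and removes them (down to the group minimum) when
# load drops. Group name is a placeholder.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 80.0,
    },
)
```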
In which scenario would you choose AWS Fargate over Amazon EC2 for running containers? AWS Fargate is a serverless container service. Let's say I don't want to bother with building the whole cluster of Docker servers to deploy the containers, and I don't want the burden of maintaining or administering a cluster. In that case, I would simply go ahead and use AWS Fargate instead of EC2 instances: we just deploy the Docker image and pass some parameters, like port numbers, to access that particular Docker container.
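A boto3 sketch of what "just the image and some parameters" looks like on Fargate via ECS; the family name, image, and role ARN are placeholder assumptions:

```python
import boto3

ecs = boto3.client("ecs")

# Register a Fargate task: only the image, sizing, and ports are
# specified; there are no EC2 instances to provision or patch.
ecs.register_task_definition(
    family="web-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
)
```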
AWS EKS offers a lot of flexibility for maintaining security. One option is to create a VPC-native EKS cluster so that we can control external access to the cluster and isolate the services it runs. We can also install Istio on the Kubernetes cluster, with its sidecar containers, so that traffic reaches the individual services only through an Istio service such as the ingress gateway.
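A boto3 sketch of the first part, creating a cluster whose API endpoint is reachable only from inside the VPC; the cluster name, role ARN, and subnet IDs are placeholder assumptions:

```python
import boto3

eks = boto3.client("eks")

# Disable the public endpoint and enable the private one so the
# Kubernetes API server is only reachable from within the VPC.
eks.create_cluster(
    name="private-cluster",
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",
    resourcesVpcConfig={
        "subnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
        "endpointPublicAccess": False,
        "endpointPrivateAccess": True,
    },
)
```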