
Vysakh Venugopal

Vetted Talent

Infrastructure Architect with over a decade of expertise in developing robust cloud solutions seeks to leverage extensive experience in high-availability architectures, strategic cost optimization, and security best practices to drive digital transformation.

  • Role

    AWS Solutions Architect

  • Years of Experience

    11 years

  • Professional Portfolio

    View here

Skillsets

  • Kubernetes - 1 Year
  • CI/CD - 5 Years
  • Azure DevOps - 2.5 Years
  • Git - 6 Years
  • CI/CD Pipelines - 6 Years
  • AWS Lambda - 4 Years
  • API Gateway - 2.5 Years
  • Jenkins - 3 Years
  • GitHub - 3 Years
  • AWS Services - 11 Years
  • Docker - 4 Years
  • Cloud Infrastructure - 11 Years
  • DevOps - 6 Years
  • Windows - 5 Years
  • GCP - 2 Years
  • AWS - 9 Years
  • Security - 8 Years

Vetted For

12 Skills

  • Role: Senior DevOps Engineer (Lead) - Remote (AI Screening)
  • Result: 62%
  • Skills assessed: Jira, Lean-Agile framework, Perl, AWS Cloud, CI/CD, Docker, Java, Jenkins, Kubernetes, Embedded Linux, Python, Ruby
  • Score: 56/90

Professional Summary

11 Years
  • Jun, 2024 - Present (1 yr 6 months)

    Cloud & DevOps Architect

    Freelancing
  • May, 2023 - Jun, 2024 (1 yr 1 month)

    Cloud Solutions Architect

    Tenderd Track
  • Jan, 2022 - May, 2023 (1 yr 4 months)

    Senior Cloud Architect

    DevOpSpace LLP
  • Jul, 2019 - Jan, 2022 (2 yr 6 months)

    Senior Solutions Architect - Presales

    Tata Communications
  • Dec, 2017 - Jul, 2019 (1 yr 7 months)

    Senior Cloud Engineer

    Sycomp
  • Dec, 2016 - Sep, 2017 (9 months)

    Cloud Solutions Consultant - Presales

    Nubelity
  • Apr, 2015 - Nov, 2016 (1 yr 7 months)

    Sr. Cloud Solutions Engineer

    CloudThat Technologies
  • Mar, 2013 - Feb, 2015 (1 yr 11 months)

    Associate System Engineer

    RMESI

Applications & Tools Known

  • Docker
  • Kubernetes
  • Jenkins
  • TeamCity
  • AWS CodePipeline
  • Git
  • Grafana
  • Zabbix
  • Azure Monitor
  • AWS CloudWatch
  • GitLab
  • Microsoft Teams
  • AWS (Amazon Web Services)
  • Azure
  • Azure Active Directory
  • Active Directory
  • Bash
  • PowerShell
  • GitHub
  • Google Cloud Platform

Work History

11 Years

Cloud & DevOps Architect

Freelancing
Jun, 2024 - Present (1 yr 6 months)

    As a DevOps Architect consultant for a real estate organization, designed the CI/CD processes and pipelines using AWS DevOps tools.

    Re-architected the existing infrastructure on GCP for a fintech startup to enhance scalability and security, while creating isolated lower-level environments to establish DevOps best practices, including setting up CI/CD pipelines.

Cloud Solutions Architect

Tenderd Track
May, 2023 - Jun, 2024 (1 yr 1 month)

    Segregated and re-architected the product environments into development, staging, and production.

    Analyzed GCP services, their utilization, and cost, and prepared measures that reduced the monthly bill by around 20%.

    Implemented backup & DR for critical infrastructure components.

    Re-architected the backend services to make use of serverless services from GCP & Azure to increase system efficiency.

    Re-designed and implemented the CI/CD process and pipelines for a better DevOps approach, increasing the productivity of the dev team.

    Implemented Azure Resource Manager & Traffic Manager to route user requests to different origins based on geo-location.

    Incorporated security best practices in GCP, enhancing the overall security posture with IAM, SCC, VPN, WAF, and Cloud Armor, and ensuring compliance with industry standards.

    Implemented MongoDB with sharding on-premises and migrated it from Atlas cloud.

Senior Cloud Architect

DevOpSpace LLP
Jan, 2022 - May, 2023 (1 yr 4 months)

    Worked closely with the pre-sales team to help customers leverage public cloud infrastructure and services by understanding their business, product, and technical challenges.

    Led an efficient team of engineers to build highly available and resilient systems with CI/CD.

    Encouraged the team to incorporate security enhancements at all levels of the infrastructure.

    Migrated on-premises servers to Azure using Azure Migrate and set up DR using Azure Site Recovery for a core banking systems client.

    Completed the migration of 80 TB of data from on-premises NAS storage to AWS S3 storage.

    Assessed the application, its dependencies, and the current deployment on AWS, and re-architected the environments from scratch as the Managed Services Provider team lead.

    Managed and mentored multiple team members, fostering their professional development and ensuring adherence to best practices.

    Orchestrated cross-functional teams to ensure seamless project execution and efficient implementation by reviewing and optimizing the cost and security of cloud solutions.

Senior Solutions Architect - Presales

Tata Communications
Jul, 2019 - Jan, 2022 (2 yr 6 months)

    AWS Certified Architect with MRA certification; worked on RFPs, architected migrations, designed target infrastructure, and planned security enhancements.

    Experienced in preparing BOMs for customers along with TCO calculations; worked on multiple RFPs as requirements were released by clients.

    Skilled in understanding customer pain points, architecting migrations, designing target infrastructure, and explaining solutions to clients.

    Designed Azure infrastructure with services including VNet, VMs, AKS, Traffic Manager, CDN, DBaaS, Azure Blob Storage, user and group privileges, and RBAC setup.

    Worked closely with the security team to identify and mitigate vulnerabilities and ensure compliance with industry standards and security requirements.

Senior Cloud Engineer

Sycomp
Dec, 2017 - Jul, 2019 (1 yr 7 months)

    Designed and managed AWS infrastructure for high-traffic and high-availability applications enabling 98% uptime.

    Collaborated closely with the pre-sales team and customers to tailor solutions to their requirements, performing proofs of concept when necessary.

    Developed effective security solutions leveraging AWS services such as IAM, WAF, KMS, CloudTrail, etc., and actively participated in resolving security incidents.

    Skilled in maintaining infrastructure with Terraform, crafting modular structures for EC2, S3, RDS, and VPC resources, managing state files in S3, and utilizing GitHub for version control.

Cloud Solutions Consultant - Presales

Nubelity
Dec, 2016 - Sep, 2017 (9 months)
    Worked with pre-sales to design and architect cloud infrastructure, managed high-traffic applications on AWS, and implemented cost-effective, secure updates.

Sr. Cloud Solutions Engineer

CloudThat Technologies
Apr, 2015 - Nov, 2016 (1 yr 7 months)

    Worked on the AWS IaaS cloud.

    Conducted research, POC & production implementation of solutions for issues within Windows Server and AWS services.

    Suggested and implemented cost/performance/security updates on AWS, including DR and backup/restore mechanisms.

    Implemented and managed Azure infrastructure services such as VMs, databases, Active Directory, networking, S2S VPN, IAM, etc.

    Managed multiple projects with client-facing roles and led teams with multiple members.

    Completed on-site project 'Tightening Database Security', conducted POC & production implementation, cost/performance/security updates on AWS.

Associate System Engineer

RMESI
Mar, 2013 - Feb, 2015 (1 yr 11 months)
    Maintained Azure virtual machines and ESXi hosts, handled backup and restoration of VMs, and gained experience with Windows Server environments.

Major Projects

2 Projects

CICD Setup Using AWS DevOps Tools

Jul, 2024 - Aug, 2024 (1 month)

    For a real estate organization, designed the CI/CD processes and pipelines using AWS DevOps tools. The client's web application was developed in Node.js and dockerized, and their VCS was AWS CodeCommit, so it was straightforward to use AWS CodeBuild and CodeDeploy for the build and deployment. The pipeline itself was created with AWS CodePipeline: in the build phase the Docker image is built and pushed to ECR (the container registry), and in the deploy phase it is pulled and deployed to an ECS cluster. The variables and secrets are stored in the SSM Parameter Store, and the pipeline is configured to pull those details from the Parameter Store. A sketch of the deploy phase is below.
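A rough sketch of that deploy phase, assuming boto3; the cluster, service, task-definition family, and image URI below are hypothetical placeholders rather than the client's actual names:

```python
import boto3

# Deploy step sketch: the build phase has already pushed a new image to ECR;
# here we register a new task definition revision that points at it and roll
# the ECS service to that revision.
ecs = boto3.client("ecs", region_name="us-east-1")

IMAGE_URI = "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:build-42"

# Copy the currently deployed task definition and swap in the new image tag.
current = ecs.describe_task_definition(taskDefinition="web-app")["taskDefinition"]
container_defs = current["containerDefinitions"]
container_defs[0]["image"] = IMAGE_URI

# register_task_definition accepts only a subset of describe_task_definition's
# output, so copy over just the fields we need.
register_kwargs = {"family": current["family"], "containerDefinitions": container_defs}
for key in ("networkMode", "requiresCompatibilities", "cpu", "memory",
            "executionRoleArn", "taskRoleArn"):
    if current.get(key):
        register_kwargs[key] = current[key]

new_arn = ecs.register_task_definition(**register_kwargs)["taskDefinition"]["taskDefinitionArn"]

# Point the service at the new revision; ECS then performs a rolling deployment.
ecs.update_service(cluster="web-cluster", service="web-app", taskDefinition=new_arn)
```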

Tightening Database Security

Feb, 2016 - Jun, 2016 (4 months)

    Completed an on-site project in Johannesburg, South Africa, tightening database security. The project ran for 3-4 months and involved migrating an Oracle database from on-premises to AWS (RDS) for a banking customer. The team comprised three people: a project manager, an Oracle GoldenGate expert, and myself. My role was that of the AWS expert, and my KRA was to integrate RDS with an AWS CloudHSM appliance and ensure that data at rest (once migrated from on-premises) was encrypted using TDE.

Certifications

  • AWS Certified Architect

  • MRA (Migration Readiness Assessment) Champion

AI Interview Questions & Answers

Give an introduction of yourself.

Hi, I have 11 years of overall experience. I started my career as an Azure and VMware engineer for a UK-based subsidiary firm, and from there I progressed towards the AWS cloud platform, working with multiple cloud customers across the globe as part of the team responsible for the uptime of their applications and infrastructure, predominantly on AWS and Azure. I then moved on to working with many different customers: understanding their requirements, designing cloud solutions on AWS, Azure, and GCP, preparing the commercials, estimates, and TCO calculations, presenting them to the customer team, helping them understand what sort of solution we were proposing and how it would benefit their requirements in the longer term, and helping the business team close the deals. It was essentially a pre-sales solutions architect or cloud architect role, but I was not only part of the overall solution preparation; I also worked with the larger delivery team to ensure the deliverables went out properly. My most recent role was all about redesigning the existing infrastructure on GCP for a B2B SaaS organization that had a production environment but no lower-grade environments. My responsibility was to set up lower-grade environments (development, test, staging) alongside production, with the necessary best practices, the necessary security loopholes closed, and proper scaling, so that there were multiple isolated environments for the same application. I was also part of setting up the CI/CD process from scratch for the organization, since they were not using any CI/CD tools: once the relevant environments were in place, I introduced and created the CI/CD pipelines, ensured the build and deployment processes worked correctly, and made sure the development team within the organization understood the value we were bringing to the table and adhered to the DevOps practices and processes.

Basically, we would have multiple stages here, right from the code check-in to a source control or version control system (a source code management system, which could be CodeCommit, GitHub, GitLab, or anything of that sort). From there we need to create a pipeline structured around the different environments. For lower-grade environments like dev or test, a fully automated code deployment process is probably fine, but when we move to a higher-grade environment like production we definitely need multiple stages with manual approvals before we proceed and deploy anything to that environment. For a Python project deployed on AWS specifically, it depends on where the application runs: if it is deployed on an EC2 instance, we set up the pipeline so that it can SSH into that EC2 instance, do the initial build, and then carry out the deployment over the same SSH connection. In between, whatever code-integrity checks or vulnerability scans are needed for the checked-in code can be added as separate stages within the same pipeline. A sketch of the SSH-based deploy stage is below.
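A minimal sketch of that SSH-based deploy stage, assuming paramiko and key-based access; the host, user, key path, and commands are hypothetical examples rather than a real pipeline's values:

```python
import paramiko

HOST = "ec2-203-0-113-10.compute-1.amazonaws.com"  # placeholder EC2 host

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname=HOST, username="ubuntu", key_filename="/secrets/deploy_key.pem")

# Pull the artifact produced earlier in the pipeline and restart the app service.
commands = [
    "cd /opt/app && git pull --ff-only origin main",
    "cd /opt/app && pip install -r requirements.txt",
    "sudo systemctl restart app.service",
]
for cmd in commands:
    _, stdout, stderr = ssh.exec_command(cmd)
    exit_code = stdout.channel.recv_exit_status()  # block until the command finishes
    if exit_code != 0:
        raise RuntimeError(f"deploy step failed: {cmd}\n{stderr.read().decode()}")

ssh.close()
```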

On optimizing Docker builds for efficiency.

We should make use of multi-stage builds here rather than a single-stage build. We can segregate the Dockerfile into a multi-stage structure, where each stage creates an image that can be used in the subsequent stages. In that case we reduce the overall size of the final image produced from the Dockerfile, and it is not going to hurt performance; introducing a multi-stage Dockerfile structure actually improves the overall performance of the Docker build.

Keeping services running in Kubernetes, considering fluctuations in traffic.

Here we basically have to run the services with multiple replicas. With Kubernetes, say EKS, we have a master/worker structure: the master (control plane) is highly available and managed by AWS itself, whereas our primary job is to make sure there are sufficient worker nodes available. We use the Deployment pattern so that, based on the number of pods for each microservice, those pods run across multiple worker nodes. That way we improve the availability of the application: even if one node fails, there is another worker node that already has the same pod running. We can also do load balancing and route traffic through the ingress, which again can be a load-balancer-based ingress, so that requests are routed to specific nodes, pods, or containers based on path patterns or the actual application routing, according to the load balancer configuration. A minimal sketch is below.
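A minimal sketch with the official Kubernetes Python client, assuming a cluster (e.g. EKS) is already reachable via kubeconfig; the deployment name, image, replica counts, and CPU threshold are illustrative:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

apps = client.AppsV1Api()

# Deployment with several replicas so the scheduler spreads pods across worker nodes.
container = client.V1Container(
    name="orders-api",  # hypothetical microservice
    image="123456789012.dkr.ecr.us-east-1.amazonaws.com/orders-api:1.4.2",
    ports=[client.V1ContainerPort(container_port=8080)],
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="orders-api"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "orders-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders-api"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Horizontal Pod Autoscaler so the replica count follows traffic fluctuations.
autoscaling = client.AutoscalingV1Api()
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="orders-api"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="orders-api"
        ),
        min_replicas=3,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)
```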

When we talk about secrets, we can make use of AWS Secrets Manager, or, if we are using environment variables within the CI/CD pipeline, we could also do it using the SSM Parameter Store, for example. There are different services available, and from them we can pick up the relevant environment variables from the secrets while the pipeline is running. Basically, in an on-demand fashion, when a deployment happens, instead of hard-coding the variables in the code or keeping them in another file where the environment variables are present, we fetch those values from Secrets Manager or Systems Manager Parameter Store and inject them while we actually deploy the Python application. That way we make sure the environment variables are always kept secret and are not accessible to developers or other team members who should not have permission to access them. A sketch of fetching these values at deploy time is below.
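A minimal boto3 sketch of fetching those values at deploy time instead of hard-coding them; the secret ID, parameter name, and environment-variable names are hypothetical:

```python
import json
import os
import boto3

REGION = "us-east-1"
secrets = boto3.client("secretsmanager", region_name=REGION)
ssm = boto3.client("ssm", region_name=REGION)

# JSON secret (e.g. database credentials) from Secrets Manager.
db_secret = json.loads(
    secrets.get_secret_value(SecretId="prod/orders/db")["SecretString"]
)

# Plain config value from SSM Parameter Store (decrypted if it is a SecureString).
api_base_url = ssm.get_parameter(
    Name="/prod/orders/api_base_url", WithDecryption=True
)["Parameter"]["Value"]

# Exported for the deploy step rather than committed to the repository.
os.environ["DB_USER"] = db_secret["username"]
os.environ["DB_PASSWORD"] = db_secret["password"]
os.environ["API_BASE_URL"] = api_base_url
```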

How would you handle rollbacks in Kubernetes for a failed deployment without impacting the current user experience?

Basically, we could go for a blue-green deployment methodology, where we deploy the latest release onto a newly created set of environments identified as green, while blue represents the current version running in the production environment where most of the users are. We can then reroute a smaller amount of traffic to the green deployment to do the initial level of testing and make sure users are not affected by the recent deployment; that is what is called the blue-green model. We always have the ability to route traffic to the green environment: if testing is complete and we are sure the most recent deployment does not affect users in any way, we can easily switch green to blue by routing 100% of the traffic to the new version. That also gives us a well-defined rollback: the new environment carries the latest code, and we temporarily keep the old environment running for a short amount of time, so if we need to roll back we just route the traffic back to the same old application environment. In that case we do not have to disturb the current working environment, and we are also able to test how users behave with the latest application through the blue-green deployment methodology. A sketch of the traffic switch is below.
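A minimal sketch of that traffic switch with the Kubernetes Python client, assuming the blue and green Deployments label their pods with a `track` label and sit behind one stable Service; the names are hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

def route_traffic(color: str, namespace: str = "default") -> None:
    """Point the stable Service at the pods of the 'blue' or 'green' Deployment."""
    patch = {"spec": {"selector": {"app": "orders-api", "track": color}}}
    core.patch_namespaced_service(name="orders-api", namespace=namespace, body=patch)

route_traffic("green")  # cut over to the new release
route_traffic("blue")   # instant rollback if the release misbehaves
```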

Looking at this Dockerfile in terms of best practices, explain how you would optimize it.

I think we could create a multi-stage build here. The first step, FROM python:3.8-slim, is basically the base image on which the entire application is configured, and then the apt-get update and apt-get install commands run to install the required OS packages. Instead of copying the entire directory and then installing requirements.txt in that same stage, we could create an image out of those initial commands, and in the subsequent stage use that already-built image, pushed to the repository, as the base image; on top of it we run the COPY command and install the rest of requirements.txt. In that case we reduce the overall size of the image created at the end, and we also improve the overall build process. So I think we could make use of a multi-stage structure or multi-stage process here.

Not very sure about this, or how we can achieve it.

For managing infrastructure, CloudFormation and Python could be used.

Centralized logging of distributed microservices in Kubernetes, with a focus on traceability.

I would probably make use of something like an ELK stack, where we fetch the logs from all the available microservices, or from all the containers that are running. It gives us a single pane of glass, a single dashboard where we can keep an eye on what is happening within each of the containers for the application, and we can also use it for any kind of audit purpose. That is one option; alternatively, we could ship these logs to CloudWatch Logs, for example, and make that service the dashboard we use to trace back or run any kind of audit. Shipping logs from each of the available container services to CloudWatch Logs and keeping them for a longer duration is also something we can use. A small sketch of trace-friendly structured logging is below.
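A small sketch of trace-friendly structured logging in Python, using only the standard library; the service name and field names are illustrative, and it assumes a log shipper (Fluent Bit, Logstash, or the CloudWatch agent) forwards each container's stdout to the central store:

```python
import json
import logging
import sys
import uuid

# Each service writes one JSON object per line to stdout, carrying a trace_id,
# so the aggregator (ELK or CloudWatch Logs) can follow a request across services.
class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "service": "orders-api",  # hypothetical service name
            "message": record.getMessage(),
            "trace_id": getattr(record, "trace_id", None),
        }
        return json.dumps(payload)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("orders-api")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Reuse an incoming trace id if the caller sent one; otherwise start a new trace.
trace_id = str(uuid.uuid4())
logger.info("order created", extra={"trace_id": trace_id})
```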