Saket Mehta

Vetted Talent
Skilled IT professional with 10+ years of experience in the IT industry, with expertise in AWS cloud infrastructure and AWS DevOps engineering and a strong background in multi-cloud services, automation, and containerization technologies. Experienced in version control, DevOps builds with CI/CD pipelines, and automated deployments and release management using Infrastructure as Code (IaC), Lambda, Terraform, Docker, Kubernetes, and Git, delivering innovative solutions and process improvements to ensure customer satisfaction.
  • Role

    AWS DevOps & Cloud Infra Engineer

  • Years of Experience

    10 years

Skillsets

  • Automation
  • Python - 1 year
  • AWS - 4 years
  • Java - 1 year
  • Vulnerabilities
  • Patching
  • Multi-Cloud
  • IaC
  • AWS Cloud
  • Scripting
  • Containerization
  • Security
  • DevOps - 5 years
  • Docker
  • Version Control
  • RDBMS
  • Lambda
  • Terraform
  • Kubernetes
  • Git
  • SQL

Vetted For

20 Skills
  • Role: AWS Cloud Implementation Specialist (Hybrid, Bangalore) - AI Screening
  • Result: 53%
  • Skills assessed: Microsoft Azure, Team Collaboration, Vendor Management, API Gateway, AWS GuardDuty, AWS Inspector, AWS Lambda, AWS Security Hub, CI/CD Pipelines, Cloud Automation, Cloud Infrastructure Design, Cloud Security, Identity and Access Management (IAM), AWS, Docker, Good Team Player, Google Cloud Platform, Kubernetes, Problem Solving Attitude, Python
  • Score: 48/90

Professional Summary

10 Years
  • Sep, 2022 - Present (3 yr)

    AWS DevOps & Cloud Infrastructure Engineer

    Mphasis
  • Aug, 2020 - Oct, 2022 (2 yr 2 months)

    AWS SRE & AWS DevSecOps Engineer

    Cognizant Solutions
  • Jul, 2017 - Oct, 2022 (5 yr 3 months)

    Lead-Database Administration & AWS Cloud Engineer

    ZeOmega Healthcare Pvt Ltd
  • Jul, 2015 - Aug, 2017 (2 yr 1 month)

    Senior SQL Database Administration

    Trianz IT Cloud Solutions
  • Dec, 2011 - Jun, 2014 (2 yr 6 months)

    SQL Database Administration

    Bharat Electronics Limited (BEL)

Applications & Tools Known

  • AWS Services
  • Jenkins
  • Maven
  • Apache
  • Nginx
  • Tomcat
  • Docker
  • Kubernetes
  • Ubuntu
  • Red Hat
  • CentOS
  • Linux
  • Windows
  • GitHub
  • SQL
  • Postman
  • SSIS
  • SSRS

Work History

10 Years

AWS DevOps & Cloud Infrastructure Engineer

Mphasis
Sep, 2022 - Present (3 yr)
    Primary expertise in public cloud infrastructure design with DevOps processes: designing, developing, modifying, and integrating complex infrastructure automation and deployment systems.

AWS SRE & AWS DevSecOps Engineer

Cognizant Solutions
Aug, 2020 - Oct, 2022 (2 yr 2 months)
    Hands-on building of servers, integrating infrastructure automation, performing security risk assessments and vulnerability management, and implementing security controls and best practices for cloud environments.

Lead-Database Administration & AWS Cloud Engineer

ZeOmega Healthcare Pvt Ltd
Jul, 2017 - Oct, 2022 (5 yr 3 months)
    Managed the organization's production MS SQL Server databases in the cloud and in on-premises data centers.

Senior SQL Database Administration

Trianz IT Cloud Solutions
Jul, 2015 - Aug, 2017 (2 yr 1 month)
    Provided SQL database operational support to technical users, developed advanced queries, and managed database systems.

SQL Database Administration

Bharat Electronics Limited (BEL)
Dec, 2011 - Jun, 2014 (2 yr 6 months)
    Implemented and maintained an e-Governance project and handled database management activities.

Education

  • Bachelor of Engineering-Computer Science

    Visvesvaraya Technological University (VTU) (2011)

AI-interview Questions & Answers

Q: Could you walk me through your overall experience and your current roles and responsibilities?

A: I have around 11 years of overall experience in the IT industry. For roughly the first five to six years I worked as a database administrator, and for the last six years I have been working as an AWS DevOps engineer and AWS infrastructure engineer. As an infrastructure engineer, my responsibilities include setting up AWS instances and taking care of all the EC2 and RDS instances across different database flavours. For spinning up servers I write infrastructure as code with Terraform, so that servers are provisioned as per need. Once the servers are set up, I take care of assigning IAM roles to them, setting up AWS serverless services with AWS Lambda, and making sure there are no vulnerabilities on the servers; we use tools such as AWS Inspector, which flags any vulnerability on a server so that the affected instances can be patched accordingly. That covers the infrastructure side. Apart from that, as an AWS DevOps engineer I set up CI/CD pipelines using Jenkins, GitHub, and Docker, and we deploy with Kubernetes. I work with different teams: the networking team, various stakeholders, QA, and application setup. Our applications are built in Python and Java, which I mainly take care of, along with setting up their CI/CD pipelines. Within AWS I also set up Secrets Manager for secret keys using KMS, making sure our data is encrypted at rest and in transit with TLS/SSL certificates. I also manage a team of 16 members as a team leader.

Q: How would you build an AWS Lambda function to parse and respond to API Gateway requests while maintaining high availability?

A: AWS Lambda is a serverless service provided by AWS. To perform this, we can make use of AWS Fargate, which helps with the deployment. Since Lambda is serverless, we need to write Python code in the AWS Lambda function; once the Python code is written, we can package the Lambda as a container and deploy that container using AWS Fargate. Because the Lambda takes requests from different APIs through API Gateway, once we have the Lambda packaged as a Docker container we have to create an EC2 instance, and on that instance create an IAM role with the permissions and policies that allow the Lambda to execute and run. To maintain high availability, we make sure the instances we build run inside a virtual private cloud, where we use different subnets for private and public IPs so that any user who is not supposed to have access to our code gets only least-privilege permissions through IAM roles, and we have an auto scaling function so that resources can be scaled in and out: if fewer resources are being utilized we can scale in and reduce CPU, and if more CPU is needed we can auto scale out with more instances. That is how we can build an AWS Lambda function that responds to API Gateway. And for the Lambda to integrate with our APIs, we can...
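Below is a minimal Python sketch of the kind of handler the answer describes, assuming an API Gateway proxy integration and a purely illustrative echo-style payload; high availability comes from Lambda itself running across multiple Availability Zones (plus reserved or provisioned concurrency), not from the handler code.

```python
import json

def lambda_handler(event, context):
    """Parse an API Gateway proxy request and return a well-formed response.

    Assumes a proxy integration, which passes the HTTP method, query
    parameters and body inside the event payload.
    """
    method = event.get("httpMethod", "GET")
    params = event.get("queryStringParameters") or {}
    try:
        body = json.loads(event["body"]) if event.get("body") else {}
    except json.JSONDecodeError:
        return {"statusCode": 400,
                "body": json.dumps({"error": "invalid JSON body"})}

    # Hypothetical business logic: echo back what was received.
    result = {"method": method, "params": params, "payload": body}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }
```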

Q: How would you leverage AWS Security Hub and Lambda to automate compliance checks across a multi-account AWS environment?

A: AWS Lambda can be used to trigger events, acting as the mechanism for sending out mails or alerts. We can write the code, put it in an S3 bucket, integrate the Lambda with that S3 bucket, and then integrate it with AWS Security Hub. Whenever there is a case where an alert needs to be notified, the code written in the Lambda and integrated with Security Hub is triggered, and the result is sent back as a response to Security Hub. Because Lambda is serverless, we can integrate it with multiple AWS accounts; for each account we can use a different API and then integrate our Lambda code with the different AWS security services, such as AWS Inspector or CloudTrail. CloudTrail can be used for monitoring the API logs and for the other logging and monitoring functionality.
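A minimal sketch of the automation described above, assuming the Lambda is subscribed to an EventBridge rule for "Security Hub Findings - Imported" events and that a hypothetical SNS topic ARN exists for alerting; in a multi-account setup, Security Hub would aggregate member-account findings into the administrator account where this function runs.

```python
import json
import boto3

securityhub = boto3.client("securityhub")
sns = boto3.client("sns")

# Hypothetical SNS topic for compliance alerts; replace with a real ARN.
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:compliance-alerts"

def lambda_handler(event, context):
    """Triggered by an EventBridge rule on Security Hub findings.

    Alerts on failed compliance checks and marks each finding as NOTIFIED
    so the same issue is not re-alerted on the next import.
    """
    for finding in event.get("detail", {}).get("findings", []):
        if finding.get("Compliance", {}).get("Status") != "FAILED":
            continue

        # Notify the security team about the failed check.
        sns.publish(
            TopicArn=ALERT_TOPIC_ARN,
            Subject="Security Hub compliance failure",
            Message=json.dumps({"Title": finding.get("Title"),
                                "AccountId": finding.get("AwsAccountId")}),
        )

        # Record in Security Hub that the finding has been handled.
        securityhub.batch_update_findings(
            FindingIdentifiers=[{"Id": finding["Id"],
                                 "ProductArn": finding["ProductArn"]}],
            Workflow={"Status": "NOTIFIED"},
        )
```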

Q: For Python cloud automation tasks, how do you ensure the code adheres to best practices such as the SOLID principles?

A: There are best practices that have to be followed whenever we write Python code. We can use Terraform as infrastructure as code, and we can also use Python to automate different tasks. We have to ensure the code adheres to the SOLID principles by following the standard principles of writing Python, and by using loggers within the code, which help us understand if there is any error and which line is producing it, along with print statements where needed. These are a few of the things we need to take care of when using Python for any automation task.
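As a small illustration of the logging practice mentioned above, here is a hedged Python sketch of an automation task (the "stop idle dev instances" job, the tag names, and the logger name are all hypothetical) that uses a configured logger with line numbers instead of bare print statements.

```python
import logging
import boto3

# Module-level logger instead of print statements, so failures can be
# traced to the exact line and filtered by severity.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s:%(lineno)d %(message)s",
)
logger = logging.getLogger("ec2_cleanup")

def stop_idle_instances(tag_key: str = "Environment",
                        tag_value: str = "dev") -> list:
    """Stop running EC2 instances carrying a given tag (hypothetical task)."""
    ec2 = boto3.client("ec2")
    try:
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": f"tag:{tag_key}", "Values": [tag_value]},
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )["Reservations"]
    except Exception:
        logger.exception("Failed to list instances")  # traceback + line number
        raise

    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
        logger.info("Stopped %d instances: %s", len(ids), ids)
    return ids
```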

Q: How would you design a function to configure S3 cross-region replication while ensuring encryption and consistency?

A: For this we can write a function that uses the Boto3 libraries to interact with Amazon S3. It configures cross-region replication between a source and a destination bucket, which ensures encryption and consistency. The function enables versioning and default server-side encryption on both buckets for added security; we just need to adjust the parameters for the specific use case. We also have to ensure data encryption and consistency by making sure the data is encrypted at rest and in transit. Within Amazon S3 we can write a definition such as replicate_s3_bucket with the source bucket name, destination bucket name, source region, and destination region, then create an S3 client for the source and destination regions. Once that is done, we create a replication configuration in which we specify the role and set the status to Enabled. This way we make sure we have the replication configuration and replication...
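A hedged sketch of the replicate_s3_bucket function outlined above, using Boto3; the bucket names, regions, and replication role ARN are placeholders, and the IAM role that S3 assumes for replication is assumed to exist already.

```python
import boto3

def replicate_s3_bucket(source_bucket, destination_bucket,
                        source_region, destination_region,
                        replication_role_arn):
    """Configure cross-region replication from source_bucket to destination_bucket."""
    src = boto3.client("s3", region_name=source_region)
    dst = boto3.client("s3", region_name=destination_region)

    # Versioning is required on both buckets; default SSE covers data at rest.
    for client, bucket in ((src, source_bucket), (dst, destination_bucket)):
        client.put_bucket_versioning(
            Bucket=bucket, VersioningConfiguration={"Status": "Enabled"})
        client.put_bucket_encryption(
            Bucket=bucket,
            ServerSideEncryptionConfiguration={
                "Rules": [{"ApplyServerSideEncryptionByDefault":
                           {"SSEAlgorithm": "AES256"}}]})

    # Attach the replication configuration to the source bucket.
    src.put_bucket_replication(
        Bucket=source_bucket,
        ReplicationConfiguration={
            "Role": replication_role_arn,
            "Rules": [{
                "ID": "crr-rule",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": f"arn:aws:s3:::{destination_bucket}"},
            }],
        },
    )
```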

Q: Develop an automation strategy using Kubernetes Operators for deploying and scaling stateful services.

A: To develop this automation strategy we first need to define custom resource definitions: identify the stateful services we need to manage with Kubernetes operators and define the custom resource that represents the desired state. Then implement the operator, developing it with a framework such as Operator SDK or Kubebuilder, and define the reconciliation logic to ensure that the actual state of the stateful services matches the desired state. Handle deployment and scaling by implementing logic for deploying and scaling stateful service instances based on the configuration specified in the CRD, using Kubernetes StatefulSets or Deployments to manage the lifecycle. Ensure data persistence and storage by configuring persistent storage for the stateful service data with Kubernetes persistent volumes. Handle backup and restore by implementing logic for backing up and restoring stateful service data using Kubernetes-native tooling or an external backup solution. Implement monitoring and alerting by integrating with Kubernetes monitoring solutions such as Prometheus and Grafana. Perform testing and validation by developing unit tests, integration tests, and end-to-end tests for the Kubernetes operators, and provide comprehensive documentation and training for installing and configuring them. By following these steps we can develop an automation strategy using operators for straightforward deployment and scaling, and in this way ensure we are managing the applications' deployment and...
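The answer names Operator SDK and Kubebuilder (Go-based frameworks); as a comparable illustration in this document's language, here is a hedged Python sketch using the kopf operator framework with a hypothetical StatefulService custom resource (group example.com, version v1, plural statefulservices). It only shows the create/update reconciliation idea, not backup, monitoring, or full lifecycle handling.

```python
import kopf
import kubernetes

@kopf.on.create("example.com", "v1", "statefulservices")
def create_fn(spec, name, namespace, **kwargs):
    """Reconcile a new StatefulService resource into a StatefulSet."""
    body = {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "serviceName": name,
            "replicas": spec.get("replicas", 1),
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [
                    {"name": name, "image": spec.get("image", "nginx:stable")}]},
            },
        },
    }
    kubernetes.config.load_incluster_config()
    kubernetes.client.AppsV1Api().create_namespaced_stateful_set(
        namespace=namespace, body=body)
    return {"statefulset": name}

@kopf.on.update("example.com", "v1", "statefulservices")
def update_fn(spec, name, namespace, **kwargs):
    """Scale the StatefulSet when the desired replica count changes."""
    kubernetes.config.load_incluster_config()
    kubernetes.client.AppsV1Api().patch_namespaced_stateful_set(
        name=name, namespace=namespace,
        body={"spec": {"replicas": spec.get("replicas", 1)}})
```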

Q: Here is an AWS CLI command used to modify an IAM policy. What is wrong or missing from the command?

A: This command will attach the policy named my-sample-role-policy to the IAM role, using the policy document stored in the JSON file my-policy.json.

Q: What process would you use to isolate and resolve performance bottlenecks in a Dockerized application?

A: To isolate and resolve a performance bottleneck in a Dockerized application, we can follow a monitoring-first approach: utilize AWS CloudWatch, Prometheus, Grafana, and other monitoring tools to collect performance metrics, monitor key metrics such as CPU and memory usage, and set up alerts to notify us of any performance abnormality. We then need to identify the bottleneck by analyzing the metrics collected from the Prometheus and Grafana monitoring tools, which may indicate where the bottleneck is, and look for resource constraints such as CPU utilization, I/O, and memory. Next comes performance testing and profiling: conduct performance tests that simulate real-world workloads to identify bottlenecks under load, using a load-testing tool such as Apache JMeter, and profile the application code and dependencies using tools such as cProfile or py-spy. Then optimize resource allocation, adjust resource limits and requests for CPU and memory, tune the containerized services, optimize the application and Docker configuration, review network and storage performance, and review AWS service performance. Finally, testing and optimization have to be iterative: continuously monitor and iterate on the performance. By following these steps, we can effectively isolate and resolve performance bottlenecks in a Dockerized...
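For the profiling step, a minimal Python sketch using the standard-library cProfile and pstats modules; handle_request is a hypothetical hot path standing in for the real service code running inside the container.

```python
import cProfile
import io
import pstats

def handle_request(payload):
    """Hypothetical hot path inside the containerized service."""
    return sorted(payload) if payload else []

def profile_hot_path():
    """Profile the hot path and print the slowest calls by cumulative time."""
    profiler = cProfile.Profile()
    profiler.enable()
    for _ in range(10_000):
        handle_request(list(range(500, 0, -1)))
    profiler.disable()

    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
    print(stream.getvalue())   # top 10 entries by cumulative time

if __name__ == "__main__":
    profile_hot_path()
```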

Q: Design a workflow for a Python-based microservice to interact with an AWS managed database while ensuring ACID compliance.

A: To design a workflow for a Python-based microservice to interface with an AWS managed database, we first have to choose the AWS managed database: select an AWS managed database service that provides ACID compliance, such as Amazon RDS for a SQL database or Amazon DynamoDB. Then define the microservice architecture, designing it according to best practices such as the twelve-factor app methodology, and implement a data access layer within each microservice that is responsible for interfacing with the AWS managed database. Transactional operations and error handling have to be maintained: implement the transactional operations within the data access layer, ensure isolation and concurrency control are in place, and consider the database isolation levels, for example read committed or serializable. Ensure durability and fault tolerance by leveraging the durability and fault-tolerance features provided by the AWS managed database service, and monitor and analyze database performance by setting up monitoring and logging for the database performance metrics. Testing and validation have to be performed: implement unit tests, integration tests, and end-to-end tests for the microservices to confirm compliance with the ACID properties. By following this workflow, we can ensure that the Python-based microservice interacts with the AWS managed database using the ACID...
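A minimal sketch of the transactional data-access layer described above, assuming an Amazon RDS for PostgreSQL backend, the psycopg2 driver, and hypothetical inventory and order_lines tables; the two writes either commit together or roll back together, which is the ACID behaviour being relied on.

```python
import psycopg2

# Hypothetical RDS PostgreSQL endpoint and credentials; in practice these
# would come from Secrets Manager or environment variables.
DSN = "host=my-rds-endpoint dbname=orders user=app password=change-me"

def reserve_stock(order_id: int, sku: str, quantity: int) -> None:
    """Reserve stock and record an order line in one ACID transaction.

    psycopg2 opens a transaction implicitly; the `with conn` block commits
    on success and rolls back if any statement raises, so both writes are
    applied atomically or not at all.
    """
    conn = psycopg2.connect(DSN)
    try:
        with conn:                      # commit / rollback boundary
            with conn.cursor() as cur:
                cur.execute(
                    "UPDATE inventory SET on_hand = on_hand - %s "
                    "WHERE sku = %s AND on_hand >= %s",
                    (quantity, sku, quantity))
                if cur.rowcount != 1:
                    raise ValueError("insufficient stock")  # triggers rollback
                cur.execute(
                    "INSERT INTO order_lines (order_id, sku, quantity) "
                    "VALUES (%s, %s, %s)",
                    (order_id, sku, quantity))
    finally:
        conn.close()
```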

Q: Give an example of how AWS Key Management Service (KMS) can be used with Python for cloud resources.

A: An example where AWS Key Management Service can be used with Python for cloud resources is encrypting and decrypting data using KMS. We write Python code that starts by importing the Boto3 libraries and initializing the KMS client. We define an encrypt_data function that encrypts data using a KMS key: pass the arguments and return the encrypted data from the kms_client.encrypt response. To decrypt, we take the response from kms_client.decrypt, passing the CiphertextBlob, and read the plaintext from that response. So plaintext_data is encrypted with the KMS key via encrypted_data = encrypt_data(...), we print the encrypted data, and then decrypt it again with the KMS key via decrypt_data. Here we use the Boto3 library to create a KMS client; the encrypt_data function takes the plaintext data and a KMS key ID as input and encrypts the data with the specified KMS key, and the decrypt_data function takes the ciphertext data as input and decrypts it using KMS. When using these functions, we have to make sure to replace key_id with the ARN or alias of the KMS key we are using, and ensure that the IAM role or user executing the script has the necessary permissions to perform the KMS encryption and decryption.
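Putting the description above into a runnable form, a hedged Boto3 sketch of encrypt_data and decrypt_data; the key alias is a placeholder, and the caller's IAM identity is assumed to have kms:Encrypt and kms:Decrypt permissions on that key.

```python
import boto3

# Hypothetical key identifier; replace with your KMS key ARN or alias.
KEY_ID = "alias/my-app-key"

kms_client = boto3.client("kms")

def encrypt_data(plaintext: bytes, key_id: str = KEY_ID) -> bytes:
    """Encrypt raw bytes with the specified KMS key."""
    response = kms_client.encrypt(KeyId=key_id, Plaintext=plaintext)
    return response["CiphertextBlob"]

def decrypt_data(ciphertext: bytes) -> bytes:
    """Decrypt a CiphertextBlob; KMS infers the key from the blob itself."""
    response = kms_client.decrypt(CiphertextBlob=ciphertext)
    return response["Plaintext"]

if __name__ == "__main__":
    encrypted = encrypt_data(b"connection-string-or-other-secret")
    print("Encrypted (first bytes):", encrypted[:16])
    print("Decrypted:", decrypt_data(encrypted))
```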

Q: What container security best practices do you follow when orchestrating containerized deployments with Kubernetes on AWS?

A: When orchestrating containerized deployments with Kubernetes on AWS, it is essential that we follow a few container security best practices. Use the least-privilege principle: assign the minimum required permissions to Kubernetes service accounts, nodes, and pods; utilize Kubernetes role-based access control (RBAC) to control access to resources within the cluster; and limit access to sensitive resources such as secrets and configuration files. Secure the container images: base them on trusted images from official repositories or reputable sources, and regularly update and patch container images to address security vulnerabilities. Network segmentation has to be done: implement network policies to control traffic between pods and services, and use network isolation techniques such as Kubernetes NetworkPolicy to restrict communication between pods and external networks. Implement secure communication by enabling Transport Layer Security (TLS) between Kubernetes components and external systems. Secure the Kubernetes API server: restrict access to it using authentication and authorization mechanisms, and enable audit logging for the API server to track and monitor API requests. Secure the storage, because the data is coming from our storage: encrypt data at rest using Kubernetes-native features or an AWS managed encryption service such as AWS KMS, and use Kubernetes Secrets or AWS Secrets Manager to manage sensitive data. Regularly audit and monitor, and stay updated and patched: since there may be vulnerabilities in Kubernetes itself, enable automatic updates for Kubernetes and infrastructure components where possible, and regularly review security advisories and CVEs (Common Vulnerabilities and Exposures) to identify and address potential security vulnerabilities. By doing this, we can take care of container security when deploying with...
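As one concrete instance of the network segmentation point, a hedged sketch using the official Kubernetes Python client to apply a default-deny ingress NetworkPolicy (the payments namespace is hypothetical; the same policy is more commonly applied as YAML via kubectl).

```python
from kubernetes import client, config

def apply_default_deny(namespace: str = "payments") -> None:
    """Create a default-deny ingress NetworkPolicy in the given namespace.

    An empty podSelector selects every pod, and listing no ingress rules
    blocks all incoming traffic until explicit allow policies are added.
    """
    config.load_kube_config()  # use load_incluster_config() inside a pod
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-ingress"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # empty selector = all pods
            policy_types=["Ingress"],               # no ingress rules -> deny all
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace=namespace, body=policy)

if __name__ == "__main__":
    apply_default_deny()
```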