
Saket Mehta

Vetted Talent
Skilled IT professional with 10+ years of experience in the IT industry, specializing in AWS cloud infrastructure and AWS DevOps engineering. Strong background in multi-cloud services, automation, and containerization, with expertise in version control, CI/CD pipelines, automated deployments and release management, and Infrastructure as Code (IaC) using Lambda, Terraform, Docker, Kubernetes, and Git. Delivers innovative solutions and process improvements to ensure customer satisfaction.
  • Role

    AWS DevOps & Cloud Infra Engineer

  • Years of Experience

    10 years

Skillsets

  • Automation
  • Python - 1 Year
  • AWS - 4 Years
  • Java - 1 Year
  • Vulnerabilities
  • Patching
  • Multi Cloud
  • IaC
  • AWS Cloud
  • Scripting
  • Containerization
  • Security
  • DevOps - 5 Years
  • Docker
  • Version Control
  • RDBMS
  • Lambda
  • Terraform
  • Kubernetes
  • Git
  • SQL

Vetted For

20 Skills
  • Role: AWS Cloud Implementation Specialist (Hybrid, Bangalore) - AI Screening
  • Result: 53%
  • Skills assessed: Microsoft Azure, Team Collaboration, Vendor Management, API Gateway, AWS GuardDuty, AWS Inspector, AWS Lambda, AWS Security Hub, CI/CD Pipelines, Cloud Automation, Cloud Infrastructure Design, Cloud Security, Identity and Access Management (IAM), AWS, Docker, Good Team Player, Google Cloud Platform, Kubernetes, Problem Solving Attitude, Python
  • Score: 48/90

Professional Summary

10 Years
  • Sep, 2022 - Present (3 yr 8 months)

    AWS DevOps & Cloud Infrastructure Engineer

    Mphasis
  • Aug, 2020 - Oct, 2022 (2 yr 2 months)

    AWS SRE & AWS DevSecOps Engineer

    Cognizant Solutions
  • Jul, 2017 - Oct, 2022 (5 yr 3 months)

    Lead-Database Administration & AWS Cloud Engineer

    ZeOmega Healthcare Pvt Ltd
  • Dec, 2011 - Jun, 2014 (2 yr 6 months)

    SQL Database Administrator

    Bharat Electronics Limited (BEL)
  • Jul, 2015 - Aug, 2017 (2 yr 1 month)

    Senior SQL Database Administrator

    Trianz IT Cloud Solutions

Applications & Tools Known

  • AWS Services
  • Jenkins
  • Maven
  • Apache
  • Nginx
  • Tomcat
  • Docker
  • Kubernetes
  • Ubuntu
  • Red Hat
  • CentOS
  • Linux
  • Windows
  • GitHub
  • SQL
  • Postman
  • SSIS
  • SSRS

Work History

10 Years

AWS DevOps & Cloud Infrastructure Engineer

Mphasis
Sep, 2022 - Present (3 yr 8 months)
    Primary expertise in public cloud infrastructure design with DevOps processes: designing, developing, modifying, and integrating complex infrastructure automation and deployment systems.

AWS SRE & AWS DevSecOps Engineer

Cognizant Solutions
Aug, 2020 - Oct, 2022 (2 yr 2 months)
    Hands-on building of servers, integrating infrastructure automation, performing security risk assessments and vulnerability management, and implementing security controls and best practices for cloud environments.

Lead-Database Administration & AWS Cloud Engineer

ZeOmega Healthcare Pvt Ltd
Jul, 2017 - Oct, 2022 (5 yr 3 months)
    Managed the organization's production MS SQL Server databases in the cloud and in on-premises data centers.

Senior SQL Database Administrator

Trianz IT Cloud Solutions
Jul, 2015 - Aug, 2017 (2 yr 1 month)
    Provided SQL database operational support to technical users, developed advanced queries, and managed database systems.

SQL Database Administrator

Bharat Electronics Limited (BEL)
Dec, 2011 - Jun, 2014 (2 yr 6 months)
    Implemented and maintained an e-Governance project and carried out database management activities.

Education

  • Bachelor of Engineering - Computer Science

    Visvesvaraya Technological University (VTU) (2011)

AI-interview Questions & Answers

I have around 11 years of overall experience in the IT industry. For roughly the first five to six years I worked as a database administrator, and for the last six years I have been working as an AWS DevOps engineer and AWS infrastructure engineer. On the infrastructure side, my responsibilities include setting up and looking after EC2 instances and RDS instances across different database flavors. For spinning up servers I write infrastructure as code with Terraform, so servers are provisioned as needed; once they are set up, I assign roles to them using IAM, set up serverless services with AWS Lambda, and make sure there are no vulnerabilities on the servers by using tools such as AWS Inspector, which flags any vulnerability on a server so the affected instances can be patched. Apart from that, as an AWS DevOps engineer I set up CI/CD pipelines using Jenkins, GitHub, and Docker, and we do deployments using Kubernetes. I work with different teams and stakeholders, including networking, QA, and application teams; the applications I mainly look after are written in Python and Java, and I set up the CI/CD pipelines for them. Within AWS I also set up secrets management for secret keys using KMS and make sure our data is encrypted at rest and in transit with TLS and SSL certificates. I also lead a team of 16 members.

How do we build an AWS Lambda function to parse and respond to API Gateway requests while maintaining high availability? AWS Lambda is a serverless service, so we write the function code, for example in Python, inside the Lambda; the code can also be packaged and deployed as a container image. To integrate with API Gateway, which forwards requests from different APIs to the Lambda, we create an IAM role with policies that give the Lambda permission to execute. For high availability and security, any VPC-attached resources should use separate public and private subnets, users who should not have access to the code should be restricted with least-privilege IAM policies, and capacity should scale automatically: when utilization is low, resources scale in, and when more capacity is needed, they scale out. That is how we can build a Lambda function that responds to API Gateway requests, with API Gateway handling the integration with our APIs.
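
A minimal sketch of such a handler, assuming an API Gateway proxy integration and an illustrative greeting payload (the field names are hypothetical):

    import json

    def lambda_handler(event, context):
        """Parse an API Gateway (proxy integration) request and return a response."""
        # Query string parameters and body arrive on well-known keys of the proxy event.
        params = event.get("queryStringParameters") or {}
        body = json.loads(event["body"]) if event.get("body") else {}
        name = params.get("name") or body.get("name", "world")

        # API Gateway expects statusCode/headers/body in the proxy response format.
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"Hello, {name}"}),
        }

Deployed behind API Gateway, Lambda's managed scaling across Availability Zones provides the high-availability part without extra servers to operate.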

How can AWS Security Hub and Lambda be used to automate compliance checks across a multi-account AWS environment? AWS Lambda can be triggered by events and used as the mechanism for sending emails or other alerts: we write the check code, keep any supporting artifacts in an S3 bucket, and integrate the Lambda with Security Hub. Whenever an alert condition has to be evaluated or notified, the Lambda code runs and reports the result back to Security Hub. Because Lambda is serverless, the same approach can be extended to multiple AWS accounts, with each account invoking the function through its own API, and the Lambda code can also be integrated with other services such as AWS Inspector or CloudTrail, where CloudTrail provides API logging and other logging and monitoring functionality.
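
A minimal sketch of the notification side, assuming the Lambda is invoked by an EventBridge rule on imported Security Hub findings and forwards high-severity ones to an SNS topic (the topic ARN environment variable is hypothetical):

    import json
    import os

    import boto3

    sns = boto3.client("sns")
    TOPIC_ARN = os.environ["ALERT_TOPIC_ARN"]  # hypothetical alert topic

    def lambda_handler(event, context):
        """Forward HIGH/CRITICAL Security Hub findings delivered via EventBridge to SNS."""
        for finding in event.get("detail", {}).get("findings", []):
            severity = finding.get("Severity", {}).get("Label", "INFORMATIONAL")
            if severity in ("HIGH", "CRITICAL"):
                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Subject=f"Security Hub finding: {severity}",
                    Message=json.dumps(finding, default=str),
                )

The same function can be deployed in each member account (or invoked cross-account) to cover a multi-account environment.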

How do you ensure best practices for Python cloud-automation tasks? There are best practices that have to be followed whenever we write Python code. We can use Terraform as infrastructure as code, and we can also use Python to automate different tasks. We have to ensure the code adheres to SOLID principles and other established conventions, and include logging within the code so that if there is an error we can see which line produced it, rather than relying only on print statements. These are a few things we need to take care of when using Python for automation tasks.
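
A small sketch of the logging point, assuming a hypothetical automation task (the function name and instance ID are illustrative):

    import logging

    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )
    logger = logging.getLogger("cloud_automation")

    def stop_unused_instance(instance_id: str) -> None:
        """Hypothetical automation task; the actual cloud API call is elided."""
        logger.info("Stopping instance %s", instance_id)
        try:
            pass  # ... call the cloud API here ...
        except Exception:
            # exception() records the traceback, so the failing line shows up in the logs.
            logger.exception("Failed to stop instance %s", instance_id)
            raise

    if __name__ == "__main__":
        stop_unused_instance("i-0123456789abcdef0")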

What is a cross-region replica and how would you design one? We can write a function that uses the boto3 library to interact with Amazon S3 and configures cross-region replication between a source and a destination bucket, ensuring encryption and consistency. The function enables versioning and default server-side encryption on both buckets for added security, and the parameters just need to be adjusted to the specific use case. We also have to ensure data is encrypted at rest and in transit. Within Amazon S3 we define a replication configuration: for example, a replicate_s3_bucket function that takes the source bucket name, destination bucket name, source region, and destination region, creates S3 clients for the source and destination regions, and then creates the replication configuration, referencing an IAM role and setting the rule status to Enabled. This way we make sure replication is configured and running.
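
A sketch of that function with boto3 (bucket names, regions, and the replication role ARN are placeholders supplied by the caller):

    import boto3

    def replicate_s3_bucket(source_bucket, dest_bucket, source_region, dest_region, role_arn):
        """Enable versioning, default encryption, and source -> destination replication."""
        src = boto3.client("s3", region_name=source_region)
        dst = boto3.client("s3", region_name=dest_region)

        # Replication requires versioning on both buckets; default encryption covers data at rest.
        for s3_client, bucket in ((src, source_bucket), (dst, dest_bucket)):
            s3_client.put_bucket_versioning(
                Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
            )
            s3_client.put_bucket_encryption(
                Bucket=bucket,
                ServerSideEncryptionConfiguration={
                    "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
                },
            )

        # The replication rule references an IAM role that S3 assumes to copy objects.
        src.put_bucket_replication(
            Bucket=source_bucket,
            ReplicationConfiguration={
                "Role": role_arn,
                "Rules": [
                    {
                        "ID": "replicate-all",
                        "Priority": 1,
                        "Status": "Enabled",
                        "Filter": {},
                        "Destination": {"Bucket": f"arn:aws:s3:::{dest_bucket}"},
                        "DeleteMarkerReplication": {"Status": "Disabled"},
                    }
                ],
            },
        )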

How do you develop an automation strategy for stateful services on Kubernetes using operators? First, define a custom resource definition (CRD): identify the stateful service to be managed with a Kubernetes operator and define the custom resource that represents its desired state. Then implement the operator using a framework such as the Operator SDK or Kubebuilder, and define the reconciliation logic that makes the actual state of the stateful service match the desired state. Handle deployment and scaling: implement logic for deploying and scaling stateful service instances based on the configuration specified in the CRD, using Kubernetes StatefulSets or Deployments to manage the lifecycle. Ensure data persistence by configuring persistent storage for the service's data with Kubernetes persistent volumes. Handle backup and restore by implementing logic for backing up and restoring the service's data, either in the operator or with an external backup solution. Implement monitoring and alerting by integrating with Kubernetes monitoring solutions such as Prometheus and Grafana. Testing and validation require unit tests, integration tests, and end-to-end tests for the operator, along with comprehensive documentation for installation and configuration. By following these steps, we can develop an automation strategy using operators for straightforward deployment and scaling, and ensure we are managing applications through their full deployment and scaling lifecycle.
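
A minimal illustration of the reconciliation idea using the official kubernetes Python client, assuming a hypothetical StatefulSet named my-database whose desired replica count would come from the custom resource's spec; a real operator built with Operator SDK or Kubebuilder watches the CRD and runs this logic continuously:

    from kubernetes import client, config

    def reconcile_replicas(name: str, namespace: str, desired_replicas: int) -> None:
        """One reconciliation step: make the StatefulSet match the desired replica count."""
        apps = client.AppsV1Api()
        sts = apps.read_namespaced_stateful_set(name=name, namespace=namespace)
        if sts.spec.replicas != desired_replicas:
            apps.patch_namespaced_stateful_set(
                name=name,
                namespace=namespace,
                body={"spec": {"replicas": desired_replicas}},
            )

    if __name__ == "__main__":
        config.load_kube_config()  # use load_incluster_config() when running inside the cluster
        reconcile_replicas("my-database", "default", desired_replicas=3)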

The question showed an AWS CLI command used to modify an IAM policy and asked what is wrong or missing from it. The intent of the command is to attach the policy named my-sample-role-policy to the IAM role, using the policy document stored in the JSON file my-policy.json.
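
Since the original command is not reproduced in the transcript, here is a hedged boto3 sketch of attaching such an inline policy (the role and policy names are hypothetical; the rough AWS CLI equivalent is shown in the comment):

    import boto3

    # Approximate CLI equivalent:
    #   aws iam put-role-policy --role-name MyRole \
    #       --policy-name my-sample-role-policy \
    #       --policy-document file://my-policy.json
    iam = boto3.client("iam")

    with open("my-policy.json") as f:
        policy_document = f.read()

    iam.put_role_policy(
        RoleName="MyRole",                   # role receiving the inline policy
        PolicyName="my-sample-role-policy",  # inline policy name
        PolicyDocument=policy_document,      # JSON policy document as a string
    )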

To isolate and resolve a performance bottleneck in a Dockerized application, we can follow a process that involves monitoring and analyzing performance metrics. We can use tools like AWS CloudWatch, Prometheus, and Grafana to collect performance metrics, monitor key metrics such as CPU and memory usage, and set up alerts to notify us of any performance abnormalities. We identify the bottleneck by analyzing the metrics collected in Prometheus and Grafana, looking for resource constraints such as CPU utilization, I/O, and memory usage, and conduct performance testing to simulate real-world workloads and surface issues under load, for example with a load-testing tool like Apache JMeter. To profile the application code and its dependencies we can use tools such as cProfile, then optimize resource allocation by adjusting resource limits and requests for CPU and memory. We also tune the containerized services, optimize the application and Docker configuration, review network and storage performance, and review AWS service performance. Finally, iterative testing and optimization have to be done, continuously monitoring and iterating on performance. By following these steps, we can effectively isolate and resolve performance bottlenecks in a Dockerized application.
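
A tiny example of the profiling step with cProfile (handle_request is a stand-in for whatever code path is suspected of being slow):

    import cProfile
    import pstats

    def handle_request():
        """Placeholder for the suspected hot path in the application."""
        sum(i * i for i in range(1_000_000))

    profiler = cProfile.Profile()
    profiler.enable()
    handle_request()
    profiler.disable()

    # Print the ten most expensive calls by cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)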

To design a workflow for Python-based microservices to interface with an AWS-managed database while preserving ACID properties, we first choose an AWS-managed database service that provides ACID compliance, such as Amazon RDS for a SQL database or Amazon DynamoDB. We then define the microservice architecture following best practices such as the twelve-factor app methodology, and develop a data access layer within each microservice that is responsible for interfacing with the AWS-managed database. Next we implement transactional operations and error handling in that data access layer, ensuring isolation and concurrency control are in place and considering database isolation levels such as read committed or serializable. We ensure durability and fault tolerance by leveraging the durability and fault-tolerance features provided by the AWS-managed database service, and we monitor and analyze database performance by setting up monitoring and logging for database performance metrics. Testing and validation have to be performed: unit tests, integration tests, and end-to-end tests for the microservices, verifying compliance with the ACID properties. By following this workflow, we can ensure that the Python-based microservices interact with the AWS-managed database correctly while using its features.
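
A small sketch of the transactional-operation step, assuming DynamoDB as the managed database and a hypothetical accounts table; the two updates either both succeed or both fail:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Debit one account and credit another atomically; if the condition on the first
    # update fails (insufficient balance), neither change is applied.
    dynamodb.transact_write_items(
        TransactItems=[
            {
                "Update": {
                    "TableName": "accounts",
                    "Key": {"account_id": {"S": "alice"}},
                    "UpdateExpression": "SET balance = balance - :amt",
                    "ConditionExpression": "balance >= :amt",
                    "ExpressionAttributeValues": {":amt": {"N": "100"}},
                }
            },
            {
                "Update": {
                    "TableName": "accounts",
                    "Key": {"account_id": {"S": "bob"}},
                    "UpdateExpression": "SET balance = balance + :amt",
                    "ExpressionAttributeValues": {":amt": {"N": "100"}},
                }
            },
        ]
    )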

As an example of where AWS Key Management Service (KMS) can be used with Python for a cloud resource, specifically encrypting and decrypting data: we write Python code that starts by importing the boto3 library and initializing a KMS client. We define an encrypt_data function that takes the plaintext data and a KMS key ID as input and encrypts the data with the specified key by calling kms_client.encrypt, returning the encrypted data from the response. We then define a decrypt_data function that takes the ciphertext blob as input, calls kms_client.decrypt, and reads the plaintext back from the response. We have to replace key_id with the ARN or alias of whichever KMS key we are using, and ensure that the role or user executing the script has the necessary permissions to perform KMS encryption and decryption.
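
A sketch of those two functions with boto3 (the key alias is a placeholder; substitute your key's ARN or alias):

    import boto3

    kms_client = boto3.client("kms")
    KEY_ID = "alias/my-app-key"  # placeholder; use your KMS key ARN or alias

    def encrypt_data(plaintext: bytes, key_id: str = KEY_ID) -> bytes:
        """Encrypt data with the specified KMS key and return the ciphertext blob."""
        response = kms_client.encrypt(KeyId=key_id, Plaintext=plaintext)
        return response["CiphertextBlob"]

    def decrypt_data(ciphertext: bytes) -> bytes:
        """Decrypt a KMS ciphertext blob; KMS identifies the key from the blob itself."""
        response = kms_client.decrypt(CiphertextBlob=ciphertext)
        return response["Plaintext"]

    if __name__ == "__main__":
        encrypted = encrypt_data(b"database password")
        print(decrypt_data(encrypted))  # b'database password'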

When orchestrating containerized deployments with Kubernetes on AWS, it is essential to follow container security best practices. Apply the least-privilege principle by assigning the minimum required permissions to Kubernetes service accounts, nodes, and pods, use Kubernetes role-based access control (RBAC) to control access to resources within the cluster, and limit access to sensitive resources such as secrets and configuration files. Securing container images is important: use trusted images from official repositories or reputable sources, and regularly update and patch container images to address security vulnerabilities. Network segmentation must be done by implementing Kubernetes network policies to control traffic between pods and services and to restrict communication between pods and external networks, and secure communication should be enforced by enabling transport layer security (TLS) between Kubernetes components and external systems. Securing the Kubernetes API server is also crucial: restrict access to it using authentication and authorization mechanisms, and enable audit logging to track and monitor API requests. Secure storage by encrypting data at rest using Kubernetes-native features or AWS-managed encryption such as AWS KMS, and manage sensitive data with Kubernetes Secrets or AWS Secrets Manager. Regular auditing and monitoring are necessary, and staying updated and patched is essential: since there can be vulnerabilities in Kubernetes itself, enable automatic updates for Kubernetes and infrastructure components where possible, and regularly review security advisories and CVEs to identify and address potential vulnerabilities. By doing this, we can take care of container security when deploying with Kubernetes.
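
A small sketch of the least-privilege RBAC point using the kubernetes Python client (the namespace, service account, and verbs are illustrative; applying an equivalent manifest with kubectl works just as well):

    from kubernetes import client, config

    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()

    namespace = "payments"           # hypothetical namespace
    service_account = "ci-deployer"  # hypothetical service account used by the pipeline

    # Least-privilege Role: the pipeline may read pods and patch deployments, nothing else.
    role = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": "ci-deployer-role", "namespace": namespace},
        "rules": [
            {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list"]},
            {"apiGroups": ["apps"], "resources": ["deployments"], "verbs": ["get", "patch"]},
        ],
    }

    binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": "ci-deployer-binding", "namespace": namespace},
        "subjects": [
            {"kind": "ServiceAccount", "name": service_account, "namespace": namespace}
        ],
        "roleRef": {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "Role",
            "name": "ci-deployer-role",
        },
    }

    rbac.create_namespaced_role(namespace=namespace, body=role)
    rbac.create_namespaced_role_binding(namespace=namespace, body=binding)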