Rahul Goplani

Vetted Talent
Determined Cloud Engineer with expertise in multiple cloud platforms. Passionate about improving network uptime and driving system availability. Strong team player with excellent work ethic, collaboration skills, and independent judgement.
  • Role

    DevOps Lead

  • Years of Experience

    6.7 years

Skillsets

  • Cron
  • SSL/TLS
  • Shell Scripting
  • Python
  • nginx
  • Nagios
  • MySQL
  • Kibana
  • IAM
  • IAC
  • Helm
  • GitHub Actions
  • GCP
  • ELK
  • Docker
  • AWS - 5 Years
  • CloudWatch
  • Cloudformation
  • Certificate Management
  • Bash
  • Azure DevOps
  • Azure
  • audit logs
  • Argo CD
  • Apache
  • Linux
  • CI/CD
  • CI/CD - 5 Years
  • Terraform - 4 Years
  • Kubernetes - 4 Years

Vetted For

11 Skills
  • Roles & Skills: Infrastructure Engineer (Remote), AI Screening
  • Results: 56%
  • Skills assessed: Ansible, AWS CloudFormation, ISO 27001 Standards, Application Security, Cloud Infrastructure, HIPAA, SOC2, Terraform, AWS, Git, Embedded Linux
  • Score: 50/90

Professional Summary

6.7 Years
  • Sep, 2023 - Mar, 2025 (1 yr 6 months)

    DevOps Lead

    P S Intelegencia
  • May, 2022 - Sep, 2023 (1 yr 4 months)

    Build & Release Engineer

    Baker Hughes
  • Jan, 2021 - Apr, 2022 (1 yr 3 months)

    DevOps Engineer

    P S Intelegencia
  • Mar, 2020 - Jan, 2021 (10 months)

    Associate Cloud Engineer

    Scriptuit Technologies
  • Aug, 2018 - Jan, 2020 (1 yr 5 months)

    Linux System Administrator

    ICS

Applications & Tools Known

  • Nagios
  • Kibana
  • GitHub Actions
  • Kubernetes
  • Helm charts
  • Argo CD
  • Azure DevOps
  • AWS ECS
  • AWS EKS
  • Git
  • Docker
  • CloudFormation
  • Terraform
  • CloudWatch
  • SNS
  • IAM
  • VPC
  • EC2
  • AMI
  • CodePipeline
  • CodeDeploy
  • Apache
  • Nginx
  • ELB
  • S3
  • RDS

Work History

6.7 Years

DevOps Lead

P S Intelegencia
Sep, 2023 - Mar, 2025 (1 yr 6 months)
    Designed and maintained AWS architecture components to ensure availability and secure access. Improved monitoring and incident visibility using Nagios/Kibana and AWS CloudWatch; reduced diagnosis time via dashboards and alerting. Automated recurring operations and maintenance tasks (scripts/IaC/CI improvements), improving deployment repeatability and reducing manual effort.
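
    A minimal boto3 sketch of the kind of CloudWatch alerting described above; the alarm name, instance ID, and SNS topic ARN are placeholders:

        import boto3

        cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

        # Alarm when average EC2 CPU stays above 80% for two 5-minute periods,
        # notifying an SNS topic that feeds the on-call alerting channel.
        cloudwatch.put_metric_alarm(
            AlarmName="high-cpu-web-01",                                          # placeholder
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
            Statistic="Average",
            Period=300,
            EvaluationPeriods=2,
            Threshold=80.0,
            ComparisonOperator="GreaterThanThreshold",
            AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],       # placeholder
        )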

Build & Release Engineer

Baker Hughes
May, 2022 - Sep, 2023 (1 yr 4 months)
    Managed CI/CD workflows using GitHub Actions and deployments on Kubernetes with Helm and Argo CD. Supported containerized application releases; improved rollout stability through versioning and rollback readiness. Worked with Azure DevOps and Azure services (ACI, Web Apps, Storage, Azure Database services) for pipelines and environment management.
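
    For the rollback-readiness piece, a minimal sketch driving the Helm CLI from Python; the release name, revision, and namespace are placeholders:

        import subprocess

        def rollback(release: str, revision: int, namespace: str = "default") -> None:
            """Roll a Helm release back to a known-good revision."""
            # `helm history` lists the available revisions; `helm rollback` reverts to one.
            subprocess.run(["helm", "history", release, "-n", namespace], check=True)
            subprocess.run(["helm", "rollback", release, str(revision), "-n", namespace], check=True)

        rollback("web-app", 3)  # placeholder release and revision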

DevOps Engineer

P S Intelegencia
Jan, 2021 - Apr, 2022 (1 yr 3 months)
    Managed AWS infrastructure for container workloads (ECS/EKS) and core services (Route 53, ACM, IAM). Maintained Terraform repositories and executed infrastructure changes through a defined workflow. Automated operational tasks using Python/Bash; handled SSL certificate binding and secure endpoints.
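
    As an illustration of the certificate-management side, a minimal boto3 sketch that flags ACM certificates nearing expiry; the region and 30-day window are placeholders:

        import boto3
        from datetime import datetime, timedelta, timezone

        acm = boto3.client("acm", region_name="us-east-1")

        soon = datetime.now(timezone.utc) + timedelta(days=30)
        for summary in acm.list_certificates()["CertificateSummaryList"]:
            cert = acm.describe_certificate(CertificateArn=summary["CertificateArn"])["Certificate"]
            not_after = cert.get("NotAfter")  # absent for certificates still pending validation
            if not_after and not_after < soon:
                print(f"Renew soon: {cert['DomainName']} expires {not_after}")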

Associate Cloud Engineer

Scriptuit Technologies
Mar, 2020 - Jan, 2021 (10 months)
    Managed GCP infrastructure operations and routine changes using Python CLI/automation scripts. Supported API-based integrations by handling requests/responses, troubleshooting failures. Maintained Azure DevOps pipelines with pre/post-deployment gates (sanity checks), improving release quality. Assisted with early container adoption work (Docker basics / packaging) to support consistent environments.
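
    A minimal sketch of the API integration handling described above, with enough logging to troubleshoot failures; the endpoint URL and retry count are placeholders:

        import logging
        import requests

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger("api-sync")

        def fetch(url: str, retries: int = 3, timeout: int = 10) -> dict:
            """GET a JSON endpoint, retrying and logging failures for later triage."""
            for attempt in range(1, retries + 1):
                try:
                    resp = requests.get(url, timeout=timeout)
                    resp.raise_for_status()  # surface 4xx/5xx responses as exceptions
                    return resp.json()
                except requests.RequestException as exc:
                    log.warning("attempt %d/%d failed: %s", attempt, retries, exc)
            raise RuntimeError(f"gave up on {url} after {retries} attempts")

        data = fetch("https://api.example.com/v1/status")  # placeholder URL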

Linux System Administrator

ICS
Aug, 2018 - Jan, 2020 (1 yr 5 months)
    Managed Linux servers across Ubuntu and CentOS. Owned backup operations (scheduling, retention/rotation) to improve recoverability. Automated routine operations using cron jobs and shell utilities; performed standard monitoring checks. Provided L1/L2 incident support: triage, log analysis, escalation and closure documentation.
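
    A minimal sketch of the retention/rotation idea, as a Python script a cron entry could run nightly; the backup path and 14-day window are placeholders:

        # Example crontab entry:  0 2 * * * /usr/bin/python3 /opt/scripts/rotate_backups.py
        import time
        from pathlib import Path

        BACKUP_DIR = Path("/var/backups/app")  # placeholder path
        RETENTION_DAYS = 14

        cutoff = time.time() - RETENTION_DAYS * 86400
        for archive in BACKUP_DIR.glob("*.tar.gz"):
            if archive.stat().st_mtime < cutoff:
                archive.unlink()  # drop archives older than the retention window
                print(f"removed {archive}")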

Major Projects

1 Project

Family Business transformation

    Digitally transformed family retail business by implementing a barcode-based billing and inventory system.

Education

  • MBA (EPGP)

    Indian Institute of Management Indore (2026)
  • B.E. Electronics & Tele-Communication Engineering

    (2018)

Certifications

  • Certified Kubernetes Administrator (CKA)

  • AWS Solutions Architect

  • HashiCorp Terraform Associate

  • C Language Certification (Centre of Advanced Technology)

AI Interview Questions & Answers

Hey there. I'm Rahul Goplani. I completed my graduation in 2018, and my branch of specialization was engineering in electronics and telecommunication. I started my career as a Linux system admin, and during that period I realized that cloud and DevOps technology was moving into the market, so I started learning AWS. I got an opportunity in one of the startups, Scriptuit Technologies, where I joined as an associate cloud engineer. Unfortunately, the year after, COVID hit us very badly and that startup shut down, so I had to look for another opportunity in the market. Since I was good with DevOps technologies and tools, I got an opportunity at P S Intelegencia, a company located in Noida. Currently, I'm working here as a senior DevOps engineer. My tools and technologies include Kubernetes, Terraform, AWS Cloud, and we are using Azure Cloud to a limited extent. In AWS, we use EC2, IAM, CloudWatch, CloudTrail, and RDS. Additionally, for our department, we use GitHub Actions for CI/CD. This is all of my current tool stack.

Well, so far, the disaster recovery we generally practice is that we have all of our infrastructure created with Terraform. Terraform is an infrastructure-as-code tool for the cloud, and if something goes wrong in any region, within half an hour we are able to replicate that whole infrastructure in another region. That is the practice we have. Also, in the case of databases, we have databases in different availability zones, just in case something happens to one of the availability zones in that region; we have replicated servers in another zone. So we can plan for any kind of disaster, and having a recovery plan for the infrastructure is the goal.
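
A minimal boto3 sketch of checking the multi-AZ posture described in this answer; the region is a placeholder:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Flag any database instance that lacks a standby in another availability zone.
    for db in rds.describe_db_instances()["DBInstances"]:
        status = "multi-AZ" if db["MultiAZ"] else "SINGLE-AZ (no standby)"
        print(f"{db['DBInstanceIdentifier']}: {status}")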

Well, so we have our infrastructure locally, and we need to move all of this infrastructure to the AWS cloud platform. In that case, there is a service in AWS whose name I am unable to recall right now; by using that service, we can bring our local infrastructure setup into the cloud platform. For servers, we have EC2 instances; for databases, we can use RDS; for storage, we can use the Simple Storage Service, S3; and for security purposes, we have user access management with IAM in AWS.

So, if multiple engineers are working in a Terraform code base, what we can do is put our TF state file in one of the S3 buckets. That way, we can lock that state file, so when someone else tries to run an apply command, the lock is held by the engineer who started the change, and another engineer will not be able to perform any Terraform commands until it is released. That way, we can safely share our TF state file.

Better still, in the case of a large-scale environment, we can use an Active Directory kind of solution, with users managed through Active Directory integration. That way, we will have a more secure setup.
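
A minimal sketch of the locking idea in this answer, assuming the DynamoDB-backed locking used by Terraform's S3 backend: a conditional put on a LockID item fails if another engineer already holds the lock. Table, key, and owner values are placeholders:

    import boto3
    from botocore.exceptions import ClientError

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")

    def acquire_lock(table: str, lock_id: str, owner: str) -> bool:
        try:
            dynamodb.put_item(
                TableName=table,
                Item={"LockID": {"S": lock_id}, "Owner": {"S": owner}},
                ConditionExpression="attribute_not_exists(LockID)",  # fail if already held
            )
            return True
        except ClientError as exc:
            if exc.response["Error"]["Code"] == "ConditionalCheckFailedException":
                return False  # someone else holds the lock
            raise

    held = acquire_lock("terraform-locks", "myapp/terraform.tfstate", "rahul")  # placeholders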

We can use KMS. This is one of the encryption services offered by AWS, for encrypting the data of any service in the cloud. In terms of how it differs from S3, to be honest, I couldn't understand the question clearly here. Yeah.
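
For illustration, a minimal boto3 sketch of KMS encrypting and decrypting a small (under 4 KB) payload directly; the key alias is a placeholder:

    import boto3

    kms = boto3.client("kms", region_name="us-east-1")

    ciphertext = kms.encrypt(
        KeyId="alias/app-secrets",  # placeholder key alias
        Plaintext=b"database-password",
    )["CiphertextBlob"]

    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
    assert plaintext == b"database-password"  # round trip succeeds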

There's a problem with the if condition over here. The reason is that the value in the if condition is not a string, and we are appending it to a string over here.

The problem I see over here is the CIDR IP for both protocols: it's open for everyone, 0.0.0.0/0. So this is a wide-open network, which can lead to security risks.
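
A minimal boto3 sketch that flags the issue called out here, scanning for security-group rules open to 0.0.0.0/0; the region is a placeholder:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg["IpPermissions"]:
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    print(f"{sg['GroupId']} ({sg['GroupName']}): "
                          f"port {rule.get('FromPort', 'all')} open to the world")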

So, in this case, if we have a microservice-based architecture, we can use EKS, the Elastic Kubernetes Service offered by AWS. That way, we won't have to manage the control-plane servers or bear their cost; we just pay around $70 per month for EKS. And we will have containers and pods deployed over there, maintained as auto-scalable: whenever requirements come up, like CPU utilization goes high or something, a new pod will automatically be rolled out. That is how it will be auto-scalable. And yeah, for maintaining SOC 2 compliance requirements, we can restrict third-party installations and ensure that the third-party tools we are using in our clusters follow SOC 2 guidelines.
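
As one concrete check in this direction, a minimal boto3 sketch verifying that EKS control-plane audit logging is enabled; the cluster name is a placeholder, and this illustrates a single evidence point rather than a full SOC 2 control:

    import boto3

    eks = boto3.client("eks", region_name="us-east-1")

    cluster = eks.describe_cluster(name="prod-cluster")["cluster"]  # placeholder name
    for entry in cluster.get("logging", {}).get("clusterLogging", []):
        if "audit" in entry["types"]:
            print("audit logging enabled:", entry["enabled"])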

So, for maintaining high availability, we can always have autoscaling for a Kubernetes cluster. Let's say CPU utilization is going high; then we can have a new instance rolled out, and the respective pods will automatically be scheduled on it. I don't have much idea about automatic failover.
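
A minimal sketch of the CPU-based scaling mentioned here, creating a HorizontalPodAutoscaler with the official kubernetes Python client; the deployment name, namespace, and thresholds are placeholders:

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running in-cluster

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="web-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="web"),  # placeholder target
            min_replicas=2,
            max_replicas=10,
            target_cpu_utilization_percentage=70,  # scale out above 70% average CPU
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa)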

Sorry. So, for having a CI/CD workflow with Docker, what we can do is have GitHub Actions, where we can write a workflow file. We can mention a build step, and then we can mention a push step with the location where we want to push our image. The location could be a container registry, whether AWS Elastic Container Registry or Azure Container Registry, whatever it is. Prior to that, we can run some vulnerability scanning: we can run a Trivy scan or a tfsec scan, or we can use Wazuh for checking vulnerabilities in that code. After creating the image, we can scan the image as well; that is also an option. And then, yeah, that is the way we can have our CI/CD workflow. CD is mentioned here, but maybe we can use GitOps, any of the GitOps technologies like Argo CD: it checks if any change happens in the GitHub code base it watches, automatically captures that change, and deploys the new image into our Kubernetes cluster.
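
A minimal sketch of that build, scan, push flow, driving the docker and trivy CLIs from Python; the registry path and tag are placeholders:

    import subprocess

    IMAGE = "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:1.4.2"  # placeholder

    def run(*cmd: str) -> None:
        subprocess.run(cmd, check=True)  # abort the pipeline if any step fails

    run("docker", "build", "-t", IMAGE, ".")
    # Fail on high/critical vulnerabilities before the image ships.
    run("trivy", "image", "--severity", "HIGH,CRITICAL", "--exit-code", "1", IMAGE)
    run("docker", "push", IMAGE)
    # From here, a GitOps tool such as Argo CD can pick up the new tag and deploy it.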