Vetted Talent

Gopinathan Krishnasamy

Cloud & DevOps engineer with over 8 years of experience in system design and Site Reliability Engineering (SRE), specializing in CI/CD, cloud computing, and chaos engineering. Aims to apply Agile methodologies to improve operational efficiency and system reliability.
  • Role

    Sr. Cloud & DevOps Engineer

  • Years of Experience

    8 years

Skillsets

  • Helm
  • Chaos Testing
  • Shell Script
  • Jira
  • IAM
  • ELK
  • DynamoDB
  • Control-M
  • Confluence
  • Linux
  • AWS - 7 Years
  • DevOps - 6 Years
  • Docker
  • Python - 3 Years
  • Prometheus
  • IBM Cloud
  • Groovy
  • Grafana - 5 Years
  • Git
  • Ansible - 5 Years
  • Kubernetes - 4 Years
  • Jenkins - 5 Years
  • Terraform - 5 Years
  • Virtualization
  • Automation
  • CI/CD - 5 Years
  • EKS
  • Agile

Vetted For

13 Skills
  • Senior DevOps Engineer (Hybrid - Hyderabad) - AI Screening
  • Skills assessed: Go, logging and monitoring, application server, CI/CD, Configuration Management, DevOps, Terraform, AWS, Docker, Java, Jenkins, Kubernetes, Python
  • Score: 56/100

Professional Summary

8 Years
  • Nov 2022 - Present (2 yr 10 months)

    Sr. Cloud & DevOps Engineer

    IBM
  • Mar 2021 - Oct 2022 (1 yr 7 months)

    DevOps Engineer

    Infinite Computer Solutions
  • Apr 2017 - Feb 2021 (3 yr 10 months)

    Associate Engineer

    Accenture

Applications & Tools Known

  • Docker
  • Kubernetes
  • Git
  • Jenkins
  • DNS
  • ELK Stack
  • EC2
  • Route53
  • S3
  • RDS
  • SNS
  • SQS
  • IAM
  • Prometheus
  • Grafana
  • Terraform
  • Helm
  • IBM Cloud
  • AWS
  • Control-M

Work History

8 Years

Sr. Cloud & DevOps Engineer

IBM
Nov 2022 - Present (2 yr 10 months)
    Performed thorough chaos testing on IBM Cloud and Kubernetes services, evaluating resilience through Net Promoter Scores for IBM Cloud. Collaborated with pillar teams to implement CI/CD pipelines, reducing deployment time by 25%. Reverse-engineered an e-commerce platform to identify cloud security gaps, then implemented network hardening and cost optimizations that reduced cloud spend by 20%. Implemented observability solutions using the ELK Stack and Grafana for real-time log monitoring. Worked on Kubernetes cluster setup, configuration of various Kubernetes add-ons, and cluster monitoring.

DevOps Engineer

Infinite Computer Solutions
Mar 2021 - Oct 2022 (1 yr 7 months)
    Deployed and managed a Kubernetes cluster on IBM IKS, achieving 99.99% availability for a high-traffic e-commerce site. Created and maintained distroless Docker images to containerize the modules. Designed and automated IaC projects using Terraform to manage AWS resources.

Associate Engineer

Accenture
Apr 2017 - Feb 2021 (3 yr 10 months)
    Involved in designing and deploying a multitude of applications utilizing most of the AWS stack (EC2, Route53, S3, RDS, DynamoDB, SNS, IAM), focusing on high availability.

Achievements

  • Strategically optimized the infrastructure on IBM Cloud, resulting in a cost reduction of over 50% while fortifying network security.
  • Implemented a detect-secret script in all Git repositories to scan for and prevent inadvertent exposure of sensitive data, enhancing security compliance and reducing risk.
  • Authored several internal blogs detailing the configuration of TGW with VPE endpoints, integration of COS with a custom resolver, and implementation of a Hub & Spoke architecture.
  • Pinnacle Award - FY19 (Accenture)
  • Best Internal Blog winner (IBM)

Major Projects

2 Projects

Infrastructure Optimization on IBM Cloud

    Strategically optimized the infrastructure on IBM Cloud, resulting in a remarkable cost reduction of over 50% while fortifying network security.

Detect Secret Script Implementation

    Implemented a detect secret script in all Git repositories to scan for and prevent inadvertent exposure of sensitive data, enhancing security compliance and reducing risk.
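A minimal sketch of what such a detect-secret check might look like, assuming simple regex heuristics; the pattern names and sample strings below are invented for illustration, and a real setup would rely on a dedicated scanner such as detect-secrets rather than this toy version:

```python
import re

# Hypothetical heuristics: an AWS access key ID shape and a generic quoted
# API-key assignment. Real scanners ship far more patterns plus entropy checks.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs for every suspected secret."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# Made-up sample input; a pre-commit hook would run this over staged diffs.
sample = 'aws_key = "AKIAABCDEFGHIJKLMNOP"'
print(find_secrets(sample))
```

Wired into a Git pre-commit hook, a non-empty result would block the commit until the offending value is removed.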

Education

  • B.E (Mechanical Engineering)

    Sri Krishna College of Technology (2017)

Certifications

  • AWS Certified Solutions Architect - Associate

  • IBM Certified Advocate - Cloud v2

AI Interview Questions & Answers

Hi, good morning, this is Gopi here. I have a total of 7 years of experience in the field of cloud and DevOps engineering, and I am currently working with IBM as a Senior Cloud & DevOps Engineer. I have good experience with AWS and with tooling like Jenkins and Kubernetes. So that's about myself.

What we actually do is, when we dockerize the application, we push the image into Artifactory; from Artifactory we pull the images and deploy them as containers into Kubernetes (EKS). The cluster is cross-region or multi-region and spread across multiple availability zones, so there won't be downtime when you perform operations. That is how a typical production environment should be set up when you are working with Kubernetes containers and pods.
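As a rough illustration of the multi-region part of this answer, the snippet below composes per-region image references in the shape ECR registries use (`<account>.dkr.ecr.<region>.amazonaws.com`); the account ID, repository name, and region list are placeholder values, not ones from the source:

```python
# Illustrative sketch: building fully qualified container image references
# for a multi-region deployment. All concrete values are examples only.
def image_ref(registry: str, repo: str, tag: str) -> str:
    """Return a fully qualified container image reference."""
    return f"{registry}/{repo}:{tag}"

REGIONS = ["us-east-1", "eu-west-1"]

def regional_refs(repo: str, tag: str, account: str = "123456789012") -> dict[str, str]:
    # ECR registry hostnames follow <account>.dkr.ecr.<region>.amazonaws.com
    return {
        region: image_ref(f"{account}.dkr.ecr.{region}.amazonaws.com", repo, tag)
        for region in REGIONS
    }

print(regional_refs("shop/web", "v1.2.0"))
```

Each regional cluster then pulls the same tag from its nearest registry, which is what lets the deployment survive a zone or region outage.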

For this scenario I would use SonarQube, which is a third-party tool. It scans the repository whenever there is a configuration mismatch or a checkout issue, so it can be used to predict problems before they turn into incidents, and it keeps the codebase easy to scan and maintain. I strongly recommend SonarQube as one of the best examples; apart from SonarQube there are multiple tools available in the market that can be used to validate and avoid configuration mismatch issues.
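The kind of configuration-mismatch check described above can be sketched in a few lines, assuming each environment's config is a flat dict; the key names are invented, and a real pipeline would delegate this to a scanner such as SonarQube or a dedicated linter:

```python
# Toy pre-merge check: compare the keys an environment config actually has
# against the keys the deployment expects, and report both directions.
def config_mismatches(expected_keys: set[str], config: dict) -> dict[str, list[str]]:
    """Report keys that are missing from or unexpected in a config."""
    present = set(config)
    return {
        "missing": sorted(expected_keys - present),
        "unexpected": sorted(present - expected_keys),
    }

# Example: "log_lvl" is a typo for the expected "log_level".
print(config_mismatches({"db_host", "db_port", "log_level"},
                        {"db_host": "localhost", "log_lvl": "info"}))
```

Failing the build when either list is non-empty catches exactly the mismatch-before-incident case the answer describes.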

Yeah, if you want to create infra, Terraform is the best option. Within AWS there is also an option called CloudFormation, which you can use to create your infra. But Terraform is widely used across all the cloud environments (Azure, GCP, IBM Cloud), so it can be used as a multi-cloud infrastructure-as-code tool to create and maintain infra in all the environments.

The methodology I would recommend is Agile, together with CI/CD (continuous integration, continuous deployment). When you talk about methodology, two things come into the picture. The first is Agile, so that there won't be delays between testing, integration, development, and the post-production go-live. The second is the CI/CD pipeline: every change you make is pushed into the pipeline, where it is built, tested, and then put into production. So one option is Agile and the other is CI/CD.

I am not sure of this question. What I could say is that maintaining the Terraform state file could be one of the key reasons, but I am not sure what they are referring to. The Terraform state file is the file that records all the infra that has been created; that file will be used when you want to destroy the infrastructure in the future. I think that is what they are referring to, but I am honestly not sure about this.
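The role of the state file mentioned in this answer can be illustrated with a short sketch that lists the resources a Terraform state tracks; the JSON below is a made-up example following the general shape of Terraform's version-4 state format (top-level `resources` entries carrying `type` and `name`):

```python
import json

# Sketch: read a Terraform state document and list what it is tracking.
# This is what `terraform destroy` consults to know what to tear down.
def list_resources(state_json: str) -> list[str]:
    """Return "type.name" identifiers for every resource in the state."""
    state = json.loads(state_json)
    return [f'{r["type"]}.{r["name"]}' for r in state.get("resources", [])]

# Hypothetical, heavily trimmed state document.
example_state = json.dumps({
    "version": 4,
    "resources": [
        {"type": "aws_instance", "name": "web"},
        {"type": "aws_s3_bucket", "name": "assets"},
    ],
})
print(list_resources(example_state))
```

Because the state is the only record mapping configuration to real resources, keeping it consistent (and usually in a shared remote backend) is what the interview question was likely probing.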

Sorry, I'm not good with JavaScript; I'm only good with Python and Visual Script. Let me go through it. Sorry, I'm not sure about this.


During peak hours, what you can do is introduce a load balancer with an autoscaling group. When there is high load, autoscaling can scale vertically or horizontally, so based on the load the e-commerce website can be balanced. The infra can be enlarged to a different instance type, or it can be scaled down; depending on the requirement it can increase or decrease. So the best option I could recommend for an engineer on AWS is to use an autoscaling group. Since it is an e-commerce site, autoscaling behind a load balancer would be the correct answer in this scenario.
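A toy version of the scaling decision described above, assuming a simple threshold policy on average CPU; the thresholds, step size, and capacity bounds are made-up numbers, not AWS defaults:

```python
# Sketch of the decision an autoscaling group makes each evaluation period:
# scale out one instance under high CPU, scale in one under low CPU,
# and always stay within the configured min/max capacity.
def desired_capacity(current: int, avg_cpu: float,
                     scale_out_at: float = 70.0, scale_in_at: float = 30.0,
                     minimum: int = 2, maximum: int = 10) -> int:
    """Return the new instance count for the given average CPU utilisation."""
    if avg_cpu > scale_out_at:
        current += 1      # add an instance under high load
    elif avg_cpu < scale_in_at:
        current -= 1      # remove one when load drops
    return max(minimum, min(maximum, current))

print(desired_capacity(4, 85.0))  # 5
print(desired_capacity(4, 20.0))  # 3
```

A load balancer in front then spreads requests over whatever capacity the policy has settled on, which is the combination the answer recommends for peak traffic.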

You can introduce the resource called Secrets: Secrets can manage all your credentials to be used inside Kubernetes. Apart from that, you can create a vault (for example, HashiCorp Vault or 1Password) and store all your passwords under it; by using the vault's API endpoint, you can call the endpoint with the variable name assigned to a credential, and the credential will be pulled out of the vault and assigned to that variable temporarily, like a one-time token, for use inside your pipeline or applications. So if the scenario stays within Kubernetes, you can go with Secrets inside Kubernetes, which will store all the credential information; if the credentials need to be used across regions, the recommended way is an external tool like a vault or 1Password.
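As a sketch of the Kubernetes Secrets part of this answer: Secret values are stored base64-encoded under the manifest's `data` field, and the helper below builds such a manifest as a plain dict. The secret name and credential values are examples only, and in practice you would create this via `kubectl` or a client library rather than by hand:

```python
import base64

# Build a Kubernetes Secret manifest. Kubernetes stores each value in
# "data" base64-encoded (an encoding, not encryption, hence the advice
# to reach for an external vault for anything cross-region or long-lived).
def secret_manifest(name: str, values: dict[str, str]) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": "Opaque",
        "data": {
            key: base64.b64encode(value.encode()).decode()
            for key, value in values.items()
        },
    }

manifest = secret_manifest("db-credentials", {"username": "app", "password": "s3cret"})
print(manifest["data"]["username"])  # 'YXBw'
```

Pods then consume the Secret as environment variables or mounted files, keeping the raw credentials out of images and pipeline logs.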