Vetted Talent

Saurabh Relan

Dedicated Cloud Engineer with years of experience steering transformative DevOps initiatives and IT operational excellence. Seeking challenging roles that leverage expertise in automation, CI/CD, Oracle, and cloud to optimize software development and enhance operational efficiency. Committed to delivering scalable, secure, and reliable solutions that align with the company's strategic objectives.
  • Role

    Cloud Engineer

  • Years of Experience

    15 years

Skillsets

  • Incident Management
  • Technical Documentation
  • Team Leadership
  • System Optimization
  • Solution Design
  • Security Management
  • Root Cause Analysis
  • Risk Assessment
  • Performance Monitoring
  • Infrastructure as Code
  • CI/CD
  • Containerization
  • Cloud Solutions Architecture
  • Capacity Planning
  • Business Continuity Planning
  • Agile Methodologies
  • Oracle
  • Cloud Infrastructure
  • Automation
  • DevOps

Vetted For

15 Skills
  • Staff Software Engineer (SRE) AI Screening
  • Score: 31/100
  • Skills assessed: Ansible, ArgoCD, BuildKite, Chef, CircleCI, Puppet, Spinnaker, DevOps, SRE, Terraform, AWS, Docker, Jenkins, Kubernetes, System Design

Professional Summary

15 Years
  • May, 2023 - Present · 2 yr 7 months

    Cloud Engineer

    Numerix
  • Apr, 2018 - May, 2023 · 5 yr 1 month

    Cloud Engineer

    Genpact
  • Jun, 2016 - Apr, 2018 · 1 yr 10 months

    Senior Production Support Engineer

    Epsilon
  • Nov, 2013 - Jun, 2016 · 2 yr 7 months

    Application Administrator

    Mphasis
  • Jun, 2010 - Oct, 2013 · 3 yr 4 months

    Application Administrator

    ERATE (Sprint Nextel In-House Billing System)
  • Jul, 2009 - Jan, 2010 · 6 months

    Implementation Engineer

    Iris Unified Technology

Applications & Tools Known

  • AWS Lambda
  • DevOps
  • Cloud Computing
  • SQL
  • Unix
  • PL/SQL
  • Oracle
  • Java
  • JavaScript
  • C++
  • Terraform

Work History

15 Years

Cloud Engineer

Numerix
May, 2023 - Present · 2 yr 7 months
    Designed cloud infrastructure, automated deployment processes using Infrastructure as Code (IAC), implemented logging and monitoring solutions, managed build and integration workflows, optimized system functionality and resource utilization, and ensured network performance.

Cloud Engineer

Genpact
Apr, 2018 - May, 2023 · 5 yr 1 month
    Collaborated on cloud platform solutions, resolved technical issues, migrated applications to cloud, developed cloud deployment strategies, implemented best practices in cloud integration, and optimized resource allocations.

Senior Production Support Engineer

Epsilon
Jun, 2016 - Apr, 2018 · 1 yr 10 months
    Maintained cloud platforms, provided support for production environments, monitored job scheduling, and optimized workflows ensuring availability and performance.

Application Administrator

Mphasis
Nov, 2013 - Jun, 2016 · 2 yr 7 months
    Administered AWS-based applications, conducted cloud monitoring, optimized system performance, and migrated workloads seamlessly.

Application Administrator

ERATE (Sprint Nextel In-House Billing System)
Jun, 2010 - Oct, 2013 · 3 yr 4 months
    Developed and maintained billing systems for schools and libraries, monitored production operations, and optimized database configurations.

Implementation Engineer

Iris Unified Technology
Jul, 2009 - Jan, 2010 · 6 months
    Enabled feature implementations, provided client support, and configured security solutions including antivirus and firewalls.

Major Projects

1 Project

Introducing a component in Test automation for new brands

    Designed automation components during transformation programs for new brands.

Education

  • M.Sc. (Computer Science & Engineering)

    Christ University, Bengaluru (2013)
  • B.Tech. (Computer Science)

    BMIET, Maharishi Dayanand University (2009)

Certifications

  • 2022: AWS Certified Solutions Architect - Associate

  • AMCAT Certified Software Development Trainee

    Aspiring Minds
  • AMCAT Certified Software Engineer - IT Services

    Aspiring Minds

AI-interview Questions & Answers

Hi, I'm Saurabh. I've held multiple cloud roles. I worked on the development and testing of a SaaS application, and on logging and monitoring for an AWS application for a healthcare provider: there we used Kubernetes monitoring with services like Prometheus, Grafana, and Secrets Manager to manage the logging and routing solution, and I handled the integration of the application with the cloud for its logging and monitoring. Apart from that, I worked in operations on a portfolio of cloud applications for a dominant US financial client, managing their workloads, which ran on AWS, primarily on EC2 and EMR clusters processing their day-to-day operations. I have also worked as an application support engineer for a couple of accounts, taking care of day-to-day activities such as performance tuning, deployments, and other support work across various systems. And I have worked as an infrastructure engineer in a data center, deploying various applications and servers and taking care of antivirus and related security. So that's a brief overview of what I have done.

To scale up an application on Kubernetes horizontally, we can add more pods by increasing the replica count in the YAML file; that increases the number of instances of the service that can run. We can also add more nodes to the cluster. That is horizontal scaling. If instead we increase the memory and other resources, like CPU, of the instance type being used in the Kubernetes cluster, so that the same node gets more processing capability, that is vertical scaling. So for horizontal scaling we add more replicas and nodes in the YAML definition, which lets more pods run and increases the capacity of the Kubernetes cluster. When the load grows and we want scalability, we can configure auto scaling based on various parameters; with the help of the Horizontal Pod Autoscaler, Kubernetes can scale automatically.
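The horizontal auto scaling described above is typically declared as a HorizontalPodAutoscaler manifest. A minimal sketch, assuming a Deployment named `web-app` and an illustrative CPU threshold (neither is from the source):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa            # hypothetical name
spec:
  scaleTargetRef:              # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web-app              # hypothetical Deployment
  minReplicas: 2               # floor for scale-in
  maxReplicas: 10              # ceiling for scale-out
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

Applying this lets the control plane adjust the replica count between 2 and 10 automatically, which is exactly the "add more pods under load" behavior described.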

For optimizing Golang programs, we first need to see what kind of application it is and what resources it uses, whether storage (PVCs), compute, and so on. We can then look at the kubelet, which controls and maintains the set of pods on a node: it watches for pod specs through the Kubernetes API server, helps manage the pod lifecycle, runs on each node, and enables communication between the control plane and the worker nodes. Based on the feedback received from the kubelets, we can see what automation can be done. If the workload requires more compute, we can change the node type; for example, on AWS we could move to an m4.xlarge to increase the CPU and memory available to the node.
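Before resizing nodes, the workload's needs are usually declared on the pod itself, which is also what the kubelet enforces. A sketch of requests and limits on a hypothetical Go worker (names, image, and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: go-worker                    # hypothetical pod
spec:
  containers:
    - name: worker
      image: example/go-worker:1.0   # placeholder image
      resources:
        requests:                    # what the scheduler reserves on a node
          cpu: "250m"
          memory: "256Mi"
        limits:                      # hard caps enforced by the kubelet
          cpu: "1"
          memory: "512Mi"
```

Comparing observed usage against these requests is what tells you whether a bigger node type (the m4.xlarge example above) is actually needed.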

In Terraform, we provision infrastructure as code. To handle a changing environment, we can set up auto scaling: based on the load, we provision more nodes, and when the load decreases, we decommission nodes and give the capacity back. That is one way of doing it, auto scaling based on the dynamic environment. In addition, we can use an ALB (Application Load Balancer) to balance the load across the different nodes in use.
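A minimal Terraform sketch of that idea, scaling an auto scaling group on average CPU. The AMI, sizes, and the `var.subnet_ids` variable are placeholders, not from the source:

```hcl
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = "ami-0123456789abcdef0"   # placeholder AMI
  instance_type = "t3.medium"
}

resource "aws_autoscaling_group" "app" {
  desired_capacity    = 2
  min_size            = 1
  max_size            = 6                   # scale out under load
  vpc_zone_identifier = var.subnet_ids      # assumed variable

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}

# Target-tracking policy: keep average CPU near 60%, adding or
# removing instances as load changes.
resource "aws_autoscaling_policy" "cpu" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.app.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60.0
  }
}
```

An ALB would then be attached to the group via a target group so traffic is spread across whatever nodes currently exist.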

Can you detail the process of containerizing an existing Golang application for consistent development and deployment? Yes. We can use Jenkins CI jobs to pull the code from the repository, which can be GitLab, Bitbucket, or SVN. Then we build it using Maven, and with SonarQube we do the static code analysis. Next we build the Docker image and push it to the registry. After that, we can use Trivy to scan the Docker image, and then Argo CD to deploy it onto the EKS cluster. In that way we get consistent development and deployment using a CI/CD pipeline with Jenkins. That's a simple example of how we can dockerize the application and have consistent development.
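The stages just listed might be sketched as a declarative Jenkinsfile. The repository URL and registry name are placeholders; the tool order follows the answer (build, analyze, image, scan), with Argo CD handling the actual deployment out-of-band:

```groovy
pipeline {
  agent any
  stages {
    stage('Checkout') {
      steps { git url: 'https://gitlab.example.com/team/app.git' }  // placeholder repo
    }
    stage('Build') {
      steps { sh 'mvn -B clean package' }
    }
    stage('Static Analysis') {
      steps { sh 'mvn sonar:sonar' }                                // SonarQube scan
    }
    stage('Docker Build & Push') {
      steps {
        sh 'docker build -t registry.example.com/app:${BUILD_NUMBER} .'
        sh 'docker push registry.example.com/app:${BUILD_NUMBER}'
      }
    }
    stage('Image Scan') {
      steps { sh 'trivy image registry.example.com/app:${BUILD_NUMBER}' }
    }
    // Deployment itself is done by Argo CD watching the manifest repo,
    // so no deploy stage is needed here.
  }
}
```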

Regarding the t2.micro instance: it may happen that capacity is unavailable, so we are not able to provision a t2.micro instance and the pipeline would fail. That's the problem here. We have to give some other instance type as an option, so that in case t2.micro is not available, we can use an alternate.
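One way to express that fallback in Terraform is a mixed instances policy on the auto scaling group, so the ASG can substitute a comparable type when t2.micro capacity is unavailable. A sketch, assuming a launch template `aws_launch_template.web` and a `var.subnet_ids` variable exist:

```hcl
resource "aws_autoscaling_group" "web" {
  min_size            = 1
  max_size            = 3
  vpc_zone_identifier = var.subnet_ids              # assumed variable

  mixed_instances_policy {
    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.web.id  # assumed to exist
        version            = "$Latest"
      }
      # Listed in order of preference; the ASG falls through on capacity errors.
      override { instance_type = "t2.micro" }   # preferred
      override { instance_type = "t3.micro" }   # fallback 1
      override { instance_type = "t3a.micro" }  # fallback 2
    }
  }
}
```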

If Kubernetes pods exhibit intermittent failures, we can use monitoring of the cluster. We look at why a pod is failing, whether it is an ImagePullBackOff error, for example; we check the logs and identify why it is failing. It could be because of resource provisioning, the underlying hardware, or node availability. We then have to rectify the underlying issue and work on it. Basically, we need to identify from the logs the root cause of why the issue is happening, and based on that take action. We can use a command like kubectl logs <pod-name> -n <namespace>, which gives the logs for that particular pod and shows what is happening in it. We can also look at the graph of errors occurring on that particular node and see why they happen. Once we identify the reason and address it, that resolves the underlying issue when we are facing intermittent failures.

When migrating a monolithic system to microservices, how do you ensure minimal disruption?

The blue-green method helps in Kubernetes to ensure minimal disruption. Say there's a new version of an application available. We update a minimum set of nodes first; say 10% are updated to the newer version, which is served to a subset of users, who can test it and give feedback. We then have two versions running simultaneously in the production environment. Based on the feedback received, we either update the rest of the nodes or roll them back, in case there is an issue with the new deployment. That helps achieve a degree of resilience and early detection of issues from users, while ensuring we don't deploy all the code to the new version at once.
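A common way to realize this in Kubernetes is a Service whose selector is flipped between two side-by-side Deployments. The names and labels here are illustrative, not from the source:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app                # hypothetical service name
spec:
  selector:
    app: myapp
    version: blue          # change to "green" to cut traffic to the new release
  ports:
    - port: 80
      targetPort: 8080
```

Two Deployments labeled `version: blue` and `version: green` run simultaneously; rollback is just editing the selector back to `blue`, which matches the "two versions running at once, roll back if needed" description above.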

By using serverless, we can scale cloud-based resources up and down; for example, we can use Lambda functions for various activities, and we can even use Lambda for provisioning resources. Based on the resource requirement and the load, it can scale up and scale down, which makes a web application easy to manage. The disadvantages: you give up some control, and building a serverless application takes time, so it can be time consuming.
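As a sketch of the Lambda idea above: a minimal handler that Lambda scales automatically per invocation. The event shape, handler name, and greeting logic are assumptions for illustration, not from a real deployment:

```python
import json


def lambda_handler(event, context):
    """Minimal AWS Lambda handler sketch: returns a JSON greeting.

    `event` is assumed to be a dict like {"name": "..."}; `context`
    is the Lambda runtime context object (unused here).
    """
    # Fall back to a default when the caller omits "name".
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

Each concurrent request gets its own invocation, so scaling up and down is handled by the platform and you pay per call rather than for idle servers, which is the trade-off described above.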