
Cloud Engineer
Numerix - Cloud Engineer
Genpact - Senior Production Support Engineer
Epsilon - Implementation Engineer
Iris Unified Technology - Application Administrator
ERATE (Sprint Nextel In-House Billing System) - Application Administrator
Mphasis
AWS Lambda

DevOps

Cloud Computing

SQL

Unix

PL/SQL

Oracle

Java

JavaScript
C++
Terraform
Hi, I'm Saurabh. I've held multiple cloud roles. I worked on the development and testing of a SaaS application. Apart from that, I handled logging and monitoring for an AWS application for a healthcare provider; there we used Kubernetes monitoring with services like Prometheus, Grafana, and Secrets Manager to manage the logging and routing solution, and I worked on integrating the application with the cloud for its logging and monitoring. I also worked in operations on a portfolio of cloud applications for a dominant US financial client, managing their workloads, which ran on AWS primarily across a couple of EC2 and EMR clusters processing their day-to-day operations. I served as an application support engineer for a couple of accounts, taking care of day-to-day activities such as performance tuning, deployments, and other support work across various systems. Apart from that, I worked as an infrastructure engineer in a data center, deploying various applications and servers and managing antivirus and related tooling. So that's a brief overview of what I have done.
If you want to scale an application on Kubernetes horizontally, we can add more pods by increasing the replica count in the YAML file; that increases the number of pod instances serving the workload, and adding more nodes also scales horizontally. Vertical scaling means increasing the memory, CPU, and other resources of the instance type used in the Kubernetes cluster, so the same node gets more processing capability. So: increasing the number of replicas or nodes through YAML is horizontal scaling; it lets more pods run and raises the capacity of the cluster. When we want scalability because the load is higher, we can configure autoscaling based on various parameters; for example, the Horizontal Pod Autoscaler (HPA) can drive the scaling for Kubernetes.
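The two scaling directions and the HPA described above can be sketched in a single manifest. This is a minimal, hypothetical example: the Deployment name, image, and thresholds are illustrative, not from the original answer. `replicas` is the horizontal knob, the `resources` block is the vertical knob, and the HPA adjusts replicas automatically on CPU utilization.

```yaml
# Hypothetical Deployment: horizontal scale via replicas,
# vertical scale via the resources block.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # horizontal scaling: more pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        resources:            # vertical scaling: bigger pods
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
---
# HPA: adjusts the replica count automatically on CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```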
For optimizing Go programs on Kubernetes, we first need to see what kind of application it is and which resources it uses: whether it uses PVCs, compute, and so on. Based on that, we can look at the kubelet, which controls and maintains the set of pods and watches for pods through the Kubernetes API server. It helps preserve the pod lifecycle; a kubelet runs on each node and enables communication between the control plane and the worker nodes. Based on the feedback received from the kubelets, we can see what automation can be done. If the workload requires more compute, we can change the node type; for example, on AWS we can use m4.xlarge to increase the capacity and resources of the node being used.
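Before changing the node type, the usage data the answer alludes to can be gathered from the cluster itself. A hypothetical inspection sequence (pod, node, and namespace names are placeholders) might look like:

```shell
# Check actual consumption before resizing anything
kubectl top nodes                        # node-level CPU/memory usage
kubectl top pod my-app -n prod           # pod-level usage
kubectl describe node ip-10-0-1-23       # allocatable vs requested resources
kubectl get pvc -n prod                  # persistent volume claims in play
```

If the node's allocatable CPU or memory is consistently exhausted, that is the signal to move to a larger instance type such as m4.xlarge.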
Basically, in Terraform we provision infrastructure as code. Based on the changing environment we can do auto scaling: when the load increases we provision more nodes, and when the load decreases we give back and decommission those nodes. That's one way of doing it; based on the dynamic environment we can auto scale. We can also use an ALB (Application Load Balancer) to balance the load across the different nodes in use.
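A minimal Terraform sketch of that pattern, assuming hypothetical names and an existing `aws_lb_target_group.app` and `var.subnet_ids` (the AMI ID is a placeholder): an Auto Scaling Group attached to an ALB target group, with a target-tracking policy that grows and shrinks the node count with load.

```hcl
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = "ami-0123456789abcdef0"   # placeholder AMI
  instance_type = "t3.micro"
}

resource "aws_autoscaling_group" "app" {
  min_size            = 2
  max_size            = 10
  desired_capacity    = 2
  vpc_zone_identifier = var.subnet_ids                 # assumed variable
  target_group_arns   = [aws_lb_target_group.app.arn]  # assumed ALB target group

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}

# Scale out when average CPU across the group exceeds 70%,
# and scale back in when load drops
resource "aws_autoscaling_policy" "cpu" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.app.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 70
  }
}
```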
Can you detail the process of dockerizing an existing Go application for consistent development and deployment? Yes. We can use Jenkins CI jobs to pull the code from the repository, which can be GitLab, Bitbucket, or SVN. Then we build it using Maven, and with SonarQube we do static code analysis. Next we build the Docker image and push it to the registry, then use Trivy to scan the Docker image. After that, we can use Argo CD to deploy it on the EKS cluster. In that way we achieve consistent development and deployment through a CI/CD pipeline using Jenkins. That's a simple example where we dockerize the application and get a consistent delivery process.
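The stages listed above can be sketched as a declarative Jenkinsfile. This is a hypothetical skeleton: the repository URL, registry host, and stage wiring are assumptions, and the SonarQube and Trivy steps presume those tools are already configured on the agent.

```groovy
// Hypothetical Jenkinsfile for the pipeline described above
pipeline {
  agent any
  stages {
    stage('Checkout') { steps { git url: 'https://gitlab.example.com/team/app.git' } }
    stage('Build')    { steps { sh 'mvn -B clean package' } }
    stage('Analyze')  { steps { sh 'mvn sonar:sonar' } }   // SonarQube static analysis
    stage('Image')    { steps { sh 'docker build -t registry.example.com/app:${BUILD_NUMBER} .' } }
    stage('Scan')     { steps { sh 'trivy image registry.example.com/app:${BUILD_NUMBER}' } }
    stage('Push')     { steps { sh 'docker push registry.example.com/app:${BUILD_NUMBER}' } }
    // Argo CD then syncs the updated manifest to the EKS cluster
  }
}
```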
Take the instance type t2.micro: it may happen that capacity is unavailable and we are not able to provision a t2.micro instance, so the pipeline would fail. That's the problem here. We have to give some other instance type as an option, so that in case t2.micro is not available, we can use an alternate.
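One way to express that fallback in Terraform is a mixed-instances policy on the Auto Scaling Group, so a t2.micro capacity shortage does not fail provisioning. This is a hypothetical sketch assuming a launch template `aws_launch_template.app` and a `var.subnet_ids` variable already exist.

```hcl
resource "aws_autoscaling_group" "app" {
  min_size            = 1
  max_size            = 3
  desired_capacity    = 1
  vpc_zone_identifier = var.subnet_ids   # assumed variable

  mixed_instances_policy {
    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.app.id  # assumed to exist
        version            = "$Latest"
      }
      override { instance_type = "t2.micro" }   # preferred
      override { instance_type = "t3.micro" }   # first fallback
      override { instance_type = "t3a.micro" }  # second fallback
    }
  }
}
```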
If Kubernetes pods exhibit intermittent failures, we can use monitoring of the cluster. We'll see why a pod is failing, for example whether it is an ImagePullBackOff error. We'll check the logs and identify why it is failing; it could be because of resource provisioning, the underlying hardware, or node availability. We have to rectify the underlying issue: basically, we need to identify the logs and the root cause of the problem and take action based on that. We can use a command like kubectl logs <pod-name> -n <namespace>, which gives the logs for that particular pod and shows what is happening in it. We can also look at graphs of the errors occurring on a particular node and see why they are happening. Once we identify the reason and address it, that resolves the underlying issue when we face intermittent failures.
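A hypothetical triage sequence for a flapping pod (pod and namespace names are placeholders) might run:

```shell
kubectl get pods -n prod                      # spot CrashLoopBackOff / ImagePullBackOff
kubectl describe pod my-app-7d4f9 -n prod     # events: scheduling, image pulls, OOMKills
kubectl logs my-app-7d4f9 -n prod --previous  # logs from the last crashed container
kubectl get events -n prod --sort-by=.lastTimestamp
kubectl top pod my-app-7d4f9 -n prod          # resource pressure on the pod
```

The `--previous` flag matters for intermittent crashes, since the current container's logs may not show the failure that restarted it.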
When migrating a monolithic system to microservices, we have a general service mesh.
The blue-green method helps in Kubernetes to ensure minimal disruption. Let's say there's a new version of an application available. We update a minimum set of nodes, say 10%, to the newer version, which is served to a subset of users who can test it and give feedback. We'll have two versions simultaneously running in the production environment. Based on the feedback received, we can update the rest of the nodes, or we can roll back in case there is an issue with the new deployment. That helps achieve some resilience and early detection of issues from users, while at the same time ensuring we don't deploy all the code to the new version at once.
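In Kubernetes, running both versions side by side is commonly done with two Deployments and one Service, where flipping the Service's label selector cuts traffic over (and flipping it back is the rollback). A minimal, hypothetical sketch; the app name, images, and ports are assumptions:

```yaml
# Flipping track: blue -> track: green switches all traffic to v2
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    track: blue          # change to "green" to cut over
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-blue       # current version, keeps serving until cutover
spec:
  replicas: 3
  selector:
    matchLabels: { app: myapp, track: blue }
  template:
    metadata:
      labels: { app: myapp, track: blue }
    spec:
      containers:
      - name: app
        image: myapp:1.0
        ports: [ { containerPort: 8080 } ]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-green      # new version, warmed up before cutover
spec:
  replicas: 3
  selector:
    matchLabels: { app: myapp, track: green }
  template:
    metadata:
      labels: { app: myapp, track: green }
    spec:
      containers:
      - name: app
        image: myapp:2.0
        ports: [ { containerPort: 8080 } ]
```

Routing only a small percentage of users to the new version, as the answer describes, is the canary variant of this idea; strict blue-green switches all traffic at once.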
By using serverless, we can scale cloud-based resources up and down; for example, we can use Lambda functions for various activities, and we can even use Lambda for provisioning resources. Based on the resource requirements, it scales up and down automatically, which makes web applications easy to manage: as the load changes, it scales with it. The disadvantage is that you give up some control, and building a serverless application takes time; it can be quite time consuming.
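To make the scaling point concrete: a Lambda function is just a handler, and AWS runs as many concurrent copies as the load demands, with no autoscaling group to manage. A minimal sketch, assuming an API Gateway-style event (the function and field names here are illustrative, not from the original answer):

```python
# Hypothetical minimal AWS Lambda handler for a web endpoint.
# AWS invokes handler(event, context); concurrency scaling is managed
# by the platform, so no servers or scaling policies are configured.
import json

def handler(event, context):
    """Echo a greeting; `event` carries the API Gateway request."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Locally the handler can be exercised by calling it with a dict event, e.g. `handler({"queryStringParameters": {"name": "Saurabh"}}, None)`.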