Tejas Meshram

Vetted Talent

Eight years of development and DevOps expertise with a focus on automation and containerization. Skills include provisioning cloud infrastructure through automation, orchestrating containers, programming, scripting, and developing pipelines for infrastructure and application deployment.

Able and ready to learn and implement new technologies quickly on the job, and experienced in coordinating effectively with team members.


Certified Kubernetes Administrator.

  • Role

    Automation Developer

  • Years of Experience

    8 years

Skillsets

  • Kubernetes - 3 Years
  • Terraform - 2 Years
  • Docker - 3 Years
  • AWS - 1 Year
  • Google Cloud (GCP)
  • GitHub Actions
  • Groovy
  • Java
  • Jenkins
  • Kong
  • MuleSoft
  • FluxCD
  • Bash scripting

Vetted For

14 Skills
  • Role: Senior Kubernetes Support Engineer (Remote) - AI Screening
  • Result: 71%
  • Skills assessed: CI/CD Pipelines, Excellent Problem-Solving Skills, Kubernetes Architecture, Strong Communication Skills, Ansible, Azure Kubernetes Service, Grafana, Prometheus, Tanzu, Tanzu Kubernetes Grid, Terraform, Azure, Docker, Kubernetes
  • Score: 64/90

Professional Summary

8 Years
  • Aug, 2023 - Present (2 yr 4 months)

    Senior Software Engineer

    Motorola
  • May, 2022 - Jul, 2023 (1 yr 2 months)

    Cloud DevOps Engineer

    Priceline
  • Jun, 2019 - May, 2022 (2 yr 11 months)

    Senior DevOps Engineer

    HSBC
  • Jul, 2016 - May, 2019 (2 yr 10 months)

    Software Engineer

    Wipro Technologies

Applications & Tools Known

  • GitHub
  • Bitbucket
  • Confluence
  • Nexus
  • Eclipse
  • IntelliJ IDEA
  • VS Code

Work History

8 Years

Senior Software Engineer

Motorola
Aug, 2023 - Present (2 yr 4 months)
    Write Helm charts for new containerized applications. Manage deployment pipelines in Bitbucket to deploy the Helm charts to an EKS-managed Kubernetes cluster. Automate various aspects of the platform using Bash scripting and Java.

Cloud DevOps Engineer

Priceline
May, 2022 - Jul, 2023 (1 yr 2 months)
    Designed and developed GitHub Actions workflows to manage GCS buckets and IAM roles in GCP. Developed a CI/CD pipeline, via GitHub Actions, for deploying AWS infrastructure changes written in Terraform. Managed the entire GCP infrastructure using Terraform, and managed Kubernetes clusters following the GitOps model using FluxCD.

Senior DevOps Engineer

HSBC
Jun, 2019 - May, 2022 (2 yr 11 months)
    Deployment Automation: Automated MuleSoft API deployments via a CI/CD Jenkins pipeline using REST APIs; the pipeline is used by hundreds of developers to host their APIs on the HSBC API Platform. Integrated Sonar, Checkmarx, and Sonatype Nexus IQ scans into the pipeline.
    Kong Gateway Infrastructure Automation (GCP): Set up a Compute Engine VM for managing all other infrastructure components in an automated fashion. Created a Kubernetes cluster for managing Kong API Gateway workloads. Pushed Docker images from a private repository to Google Container Registry. Created Google Cloud Storage buckets for storing automation metadata. Created a PostgreSQL cloud instance, database, and users, managed by Cloud SQL. Used Helm charts with custom values injected into them to create Kubernetes services, ingresses, and deployments.
    Software Onboarding to HSBC: Brought several Kong-developed software products into HSBC for global use, following the required governance.

Software Engineer

Wipro Technologies
Jul, 2016 - May, 2019 (2 yr 10 months)
    Smallcell System Management: A network management system for managing small-cell devices in a telecom network via FCAPS operations. Developed and implemented a device-configuration feature called Policy Manager, wherein a policy is applied to devices when an event occurs on the device.

Education

  • Bachelor of Technology (B.Tech), Computer Engineering

    VJTI, Mumbai (2016)

Certifications

  • Certified Kubernetes Administrator (CKA)

    Cloud Native Computing Foundation (CNCF) (Nov, 2022)
    Credential ID : LF-kddrjmyw1h

AI-interview Questions & Answers

Hello. My name is Tejas Meshram. I completed my computer engineering degree at VJTI College in Mumbai, and I started working at Wipro as a software engineer. I worked there on multiple projects; an important one was for the client Samsung, where I worked as a Java developer, and that is where I spent most of my career at Wipro, my first company. Then I moved to HSBC Bank in Pune. There I started off as a Java developer but eventually got into DevOps and started writing pipelines in Jenkins to automate the API deployments on the platform our team owned. After some time with that work, I started getting into cloud, and on the job I learned the technologies to set up infrastructure, mostly cloud infrastructure. I created VMs, SQL instances, and a Kubernetes cluster, and wrote Helm charts as part of setting up the Kong API manager on GCP. After that, I worked briefly at Priceline in Mumbai, where again I was responsible for managing Kubernetes clusters in production. Everything was configured via code there, so I heavily used Terraform in that project. For Kubernetes workloads, my team and I used FluxCD to manage the changes that needed to be pushed into the Kubernetes cluster, following a GitOps approach: if you have a change to make in a Kubernetes cluster or in a namespace, you just make that change in a Git repo. That repo is configured with a Flux agent deployed on the cluster, and at a configurable interval Flux keeps checking whether there is any change in GitHub; whenever it finds a change, it pulls that change into the cluster and deploys it. I also wrote a pipeline for some of the AWS infrastructure in that project. I am now working at Motorola, where I mostly write Helm charts and manage Bitbucket pipelines. Our Kubernetes cluster is in AWS on EKS, and I heavily use Bash scripting here to automate a lot of things. Thank you.

How do you configure horizontal pod autoscaling in Kubernetes based on custom metrics? Horizontal pod autoscaling is a concept whereby the Kubernetes cluster can spin up multiple pods if required, based on the load that the pod or the deployment is getting. Configuring horizontal pod autoscaling can be done by enabling the horizontal pod autoscaler first. I don't know exactly, but there should be a HorizontalPodAutoscaler field in some of the configuration, in the manifest that we create for a deployment.
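
A minimal sketch of such a manifest, assuming a metrics adapter (for example, the Prometheus Adapter) is installed to serve the custom metric through the custom.metrics.k8s.io API; the names, metric, and target values are illustrative:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa                            # illustrative name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web                              # the deployment to scale
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Pods
        pods:
          metric:
            name: http_requests_per_second     # custom metric exposed via the adapter
          target:
            type: AverageValue
            averageValue: "100"                # add pods when the per-pod average exceeds 100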

In Kubernetes, what is a namespace, and how does it help in cluster management? A namespace is a segregation of workloads, and it can be used for a variety of purposes. An example could be having a namespace per team where a department shares a single cluster. This would allow those teams to work independently and not step on other teams' workloads unless they want to, for which they can use other Kubernetes concepts like ingress, egress, and network policies. We can also set resource limits on a namespace, which helps by allowing pods to utilize only the maximum amount of resources configured for the namespace; that also helps avoid killing pods in some other namespace. So this helps segregate workloads into namespaces, and it is how we can manage the resources of our cluster as well.
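
A minimal sketch of that per-team setup; the namespace name and quota numbers are illustrative:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-a                  # one namespace per team sharing the cluster
    ---
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a
    spec:
      hard:
        requests.cpu: "4"           # total CPU the team's pods may request
        requests.memory: 8Gi
        limits.cpu: "8"             # hard ceiling so the team cannot starve others
        limits.memory: 16Gi
        pods: "20"                  # cap on pod count in the namespace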

When setting up a CI/CD pipeline for a Kubernetes application, what key components and stages would you include? The first stage would be to clone the repo, assuming the application is stored in some kind of Git repository. Since this is a Kubernetes application, I am assuming there would be a Dockerfile in that application containing the code for building its image, so the second stage in the CI/CD pipeline would be building the image from that Dockerfile using the docker build command. The next stage would be to push that image to a remote repository, depending on which remote repository service the organization uses. The fourth stage, assuming we have manifest templates for Kubernetes resources, would be to inject the Docker image along with its tag into the template that we have for a pod, for example, so it contains the image specification with the Docker image we built two stages earlier. Now the pod configuration is ready, so the next stage would be to perform the deployment of that pod in the Kubernetes cluster, which can be done through kubectl or through Helm commands, depending on what type of templating is used in that application or organization. Once the application is deployed, appropriate checks should verify that it is up and running, using liveness probes, etcetera; that could be the last stage of the pipeline.
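
A minimal sketch of those stages as a GitHub Actions workflow (one of the tools listed in this profile); the registry URL, secret names, and chart path are placeholders, and the deploy steps assume the runner holds kubeconfig credentials for the target cluster:

    name: build-and-deploy
    on:
      push:
        branches: [ main ]
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4                    # stage 1: clone the repo
          - name: Build image                            # stage 2: build from the Dockerfile
            run: docker build -t registry.example.com/app:${{ github.sha }} .
          - name: Push image                             # stage 3: push to the remote registry
            env:
              REGISTRY_USER: ${{ secrets.REGISTRY_USER }}
              REGISTRY_PASS: ${{ secrets.REGISTRY_PASS }}
            run: |
              echo "$REGISTRY_PASS" | docker login registry.example.com -u "$REGISTRY_USER" --password-stdin
              docker push registry.example.com/app:${{ github.sha }}
          - name: Deploy via Helm                        # stages 4-5: inject the tag and deploy
            run: helm upgrade --install app ./chart --set image.tag=${{ github.sha }}
          - name: Verify rollout                         # final stage: health check
            run: kubectl rollout status deployment/app --timeout=120s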

How would you go about exposing a Kubernetes service to the Internet? I am assuming that in the Kubernetes cluster we have an ingress controller through which a request comes in; it hits the Kubernetes service, and the service then hits the appropriate pod that the request is meant for. That covers the ingress, or incoming, traffic, and the question is about exposing a Kubernetes service to the Internet. An ingress controller is one of the ways we can configure this, and one of the popular ingress controllers, which I have used, is the NGINX Ingress Controller. We can configure it to allow requests from the Internet, and once a request hits the ingress controller, we can configure the appropriate ingresses based on the path the request is making. That ingress will point to the Kubernetes service, and based on the selectors and the labels, the request will be redirected to the appropriate pods in Kubernetes.
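
A minimal sketch of such an ingress, assuming the NGINX Ingress Controller is installed and a service named web already exists; the hostname is a placeholder:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress
    spec:
      ingressClassName: nginx          # route through the NGINX ingress controller
      rules:
      - host: app.example.com          # placeholder public hostname
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web              # the service being exposed
                port:
                  number: 80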

What do you need to consider when creating a persistent volume claim in Kubernetes? While creating a persistent volume claim, we need to know the storage class of that volume. We need to know what kind of access mode we want; that could be read-write, for example. And we also need to know the amount of storage this claim is going to need, in terms of gigabytes or megabytes. These three configuration items should minimally be known to create a persistent volume claim.
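
A minimal sketch showing those three items; the claim name, class name, and size are illustrative, and the storage class must exist in the cluster:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-pvc
    spec:
      storageClassName: standard       # 1. storage class (cluster-specific)
      accessModes:
        - ReadWriteOnce                # 2. access mode (single-node read-write)
      resources:
        requests:
          storage: 10Gi                # 3. requested capacity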

Can you explain the process of scaling a deployment in AKS, and what metrics would you consider during scaling? I am not particularly familiar with Azure Kubernetes Service, but speaking roughly about the general concept of scaling a deployment in any Kubernetes cluster: we could modify the configuration of a deployment by updating its replicas field to the desired number of replicas we want for that deployment. Another way is to use kubectl to perform the same activity. As for the metrics to consider during scaling, those would be the resources required for the deployment, and whether any autoscaling is already enabled, in which case we may not need to scale the deployment manually.
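
A minimal sketch of both routes, with an illustrative deployment named web; the resource requests matter because the scheduler and any autoscaler key off them:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                      # illustrative name
    spec:
      replicas: 5                    # edit and re-apply to scale declaratively, or run:
                                     #   kubectl scale deployment/web --replicas=5
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.25
            resources:
              requests:              # requests drive scheduling and autoscaling decisions
                cpu: 100m
                memory: 128Mi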

What considerations are important when configuring network policies in Kubernetes for microservice architectures? We have to consider whether the pods in a particular namespace should be exposed to another namespace, or whether requests should be allowed to come into that particular namespace, whichever namespace is being configured. We can do that by specifying the ingress and egress configurations in the network policy, and we can configure them to either allow the traffic or deny it based on the namespaces and the labels, among other options that can be used to filter the traffic.
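
A minimal sketch of such a policy; the namespace and label names are purely illustrative. It admits ingress to backend pods only from frontend pods in one named namespace, and it requires a CNI plugin that enforces network policies:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: backend-allow-frontend
      namespace: team-a
    spec:
      podSelector:
        matchLabels:
          app: backend               # the pods being protected
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: team-b   # only this namespace...
          podSelector:
            matchLabels:
              app: frontend                         # ...and only its frontend pods
        ports:
        - protocol: TCP
          port: 8080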

How would you handle disaster recovery and backup strategies for stateful applications running on Kubernetes in Azure? The most basic thing that can be done to ensure there is no downtime, or minimal downtime, is to have multiple replicas of a stateful application, and we can configure that in its manifest. It would also help to deploy each of the replicas on a different node by setting the appropriate node selector or node affinity. If we absolutely have to ensure we don't want any downtime, if that is the highest priority, then each of the replicas, or at least some of them, could be deployed on nodes present in different zones or different regions of the Kubernetes cluster. As for the backup strategy, one strategy I can think of is to keep taking backups of etcd, which is the database for the entire Kubernetes cluster, at a regular interval. This can be automated, and we can schedule a pipeline to perform the backup.
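
A minimal sketch of that scheduled etcd backup as an in-cluster CronJob. This assumes a self-managed control plane with etcd certificates at the usual kubeadm paths; managed offerings such as AKS do not expose etcd, so there a volume-snapshot tool like Velero would stand in. The image tag, schedule, and paths are assumptions:

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: etcd-backup
      namespace: kube-system
    spec:
      schedule: "0 2 * * *"                     # daily at 02:00
      jobTemplate:
        spec:
          template:
            spec:
              hostNetwork: true                 # reach etcd on the node's loopback
              restartPolicy: OnFailure
              nodeSelector:
                node-role.kubernetes.io/control-plane: ""
              tolerations:
              - key: node-role.kubernetes.io/control-plane
                effect: NoSchedule
              containers:
              - name: etcd-backup
                image: registry.k8s.io/etcd:3.5.9-0   # assumed tag; ships etcdctl
                command:
                - /bin/sh
                - -c
                - |
                  ETCDCTL_API=3 etcdctl \
                    --endpoints=https://127.0.0.1:2379 \
                    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
                    --cert=/etc/kubernetes/pki/etcd/server.crt \
                    --key=/etc/kubernetes/pki/etcd/server.key \
                    snapshot save /backup/etcd-$(date +%Y%m%d).db
                volumeMounts:
                - name: etcd-certs
                  mountPath: /etc/kubernetes/pki/etcd
                  readOnly: true
                - name: backup
                  mountPath: /backup
              volumes:
              - name: etcd-certs
                hostPath:
                  path: /etc/kubernetes/pki/etcd
              - name: backup
                hostPath:
                  path: /var/backups/etcd       # would normally ship off-node afterwards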

Which logging architecture would you use for a Kubernetes cluster and why? Since I have worked mostly in GCP, I would consider using Fluentd as the mechanism for logging applications in a Kubernetes cluster, since GCP's native services go very well with it, and the Logs Explorer, the logging service in GCP, has a great capability of directly fetching the logs from a Kubernetes pod. So, depending on the cloud I'm using, I would prefer the logging mechanism that is best supported by that cloud's native services.
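
A minimal sketch of the node-agent pattern behind that architecture: a collector (Fluent Bit, Fluentd's lightweight sibling, shown here) runs as a DaemonSet on every node, tails the container log files, and forwards them to the cloud backend. The image tag is an assumption, and the output configuration that would point at Cloud Logging or another sink is omitted:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluent-bit
      namespace: logging
    spec:
      selector:
        matchLabels:
          app: fluent-bit
      template:
        metadata:
          labels:
            app: fluent-bit
        spec:
          containers:
          - name: fluent-bit
            image: fluent/fluent-bit:2.2     # assumed tag
            volumeMounts:
            - name: varlog
              mountPath: /var/log            # container logs live under /var/log/containers
              readOnly: true
          volumes:
          - name: varlog
            hostPath:
              path: /var/log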

How do you approach performance testing for deployments in Kubernetes, and how does it influence capacity planning? There are some third-party services that we can use for performance testing in Kubernetes. The idea behind any such testing would be to make a lot of requests to your application deployed in the Kubernetes cluster, test various aspects of the application, try to make as many requests as possible within a certain period of time, and see how the application behaves: whether the pod autoscales, if that is enabled, and how the application performs when it is load tested or stress tested.
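
A minimal sketch of a crude in-cluster load generator as a Kubernetes Job, assuming a service named web in the default namespace; a dedicated tool such as k6, Locust, or hey would replace the busybox loop in practice:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: load-test
    spec:
      parallelism: 5                 # five workers hitting the service concurrently
      completions: 5
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: loadgen
            image: busybox:1.36
            command:
            - /bin/sh
            - -c
            - |
              # fire requests at the in-cluster service for roughly 60 seconds
              end=$(( $(date +%s) + 60 ))
              while [ "$(date +%s)" -lt "$end" ]; do
                wget -q -O /dev/null http://web.default.svc.cluster.local/ || true
              done

While the Job runs, watching kubectl top pods and the HPA status shows how the workload responds under load, which is the data that feeds capacity planning.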