
Subhendu Sekhar Patro

Vetted Talent

A passionate DevOps Engineer with 4+ years of relevant experience. I aim to leverage my expertise to optimize and automate infrastructure management, streamline CI/CD pipelines, and ensure robust monitoring and security practices. With a particular focus on Kubernetes, I seek to enhance container orchestration, deployment scalability, and cluster management within a dynamic and forward-thinking organization.

  • Role

    AWS DevOps Consultant

  • Years of Experience

    5 years

Skillsets

  • Terraform - 4 Years
  • AWS - 4 Years
  • Git - 5 Years
  • Automation
  • CI/CD
  • Containerization
  • Kubernetes - 4 Years
  • Orchestration
  • Scalability
  • Security practices
  • AWS Tools
  • SAST & DAST tools
  • Build Tools
  • Operating Systems
  • Cluster management
  • Docker - 4 Years

Vetted For

15 Skills

  • Senior Software Engineer, DevOps (AI Screening)
  • Result: 62%
  • Skills assessed: Infrastructure as Code, Terraform, AWS, Azure, Docker, Kubernetes, Embedded Linux, Python, AWS (SageMaker), GCP Vertex, Google Cloud, Kubeflow, ML architectures and lifecycle, Pulumi, Seldon
  • Score: 56/90

Professional Summary

5 Years
  • Feb 2023 - Present (2 yr 10 months)

    AWS DevOps Consultant

    Minfy Technologies
  • Feb 2023 - Present (2 yr 10 months)

    Consultant

    Minfy Technologies
  • Mar 2019 - Apr 2022 (3 yr 1 month)

    DevOps Engineer

    Globallogic
  • Mar 2019 - Apr 2022 (3 yr 1 month)

    Support Engineer

    Globallogic

Applications & Tools Known

  • Docker
  • Kubernetes
  • ECS
  • Jenkins
  • AWS CodePipeline
  • Terraform
  • CloudFormation
  • EC2
  • EBS
  • VPC
  • IAM
  • Lambda
  • API Gateway
  • EKS
  • SQS
  • SNS
  • CodeDeploy
  • S3
  • Grafana
  • Prometheus
  • ELK
  • CloudWatch
  • Dynatrace
  • Rapid7
  • OWASP ZAP
  • Maven
  • Gradle
  • NPM
  • Windows
  • Linux
  • Jira
  • GitHub
  • ServiceNow

Work History

5 Years

AWS DevOps Consultant

Minfy Technologies
Feb 2023 - Present (2 yr 10 months)
    Centralized CI/CD setup with AWS CodePipeline for cross-account deployments. Created infrastructure with CloudFormation templates (Lambda, API Gateway, SQS, SNS, RDS, CodeBuild, CodePipeline, S3, CloudFront, WAF, CloudWatch). Ensured app admins receive an email each time the application produces an ERROR log. Implemented Lambda functions with SQS and API Gateway triggers, and updated Lambda layers with the required modules. Configured API Gateway resources and methods to route traffic to the appropriate backend microservice. Enabled authorization at API Gateway with Cognito so that only users with valid JWT tokens can access the backend application. Created API keys and attached them to usage plans based on customer requirements.
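
For illustration, a minimal sketch of one plausible wiring for the ERROR-log email alerting described above: a CloudWatch Logs subscription filter invokes a Lambda that publishes matching lines to an SNS topic with email subscribers. The topic-ARN environment variable is a placeholder, not the actual setup.

    # Hypothetical sketch: Lambda behind a CloudWatch Logs subscription filter,
    # forwarding ERROR lines to an SNS topic (email subscribers). ALERT_TOPIC_ARN
    # is an assumed environment variable, not a real resource.
    import base64
    import gzip
    import json
    import os

    import boto3

    sns = boto3.client("sns")
    TOPIC_ARN = os.environ["ALERT_TOPIC_ARN"]

    def handler(event, context):
        # CloudWatch Logs delivers a base64-encoded, gzip-compressed payload
        payload = json.loads(
            gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
        )
        errors = [e["message"] for e in payload["logEvents"] if "ERROR" in e["message"]]
        if errors:
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject=f"ERROR logs in {payload['logGroup']}",
                Message="\n".join(errors),
            )
        return {"forwarded": len(errors)}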

Consultant

Minfy Technologies
Feb 2023 - Present (2 yr 10 months)
    As part of the Dr. Reddy's engagement, roles and responsibilities were diversified. Administered Jenkins and AWS accounts and worked closely with the CloudOps, InfoSec, Monitoring, and Development teams. Enabled a matrix-based authorization strategy and fine-grained access policies to restrict users' access to their particular projects. Provisioned EKS clusters with add-ons and HPA enabled, along with an ingress controller and Fluent Bit for exporting logs to CloudWatch/ELK. Created namespaces with RBAC enabled, giving developers fine-grained access. Deployed applications in ECS, which involved creating task definitions, services, service discovery, and load balancers based on requirements. Deployed frontend applications with AWS Amplify/CloudFront. Configured URL path parameters and query string parameters in REST API Gateway to shape incoming requests, and integrated backend services such as a private NLB via VPC Links. Attached SQS and Lambda as targets in API method integrations to route traffic effectively. Created pipeline jobs in Jenkins in collaboration with the DevSecOps team, and maintained the Jenkins server: writing Jenkinsfiles, managing Jenkins agents, plugins, etc.
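
A minimal sketch of the namespace-plus-RBAC pattern described above, using the official kubernetes Python client. The namespace name, group name, and verb list are illustrative assumptions, not the actual cluster configuration.

    # Hedged sketch: create a namespace, a namespaced Role, and a RoleBinding
    # so a team's developers can manage workloads only in their own namespace.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside the cluster

    core = client.CoreV1Api()
    rbac = client.RbacAuthorizationV1Api()

    ns = "team-a"  # placeholder namespace
    core.create_namespace({"metadata": {"name": ns}})

    # Role: developers may manage common workload resources in this namespace only
    rbac.create_namespaced_role(ns, {
        "metadata": {"name": "developer", "namespace": ns},
        "rules": [{
            "apiGroups": ["", "apps"],
            "resources": ["pods", "deployments", "services", "configmaps"],
            "verbs": ["get", "list", "watch", "create", "update", "patch"],
        }],
    })

    # Bind the Role to the group the team's IAM identities map to (assumed name)
    rbac.create_namespaced_role_binding(ns, {
        "metadata": {"name": "developer-binding", "namespace": ns},
        "subjects": [{"kind": "Group", "name": "team-a-devs",
                      "apiGroup": "rbac.authorization.k8s.io"}],
        "roleRef": {"kind": "Role", "name": "developer",
                    "apiGroup": "rbac.authorization.k8s.io"},
    })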

DevOps Engineer

Globallogic
Mar 2019 - Apr 2022 (3 yr 1 month)
    Created the required infrastructure and networking setup for developers to deploy their applications. Integrated AWS REST API Gateway for routing to backend microservices. Enabled mTLS in REST API Gateway by creating a truststore in an S3 bucket. Maintained DNS records in Route 53 and created SSL certificates with ACM. Deployed applications in Kubernetes, which involved provisioning CRDs, Deployments, StatefulSets, PVs, PVCs, Secrets, and CSI drivers.
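
A hedged sketch of the mTLS setup mentioned above: REST API Gateway enforces mutual TLS on a custom domain by pointing at a truststore (a PEM bundle of trusted client CA certificates) stored in S3. The domain name, certificate ARN, and bucket/key are placeholders.

    # Illustrative boto3 call; all identifiers are placeholders.
    import boto3

    apigw = boto3.client("apigateway")

    apigw.create_domain_name(
        domainName="api.example.com",
        regionalCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/example",
        endpointConfiguration={"types": ["REGIONAL"]},
        securityPolicy="TLS_1_2",
        mutualTlsAuthentication={
            # PEM bundle of trusted client CAs, uploaded to S3 beforehand
            "truststoreUri": "s3://my-truststore-bucket/truststore.pem"
        },
    )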

Support Engineer

Globallogic
Mar 2019 - Apr 2022 (3 yr 1 month)
    Communicated with developers and implemented security best practices in the SDLC. Implemented a Git branching strategy and shadowed senior team members during critical tasks/failovers. Monitored Grafana dashboards and reported alerts to the respective app owners. Created Jenkins jobs with Maven, JUnit, and SonarQube, deploying to EC2 instances. Was part of the DevOps team in a support role during the migration of an application from EC2 to ECS.

Major Projects

4 Projects

MATSON

Feb 2023 - Present (2 yr 10 months)
    Centralized CI/CD setup with AWS CodePipeline for cross-account deployments. Created infrastructure with CloudFormation templates. Ensured app admins receive an email each time the application produces an ERROR log.

Dr Reddys

Mar 2023 - Present (2 yr 9 months)
    Administered Jenkins and AWS accounts and worked closely with the CloudOps, InfoSec, Monitoring, and Development teams. Provisioned EKS clusters with add-ons and HPA enabled.

USAA

Mar 2019 - Apr 2022 (3 yr 1 month)
    Created the required infrastructure and networking setup for developers to deploy their applications. Integrated AWS REST API Gateway for routing to the backend microservices.

Technoxander

Aug 2021 - Apr 2022 (8 months)
    Communicated with developers and implemented security best practices in the SDLC. Monitored Grafana dashboards and reported alerts to the respective app owners.

Education

  • B.Tech

    Vignan Institute of Technology and Management, Berhampur, Odisha (2018)

AI Interview Questions & Answers

Hi, first of all, thanks for the opportunity. This is Subhendu. I basically belong to Odisha and am currently staying in Hyderabad. I hold around four and a half years of experience in AWS and DevOps tools, which covers various AWS services such as EC2, EBS, S3 buckets, API Gateway, load balancers, ECS, and EKS. Apart from that, on the DevOps side I have good experience with Jenkins, Docker, and Kubernetes, plus a few DevSecOps tools as well, such as OWASP ZAP, SonarQube, Hadolint, and KubeLinter. For monitoring and logging I have worked with ELK, CloudWatch, and OpenSearch. I'm currently working at Minfy Technologies; it's been 1.7 years, to be precise, that I've been working there. So that's pretty much it. Thank you.

Okay, sounds good. Docker, Python, and AWS services. We can automate the infrastructure using Terraform or CDK. We can build Python-based Docker images and run them as Lambda functions, or simply run them as an ECS service, or go the Kubernetes route: if the application has to communicate with other services inside an EKS cluster, we can go with EKS; otherwise, ECS would be the preferred option in my view. Then we need a robust CI/CD pipeline. If we want AWS-native services, we have AWS CodePipeline with CodeBuild and CodeDeploy; using that, we can deploy our code. And if we want to replicate the production environment, the same CloudFormation or Terraform can be used to create a similar kind of infrastructure in the higher environments as well. Deployment goes through the CI/CD anyway, so we can just add an approval stage and promote to the next environment. That's how I would design this workflow using Docker, Python, and AWS services. And suppose we are communicating with RDS or something: from ECS it is easy to establish that communication, while in EKS there used to be limitations; it used to be difficult creating IAM roles using OIDC and annotating the role to the service account, but now that the EKS Pod Identity agent has come, it has become easier there too.
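
As one concrete instance of the "Python-based Docker image as a Lambda function" path mentioned above, a hedged boto3 sketch; the function name, ECR image URI, and role ARN are all placeholders.

    # Illustrative only: register a container image (already pushed to ECR)
    # as a Lambda function. With PackageType="Image", no Runtime/Handler is set.
    import boto3

    lam = boto3.client("lambda")

    lam.create_function(
        FunctionName="etl-job",
        PackageType="Image",
        Code={"ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/etl-job:latest"},
        Role="arn:aws:iam::123456789012:role/etl-job-execution-role",
        Timeout=300,
        MemorySize=1024,
    )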

Compare using AWS CDK versus Terraform for infrastructure as code, with a focus on a specific use case like network provisioning. Okay. We can use AWS CDK; to be honest, I have never worked with AWS CDK, but I have a good idea of it: using TypeScript or Python with CDK, we can provision AWS resources. The same goes for Terraform as an infrastructure-as-code tool. If we want to focus on a specific use case like network provisioning, both of them work fine. With CDK we can synthesize one artifact, a CloudFormation template, and deploy that, if I'm not wrong. Terraform, on the other hand, maintains a state file. Using either, we can create our network: the VPCs, the internet gateway, NAT gateway, the DHCP options, the routes and route table associations, and the security groups. All those things we can create with both of them.
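
A minimal CDK (Python) sketch of the network-provisioning use case this answer compares: a single Vpc construct synthesizes the subnets, internet gateway, NAT gateway, and route tables into one CloudFormation artifact. The CIDR and names are illustrative.

    # Illustrative CDK v2 (Python) stack; stack and VPC names are placeholders.
    from aws_cdk import App, Stack
    from aws_cdk import aws_ec2 as ec2
    from constructs import Construct

    class NetworkStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # One construct expands into subnets, IGW, NAT gateway, route tables
            ec2.Vpc(
                self, "AppVpc",
                ip_addresses=ec2.IpAddresses.cidr("10.0.0.0/16"),
                max_azs=2,
                nat_gateways=1,
            )

    app = App()
    NetworkStack(app, "network")
    app.synth()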

An automated approach to scale a Kubernetes deployment in response to increased web traffic: we can set up HPA. Based on request load, we can auto-scale the number of pods. For that we have to account for CPU and memory; we have to set resource requests and limits so that we do not over-provision resources. We can create a deployment and, in the HPA, define the number of replicas: the desired, the minimum, and the maximum. And again, at the infrastructure level, the node groups are backed by an auto scaling group, so the nodes can scale out as well.
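
A hedged sketch of the HPA described above, created with the kubernetes Python client (autoscaling/v1, CPU-based). The deployment name, namespace, and thresholds are placeholders, and CPU-based scaling assumes resource requests are set on the pods.

    # Illustrative only; "web" deployment and the 70% target are assumptions.
    from kubernetes import client, config

    config.load_kube_config()
    autoscaling = client.AutoscalingV1Api()

    autoscaling.create_namespaced_horizontal_pod_autoscaler("default", {
        "metadata": {"name": "web-hpa"},
        "spec": {
            "scaleTargetRef": {"apiVersion": "apps/v1",
                               "kind": "Deployment", "name": "web"},
            "minReplicas": 2,
            "maxReplicas": 10,
            "targetCPUUtilizationPercentage": 70,
        },
    })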

Outline the steps to implement a zero-downtime deployment strategy for a Kubernetes-based application. Okay. For a zero-downtime deployment strategy we have various options, like blue-green deployment and canary deployment, and there are other methods like rolling updates and A/B testing. But mostly what I have seen is people using blue-green or canary. In blue-green, there will be another replica of our existing application: suppose v1 is live, then v2 is created and traffic is shifted to v2, after which v1 gets deprovisioned. We can take that approach, but the resource allocation will be higher. Alternatively we can do a canary deployment, where the traffic moves over gradually; using Istio as the service mesh, 90% of traffic keeps going to v1 and 10% goes to v2, and if everything is working fine we slowly move forward, like 70/30, then 50/50, until eventually v1 is deprovisioned and v2 is fully active.
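
An illustrative sketch of the Istio-based canary shift just described: patching a VirtualService's route weights step by step via the kubernetes custom-objects API. The service, subset, and VirtualService names, and the fixed sleep between steps, are assumptions.

    # Hedged sketch; in practice each step would be gated on metrics/alerts.
    import time
    from kubernetes import client, config

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    def set_weights(v1_weight: int, v2_weight: int) -> None:
        custom.patch_namespaced_custom_object(
            group="networking.istio.io", version="v1beta1",
            namespace="default", plural="virtualservices", name="web",
            body={"spec": {"http": [{"route": [
                {"destination": {"host": "web", "subset": "v1"}, "weight": v1_weight},
                {"destination": {"host": "web", "subset": "v2"}, "weight": v2_weight},
            ]}]}},
        )

    # shift 90/10 -> 70/30 -> 50/50 -> 0/100, pausing to watch error rates
    for v1_w, v2_w in [(90, 10), (70, 30), (50, 50), (0, 100)]:
        set_weights(v1_w, v2_w)
        time.sleep(300)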

How would you use Terraform modules? Yeah, we can define Terraform modules, and wherever we want a module, in our main.tf we just need to call it. And if we want to reuse them across multiple environments, we can have multiple .tfvars files, or else we can use Terragrunt in that case to reuse the configuration across multiple stages. For multi-cloud infrastructure components, in the sense that we need to have multiple provider blocks, and build on that. I might be wrong at this point, but this is what I would do.
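
The per-environment .tfvars pattern mentioned above can be driven from a small wrapper script; a hedged sketch, assuming env/<name>.tfvars files exist and that state separation (workspaces or per-environment backends) is handled elsewhere.

    # Illustrative only: plan the same root module once per environment.
    import subprocess

    ENVIRONMENTS = ["dev", "staging", "prod"]  # assumed environment names

    def terraform_plan(env: str) -> None:
        # Same module, different variable file per environment
        subprocess.run(
            ["terraform", "plan", f"-var-file=env/{env}.tfvars",
             f"-out=plan-{env}.tfplan"],
            check=True,
        )

    for env in ENVIRONMENTS:
        terraform_plan(env)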

First of all, we should not be using the default key; we need to create a key pair each time we launch an instance, or import an existing one using a data block. We should keep the public key as an output into a file, and we need to reference that file here. And the security group IDs we should not hard-code. The AMI is fine, the instance type is fine; it's the key name that carries the risk here, and the security group ID should be fine. Apart from that, the SSH key which is being created: that SSH key also needs to be handled properly. That's one thing.

I'm not able to find an error offhand. Maybe the third statement that has been written; I'm not sure whether that might be one issue. Other than that, it would be the docker build -t command: the tag needs to be valid in this case, I think, apart from that.

First of all, if we are going for hybrid cloud, we need to do steps like setting up Direct Connect for network connectivity from on-prem, or maybe another cloud, to the AWS cloud. Once the network setup has been done, then with Kubernetes, suppose there are a few models running behind the front end; using Glue and workflow tools like Airflow, and maybe SageMaker from AWS, those components will be communicating with each other. As for designing the system to auto-scale: the normal approach is deployments with HPA enabled. Secrets should be kept in a secrets manager, the configuration files should be in ConfigMaps, and the volumes we can mount from EFS or EBS. That's how the EKS side works.
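
A minimal sketch of the "secrets stay in a secrets manager" point above: application code (or an init container) fetches credentials from AWS Secrets Manager at runtime instead of baking them into the image or manifest. The secret name and JSON keys are placeholders.

    # Illustrative only; "prod/app/db-credentials" is an assumed secret name.
    import json
    import boto3

    sm = boto3.client("secretsmanager")

    resp = sm.get_secret_value(SecretId="prod/app/db-credentials")
    creds = json.loads(resp["SecretString"])
    # use creds["username"] / creds["password"] to build the DB connection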

What methodologies would you apply in a DevOps workflow? Yeah. First of all, the methodology would be Agile: there should be sprint plans, there should be some planning initially. Then once the code is there, we can check it using GitGuardian or Checkmarx; these tools basically check whether there is any credential or vulnerability in our code. Then we can scan the code, the SAST part, using SonarQube. Once SAST is done, we can build it. During the build the dependencies are there, so we can use Dependency-Track to get an SBOM, the software bill of materials. Then once the build is done, there is the Dockerfile; to scan the Dockerfile we will be using Hadolint. Once the Dockerfile is scanned, the Docker image is created, and to scan the image we will use Snyk and a few other tools to scan the container. Once the Docker container is running fine and there are no vulnerabilities, we can deploy it to Kubernetes. Before deploying, KubeLinter would be used to check the YAML files, whether they are following compliance or not. Then, once the deployment has been done, we need to make sure no secrets are exposed, and we need to check the networking: no ports should be unintentionally open to the internet, the application should sit in private subnets and be exposed via a load balancer or API Gateway, and if it is API Gateway we need to make sure the authorization step is in place. There are a few things you need to take into consideration while deploying an application to meet compliance guidelines. That's it. Thank you.
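
For illustration, a hedged sketch of two of the gates described above, wired as a small pre-deploy script: Hadolint lints the Dockerfile and Trivy (named here as a stand-in image scanner, not mentioned in the answer itself) scans the built image. The image tag and severity gate are assumptions.

    # Illustrative pipeline gates; assumes hadolint and trivy are installed.
    import subprocess
    import sys

    IMAGE = "myapp:candidate"  # placeholder tag

    checks = [
        ["hadolint", "Dockerfile"],                      # Dockerfile lint
        ["trivy", "image", "--exit-code", "1",
         "--severity", "HIGH,CRITICAL", IMAGE],          # fail on HIGH/CRITICAL CVEs
    ]

    for cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            sys.exit(f"gate failed: {' '.join(cmd)}")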