Satyabrata Karan

Vetted Talent

12 years of industry experience as a Cloud DevOps Engineer, specializing in AWS, Kubernetes, Terraform, Docker, Jenkins, Python, and Linux.

  • Role

    Senior Perl/DevOps Engineer

  • Years of Experience

    12.00 years

Skillsets

  • Automation
  • AWS
  • CI/CD - 5 Years
  • Cloud Infrastructure
  • DevOps
  • Perl
  • Python - 5 Years
  • Scripting
  • SQL
  • Terraform
  • AWS Cloud - 3 Years
  • Docker - 5 Years
  • Azure - 4 Years
  • Azure DevOps - 5 Years

Vetted For

12 Skills
  • Role: Senior DevOps Engineer (Lead) - Remote (AI Screening)
  • Result: 44%
  • Skills assessed: Jira, Lean-Agile framework, Perl, AWS Cloud, CI/CD, Docker, Java, Jenkins, Kubernetes, Embedded Linux, Python, Ruby
  • Score: 40/90

Professional Summary

12.00 Years
  • Oct, 2023 - Present · 2 yr 1 month

    Senior DevOps Engineer

    Nomis Solutions
  • Oct, 2020 - Oct, 2023 · 3 yr

    Senior DevOps Engineer

    Smatbee Network
  • Aug, 2019 - Oct, 2020 · 1 yr 2 months

    DevOps Engineer

    Xoriant Solutions Pvt Ltd
  • Mar, 2017 - Jun, 2018 · 1 yr 3 months

    Perl Developer

    Scan IT & Shipco IT
  • Mar, 2014 - Mar, 2017 · 3 yr

    Perl Developer

    Cognizant Technology Solution (CTS)
  • Jan, 2012 - Mar, 2014 · 2 yr 2 months

    Senior Software Developer

    Ebusinessware
  • Jan, 2011 - Dec, 2011 · 11 months

    Software Engineer

    Artech Infosystem

Applications & Tools Known

  • Jenkins
  • Git
  • Docker
  • GitHub
  • Bitbucket
  • Maven
  • Nexus
  • SonarQube
  • Ansible
  • AWS ECS
  • Terraform
  • Datadog
  • SumoLogic
  • EC2
  • VPC
  • S3
  • Auto Scaling
  • CloudWatch
  • Route53
  • ECR
  • CodeDeploy
  • TeamCity
  • uDeploy
  • Splunk
  • Perforce
  • Sybase
  • Tableau
  • PagerDuty
  • Perl

Work History

12.00 Years

Senior DevOps Engineer

Nomis Solutions
Oct, 2023 - Present · 2 yr 1 month
    Managing CI/CD pipelines in Jenkins and deploying to AWS cloud infrastructure. Fixing production issues and bugs within SLA. Automating processes using Python, AWS Lambda, and CloudFormation. Automating cloud infrastructure with Terraform. Monitoring production resources with Datadog, Uptrends, and Sumo Logic. Working with EC2 along with VPC networking and other required services such as load balancers, Auto Scaling groups, and target groups. Maintaining AWS ECS tasks and task definitions and updating the JSON files. Providing on-call support and handling alerts in PagerDuty. Discussing plans in team meetings, lining up work items, and working with team members on a distributed application.
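
A minimal sketch of the ECS task-definition update described above, assuming boto3; the region, cluster name, service name, and JSON path are illustrative, not from the original:

    import json
    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")  # region is an assumption

    # Load the task-definition JSON maintained alongside the service (hypothetical path).
    with open("taskdef/web-service.json") as f:
        task_def = json.load(f)

    # Register a new revision (assumes the JSON keys match the register_task_definition API shape).
    resp = ecs.register_task_definition(**task_def)
    new_arn = resp["taskDefinition"]["taskDefinitionArn"]

    # Point the service at the new revision; ECS then performs a rolling deployment.
    ecs.update_service(
        cluster="my-cluster",    # hypothetical cluster name
        service="my-service",    # hypothetical service name
        taskDefinition=new_arn,
    )
    print("Deployed", new_arn)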

Senior DevOps Engineer

Smatbee Network
Oct, 2020 - Oct, 2023 · 3 yr
    Running Jenkins CI/CD pipelines for different releases. Preparing Dockerfiles to build Docker images. Automating processes using Perl, Python, and shell scripts. Performing data analysis on delivery reports, app installations, and customer feedback. Monitoring production resources with Splunk. Storing data objects in AWS S3 buckets and configuring the bucket policies.
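
As a rough illustration of the S3 bucket-policy configuration mentioned above, a sketch assuming boto3 and a hypothetical bucket name:

    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "example-delivery-reports"  # hypothetical bucket name

    # A common baseline policy: deny any access that is not over TLS.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }

    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))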

DevOps Engineer

Xoriant Solutions Pvt Ltd
Aug, 2019 - Oct, 2020 · 1 yr 2 months
    Built CI/CD pipeline projects using Bitbucket, TeamCity, Maven, and the uDeploy tool. Prepared Dockerfiles to build Docker images. Responsible for handling deployments through the IBM uDeploy tool. Took care of the release management process in each sprint and drove the deployment process in the production environment with the Production Support team. Implemented automation of configuration settings using Python. Troubleshot and resolved issues in the UAT and PROD applications on a priority basis (by ticket severity). Involved in the configuration and setup of the tools that facilitate the development process.

Perl Developer

Scan IT & Shipco IT
Mar, 2017 - Jun, 2018 · 1 yr 3 months
    Automated the application using scripting languages such as Perl and shell script. Worked on provisioning and configuration setup using Ansible playbooks for different infrastructure. Managed application images through Docker containers. Mentored junior team members on DevOps practices.

Perl Developer

Cognizant Technology Solution (CTS)
Mar, 2014 - Mar, 2017 · 3 yr
    Automated application processes using Perl and the Moose framework. Set up the test environment and configured the applications. Took care of build and release in each sprint. Monitored and tracked production batch jobs in the Autosys monitoring tool and took the necessary steps whenever a failure occurred. Wrote the JIL (batch instructions) for the Autosys jobs, tested them on the QA server, and then deployed them to the production batch jobs.

Senior Software Developer

Ebusinessware
Jan, 2012 - Mar, 2014 · 2 yr 2 months
    Fixed various bugs in Perl scripts and also wrote scripts from scratch to automate batch processes on Autosys. Checked out, committed, and pushed code to the Perforce repository. Developed and automated SQL queries that fetch data from a Sybase database. Conducted health checks and sanity checks of portfolios for different strategies. Supported the software process by improving the look and flow of the process. Focused on user and system support, answering hotline calls and monitoring system alerts. Triaged support tickets and resolved them within the SLA timeline.
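
A minimal sketch of the kind of automated query described above, assuming pyodbc and illustrative DSN, table, and column names that are not from the original:

    import pyodbc

    # Hypothetical ODBC DSN configured for the Sybase server.
    conn = pyodbc.connect("DSN=sybase_portfolio;UID=report_user;PWD=secret")
    cursor = conn.cursor()

    # Illustrative query: pull positions for one strategy as part of a sanity check.
    cursor.execute(
        "SELECT account_id, symbol, quantity FROM positions WHERE strategy = ?",
        ("global_macro",),
    )
    for account_id, symbol, quantity in cursor.fetchall():
        print(account_id, symbol, quantity)

    conn.close()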

Software Engineer

Artech Infosystem
Jan, 2011 - Dec, 2011 · 11 months
    Fixed various bugs in Perl scripts and also wrote scripts from scratch to automate batch processes on Autosys. Checked out, committed, and pushed code to the Perforce repository. Developed and automated SQL queries that fetch data from a Sybase database. Conducted health checks and sanity checks of portfolios for different strategies. Supported the software process by improving the look and flow of the process. Focused on user and system support, answering hotline calls and monitoring system alerts. Triaged support tickets and resolved them within the SLA timeline.

Achievements

  • Took care of all technical and business operations as a DevOps Lead
  • Implemented Tableau installation across AWS client networks using Terraform
  • Worked on the migration of microservices applications to containers and cloud services

Major Projects

3 Projects

Banking Application

Oct, 2023 - Present · 2 yr 1 month
    Managing CI/CD pipelines in Jenkins and deploying to AWS cloud infrastructure. Fixing production issues and bugs within SLA. Automating processes using Python, AWS Lambda, and CloudFormation. Automating cloud infrastructure with Terraform. Monitoring production resources with Datadog, Uptrends, and Sumo Logic. Working with EC2 along with VPC networking and other required services such as load balancers, Auto Scaling groups, and target groups. Maintaining AWS ECS tasks and task definitions and updating the JSON files. Providing on-call support and handling alerts in PagerDuty. Discussing plans in team meetings, lining up work items, and working with team members on a distributed application.

Smatbee

Oct, 2020 - Oct, 2023 · 3 yr
    Running Jenkins CI/CD pipelines for different releases. Preparing Dockerfiles to build Docker images. Automating processes using Perl, Python, and shell scripts. Performing data analysis on delivery reports, app installations, and customer feedback. Monitoring production resources with Splunk. Storing data objects in AWS S3 buckets and configuring the bucket policies.

Digital Collections Engine

Aug, 2019 - Oct, 2020 · 1 yr 2 months
    Built CI/CD pipeline projects using Bitbucket, TeamCity, Maven, and the uDeploy tool. Prepared Dockerfiles to build Docker images. Responsible for handling deployments through the IBM uDeploy tool. Took care of the release management process in each sprint and drove the deployment process in the production environment with the Production Support team. Implemented automation of configuration settings using Python. Troubleshot and resolved issues in the UAT and PROD applications on a priority basis (by ticket severity). Involved in the configuration and setup of the tools that facilitate the development process.

Education

  • B.Tech in Computer Science

    Biju Patnaik University of Technology, Odisha
  • Diploma in Computer Science

    RIT College (under SCTE & VT, Bhubaneswar)

Certifications

  • Lean Six Sigma Yellow Belt certification from TÜV SÜD

  • ITIL Foundation training from NovelVista, Pune

  • Red Hat Certified System Administrator (RHCSA) from Linux Academy

  • Opening Doors to Microservices and Containers, organized by Cognixia

AI Interview Questions & Answers

Hi, I'm a senior DevOps engineer. I have 12 years of experience in IT, and out of that, I have 5 years of experience as a DevOps engineer. Along with that, I have experience with cloud computing and automation engineering. So far I have worked across different industry domains; mainly I have worked with fintech companies and investment banking. In my last project I worked with Nomis Solutions. There I am working as a senior DevOps engineer and am part of the cloud engineering team as well. I have taken care of the AWS cloud and its automation, and did a lot of automation using Terraform and Python. I have also worked on Red Hat Linux, Perl, Python, and shell scripting. Along with these tools, I have experience with Oracle databases, optimizing SQL queries, and MongoDB. I also have experience with CI/CD pipelines in Jenkins: taking care of the CI/CD pipeline, optimizing it, and handling some critical CI/CD pipelines for build and deployment activities. I also have experience with monitoring tools like Datadog and Splunk, and on the AWS side, CloudWatch. That's all about me. Thank you.

Yep. So when we talk about AWS automation for S3, we have to share the S3 bucket name, and we need AWS credentials like an access key ID and a secret key. The IAM user is supposed to have access to the S3 bucket, execute access as well, and download access as well, for the S3 bucket. Then we can check whether we have data in S3 or not.
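
A minimal sketch of that kind of S3 check, assuming boto3 (credentials supplied via the environment or an IAM role) and a hypothetical bucket name:

    import boto3

    s3 = boto3.client("s3")  # picks up credentials from the environment or an IAM role
    bucket = "example-data-bucket"  # hypothetical bucket name

    # Check whether the bucket has any objects, then download the first one found.
    resp = s3.list_objects_v2(Bucket=bucket, MaxKeys=10)
    objects = resp.get("Contents", [])
    if objects:
        key = objects[0]["Key"]
        s3.download_file(bucket, key, "/tmp/" + key.split("/")[-1])
        print("Downloaded", key)
    else:
        print("Bucket is empty")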

I can say here, whenever we think about using Python, we have to import the library. Here, if the microservice architecture is dockerized, then we have to create a Dockerfile, and, as we know, in the Dockerfile we write step by step how the image is constructed. Then we can run the docker build command, which will build the image file and store it. All of this we can also write together in Python to automate it. For that, we need to import docker; using the docker library we can create a client object, and from that object we can call the different methods.
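
A rough sketch of that automation with the Docker SDK for Python, using an illustrative image tag and build path:

    import docker

    client = docker.from_env()  # connects to the local Docker daemon

    # Build an image from the Dockerfile in the current directory (illustrative tag).
    image, build_logs = client.images.build(path=".", tag="example-service:latest")
    for chunk in build_logs:
        if "stream" in chunk:
            print(chunk["stream"], end="")

    # Run a container from the freshly built image.
    container = client.containers.run("example-service:latest", detach=True)
    print("Started container", container.short_id)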

Okay. So if we want to optimize the Docker image size, then we have to write in the Dockerfile a lightweight version of the base image, so that the image file will not get bloated. There are two types of base image: one is the light version and one is the full library version. For example, Alpine: we can write FROM alpine. Then the image will reduce from something like 100 MB to just 2 MB or 3 MB. In that way we can optimize the Docker image file.

In this case, we think about how to monitor the Python application. For example, the HTTP or HTTPS URL we need to monitor, and the application logs we need to monitor, using different monitoring tools such as Sumo Logic or Grafana. We can monitor the applications using Datadog or Splunk, and the URL using Uptrends. In that way we can monitor the application, limit the application downtime, and observe the application downtime.
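
A minimal sketch of the URL check side of this, assuming the requests library and a hypothetical health endpoint:

    import requests

    URL = "https://example-app.internal/health"  # hypothetical health endpoint

    try:
        resp = requests.get(URL, timeout=5)
        if resp.status_code == 200:
            print("Application is up")
        else:
            print("Unexpected status:", resp.status_code)
    except requests.RequestException as exc:
        # In practice this is where an alert (e.g. via PagerDuty) would be raised.
        print("Application check failed:", exc)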

So if we talk about Docker containers in the CI/CD pipeline, then when the Docker image is getting built in the pipeline process, we need to take care of the security. Docker has the capability to deal with passwords in a secure, encrypted way, and that is what we can take care of.

I don't have any idea on this.

So this is a transcription. I see there is an if block and an else block, and the if block checks a .txt file: if it is a file type, then it says the file exists. Actually, instead of -f we can also write -z, to check whether it is a zero-size file or not. And -f is for the file type, so only for this file type will the condition pass and the rest of the things run. Otherwise, yeah, the condition may be wrong here.
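
For comparison, a small Python sketch of the same kind of check (file exists vs. non-empty), with a hypothetical filename:

    import os

    path = "data.txt"  # hypothetical filename

    if os.path.isfile(path):            # roughly the shell -f test
        if os.path.getsize(path) > 0:   # roughly the shell -s test (non-empty file)
            print(path, "exists and is non-empty")
        else:
            print(path, "exists but is empty")
    else:
        print(path, "does not exist")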

In this case, we think about the load balancer and the Auto Scaling group. The load balancer connects to a target group, and the target group points to EC2 instances. We need to have a number of EC2 instances in the target group, and the load balancer should be pointing to two availability zones. The EC2 instances are supposed to be in different availability zones, so if one availability zone goes down or the application goes down, it will get the same data from another subnet. In this way we can perform zero-downtime deployment.
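
A minimal sketch of checking target health across availability zones before shifting traffic, assuming boto3 and a hypothetical target group ARN:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Hypothetical target group ARN behind the application load balancer.
    tg_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example/abc123"

    resp = elbv2.describe_target_health(TargetGroupArn=tg_arn)
    healthy = [
        t["Target"]["Id"]
        for t in resp["TargetHealthDescriptions"]
        if t["TargetHealth"]["State"] == "healthy"
    ]
    print("Healthy targets:", healthy)
    # A deployment or traffic shift would only proceed when enough targets are healthy.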

No idea how to do this.

For any application, we can think of a disaster recovery plan, where we will have a similar application in another subnet or another availability zone. The data should also get backed up onto the DR server, and we should test the DR. And we have to take the backup of the data, say weekly or monthly, and keep it up to date on the DR server. In this way we can make a strategy for recovery and backup of the application.
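
As a loose illustration of the periodic backup step, a sketch that copies objects to a DR bucket in another region, assuming boto3 and hypothetical bucket names:

    import boto3

    src_bucket = "example-prod-backups"  # hypothetical source bucket
    dr_bucket = "example-dr-backups"     # hypothetical DR bucket in another region

    s3 = boto3.client("s3")
    dr_s3 = boto3.client("s3", region_name="us-west-2")  # DR region is an assumption

    # Copy every backup object to the DR bucket (run weekly/monthly via cron or Lambda).
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=src_bucket):
        for obj in page.get("Contents", []):
            dr_s3.copy_object(
                Bucket=dr_bucket,
                Key=obj["Key"],
                CopySource={"Bucket": src_bucket, "Key": obj["Key"]},
            )
            print("Copied", obj["Key"], "to", dr_bucket)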