Vetted Talent

ANSHUMAN SINGH RATHORE


I am a software engineer at Tata Consultancy Services (TCS), specializing in innovative web solutions across various domains. With over 4 years of experience, I excel in the .NET framework and Azure DevOps Services, contributing to projects that enhance user experience and efficiency. I hold a bachelor's degree in computer applications and science from Dr. Virendra Swaroop Institute of Professional Studies, Kanpur. My core skills include web design, development, testing, security, analytics, and communication. Passionate about web development, I am eager to learn new skills in AI, machine learning, and data science, aiming to become a web communication specialist.

  • Role

    Developer

  • Years of Experience

    4 years

Skillsets

  • Java
  • AWS
  • Python
  • SQL
  • C++
  • Azure Data Factory
  • Microsoft Azure
  • S3
  • ADLS Gen2
  • Azure

Vetted For

7 Skills
  • Jr. Azure Cloud Engineer (Onsite) · AI Screening
  • Result: 52%
  • Skills assessed: PowerShell, Azure, GitHub, Java, Kubernetes, Problem Solving Attitude, Python
  • Score: 47/90

Professional Summary

4 Years
  • Feb 2022 - Present · 3 yr 7 months

    Developer

    TCS

Applications & Tools Known

  • WinSCP
  • Visual Studio Code
  • PowerBI
  • ServiceNow
  • Cassandra
  • Visual Studio
  • ARM Assembly
  • EEPROM
  • UEFI
  • .NET Core
  • MySQL
  • AWS Cloud
  • Microsoft Power BI
  • SQL
  • Algorithm
  • C#

Work History

4 Years

Developer

TCS
Feb 2022 - Present · 3 yr 7 months

    Working on Microsoft Azure, constructing data ingestion pipelines that handle multiple sources such as on-premises SQL Server, Oracle, SAP, and Salesforce.


    Tata Consultancy Services

    Software Engineer

    Jan 2022 - Present · 3 yrs 1 mo

    Systems Engineering, Infrastructure as a Service (IaaS)


    Graduate Trainee

    Feb 2020 - Jan 2022

    Systems Engineering, Documentation

Achievements

  • Multiple "Employee of the Session" awards

Major Projects

2 Projects

School Management System

    An ASP.NET-based MCQ app connected to a local database (Excel). Users can see whether each question is attempted or skipped, go back to review and clear their answers, and receive a calculated result at the end.

Reminder App

    A desktop app that rings at specified intervals based on the system time. The UI allows users to set the interval and the message associated with each reminder, and the app interacts with the desktop environment to track the system time and trigger them. The application is encapsulated within a login shell.

Education

  • Bachelor's in Computer Applications

    Dr. Virendra Swaroop Institute of Computer Science

Certifications

  • Python and ASP.NET 4.5 certification from Vertex Institute

  • Azure certification from LinkedIn

  • Unity3D and Java from Expino Technology

  • Microsoft Azure (5260)

  • ReactJS (5630)

  • TCS Core Java (3224)

  • ASP.NET (2810)

  • Innovation & Creativity (1164)

  • Team Skill (1007)

AI-interview Questions & Answers

Hello, this is Anshuman. I have been working at TCS for the last 4 years; I joined as a fresher, placed in February 2020. My first project was a .NET project, where I worked as a developer building software applications. After that, I moved to an Azure cloud-based project where I manage data pipelines. There I work on FTP control and pipeline monitoring: if a pipeline fails, I check the logs to find the issue and rectify it, and if data is not being pulled correctly, I handle that as well. These are the things I have been working on.

Which Azure service would you utilize for scaling a containerized Python application, and why? For scaling a containerized Python application on Azure, I would typically utilize Azure Kubernetes Service (AKS), a managed Kubernetes offering that makes it easy to deploy, manage, and scale containerized applications. AKS brings several capabilities. Scalability: AKS lets you easily scale the application by adding replicas and nodes. Container orchestration: Kubernetes, which AKS is built on, provides powerful orchestration features such as automated deployment, scaling, and management of containerized applications. Integration with Azure services: AKS integrates well with services like Azure Monitor, Azure Active Directory, and Azure Policy, making it easier to monitor, secure, and manage applications. Developer productivity: AKS simplifies deploying and managing containerized applications, allowing developers to focus more on writing code and less on managing infrastructure.
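The scale-out behavior described above can be illustrated with the Horizontal Pod Autoscaler's documented replica formula; a minimal Python sketch (the `desired_replicas` helper is illustrative and ignores the HPA's tolerance and stabilization windows):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Kubernetes HPA formula: desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * (current_metric / target_metric))

# 4 pods averaging 80% CPU against a 50% target -> scale out to 7 pods
print(desired_replicas(4, 80, 50))  # -> 7
# 4 pods averaging 25% CPU against a 50% target -> scale in to 2 pods
print(desired_replicas(4, 25, 50))  # -> 2
```

In AKS this calculation is performed by the HPA controller itself; you only declare the target metric and min/max replica bounds.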

Detail the steps you would take to troubleshoot a Python application experiencing latency issues on Azure. Troubleshooting latency in a Python application running on Azure involves identifying potential bottlenecks and performance issues at different layers of the application. There are several steps we can use. Check the application code: review the Python code for inefficient algorithms, slow database queries, or resource-intensive operations that could be causing latency. Monitor application metrics: use Azure Monitor to gather metrics such as CPU usage, memory usage, request throughput, and response times, and look for spikes or abnormalities that could indicate latency. Review network configuration: check the configuration of the Azure resources, including virtual networks, subnets, and network security groups, and ensure no network bottlenecks or restrictions are impacting application performance. Check database performance: if the application interacts with a database, monitor its performance metrics and check for slow queries, indexing issues, or connection-pool limitations that could cause latency. Scale resources: consider scaling the Azure resources up or out based on the monitoring data, for example increasing the size of the virtual machines or adding more application instances. Cache data: implement a caching mechanism to reduce repetitive computation and database queries; Azure provides services like Azure Cache that can improve application performance by caching frequently accessed data.
We can also use distributed tracing: implement it with tools like Application Insights or OpenTelemetry to identify performance bottlenecks across the different components of the application. Load-test before going to the real world: simulate real-world traffic to identify bottlenecks under heavy load, using tools like Apache JMeter or Locust.io. Finally, review Azure service limits and ensure you are not hitting limits such as CPU, storage, or network bandwidth. These are the ways we can approach a latency issue.

How would you enable logging and monitoring for a Python application deployed on an AKS cluster? Enabling logging and monitoring for a Python application on an Azure Kubernetes Service cluster involves integrating Azure Monitor with the AKS cluster. Enable Azure Monitor for Containers: in the Azure portal, navigate to the cluster, select Insights under the Monitoring section, and click Enable to turn on Azure Monitor for Containers for the AKS cluster. Configure logging: set up the Python application to log events and metrics using a logging library such as Python's built-in logging module or third-party libraries like Loguru, structlog, or Logbook, and ensure that important events, errors, and performance metrics are written to stdout or stderr; Kubernetes collects logs from these streams by default. Install the monitoring agent: install the Azure Monitor containers agent on the AKS cluster; it collects telemetry data from the containers and sends it to Azure Monitor. Configure metrics and alerts: define custom metrics and alerts based on the requirements of the Python application, using Azure Monitor to create metric alerts on thresholds for CPU usage, memory usage, request latency, or other relevant metrics. Instrument the Python application with Application Insights or OpenTelemetry. Finally, visualize and analyze the data, and continuously monitor the performance and behavior of the application. By following these steps, we can effectively enable logging and monitoring for a Python application deployed on AKS.
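The stdout logging step above can be sketched with only Python's built-in logging module; one JSON object per line is easy for the cluster's log collector to parse (field names here are illustrative, not a fixed schema):

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so the AKS log pipeline can index fields."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)  # Kubernetes collects stdout/stderr by default
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order processed")
# prints: {"level": "INFO", "logger": "myapp", "message": "order processed"}
```

From here, Azure Monitor for Containers picks the lines up from the container runtime without any agent-specific code in the application.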

In the context of CI/CD for a Python application deployed on Kubernetes, what mechanism would you suggest to manage configuration settings? Managing configuration settings effectively is crucial. One common mechanism in Kubernetes is ConfigMaps and Secrets. ConfigMaps allow you to store non-sensitive configuration data as key-value pairs: create a ConfigMap containing the Python application's configuration settings, such as database connection strings, API endpoints, and feature flags; mount the ConfigMap as environment variables or volumes in the application container; and update the ConfigMap as needed without rebuilding or redeploying the application. Secrets are similar to ConfigMaps but specifically designed to store sensitive information such as passwords, API keys, and TLS certificates. We can also use external configuration providers like Azure Key Vault, HashiCorp Vault, or AWS Systems Manager Parameter Store to manage configuration settings. Configuration management tools like Helm can define the configuration settings as part of the Kubernetes deployment manifests; Helm allows you to templatize configuration values and inject them into the manifests. Automated deployment pipelines are also part of this: integrate configuration management into the CI/CD pipelines using tools like Jenkins, Azure DevOps, or GitLab CI/CD to automate deployment of the Kubernetes manifests containing the configuration settings.
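Once a ConfigMap or Secret is mounted as environment variables, the application side reduces to reading and validating them; a minimal sketch (the `DATABASE_URL` and `ENABLE_BETA` variable names and the `load_settings` helper are illustrative):

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    db_url: str
    feature_flag: bool

def load_settings(env=os.environ) -> Settings:
    """Read settings injected by a ConfigMap/Secret; fail fast if one is missing."""
    try:
        db_url = env["DATABASE_URL"]  # typically mounted from a Secret
    except KeyError as exc:
        raise RuntimeError(f"missing required setting: {exc}") from exc
    # Non-sensitive flag, typically mounted from a ConfigMap; defaults to off.
    flag = env.get("ENABLE_BETA", "false").lower() == "true"
    return Settings(db_url=db_url, feature_flag=flag)

print(load_settings({"DATABASE_URL": "postgres://db:5432/app", "ENABLE_BETA": "true"}))
```

Failing fast at startup means a misconfigured Pod crashes immediately and Kubernetes surfaces the problem, instead of the error appearing later at first use.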

What would you use to integrate Azure AD authentication in a Python-based Kubernetes application? To integrate Azure Active Directory authentication into a Python-based Kubernetes application, you can use the Azure Identity library from the Azure SDK for Python. Start by setting up an Azure AD application: register an application in the Azure Active Directory portal; note down the application (client) ID and the directory (tenant) ID; configure the redirect URL if the application requires it; and assign the permissions the application needs to access Azure resources. Install the Azure SDK for Python using pip; it includes the Azure Identity library for authentication. Configure Kubernetes Secrets: update the Kubernetes deployment manifests to supply the Azure AD tenant ID, client ID, client secret, and any other relevant authentication settings. Then update the Python application code to authenticate with Azure AD using the Azure Identity library: the DefaultAzureCredential class automatically authenticates using environment variables, managed identity, or Azure CLI authentication. Finally, use the authenticated credentials to access Azure resources securely from the application, for example Key Vault secrets or Storage blobs, or to interact with other Azure services, followed by testing and deployment.
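A minimal sketch of the environment-variable contract this setup relies on; the `token_endpoint` helper is hypothetical, but `AZURE_TENANT_ID`/`AZURE_CLIENT_ID`/`AZURE_CLIENT_SECRET` are the variables that azure-identity's `EnvironmentCredential` reads, and the v2.0 token endpoint format is standard:

```python
import os

# Variables the Kubernetes Secret must inject; the same names azure-identity reads.
REQUIRED = ("AZURE_TENANT_ID", "AZURE_CLIENT_ID", "AZURE_CLIENT_SECRET")

def token_endpoint(env=os.environ) -> str:
    """Validate the service-principal settings and return the Azure AD v2.0
    token endpoint for the configured tenant."""
    missing = [k for k in REQUIRED if k not in env]
    if missing:
        raise RuntimeError(f"missing Azure AD settings: {missing}")
    tenant = env["AZURE_TENANT_ID"]
    return f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
```

In the application itself you would normally not call this endpoint by hand: construct `DefaultAzureCredential()` from `azure.identity` and pass it to an Azure SDK client, and it picks these variables up (or falls back to managed identity) automatically.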

How would you design a resiliency strategy for a Python application running on a virtual machine scale set in Azure? Designing a resiliency strategy for a Python application running on a virtual machine scale set (VMSS) in Azure involves implementing practices and technologies that ensure high availability, fault tolerance, and scalability. Deploy the VMSS instances across multiple Availability Zones to ensure redundancy and fault tolerance; Availability Zones are physically separate data centers within an Azure region, providing resiliency against zone failures. Use load balancing: utilize Azure Load Balancer or Application Gateway to distribute incoming traffic across the VMSS instances. Use autoscaling: configure autoscaling rules for the VMSS based on metrics like CPU utilization, memory utilization, or incoming traffic, which allows the application to scale out and in dynamically with demand. Configure health probes to monitor the health of the VMSS instances: the Azure load balancer performs health checks on the application endpoints and routes traffic only to healthy instances. Use managed disks on the VMSS instances to ensure data durability and high availability; managed disks automatically replicate data across different storage nodes within the Azure region, reducing the risk of data loss due to hardware failure. Implement application-level redundancy: design the Python application to be stateless and horizontally scalable, and distribute application components across multiple VMSS instances to minimize the impact of an individual instance failure. Set up monitoring and alerting for the VMSS instances using Azure Monitor.
For backup and disaster recovery, we can set up automated backups that store data in multiple instances so it can be recovered easily if data is lost. Conduct regular chaos engineering experiments to proactively identify and mitigate potential points of failure in the application and infrastructure. Regular maintenance is also required to keep the VM instances and underlying infrastructure up to date with the latest security patches and updates.
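The health-probe behavior described above can be sketched as a small state machine, mirroring how a load balancer counts consecutive probe results before flipping an instance's status (the `ProbeTracker` class and thresholds are illustrative):

```python
class ProbeTracker:
    """Mark an instance unhealthy after `unhealthy_threshold` consecutive failed
    probes, and healthy again after `healthy_threshold` consecutive successes."""

    def __init__(self, unhealthy_threshold: int = 3, healthy_threshold: int = 2):
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy_threshold = healthy_threshold
        self.healthy = True
        self._streak = 0  # consecutive probes contradicting the current state

    def record(self, success: bool) -> bool:
        if success == self.healthy:
            self._streak = 0  # probe agrees with current state: reset the streak
        else:
            self._streak += 1
            limit = self.healthy_threshold if not self.healthy else self.unhealthy_threshold
            if self._streak >= limit:
                self.healthy = not self.healthy  # flip state after enough evidence
                self._streak = 0
        return self.healthy

t = ProbeTracker()
for ok in (False, False, False):
    t.record(ok)
print(t.healthy)  # -> False, after three consecutive failed probes
```

Requiring a streak rather than reacting to a single probe prevents one transient timeout from pulling a healthy instance out of rotation.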

Outline a strategy using Python in Azure Databricks to process streaming data and deploy it to a database. To process streaming data with Python on Azure and land it in a SQL database, you can follow these steps. Choose a streaming data source: identify the streaming data you want to process; this could be data from IoT devices, Azure Event Hubs, or another ingestion source. Set up the data-processing logic: write Python code, for example in an Azure Function, to process the streaming data according to the business rules. Use Azure SQL Database to store the processed streaming data. Define the data-writing strategy: use Python libraries like pyodbc or SQLAlchemy to interact with Azure SQL Database. Implement secure authentication when accessing Azure services from the Python code, using Azure Active Directory authentication or managed identities. Error handling, data testing, and validation should also be in place, followed by deployment and monitoring.
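The write path above can be sketched with batched inserts; here sqlite3 stands in for Azure SQL Database (the `readings` table and `write_batch` helper are illustrative; with pyodbc against Azure SQL the `executemany` call has the same shape, with `?` placeholders):

```python
import sqlite3

def write_batch(conn, rows, batch_size: int = 500):
    """Buffer streaming records and insert them in batches instead of row by row,
    which cuts round trips to the database dramatically."""
    cur = conn.cursor()
    buffer = []
    for row in rows:
        buffer.append(row)
        if len(buffer) >= batch_size:
            cur.executemany("INSERT INTO readings (device, value) VALUES (?, ?)", buffer)
            conn.commit()
            buffer.clear()
    if buffer:  # flush the final partial batch
        cur.executemany("INSERT INTO readings (device, value) VALUES (?, ?)", buffer)
        conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device TEXT, value REAL)")
# Simulate 1200 streamed records arriving from an event source.
write_batch(conn, ((f"dev{i % 3}", float(i)) for i in range(1200)), batch_size=500)
print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # -> 1200
```

Committing per batch rather than per row is the main lever: each commit is a round trip, so batch size trades latency of visibility against throughput.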

Describe a scenario where you would use Python to perform automated network configuration updates in Azure. Suppose you have a dynamic cloud environment in Azure with multiple virtual networks and network security groups. As the infrastructure evolves, you frequently need to update network configurations to accommodate new services, applications, or security requirements. The solution starts with infrastructure as code: use Azure Resource Manager templates or Azure Bicep to define the virtual networks, subnets, and network security groups. Use the Azure SDK for Python to interact with resources programmatically; the azure-mgmt-network package allows you to manage virtual networks, apply updates, and so on. Write Python scripts to automate the process of updating network configurations in Azure. Monitoring and validation should be in place, along with scheduled updates and change management. Error handling and rollbacks are also needed: set up restore points you can return to after an update if anything goes wrong. Security and access control should also be there: securely manage the credentials and access keys required to authenticate with Azure.
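The validation step described can run as a pre-flight check before pushing a rule with azure-mgmt-network, so a bad rule never reaches Azure. The `validate_nsg_rule` helper and rule dictionary are illustrative, though the 100-4096 priority range and the Inbound/Outbound and Allow/Deny value sets match NSG security rules:

```python
def validate_nsg_rule(rule: dict) -> list:
    """Return a list of problems with a proposed NSG security rule; empty means OK."""
    errors = []
    # NSG rule priorities must fall in 100-4096 (lower number = higher priority).
    if not 100 <= rule.get("priority", -1) <= 4096:
        errors.append("priority must be between 100 and 4096")
    if rule.get("direction") not in ("Inbound", "Outbound"):
        errors.append("direction must be Inbound or Outbound")
    if rule.get("access") not in ("Allow", "Deny"):
        errors.append("access must be Allow or Deny")
    return errors

rule = {"priority": 310, "direction": "Inbound", "access": "Allow"}
print(validate_nsg_rule(rule))  # -> [], the rule passes validation
```

Running checks like these in the automation script, before any management-plane call, keeps failed updates cheap and makes rollbacks rarer.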