
Rajkumar Gupta

Vetted Talent
13.6 years of experience in the complete software development life cycle, including requirement analysis, design, and development. Working as a Python developer with Flask to develop REST APIs for validation and intent data upload. Worked extensively with machine learning algorithms such as CNN, Naive Bayes, Jaccard similarity, random forest, and statistical analysis, with deployment on CUDA. Worked extensively with machine learning languages and libraries such as Python, Flask, R, MATLAB, PySpark, PyTorch, Lua, Matplotlib, Seaborn, Bokeh, NumPy, Pandas, and OpenCV. Worked extensively with the microservice architecture pattern and REST web services. Worked extensively with databases such as MongoDB, MySQL, and Oracle. Good implementation knowledge of CI/CD tools such as Git and Jenkins.
  • Role

    Sr. Python Developer

  • Years of Experience

    13.6 years

Skillsets

  • Mechanize
  • Statistics
  • Matplotlib
  • Docker
  • Ansible
  • BeautifulSoup
  • Bokeh
  • CNN
  • CUDA
  • Databricks
  • FastAPI
  • HPC
  • IntelliJ
  • Lua
  • Machine Learning
  • SQL - 6 Years
  • MLFlow
  • Naive Bayes
  • NLTK
  • object detection
  • OpenCV
  • Requests
  • REST
  • Snowflake
  • VIM
  • XGBoost
  • YAML
  • Chart Director
  • Jenkins
  • MySQL - 4 Years
  • Git - 5 Years
  • NumPy - 4 Years
  • pandas - 4 Years
  • Flask - 3 Years
  • Scikit-learn
  • Azure
  • Microservice
  • Deep Learning
  • Confluence
  • PySpark
  • Active Directory
  • Seaborn
  • MATLAB
  • C
  • Python - 7 Years
  • LDAP
  • AWS - 3 Years
  • Random Forest
  • PyTorch
  • Kubernetes
  • PHP
  • BDD

Vetted For

6 Skills
  • Backend Python Developer (AI Screening)
  • Skills assessed: MongoDB, AWS RDS, MySQL, Django, Python, REST API
  • Score: 36/100

Professional Summary

13.6 Years
  • Feb, 2024 - Present 2 yr

    MLOPS/ML Engineer

    Altimetrik India
  • Jan, 2023 - Dec, 2023 11 months

    Sr. Python Developer

    Nityo Infotech
  • Mar, 2022 - Jan, 2023 10 months

    Sr. Python Developer

    Euclid Innovation
  • Dec, 2021 - Mar, 2022 3 months

    Sr. Python Developer

    CalSoft India
  • Sep, 2021 - Dec, 2021 3 months

    Sr. Python Developer

    Bristlecone India
  • Sep, 2018 - Sep, 2021 3 yr

    Data Scientist

    Capgemini India
  • Nov, 2016 - Sep, 2018 1 yr 10 months

    Python Developer and Automation

    Oliveboard Comptech

Applications & Tools Known

  • Jenkins
  • AWS
  • Bitbucket
  • ServiceNow
  • Jupyter Notebook
  • Kubernetes
  • SQL
  • Apache Airflow
  • Scikit-learn
  • Docker
  • Beautiful Soup
  • Nmap
  • Wireshark
  • JSON
  • requests
  • Nessus
  • Qualys

Work History

13.6 Years

MLOPS/ML Engineer

Altimetrik India
Feb, 2024 - Present 2 yr
    Developed APIs using Flask and Python; mentored functional team members; used XGBoost model for retail data prediction; scripted experiments in Databricks MLflow; created workflow pipelines and dashboards for prediction comparison and cost evaluation; wrote scripts for model training and evaluation; connected and audited Snowflake database records; wrote unit and functional tests for APIs; followed Agile methodology with JIRA; conducted code reviews; optimized applications for speed and scalability; evaluated new technologies and led troubleshooting efforts; drove continuous improvement initiatives and team development.
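
    Below is a minimal, hedged sketch of the experiment-tracking pattern described above: logging an XGBoost run's parameters, metric, and model to MLflow. The data is synthetic and the parameter values are illustrative, not taken from the actual project.

        # Hedged sketch: track one XGBoost training run in MLflow.
        # Synthetic regression data stands in for the real retail dataset.
        import mlflow
        import mlflow.xgboost
        import xgboost as xgb
        from sklearn.datasets import make_regression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error

        X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        with mlflow.start_run(run_name="retail-demand-xgb"):
            params = {"max_depth": 6, "n_estimators": 300, "learning_rate": 0.05}
            mlflow.log_params(params)                 # parameters appear in the MLflow UI
            model = xgb.XGBRegressor(**params).fit(X_tr, y_tr)
            mlflow.log_metric("mae", mean_absolute_error(y_te, model.predict(X_te)))
            mlflow.xgboost.log_model(model, "model")  # store the trained model artifact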

Sr. Python Developer

Nityo Infotech
Jan, 2023 - Dec, 2023 11 months
    Developed APIs using Python and OOP concepts; connected applications for network build; used Flask REST APIs to fetch network event data; wrote Ansible playbooks and YAML files; developed data models for the SAND data framework; wrote unit and functional tests; used ServiceNow for production releases; documented processes in Confluence; performed code reviews and optimization.
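
    As a rough illustration of the kind of Flask endpoint described above (serving network event data over REST), here is a self-contained sketch; the event fields and the severity filter are invented for the example.

        # Hedged sketch: a Flask REST endpoint that serves network event data.
        from flask import Flask, jsonify, request

        app = Flask(__name__)

        # Stand-in for the real event source (a database or message bus in practice).
        EVENTS = [
            {"id": 1, "device": "core-sw-01", "severity": "critical", "msg": "link down"},
            {"id": 2, "device": "edge-rtr-03", "severity": "info", "msg": "config saved"},
        ]

        @app.route("/events", methods=["GET"])
        def list_events():
            severity = request.args.get("severity")  # optional ?severity= filter
            events = [e for e in EVENTS if severity is None or e["severity"] == severity]
            return jsonify(events), 200

        if __name__ == "__main__":
            app.run(debug=True)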

Sr. Python Developer

Euclid Innovation
Mar, 2022 - Jan, 2023 10 months
    Developed APIs using Python and OOP concepts; connected applications for network build; validated tenant subnet addresses using a Flask REST API; worked on LDAP queries with the ldap3 library; created a global load balancer for the validation API; accessed credentials from the NGV and EPV vaults; used ServiceNow for production releases; followed Agile methodology; documented work in Confluence; used Git for version control; wrote BDD functional test cases; deployed Jupyter Notebook on Kubernetes for a POC.
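
    The LDAP lookups mentioned above might look roughly like this with the ldap3 library; the server, credentials, base DN, and attribute names are all placeholders.

        # Hedged sketch: query a directory entry with ldap3.
        from ldap3 import Server, Connection, ALL, SUBTREE

        server = Server("ldap.example.com", get_info=ALL)
        conn = Connection(server, user="cn=svc,dc=example,dc=com",
                          password="secret", auto_bind=True)

        # Search for a host entry; base DN, filter, and attributes are hypothetical.
        conn.search(search_base="dc=example,dc=com",
                    search_filter="(cn=edge-rtr-03)",
                    search_scope=SUBTREE,
                    attributes=["cn", "ipHostNumber"])
        for entry in conn.entries:
            print(entry.cn, entry.ipHostNumber)
        conn.unbind()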

Sr. Python Developer

CalSoft India
Dec, 2021 - Mar, 2022 3 months
    Explored API methodologies; used Flask REST API for data processing; used NumPy and Pandas for data mining and filtering; wrote unit tests; connected to cloud with PySpark; worked on data engineering; documented in Confluence; used Git for deployment; troubleshooting and debugging; used vim editor for scripting.

Sr. Python Developer

Bristlecone India
Sep, 2021 - Dec, 2021 3 months
    Connected Azure cloud to collect SQL data; used PySpark for database processing; used Pandas for data cleaning; used Azure app service for database transformation; used Flask REST API for database updates; wrote unit tests; performed code reviews; documented in Confluence; used Git for deployment; troubleshooting and debugging.
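
    A sketch of the Azure SQL-to-PySpark flow described above; the JDBC URL, table, and credentials are placeholders, and the SQL Server JDBC driver must be on the Spark classpath.

        # Hedged sketch: read an Azure SQL table into Spark, then clean in Pandas.
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("azure-sql-etl").getOrCreate()

        df = (spark.read.format("jdbc")
              .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=sales")
              .option("dbtable", "dbo.orders")
              .option("user", "etl_user")
              .option("password", "secret")
              .load())

        # Drop incomplete rows in Spark, then hand a bounded sample to Pandas.
        clean = df.dropna(subset=["order_id", "amount"])
        pdf = clean.limit(10000).toPandas()
        pdf["amount"] = pdf["amount"].astype(float)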

Data Scientist

Capgemini India
Sep, 2018 - Sep, 2021 3 yr
    Worked on image processing and computer vision; rewrote Lua code to Python; implemented deep learning models with PyTorch; used Pandas, NumPy, Matplotlib, OpenCV, Seaborn, Bokeh for data handling and visualization; deployed projects on Kubernetes and Docker; performed troubleshooting and code reviews; loaded and augmented image data; used Gaussian algorithms for heat maps; created videos for model validation; trained models using CNN and evaluated performance; used NVIDIA CUDA for GPU training; used HPC for model training; worked on data pipelines with SQL and PySpark; wrote unit and BDD tests; deployed Jupyter Notebook on Kubernetes.
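
    As an illustration of the CNN training on CUDA mentioned above, here is a toy PyTorch training step; the architecture and the random batch are stand-ins, not the project's actual model.

        # Hedged sketch: one CNN training step on GPU (falls back to CPU).
        import torch
        import torch.nn as nn

        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

        model = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 10),  # assumes 32x32 input images
        ).to(device)

        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        images = torch.randn(8, 3, 32, 32, device=device)   # fake batch
        labels = torch.randint(0, 10, (8,), device=device)

        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        print(f"loss: {loss.item():.4f}")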

Python Developer and Automation

Oliveboard Comptech
Nov, 2016 - Sep, 2018 1 yr 10 months
    Parsed HTML and text documents; uploaded data to MySQL using Python scripts; used regex for parsing; generated automated questions for exams; cleaned and tagged documents; used NumPy and Pandas for file reading; merged and stored data; developed programs for automated question generation; used Matplotlib, Chart Director, Pandas, and NumPy for development; wrote unit tests.
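
    The parse-and-upload flow might be sketched like this; the HTML pattern, table schema, and connection details are hypothetical.

        # Hedged sketch: pull question text from HTML with regex, insert into MySQL.
        import re
        import mysql.connector

        html = "<div class='q'>Q1. What is 2 + 2?</div><div class='q'>Q2. Define HTTP.</div>"
        questions = re.findall(r"<div class='q'>(.*?)</div>", html)

        conn = mysql.connector.connect(host="localhost", user="app",
                                       password="secret", database="exams")
        cur = conn.cursor()
        cur.executemany("INSERT INTO questions (text) VALUES (%s)",
                        [(q,) for q in questions])
        conn.commit()
        conn.close()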

Major Projects

4 Projects

Malware Classification

Oct, 2015 - Oct, 2016 1 yr
    Investigated malware files and classified them into families using Bitshred and machine learning algorithms. Used C, Python, NLTK, Scikit-learn, and PE Python library for classification. Applied Random Forest algorithm and statistics analysis for feature engineering and classification.
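
    A hedged sketch of the Random Forest step; random numbers stand in for the Bitshred fingerprint features extracted from PE code sections.

        # Hedged sketch: classify malware families with a Random Forest.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        rng = np.random.default_rng(0)
        X = rng.random((500, 64))         # stand-in fingerprint features
        y = rng.integers(0, 5, 500)       # five hypothetical malware families

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te)))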

DNS Log Analysis

Jan, 2015 - Oct, 2015 9 months
    Crawled DNS logs using port mirror; parsed PCAP files and stored in database using Python and Scapy. Mapped IP location using latitude and longitude database; identified anomalous behavior in queries and responses; developed a web application for IP location using PHP, Python, and MySQL.
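
    Extracting DNS query names from a PCAP with Scapy might look roughly like this; the capture file name is a placeholder.

        # Hedged sketch: parse DNS queries out of a mirrored capture with Scapy.
        from scapy.all import rdpcap
        from scapy.layers.dns import DNSQR

        packets = rdpcap("mirror_capture.pcap")
        for pkt in packets:
            if pkt.haslayer(DNSQR):
                qname = pkt[DNSQR].qname.decode(errors="replace")
                print(qname)  # candidate rows for the anomaly database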

Web Scraping and Vulnerability Analysis Application

Jan, 2014 - Dec, 2014 11 months
    Developed vulnerability analysis application using Python and open-source scanner tools. Collected system fingerprints, mapped vulnerabilities from NVD, and analyzed website data using Python libraries. Stored data in MySQL and CSV; performed data modeling and visualization.

Vulnerabilities Assessment and Penetration Testing

Dec, 2012 - Dec, 2013 1 yr
    Conducted online vulnerability assessment and penetration testing for organizations as part of CERT-In. Used tools like Nessus, Nmap, Qualys, Burp Suite; implemented new ideas for vulnerabilities in APIs and web services.

Education

  • Bachelor of Technology (Computer Engineering)

    GB Pant University

AI-interview Questions & Answers

So, starting from my career: I started working at IS, and there I worked on a lot of projects. I worked on DNS log analysis, where I needed to parse the logs, process them, and then load them into the SQL server. For the parsing and processing I used Python with related libraries like Scrapy and Beautiful Soup to extract the data from the log files. Then came a malware classification problem, where I needed to collect the malware files' code-section data using the PE Python library, generate fingerprints using the Bitshred method, prepare the dataset, feed the data to a machine learning algorithm (random forest), and then train and validate the model. Next, I worked on a Mercedes-Benz project, where I needed to load the data using Python and train and validate the model using PySpark; so I applied the transformations and normalized the data using Python libraries. Then I worked at JPMorgan, where I needed to develop an API to validate subnet data and also automate the network build process. I used Python with Flask and REST to build the API, and we also used microservices to deploy our product.

So for developing a high-performance API, first we need to check that the code is optimized, and then we apply a RESTful API to connect the servers and generate sessions, which also supports further security analysis. Then we create abstract classes so that we can reuse them for our development, and we use Python design patterns, like the singleton, along with creational, structural, and behavioral patterns as needed for the API. Finally, we deploy our service as a microservice so that all the APIs run together, and we can use Swagger to check the API's behavior and performance.
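
    A minimal sketch of the abstract-class idea from this answer: a shared base contract that concrete API resources implement. All names are illustrative.

        # Hedged sketch: an abstract base contract for API resources.
        from abc import ABC, abstractmethod

        class BaseResource(ABC):
            """Shared contract every API resource must follow."""

            @abstractmethod
            def get(self, resource_id: str) -> dict: ...

            @abstractmethod
            def create(self, payload: dict) -> dict: ...

        class SubnetResource(BaseResource):
            def get(self, resource_id: str) -> dict:
                return {"id": resource_id, "type": "subnet"}

            def create(self, payload: dict) -> dict:
                return {"created": True, **payload}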

So for a scalable application, we need to check the resources available for the application to use. Then we need to build a pipeline from which we can push our code; from there, the cloud picks up the updated code, runs it, and the application behaves according to what we have written.

So for real-time data processing, first there are a lot of open-source architectures we can use. Then, for a finance product, we need to connect to our financial data center with the right credentials so that we can get the real-time data, process it, transform it, and then utilize it. We can also view it using Spark, building on the Spark architecture with our real-time updated data.
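
    A minimal Spark Structured Streaming sketch of the real-time idea above, assuming a line-delimited socket feed as a stand-in for a real market-data source.

        # Hedged sketch: count streamed lines with Spark Structured Streaming.
        # Test locally with: nc -lk 9999
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("realtime-ticks").getOrCreate()

        ticks = (spark.readStream.format("socket")
                 .option("host", "localhost").option("port", 9999).load())

        counts = ticks.groupBy("value").count()   # running aggregation

        query = (counts.writeStream.outputMode("complete")
                 .format("console").start())
        query.awaitTermination()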

So in AWS services, I have used S3 with Databricks so that we can process our data using different Python commands and apply all the visualization and transformation methods on the data, which enhances our performance on the cloud. And to enhance the application further, we used different cluster nodes so that performance scales.
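
    Pulling a CSV from S3 into Pandas for transformation might look like this; the bucket, key, and column names are invented for the example.

        # Hedged sketch: read a CSV from S3 and aggregate it with Pandas.
        import io
        import boto3
        import pandas as pd

        s3 = boto3.client("s3")
        obj = s3.get_object(Bucket="retail-data", Key="daily/sales.csv")
        df = pd.read_csv(io.BytesIO(obj["Body"].read()))

        # Example transformation before writing results back or visualizing them.
        daily = df.groupby("store_id")["revenue"].sum().reset_index()
        print(daily.head())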

So for analyzing business needs, we first need to gather the information. Then we apply the logical model, and then we store it in the physical database. After that we apply our analysis logic to the dataset and perform our operations in the application so that we get a good result from the analysis.

So this is basically a recursive method for calculating the sum of n numbers. But here we can see that the base case is n less than or equal to 2, and that's the issue: when n equals 2 the sum should become 3, so it should not return 1, it should return 3. That is the one issue there, but yes, this is a recursive method implementation.
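
    The interview snippet itself is not shown, so this is a guess at the bug being described: a base case of n less than or equal to 2 returning 1, which silently drops the 2 (1 + 2 = 3, not 1).

        # Hedged reconstruction of the buggy base case and a corrected version.
        def sum_to_n_buggy(n: int) -> int:
            if n <= 2:
                return 1            # wrong for n == 2: should be 3
            return n + sum_to_n_buggy(n - 1)

        def sum_to_n(n: int) -> int:
            if n <= 1:
                return n            # correct base case
            return n + sum_to_n(n - 1)

        assert sum_to_n(2) == 3 and sum_to_n(5) == 15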

So basically, we use the singleton method so that a single object is used at a time across the entire process. For example, here we connect to the database once, and we can then use this database connection anywhere inside our program to perform operations on the database.
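
    A minimal Python sketch of the singleton pattern described here, with sqlite3 standing in for the real database.

        # Hedged sketch: one shared database handle for the whole process.
        import sqlite3

        class DatabaseConnection:
            _instance = None

            def __new__(cls):
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
                    cls._instance.conn = sqlite3.connect(":memory:")
                return cls._instance

        a = DatabaseConnection()
        b = DatabaseConnection()
        assert a is b   # both names refer to the same connection object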

So basically, I used a Python-based application in my recent project where I needed to validate subnets. I needed to write the API and then deploy it onto our microservice cloud platform. Then we connect different data sources through the API, built as a RESTful Flask API written in Python. When we validate the subnet, we check whether it is available in the data frame or not: if it is not available, we return a "not found" response code; otherwise we return a 201 response code, meaning the subnet already exists.
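
    A hedged reconstruction of the subnet-validation endpoint this answer describes; the known-subnet store is a stand-in, and the 404/201 codes follow the answer's description, not actual project code.

        # Hedged sketch: validate a submitted subnet against known records.
        import ipaddress
        from flask import Flask, jsonify, request

        app = Flask(__name__)
        KNOWN_SUBNETS = {"10.0.0.0/24", "192.168.1.0/24"}   # stand-in data frame

        @app.route("/subnets/validate", methods=["POST"])
        def validate_subnet():
            raw = request.get_json(force=True).get("subnet", "")
            try:
                subnet = str(ipaddress.ip_network(raw))
            except ValueError:
                return jsonify({"error": "invalid subnet"}), 400
            if subnet not in KNOWN_SUBNETS:
                return jsonify({"status": "not found"}), 404
            return jsonify({"status": "already exists"}), 201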

So basically, using Python with PostgreSQL or any other SQL database, we can connect to the database, and we can use Python libraries like Pandas, NumPy, or other data-frame libraries to perform all of these operations: group-by, aggregation, and transformation methods. Then we do our data processing and push the results back to the database. Using these transformations and aggregations we can analyze the database, for example how much data was used previously and what we need to plan for future data use across applications.
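
    The read-transform-write loop described here might look like this with Pandas and SQLAlchemy, using SQLite as a stand-in for PostgreSQL; table and column names are invented.

        # Hedged sketch: read from SQL, group-by aggregate in Pandas, write back.
        import pandas as pd
        from sqlalchemy import create_engine

        engine = create_engine("sqlite:///:memory:")  # swap for a postgresql:// URL
        pd.DataFrame({"region": ["N", "N", "S"], "sales": [10, 20, 5]}).to_sql(
            "orders", engine, index=False)

        df = pd.read_sql("SELECT * FROM orders", engine)
        summary = df.groupby("region", as_index=False)["sales"].sum()   # aggregation
        summary.to_sql("orders_summary", engine, index=False, if_exists="replace")
        print(summary)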