Vetted Talent

Shivang Vijay


With over four years of experience in the field, I have honed my skills as a developer with expertise in C++, Python, and Agile development methodologies. Throughout my career, I have successfully implemented load balancing techniques to optimize performance and ensure seamless user experiences. My proficiency in these areas, combined with my strong problem-solving abilities, allows me to tackle complex challenges and deliver high-quality solutions. I am passionate about staying up-to-date with the latest industry trends and technologies, enabling me to continuously improve and adapt to the evolving needs of the development landscape.

  • Role

    Senior Robotics Software Engineer - L3

  • Years of Experience

    5.33 years

  • Professional Portfolio

    View here

Skillsets

  • ROS2
  • PD controller
  • CycleGAN
  • Natural navigation
  • ZED camera
  • Frontier detection
  • VDA5050
  • Multi-agent path planner
  • YOLO
  • TensorFlow
  • SLAM
  • AMQP
  • ROS1
  • Robotics
  • Mosquitto
  • Fleet management system
  • Docker
  • DDS
  • CI/CD
  • Auto-encoders
  • ArUco markers
  • Ant colony optimization

Vetted For

5 Skills
  • Robotics Simulation Developer (AI Screening): 62%
  • Skills assessed: Large Language Models, Isaac Sim, NVIDIA Omniverse, Problem Solving Attitude, Python
  • Score: 56/90

Professional Summary

5.33 Years
  • Nov, 2025 - Present 6 months

    Robotics Simulation Intern

    NVIDIA
  • May, 2025 - Present 1 yr

    Robotics Simulation Intern

    NVIDIA
  • Mar, 2025 - Aug, 2025 5 months

    Google Summer of Code Mentor, Alaska

    Google Summer of Code
  • Jul, 2022 - Nov, 2023 1 yr 4 months

    Robotics Software Engineer - L2

    Unbox Robotics
  • May, 2023 - Aug, 2023 3 months

    C++ Developer

    Google Summer of Code
  • Nov, 2023 - Jul, 2024 8 months

    Senior Robotics Software Engineer - L3

    Unbox Robotics
  • Aug, 2021 - Jul, 2022 11 months

    Software Engineer

    Addverb
  • Sep, 2020 - Feb, 2021 5 months

    Internship - Mobile Robotics Department

    Addverb

Applications & Tools Known

  • Python
  • Keras
  • C++

Work History

5.33 Years

Robotics Simulation Intern

NVIDIA
Nov, 2025 - Present 6 months

Robotics Simulation Intern

NVIDIA
May, 2025 - Present 1 yr

Google Summer of Code Mentor, Alaska

Google Summer of Code
Mar, 2025 - Aug, 2025 5 months

Senior Robotics Software Engineer - L3

Unbox Robotics
Nov, 2023 - Jul, 2024 8 months
    • Created an advanced simulation ecosystem for robotics algorithm testing
    • Migrated the stack from ROS1 to ROS2
    • Developed a Multi-Agent Path Planner for swarm robots
    • Optimized traversable-area usage with Ant Colony Optimization
    • Incorporated junction-based re-planning and lane-relaxation rules
    • Dockerized the stack for the CI/CD pipeline

C++ Developer

Google Summer of Code
May, 2023 - Aug, 2023 3 months

Robotics Software Engineer - L2

Unbox Robotics
Jul, 2022 - Nov, 2023 1 yr 4 months
    • Played a pivotal role in developing the Fleet Management System (FMS) for Autonomous Mobile Robots (AMRs)
    • Designed and implemented a Multi-Agent Path Planning system
    • Developed communication protocols including DDS, AMQP, and VDA5050 using the Mosquitto library

Software Engineer

Addverb
Aug, 2021 - Jul, 2022 11 months
    Developed an algorithm to automate the manual process of creating an occupancy grid using SLAM techniques, ARUCO markers, and frontier detection.

Internship - Mobile Robotics Department

Addverb
Sep, 2020 - Feb, 2021 5 months
    Developed an algorithm to automate the manual process of creating an occupancy grid using SLAM techniques, ARUCO markers, and frontier detection. Received a Pre-placement offer (PPO) from the company.

Major Projects

8 Projects

AI Tools Aggregator Web Application

http://aihubs.co/
    Developed a web application that centralizes a comprehensive list of AI tools, implemented user login functionality, enabled users to curate a personal list of favorite AI tools, and provided additional user-centric functionalities.

ROBOMUSE 5.0

    Designed and developed an Autonomous Mobile Robot (AMR) capable of transporting payloads up to 100 kg between locations using natural navigation. Integrated a ZED Camera for human-robot interaction.

Contactless & Modular Design for Actuation of Elevator Buttons


Sterilization of Escalator Handle using UV rays


Image Super-resolution using Auto-encoders

    Successfully implemented an image super-resolution project using Auto-encoders in the Keras framework, improving image quality and clarity.

Sentiment Analysis using TensorFlow

    Conducted basic sentiment analysis using TensorFlow, gaining insights into the field of natural language processing.

Cycle GAN for Map and Satellite View Conversion

    Developed and implemented a Cycle GAN to facilitate the conversion between map views and satellite views.

Inter IIT Tech Meet

PlutoX Hackathon
Jan, 2018 - Dec, 2018 11 months
    • Represented IIT Jammu at the 7th Inter IIT Tech Meet in 2018 during the PlutoX hackathon event hosted at IIT Bombay.
    • Implemented a Proportional-Derivative (PD) controller to effectively minimize errors in infrared (IR) sensor data.
    • Successfully completed various challenging tasks during the hackathon, including playing table tennis (TT) with a drone and programming a drone to navigate in straight lines under both continuous and discontinuous wall scenarios using IR sensor technology.
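The PD control idea mentioned above can be sketched as a few lines of Python. This is a minimal illustration of the general technique (correction = Kp·error + Kd·d(error)/dt), not the actual hackathon code; the gains and sensor values are made-up assumptions.

```python
class PDController:
    def __init__(self, kp, kd):
        self.kp = kp
        self.kd = kd
        self.prev_error = 0.0

    def update(self, error, dt):
        # Proportional term reacts to the current error; derivative term
        # damps the response based on how fast the error is changing.
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative

# Example: hold a drone at a target distance from a wall using an IR reading.
controller = PDController(kp=0.8, kd=0.2)
target, reading = 0.5, 0.7            # metres (illustrative values)
correction = controller.update(target - reading, dt=0.05)
```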

Education

  • B.Tech

    Indian Institute of Technology (IIT) Jammu (2021)

Certifications

  • Mastering System Design: From Low-Level to High-Level Solutions

AI-interview Questions & Answers

Yeah, so I'm Shivang Vijay. I graduated from IIT Jammu, and from my first year of college I was involved in robotics activities. In my second year I became the head of the robotics club, and I took part in various national and international competitions. I represented IIT Jammu at Techfest IIT Bombay, at the Inter IIT Tech Meet, and at Exodia at IIT Mandi, where we took first positions. In my second year I also got the opportunity to work with Professor S. K. Saha on building AMRs (autonomous mobile robots) for hospitals that can carry 100 kg from one point to another; my contribution was creating a human-robot interaction mode using a ZED 3D depth camera and onboard sensors. That gave me my first exposure to ROS and industrial robotics, and I continued with it throughout college — even my B.Tech project was in this area. In my third year I got the opportunity to intern with Addverb Technologies, one of the top robotics companies in India, which has benchmarked itself against the international market as well. I worked in the mobile robotics department, again on AMRs, where I created an algorithm to automate the creation of occupancy maps, and I was also involved in simulating robotic arms and AMRs in Gazebo. I explored Omniverse at that time, though not in depth. I received a PPO (pre-placement offer) for my performance in that internship and joined the company full-time after graduation, continuing with AMR work. I created a fleet management system that controls more than 50 robots, which we deployed at an international site. I worked with Duality software, with Omniverse, and with Gazebo as simulation platforms, and we have run more than 300 robots in simulation using our fleet management system.
After completing one year there, I joined Unbox Robotics as a senior robotics engineer. There I work with AGVs (automated guided vehicles), which navigate by scanning QR codes. My contribution has been as part of the simulation team and the fleet management system team for the AGVs. We are currently capable of running more than 300 robots in that simulation, and I am a core member of the fleet management system for the AGVs, which has been deployed at international as well as Indian sites; at the Indian sites, more than 40 robots run through our fleet management system. That is my whole background in robotics.

All these strategies or tools — which do you recommend? Simulation outcomes, yeah. So firstly, let's talk about simulation outcomes. In simulation there is very little noise and no friction unless we model it, so there will always be a gap between simulation and reality. I'll talk about how we can minimize that gap, which parameters and tools help, and how to get a useful outcome from simulation. In the real world, a lot of errors occur — maybe in the fleet management system, maybe on the robots — so how do we tackle them? For this I personally built a Grafana dashboard. Whenever an error occurs on a robot or in any subsystem, its error ID and timestamp are recorded in the dashboard, and we log continuously, with a rollover mechanism creating rosbag files as logs are generated. The error ID is linked to the cloud and the logs are uploaded automatically, which is very helpful for post-analysis — the Grafana dashboard is a great tool for that. Another tool is Traefik, which maps your IP to a DNS name, so if you want to access any system you can reach it directly by name; this open-source tool is really helpful. Then there is rclone, which you can use to sync the rosbag files to cloud storage without much effort. For custom messages you will need extra tooling; Artron is a good tool for that. Coming to the technical part: in real life there is friction, and there is noise on the velocity curve, so the measured profile may not look like what we commanded — whereas in simulation there is no such noise by default.
The velocity profile and the controller should be the same in simulation and in the real world, but the simulation will not show noise because there are no external disturbances, while the real world definitely will. So we need to mimic that: we need to introduce friction and other disturbance sources that slow the robot down, so the simulation matches the real world. There may also be communication latency — in simulation all the components run on one machine, but in the real world some servers communicate over a network, so communication is a very important part. All these factors matter.
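Injecting noise and friction into a simulated velocity command, as described above, might look like the following sketch. The friction coefficient, noise level, and function name are illustrative assumptions, not values from any real deployment.

```python
import random

def realistic_velocity(cmd_vel, friction_coeff=0.05, noise_std=0.02, rng=random):
    # Friction removes a fraction of the commanded speed; Gaussian noise
    # models the actuation/sensor jitter seen on real hardware.
    v = cmd_vel * (1.0 - friction_coeff)
    return v + rng.gauss(0.0, noise_std)

random.seed(0)
samples = [realistic_velocity(1.0) for _ in range(1000)]
mean = sum(samples) / len(samples)   # should sit near 0.95 m/s
```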

Which protocol would I implement for inter-process communication in a distributed robotics system using Python? It depends on what the system needs. Suppose there is a centralized system: there is one master, and multiple robots communicate through that master. Then the best protocol may be TCP/IP, or maybe MQTT, and there is no need for peer-to-peer communication. But when you are talking about drones, each drone has to talk to all the other drones — the system is decentralized, so you want peer-to-peer communication. For that, the best choice is Fast DDS, because Fast DDS is very good at communicating across the network; for decentralized systems, I think Fast DDS is the best protocol. For centralized systems, I think AMQP (Advanced Message Queuing Protocol) is the best; you can implement AMQP very easily through RabbitMQ. For MQTT there are also multiple free, open-source libraries through which you can implement the protocol very easily. But I prefer AMQP because its queuing is very advanced: it handles all the messages automatically, and you are unlikely to lose an important message. All of this can be implemented in Python — RabbitMQ is supported in Python, even Kafka is supported in Python — so both AMQP and Fast DDS can be used from Python. ROS 2 also supports Python, and under the hood of ROS 2 there is Fast DDS, while under the hood of ROS 1 there is a TCP/IP-based mechanism with a centralized master. The disadvantage of that is a single point of failure.
So in Fast DDS we can use an initial peer list or a discovery server, which makes one node the single point that everyone communicates with; it then distributes the messages to every other client or server. So Fast DDS can also be used when there is a centralized system.
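The centralized routing idea above — a broker that delivers a command only to the robot it is addressed to — can be sketched with the standard library alone. A real system would use RabbitMQ (AMQP) or an MQTT broker as discussed; the `Broker` class and its method names here are made-up illustrations of the pattern.

```python
import queue

class Broker:
    """Toy centralized broker: one queue per registered robot."""
    def __init__(self):
        self.queues = {}

    def register(self, robot_id):
        self.queues[robot_id] = queue.Queue()

    def publish(self, robot_id, message):
        # Queued delivery means a slow consumer does not lose messages,
        # which is the property AMQP-style queuing gives you.
        self.queues[robot_id].put(message)

    def consume(self, robot_id):
        return self.queues[robot_id].get_nowait()

broker = Broker()
for rid in ("robot_1", "robot_2"):
    broker.register(rid)
broker.publish("robot_1", {"cmd": "goto", "target": (3, 4)})
msg = broker.consume("robot_1")   # only robot_1 ever sees this command
```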

If a simulation shows nondeterministic behavior — okay. So firstly, we can investigate in real time, and we can also do post-analysis. For real time, you write Python scripts so that you generate data properly for each feature you are implementing, or for the message flow from one process to another, and you can plot the graphs using a library; if you are using ROS, rqt_graph can plot everything very easily. For post-analysis, you record all the data: if you are using ROS there are rosbag files, and if not, you can dump the logs in JSON or CSV format. Then in post-analysis you can play back the bag file, or read the JSON to plot the graphs. I even developed a tool for this — a web tool — in which the recorded rosbag file is extracted into JSON and the JSON is read. We record the robot state at every timestamp, so the tool reads that state history and re-runs the robot in exactly the same manner. So when some nondeterministic behavior has happened, the logs have been generated and dumped into files, and we can run those files again to see what actually happened — how the robot moved and how things played out previously — and we can record a video while the tool replays it. We can run the tool again and again with those files and determine exactly what happened. Suppose the robot is moving in an odd manner: maybe there is a controller issue, or the controller is not very efficient. From observing that, we can form some hypotheses.
Maybe it is a controller issue, so we add more logs, do some tuning, and run the simulation again — we can deploy straight to the simulation, drive it from the recorded robot state, and see whether the nondeterministic behavior has been solved or not. At the very least, we can narrow it down.
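The timestamped log-and-replay workflow described above can be sketched as a few lines of Python: dump robot states as JSON lines, then read them back in order to reconstruct the run. The field names (`t`, `x`, `y`) are illustrative assumptions.

```python
import json
import io

def dump_states(states, fp):
    # One JSON object per line, in timestamp order.
    for s in states:
        fp.write(json.dumps(s) + "\n")

def replay(fp):
    # Reading the states back lets you re-run the exact trajectory and
    # inspect where the nondeterministic behavior appeared.
    return [json.loads(line) for line in fp if line.strip()]

log = io.StringIO()          # stands in for a log file on disk
dump_states([{"t": 0.0, "x": 0.0, "y": 0.0},
             {"t": 0.1, "x": 0.05, "y": 0.0}], log)
log.seek(0)
history = replay(log)
```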

Scale — yes. We used Azure cloud for the deployment of a large-scale robotics simulation, and initially we faced a significant problem. We had run more than 40 robots through that cloud and hit certain issues. Earlier, our system was based on ROS 1. In ROS 1 we created one master as a cloud service — a centralized system in which there is a master and multiple robots communicate through it, and the master sends commands to all the robots. We made the server the master and all the robots slaves, which was easy because ROS 1 gives you the master/slave node model; under the hood it works over TCP/IP (TCPROS). When we shifted from ROS 1 to ROS 2, we faced a lot of problems, because under the hood of ROS 2 there is Fast DDS — and with Fast DDS, if you are running 40 robots, each robot is by default communicating with the other 39 robots, since the discovery architecture talks to everything present on the network. That became a very big problem for running robots against the cloud, because messages were being exchanged between the local systems and the cloud. So firstly, we moved the cloud to our nearest geographical region: the robots were running on an Asian network, so we shifted to the Azure region nearest to India. Then we introduced an initial peer list, which is essentially a discovery server: there is only one node that every robot communicates with, that discovery server communicates with the master, and the master sends its commands back through the discovery server.
Suppose the master sends a message for robot number 1: the discovery server's role is to ensure that message is delivered only to robot 1, not to the other 39 robots. We also reduced our payload — we only transfer the important data from the robot to the cloud master — and we use multithreading and batching. Rather than hitting the communication channel continuously, we send a command only when necessary: on average, one message per robot every 10 seconds. We also put some intelligence inside the robot, so it can make certain decisions without consulting the master. So we reduced the payload, introduced batching and asynchronous multithreading, and that solved our communication problems between the cloud and the local servers. I have worked with the Azure cloud service, and we also added a security feature: we installed Fast DDS certificates on both the cloud and the robots.
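The throttling idea above — send to the cloud at most once per interval per robot — can be sketched like this. The 10-second interval and the `Throttler` name are illustrative assumptions; a real system would also batch payloads and use asynchronous sends.

```python
class Throttler:
    """Allow at most one send per robot per interval (seconds)."""
    def __init__(self, interval):
        self.interval = interval
        self.last_sent = {}

    def should_send(self, robot_id, now):
        last = self.last_sent.get(robot_id)
        if last is None or now - last >= self.interval:
            self.last_sent[robot_id] = now
            return True
        return False   # drop/queue locally; robot decides on its own

t = Throttler(interval=10.0)
sent = [t.should_send("robot_1", now) for now in (0.0, 3.0, 9.9, 10.0, 25.0)]
# Only the calls at t=0.0, t=10.0 and t=25.0 go through.
```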

Yeah, so reinforcement learning — let me talk a little about what reinforcement learning is. Suppose you want to move a robot from a point A to a point B. The robot starts from A and takes a path toward B. When it is far from B, we give the robot a poor reward; as it gets closer to B, we give it better rewards; and when it moves away, we give it bad rewards. The model is trained accordingly: it seeks good rewards, so it starts moving toward point B. We have just done a little mathematics to define the rewards and created the model, and with that model the robot moves toward B. Once the first task is completed, we change point B; the model tunes its parameters again, and slowly it becomes so accurate that wherever we place A and B, the robot moves from A to B very accurately. Now, why is this important compared with traditional methods like A* and Dijkstra? Because reinforcement learning learns its parameters on its own — the model is complex inside, but we don't have to do much to implement it; maybe the parameter tuning is a little tricky. With reinforcement learning we can also achieve things like mapless navigation, and even vision-based navigation has been achieved very accurately. I just talked about path planning, but in a simulation framework reinforcement learning has great advantages beyond that — it can be applied to sensor data, not just path planning.
We can even use reinforcement learning for the controller, and for perception tasks — detecting an object accurately, or running a robot accurately with only a monocular camera. All of these things are hard to achieve otherwise but easy to implement with reinforcement learning. So reinforcement learning plays a very important role in robotics and in simulation frameworks like Isaac Sim, and it ultimately improves robot behavior.
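The reward-shaping idea described above — good reward for moving toward the goal, bad reward for moving away — can be written as a one-line reward function. This is a toy sketch of the general technique, not a tuned setup; the function name and scale are assumptions.

```python
import math

def step_reward(prev_pos, new_pos, goal):
    # Reward is the progress made toward the goal this step: positive
    # when the robot gets closer, negative when it moves away.
    d_prev = math.dist(prev_pos, goal)
    d_new = math.dist(new_pos, goal)
    return d_prev - d_new

r_good = step_reward((0, 0), (1, 0), goal=(5, 0))    # moved closer: +1.0
r_bad = step_reward((0, 0), (-1, 0), goal=(5, 0))    # moved away: -1.0
```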

In the context of simulating robotic arm movement using NVIDIA Omniverse, consider the following pseudo-code snippet: current position, target position, movement speed, loop while current position is not equal to target position. Yeah, so firstly, direction equals the normalize of (target position minus current position), and ultimately you are giving that position to the robot arm. The robot arm position should be 3D coordinates, so current position should be 3D, target position should be 3D, and when you normalize, the output should also be 3D; if target position and current position are 3D, then direction will be 3D. Now, are you directly adding current position and multiplying direction by movement speed in one expression? I don't think that works cleanly, because direction is a 3D vector — you need a separate line for this calculation: current position plus direction times movement speed gives you the new position, and then you update the robot arm position. For the stopping check, you are comparing the target you want to reach with the current position, and if the difference comes under the tolerance, you break. But distance is not defined in the code snippet, so you need to define distance: subtract the 3D pose of the current position from the 3D pose of the target position, and if the result comes under the tolerance, then you can say the arm has reached the final position.
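A runnable version of the corrected loop discussed in this answer might look as follows, with the two fixes applied: the position update is computed on its own line, and `distance` is defined before the tolerance check. The variable names mirror the snippet in the question; the vector helpers and the overshoot guard are my own assumptions.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def move_arm(current, target, speed=0.5, tolerance=1e-3, max_steps=1000):
    for _ in range(max_steps):
        # distance must be computed before it is tested (the missing
        # definition flagged above).
        distance = math.dist(current, target)
        if distance < tolerance:
            break
        direction = normalize(tuple(t - c for c, t in zip(current, target)))
        step = min(speed, distance)          # don't step past the target
        # The update is its own statement, not fused into one expression.
        current = tuple(c + d * step for c, d in zip(current, direction))
    return current

final = move_arm((0.0, 0.0, 0.0), (1.0, 2.0, 2.0))
```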

So you are using Isaac Sim for developing a realistic robotics simulation. Okay, yeah. There may be logical errors in two places. The first is the simulate step: while the simulation is running, it executes in a loop, but it is not tending toward the end of the simulation — it is not driving the state toward the condition where the simulation should stop. Suppose the end condition is that theta becomes 0, but the simulate step keeps increasing theta instead of driving it toward 0; then the loop is never-ending. The check for the simulation end condition might be theta less than 0, but since the step keeps increasing theta, theta never becomes less than 0, so the loop runs forever and keeps occupying our RAM and CPU. Also, there is no sleep in this while loop; when using any loop in Python, C++, or any programming language, we should add some kind of sleep, because it reduces CPU utilisation very effectively. Another point relevant to the question is that the logical error may be only in the simulation step or only in the end-condition check — maybe the end condition you wrote is different from the one the step is driving toward, so execution never reaches that part. One more thing: if the simulation-running flag goes to false, the loop exits properly, but as written, the simulation cleanup will never happen, because the loop never terminates.
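The fixes described above can be sketched as a terminating loop: the step function drives theta toward the end condition, the loop re-checks that condition each iteration, and a sleep (here a zero-length yield) keeps CPU usage down. The decay model, threshold, and step cap are illustrative assumptions, not anything from Isaac Sim itself.

```python
import time

def run_simulation(theta=1.0, threshold=1e-3, max_steps=10_000):
    steps = 0
    # End condition is re-checked every iteration, and the step actually
    # drives theta toward it, so the loop terminates.
    while theta >= threshold and steps < max_steps:
        theta *= 0.9        # simulate_step: decay theta toward zero
        steps += 1
        time.sleep(0)       # yield; a real loop would sleep ~dt seconds
    # Cleanup code placed here is guaranteed to run once the loop exits.
    return theta, steps

theta, steps = run_simulation()
```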

Concurrency provides parallel computation — parallel calculation of graphs or parallel execution of multiple code paths. In a robotic arm there are forward kinematics and inverse kinematics, and there are multiple frames, origins, and parameters to calculate, such as x, y, theta or x, y, z, theta. You can perform those calculations in different threads so that the overall computation is faster; you then merge the results in one place and use them to move the robotic arm quickly. Another aspect of concurrency is that you need to handle threads properly — developers have limited control over thread scheduling, so you need a proper locking mechanism. In Python there is a limitation due to the Global Interpreter Lock (GIL), which gives the illusion of multithreading but is not true parallelism; if you want real multithreading or concurrency, you should use a language like C++ or another language that is good at it. That said, some builds of Python have removed the GIL, and there are upgrades and libraries that allow genuine parallelism. For robot movement, you can run the different calculations — inverse kinematics, forward kinematics — in different threads, then join the threads, merge the results in one place, and provide the final calculation to the controller.
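The fork-compute-join pattern described above can be sketched with Python's `threading` module: two independent kinematics computations run in parallel threads and are joined before the results are handed to the controller. The "kinematics" here is a toy planar two-link forward-kinematics function, a stand-in assumption rather than a real solver (and, as noted, the GIL means this yields concurrency rather than true parallelism for CPU-bound work in standard CPython).

```python
import math
import threading

def forward_kinematics(joint_angles, link=1.0):
    # Toy planar 2-link FK: end-effector (x, y) from two joint angles.
    t1, t2 = joint_angles
    x = link * math.cos(t1) + link * math.cos(t1 + t2)
    y = link * math.sin(t1) + link * math.sin(t1 + t2)
    return (x, y)

results = {}

def worker(name, fn, args):
    results[name] = fn(*args)      # each thread writes its own key

threads = [
    threading.Thread(target=worker,
                     args=("fk_a", forward_kinematics, ((0.0, 0.0),))),
    threading.Thread(target=worker,
                     args=("fk_b", forward_kinematics, ((math.pi / 2, 0.0),))),
]
for t in threads:
    t.start()
for t in threads:
    t.join()      # merge point: both results are available after this
```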