I work on developing and deploying innovative solutions for generative AI applications, such as text, image, and video synthesis, using LLMs and state-of-the-art techniques and frameworks such as LangGraph, LangChain, vector databases, advanced RAG, multi-agent systems, and MCP.
I have over 10 years of professional experience in software engineering, data science, and machine learning, with a strong background in Python, ML, NLP, LLMs, multi-agent systems, computer vision, Azure, and AWS. I graduated with a B.Tech. in Computer Science from the Indian Institute of Technology, Mandi, where I also served as a teaching assistant for the course on Artificial Intelligence.
Lead AI Engineer, MyPocketLawyer
Data and Applied Scientist II, Microsoft
Data Scientist - LLM and NLP Lead, CallidusAI
Research and Development Intern, Siemens Technology and Services
Lead Data Scientist, Envision Global Leadership
Data Science Manager, Fortem Genus Labs
NLP
LLM
GPT-4
FastAPI
OpenAI
Mistral
Qdrant
RAG
Python
Machine Learning
AI
.NET Core
Azure
MLOps
PowerBI
PyTorch
Docker
Kubernetes
AWS
GCP
SQL
NoSQL
Prompt Engineering
Flask
DevOps
Node.js
Redis
Hadoop
Nagios
Linux Kernel
Yeah. So my name is [inaudible]. I have around 8 years of experience. My primary skills are data science, machine learning, and generative AI with large language models. I also worked at Microsoft for around 5 years, and I am looking for the next challenge. I've worked on many challenging products, including ones built on large language models at Microsoft, and I've been able to deliver solutions there.
Yeah. So, basically, if there's a PyTorch model and we want a high-load, cloud-based production environment, there are various steps. One is that we can deploy it in a Kubernetes cluster and then do horizontal scaling, so that the requests get load-balanced; that is one thing. The other thing is that we can convert the model to ONNX format, which is more optimized for serving; that is the second. Third, we can also do vertical scaling and use a high-end GPU for serving it. That's what comes to mind
off the top of my head.
Yeah. So, basically, AWS Lambda is a serverless function, and if you want to maintain state in a Python-based microservice that interacts with TensorFlow models hosted on AWS, then for this state we can use some external database, a config store, or something like that. Basically, the Lambda can connect to that and fetch the state from it. So that is one way of managing state in a Python-based microservice.
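The idea above, keeping the function itself stateless and pushing all state into an external store, can be sketched like this. The dict-backed store here is only a stand-in for a real external service such as DynamoDB or Redis; the class and handler names are hypothetical.

```python
class ExternalStateStore:
    """Stand-in for an external store (e.g. DynamoDB, Redis) that outlives invocations."""

    def __init__(self):
        self._data = {}

    def get(self, key, default=None):
        return self._data.get(key, default)

    def put(self, key, value):
        self._data[key] = value


store = ExternalStateStore()


def handler(event, store):
    # Read state, update it, write it back; the handler holds no state of its own,
    # so any number of instances can run in parallel against the same store.
    count = store.get("invocations", 0) + 1
    store.put("invocations", count)
    return {"invocations": count}


print(handler({}, store))  # {'invocations': 1}
print(handler({}, store))  # {'invocations': 2}
```

Because the handler is stateless, scaling out (more Lambda instances, more pods) needs no coordination beyond what the external store already provides.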
Yeah. So some of the best practices are, number one, using git so that there is version control. Number two, for Python there is the PEP 8 style standard; if all the developers follow it, the code stays well managed. We can also use mypy, which is a library for static type checking of the code. Another good practice is to add type hints in the code; although Python doesn't require them, for readability purposes we should do that, and also do proper exception handling.
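The last two practices mentioned, type hints and proper exception handling, can be shown together in a small example (the function itself is hypothetical). A tool like mypy can check the annotations statically without running the code.

```python
def parse_ratio(numerator: str, denominator: str) -> float:
    """Parse two numeric strings and return their ratio.

    The annotations let mypy verify call sites statically; the try/except
    turns low-level failures into one well-defined error for callers.
    """
    try:
        return int(numerator) / int(denominator)
    except (ValueError, ZeroDivisionError) as exc:
        raise ValueError(f"invalid ratio {numerator}/{denominator}") from exc


print(parse_ratio("3", "4"))  # 0.75
```

Running `mypy` on a file like this would flag, for example, a call site passing an `int` where a `str` is annotated, before the code ever runs.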
Yeah. For containerization, I think, for a scikit-learn application, we can use Docker to containerize it. Like, we can create an API, with FastAPI or something like that, and then use Docker to containerize it and deploy it as a Docker image.
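A minimal Dockerfile for that setup might look like the sketch below. The file names (`app.py` exposing a FastAPI app named `app`, a pickled model in `model.joblib`, a `requirements.txt` listing fastapi, uvicorn, scikit-learn, and joblib) are assumptions for illustration.

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so this layer is cached across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the API code and the serialized scikit-learn model.
COPY app.py model.joblib ./

EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```

The resulting image can then be run locally with `docker run -p 8000:8000 <image>` or pushed to a registry for deployment.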
Yeah. So the seen.add call, basically, always returns the same thing whether the element is seen or not seen, whether it's there in the set or not. So then, if you do all(...) over those results, it will come out true.
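The behavior described, `set.add` returning the same value regardless of membership, can be checked directly; note that the value it returns is `None` in both cases, which is why deduplication code normally tests membership separately before adding.

```python
seen = set()

# set.add returns None both for a new element and for a repeat.
first = seen.add(1)   # 1 is newly added
second = seen.add(1)  # 1 is already present
print(first, second)  # None None


def dedupe(items):
    """Return items with duplicates removed, preserving first-seen order."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:  # membership test, separate from the add
            seen.add(x)
            out.append(x)
    return out


print(dedupe([3, 1, 3, 2, 1]))  # [3, 1, 2]
```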
Yeah. I think in this case there will be some race condition that happens, and that's why the threads will not function as desired. So we can use semaphores or locks in order to prevent it.
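The lock-based fix can be sketched with a shared counter, the classic case where an unsynchronized read-modify-write across threads loses updates. The variable names here are illustrative.

```python
import threading

counter = 0
lock = threading.Lock()


def increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:        # serialize the read-modify-write so no update is lost
            counter += 1


threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; without it, updates can be lost
```

A `threading.Semaphore` works the same way when more than one thread may enter the critical section at a time; a plain `Lock` is the special case of capacity one.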
Yeah. So blue-green deployment can easily be done in Kubernetes. Like, if you are deploying your models to Kubernetes as Docker containers, and you're using Helm charts for doing the same, then the rollout happens in a blue-green manner. If you do a new deployment, then say there are a couple of containers of the old model that are running; the new ones will start up, the old ones will get decommissioned, but it happens in a graceful manner.
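One common way to express the blue-green idea with plain Kubernetes objects is two parallel Deployments labeled by version, with a Service whose selector is flipped to cut traffic over. This is a hypothetical sketch; all names, labels, and the image reference are illustrative.

```yaml
# "Green" Deployment running the new model; a matching "model-blue"
# Deployment with version: blue runs the old one in parallel.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-green
spec:
  replicas: 2
  selector:
    matchLabels: {app: model, version: green}
  template:
    metadata:
      labels: {app: model, version: green}
    spec:
      containers:
      - name: model
        image: registry.example.com/model:2.0   # new model image (hypothetical)
---
apiVersion: v1
kind: Service
metadata:
  name: model
spec:
  selector: {app: model, version: green}   # flip "blue" -> "green" to cut over
  ports:
  - port: 80
    targetPort: 8000
```

Because both Deployments stay up during the switch, changing the Service selector back to `blue` is an instant rollback; the old Deployment is decommissioned only once the new one is trusted.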
Yeah. I think PyTorch is more widely adopted. There's much more developer support for PyTorch, much more open-source support for PyTorch, and that's why I think that is the preferred way to go. TensorFlow allows some customizability, so that's the pro of TensorFlow.
Yeah. So, basically, for scalability you have to take into account how you want to scale it: we have to deploy it in a manner where you can easily vertically scale it or horizontally scale it; that is one. For resilience, the neural network becomes more robust, more resilient, if you do augmentation on the training data set. And similarly, we have to adopt all the responsible AI principles while developing these. So those are the two or three things that can be done in order to make it scalable and resilient.