Vetted Talent

Shantanu Sharma

I am a passionate programmer, dedicated learner, and experienced data scientist with a deep love for solving complex problems through data-driven approaches. My enthusiasm for exploring and implementing diverse algorithms has driven me to continually expand my expertise and tackle a variety of challenging, real-world issues.


Currently, I am working as a Senior Data Scientist, where I develop innovative and optimized solutions for the healthcare industry. My work involves leveraging deep learning, reinforcement learning, natural language processing (NLP), and large language models (LLMs) to create impactful and efficient outcomes.


Let's connect and explore how we can collaborate to drive data science initiatives forward!

  • Role

    Senior Data Scientist & AI/ML Engineer

  • Years of Experience

    7 years

Skillsets

  • Scikit-learn
  • LLMOps
  • MLOps
  • MySQL
  • Programming languages
  • Spark
  • Streamlit
  • Python
  • SQL
  • NLP
  • PyTorch
  • TensorFlow
  • pandas
  • Deep Learning
  • Linux
  • Generative AI
  • Machine Learning
  • Reinforcement Learning
  • Java
  • NumPy
  • Keras
  • Matplotlib
  • Algorithms
  • NLTK
  • JavaScript
  • AWS
  • Data Engineering
  • Data Structures
  • Databricks
  • Django
  • Feature Engineering
  • Flask
  • Git
  • HDFS
  • HTML/CSS
  • Jupyter Notebook

Vetted For

13 Skills
  • Senior AI Engineer (Remote) - AI Screening
  • 64%
  • Skills assessed: AWS Lambda, Expo, graph database, GraphQL, Next.js, React (JS + Native), Serverless, LangChain, Prompt Engineering, Vector databases, Leadership, PostgreSQL, TypeScript
  • Score: 58/90

Professional Summary

7 Years
  • Sep, 2022 - Present (3 yr)

    Senior Data Scientist

    Tata 1mg
  • Aug, 2019 - Sep, 2022 (3 yr 1 month)

    Data Scientist

    3LOQ Labs
  • Sep, 2018 - Jul, 2019 (10 months)

    Product Engineer

    FeasOpt.AI
  • Sep, 2017 - Aug, 2018 (11 months)

    Software Engineer

    EduGorilla

Applications & Tools Known

  • Python
  • SQL
  • PyTorch
  • LangChain
  • AWS
  • Databricks
  • Spark
  • Azure
  • Hadoop
  • Scala
  • Docker
  • Linux
  • Django
  • Google Cloud Platform
  • PHP
  • MySQL
  • Git
  • Jupyter Notebook
  • Streamlit
  • HDFS
  • Amazon SageMaker

Work History

7 Years

Senior Data Scientist

Tata 1mg
Sep, 2022 - Present (3 yr)
    Developed dynamic pricing systems, health assistants leveraging LLMs, and portfolio discounting engines; implemented fraud order detection models; optimized data pipelines and reduced execution times.

Data Scientist

3LOQ Labs
Aug, 2019 - Sep, 2022 (3 yr 1 month)
    Developed machine learning pipelines for banks to improve customer engagement; created models to predict customer behavior and increase average balances; optimized runtimes and led engineer teams.

Product Engineer

FeasOpt.AI
Sep, 2018 - Jul, 2019 (10 months)
    Built a logistics management web application; created delivery clustering models and scheduling algorithms; conducted experimental design and data preprocessing; trained interns in AI and optimization algorithms.

Software Engineer

EduGorilla
Sep, 2017 - Aug, 2018 (11 months)
    Developed a platform for processing Indian schools data; optimized algorithms for tagging and data mining; increased Google AdSense revenue through data analysis scripts.

Major Projects

4 Projects

Billboard Hit Predictor

    Developed an interactive interface to predict the likelihood of a song becoming a Billboard hit using Spotify song data.

Movie Recommendation System

    Built a web app recommending movies based on similarity using NLP and content-based filtering; deployed it live, where it has been used by 40+ users.

Attritor Checker

    Used banking data to predict customer attrition; compared several machine learning models, with a deep neural network achieving the best performance.

Abstractive Text Summarizer

    Implemented a text summarizer using pre-trained and fine-tuned transformer models, achieving a ROUGE score of around 50.

Education

  • Bachelor of Technology in Computer Science

    SRM Institute of Science and Technology (2017)
  • Intermediate (Computer Science)

    Modern School (2013)

AI Interview Questions & Answers

My name is Shantanu Sharma, and I work as a Senior Data Scientist at Tata 1mg, where I lead the pricing team. Every medicine price on Tata 1mg is dynamic, driven by an algorithm I built using deep learning and reinforcement learning, and that algorithm earns the business roughly 6-7% more profit. Since I work in the healthcare domain, another current project of mine uses LLMs: a health summarization bot where a patient can upload a prescription and ask questions about it (prescriptions are often unclear), or ask general healthcare questions, such as how much water to drink at a given sugar level. We implement it with RAG and LLMs such as OpenAI's GPT models. I have also worked on fraud detection and anomaly detection projects here. Before that, I was at 3LOQ Labs, building machine learning solutions for some of India's top five banks: attrition systems, recommendation systems, models to improve customers' monthly average balance, dashboards, and multiple EDAs. In all, I have spent seven years in Python, data science, and machine learning.

Could you propose a method for integrating prompt engineering feedback into a vector database using TypeScript?

I have mainly worked with Python for vector DBs, but the same method works in TypeScript without issue. Start by setting up the project: where Python uses pip, here you install packages through Node.js with npm. Next, define data structures for the feedback and the prompt; a prompt could contain a user ID, the text, and its vector. Then create a function that calculates the vector for a prompt, a function that stores the feedback in the vector database (everything needs to be stored there), and a function that integrates the feedback. Those are the main high-level steps. For calculating the vectors, you can call an external embeddings API, with a model such as GPT or Llama.
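The steps above can be sketched in TypeScript. Everything here is illustrative: the character-count embedding and the in-memory FeedbackStore are hypothetical stand-ins for a real embeddings API and a real vector DB client.

```typescript
// Sketch: integrating prompt-engineering feedback into a vector store.

interface PromptFeedback {
  userId: string;
  promptText: string;
  rating: number; // e.g. 1-5 from the reviewer
}

interface StoredFeedback extends PromptFeedback {
  vector: number[];
}

// Toy embedding: letter-frequency histogram. A real system would call
// an external embeddings API here instead.
function embed(text: string): number[] {
  const v = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i] += 1;
  }
  return v;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

class FeedbackStore {
  private records: StoredFeedback[] = [];

  // Embed the prompt text and persist the feedback alongside its vector.
  store(fb: PromptFeedback): void {
    this.records.push({ ...fb, vector: embed(fb.promptText) });
  }

  // Retrieve feedback attached to prompts similar to the query text.
  similar(query: string, topK = 3): StoredFeedback[] {
    const qv = embed(query);
    return [...this.records]
      .sort((a, b) => cosine(b.vector, qv) - cosine(a.vector, qv))
      .slice(0, topK);
  }
}
```

A real integration would swap `embed` for the embeddings API call and `FeedbackStore` for the vector DB client's upsert/query methods, but the data flow is the same.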

How can the builder pattern in TypeScript simplify the process of crafting complex prompts for langchain.js?

The builder pattern provides a structured and flexible way to construct prompts in a step-by-step manner. It also helps you manage optional parameters, which improves code readability and makes the prompt creation process easier to maintain and modify.
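A minimal sketch of what such a builder could look like; PromptBuilder and its section names are hypothetical, not a langchain.js API.

```typescript
// Sketch: a fluent builder that assembles a prompt step by step and
// simply omits any optional section that was never set.
class PromptBuilder {
  private system = "";
  private context: string[] = [];
  private question = "";

  withSystem(text: string): this {
    this.system = text;
    return this;
  }

  addContext(text: string): this {
    this.context.push(text);
    return this;
  }

  withQuestion(text: string): this {
    this.question = text;
    return this;
  }

  build(): string {
    const parts: string[] = [];
    if (this.system) parts.push(`System: ${this.system}`);
    for (const c of this.context) parts.push(`Context: ${c}`);
    if (this.question) parts.push(`Question: ${this.question}`);
    return parts.join("\n");
  }
}
```

The optional-parameter benefit is visible in `build()`: unset sections never appear, so callers are not juggling undefined arguments or positional parameters.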

How would you implement TypeScript types and interfaces for managing complex queries in a vector DB?

To manage complex queries in a vector DB with TypeScript, we can create a set of typed interfaces and classes that define the structure and the behavior of the queries. This ensures type safety first of all, and it provides a clear API for constructing and executing the queries. The basic types and interfaces could cover the query variants you need, such as vector-based queries and similarity queries.
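A small sketch of the idea with hypothetical query shapes; a discriminated union lets the compiler check every query variant.

```typescript
// Sketch: typed query interfaces for a vector DB, modelled as a
// discriminated union on `kind`.
interface SimilarityQuery {
  kind: "similarity";
  vector: number[];
  topK: number;
}

interface FilterQuery {
  kind: "filter";
  field: string;
  equals: string | number;
}

type VectorDbQuery = SimilarityQuery | FilterQuery;

// Serialise a query for a hypothetical DB client. The switch is
// exhaustive over `kind`, so adding a new query variant becomes a
// compile-time error until it is handled here.
function toRequest(q: VectorDbQuery): string {
  switch (q.kind) {
    case "similarity":
      return `similarity(topK=${q.topK}, dim=${q.vector.length})`;
    case "filter":
      return `filter(${q.field}=${q.equals})`;
  }
}
```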

What strategy would you use to handle vector DB schema changes in a TypeScript code base?

This is not specific to TypeScript; the same approach would work in Python. Handling schema changes in a vector DB from a TypeScript code base involves several strategies that together ensure data integrity, minimize downtime, and keep the code type-safe. First, define versioned schemas using TypeScript interfaces, for example DocumentV1 and DocumentV2. Then write migration scripts to move data from the old schema to the new one. Update the query builders so they are aware of every schema version and handle each one appropriately. You can also maintain a schema registry that records the history of all schemas and their versions, use a factory pattern to handle the different document versions, and support backward and forward compatibility so the application can read and write both old and new schema versions. Finally, when all of that is done, test and validate the migration scripts and the schema handling to ensure data integrity.
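The versioning and migration steps can be sketched like this; DocumentV1, DocumentV2, and the added tags field are invented for illustration.

```typescript
// Sketch: versioned document schemas with a version discriminant, a
// migration step, and a forward-compatible reader.
interface DocumentV1 {
  version: 1;
  text: string;
  vector: number[];
}

interface DocumentV2 {
  version: 2;
  text: string;
  vector: number[];
  tags: string[]; // field added by the v2 schema
}

type AnyDocument = DocumentV1 | DocumentV2;

// Migration script: lift a v1 document into the v2 schema with a
// sensible default for the new field.
function migrateV1toV2(doc: DocumentV1): DocumentV2 {
  return { version: 2, text: doc.text, vector: doc.vector, tags: [] };
}

// Backward-compatible reader: accepts both versions, normalising to v2,
// so query code only ever sees the latest schema.
function readDocument(doc: AnyDocument): DocumentV2 {
  return doc.version === 1 ? migrateV1toV2(doc) : doc;
}
```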

What TypeScript best practices ensure safe consumption of data from a vector DB in an AI context?

To ensure safe consumption of data from a vector DB in an AI context using TypeScript, several practices enhance safety, security, and maintainability. One is type guards with runtime validation, to ensure the data conforms to the expected types. A second is strong typing and interfaces: define clear, strict types for data models, queries, and responses, so you have type safety. Another is handling data securely, for example encrypting data and securing the communication channel. You should also handle and log errors, so you can see what failed and how to improve it. Beyond that, write abstract, modular code for readability, keep documentation and comments up to date, and write comprehensive tests to ensure the correctness and reliability of your code.
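The type-guard practice can be sketched as follows; SearchResult is a hypothetical result shape, not any particular vector DB's response type.

```typescript
// Sketch: a runtime type guard validating rows coming back from a
// vector DB before they are consumed, so malformed rows are rejected
// instead of propagating through the AI pipeline.
interface SearchResult {
  id: string;
  score: number;
  text: string;
}

function isSearchResult(value: unknown): value is SearchResult {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" &&
    typeof v.score === "number" &&
    Number.isFinite(v.score) &&
    typeof v.text === "string"
  );
}

// Keep only the rows that pass validation; a real system would also
// log or count the rejected ones.
function sanitize(rows: unknown[]): SearchResult[] {
  return rows.filter(isSearchResult);
}
```

Because `isSearchResult` is a type predicate, the compiler narrows the filtered array to `SearchResult[]`, so the rest of the code gets static type safety backed by the runtime check.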

Looking at this code in React, explain why the component might not be rendering the expected results when the items prop changes.

The component may not render the expected results when the items prop changes because state is only set once, in the constructor, when the component is first instantiated. If the items prop changes later, the state does not automatically update to reflect the change, so the component never renders the new items. The issue is initializing state from props in the constructor: state does not update with prop changes. Updating state in response to prop changes, or deriving the rendered data from props directly, is the better solution.
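Since the original code is not shown, here is a framework-free sketch of the same pitfall, with plain classes standing in for React components; in real React the fix would be to render from props directly or to update state when the prop changes (e.g. via getDerivedStateFromProps).

```typescript
// Sketch of the pitfall: state copied from props once, in the
// constructor, so later prop changes are never reflected.
interface Props {
  items: string[];
}

class BuggyList {
  props: Props;
  private state: { items: string[] };

  constructor(props: Props) {
    this.props = props;
    // Anti-pattern: a snapshot of props is taken exactly once.
    this.state = { items: props.items };
  }

  render(): string[] {
    return this.state.items; // stale after this.props changes
  }
}

class FixedList {
  props: Props;

  constructor(props: Props) {
    this.props = props;
  }

  render(): string[] {
    return this.props.items; // always derived from current props
  }
}
```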

While reviewing this GraphQL query, can you spot any potential issues that could lead to unexpected results? Please explain your answer.

The query, getItems, requests id, price, and an unrelated field. The issue I can see is that the query requests a field that is outside the scope of the items selection; it may not exist in the schema, or may not be related to the items query, which leads to unexpected results or even errors. To fix this, we should either remove the unrelated field if it is not needed, or ensure it is a valid field that exists in the schema so it can be queried correctly.
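The original query is not shown, so the strings below only reconstruct the shape of the problem from the description above (getItems, id, price, and a stray unrelated field); the field names are illustrative.

```typescript
// Hypothetical reconstruction of the issue: a field requested outside
// the scope of the items selection.
const problematicQuery = `
  query GetItems {
    items {
      id
      price
    }
    unrelatedField   # outside the items selection; may not exist in the schema
  }
`;

// Fix: drop the stray field (or move it into a selection that the
// schema actually defines).
const fixedQuery = `
  query GetItems {
    items {
      id
      price
    }
  }
`;
```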

What techniques in TypeScript ensure that prompt engineering code remains scalable as new functionalities are added?

A few things come to mind. One is modular design: break the code into self-contained modules, each handling a separate piece of functionality, so it is easier to maintain and extend. You can also use interfaces and types for better extensibility and type safety. A factory pattern helps too: use factories to create objects, which centralizes the creation logic and makes the code easier to extend or modify. The architecture could be event-driven, decoupling components and handling complex workflows. Asynchronous programming with async/await and promises handles asynchronous operations cleanly. Better testing is another way: write unit and integration tests to ensure the robustness of the code. And finally, document the code, write comments, and use a version control system.
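The factory-pattern point can be sketched as a handler registry; the handler kinds and prompt texts are made up for illustration.

```typescript
// Sketch: a factory backed by a registry, so adding a new prompt
// handler means registering one entry rather than editing a switch
// statement scattered through the code base.
type PromptHandler = (input: string) => string;

const handlers = new Map<string, PromptHandler>();

function registerHandler(kind: string, handler: PromptHandler): void {
  handlers.set(kind, handler);
}

function createPrompt(kind: string, input: string): string {
  const handler = handlers.get(kind);
  if (!handler) throw new Error(`No handler registered for kind: ${kind}`);
  return handler(input);
}

// Existing functionality...
registerHandler("summarize", (text) => `Summarize the following:\n${text}`);

// ...and a later addition touches no existing code:
registerHandler("translate", (text) => `Translate to French:\n${text}`);
```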

What strategy would you use with TypeScript decorators to add metadata functionality in langchain.js applications?

A few things come to mind. Enable experimentalDecorators and emitDecoratorMetadata in the compiler options, and ensure reflect-metadata is imported at the entry point of the application. Then define class and method decorators for the metadata and apply them to the relevant classes and methods. The benefits are encapsulation (metadata handling and logging are kept out of the classes and methods themselves, keeping them clean), flexibility, reusability, and readability. An example use case is prompt metadata: annotate prompts with descriptions, versions, and other relevant metadata so they can be managed and tracked efficiently.
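A portable sketch of the strategy: to stay runnable without the experimentalDecorators compiler flag, promptMeta is written here as a decorator-style higher-order function and applied manually; with the flag enabled, the same idea could be applied as @promptMeta({...}). The registry and metadata shape are invented for illustration.

```typescript
// Sketch: attach prompt metadata to functions via a decorator-style
// wrapper and a central registry.
interface PromptMetadata {
  description: string;
  version: string;
}

const metadataRegistry = new Map<Function, PromptMetadata>();

// Records metadata for the function and returns it unchanged, so the
// wrapped function behaves exactly as before.
function promptMeta<T extends Function>(meta: PromptMetadata) {
  return (fn: T): T => {
    metadataRegistry.set(fn, meta);
    return fn;
  };
}

const summarizePrompt = promptMeta({
  description: "Summarizes a patient prescription",
  version: "1.2.0",
})((text: string) => `Summarize:\n${text}`);
```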

How can TypeScript enums optimize prompt engineering for language models in terms of maintainability and error reduction?

TypeScript enums can play a significant role in optimizing prompt engineering for language models by providing a structured, type-safe way to handle predefined sets of constants, such as prompt types, categories, and response options. This improves maintainability and reduces errors. One benefit is type safety: enums provide compile-time checking, which reduces the likelihood of errors caused by typos or invalid values. They also enhance readability and maintainability. Some use cases: define prompt types, such as informational, instruction, or warning; categorize prompts by usage context, such as error handling or feedback; and manage the different response options for prompts.
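A small sketch of the enum use cases named above; the enum members and the prompt format are illustrative.

```typescript
// Sketch: enums constraining prompt construction so an invalid prompt
// type or category is caught at compile time.
enum PromptType {
  Informational = "informational",
  Instruction = "instruction",
  Warning = "warning",
}

enum PromptCategory {
  ErrorHandling = "error-handling",
  Feedback = "feedback",
}

function buildPrompt(type: PromptType, category: PromptCategory, body: string): string {
  // A typo such as PromptType.Informatinal is a compile error here,
  // where a raw string would silently produce a malformed prompt.
  return `[${type}/${category}] ${body}`;
}
```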