Vetted Talent

Tanisha Medewala

Tanisha Medewala is an accomplished Artificial Intelligence Engineer with extensive experience in developing and deploying AI solutions, managing MLOps pipelines, and architecting scalable systems on Azure and AWS. Proficient in TensorFlow, Python, API development, and Infrastructure as Code with Terraform, she excels in driving innovation through Gen-AI projects. Tanisha has a strong background in healthcare technology, having released multiple projects into production, and demonstrated expertise in Azure services, security measures, and reinforcement learning models. She holds advanced degrees in AI and ML and cloud computing, complemented by numerous certifications and a proven track record in hackathons and team leadership roles.

  • Role

    Senior MLOps Engineer

  • Years of Experience

    14.83 years

  • Professional Portfolio

    View here

Skillsets

  • Load Balancer
  • Venafi
  • Anaconda Navigator
  • API Gateway
  • AWS Lambda
  • AWS SageMaker
  • AWS VPC
  • Azure Databricks
  • Azure ML Pipelines
  • Azure VNet
  • Cosmos DB
  • Elastic
  • Google Colab
  • Hugging Face
  • IBM Cloud
  • Jupyter Notebook
  • Ubuntu
  • Matplotlib
  • Milvus DB
  • MLflow
  • Network ACLs
  • OpenAI
  • Pinecone
  • Seaborn
  • Sgs
  • Spyder IDE
  • Timestream DB
  • Unix
  • VM
  • VS Code
  • Watsonx.ai
  • Windows
  • AWS S3
  • Azure - 3 Years
  • TensorFlow - 3 Years
  • Jenkins - 1 Year
  • Python - 4 Years
  • Java - 3 Years
  • R
  • Keras
  • Scikit-learn
  • OpenCV
  • LangChain
  • NLTK
  • spaCy
  • XGBoost
  • Gensim
  • ADF
  • AWS - 3 Years
  • Azure Data Lake Storage
  • Azure Function App
  • Databricks
  • EBS
  • EC2
  • Figma
  • Git
  • GitHub Actions
  • Jira
  • Linux
  • Looker
  • MongoDB
  • MySQL
  • PostgreSQL
  • Terraform

Vetted For

12 Skills
  • Python Backend (MLOps) Engineer (Remote) - AI Screening
  • 74%
  • Skills assessed: Design Patterns, MLOps, Database Design, Product-based Project, AWS, Docker, Leadership, Machine Learning, MongoDB, MySQL, NoSQL, Python
  • Score: 67/90

Professional Summary

14.83 Years
  • Feb, 2025 - Present 1 yr 3 months

    Senior MLOps Engineer

    Fractal
  • Oct, 2024 - Dec, 2024 2 months

    WatsonX AI Engineer

    IBM
  • Mentor

    Testbook
  • Jan, 2020 - Nov, 2021 1 yr 10 months

    Backend Developer

    Zumen Software India
  • Expert

    GreyNodes
  • Subject Matter Expert

    upGrad
  • Oct, 2018 - Jan, 2020 1 yr 3 months

    Junior Java Developer

    Orion India Systems
  • Jun, 2018 - Sep, 2018 3 months

    Trainee

    Zoho Technologies

Applications & Tools Known

  • Python
  • Jupyter Notebooks
  • Spyder
  • Visual Studio
  • TensorFlow
  • Keras
  • AWS EC2
  • Lambda
  • QuickSight
  • S3
  • MySQL
  • MongoDB
  • PostgreSQL
  • Rally
  • Figma
  • Matplotlib
  • Seaborn
  • Looker
  • AWS QuickSight
  • Hugging Face
  • OpenAI
  • Terraform
  • Git
  • GitHub Actions
  • Jenkins

Work History

14.83 Years

Senior MLOps Engineer

Fractal
Feb, 2025 - Present 1 yr 3 months
    Led the design and implementation of RAG (Retrieval-Augmented Generation) pipelines, integrating retriever, summarization, multithreading, post-processing, and Snowflake database connections for scalable client solutions. Developed and deployed GPT-4.1 powered code utilities via LLM gateway services, enhancing automation and intelligence across enterprise workflows. Guided and mentored junior MLOps engineers, providing technical leadership and framework best practices to ensure high-quality deliverables. Actively participated in LLMOps projects, collaborating with cross-functional teams to build robust AI systems tailored to client requirements. Engaged in Fractal Hive corporate induction trainings, contributing to knowledge sharing and onboarding for new team members.
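The RAG pipeline described above can be sketched at a high level. This is a minimal, hypothetical skeleton for illustration only: the in-memory document list and the retriever, summarizer, and post-processing functions are placeholders, not code from the actual client solution (which also involved an LLM gateway and Snowflake connections).

```python
from concurrent.futures import ThreadPoolExecutor

# Toy in-memory "document store" standing in for a real retriever backend.
DOCS = [
    "Terraform provisions cloud infrastructure as code.",
    "SageMaker trains and deploys machine learning models.",
    "Snowflake stores structured analytical data.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(query_words & set(d.lower().split())))
    return scored[:k]

def summarize(doc: str) -> str:
    """Placeholder for an LLM summarization call."""
    return doc.split(".")[0]

def post_process(summaries: list[str]) -> str:
    """Join per-document summaries into one response."""
    return " | ".join(summaries)

def rag_answer(query: str) -> str:
    docs = retrieve(query)
    # Summarize retrieved documents concurrently, mirroring the
    # multithreading stage of the pipeline.
    with ThreadPoolExecutor(max_workers=4) as pool:
        summaries = list(pool.map(summarize, docs))
    return post_process(summaries)
```

In a production pipeline each stage would be swapped for a real component: a vector-store retriever, a gateway-mediated LLM call, and a database write.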

WatsonX AI Engineer

IBM
Oct, 2024 - Dec, 2024 2 months
    Engaged in IBM enablement by participating in the SKO4 2024 Sales Kickoff, IBM Health and Safety, IBM Core Training, and IBM Sales Incentive Education. Participated with multiple industry leaders in Global Sales School, learning the IBM portfolio across IBM Automation, Data and AI in WatsonX, Mainframes, Storage, Power, Expert Labs, Consulting, etc. Presented a show-and-tell demo as a Tech-Seller for a Hybrid Cloud and AI use case and performed a Sales Play. Worked with the team to create, build, and review pilots for Watsonx.ai, Watsonx.data, and Watsonx.governance, along with support activity in Security, ITAM, and Sustainability products.

Mentor

Testbook

Subject Matter Expert

upGrad

Expert

GreyNodes

Backend Developer

Zumen Software India
Jan, 2020 - Nov, 2021 1 yr 10 months
    Developed engaging product tours using React and Java to introduce new features. Collaborated with UI/UX team to enhance existing applications and contributed to API and service development, leveraging Spring Boot, Spring JPA, Maven and Microservices Architecture. Participated in architectural and database design forums for new products, utilizing PostgreSQL and Mongo DB as a backend solution. Engaged in scrum meetings, team refreshment tasks, and addressed reported bug tickets with precision, utilizing Git for version control. Prioritized and scoped feature requests with an agile team, ensuring efficient development workflows.

Junior Java Developer

Orion India Systems
Oct, 2018 - Jan, 2020 1 yr 3 months
    Led API and service development as Project Team Lead, ensuring specification compliance and resolving staging issues with Spring Boot's robust capabilities. Participated in scrum meetings and requirement discussions, aligning development efforts with project goals, and utilizing Spring Boot, Spring JPA, Liquibase, PostgreSQL and Maven for streamlined monolithic application development. Engaged in design forums to analyze and prepare C4 model for visualizing the software architecture. Collaborated with the AI team to enhance project understanding and contribute to requirement discussions.

Trainee

Zoho Technologies
Jun, 2018 - Sep, 2018 3 months
    Worked on client-specific JavaScript tasks, implementing modular design patterns for streamlined execution. Developed bot event management flows for diverse industries like automobile and food delivery using Deluge. Integrated third-party tools such as HubSpot, Calendly, and Mailchimp into bot templates. Participated in code-based design forums to enhance development practices.

Achievements

  • Received Reward & Recognition in the form of an iAppreciate Card as Team Lead for the MLOps Monitoring Framework
  • Hold first rank in the AI & ML forum discussion on upGrad
  • Received first place in the team hackathon at Zumen Software India Private Limited for a Twilio demo on procurement notification
  • Received third runner-up place in the Orion Hackathon for Prediction of Bank Marketing Subscription
  • Received third prize in the Complete the Code event (coding) at OZMENTA 2k17, Velammal Engineering College
  • Cleared the AWS Machine Learning Specialty exam
  • Received third prize in the Build-Out Web event (web designing) and second prize in the ZIUQ event (technical quiz) at the TECHVIZHAA 7 IT symposium, Velammal Engineering College
  • Received proficiency awards for academic performance

Major Projects

2 Projects

Building a RAG-enabled Customer Feedback Assistant

    This project is used to analyze and provide insights into customer feedback using NLP & OpenAI.

Automating Data Visualizations with Generative AI

    This project is an AI-powered automated data visualization tool built with Streamlit and OpenAI's GPT-4.

Education

  • M.Sc. in Artificial Intelligence and Machine Learning

    Liverpool John Moores University
  • PG Diploma in Cloud Computing

    Great Learning (2024)
  • PG Diploma in AI and ML

    IIIT-Bangalore & UpGrad (2022)
  • B.E. in Computer Science and Engineering

    Misrimal Navajee Munoth Jain Engineering College (2018)

Certifications

  • AWS

    Great Learning (Jan, 2023)
  • C, C++ certified from NIIT Adyar, Chennai
  • PG Diploma in AI & ML from IIIT-Bangalore & upGrad
  • HealthTech Specialization Level 1 - Healthcare Domain Specialist
  • GenAI Hackathon
  • AWS Certified Machine Learning - Specialty (MLS-C01)
  • TensorFlow Extended
  • Serverless Inference using AWS
  • ML with Imbalanced Data
  • Deployment of ML Models
  • Testing and Monitoring ML Model Deployments
  • Data Science Course 2022: Complete Data Science Bootcamp
  • MLOps: Continuous Delivery and Automation Pipeline
  • PG Diploma in AI and ML - Statistics and ML-1
  • Programming in C
  • Object-Oriented Programming using C++

Interests

  • Exploring
  • Singing
  • Dance
  • Painting
  • Travelling

AI Interview Questions & Answers

    Hi, everyone. This is Tanisha Medewala, currently working as an AI engineer at CTS Tech, a healthcare technology company, where AI engineers design, build, develop, and maintain machine learning applications. I started my journey here with MLOps assessment projects spanning AWS and Azure: clients would bring us their architecture workflows, and we would analyze the current architecture, identify the security and process gaps and the regulatory compliance requirements that needed to be built in, and recommend the correct way of implementing them. Following that, I worked on MLOps implementation projects. One of them required an MLOps registry module and an MLOps monitoring module: we tracked all the models being created, selected one based on its evaluation metrics, stored it in a central repository, pushed it to production, and built visualization capabilities so that business stakeholders and data scientists could see model performance metrics, data drift, business KPIs, outlier detection, and so on. We did this using AWS SageMaker, QuickSight, Timestream DB, S3 buckets, and related services.
    The next piece I worked on was call intelligence for a customer service center. For each connected call, we intelligently determined the call intent, a summarization of the purpose of the call, and who the caller was, using AI capabilities such as a routing model, reinforcement learning models, and an intent model. These models produced key-value pairs that were sent back in the response, so all the decision-making steps during the call, such as how it should be classified and which department it should be routed to, happened in the back-end service. I owned the infrastructure side, setting up the entire project on Azure using Terraform and GitHub Actions, and working in the Azure Machine Learning Studio workspace, where we leveraged notebooks and data experiments and deployed the model as a web service endpoint. Apart from this, I worked on a brain tumor segmentation POC that leveraged a 3D U-Net architecture for early detection of tumor cells across different brain regions, annotating them and surfacing recommendations to the surgeon through a dashboard, which would then recommend whether the patient should go for chemotherapy, radiotherapy, and so on.
    Before moving into AI and machine learning, I also worked about 2.5 years as a back-end Java developer at Zumen, a product-based company, and earlier as a trainee at Zoho. These are the companies I have worked with extensively as a Java developer.

    Share your experience with databases and writing queries, highlighting specifically where you have worked. Coming to databases: I have worked with Timestream DB on AWS, which tracks events along a time dimension, so time-based identifiers are used across all the rows and observations. The second piece I have worked with is RDS instances, using Aurora, Postgres, or MySQL engines for transactional data stored in a tabular format. The third is MongoDB, where we saved key-value pairs as documents; that was during back-end application development. Similarly, I used Postgres while developing Java applications. On the cloud side, I have used Azure storage accounts, going for blob storage and maintaining file systems and folder hierarchies, and S3 buckets for keeping transaction logs and model artifacts in a proper format.
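A couple of the query patterns mentioned above can be illustrated with the standard library's sqlite3 as a stand-in for an RDS engine such as Postgres or MySQL; the `orders` table and its columns are invented purely for this example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A small transactional table, as one might keep in RDS (Aurora/Postgres/MySQL).
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("alice", 120.0), ("bob", 75.5), ("alice", 30.0)],
)
conn.commit()

# Parameterized aggregate query: total spend per customer, largest first.
cur.execute(
    "SELECT customer, SUM(amount) AS total FROM orders "
    "GROUP BY customer ORDER BY total DESC"
)
totals = cur.fetchall()
```

The same SQL would run against Postgres via a driver like psycopg2 with only the connection line changed.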

    When it comes to experience with OOP concepts, yes, I have worked with them. In object-oriented programming, a class is a blueprint and an object is an instantiation of that class; using these two, we encapsulate the entire data unit into variables and methods kept inside the class. We also use polymorphism, and I have used a lot of it in Java. In Python you cannot really do method overloading, so we prefer default or keyword arguments instead, and we write class methods and static methods, using class (static) variables and instance variables, utilizing each as the need arises. With regards to SOLID principles: functions should be defined with docstrings and comments stating exactly what they are used for, they should be reusable, and we should not write redundant logic. Dependency inversion should be there: when we create inheritance, all the properties of the base class are inherited by the child classes, and we give the implementation in the child classes. Similarly, when we use interfaces, we declare abstract methods in the parent class, and those abstract methods have to be implemented in the child classes. We also modularize the code into different Python utilities for different purposes; for a machine learning project, that means separate utilities for data preprocessing, model training, model evaluation, hyperparameter tuning during model building, and so on. That is how we come up with different designs and concepts.
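The points above about class methods, static methods, and class versus instance variables can be condensed into one small sketch; the `ModelRegistry` class is a made-up example, not code from any project mentioned here.

```python
class ModelRegistry:
    # Class variable: shared across every instance of the class.
    registered_count = 0

    def __init__(self, name: str):
        # Instance variable: unique to each object.
        self.name = name
        ModelRegistry.registered_count += 1

    @classmethod
    def from_path(cls, path: str) -> "ModelRegistry":
        # Alternative constructor; Python favors this and default/keyword
        # arguments over Java-style method overloading.
        return cls(path.rsplit("/", 1)[-1])

    @staticmethod
    def is_valid_name(name: str) -> bool:
        # Utility that needs neither the class nor an instance.
        return name.isidentifier()
```

Here `registered_count` is shared state across all instances, while `name` belongs to each object individually.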

    Describe your experience writing unit test cases in Python. When it comes to unit test cases, I have used pytest as the framework. We install the pytest library and write test functions in it. Within the test functions we use assert statements: for the major functions that carry heavy service logic, we evaluate them by supplying inputs along with the corresponding expected outputs and checking an equality condition in the assert, so each function's output is verified through a Boolean condition. We also write negative test cases, checking what kind of error comes up and whether it is handled well when bad input is given. These are the things I have done in test cases.
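As a concrete sketch of the pytest approach described above, with an equality assertion for the happy path and `pytest.raises` for the negative case; the `safe_divide` function is invented for the example.

```python
import pytest

def safe_divide(a: float, b: float) -> float:
    """Divide a by b, rejecting zero denominators explicitly."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

def test_safe_divide_happy_path():
    # Assert a known input/output pair with a Boolean condition.
    assert safe_divide(10, 4) == 2.5

def test_safe_divide_negative_case():
    # Negative test: the right error must be raised on bad input.
    with pytest.raises(ValueError):
        safe_divide(1, 0)
```

Running `pytest` in the project directory discovers and executes both test functions.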

    Let's talk about advanced Python concepts. First, explicit loops can be removed by using list comprehensions, which use optimized methods, so you can work with those. The second piece is lambda functions: if you have a single small task to perform, you can write a lambda, which is also a proper way of writing the code. The third is using inheritance, creating interfaces, and giving the implementation in the child classes; there are different forms of inheritance, and according to the use case you decide which will be the parent and base classes. Abstract classes can be created by importing ABC and inheriting from it. Apart from that, built-in functions such as map, filter, and reduce help: map transforms an entire iterable through a function, filter selects elements that satisfy a condition (for example, filtering rows of a DataFrame), and reduce performs cumulative operations such as summation or subtraction, which you can do easily with these Python constructs. These are the things I have done so far.
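The constructs mentioned above, list comprehensions, lambdas, and map/filter/reduce, look like this in practice:

```python
from functools import reduce

nums = [1, 2, 3, 4, 5]

# List comprehension: replaces an explicit for-loop.
squares = [n * n for n in nums]

# Lambda: a small single-expression function.
double = lambda n: 2 * n

# map transforms every element; filter selects elements; reduce folds
# the sequence cumulatively into one value.
doubled = list(map(double, nums))
evens = list(filter(lambda n: n % 2 == 0, nums))
total = reduce(lambda acc, n: acc + n, nums, 0)
```

Comprehensions are generally preferred over map/filter for readability, while reduce remains useful for cumulative folds without an obvious built-in.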

    And how would you utilize OOP in Python to create extendable machine learning models? When working with machine learning models, we create a project folder with separate Python utilities, and for data preprocessing, loggers, model training, and so on, we define each as a class with all its predefined methods written inside. We then create objects of these classes in Python and extend them into the AWS services. For model development, the project should be structured properly: a main.py file where all the objects are initiated for the different workflows, with data preprocessing separate, data visualization separate, model building and hyperparameter tuning leading into the scoring and evaluation of the models. We do this through OOP concepts: utilize classes, inheritance, polymorphism, and abstract methods whenever needed, with instance variables initialized through the methods, and class variables wherever a value stays the same throughout or should be shared among all instances. That is how we have done it through OOP concepts.
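A minimal version of the structure described above, an abstract base class that each concrete model utility extends, might look like this; `MeanRegressor` is an illustrative toy, not a real project model.

```python
from abc import ABC, abstractmethod

class BaseModel(ABC):
    """Blueprint every concrete model must follow."""

    @abstractmethod
    def train(self, X: list[float], y: list[float]) -> None: ...

    @abstractmethod
    def predict(self, X: list[float]) -> list[float]: ...

class MeanRegressor(BaseModel):
    """Toy model: always predicts the mean of the training targets."""

    def train(self, X, y):
        # Instance variable set during training, unique per model object.
        self.mean_ = sum(y) / len(y)

    def predict(self, X):
        return [self.mean_ for _ in X]
```

New model types plug in by subclassing `BaseModel`, so pipeline code that calls `train` and `predict` never needs to change.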

    Taking a look at this Python code submitted for the review checklist: we have `from abc import ABC, abstractmethod`, and the class is inheriting from ABC. But the method is not annotated with @abstractmethod, and no body is written for it; something should be there, some code, or even just `pass` will work. The @abstractmethod decorator should be used here for declaration: abstract methods are for defining and declaring the interface only, and if you want to give implementation code, that has to go into another class, the concrete subclass.
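The fix being described, declare the method with `@abstractmethod` in the ABC and implement it in a subclass, can be sketched as follows; the `Task`/`PrintTask` names are hypothetical, since the original assessment code is not shown here.

```python
from abc import ABC, abstractmethod

class Task(ABC):
    @abstractmethod
    def execute(self) -> str:
        """Declaration only; concrete subclasses supply the body."""

class PrintTask(Task):
    def execute(self) -> str:
        return "executed"

# Note: Task() cannot be instantiated directly; Python raises TypeError
# because the class still has an unimplemented abstract method.
```

This split keeps the interface (declaration) and the implementation in separate classes, as the review comment recommends.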

    What are some of the challenges you have faced while implementing microservices in Python on AWS, and how did you overcome them? When it comes to microservices in Python, the main thing is making asynchronous calls to the different services we implement. Say you are building an application with many services: one payment service, another for core business logic, another for utilities. When all of these modules interact together, keeping separation of concerns while making the calls asynchronous is the first challenge you will face. The second is working with API Gateway: you have to make sure that whenever we connect different endpoints or services, the incoming URL matches the configured pattern and is routed in the right direction. And when we create so many services together in a microservices architecture, we need to make sure it is fault tolerant, scalable, and elastic in nature: instances are created for the services in target groups whenever required, and a load balancer with weighted routing can be used. If that is not configured properly, then under a high request rate you will not be able to cope, and customers will have a bad experience.
    So the challenges are integrating all the services while keeping loose coupling among them, which is a must-have, and connecting with the databases: for a Java application we would have Spring JPA repositories to connect with the database, and decisions such as how updates happen and whether to go for ACID compliance or BASE compliance have to be made during implementation. On infrastructure, even a simple application should have separation of concerns across the web tier, app tier, and database tier. To overcome all of this: use loose-coupling techniques, inheritance, polymorphism, and interfaces with the implementation in separate child classes; configure load balancing and API Gateway route patterns; and plan for canary release deployments as well as blue-green deployments using weighted routing, so that a new feature rolling out does not cause downtime for the customer experience.
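The asynchronous-call point above can be illustrated with `asyncio`; the three service calls here are simulated stubs, not real payment/core/utility endpoints.

```python
import asyncio

async def call_service(name: str, delay: float) -> str:
    # Stand-in for an HTTP call to a microservice endpoint.
    await asyncio.sleep(delay)
    return f"{name}: ok"

async def gather_results() -> list[str]:
    # Fan out to the payment, core-logic, and utility services
    # concurrently, instead of waiting on each one in turn.
    return await asyncio.gather(
        call_service("payment", 0.01),
        call_service("core", 0.01),
        call_service("utility", 0.01),
    )

results = asyncio.run(gather_results())
```

With real HTTP calls the same fan-out pattern cuts total latency to roughly the slowest single call rather than the sum of all three.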

    Now, when we talk about designing a Python-based MLOps pipeline: say you have created a project folder containing different utilities for data preprocessing, model training, model evaluation, scoring, deployment, and configuration. Within the configuration folder you can have environment YAML files, settings, and JSON files to configure all the parameters. If you are going with Azure, you can implement the entire pipeline through Azure Machine Learning services using the Azure ML SDK. On top of the project folder, you can create GitHub Actions workflows to push the code to the Azure Machine Learning workspace, keep test files, and spend time on testing. In the YAML workflow you can do checkout, run unit tests, review the code through pylint, then trigger the data preprocessing step, followed by the dependent training .py file. Once training has completed and the model is available in the model section, you check the evaluation metrics, then deploy the model and create a web service endpoint. For all of this we utilize the YAML workflows together with Azure CLI commands to perform the entire activity.
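The stage ordering described above can be expressed as a thin Python orchestration sketch. Every stage function here is a placeholder; in the real setup each step would be an Azure ML SDK or CLI invocation triggered from the GitHub Actions workflow.

```python
def preprocess(raw: list[float]) -> list[float]:
    # Placeholder preprocessing: scale inputs into [0, 1].
    top = max(raw)
    return [x / top for x in raw]

def train(data: list[float]) -> dict:
    # Placeholder "model": just remembers the mean of the data.
    return {"mean": sum(data) / len(data)}

def evaluate(model: dict) -> float:
    # Placeholder metric; a real step would score held-out data.
    return 1.0 if "mean" in model else 0.0

def deploy(model: dict, score: float, threshold: float = 0.8) -> str:
    # Gate deployment on the evaluation metric, as the CI workflow does
    # before creating the web service endpoint.
    return "deployed" if score >= threshold else "rejected"

def run_pipeline(raw: list[float]) -> str:
    data = preprocess(raw)
    model = train(data)
    return deploy(model, evaluate(model))
```

The value of even this toy shape is the explicit dependency chain: each stage consumes the previous stage's output, which is exactly what the YAML workflow encodes as job dependencies.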

    When it comes to unit test cases, for the major, heavily weighted functions we set up a pytest framework and write pytest test functions through it, keeping track of how many test cases pass and which cases can or cannot be made to fail, all jotted down in a separate Python utility. For integration testing, whenever we integrate services, we look at the components that integrate with that system and note them down; we create a spreadsheet of positive and negative scenarios with comments against each, and raise the bugs and fixes in sprint planning through Jira or another project management tool. These are the ways we follow to write unit and integration testing.