
PRATUL GOYAL

Vetted Talent
Experienced data science professional with an MBA in Data Science, a patent in Artificial Intelligence and Machine Learning, and a proven track record of excellent interpersonal skills. Seeking a Data Scientist position where I can leverage my 8+ years of hands-on experience in statistical modeling, machine learning, and web deployment to drive business success.
  • Role

    Senior Consultant

  • Years of Experience

    8 years

Skillsets

  • GitHub Actions
  • Agents
  • BERT
  • Chatbots
  • Churn Modelling
  • Classification Modelling
  • Clustering
  • CSAT
  • CX
  • Data Wrangling
  • EDA
  • LangChain
  • Llama 2
  • Machine Learning
  • Material Design
  • OpenAI
  • OpenCV
  • Regression Modelling
  • TensorFlow Lite
  • Transformers
  • A/B Testing
  • Python - 8 Years
  • CI/CD
  • NLP
  • Data Warehousing
  • Data Analysis
  • API
  • Docker
  • MLOps
  • PyTorch
  • YAML
  • TensorFlow
  • Azure
  • Deep Learning
  • LLM
  • SQL
  • BigQuery
  • Kubernetes
  • Data Visualization

Vetted For

11 Skills
  • Data Analytics Manager (Hybrid, Delhi/NCR) - AI Screening
  • 73%
  • Skills assessed: A/B testing, Complex Analysis, Business Intelligence, Data Analysis, Snowflake, SQL, Data Visualisation, Google Analytics, Leadership, Python, Tableau
  • Score: 66/90

Professional Summary

8 Years
  • Sep 2021 - Present (4 yr 1 month)

    Senior Consultant

    LTIMindtree
  • Feb 2018 - Aug 2021 (3 yr 6 months)

    Data Science Consultant

    Simplilearn
  • Mar 2017 - Jan 2018 (10 months)

    Subject Matter Expert

    Byju's - The Learning App
  • Aug 2013 - Jan 2016 (2 yr 5 months)

    Data Analyst

    SVM Infotech
  • Mar 2017 - Jan 2018 (10 months)

    Data Science Trainer

    The Princeton Review

Applications & Tools Known

  • Tableau
  • BigQuery
  • SQL
  • Python
  • Kubernetes
  • Docker
  • Azure
  • Machine Learning
  • OpenAI
  • Azure Synapse Analytics
  • OpenCV
  • PyTorch
  • LangChain
  • LLM
  • GitHub Actions

Work History

8 Years

Senior Consultant

LTIMindtree
Sep 2021 - Present (4 yr 1 month)
    Project 1: Azure Database Migration (Duration: 3 months). Responsible for migrating on-prem databases to Azure, utilizing cloud migration and data warehousing skills and using SQL commands extensively to manage data during the migration. Successfully migrated critical business components and legacy systems within the three-month deadline.
    Project 2: Leveredge Analytics (Ongoing). Developed advanced chatbots for the Unilever project, utilizing OpenAI's LLMs and LangChain, integrated with Llama 2. Designed and deployed machine learning pipelines on Azure, showcasing extensive knowledge of Python, data analysis, and machine learning. Applied advanced clustering algorithms on BigQuery for market segmentation. Administered A/B testing for Unilever products. Utilized deep learning modules such as OpenCV to devise a product recommendation system for the UShop application.
    Activities involved: Cloud Migration, Data Visualization, Data Analysis, Python Coding, Data Wrangling, EDA, Kubernetes, Docker, MLOps, Data Warehousing, Machine Learning, CI/CD Pipelines, GitHub Actions, YAML Pipelines, POC, Churn Modelling, Classification Modelling, Regression Modelling, A/B Testing, OpenCV, PyTorch, Transformers, TensorFlow, LLM, NLP, TensorFlow Lite, Azure Migrate, Agents Creation, API.

Data Science Consultant

Simplilearn
Feb 2018 - Aug 2021 (3 yr 6 months)
    Spearheaded initiatives to create educational content, capitalizing on Microsoft's technologies. Crafted user-focused digital learning materials by incorporating data science and machine learning concepts via Azure and Azure Machine Learning. Optimized learner interaction and engagement through Microsoft Application Insights. Deployed sophisticated data techniques and Azure Synapse Analytics to tailor and enhance the impact of educational content.

Subject Matter Expert

Byju's - The Learning App
Mar 2017 - Jan 2018 (10 months)
    Developed a Mathematics Content Recommendation engine using Hybrid Filtering, combining Collaborative and Content Filtering Models to improve user experience and engagement. Implemented a novel method to calculate the response rate of Mathematics questions and utilized it to predict the performance of users per attempt.

Data Science Trainer

The Princeton Review
Mar 2017 - Jan 2018 (10 months)
    Trained working professionals in the data science domain.

Data Analyst

SVM Infotech, Noida, India
Aug 2013 - Jan 2016 (2 yr 5 months)
    Collaborated with stakeholders during requirements meetings and data mapping sessions to gain a deep understanding of business needs. Conducted research and development on available data to devise new and advanced data analysis techniques. Developed a KPI dashboard using Tableau to visualize and track key performance indicators.

Achievements

  • Spearheaded initiatives at Simplilearn to create educational content, capitalizing on Microsoft's technologies. Concentrated on crafting user-focused digital learning materials by incorporating data science and machine learning concepts via Azure and Azure Machine Learning. Optimized learner interaction and engagement on our platform through Microsoft Application Insights; the data-driven insights were instrumental in fine-tuning our educational content and in automating personalized learning paths. Deployed sophisticated data techniques and Azure Synapse Analytics to tailor and enhance the impact of our educational content, guaranteeing its adaptability to varied learner needs and scalability across diverse geographical locations.
  • Successfully migrated critical business components to Azure within a three-month deadline.
  • Developed advanced Chatbots utilizing OpenAI's LLM.
  • Applied advanced clustering algorithms for market segmentation.
  • Administered A/B testing for Unilever products.
  • Utilized deep learning modules for product recommendation system.

Major Projects

11 Projects

AZURE DATABASE MIGRATION

    Responsible for migrating on-prem databases to Azure, involving cloud migration and data warehousing skills with extensive use of SQL commands.

LEVEREDGE ANALYTICS

    Developed advanced Chatbots for the Unilever Project, utilizing OpenAI's LLM and LangChain, integrated with Llama 2.

DATA SCIENCE PRODUCT DEVELOPMENT

    Spearheaded initiatives to create data science and machine learning educational content at Simplilearn.

MATH RECOMMENDATION ENGINE

    Developed Mathematics Content Recommendation engine using Hybrid Filtering at Byju's.

Education

  • Executive MBA: Data Science And Business Analytics

    Indian Institute Of Management, Kashipur (2017)
  • BTech: Biotechnology

    Pantnagar University, Pantnagar (2013)

Certifications

  • AZ-900: Microsoft Azure Fundamentals
  • DP-100: Designing and Implementing a Data Science Solution on Azure
  • AZ-400: Designing and Implementing Microsoft DevOps Solutions
  • Tableau A-Z: Hands-On Tableau Training for Data Science!
  • Introduction to R
  • Machine Learning A-Z: Hands-On Python and R in Data Science
  • Data Science and Machine Learning with Python - Hands-On
  • The Complete SQL Bootcamp
  • LangChain with Python Bootcamp
  • Applied Plotting, Charting & Data Representation in Python
  • Introduction to Data Science in Python
  • LLM Fine-Tuning Course on OpenAI
  • Applied Machine Learning in Python
  • Applied Text Mining in Python
  • Applied Social Network Analysis in Python
  • Data Science Methodology
  • Tools for Data Science
  • What is Data Science?
  • Python for Data Science and AI
  • Databases and SQL for Data Science
  • Data Analysis with Python
  • Applied Data Science Capstone
  • Machine Learning with Python
  • Data Visualization with Python
  • Introduction to Machine Learning in Production
  • Machine Learning Data Lifecycle in Production
  • Machine Learning Modelling Pipelines in Production
  • Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization
  • Neural Networks and Deep Learning
  • Deploying Machine Learning Models in Production
  • Structuring Machine Learning Projects
  • Convolutional Neural Networks
  • AI for Everyone

AI-interview Questions & Answers

As far as my background goes, my skill set is based on Python, SQL, R, and Azure Machine Learning Studio. I work with datasets around churn prediction modelling, A/B testing, augmenting customer experience, NPS scores, and review analysis, and I am also building a chatbot on top of large language models using the OpenAI API; in one of our recent use cases we have been using Gemini as well. I have more than eight years of experience. I did my graduation as a Bachelor of Engineering, and for my post-graduation I completed an MBA in Data Science and Business Analytics from the Indian Institute of Management, one of the elite institutes in India, between 2015 and 2017. I hold 40+ certifications in data science and machine learning, including certifications in deep learning and on the Azure cloud. I also have one patent in the virtual-assistant field, filed through the Indian patent portal, and five research papers published in international journals. Beyond this, I have experience with Power BI, recommendation engines, and edge devices. I handled a migration for a recent Unilever project at my current company, and in my current role my core objective is to enhance Unilever's business by improving customer experience and product reviews across six countries. I work with CI/CD pipelines using GitHub Actions, use Jira for assigning tasks, and we follow Agile methodology with sprints defined by user requirements. I currently handle a team of seven people, assigning them regular tasks on project deliverables, and we also run regular checks for data drift and do model tweaking. Overall, I have 8+ years of experience in data science and machine learning.

Crafting a strategy to migrate SQL-based legacy reports to a modern business intelligence system, while ensuring data continuity and accessibility, can be done in a few steps. First, assess the current environment: understand the existing SQL-based report structures, data structures, and dependencies, and identify the key stakeholders and their requirements for the new system. Then research and choose a modern business intelligence platform that aligns with the organization's needs, scalability, and compatibility with the existing technology. Data modelling and mapping comes next: map out the data schemas, relationships, and transformations needed to move from the SQL-based system to the chosen BI platform. Data extraction and transformation follows, where we extract data from the legacy SQL databases using ETL tools and transform it to fit the new schema. Testing and validation means thoroughly checking the accuracy and integrity of the migrated data and reports, involving stakeholders to verify that the new reports meet their requirements and expectations. User training and adoption: provide training sessions for end users on how to navigate and use the new BI platform effectively. Data continuity and backup: implement backup and recovery procedures to ensure continuity in case of unforeseen issues or failures. Security and access control: configure access controls and security measures to protect sensitive data and ensure compliance with regulatory requirements. Monitoring and optimization: set up monitoring tools to track system performance, usage patterns, and data quality. Finally, documentation and knowledge transfer: document the migration process and best practices for future reference. Following these steps should deliver the migration without losing continuity or accessibility.
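
As a rough illustration of the extraction-and-transformation step mentioned in this answer, here is a minimal sketch assuming a hypothetical SQLite legacy source, a made-up `sales_report` table, and pandas as the ETL tool; the real source system, schema, and target BI platform are not specified in the answer.

```python
import sqlite3

import pandas as pd

# Hypothetical legacy source -- in-memory here so the sketch is self-contained
legacy = sqlite3.connect(":memory:")
legacy.execute("CREATE TABLE sales_report (region TEXT, order_date TEXT, amount REAL)")
legacy.executemany(
    "INSERT INTO sales_report VALUES (?, ?, ?)",
    [("North", "2024-01-05", 120.0), ("North", "2024-02-02", 80.0),
     ("South", "2024-01-20", 200.0)],
)

# Extract: run the legacy report query
df = pd.read_sql_query("SELECT region, order_date, amount FROM sales_report", legacy)

# Transform: reshape to fit the new BI platform's schema (illustrative only)
df["order_month"] = pd.to_datetime(df["order_date"]).dt.to_period("M").astype(str)
monthly = df.groupby(["region", "order_month"], as_index=False)["amount"].sum()

# Load: here just a CSV handoff; a real migration would write to the target
# platform's warehouse tables or extracts instead
monthly.to_csv("sales_report_monthly.csv", index=False)
print(monthly)
```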

Developing a contingency plan for a data analytics team when a critical Tableau dashboard fails to update with the latest data can be approached as follows. Identify potential failure points: determine the possible reasons why the dashboard may fail to update, such as technical issues. Establish a monitoring system: implement tools or scripts to regularly check the update status of the dashboards, and set up alerts. Maintain backups of the data sources that populate the dashboards, so that if the primary data source fails we can fall back to alternative sources. Develop contingency workflows: create predefined workflows and procedures to follow when a dashboard update fails, and assign specific tasks. Develop a communication plan: define communication channels and protocols for notifying stakeholders about the dashboard status and potential delays in data updates. Prepare temporary solutions or alternative methods for accessing critical data if the dashboard is unavailable for an extended period. Establish an escalation process for raising unresolved issues to higher levels of management or IT support if necessary. Training and documentation: train team members on the contingency-plan procedures and keep the documentation up to date. Regular testing and review: test the contingency plan to identify weaknesses and gaps. Post-incident analysis: after any incident where the dashboard fails to update, run a post-incident analysis to identify root causes and areas of improvement, and use user feedback to keep improving the plan.
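
A minimal sketch of the "monitoring system with alerts" idea from this answer. The refresh timestamp, freshness threshold, and alert channel are all hypothetical placeholders; in practice the timestamp would come from the data source or Tableau's metadata, and the alert would go to email, Slack, or a similar channel.

```python
from datetime import datetime, timedelta, timezone


def check_dashboard_freshness(last_refresh: datetime,
                              max_age: timedelta = timedelta(hours=24)) -> bool:
    """Return True if the dashboard data is fresh enough; otherwise raise an alert."""
    age = datetime.now(timezone.utc) - last_refresh
    if age > max_age:
        # Placeholder alert -- in practice this would be email, Slack, PagerDuty, etc.
        print(f"ALERT: dashboard data is {age - max_age} past its freshness threshold")
        return False
    return True


# Example: a refresh timestamp as it might be read from the data source
# or from Tableau's metadata (hypothetical value)
last_refresh = datetime(2024, 1, 1, 6, 0, tzinfo=timezone.utc)
check_dashboard_freshness(last_refresh)
```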

Recommending a scalable method for managing data pipeline dependencies and scheduling in a system like Snowflake, or a comparable cloud data platform, generally involves several steps. Use a workflow orchestration tool such as Apache Airflow or AWS Step Functions to manage the dependencies between the different components of the data pipeline; these tools let you define workflows as directed acyclic graphs (DAGs), where tasks can be executed in parallel or sequentially based on their dependencies. Break the data pipeline down into smaller, manageable tasks and define them as a DAG in the orchestration tool. Parameterize the workflows to make them reusable and adaptable to different scenarios, with parameters for input data sources, processing logic, destination tables, and scheduling, so each workflow instance can be customized. Use trigger-based scheduling to initiate pipeline execution based on events such as completion of an upstream task, arrival of new data, or predefined schedule intervals. Integrate with the cloud data platform's native features, such as Snowflake Tasks and Streams, to facilitate pipeline scheduling and management within the platform itself, since these often offer built-in functionality for automating common tasks and orchestrating data workflows. Implement monitoring and alerting to track the execution status and performance of the data pipelines in real time. Scale resources dynamically by configuring the orchestration tool and the cloud data platform to scale with workload demands, ensuring sufficient compute and storage to execute pipelines efficiently. Apply version control and CI/CD: keep pipeline definitions and associated code artifacts under version control and integrate with continuous integration and delivery pipelines to automate deployment and testing of changes. Finally, document workflows and dependencies, and perform regular performance optimization by monitoring the pipelines and the orchestration processes, looking at bottlenecks, resource utilization, and time efficiency. These are the steps that can be followed to manage data pipeline dependencies in a system like Snowflake.
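
A minimal sketch of the DAG idea described above, assuming Apache Airflow 2.4 or later and purely hypothetical task names; it only illustrates how tasks and their dependencies are declared, not a real Snowflake pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extracting")    # placeholder: pull new data from the source system


def transform():
    print("transforming")  # placeholder: clean / aggregate the extracted data


def load():
    print("loading")       # placeholder: write results to the warehouse


with DAG(
    dag_id="example_weekly_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",                 # could also be trigger- or event-based
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # the dependencies form a DAG: extract -> transform -> load
    t_extract >> t_transform >> t_load
```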

To resolve conflicts between differing data models in Salesforce and an enterprise data warehouse like Snowflake, I would proceed point by point. Identify the conflicting data models through thorough analysis: document the schema structures, data types, relationships, and other dependencies between the two systems. Understand the business requirements driving the data integration between Salesforce and Snowflake, and identify the key objects, fields, and relationships that are critical for reporting, analytics, and decision making. Do data mapping and transformation: develop a comprehensive strategy to reconcile the differences between the two data models, determining how data will be mapped, transformed, and loaded from Salesforce into Snowflake while ensuring data integrity and consistency. Evaluate the normalization levels of the data models in Salesforce and Snowflake to decide whether normalization or denormalization strategies are needed to align the structures. Use data integration tools such as Informatica or Talend to streamline extracting, transforming, and loading data between Salesforce and Snowflake. Establish data governance policies and standards to ensure consistency, quality, and compliance across Salesforce and Snowflake data. Automate data synchronization: implement automated synchronization processes to keep the data models in sync, scheduling regular jobs to update Snowflake with the latest data from Salesforce and vice versa. Apply data quality assurance to detect and resolve any discrepancies or anomalies between Salesforce and Snowflake data, and establish monitoring and auditing mechanisms to track data changes and modifications between the two systems. Finally, iterate: continuously evaluate and refine the integration process based on feedback and evolving business requirements.
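
A minimal sketch of the "data mapping and transformation" step from this answer, using hypothetical Salesforce field names and warehouse column names; real mappings would come from the documented schemas of both systems.

```python
# Hypothetical mapping from Salesforce Account fields to warehouse columns
FIELD_MAP = {
    "Id": "account_id",
    "Name": "account_name",
    "AnnualRevenue": "annual_revenue",
    "CreatedDate": "created_at",
}


def map_record(sf_record: dict) -> dict:
    """Rename Salesforce fields to the warehouse schema, dropping unmapped fields."""
    return {dst: sf_record.get(src) for src, dst in FIELD_MAP.items()}


# Example record as it might come from a Salesforce export (hypothetical values)
sf_record = {
    "Id": "001xx000003DGb0",
    "Name": "Acme Ltd",
    "AnnualRevenue": 1_000_000,
    "CreatedDate": "2024-01-15T10:00:00Z",
    "SomeUnmappedField": "ignored",
}
print(map_record(sf_record))
```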

To automate data retrieval from Snowflake and visualize metrics in Tableau for a weekly performance report, we can follow these steps. Data preparation in Snowflake: ensure the relevant tables are properly structured and maintained, and implement the necessary transformations, such as aggregations in Snowflake views or materialized views, to prepare the data for reporting. Create scheduled views or queries: write SQL queries or views in Snowflake to retrieve the required metrics and dimensions for the weekly performance report, and schedule them to run automatically each week using Snowflake task scheduling. Export the data for Tableau: configure Snowflake to export the query results or view output to a CSV file stored in a location accessible to Tableau Server. Set up the Tableau data source connection to that CSV file and configure it to refresh automatically on a weekly basis so the latest data is fetched for the report. Design the Tableau dashboard to visualize the performance metrics retrieved from Snowflake, creating interactive visualizations such as charts, graphs, and KPIs to present the key performance indicators. Schedule the Tableau workbook to refresh automatically each week to reflect the updated data, and configure Tableau Server to send notifications or alerts if the refresh process encounters errors or failures. Publish the workbook containing the performance dashboards to Tableau Server, setting permissions and access controls so that authorized users can view and interact with the weekly performance report. Handle distribution and collaboration by sharing the workbook with the relevant stakeholders or an email distribution list. Finally, monitor and maintain the automated retrieval and visualization process to make sure it runs smoothly, addressing any errors or performance issues promptly and adjusting the automation workflow as needed.
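
A minimal sketch of the extract-and-export step described above, assuming the `snowflake-connector-python` package, placeholder credentials, a hypothetical `weekly_sales` table, and a CSV handoff that Tableau's data source would point at; a real setup might instead use Snowflake Tasks or a scheduled Tableau extract refresh.

```python
import csv

import snowflake.connector  # pip install snowflake-connector-python

# Placeholder credentials and object names -- replace with real values
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="REPORTING",
)

# Hypothetical weekly metrics query
query = """
    SELECT report_week, region, SUM(revenue) AS revenue
    FROM weekly_sales
    GROUP BY report_week, region
"""

with conn.cursor() as cur:
    cur.execute(query)
    columns = [col[0] for col in cur.description]
    rows = cur.fetchall()

# Write the result where Tableau's data source connection points
with open("weekly_performance.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerows(rows)

conn.close()
```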

I'll take a little time to analyze this. The question asks why the query might not return the expected result and how to modify it to ensure accurate data retrieval. What I can figure out is that the HAVING clause is being used with an aggregate function, COUNT(*), but without a GROUP BY clause. HAVING is meant to filter groups of rows based on the result of an aggregate, but this query does not group any rows before using HAVING. To ensure accurate data retrieval, if the intent is simply to count the number of rows in the orders table and return that count as a total, the row-level filter should go in a WHERE clause instead of HAVING; alternatively, a GROUP BY should be added so that HAVING has groups to filter. That is the issue I can identify: the query applies HAVING without grouping any rows first.
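
The query from the question is not reproduced here, so the sketch below uses a hypothetical `orders` table (via SQLite) purely to illustrate the point made in the answer: row-level filters belong in WHERE, while HAVING filters the groups produced by GROUP BY.

```python
import sqlite3

# Hypothetical orders table, purely for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "shipped"), (2, "pending"), (3, "shipped")],
)

# Row-level filtering belongs in WHERE, not HAVING
total = conn.execute(
    "SELECT COUNT(*) AS total FROM orders WHERE status = 'shipped'"
).fetchone()[0]
print(total)  # 2

# HAVING becomes meaningful once rows are grouped
for status, cnt in conn.execute(
    "SELECT status, COUNT(*) FROM orders GROUP BY status HAVING COUNT(*) > 1"
):
    print(status, cnt)  # shipped 2
```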

To formulate an approach for incorporating user feedback into the iterative development of a Tableau dashboard, the following steps can be involved. Initial requirements gathering: start by collecting requirements from stakeholders and end users to understand their needs, objectives, and expectations for the dashboard, and identify the key metrics, visualizations, and features they want to see. Prototype development: based on the initial requirements, develop a first prototype of the dashboard, keeping it flexible and able to accommodate changes based on user feedback. Encourage users to provide specific feedback on usability, scalability, the relevance of the metrics, and the overall user experience; this closely matches my current experience as well. Use surveys, interviews, or feedback forms to collect structured feedback, allowing users to rate and prioritize different aspects of the dashboard. Analysis and prioritization: analyze the collected feedback to identify common themes, pain points, and areas of improvement, and prioritize it based on its impact on the dashboard's usability and effectiveness, addressing critical issues and high-priority feedback first. Iterative development: incorporate user feedback by making iterative updates and enhancements to the dashboard, implementing changes to its design, layout, data visualizations, and interactivity based on the prioritized feedback, and maintain version control to track changes across iterations. User validation and testing: validate each updated version of the Tableau dashboard with end users to ensure the implemented changes meet their expectations and address their feedback, and conduct additional user testing sessions or usability studies on the updated dashboard. Feedback loop closure: close the loop by communicating to users how their feedback has been incorporated into the dashboard. Continuous improvement: establish a process of ongoing improvement based on continuing user feedback and evolving business requirements, regularly soliciting feedback from users and stakeholders and iterating on the dashboard to keep it relevant and useful.

Architecting a scalable business intelligence solution that incorporates Tableau dashboards and adheres to data governance policies can be done in a variety of ways, but some key elements are as follows. Define the business requirements: start by understanding the business objectives and requirements for the BI solution, and identify the key stakeholders and their needs for data analysis and reporting. Establish a governance framework: set up a robust framework that defines policies, standards, and procedures for managing data quality, security, privacy, and compliance, and define roles and responsibilities for data stewards, data owners, and data custodians to ensure accountability and transparency in the data management process. Data integration and consolidation: integrate data from diverse sources such as databases, data warehouses, data lakes, and cloud applications into a centralized data repository, using ETL processes or a data integration platform to consolidate and harmonize data from the different sources while maintaining data quality and consistency. Data modelling and a semantic layer: develop a comprehensive data model and semantic layer that provides a unified view of the data for reporting and analysis, using dimensional modelling techniques such as star or snowflake schemas to optimize data retrieval performance and facilitate querying. Design a scalable architecture that can accommodate growing data volumes, user concurrency, and analytical workloads, considering cloud-based platforms such as Snowflake, Amazon Redshift, or Google BigQuery for scalability, elasticity, and performance. Deploy Tableau Server or Tableau Online to host and manage the Tableau dashboards, workbooks, and data sources centrally, configuring it with appropriate authentication, authorization, and access controls to enforce the data governance policies and ensure secure access to BI assets. Data security and access control: implement fine-grained controls and role-based permissions within Tableau Server to restrict access to sensitive data and analytical capabilities based on user roles and privileges, and encrypt data in transit and at rest to protect confidentiality and integrity. Metadata management and lineage: establish metadata management to capture and maintain metadata about data sources, data definitions, and data lineage, using metadata management tools or platforms to catalog and govern these assets so users can discover, understand, and trust the data used in the Tableau dashboards. Finally, implement mechanisms to track the health, performance, and usage of the solution, monitoring Tableau Server metrics such as CPU utilization, memory usage, and query response time to identify issues and optimize resource allocation.

Adobe Analytics is not my area of expertise, but I will try to jot down some of the steps for adapting a team workflow to incorporate best practices with a tool like Adobe Analytics. These are fairly generic steps. Define clear goals and objectives. Planning and scoping: plan with the business analysts and data analysts to define the scope of the project. Data collection and preparation: use Adobe Analytics to collect relevant data from various digital channels such as websites, mobile apps, and marketing campaigns, ensuring data collection is configured accurately to capture the necessary metrics and dimensions. Conduct exploratory data analysis using Adobe Analytics to gain initial insight into traffic flows, trends, patterns, and correlations, and to identify areas of interest for further investigation. Hypothesis testing: formulate hypotheses based on the insights gathered from the EDA and business understanding. Iterative analysis and visualization: use Adobe Analytics to perform iterative analysis and visualization of the data, creating custom reports, dashboards, and visualizations to communicate insights effectively to stakeholders, and foster collaboration between business analysts, data analysts, marketers, product managers, and other relevant stakeholders throughout the analysis process. Documentation and knowledge management: document analysis findings, methodologies, and assumptions to maintain transparency and reproducibility. Feedback and iteration: solicit feedback from stakeholders and team members at various stages of the analysis process and incorporate it to refine the analysis approach. Continuous learning and improvement: create a culture of continuous learning within the team, sharing learnings, best practices, and success stories to enhance collective knowledge and capabilities. Quality assurance: put checks in place to ensure the quality, accuracy, reliability, and validity of the analysis results. Governance and compliance: follow data governance policies, privacy regulations, and industry standards when collecting, analyzing, and sharing data, including internal policies and external regulations such as GDPR or CCPA. These are the steps I would incorporate.

I'm still thinking about this. The Python function here, for calculating the factorial of a number, seems to have a logical error. The base case checks whether n equals 0 and returns 1, which is correct. However, the recursive case should be n * factorial(n - 1) instead of n * factorial(n): since n is never reduced, the recursion never reaches the base case. To walk through it: if n is 0 the function returns 1, but if n is, say, 5, it goes into the recursive branch, where it should multiply 5 by the factorial of n - 1; as written it multiplies by factorial(n) itself, so it recurses on the same value forever. So the missing n - 1 in the recursive call is the logical error I can figure out.
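
A minimal corrected version of the function described in this answer, assuming it takes a single non-negative integer `n` (the exact code from the question is not reproduced here).

```python
def factorial(n):
    """Return n! for a non-negative integer n."""
    if n == 0:          # base case: 0! = 1
        return 1
    # The recursive case must shrink the problem: n * (n - 1)!.
    # Calling factorial(n) here instead would recurse forever.
    return n * factorial(n - 1)


print(factorial(5))  # 120
```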