
Chanchal Chaudhary

Vetted Talent

Experienced Senior Software Engineer with a demonstrated history of working in the information technology and services industry. Skilled in SAP BI (SAP Native HANA, Datasphere, BW), with strong analytical and data analytics skills.

  • Role

    SAP Datasphere Consultant | SAP Analytics COE Team

  • Years of Experience

    6 years

Skillsets

  • SQL
  • Analytical
  • Problem Solving
  • Collaboration
  • Business Intelligence
  • Design Concepts
  • SAP Datasphere
  • SAP BW
  • SAP HANA
  • SAP BO
  • Work prioritization
  • Organizing

Vetted For

10 Skills

  • Data Engineering Lead with Migration / Data Warehousing Experience - Onsite, Bangalore (AI Screening)
  • Score: 43/90 (48%)
  • Skills assessed: Problem Solving Skills, PySpark, Spark Tool, SparkSQL, Data Engineer, Data Migration, S4 Hana, SAP, Azure Data Factory, Data Modelling

Professional Summary

6 Years
  • SAP Datasphere Consultant

    Accenture Solutions Pvt Ltd
  • Designer and Developer

    Accenture Solutions Pvt Ltd
  • Analyst

    Capgemini

Applications & Tools Known

  • SAP BW
  • SAP HANA
  • SAP BO
  • SQL
  • Azure
  • OLAP
  • Hadoop

Work History

6 Years

SAP Datasphere Consultant

Accenture Solutions Pvt Ltd
    As a core member of the COE team, focusing on the seamless integration and implementation of Datasphere models; establishing a bridge between SAP BW and Datasphere; and ensuring that processed data from Datasphere models is accurately channeled for consumption in Azure reporting tools.

Designer and Developer

Accenture Solutions Pvt Ltd
    Involved in solutioning and estimation for multiple reports across functional areas such as Sales & Distribution, Quality Management, and Finance.

Analyst

Capgemini
    Data modeling in the workbench and custom development.

Achievements

  • Client Value Creation FY23 Pinnacle Awards
  • FY21 & FY22 Stand-out Performer

Education

  • Bachelor of Technology (B.Tech)

    Amity University

Certifications

  • Generative AI at SAP

    SAP

AI-interview Questions & Answers

Hi. My name is... About my professional background: I've been working on SAP BI technologies for the past 6 years. I've worked on BW, HANA, and BOBJ (SAP BusinessObjects), and recently I've started working on Datasphere, which is quite in trend among the SAP BI technologies.

Okay. So to optimize performance, we always check the join conditions, which should be on the key fields of the tables if you are joining multiple tables. We also take care of the rules we follow for data security when we are migrating from one system to another: the join should be based on the key fields and should not produce repeated records. We should also avoid the...
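
As a rough sketch of the join-on-key-fields point above, here is a small PySpark example; the table and column names are illustrative, not from the interview:

    # Join on the key column so each order matches exactly one customer;
    # joining on a non-key attribute risks fan-out (repeated rows).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("join_on_keys").getOrCreate()

    orders = spark.createDataFrame(
        [(1, "C1", 250.0), (2, "C2", 410.0)],
        ["order_id", "customer_id", "amount"],
    )
    customers = spark.createDataFrame(
        [("C1", "Acme"), ("C2", "Globex")],
        ["customer_id", "name"],
    )

    joined = orders.join(customers, on="customer_id", how="inner")

    # Guard against accidental duplicates after the join.
    joined = joined.dropDuplicates(["order_id"])
    joined.show()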

So role-based access control works on roles. Suppose I'm a developer: I'll be getting all the roles related to development, which include view, edit, import, and export kinds of roles. It always works on a particular object. Suppose we have a company code or business unit: if I am working on that particular company code, as a user or the product owner, then I will have access to that particular object. If a row has that particular company code, then I will be able to access the details.
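
The row-level part of this answer can be sketched in plain Python; the grant store, user, and column names below are hypothetical:

    # Row-level access control sketch: a user sees a row only if its
    # company_code is in that user's granted set (all names illustrative).
    rows = [
        {"company_code": "1000", "amount": 250.0},
        {"company_code": "2000", "amount": 410.0},
    ]
    user_grants = {"alice": {"1000"}}  # hypothetical role/grant store

    def visible_rows(user, data):
        granted = user_grants.get(user, set())
        return [r for r in data if r["company_code"] in granted]

    print(visible_rows("alice", rows))  # only the company code 1000 row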

On a complex data transformation performed as part of a data migration project from SAP to a data lake: we have seen multiple scenarios. If I have to speak from the perspective of a functional area, then material management is one of them. In material movement, the movements are calculated based on the movement type in the SAP BW transformation, and suppose we have to go to Datasphere and perform the same thing. The KPI should be calculated based on the other columns, so we have to do the same calculation while loading the data into Datasphere.
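
A hedged PySpark sketch of a movement-type-driven calculation like the one described: movement types 101 (goods receipt) and 601 (goods issue) are common SAP MM codes, but the signs and column names here are assumptions:

    # Derive a signed quantity from the movement type, then aggregate the KPI.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("movement_kpi").getOrCreate()

    movements = spark.createDataFrame(
        [("101", 50.0), ("601", 20.0)],
        ["movement_type", "quantity"],
    )

    # Receipts count positive, issues negative; other types contribute zero.
    signed = movements.withColumn(
        "signed_qty",
        F.when(F.col("movement_type") == "101", F.col("quantity"))
         .when(F.col("movement_type") == "601", -F.col("quantity"))
         .otherwise(F.lit(0.0)),
    )
    signed.groupBy().sum("signed_qty").show()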

And the target: it works on change data capture, where we can have the before and after images, as in some of the data warehouse techniques. Also, we can go for a calculation where we always pick up the latest request. So, yeah.
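
The "pick the latest request" pattern can be sketched with a PySpark window function; the key and request-id columns are illustrative:

    # Keep only the most recent change record per business key.
    from pyspark.sql import SparkSession, Window
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("latest_request").getOrCreate()

    changes = spark.createDataFrame(
        [("K1", 1, 100.0), ("K1", 2, 120.0), ("K2", 1, 80.0)],
        ["key", "request_id", "value"],
    )

    w = Window.partitionBy("key").orderBy(F.col("request_id").desc())
    latest = (changes
              .withColumn("rn", F.row_number().over(w))
              .filter(F.col("rn") == 1)
              .drop("rn"))
    latest.show()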

Data lineage and metadata management when moving the data: it works on the data set structure. If you're moving from ECC, that is, the ERP, to a Business Warehouse, we try to have a copy of the already existing table or extractor structure in the target Business Warehouse. In a similar way, if we are going for Datasphere, we always import the last target table as a remote table, so it will have the same structure and formatting, and then we load the data.
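
A minimal sketch of the structure-first idea, with PySpark standing in for the remote-table import (names are illustrative):

    # Create the target with the exact schema of the source before any data
    # is loaded; loading happens as a separate step, as described above.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("copy_structure").getOrCreate()

    source = spark.createDataFrame([("M-01", 10.0)], ["material", "qty"])

    target = spark.createDataFrame([], schema=source.schema)
    target.printSchema()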

On triggering an activity upon completion of the current one and how to configure it: not very sure on Azure Data Factory.

PySpark: identifying the mistake in the Azure Data Factory expression. "Less than, else": it should be an addition of 100. Okay. The source-to-transform data flow is equal to the transformed data. The if/else condition should be kept differently: we can have another bracket where we define the second column in this particular case. First, we define the if condition and then give the then condition. In this case, if the column, the existing column, is less than 100, then the column should be the existing column plus 100; else, we can go for the else condition. So this is one of the mistakes we can see: the if condition comes first, then the then condition.
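
A hedged PySpark reconstruction of the condition-first fix described above; the column name and the +100 rule come from the transcript, everything else is assumed:

    # Condition first, then the "then" value, then the "else" value.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("if_else_order").getOrCreate()

    df = spark.createDataFrame([(40,), (150,)], ["existing_column"])

    # If the value is below 100, add 100; otherwise keep it as-is.
    fixed = df.withColumn(
        "derived_column",
        F.when(F.col("existing_column") < 100, F.col("existing_column") + 100)
         .otherwise(F.col("existing_column")),
    )
    fixed.show()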

Explain how you would set up continuous integration/deployment for a data pipeline that needs constant updates from SAP data sources: we can always have real-time replication in this case, where it will always take the latest data and keep a complete replica of the database.
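
A very loose Structured Streaming sketch of the real-time replication idea; the built-in rate source stands in for the SAP change feed so the example stays runnable, and the paths are hypothetical:

    # Continuously append incoming records to a replica location.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("replicate_stream").getOrCreate()

    changes = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

    query = (changes.writeStream
             .format("parquet")
             .option("path", "/tmp/replica")             # hypothetical target
             .option("checkpointLocation", "/tmp/ckpt")  # required for recovery
             .start())

    query.awaitTermination(10)  # run briefly for this sketch
    query.stop()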