Vetted Talent

Manchali Nikam


I have been instrumental in designing and developing PL/SQL scripts, scheduling jobs, and implementing changes in alignment with Agile principles. Additionally, I have served as the Single Point of Contact (SPOC) for highly critical data breach cases, managing end-to-end resolution processes with a focus on swift and decisive action. I also automate manual activities by writing scripts in Python, Power Automate, and Power BI.


  • Spearheaded the development of Power BI dashboards for advanced data analytics and visualization, significantly enhancing the overall quality of data.
  • Proactively managed incident resolution, swiftly identifying and addressing issues. Communicated effectively through ServiceNow and email interactions with cross-functional teams, ensuring seamless collaboration.

  • Role

    IT Consultant (Oracle PL/SQL Developer)

  • Years of Experience

    4 years

Skillsets

  • PL/SQL
  • Performance Tuning
  • IT Change Management
  • Visualization
  • Project management lifecycle
  • Scrum
  • Root Cause Analysis
  • Release Management
  • Python
  • Power BI
  • PostgreSQL
  • SQL - 4 Years
  • Oracle
  • MySQL
  • Java
  • Debugging
  • Data Reporting
  • Data Modelling
  • Data Analytics
  • Business Analysis
  • Agile Methodology

Vetted For

7 Skills
  • Roles & Skills
  • Results
  • Details
  • SQL Server Database Developer (AI Screening)
  • 53%
  • Skills assessed: Unix, Database Design, ETL Programming, SQL Development, Data Modelling, Python, Shell Scripting
  • Score: 48/90

Professional Summary

4 Years
  • Nov, 2020 - Present (5 yr 6 months)

    IT Consultant (Oracle PL/SQL Developer)

    Capgemini
  • Feb, 2018 - May, 2018 (3 months)

    Software Development Intern

    9LedgePro Microsoft Partner Network

Applications & Tools Known

  • PyCharm
  • Power BI
  • Tableau
  • Excel
  • PowerPoint
  • Visual Studio
  • ServiceNow

Work History

4 Years

IT Consultant (Oracle PL/SQL Developer)

Capgemini
Nov, 2020 - Present (5 yr 6 months)
    • Engaged as an on-site Subject Matter Expert (SME) in Gothenburg, Sweden, collaborating closely with the client to provide specialized insights and guidance.
    • Performed complex data manipulation using advanced PL/SQL queries, enhancing consumer data management. Improved system efficiency and addressed production issues by developing high-performance Oracle scripts.
    • Advocated for Agile principles in PL/SQL script development, job scheduling, and efficient change management processes. Demonstrated excellence in Oracle PL/SQL development, making pivotal system updates to support business objectives.
    • Created interactive dashboards for effective tracking of bugs, defects, and enhancements, improving project management workflows.
    • Pioneered Power BI dashboard development, elevating data analytics and visualization capabilities for informed decision-making. Played a key role in analyzing business KPIs and presenting valuable insights to clients and internal stakeholders.
    • Acted as the primary Single Point of Contact (SPOC) for immediate data breach interventions, maintaining high data integrity standards.
    • Guided new employees through comprehensive onboarding, focusing on client-specific applications and company standards.
    • Automated repetitive tasks using Python to reduce manual workload and ensure timely code execution.

Software Development Intern

9LedgePro Microsoft Partner Network
Feb, 2018 - May, 2018 (3 months)
    • Developed and implemented three Python projects focused on data analysis and gaming during the internship.
    • Presented projects to college faculty and peers, showcasing technical proficiency and problem-solving skills. Conducted online presentations, demonstrating project functionalities, architecture, and code documentation to a wider audience. Received positive feedback from supervisors for creativity, diligence, and adaptability in utilizing Python technology.
    • Gained hands-on experience with the Python programming language, enhancing proficiency in Python libraries and tools such as PyCharm.
    • Strengthened understanding of software development lifecycle, including requirements gathering, design, implementation, testing, and deployment.

Achievements

  • Recognition Award as the Best Performer - Capgemini - Dec 2021
  • Fire Fighter Award - Capgemini - Dec 2022

Major Projects

1 Project

Python Data Analysis Project

    Developed and implemented three Python projects focused on data analysis and gaming during the internship.

Education

  • PGDM in Information Technology

    MITSDE
  • BE in Computer Science and Engineering

    Bharathi Vidyapeeth College of Engineering

Certifications

  • Career Essentials in Business Analysis by Microsoft and LinkedIn

  • Certified ScrumMaster (CSM)

  • Scrum Foundation Professional Certificate

  • Database Management

  • AZ-900 - Microsoft Azure Fundamentals

  • Lean Six Sigma Yellow Belt Certified

  • Jira Project Management

  • Google Data Analytics

  • Programming for Everybody (Getting Started with Python)

  • Power BI Desktop

  • OCI 2023 Certified Foundations Associate

  • Agile Software Development

  • Certified SAFe 6 Practitioner

  • Google Project Management

AI-interview Questions & Answers

Yeah, hi. My name is Manchali Nikam. I have been working at Capgemini for 3.7 years on Oracle PL/SQL technologies. I have worked on search for 8 months, and I worked as a PL/SQL developer. I completed my engineering in computer science and engineering, and my postgraduation in information technology. I have good knowledge of PL/SQL, Oracle SQL, PostgreSQL, data modeling, and Power BI. I create dashboards of business KPIs, and I work on data breach cases and issues, including performance tuning. I also guide new members who join our team; I am leading 10 members in PL/SQL, and I work on change processes.

Yeah. Optimizing database performance and metrics depends completely on the type of query and what exactly it is executing, since that is what impacts database performance. We need to analyze everything: how much time the query takes when multiple sessions are running, and how much time a simple query takes when all other sessions are down. So it completely depends on the scenario. For optimizing, we first check whether indexes are in place on the columns the query filters on, how much time it is taking, what the cost of the query is, and whether indexing is done properly on the table the query executes against. Then we check whether there are any unconditional looping statements, what the flow is, and which scenarios are executing.
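
The tuning steps described above (check the plan, the cost, and the indexes) can be sketched in Oracle SQL. All table and column names here are hypothetical, not taken from the candidate's actual project:

```sql
-- Inspect the optimizer's plan for a slow query
-- (consumers / consumer_id are illustrative names).
EXPLAIN PLAN FOR
SELECT * FROM consumers WHERE consumer_id = 12345;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- If the plan shows a full table scan on a selective predicate,
-- adding an index is the usual first fix.
CREATE INDEX idx_consumers_id ON consumers (consumer_id);
```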

For implementing partitioned tables and indexing in SQL, I would first check how a query for a particular consumer ID is executing and how much time it is taking. Based on that, if it is slow for a particular ID, we can insert based on that ID, or use one ID as the partition key, so that the data gets partitioned across different consumers for the same person.
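
A common way to partition such data in Oracle is by range on a queried column, with a local index on the lookup key. This is a minimal sketch under assumed, illustrative names:

```sql
-- Range-partitioned table; all names are illustrative.
CREATE TABLE consumer_events (
  consumer_id NUMBER,
  event_date  DATE,
  payload     VARCHAR2(4000)
)
PARTITION BY RANGE (event_date) (
  PARTITION p_2023 VALUES LESS THAN (DATE '2024-01-01'),
  PARTITION p_2024 VALUES LESS THAN (DATE '2025-01-01'),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);

-- Local index so each partition's index segment stays small.
CREATE INDEX idx_events_consumer ON consumer_events (consumer_id) LOCAL;
```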

Yes. When writing complex queries with several table joins, based on the data we decide whether to use a left join or a right join. When writing the query, I first consider what I need from each table. For the five tables, I list the columns I require from each. Then I join one table to another, a to b to c, with a left join or right join as needed. I might start with a select star from table 1, table 2, giving each table an alias, and then add the joins between the tables. After that, I replace the star with the actual column names, qualified by alias: from table 1 aliased as t1, I need the x, y, z columns, so I write t1.x, t1.y, t1.z, and likewise the columns from the respective tables. That way less time is taken and the query returns only the column details that are required. But every time we approach a complex query, we definitely need to check the cost of the query so that it does not impact the performance of the database.
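
The approach described above (alias every table, then replace select star with qualified column lists) might look like this; the five tables and their columns are hypothetical:

```sql
SELECT t1.order_id,
       t2.customer_name,
       t3.product_name,
       t4.region_name,
       t5.ship_status
FROM   orders       t1
JOIN   customers    t2 ON t2.customer_id = t1.customer_id
JOIN   products     t3 ON t3.product_id  = t1.product_id
LEFT JOIN regions   t4 ON t4.region_id   = t2.region_id
LEFT JOIN shipments t5 ON t5.order_id    = t1.order_id;
```

Listing only the required, alias-qualified columns (rather than `SELECT *`) keeps the result set narrow and makes the query cost easier to reason about.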

Database capacity planning depends on the database itself: how much data it can take in a particular session, and how many sessions it can handle. There are several jobs continuously running, and since it is a data-heavy database we need data management; we handle multiple types of incoming and outgoing data. So it completely depends on the total sessions and how many activities the database can handle. These details we can get from the database administrator for the project, who can confirm how many sessions are acceptable for the production database. Then we can plan our processes around that. For example, if one job is taking more time, we assign it a particular timing, and while that job is running we separate out all the other jobs that take less time and align them together. After the long job completes, we assign the next frequency slot: if the long job runs at 10, the next starts at 11. That way the time gap is managed and the database capacity is managed, because in production we get data from multiple different systems, so we should plan properly so that the database does not go down and everything is handled properly. And all of this also depends on the performance of the queries we are writing.
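
Staggering a heavy job and a lighter one, as described above, could be done with Oracle's DBMS_SCHEDULER. The job and procedure names here are made up for illustration:

```sql
BEGIN
  -- Heavy job runs daily at 10:00.
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'HEAVY_LOAD_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'PKG_LOADS.RUN_HEAVY_LOAD',
    repeat_interval => 'FREQ=DAILY;BYHOUR=10;BYMINUTE=0',
    enabled         => TRUE);

  -- Lighter job is staggered to 11:00 so the two do not overlap.
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'LIGHT_SYNC_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'PKG_LOADS.RUN_LIGHT_SYNC',
    repeat_interval => 'FREQ=DAILY;BYHOUR=11;BYMINUTE=0',
    enabled         => TRUE);
END;
/
```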

My approach to managing database transactions: there are multiple types of transactions, such as DML and DDL, that the SQL Server database goes through. Our approach is to perform update, delete, and commit for update activities; these are the specific transactions we need to do. We also extract data by writing select queries with certain conditions. If a particular query is required again and again, I create a PL/SQL block, which gives me the data immediately without rewriting the query each time. Based on the requirement, if something is needed on a daily basis, we can put it in a stored procedure or a block, or alternatively schedule a job to send the data to us daily via email, instead of doing it manually; we try to automate as much as we can. We schedule procedures to update transactions based on count, so that actions at the database level do not impact other systems through huge bulk updates. We should do it in chunks to make it easy to manage for us and for the downstream and upstream systems.
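
Updating in chunks with intermediate commits, as described above, is commonly written in PL/SQL with BULK COLLECT ... LIMIT. This is a sketch under hypothetical table and column names, not the actual production code:

```sql
DECLARE
  CURSOR c_stale IS
    SELECT ROWID FROM consumers WHERE is_active = 0;
  TYPE t_rid_tab IS TABLE OF ROWID;
  l_rids t_rid_tab;
BEGIN
  OPEN c_stale;
  LOOP
    -- Fetch at most 10,000 row IDs per round trip.
    FETCH c_stale BULK COLLECT INTO l_rids LIMIT 10000;
    EXIT WHEN l_rids.COUNT = 0;
    FORALL i IN 1 .. l_rids.COUNT
      UPDATE consumers SET archived = 'Y' WHERE ROWID = l_rids(i);
    COMMIT;  -- commit per chunk so undo and locks stay small
  END LOOP;
  CLOSE c_stale;
END;
/
```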

The task asked to identify and debug the syntax issue that would prevent the query from executing successfully. We do not need to write 'as' twice; we can directly end the expression and give the price category alias. Another thing: if we are putting the price category in a where condition as 'expensive', then we are calculating for 'expensive' only, so why is the case statement required? The case says: when list price is greater than 1,000, the product goes in the 'expensive' price category. Oh, sorry. Yeah, actually I think there is no issue in this; it will get executed properly.
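
The query under discussion is not reproduced in the transcript, but a CASE expression of the shape being described, with the alias written correctly, would look roughly like this (table and column names are assumptions):

```sql
SELECT product_id,
       list_price,
       CASE
         WHEN list_price > 1000 THEN 'Expensive'
         ELSE 'Affordable'
       END AS price_category   -- single AS alias, no duplicate keyword
FROM   products;
```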

Regarding the logic error that might cause an infinite loop while the count starts at 1,000,000: here, is_active equals 1 is already given, and while the count is greater than 0, what we are doing is deleting the top rows where is_active equals 1. So we need to add a proper condition to delete the total count we are receiving. We should give the specific IDs if any are present, and delete on distinct values, because if is_active matches multiple rows it will go on executing. Also, we need to commit after each delete so that every batch gets committed after the delete statement; then it will not cause an issue.
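
A batched delete that is guaranteed to terminate, which is the fix being described, can be sketched as follows. It is written here in Oracle-style PL/SQL with an illustrative table name, although the interview question referred to a SQL Server DELETE TOP loop:

```sql
BEGIN
  LOOP
    DELETE FROM audit_rows
    WHERE  is_active = 1
    AND    ROWNUM <= 10000;        -- bounded batch size
    EXIT WHEN SQL%ROWCOUNT = 0;    -- loop ends once no rows match
    COMMIT;                        -- commit each batch, as suggested
  END LOOP;
  COMMIT;
END;
/
```

The exit condition is driven by the rows actually deleted, so the loop cannot spin forever even if the starting count is wrong.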

I have worked on ETL, but I have not created any data pipeline; however, I have created a pipeline for a VSTS Azure board. For handling data quality checks, while writing the code in SQL we can handle all the exceptions and validation errors so that nothing else is impacted. If the errors are handled properly for a request, we can throw an error saying that this particular request has this data issue, and add data quality checks on that.
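
A validation check of the kind described, raising an error when rows fail quality rules, might be sketched like this (the staging table and the rules are hypothetical):

```sql
DECLARE
  l_bad_rows NUMBER;
BEGIN
  -- Count rows violating the assumed quality rules.
  SELECT COUNT(*)
  INTO   l_bad_rows
  FROM   staging_consumers
  WHERE  consumer_id IS NULL
  OR     email NOT LIKE '%@%';

  IF l_bad_rows > 0 THEN
    RAISE_APPLICATION_ERROR(-20001,
      l_bad_rows || ' rows failed data quality checks');
  END IF;
END;
/
```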

How would I implement dynamic SQL within stored procedures? With dynamic SQL in a stored procedure, we basically first create or declare a block and check whether it works fine and gives the exact result, using DBMS_OUTPUT lines. Then we create that stored procedure using dynamic SQL, which runs in real time by passing parameters and everything, and we put in exceptions as well: if there is any exception we will come to know, so we can handle it. Because it is data, we are not sure what kind of data we will receive, so we need to check all the pros and cons, the scenarios, and the cases that could occur. Based on that, we implement a procedure using dynamic SQL, where we can use bulk collect and collections, or write a stored procedure for it, and if required we can schedule a job. But mostly, with dynamic SQL, we can use collections.
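
Dynamic SQL inside a stored procedure with exception handling, as described above, might look like this minimal sketch. The procedure name and use case are assumptions; DBMS_ASSERT guards the identifier, since table names cannot be bind variables:

```sql
CREATE OR REPLACE PROCEDURE get_row_count (
  p_table_name IN  VARCHAR2,
  p_count      OUT NUMBER
) AS
  l_sql VARCHAR2(200);
BEGIN
  -- Validate the identifier before concatenating it into the statement.
  l_sql := 'SELECT COUNT(*) FROM '
           || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table_name);
  EXECUTE IMMEDIATE l_sql INTO p_count;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Dynamic SQL failed: ' || SQLERRM);
    RAISE;
END;
/
```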

About 2 to 3 months back, we had an issue where a particular person or ID had multiple contact numbers, for example. When a new ID came in, we needed to keep the latest one as active. What was happening is that the query was pulling all the related content, and when we checked the performance it was taking a lot of time; sometimes it was failing for that particular ID. So in that case, I checked the queries where it was getting impacted, then checked the table structure, the query cost, and how the indexes and key parameters were aligned. Based on that, there was one missing index. After adding the index, the query took significantly less time; in fact, more than half of the time was reduced. So yes, that was one scenario I handled by adding the index and making some modifications in the query, which improved the performance.