I have been instrumental in designing and developing PL/SQL scripts, scheduling jobs, and implementing changes in alignment with Agile principles. Additionally, I have served as the Single Point of Contact (SPOC) for highly critical data breach cases, managing end-to-end resolution with a focus on swift, decisive action. I am also automating activities that currently require manual intervention by writing scripts in Python, Power Automate, and Power BI.
• Spearheaded the development of Power BI dashboards for advanced data analytics and visualization, significantly enhancing overall data quality.
• Proactively managed incident resolution, swiftly identifying and addressing issues. Communicated effectively through ServiceNow and email with cross-functional teams, ensuring seamless collaboration.
IT Consultant (Oracle PL/SQL Developer), Capgemini
Software Development Intern, 9LedgePro (Microsoft Partner Network)
Tools: PyCharm, Power BI, Tableau, Excel, PowerPoint, Visual Studio, ServiceNow
Hi, my name is Manjali Ajitnikam. I have been working at Capgemini for 3.7 years on Oracle and PL/SQL technologies. I worked on search for eight months, and I have been working as a PL/SQL developer. I completed my engineering degree in Computer Science and Engineering and my post-graduation in Information Technology. I have good knowledge of PL/SQL, Oracle SQL, PostgreSQL, data modeling, and Power BI. I create dashboards of business KPIs, I act as the SPOC for some of the data breach cases and issues, and I work on performance tuning. I also guide the new members who join our team, lead a team of ten in PL/SQL, and work on change implementation.
Optimizing database performance and its metrics depends entirely on the type of query and what exactly it is executing, because that is what impacts the database. We basically need to analyse everything first: how much time a query takes when multiple sessions are running, and how much time a simple query takes when all other sessions are down, so it depends on the scenario. For optimization, we first check the indexes on the tables the query runs against, how much time the query is taking, and what the cost of the query is, and whether indexing has been done properly on the particular table the query executes against. Then we check whether there are any unconditional looping statements, what the flow is, and which scenarios are executing.
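As a rough illustration of that first check, the analysis described above might look like this in Oracle; the table, column, and index names are hypothetical, not from an actual project.

-- Inspect the execution plan and cost of the slow query (hypothetical table)
EXPLAIN PLAN FOR
SELECT order_id, order_total
FROM   customer_orders
WHERE  customer_id = :cust_id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Check whether the filter column is already indexed
SELECT index_name, column_name
FROM   user_ind_columns
WHERE  table_name = 'CUSTOMER_ORDERS';

-- If the plan shows a full table scan on the filter column, an index usually reduces the cost
CREATE INDEX idx_cust_orders_cust_id ON customer_orders (customer_id);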
For the process of implementing partitioned tables and indexing in SQL, we first check which particular consumer ID the query is executing on and how much time it is taking. Based on that, if it is taking too long for a particular ID, we insert the data partitioned on that ID, or we split one ID across two so that the data for the same person is distributed over two different partitions.
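A minimal sketch of what partitioning on the consumer ID, with a local index, could look like in Oracle; the table and its columns are illustrative assumptions.

-- Hypothetical table hash-partitioned on consumer_id so lookups for one
-- consumer touch only a single partition
CREATE TABLE consumer_contacts (
    consumer_id  NUMBER        NOT NULL,
    contact_no   VARCHAR2(20),
    is_active    NUMBER(1)     DEFAULT 1,
    created_on   DATE          DEFAULT SYSDATE
)
PARTITION BY HASH (consumer_id) PARTITIONS 8;

-- Local index: each partition keeps its own index segment
CREATE INDEX idx_contacts_consumer
    ON consumer_contacts (consumer_id, is_active) LOCAL;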
Yes, for writing complex queries with several table joins, it depends on what type of data we want; we use a left join or a right join accordingly. When writing the query, I first consider which things I need from each table. Suppose there are five tables: I list what content I require from each, then join one table to the next, A to B and B to C, with left or right joins, noting which columns I need. Initially I may even write it as SELECT * FROM table1, table2 with an alias for every table and add the joins between them; after that I replace the star with column names qualified by the alias, for example t1.xyz from table 1, and likewise for the columns of the other tables. That way less time is taken and the query returns only the column details that are actually required. And every time we approach a complex query, we definitely need to check its cost so that it does not impact the performance of the database.
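A small example of that approach, with hypothetical tables aliased t1 to t3 and the star already replaced by qualified column names.

SELECT t1.customer_id,
       t1.customer_name,
       t2.order_id,
       t2.order_total,
       t3.invoice_no
FROM   customers   t1
LEFT JOIN orders   t2 ON t2.customer_id = t1.customer_id
LEFT JOIN invoices t3 ON t3.order_id    = t2.order_id
WHERE  t1.customer_id = :cust_id;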
Database capacity planning depends on the database itself: how much data it can take in a particular session or time window, and how many sessions it can handle. There are a few jobs that run continuously, since this is a data-management database, so we handle multiple types of incoming data and many cases. It completely depends on the total sessions and activities the database can handle, and we can get those figures from the project's database administrator, who can confirm how many concurrent sessions are acceptable for the production database. Based on that we plan the processing. For example, if one job takes more time, we assign it its own timing, and while it runs we separate out the other jobs that take less time, or pick only specific jobs to run alongside it. After that long-running job finishes, we assign the next ones using the frequency setting: one job that takes time starts at 10, the next starts at 11, so the time gap and the database capacity are both managed. In production we receive data from many different systems, so we have to plan properly how everything is scheduled so that the database does not go down and everything is handled cleanly. It also depends on the performance of the queries we are writing.
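A sketch of how two such jobs could be staggered with DBMS_SCHEDULER so they do not compete for sessions; the job names, procedures, and timings are made up for illustration.

BEGIN
  -- Heavy job gets its own slot at 22:00
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'LOAD_HEAVY_FEED',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'PKG_LOADS.LOAD_HEAVY_FEED',
    repeat_interval => 'FREQ=DAILY;BYHOUR=22;BYMINUTE=0',
    enabled         => TRUE);

  -- Lighter job starts an hour later, after the heavy one normally finishes
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'LOAD_LIGHT_FEED',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'PKG_LOADS.LOAD_LIGHT_FEED',
    repeat_interval => 'FREQ=DAILY;BYHOUR=23;BYMINUTE=0',
    enabled         => TRUE);
END;
/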
On the approach to managing database transactions: there are multiple types of transactions running against the database server, DML and DDL. The activities we mostly perform are updates, deletes, and commits, plus extracting data by writing select queries with various conditions. If I need a particular query again and again, I basically create a PL/SQL block or procedure so it gives me the data immediately, with no need to rewrite the query each time; if something is needed daily, we can also schedule a job that sends the output to us by mail. For updates and deletes we try not to work manually; we automate as much as we can by scheduling procedures that apply the transactions in controlled counts. If whatever action we take at the database level is propagated to other systems, it should not impact them because of one huge batch of updates, so we do the work in chunks, which is easier to manage both for us and for the downstream and upstream systems.
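A minimal sketch of doing those updates in chunks with periodic commits, assuming a hypothetical orders table with a status column.

DECLARE
  v_rows PLS_INTEGER;
BEGIN
  LOOP
    -- Update at most 10,000 rows per pass so no single transaction is huge
    UPDATE orders
       SET status = 'PROCESSED'
     WHERE status = 'PENDING'
       AND ROWNUM <= 10000;

    v_rows := SQL%ROWCOUNT;
    COMMIT;                     -- release locks between chunks
    EXIT WHEN v_rows = 0;       -- stop once nothing is left to update
  END LOOP;
END;
/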
For the question about identifying and debugging the syntax issue that would prevent the code from executing successfully: we do not need to write AS twice; we can end the CASE directly and then give the price_category alias. The other point is that if we put the price category 'Expensive' in the WHERE condition, we are already restricting the calculation to expensive products, so why is the CASE statement required at all? The CASE only says "when list_price is greater than 1,000 then 'Expensive'", and the filter keeps exactly that category. Apart from that, I do not think there is any issue; it should execute properly.
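The query being discussed is not shown in the transcript; the following is only a hypothetical reconstruction of the point being made, that when the WHERE clause already restricts rows to the expensive price band, the CASE expression can only ever produce one value.

SELECT product_name,
       list_price,
       CASE WHEN list_price > 1000 THEN 'Expensive'
            ELSE 'Cheap'
       END AS price_category          -- single alias, no repeated AS
FROM   products
WHERE  list_price > 1000;             -- already limits output to the 'Expensive' category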
For the question about the logic error that might cause an infinite loop in the block that loops while the count is around 1,000,000: it is already given that is_active equals 1, and while that count is greater than zero we are only deleting the top one row where is_active = 1. We need to add a proper condition tied to the total count we are actually receiving, for example deleting against the specific IDs that are present, because if there are multiple matching rows the loop just keeps executing. We also need to commit after the delete statement so that each deleted batch is committed and the loop does not cause an issue.
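The question appears to concern a SQL Server loop; this is only a hedged sketch, with a made-up table name, of one way such a loop can be made to terminate: delete in batches and exit when no rows remain.

-- Hypothetical fix: batch the delete and stop when nothing is left,
-- so the loop cannot spin forever re-checking the same condition
WHILE 1 = 1
BEGIN
    DELETE TOP (1000) FROM customer_flags
    WHERE is_active = 1;

    IF @@ROWCOUNT = 0 BREAK;   -- exit condition the original loop was missing
END;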
Actually, I have worked on ETL but have not created any pipeline there; I have created a pipeline for a VSTS Azure board. For handling data quality checks while writing the code in SQL, we can handle all the exceptions and all the validation errors so that nothing else is impacted. If the errors are handled properly for every request, we can raise an error saying that this particular request has this particular data problem, and we can add the data quality checks on top of that.
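A small sketch of the kind of validation and exception handling described, written as a hypothetical load procedure; the table and parameter names are assumptions.

CREATE OR REPLACE PROCEDURE load_contact (
    p_request_id  IN NUMBER,
    p_consumer_id IN NUMBER,
    p_contact_no  IN VARCHAR2
) AS
BEGIN
  -- Data quality check: reject obviously bad rows before they reach the target table
  IF p_contact_no IS NULL OR LENGTH(p_contact_no) < 10 THEN
    RAISE_APPLICATION_ERROR(-20001,
      'Data quality check failed for request ' || p_request_id || ': invalid contact number');
  END IF;

  INSERT INTO consumer_contacts (consumer_id, contact_no)
  VALUES (p_consumer_id, p_contact_no);
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;
    RAISE;   -- re-raise so the calling job can log and alert on the failure
END load_contact;
/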
How will you implement dynamic SQL within stored procedures? Dynamic SQL within a stored procedure is what we write when the statement is only known at run time. First we basically create one anonymous block, with a DECLARE and BEGIN section, and check that it works fine and gives the exact output using DBMS_OUTPUT.PUT_LINE. Then we create the actual stored procedure using dynamic SQL, passing the parameters at run time. We also add exception handling, so that if there is any exception we come to know and can handle it, because with data we cannot be sure what values we will get; we need to check all the pros and cons and the scenarios and cases that can occur, and implement the procedure based on that. With dynamic SQL we can also use bulk collect and collections, and if required we can schedule the procedure as a job.
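A hedged sketch of dynamic SQL inside a stored procedure, using EXECUTE IMMEDIATE with a bind variable and exception handling; the procedure and table names are illustrative assumptions.

CREATE OR REPLACE PROCEDURE purge_inactive (
    p_table_name IN VARCHAR2,
    p_cutoff     IN DATE
) AS
  v_sql VARCHAR2(400);
BEGIN
  -- Build the statement at run time; DBMS_ASSERT guards against injection via the table name
  v_sql := 'DELETE FROM ' || DBMS_ASSERT.SQL_OBJECT_NAME(p_table_name) ||
           ' WHERE is_active = 0 AND created_on < :cutoff';
  EXECUTE IMMEDIATE v_sql USING p_cutoff;
  DBMS_OUTPUT.PUT_LINE(SQL%ROWCOUNT || ' rows purged from ' || p_table_name);
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;
    RAISE;   -- surface the error to the caller or the scheduler
END purge_inactive;
/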
About two to three months back we had an issue: when a particular person or ID had multiple contact numbers, to take an example, and a new one came in, we needed to keep only the latest as active. What was happening was that the process went and updated all the related content, it was taking a lot of time, and the request sometimes failed for that particular ID. To improve the performance I checked where the query was getting impacted, then the table structure and the query cost, and then how many indexes and key parameters were aligned with that table. Based on that, one index was missing. After adding the index the query took far less time; more than half of the time was saved. So that was one scenario I handled by adding an index and making some modifications to the query, and the performance was improved.
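A rough reconstruction of that fix under assumed table and column names: a composite index covering the lookup, plus the update that keeps only the latest contact active.

-- Composite index so the deactivation query no longer full-scans the table
CREATE INDEX idx_contacts_active_lookup
    ON consumer_contacts (consumer_id, is_active, created_on);

-- Deactivate everything older than the most recent contact for the same consumer
UPDATE consumer_contacts c
   SET c.is_active = 0
 WHERE c.consumer_id = :p_consumer_id
   AND c.is_active   = 1
   AND c.created_on  < (SELECT MAX(created_on)
                          FROM consumer_contacts
                         WHERE consumer_id = :p_consumer_id);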