
· 19 years of total experience in software development, including 10+ years in MS SQL and 5+ years in Power BI.
· Efficient in standardizing development processes and implementing best practices in database design, data modelling, and dashboard creation.
· Highly skilled, results-driven Power BI developer with over 5 years of experience designing, developing, and deploying interactive reports and dashboards using Microsoft Power BI. Expertise in data modeling, ETL processes, DAX, and SQL queries to deliver actionable business insights.
· Good understanding of Data Warehouse and ETL concepts.
· Knowledge of the Azure platform: ADLS, ADF, and Synapse.
· Team leadership: led result-oriented teams of engineers and encouraged staff development.
· Strong communication, analytical, and problem-solving skills.
Principal Software Engineer, Saama Technologies
Lead Developer, KrtrimaIQ Cognitive Solutions
AVP, Citicorp Services India
Software Developer, Anand ERP Ind.
System Engineer, Emerson Export Engineering Centre
Associate Projects, Cognizant Technology Solutions
Crystal Reports
Microsoft Power BI
T-SQL
Microsoft Excel
My name is Shrikant Chandrasekhar Singade. I have more than 18 years of total experience in software development. I have been working with Citicorp Services India Private Limited for more than 10 years now as a senior developer. My primary skill is Microsoft SQL Server development, which includes writing stored procedures, user-defined functions, views, and triggers, as well as performance tuning. I have also done data modeling, database development, and data analysis using Crystal Reports templates and SSRS. Those are my primary skills. As secondary skills, in my previous organizations I worked on the Actuate reporting tool for report development, again with Microsoft SQL Server as the primary skill set for writing stored procedures, functions, T-SQL programming, and performance tuning. I also worked on C#.NET and VB.NET around 15 years ago in my first organization. That is my background.
I have not worked in a healthcare software environment yet. I have worked in the finance domain, on a fund accounting team, and before that on a resource planning and project management application. I can give one example from my current organization, which is in the finance domain. We received one requirement that was critical and complex, with very involved calculations and logic to be applied, and it was wanted on an urgent basis. I communicated with the customer, gathered all the required information, and proposed a solution: we would create a new report template, with Microsoft SQL Server as the database and Crystal Reports as the front end. The business logic covered more than 50 fields involving many calculations and aggregations, while the input parameters were only 5; based on those 5 input values we had to generate a report of more than 50 columns. I implemented this in SQL programming by writing a stored procedure. Once it was approved, I received an enhancement request on top of it: they wanted the same report not just for one particular group but for a list of groups. So the input parameter changed from a single group to a list of groups, and I modified the stored procedure to use temporary tables to stage the data, perform all the calculations in the temporary table, and return one result set covering the whole list of groups required in the output.
So that was the critical task. We had to deliver it in less than 7 days, and we were able to deliver it on the sixth day to the UAT environment.
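The pattern described above, staging a requested list of groups in a temporary table and aggregating against it, can be sketched roughly as follows. This is a minimal Python/SQLite illustration, not the actual T-SQL procedure; the table and column names (`trades`, `req_groups`, `amount`) are hypothetical stand-ins for the real report's 50+ calculated fields:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical source data: one row per transaction, tagged with a group.
cur.execute("CREATE TABLE trades (group_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO trades VALUES (?, ?)",
                [(1, 100.0), (1, 50.0), (2, 200.0), (3, 999.0)])

def group_report(conn, group_ids):
    """Mimics the enhanced stored procedure: stage the requested
    group list in a temp table, then join and aggregate against it."""
    cur = conn.cursor()
    cur.execute("CREATE TEMP TABLE IF NOT EXISTS req_groups (group_id INTEGER)")
    cur.execute("DELETE FROM req_groups")
    cur.executemany("INSERT INTO req_groups VALUES (?)",
                    [(g,) for g in group_ids])
    cur.execute("""
        SELECT t.group_id, SUM(t.amount) AS total
        FROM trades t
        JOIN req_groups g ON g.group_id = t.group_id
        GROUP BY t.group_id
        ORDER BY t.group_id
    """)
    return cur.fetchall()

print(group_report(conn, [1, 2]))  # group 3 is not requested, so excluded
```

In T-SQL the same shape would use a `#temp` table or a table-valued parameter inside the stored procedure; the key point is that a single call now serves any number of groups.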
Regarding data discrepancies in reports generated from SQL: I have come across a couple of situations where data was not generated correctly. In the first, we had created a new report producing daily NAVs, generated on a daily basis. On the UAT environment we received feedback from the client that some data was not coming through properly: there were supposed to be records for around 10 to 15 funds, but the report was showing only about 80% of them, around 10 to 12 funds. I investigated at the back end in SQL to see what was actually missing and found that some of the master data in the fund setup did not have the proper values. I asked the user to update those values in the fund setup, and the report then generated for all the required funds.
The other scenario involved a report that had been running for many years. I received a mail from the customer that the report, which showed data for around 50 funds, was not generating data for 2 newly added funds. I checked why the data was not appearing and found a very small typo made by the user when those 2 funds were set up in the database: the fund names were missing a hyphen required by the naming convention, because we used the fund name in the Crystal Report to fetch the child fund's data. I asked the user to update the fund names to include the hyphen, and those 2 funds then displayed in the report. That is how I solved the issue.
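Both incidents reduce to the same diagnostic: find rows in the fact data whose master setup is missing or malformed. A hedged sketch of that anti-join check, using hypothetical table and column names (`nav`, `fund_setup`, `report_code`) in Python/SQLite rather than the real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical tables: daily NAV rows and the fund master setup.
cur.execute("CREATE TABLE nav (fund_id INTEGER, nav REAL)")
cur.execute("CREATE TABLE fund_setup (fund_id INTEGER, report_code TEXT)")
cur.executemany("INSERT INTO nav VALUES (?, ?)",
                [(1, 10.5), (2, 11.0), (3, 9.8)])
# Fund 3 has no setup row at all; fund 2 has a blank setup value.
cur.executemany("INSERT INTO fund_setup VALUES (?, ?)",
                [(1, 'EQ'), (2, '')])

# Anti-join: funds with NAV data but no usable master setup.
cur.execute("""
    SELECT DISTINCT n.fund_id
    FROM nav n
    LEFT JOIN fund_setup s ON s.fund_id = n.fund_id
    WHERE s.fund_id IS NULL OR s.report_code = ''
    ORDER BY n.fund_id
""")
missing = [row[0] for row in cur.fetchall()]
print(missing)  # the funds the report will silently drop
```

Running this kind of check before the report is refreshed surfaces setup gaps proactively instead of waiting for the client to notice missing funds.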
If web application functionality is breaking, then from the SQL perspective that might be because an error occurred while processing data at the SQL level. I would first check the error and identify at what point, on which event, the application breaks. Based on that event I would check what code is actually written and, since this is SQL-related, find which particular SQL is not running properly or is raising an error. One example: there was a web form generated from a view, and a change was made at the database level where one column was renamed, but the rename was not reflected in the underlying view. It was a genuine mistake by the developers, and the testers could not identify it either. We updated the underlying view with the correct column name, and the application then ran properly.
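The stale-view failure described above can be reproduced in miniature. This is a Python/SQLite sketch with hypothetical names (`accounts`, `v_accounts`, `bal`/`balance`); the rename is simulated by rebuilding the table, since in SQL Server a rename likewise leaves non-schemabound view definitions untouched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical table and the view that backs the web form.
cur.execute("CREATE TABLE accounts (acct_no TEXT, bal REAL)")
cur.execute("CREATE VIEW v_accounts AS SELECT acct_no, bal FROM accounts")

# Simulate the database-level column rename by rebuilding the table;
# the view definition is not rewritten and now dangles.
cur.execute("DROP TABLE accounts")
cur.execute("CREATE TABLE accounts (acct_no TEXT, balance REAL)")
cur.execute("INSERT INTO accounts VALUES ('A-1', 500.0)")

# Selecting from the stale view fails: it references a missing column.
try:
    cur.execute("SELECT * FROM v_accounts").fetchall()
    broken = False
except sqlite3.OperationalError:
    broken = True

# The fix applied in the story: recreate the view with the new name.
cur.execute("DROP VIEW v_accounts")
cur.execute("CREATE VIEW v_accounts AS SELECT acct_no, balance FROM accounts")
rows = cur.execute("SELECT * FROM v_accounts").fetchall()
print(broken, rows)
```

The lesson generalizes: after any column rename, every dependent view, trigger, and procedure needs to be checked and recompiled.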
How might you implement a test plan to validate the accuracy of an ETL process? To prepare such a test plan I would go through a few steps. First, I would check the requirement: what the ETL process should do. Based on the requirement we can gather or generate sample data for which the ETL process should work. Then I would go step by step through the transformations and logic: if any filtration or further calculation is required, I can unit-test each transformation on the sample input, checking whether the filtering works properly and whether the calculations are done correctly for all fields. Once all the business rules are working correctly, we load the data. From the BRD we also know what the required output should be, so after loading we compare the loaded data against the required output. If they match, we can say the ETL process is working correctly.
How would you use SQL to create a report that shows trends over time for a particular dataset? When we want to show a trend over time, the output is typically graphical, with time on one axis and the value on the other. What I would do is use aggregation, meaning a rolled-up total over the time frame: as time goes on, the amounts are rolled up to give the output. For example, if sales in the first year are 1 lakh INR and in the second year 1.5 lakh, then the rolled-up total for year 2 is 1 lakh plus 1.5 lakh, which comes to 2.5 lakh. This is how I would roll up the values and generate the output from SQL: using the SUM window function, I would take the sum of sales over time in ascending order. That gives the data needed to show the trend of how sales are moving against each year.
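The rolled-up total described above maps directly to `SUM(amount) OVER (ORDER BY yr)` in T-SQL. A minimal, runnable sketch in Python/SQLite using the figures from the example (table name `sales` is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (yr INTEGER, amount REAL)")
# Figures from the example: 1 lakh in year 1, 1.5 lakh in year 2.
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [(1, 100000.0), (2, 150000.0), (3, 120000.0)])

# Running (rolled-up) total over time using a window function,
# the same shape as SUM(amount) OVER (ORDER BY yr) in T-SQL.
cur.execute("""
    SELECT yr, amount,
           SUM(amount) OVER (ORDER BY yr) AS running_total
    FROM sales
    ORDER BY yr
""")
trend = cur.fetchall()
print(trend)
```

Year 2's running total comes out to 250000 (2.5 lakh), matching the hand calculation; either the per-period `amount` or the cumulative `running_total` column can feed the trend chart.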
Examine the QA test script written in pseudocode below; can you spot any shortcomings in the testing logic that might let bugs slip through? The test verifies a successful login: input a correct user, click the login button, check whether the user is redirected to the dashboard page. There are checkpoints missing. Whenever the user presses the login button, we should check that the username and password are both not blank; that the password meets a minimum complexity requirement in terms of length and combination of characters; and, when creating a new user, that duplicate usernames are rejected, so the username is unique for any login. In the pseudocode, none of these checkpoints appear. It does not validate the username and password to determine whether the login is actually valid, so there are no negative checks in the test case, and there should be.
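The missing checkpoints can be expressed as concrete negative test cases. The `validate_login` function below is a hypothetical stand-in for the login form's validation logic, just to make the extra assertions runnable; the real system under test would be exercised through its UI or API:

```python
import re

# Hypothetical validator standing in for the login form logic;
# the point is the extra checkpoints, not this implementation.
def validate_login(username, password):
    if not username or not password:
        return "blank fields not allowed"
    if (len(password) < 8
            or not re.search(r"[0-9]", password)
            or not re.search(r"[A-Za-z]", password)):
        return "password too weak"
    return "ok"

# Negative test cases the original script never exercised:
assert validate_login("", "Secret123") != "ok"        # blank username
assert validate_login("alice", "") != "ok"            # blank password
assert validate_login("alice", "short1") != "ok"      # below minimum length
assert validate_login("alice", "longenough") != "ok"  # no digit
assert validate_login("alice", "Secret123") == "ok"   # happy path still works
print("all login checkpoints passed")
```

A uniqueness check at user creation (rejecting duplicate usernames) would be a further test case against the registration flow rather than the login form itself.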
The query selects id, first name, and last name from users where is_active = 1, and the WHERE clause says the created date should be greater than 1st Jan 2023, with an OR condition that the updated date should be less than 1st Jan 2022. Because AND binds tighter than OR, this actually returns active users created after 1st Jan 2023 plus any user at all, active or inactive, whose updated date is before 1st Jan 2022. The two date conditions also read as contradictory when taken together: created after 2023 versus updated before 2022. Since the OR is not parenthesised, the is_active filter does not apply to the second branch, so I think this logic is not correct as written; the date comparisons most likely need parentheses around them.
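The precedence pitfall can be demonstrated concretely. In this Python/SQLite sketch the `users` rows are made up to expose the bug: an inactive user leaks through the unparenthesised OR branch, and parenthesising fixes it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE users
               (id INTEGER, is_active INTEGER,
                created_date TEXT, updated_date TEXT)""")
cur.executemany("INSERT INTO users VALUES (?, ?, ?, ?)", [
    (1, 1, '2023-06-01', '2023-06-02'),  # active, created after cutoff
    (2, 0, '2021-05-01', '2021-12-01'),  # INACTIVE, updated before 2022
])

# AND binds tighter than OR, so the inactive user leaks through
# via the second branch.
q1 = cur.execute("""
    SELECT id FROM users
    WHERE is_active = 1 AND created_date > '2023-01-01'
       OR updated_date < '2022-01-01'
""").fetchall()

# Parentheses force is_active = 1 to apply to both date branches.
q2 = cur.execute("""
    SELECT id FROM users
    WHERE is_active = 1
      AND (created_date > '2023-01-01' OR updated_date < '2022-01-01')
""").fetchall()

print(q1, q2)
```

The first query returns both users; the second correctly drops the inactive one.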
Can you talk about a time when you had to ensure compliance with security principles while working with sensitive data? I have not come across situations where I had to give any justification or reason to compliance regarding security principles; we follow all the security principles when working with any kind of data.
Share an example of how you used ETL tools to enhance data analysis and reporting in your previous projects. In my very first organization, Anand ERP, we had one in-house developed tool that read input flat files, either comma-separated or otherwise delimited, containing raw data that we needed to load into our database. The code was written in C#.NET: it read the file, and once read, we loaded the data directly into our database. After that I did not get much chance to work on the ETL part, but in my current project, a couple of years ago, we built one small ETL package. We have three different instances of the same application, one per worldwide region, and one client whose data flows to all three regions wanted a consolidated, aggregated report across them. So we generated flat files from all three regions and inbounded them into a separate, very small database created specifically for that client, loading the data with an SSIS package, and then generated the report from that consolidated database. That is how we did the ETL work.
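The consolidation pattern, delimited extracts from each region landed in one small reporting database, can be sketched as follows. This Python/SQLite version stands in for the SSIS package; the region names, file contents, and `consolidated` schema are all hypothetical:

```python
import csv
import io
import sqlite3

# Stand-ins for the comma-separated extracts from the three regions;
# in the real setup these would be files produced by each instance.
region_files = {
    "AMER": "fund,amount\nF1,100\nF2,200\n",
    "EMEA": "fund,amount\nF1,50\n",
    "APAC": "fund,amount\nF2,25\nF3,10\n",
}

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE consolidated (region TEXT, fund TEXT, amount REAL)")

# Load step: read each delimited file and land it in the single database.
for region, payload in region_files.items():
    for row in csv.DictReader(io.StringIO(payload)):
        cur.execute("INSERT INTO consolidated VALUES (?, ?, ?)",
                    (region, row["fund"], float(row["amount"])))

# Report step: the cross-region consolidated view the client asked for.
cur.execute("""
    SELECT fund, SUM(amount) FROM consolidated
    GROUP BY fund ORDER BY fund
""")
report = cur.fetchall()
print(report)
```

The SSIS equivalent would use flat-file sources feeding an OLE DB destination, but the shape is the same: land everything in one table tagged by region, then aggregate.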
Describe how you would use Excel's advanced features to organize and analyze large datasets effectively. In Excel I know a few features for organizing data. One is sorting, along with the many built-in mathematical operations: we can order the data according to the requirement, mostly by date, so that the earliest-loaded data comes first, i.e. sorting date-related columns in ascending order. We can also add calculated columns: if any calculations or new calculated fields are required, we can create new columns using summation, multiplication, or any other arithmetic operation. And if we want to filter or look up data, we can use Excel's VLOOKUP or HLOOKUP functionality for filtration and lookup purposes. This is how we can analyze and organize the data in Excel.