
Hemalatha

Vetted Talent
Senior Software Engineer with a 4-year background in the database field. Demonstrates exceptional proficiency in SQL and MySQL.
  • Role

    Senior Software Engineer

  • Years of Experience

    5 years

Skillsets

  • SQL - 5 Years
  • SQL - 4 Years
  • MySQL - 1 Year
  • SSIS
  • Git
  • DevOps
  • Power BI
  • Python
  • Tableau

Vetted For

7 Skills
  • SQL Server Database Developer - AI Screening: 60%
  • Skills assessed: Unix, Database Design, ETL Programming, SQL Development, Data Modelling, Python, Shell Scripting
  • Score: 54/90

Professional Summary

5 Years
  • Jan 2022 - Present (3 yr 8 months)

    Senior Software Engineer

    Customer Analytics India, Pvt. Ltd.
  • Jan 2019 - Dec 2022 (3 yr 11 months)

    Senior Software Engineer

    KANTAR Analytics Practice

Applications & Tools Known

  • SQL
  • MySQL
  • Power BI
  • SSIS
  • Azure DevOps

Work History

5 Years

Senior Software Engineer

Customer Analytics India, Pvt. Ltd.
Jan 2022 - Present (3 yr 8 months)

Senior Software Engineer

KANTAR Analytics Practice
Jan 2019 - Dec 2022 (3 yr 11 months)

  • Implemented stored procedures, functions, views, and other SQL activities using SQL and MySQL.
  • Directed the performance tuning of procedures, conducting thorough analysis of existing code and delivering recommendations for enhancement.
  • Reviewed business requirements and technical design documents, leading to the creation of efficient database solutions.
  • Collaborated on the automation of SSIS package creation, successfully streamlining the overall workflow.
  • Supported the automation of ETL processes within SSIS packages using Python, improving data integration and reducing manual effort.
  • Created stored procedures and collaborated with cross-functional teams to understand business requirements, translating them into SQL-based solutions.
  • Implemented data migration tasks, including ETL processes, to transfer data between different database systems and environments.

Achievements

  • Development and implementation of Stored Procedures and Functions.
  • Utilizing Azure DevOps for seamless integration across multiple environments.
  • Enhancing existing Power BI reports, integrating new features.
  • Mentoring junior team members.

Major Projects

1 Project

Education

  • Bachelor of Engineering in Computer Science

    Government College of Technology, Coimbatore (2019)
  • Higher Secondary Certificate

    Bharathi Higher Secondary School, Namakkal

Certifications

  • Azure Fundamentals, certified by Microsoft

  • Power Platform Fundamentals, certified by Microsoft

  • MySQL Essential Training, completion certificate from LinkedIn

AI-interview Questions & Answers

Hi, I'm Hemalatha. I have around 5 years of experience in the database field with SQL, and with Power BI as well. In SQL I am involved in the development side, creating procedures, functions, and other SQL activities based on the requirements. For example, when we build a web page, my role is to provide the stored procedures that serve the data for that page; we mostly do analysis of the product there. I have also been involved in creating Power BI reports end to end, from gathering the business requirements to publishing the report. I was involved in automation for data processing, loading the data into our destination tables, and I have used Python for some of that processing. I have also started learning Azure, and I have started leading a small team within the company.

Okay, for complex queries that must join several tables and perform efficiently: if I have multiple tables and functions involved, then to reduce the run time I create indexes on the required fields. Whichever fields are used in the joins or in the WHERE condition, I make them indexes on the table, so it improves performance. I also validate the data types of each column in the tables. While joining, if I am using an indexed column, I use the actual column rather than wrapping it in a function, such as a date function, in the ON condition, because with a function it will not use the index correctly. Across the several tables, I make sure each table has a primary key to join on, and I mostly try to avoid duplicate values when joining two tables, so it gets the data easily and does not take more time.
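A minimal T-SQL sketch of the indexing approach described above; the Orders and Customers tables and their columns are assumptions for illustration, not from the profile:

    -- Index the columns used in the join and in the WHERE condition
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
        ON dbo.Orders (CustomerId, OrderDate);

    -- Sargable form: compare the indexed column directly so the index can be used
    SELECT c.CustomerName, o.OrderId, o.OrderDate
    FROM dbo.Customers AS c
    JOIN dbo.Orders AS o
        ON o.CustomerId = c.CustomerId      -- no function wrapped around the join column
    WHERE o.OrderDate >= '2024-01-01'
      AND o.OrderDate <  '2025-01-01';

    -- Not sargable: wrapping the column in a function forces a scan instead of an index seek
    -- WHERE YEAR(o.OrderDate) = 2024;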

Okay. With a conceptual data model we have an idea of what tables we have, what columns they hold, and what the relationships are. Based on that, I create each table with its primary key and define which columns are the primary key and foreign key for that table, and with that I can set up the relationships. While creating the tables in SQL, I make sure each table has the columns it needs with the respective data types and the respective indexes, unique keys, or any other constraints, and I relate all the different tables to build the relationships.
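A small sketch of turning such a conceptual model into physical tables; the Customer and Orders entities and their columns are hypothetical:

    CREATE TABLE dbo.Customer (
        CustomerId   INT IDENTITY(1,1) PRIMARY KEY,    -- primary key for the entity
        CustomerName VARCHAR(100) NOT NULL,
        Email        VARCHAR(255) NOT NULL UNIQUE      -- unique constraint from the model
    );

    CREATE TABLE dbo.Orders (
        OrderId     INT IDENTITY(1,1) PRIMARY KEY,
        CustomerId  INT NOT NULL
            REFERENCES dbo.Customer (CustomerId),      -- foreign key captures the relationship
        OrderDate   DATE NOT NULL,
        TotalAmount DECIMAL(10,2) NOT NULL CHECK (TotalAmount >= 0)
    );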

For a slow-running query, I try to analyse the execution plan for that query. From the execution plan I can see in which part it is getting slow, and if an index is needed it will also recommend one for us to create, so I will go for that. I also check the data types of the columns used in the table; the value should correspond to the column, so a numeric column should be an integer or bigint rather than stored as text. I make sure the respective data type is used for the respective columns, and I improve the performance with the help of the execution plan, whether that means a needed index, different joins, a data type change, or anything else we can do.
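A hedged sketch of inspecting a slow query in SQL Server; the table, columns, and index name are assumptions, and the statistics settings and missing-index suggestion are standard tooling rather than anything specific to the profile:

    -- Show I/O and timing details for the query that follows
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Run the slow query with the actual execution plan enabled in SSMS
    SELECT OrderId, OrderDate, TotalAmount
    FROM dbo.Orders
    WHERE CustomerId = 42;

    -- If the plan reports a missing index, it can be created explicitly, for example:
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
        ON dbo.Orders (CustomerId)
        INCLUDE (OrderDate, TotalAmount);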

SQL injection attack: I'm not sure about SQL injection attacks. I'm not sure exactly about injection, but I think it may be something like a deadlock, where the same process, or two resources, are trying to get data from the same table at the same time. For that, I always add the NOLOCK keyword when selecting.
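For reference, a small sketch of the NOLOCK hint mentioned in the answer, on a hypothetical Orders table; NOLOCK is a read locking hint rather than a defence against SQL injection:

    -- Read without taking shared locks (dirty reads become possible)
    SELECT OrderId, OrderDate
    FROM dbo.Orders WITH (NOLOCK)
    WHERE CustomerId = 42;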

I use transactions. I use a transaction in the stored procedure, so it is only success or failure, not a partial commit. If it is successful everything succeeds; otherwise we can roll back on any failure. I use BEGIN TRANSACTION and then COMMIT or ROLLBACK, so all my queries run inside the transaction, and it will only ever be a complete success or a complete failure, never a partial commit.
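A minimal T-SQL sketch of the all-or-nothing pattern described above, inside a hypothetical stored procedure (the procedure, table, and parameter names are illustrative):

    CREATE OR ALTER PROCEDURE dbo.TransferStock
        @FromWarehouseId INT, @ToWarehouseId INT, @Quantity INT
    AS
    BEGIN
        SET NOCOUNT ON;
        BEGIN TRY
            BEGIN TRANSACTION;

            UPDATE dbo.Stock SET Quantity = Quantity - @Quantity
            WHERE WarehouseId = @FromWarehouseId;

            UPDATE dbo.Stock SET Quantity = Quantity + @Quantity
            WHERE WarehouseId = @ToWarehouseId;

            COMMIT TRANSACTION;           -- both updates succeed together
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK TRANSACTION;     -- any failure undoes all the work
            THROW;                        -- re-raise the error to the caller
        END CATCH
    END;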

If the users table has a lot of data, then with the condition active = 1 the WHILE loop will keep showing more than zero, because the DELETE in this query removes only the top one row, so each iteration deletes just a single row. Instead of that, I can give the condition to delete all the records matching the condition rather than doing it one row at a time. If there is only one row it works fine in a single pass, but if there are a lakh of records it will delete and re-check every time, and the count stays greater than zero most of the time. So instead of deleting only the top one, we can delete based on the condition directly.
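A sketch of the two alternatives implied above, assuming a hypothetical Users table with an Active flag: a single set-based delete, or a batched delete when the table is very large:

    -- Set-based: delete every matching row in one statement
    DELETE FROM dbo.Users
    WHERE Active = 1;

    -- Batched: delete in chunks to keep the transaction log and locking manageable
    WHILE 1 = 1
    BEGIN
        DELETE TOP (5000) FROM dbo.Users
        WHERE Active = 1;

        IF @@ROWCOUNT = 0 BREAK;   -- stop once no matching rows remain
    END;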

Maybe I can go for an index on the order details table for the order ID column, since it takes the order ID parameter in the WHERE condition, so the lookup can be checked easily from the index. As a code change, I think yes, the index is what we can create.
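A one-line sketch of that index, assuming a hypothetical OrderDetails table with an OrderID column:

    CREATE NONCLUSTERED INDEX IX_OrderDetails_OrderID
        ON dbo.OrderDetails (OrderID);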

How do you go about designing a database and recovery strategy for a mission-critical application? For the design part, I gather what data we need: what entities and what attributes, that is, what tables and columns we want for this application. We also need to make sure each one can be related with a key, like a primary key and foreign key. Basically we have dimension tables and a normalized database, so there should not be any redundancy. Instead of loading all the data into one table, we normalize: we create multiple tables with an ID and the descriptive fields, and only that ID is used in the transactional table. If we want any data from the transaction table, we can match it with the dimension table on the ID and get the other columns from there. So we need to make sure the design is normalized, using the required multiple tables instead of loading and updating everything in the same single table. For the recovery strategy, maybe I can take database backups and restore on a weekly basis, or if the data is more critical I can go day by day: every day take a backup, and once today's copy is taken I can delete the previous one and keep only the latest. So if there is a lot of critical transactional data every day, I go for daily database backups; otherwise weekly is enough, and that will be useful.
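A hedged T-SQL sketch of the backup cadence described above; the database name and file paths are assumptions, and a full mission-critical strategy would also cover transaction log backups and restore testing:

    -- Weekly full backup (the cadence mentioned in the answer)
    BACKUP DATABASE SalesDB
    TO DISK = N'D:\Backups\SalesDB_full.bak'
    WITH INIT, CHECKSUM;

    -- For more critical, fast-changing data the same full backup can be taken daily;
    -- differential and log backups are common additions for point-in-time recovery.
    BACKUP DATABASE SalesDB
    TO DISK = N'D:\Backups\SalesDB_daily.bak'
    WITH INIT, CHECKSUM;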

Can you discuss a time when you had to revisit the database design for improved usage? While working on one of the projects, called Ferrero, there was a lot of transactional data; every day we would get a large number of records. To improve the performance we checked the index fragmentation, and when it occurred we did rebuilds or re-indexing wherever that helped.
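A small sketch of the fragmentation check and rebuild/reorganize step mentioned above, on a hypothetical table and index (which option to use depends on the fragmentation level measured):

    -- Check fragmentation for the indexes of a table
    SELECT ips.index_id, i.name, ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

    -- Light fragmentation: reorganize; heavy fragmentation: rebuild
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;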

Describe your process for testing and validating SQL Server upgrades or patches before they are applied in production. I'm not sure about this part of SQL Server upgrading right now; I'm interested to learn these things.