Info Global Tech Solutions
Info Global Tech Solutions - Systems Analyst
Dun and Bradstreet - Analyst Programmer/SQL Server DBA and T-SQL Developer
SRM Technologies - Programmer (T-SQL Developer)
Ramco Systems - Database Programmer/SQL Server and T-SQL Developer
Zylog Systems
T-SQL
SQL Server Integration Services
C++
Microsoft Windows
RVW
VSS & Model Explorer
DFN
Project - WMS-CoSaCS Integration
Client: Unicomer
Environment: Windows Server 2012 and SQL Server 2012
Role: Database Developer
Team Size: 10
Project: Silver Blade 2.0 Product - Silver Blade 2.0 (SQL Version) is a credit bureau product. The project covers processing bulk data into the database and fetching credit reports.
Client: Dun and Bradstreet
Environment: Windows Server 2012 and SQL Server 2012
Role: Database Developer and DBA
Team Size: 40
Project 1: ECihan - An online tool for Cihan University covering Admission, Academics, Fee Management, and Hostel Management.
Client: Cihan University
Environment: Windows Server 2012 and SQL Server 2012
Role: Database Developer
Team Size: 15
Project 2: eUKH - An online tool for Kurdistan University covering Admission, Academics, Fee Management, and Hostel Management.
Client: Kurdistan University
Environment: Windows Server 2012 and SQL Server 2012
Role: Database Developer/Production DBA
Team Size: 15
Project 1: Stock Monitoring Tool - Zylog's treasury department does online trading in the stock market; the corresponding debit, credit, buy, and sell transactions happen through banks and financial institutions, and the same information is received through online statements from those banks and financial institutions. This module produces MIS reports, but the system expects inputs from the user; with those inputs the system keeps complete track of all the information, and various MIS reports can be generated.
Environment: Windows Server 2003, 2008 and SQL Server 2005, 2008
Role: Database Developer/Production DBA
Team Size: 8
Project 2: Zee Tools - This tool has Employee Details, Appraisal, Projects and their Clients, Payroll Processing, Tax Calculation, and Attendance Reporting System modules. The Finance Department is supported with revenue reports.
Environment: Windows Server 2003, 2008 and SQL Server 2005, 2008
Role: Database Developer/Production DBA
Team Size: 8
Project 3: Timesheet Management Tool - The Timesheet Management Tool was developed to track employees' daily activities. This software lets approvers and reviewers get an overview of their subordinates' daily activities.
Environment: Windows Server 2003, 2008 and SQL Server 2005, 2008
Role: Database Developer/Production DBA
Team Size: 3
Project: In-house Implementation - This tool has Employee Details, Performance Appraisal, Development Appraisal, Payroll Processing, Tax Calculation, and Attendance Reporting System modules.
Environment: Windows Server 2003, 2008 and SQL Server 2005, 2008
Role: HR Technical Support
Team Size: 11
Project 2: Loan Origination System - This project was done for a multibillion-dollar regional bank in the USA, which provides a diversified portfolio of financial services. The solution provided to the bank automated the entire loan origination process, from lead generation through closing and booking of the loan in the system. It covered the various stages of the loan origination process: opportunity management, financial analysis, loan packaging, negotiation, ordering, reviews, approvals and counter offers, document preparation, loan closing execution, and loan set-up and booking in the system. The solution helped automate manual tasks, increase productivity, provide better interfaces, and streamline existing business processes.
Client: Commerce Bank, USA
Environment: Windows Server 2003, 2008 and SQL Server 2005, 2008
Role: HR Technical Support
Team Size: 40
Hi, I'm Anthony. I'm a SQL developer working at Info Global Tech Solutions, and I have over 10 years of experience in SQL development. I have been involved in performance tuning, database design, and SQL Server migration, including Oracle to SQL Server migration. In my current project we send data through APIs to another URL, and this is achieved through SQL Server procedures themselves. It is roughly a 20-year-old project, so without disturbing the existing functionality, we send the data and also perform some enhancements to it. That's it from my side.
Without direct access to the database, this slow-running query could be down to the data load, and the NOLOCK hint can be used in the SELECT statement, accepting that it will give only dirty reads. So without locking the database, we can use the SELECT statements with NOLOCK; that is about the limit of what we can do.
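A minimal sketch, assuming a hypothetical dbo.Orders table, of how the NOLOCK hint might be applied to such a SELECT:

-- Hypothetical example: reading without taking shared locks (allows dirty reads)
SELECT OrderID, OrderDate, TotalAmount
FROM dbo.Orders WITH (NOLOCK)
WHERE OrderDate >= '2024-01-01';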
To minimize the schema changes, what we can do is use temporary tables, and also, if permitted, user-defined functions to achieve the calculations or computations. Other than that, we should understand the requirement clearly and design the database properly with the primary table and the relationship tables.
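A minimal sketch of that approach; the table, column names, and the tax formula are assumptions:

-- Hypothetical user-defined function used instead of altering the schema
CREATE FUNCTION dbo.fn_NetAmount (@Gross DECIMAL(18,2), @TaxRate DECIMAL(5,2))
RETURNS DECIMAL(18,2)
AS
BEGIN
    RETURN @Gross * (1 - @TaxRate / 100.0);
END;
GO

-- Intermediate results go into a temporary table rather than a new permanent table
SELECT OrderID, dbo.fn_NetAmount(GrossAmount, TaxRate) AS NetAmount
INTO #OrderSummary
FROM dbo.Orders;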
Yes, table partitioning can be done on the tables. In my current project I have done table partitioning based on the year; when it is based on the year, we can do the partitioning so that the data can be read easily. If it is an order table or a transaction table, we need to identify its key column, that is, the column on which the data is written. Once we identify it, we implement a nonclustered index on it and then set up the table partitioning based on that. We should also write the SELECT statements and the procedures based on that condition, and it will work fine.
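A minimal sketch of year-based partitioning, assuming a hypothetical dbo.Orders table partitioned on OrderDate:

-- Hypothetical partition function and scheme splitting rows by year
CREATE PARTITION FUNCTION pfOrderYear (DATE)
AS RANGE RIGHT FOR VALUES ('2022-01-01', '2023-01-01', '2024-01-01');

CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear ALL TO ([PRIMARY]);

-- The partitioning column must be part of the clustered key
CREATE TABLE dbo.Orders
(
    OrderID   INT           NOT NULL,
    OrderDate DATE          NOT NULL,
    Amount    DECIMAL(18,2) NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderID, OrderDate)
) ON psOrderYear (OrderDate);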
Okay, to improve index performance we first need to identify whether the index actually addresses the problem or not. We should not over-index, and we also should not do the bare minimum of indexing. We need to identify which WHERE conditions are used frequently; the most-used WHERE columns can be composed into the index key and then used. Also, indexing certain columns will be tough, but if we still require it, we can use it.
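A minimal sketch, with assumed table and column names, of a composite nonclustered index built from the frequently used WHERE columns:

-- Hypothetical composite index on the columns most often filtered on,
-- with a covering column included to avoid key lookups
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
ON dbo.Orders (CustomerID, OrderDate)
INCLUDE (TotalAmount);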
Complex queries are like joining multiple tables. To join them efficiently, we need to identify the primary table and then use join conditions to join the detail tables. The join type must be correct, LEFT JOIN or RIGHT JOIN or whatever join is appropriate, and the ON condition must be the correct one, following the primary key and foreign key relationships; that must be perfect. We should not use functions in the ON conditions or WHERE conditions, because that will reduce the performance of the query; we are already using multiple tables, so the query performance will be slow. What we can do is avoid functions in the join conditions and WHERE conditions; instead we can compute the data first and then join. If we want to use a function, we can use it on the SELECT columns while selecting, which mostly will not affect the performance. And if the data load is very high, we can split the query: if we have, for example, six tables, we can separate them into two groups of three, say the primary tables and the detail tables, populate each group into a temporary table, and then join the temporary tables. The data read will be lower because the three tables with very high data volume and the three tables with metadata or primary detail are handled separately; we split the query into two, populate the two temporary tables, and then select from them, which improves the performance of the query.
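A minimal sketch of that splitting approach; all table and column names here are assumptions:

-- Transaction side: the large tables, filtered early to reduce reads
SELECT o.OrderID, o.CustomerID, i.ProductID, i.Quantity, i.LineAmount
INTO #OrderDetail
FROM dbo.Orders o
JOIN dbo.OrderItems i ON i.OrderID = o.OrderID
WHERE o.OrderDate >= '2024-01-01';

-- Reference side: the smaller lookup tables
SELECT p.ProductID, p.ProductName, c.CategoryName
INTO #ProductInfo
FROM dbo.Products p
JOIN dbo.Categories c ON c.CategoryID = p.CategoryID;

-- Final join between the two intermediate results
SELECT d.OrderID, d.Quantity, d.LineAmount, r.ProductName, r.CategoryName
FROM #OrderDetail d
JOIN #ProductInfo r ON r.ProductID = d.ProductID;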
Actually, in the WHILE statement, SELECT COUNT(*) FROM users WHERE is_active = 1 is compared with greater than 0, so this WHILE loop executes every time: we select the count of the rows and then delete the users based on the WHERE condition. No minimum and maximum value has been mentioned; instead of just 0, we can give some value. For the count of the records, we should select the appropriate column into a local variable, assign it, and then use that. Also, in the DELETE statement the WHERE condition identifying which rows are to be deleted is missing; we need to identify that.
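A sketch of how such a loop might be corrected, under the assumption that the intent is to delete the flagged users in batches; the table name, flag column, and batch size are assumptions, since the original query is not shown here:

-- Hypothetical corrected loop: delete flagged users in fixed-size batches
DECLARE @BatchSize INT = 1000;

WHILE EXISTS (SELECT 1 FROM dbo.users WHERE is_active = 1)
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.users
    WHERE is_active = 1;   -- explicit WHERE so only the intended rows are removed
END;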
Actually, the price_category column used here in the WHERE condition does not exist in the products table; it is a derived column that has been defined in the SELECT list only. So WHERE price_category = 'Expensive' will not work; that line will give an error. Rather than writing it that way, we need to write a subquery: inside the subquery we select the ID and the other listed column names, along with the derived price_category, from products, and then from that subquery we select all the columns listed here where price_category = 'Expensive'.
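A minimal sketch of that rewrite; the column names and the CASE expression defining price_category are assumptions:

-- Hypothetical fix: define the derived column in a subquery, then filter on it
SELECT p.id, p.name, p.price, p.price_category
FROM (
    SELECT id, name, price,
           CASE WHEN price > 1000 THEN 'Expensive' ELSE 'Regular' END AS price_category
    FROM dbo.products
) AS p
WHERE p.price_category = 'Expensive';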
To avoid data concurrency issues in a high-transaction environment, we can use NOLOCK in the SELECT statements used in those transactional stored procedures. Also, if multiple inserts and updates are happening, they must be within a BEGIN TRAN and ROLLBACK; if anything fails, we can roll back the transaction. That is the approach we need to use. Data concurrency issues happen when the BEGIN TRAN and ROLLBACK handling is not done properly; if we manage that properly, these data concurrency errors will not occur.
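A minimal sketch of that transaction handling; the table, columns, and amounts are assumptions:

-- Hypothetical sketch: related writes wrapped in one transaction with error handling
BEGIN TRY
    BEGIN TRAN;

    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;

    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN;   -- undo all the writes if any statement fails
    THROW;               -- re-raise the original error
END CATCH;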
Yes, we can have Analysis Services on the database if we want analytical data, like fact tables; we need to denormalize the tables, have a fact table and dimension tables, and analyze the data. For example, if we need analytical data for a year, a particular period, a particular transaction, or a particular dimension, then we can use SQL Server Analysis Services to implement this; it is suited to deeper analysis of the data in the database. We can have a separate database to populate the data into and move it there, loading it separately over a particular period of time.
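A minimal sketch of the kind of denormalized star schema that could feed such an analysis model; the table and column names are assumptions:

-- Hypothetical dimension table
CREATE TABLE dbo.DimDate
(
    DateKey  INT PRIMARY KEY,   -- e.g. 20240131
    [Year]   INT NOT NULL,
    [Month]  INT NOT NULL
);

-- Hypothetical fact table referencing the dimension
CREATE TABLE dbo.FactSales
(
    SalesKey   BIGINT IDENTITY(1,1) PRIMARY KEY,
    DateKey    INT NOT NULL REFERENCES dbo.DimDate (DateKey),
    ProductKey INT NOT NULL,
    Amount     DECIMAL(18,2) NOT NULL
);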
Documenting means we need to maintain the complete code: if we are planning to release a particular piece of code, we need to have all the scripts maintained in a single document. In that document the changes must follow an order: first the data type creations or alterations, then the table creations and alterations, then the function creations and alterations, then the triggers, and then the procedures; this must be the order. Also, if we are going to ship an upgrade of the code, we need recovery for it; that is, if we want to roll back the complete script, we should have the option to roll back the changes. So a rollback script is a must while releasing the code. We can maintain the code in Jira or whatever it may be; we have a source tree and a lot of options to maintain the code, but we still need a rollback script for each and every release script; that is mandatory.
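A minimal skeleton of a release script in that order, with a matching rollback script; the specific object changes are assumptions:

-- Hypothetical release script, ordered: data types, tables, functions, triggers, procedures
-- 1. Data types: (none in this release)
-- 2. Tables
ALTER TABLE dbo.Orders ADD DeliveryNote NVARCHAR(200) NULL;
GO
-- 3. Functions / 4. Triggers / 5. Procedures: (none in this release)

-- Hypothetical rollback script maintained alongside the release
ALTER TABLE dbo.Orders DROP COLUMN DeliveryNote;
GO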