I have over 7 years of experience in the Software Development Life Cycle (SDLC), application management, architecture design, end-to-end data management, and other technical areas using C#, MS SQL, ASP.NET MVC, Angular, Azure DevOps, HTML, JavaScript, and Python, among other technologies. I am adept at using .NET technologies and Angular to develop and implement software applications. I have also successfully handled various projects, namely the Orchestration Engine, the Computerized Vehicle Registration Application (FL and GA states), the QCI Application Enhancement, Chatbot Development, SCM Application Development, the BYOD Project, Product Support and Development, and Pack Xprez.
Senior Software Engineer, ZS Associates
Senior Member Technical, CDK Global
Senior Software Engineer, Infosys

Skills: C#, MVC, SQL Server, Postgres, MongoDB, AWS, Azure DevOps, HTML5, CSS3, Python
Tools: AWS, Azure, Power BI, UiPath, VS Code, Git, Azure DevOps, Microsoft Azure
Yes. I'm Krishna Kumar, and I have 6 years of experience, mostly in .NET. My skills include C#, ASP.NET MVC, .NET Core, and Entity Framework. Along with that I have UI experience with HTML and Angular, and from the cloud perspective I have worked with AWS and Azure. That's a brief introduction about myself.
In .NET Core, instead of creating objects ourselves each time, we can inject the dependency into the controller so we don't have to new up the object in every method. The advantage is that we can have different types of database connectors injected rather than constructing them in each method. For example, if we want Microsoft SQL Server as the database, we inject the SQL Server connector through the constructor; if we want MySQL or Postgres instead, we register and inject their respective connectors. Based on the type injected into the constructor, the code calls the respective methods and uses the corresponding connection string.
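As a rough sketch of what I mean (the IDbConnector and SqlServerConnector names here are hypothetical, not from an actual project), constructor injection in ASP.NET Core could look like this:

using System.Data;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;

// Hypothetical abstraction over the database connector.
public interface IDbConnector
{
    IDbConnection CreateConnection();
}

// SQL Server implementation; a MySQL or Postgres connector could
// implement the same interface and be registered instead.
public class SqlServerConnector : IDbConnector
{
    private readonly string _connectionString;
    public SqlServerConnector(string connectionString) => _connectionString = connectionString;
    public IDbConnection CreateConnection() => new SqlConnection(_connectionString);
}

// The controller receives the connector through its constructor instead of
// newing one up in every action method.
[ApiController]
public class EmployeesController : ControllerBase
{
    private readonly IDbConnector _connector;
    public EmployeesController(IDbConnector connector) => _connector = connector;
}

// Registration in Program.cs decides which connector gets injected:
// builder.Services.AddScoped<IDbConnector>(
//     _ => new SqlServerConnector(builder.Configuration.GetConnectionString("Default")));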
Bootstrap is mainly used for UI elements: it gives us ready-made styling, and we can also apply dynamic styling through its classes. There are several advantages to using Bootstrap, especially when combined with Ajax, jQuery, or other JavaScript libraries for interactive behaviour.
CI/CD is nothing but continuous integration and continuous deployment. When we work with a .NET Core application, we need to build the application into DLLs before it can be used. Locally we can build it and the IDE takes care of running the application, but when we want to deploy or host it in IIS, we need to make sure it builds and deploys successfully. To automate all of this we use CI/CD. The basic implementation is: first we create a build pipeline, where we can enforce code quality by introducing SonarQube or similar tools during the build. Once the build is successful we get the artifacts, and we deploy those using release pipelines or continuous deployment features. Using Git as the source, we build the code and create the artifacts, and the release pipeline, whether in Azure DevOps or TeamCity, deploys them to the respective locations so the application can run. For database schema changes there are multiple options: the changes can be applied via scripts executed through a database changelog tool, or via code if it is a model-first approach in .NET. If the changes are script-based or database-first, a database changelog tool is usually the way to deploy the schema changes.
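For the code-based, model-first approach I mentioned, I'm thinking of something like an Entity Framework Core migration. A rough sketch (the AddSalaryColumn name and Employees table are placeholders of mine, applied in the pipeline with dotnet ef database update):

using Microsoft.EntityFrameworkCore.Migrations;

// Hypothetical EF Core migration representing a schema change deployed via code.
public partial class AddSalaryColumn : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<decimal>(
            name: "Salary",
            table: "Employees",
            nullable: false,
            defaultValue: 0m);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropColumn(name: "Salary", table: "Employees");
    }
}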
In .NET Core, we can handle exceptions using try/catch blocks. To avoid exposing sensitive information, we can wrap specific exceptions in our own custom exception types and throw a common exception message, minimizing the system details that are shown and returning a generic error instead. This can be done by defining custom exceptions and a common handler so that sensitive database information is never exposed. Instead of returning the entire stack trace and its sensitive details to the end user, we show only a generic exception message.
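A minimal sketch of that idea, assuming the ASP.NET Core minimal hosting model, where the real exception stays on the server and only a generic message goes back to the client:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Global handler: the framework logs the real exception; the client only
// ever sees a generic message, never stack traces or connection details.
app.UseExceptionHandler(errorApp =>
{
    errorApp.Run(async context =>
    {
        context.Response.StatusCode = StatusCodes.Status500InternalServerError;
        await context.Response.WriteAsJsonAsync(new { error = "An unexpected error occurred." });
    });
});

app.Run();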
When migrating a .NET Framework application to .NET Core, the major challenge is the legacy code: it might be written in VB.NET, or if it is C#, it might be using Entity Framework. A legacy ASP.NET application keeps all its settings in Web.config and starts from Global.asax (Application_Start). To migrate to .NET Core, we have to move that startup code into Program.cs, or put it in separate startup files that Program.cs calls, because in .NET Core the entry point is Program.cs. For the data migration from MySQL to Microsoft SQL Server, there are a couple of tools that can migrate from one database engine to another; if not, we can export the data from MySQL as a SQL file and import it into Microsoft SQL Server so that the schema remains the same and the data is imported successfully.
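To illustrate the startup difference, a bare-bones .NET Core Program.cs under the minimal hosting model might look like this (the service registration and route here are generic placeholders, not the legacy application's real ones):

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

// Program.cs is now the single entry point; what Web.config and
// Global.asax/Application_Start used to do moves here and into appsettings.json.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllersWithViews();

var app = builder.Build();
app.MapDefaultControllerRoute();
app.Run();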
In the SQL script, we are trying to fetch the name and the maximum salary from the employees table while grouping by department, but it would throw an error because we are selecting the name after doing a GROUP BY; once we group by department, individual names are no longer available. We can correct it by selecting the department and the maximum salary instead, i.e. SELECT department, MAX(salary) AS max_salary FROM employees GROUP BY department HAVING COUNT(*) > 5.
Here we have a numbers list that contains duplicates. The code builds a distinct version of it and converts that to a list, but instead of returning the distinct list it returns the original numbers, hence the bug. The fix is simply to return the distinct list, which gives us the distinct set of numbers.
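A rough reconstruction of the snippet being described (the GetDistinct method name is mine, not from the original code):

using System.Collections.Generic;
using System.Linq;

public static class NumberUtils
{
    public static List<int> GetDistinct(List<int> numbers)
    {
        var distinctNumbers = numbers.Distinct().ToList();
        return distinctNumbers; // fix: the buggy version returned `numbers` here
    }
}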
I'm not sure about this one, SignalR and React.
When interacting with web APIs, we have to make sure that security headers such as CSP are configured. For additional security, APIs that are meant for specific roles or specific users can require authorization, which we add through authorization filters. We should also make sure, when user credentials are passed, that the request body is coming from a valid user carrying a valid JWT token.
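As an example of the authorization filter and security header I mentioned (the controller name, the Admin role, and the CSP value are made up for illustration, and JWT bearer authentication is assumed to be configured separately):

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
[Authorize(Roles = "Admin")] // only requests carrying a valid JWT with the Admin role get through
public class ReportsController : ControllerBase
{
    [HttpGet]
    public IActionResult Get() => Ok(new { status = "authorized" });
}

// A Content-Security-Policy header can be added to every response via middleware:
// app.Use(async (context, next) =>
// {
//     context.Response.Headers["Content-Security-Policy"] = "default-src 'self'";
//     await next();
// });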
For setting up a DevOps workflow for a .NET Core project, both AWS and Azure provide CI/CD tooling, and the process is the same either way. First we create a build pipeline, then a release pipeline. Within the build pipeline we can add tools like SonarLint or SonarQube, which check code and build quality. We can also add gates, so that deployments or branch-specific builds only run when certain conditions are met. When a PR is created, we can configure whether the build triggers automatically or only after the merge; those gates can be set up via AWS or via Bitbucket so the build executes only after the changes are merged. Testing can be done with the .NET tests we have, unit or integration tests, which run within the build pipeline itself. Once the test suite passes, the build is marked successful and can be deployed via the release pipelines.
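As an example of the kind of unit test that dotnet test would run inside the build pipeline (a trivial xUnit test I'm using purely for illustration):

using System.Collections.Generic;
using System.Linq;
using Xunit;

public class DistinctNumbersTests
{
    [Fact]
    public void Distinct_RemovesDuplicates()
    {
        // Arrange: a list with duplicates.
        var numbers = new List<int> { 1, 1, 2, 3, 3 };

        // Act: produce the distinct list.
        var distinct = numbers.Distinct().ToList();

        // Assert: duplicates are gone and order is preserved.
        Assert.Equal(new List<int> { 1, 2, 3 }, distinct);
    }
}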