
Software Engineer II
Walmart Global Tech India

SharePoint Developer
Diametriks Consulting Pvt. Ltd
React
Node.js
ASP.NET
Power BI
SharePoint
Alteryx
Tableau Prep
Git
GitHub
Docker
Jenkins
Kubernetes
SonarQube
BigQuery
SSIS
SSRS
JSOM
Yeah, sure. I have around eight years of total experience as a full-stack developer, working with ASP.NET, C#, React.js, and Express.js, with SQL Server on the backend. And I have worked on .NET Core as well, for around four years.
So to ensure the new feature works safely in the .NET Core application, we'd use Docker pipelines: we'd build a new Docker image with the enhanced feature and publish it. Then, for the integrity part and to make sure backward compatibility still holds, we'd host that image as an independent container, and integrate it with the rest of the application through the microservice architecture we've set up.
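The image build step above can be sketched as a standard multi-stage Dockerfile; the project and service names here are assumptions for illustration, not taken from the interview.

```dockerfile
# Multi-stage build for an ASP.NET Core service (MyService is an assumed name).
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY MyService.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime-only image keeps the shipped container small.
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyService.dll"]
```

Each feature branch can produce its own tagged image, so the new version runs side by side with the old one while backward compatibility is verified.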
The question is about keeping API endpoints scalable and maintainable under high load. In terms of the strategy we'd build for the API endpoints to remain scalable and maintainable in high-load scenarios: for this approach, we could make use of threads and tasks that we have in .NET. With their help, the API can hold and process requests in parallel instead of blocking. And between thread and task, Task is preferred over Thread, because for large-scale applications or large-scale processing, tasks run on the managed thread pool and perform better compared to manually created threads.
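A minimal sketch of the Task-over-Thread point: the work items below run concurrently on the shared thread pool via `Task.WhenAll`, rather than one OS thread per item (the method and its workload are illustrative).

```csharp
// Sketch: processing independent work items in parallel with Tasks.
using System;
using System.Linq;
using System.Threading.Tasks;

class Program
{
    public static async Task<int> ProcessOrderAsync(int orderId)
    {
        // Simulated I/O-bound work (e.g. a database or HTTP call).
        await Task.Delay(10);
        return orderId * 2;
    }

    static async Task Main()
    {
        // 100 tasks do NOT mean 100 OS threads, unlike `new Thread(...)` per item:
        // the thread pool multiplexes them.
        var tasks = Enumerable.Range(1, 100).Select(ProcessOrderAsync);
        int[] results = await Task.WhenAll(tasks);
        Console.WriteLine(results.Sum()); // prints 10100
    }
}
```

With raw threads, the same fan-out would pay a thread-creation and context-switch cost per item, which is the scalability drawback the answer refers to.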
What could be a reason to use the singleton design pattern in .NET Core? A reason to use the singleton design pattern in a .NET Core application would be based on the application's size: if the data a component handles is fairly small, and we are not looking to scale it out into many components, a single shared instance is enough. With a multiton, by contrast, we have the drawback of managing many instances, and as features grow we can end up with ten or more instances to track, which is difficult to manage. So for a small-scale application, it is a good fit to go with the singleton design pattern.
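A common thread-safe way to express this in C# is `Lazy<T>`; the class name and its contents here are illustrative, not from the interview.

```csharp
// Sketch of a thread-safe singleton using Lazy<T>.
using System;

public sealed class AppConfigCache
{
    // Lazy<T> guarantees the instance is created exactly once,
    // even under concurrent first access.
    private static readonly Lazy<AppConfigCache> instance =
        new Lazy<AppConfigCache>(() => new AppConfigCache());

    public static AppConfigCache Instance => instance.Value;

    private AppConfigCache() { } // no outside construction

    public string Get(string key) => $"value-for-{key}";
}
```

In ASP.NET Core specifically, you would more often register the service with `services.AddSingleton<AppConfigCache>()` and let dependency injection manage the lifetime.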
How would you debug the issue that arises when you try to update an existing item in the repository? Okay. So the first approach would be implementing a try-catch block, so that if any error comes up from the background, we get the exact error reported into the exception handler. The other approach: as I can see here, we are managing a variable named existingItem. Whatever item we try to update, if it already exists in the list, existingItem will hold its value. So based on a comparison, whether existingItem is null or has a value, we can put custom messages in place, such as "item does not exist", while any syntax or data-type error would surface in the catch block, in the exception handler.
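A small sketch of that debugging approach, with the `existingItem` null check and the try-catch fallback; the `Item` and repository types are illustrative stand-ins.

```csharp
// Sketch: null-check the looked-up item, surface real errors via try/catch.
using System;
using System.Collections.Generic;
using System.Linq;

public class Item { public int Id; public string Name; }

public class ItemRepository
{
    private readonly List<Item> items = new List<Item>();

    public void Add(Item item) => items.Add(item);

    public string Update(int id, string newName)
    {
        try
        {
            var existingItem = items.FirstOrDefault(i => i.Id == id);
            if (existingItem == null)
                return $"item {id} does not exist"; // custom message, no crash
            existingItem.Name = newName;
            return "updated";
        }
        catch (Exception ex)
        {
            // Unexpected errors (type issues, etc.) land here with the exact message.
            return $"error: {ex.Message}";
        }
    }
}
```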
So in terms of ensuring thread safety, we could go with asynchronous processing where the threads run in parallel, but any shared state they touch is synchronized, so two threads can't corrupt it at the same time. And we'd keep authentication and authorization in place: for each operation that runs through the threads, we could re-verify or re-apply the authorization before it executes.
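The synchronization part can be sketched with a plain `lock` around shared state; the counter is a minimal illustration, not the interview's actual workload.

```csharp
// Sketch: protecting shared state with a lock so parallel work stays thread-safe.
using System;
using System.Threading.Tasks;

public class SafeCounter
{
    private readonly object gate = new object();
    private int count;

    public void Increment()
    {
        lock (gate) { count++; } // only one thread mutates at a time
    }

    public int Count { get { lock (gate) { return count; } } }
}

class Program
{
    static void Main()
    {
        var counter = new SafeCounter();
        Parallel.For(0, 10000, _ => counter.Increment());
        Console.WriteLine(counter.Count); // 10000, no lost updates
    }
}
```

Without the lock, the read-increment-write of `count++` interleaves across threads and updates get lost.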
The question: a critical section of the application has been identified as a performance bottleneck; explain how you would find and fix the problem. So in terms of the .NET Core application part that has been identified as a performance bottleneck, for large-scale applications we could deploy that Docker image and do a performance analysis: which set of APIs we are building here, and which backend each of those APIs is hitting. Also, on the backend side, what sort of query optimizations we can do to load the data faster. We could also run a performance analyzer, which will straightforwardly tell us which part of the API call, or which part of this .NET Core application, is taking longer. Based on that, we could work on optimizing the backend queries and the API responses. And if the situation permits, we could go for asynchronous operations, implementing threads and tasks to handle the processing in parallel.
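A crude version of "which part is taking longer" can be sketched with `Stopwatch` timings around the suspect sections; a real profiler gives the same picture automatically. The two methods below are stand-ins, not the application's real code.

```csharp
// Sketch: narrowing a bottleneck with Stopwatch timings per section.
using System;
using System.Diagnostics;
using System.Threading;

class Program
{
    public static void QueryBackend() => Thread.Sleep(120); // stand-in for the slow DB call
    public static void ShapeResponse() => Thread.Sleep(5);  // stand-in for cheap work

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        QueryBackend();
        Console.WriteLine($"query: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        ShapeResponse();
        Console.WriteLine($"shaping: {sw.ElapsedMilliseconds} ms");
        // The larger number tells us which section to optimize first.
    }
}
```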
The API shows high latency, especially under peak load; how would you go about diagnosing and fixing the problem? So in terms of high latency for the REST API, we could start by checking server usage, and which part of the server is saturating most frequently. Based on that, we'd check the queries being fetched and the backend load optimization that needs to be done, and also the resources allocated to the backend server and the hosting server; we could look to enhance those resources. Alongside that, we'd look at the performance side: how the API is being called, and what sort of operations we are doing before calling it. We could also narrow down which part is taking longer, whether it is the GET section or the POST section. Based on that, we could debug it and see how much we can optimize the process.
So in terms of the advantages we could have using middleware in .NET Core: middleware lets us split a single application's request pipeline into separate stages, so cross-cutting concerns like logging, error handling, and request throttling live in one place. By using middleware, we can also introduce new or enhanced features more frequently, without touching the individual endpoints. Yeah.
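A minimal sketch of such a pipeline stage, as an ASP.NET Core inline middleware that times every request; the header name is an assumption, and the snippet assumes the standard ASP.NET Core hosting packages.

```csharp
// Sketch: a cross-cutting concern added once in the pipeline, no endpoint changes.
using System.Diagnostics;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var app = WebApplication.CreateBuilder(args).Build();

app.Use(async (context, next) =>
{
    var sw = Stopwatch.StartNew();
    await next(); // run the rest of the pipeline
    context.Response.Headers["X-Elapsed-Ms"] = sw.ElapsedMilliseconds.ToString();
});

app.MapGet("/", () => "hello");
app.Run();
```

Every endpoint behind this middleware gets the timing header for free, which is the "separate sections for a single application" advantage in concrete form.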
How would you manage state in a distributed .NET Core application? So in terms of managing state across a set of distributed application instances, we could go ahead with session management. We'd enable sessions in the application, back them with a distributed store so that every instance sees the same session data, and cross-utilize those sessions from each component of our application.
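One way to wire that up, sketched under assumptions: sessions backed by a Redis distributed cache, so all instances share state. The connection string is a placeholder, and `AddStackExchangeRedisCache` comes from the `Microsoft.Extensions.Caching.StackExchangeRedis` package.

```csharp
// Sketch: session state on top of a distributed cache shared by all instances.
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddStackExchangeRedisCache(o =>
    o.Configuration = "localhost:6379"); // assumed Redis endpoint
builder.Services.AddSession();           // session rides on the distributed cache

var app = builder.Build();
app.UseSession();
app.Run();
```

With an in-memory cache instead, each instance would keep its own sessions, which is exactly what breaks in a distributed deployment.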
So in terms of a scenario where we could go for asynchronous methods in C# to improve API performance: we'd go that way when we have a very large set of data. Suppose we have around 50,000 records. With async, we could implement something like lazy loading, where we first load one batch of records, and only on some event or action, based on the requirement, load the remaining parts through the API. Or, if in a single batch we are getting 10,000 records and we need 50,000 in total, we could create five parallel tasks, each independently fetching 10,000 records, and collectively they give out those 50,000 records in a single go. That would improve the API's performance.
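The five-parallel-batches idea can be sketched like this; `FetchBatchAsync` is a stand-in for a real data-access call, not the interview's actual code.

```csharp
// Sketch: fetching 50,000 records as five concurrent batches of 10,000.
using System;
using System.Linq;
using System.Threading.Tasks;

public static class BatchFetcher
{
    public static async Task<int[]> FetchBatchAsync(int batch, int size)
    {
        await Task.Delay(10); // simulated I/O latency per batch
        return Enumerable.Range(batch * size, size).ToArray();
    }

    public static async Task<int[]> FetchAllAsync(int batches, int size)
    {
        // All batches run concurrently instead of back to back,
        // so total latency is roughly one batch, not five.
        var tasks = Enumerable.Range(0, batches)
                              .Select(b => FetchBatchAsync(b, size));
        var results = await Task.WhenAll(tasks);
        return results.SelectMany(r => r).ToArray();
    }
}

class Program
{
    static async Task Main()
    {
        var all = await BatchFetcher.FetchAllAsync(5, 10000);
        Console.WriteLine(all.Length); // prints 50000
    }
}
```

`Task.WhenAll` preserves the task order, so the combined array comes back in batch order even though the fetches interleave.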
So in terms of creating a high-availability REST API: the basic mechanism we'd implement would be authentication and then authorization. That way, the API has limited access for end users, and only authorized users will be consuming those resources. Apart from that, we could implement tasks or threads as per the need and the load from the backend, and we could also optimize the queries on the backend, which will help in fetching out the data faster.
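The authentication-then-authorization ordering can be sketched on a minimal API like this; the JWT setup details are omitted, and `AddJwtBearer` assumes the `Microsoft.AspNetCore.Authentication.JwtBearer` package.

```csharp
// Sketch: authenticate first (who is calling?), then authorize (are they allowed?).
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddAuthentication("Bearer").AddJwtBearer();
builder.Services.AddAuthorization();

var app = builder.Build();
app.UseAuthentication(); // establishes the caller's identity
app.UseAuthorization();  // enforces access policy on that identity

app.MapGet("/orders", () => "only for authorized users")
   .RequireAuthorization();

app.Run();
```

Unauthenticated callers get a 401 and unauthorized ones a 403 before the endpoint runs, which is the "limited access to end users" the answer describes.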