Motivated and passionate software engineer with 8+ years of experience developing highly scalable and secure software systems using a variety of technologies such as Java, Spring Boot, Python, microservices, Jenkins, REST APIs, GraphQL APIs, and PostgreSQL. Certified Disciplined Agile Scrum Master and a security champion.
Senior Member Technical Staff, Salesforce
Computer Scientist - 1, Adobe
Senior Data Engineer, Visa
Member Technical Staff, First American India
Software Developer, SAP LABS India
Scrum
Spark SQL
Apache Spark
C++
Java
C#
HTML5
JavaScript
ASP.NET
jQuery
Python
MySQL
Linux Admin
Could you help me understand more about your background by giving a brief introduction of yourself? Hi, my name is Kush Sharma, and I am from Jammu, Rajasthan. I have about 7.5 years of experience in developing enterprise-grade software and highly secure, scalable distributed systems. I have worked with a variety of technologies and tech stacks: Java, Spring Boot, Python, CI/CD using Jenkins, data pipelines, machine learning, and various algorithms and data structures. Currently I am working with Adobe as a Computer Scientist. I am part of the Digital Experience group, where we are developing a service called Look-Alike Modeling. This service is essentially a recommendation system: we capture the digital footprints or digital events happening across the web and then suggest look-alike segments to the users. I am an individual contributor, and I also manage a team of three juniors. It has been two years with Adobe now. I am also a certified Disciplined Agile Scrum Master, and I take the lead responsibility for running the scrum process and all the agile-related meetings.

Before Adobe, I worked with companies like SAP and Visa. At SAP I worked for three years on the SAP Cloud Platform team, where we built two services from scratch: Cloud Platform Integration and Integration Suite. I was fortunate to be part of a team where projects were discussed and groomed and sound architectural decisions were taken, and I was part of those discussions. The tech stack there was much the same: Java, Spring Boot, microservices, RESTful APIs, GraphQL APIs, PostgreSQL, Azure, AWS, and private clouds like Alibaba Cloud. I had end-to-end ownership of complete systems: grooming requirements with the product owners and technical leads or architects, capturing the requirements, implementing them, writing unit tests, and defining the test pyramid. The first layer was unit tests, the second layer UI-level tests with Selenium, then end-to-end tests, component-level tests, and mocked-API tests for microservices, followed by deployment. Finally, for customer issues we used a ServiceNow portal and ran a regular on-call schedule: each sprint had two people assigned to look at everything coming in from the user's side, be it a new request or a production issue, which we would actively work to resolve in collaboration with the customers as soon as possible. So yes, I think I have a good understanding of driving design discussions and design decisions.
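As a rough illustration of the unit-test layer of that pyramid, a minimal JUnit 5 sketch could look like the following; the PriceCalculator class and its numbers are made up purely for this example.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {
    // The class under test is a made-up example; the point is that the lowest
    // layer of the pyramid exercises one unit of logic in isolation, with no
    // UI, network, or database involved.
    static class PriceCalculator {
        double withTax(double net, double taxRate) {
            return net * (1 + taxRate);
        }
    }

    @Test
    void addsTaxToNetPrice() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(110.0, calc.withTax(100.0, 0.10), 0.0001);
    }
}
```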
How would you use Azure DevOps to automate deployment pipelines and increase the release frequency for a .NET Core application? I have not personally worked with Azure DevOps pipelines, but I have worked with Jenkins. There we created a lot of pipeline jobs for various purposes. For example, a pull-request job is triggered automatically whenever you create a pull request and performs the actions defined in the job. Similarly, there is a deployment job when you want to deploy your code to a particular environment. I have configured Jenkins pipelines and Spinnaker pipelines, just not Azure DevOps personally, but the process should be similar to what I have already done in the past. Apart from that, we can also define branching strategies for the GitHub repository: which branches trigger pull-request jobs, and what actions those jobs should perform. They should also run tests and report code coverage; all of these things can be automated through CI/CD jobs.
Which technique would you use in C# to ensure your objects are thread safe while maximizing concurrency? I have worked with C# at my first employer, First American, where I spent two years on .NET Framework 4.5, so I am not very clear right now on the exact syntax or keywords used for thread safety there. But I have worked with parallelism and concurrency in my other roles with Java. There we achieved thread safety by running asynchronous jobs using ExecutorService, Callables, Futures, and Runnables, and even by just creating a new thread and running it, in which case we would have to manage the entire life cycle of that particular thread ourselves. In C# I don't remember exactly how it is done, but it should similarly be based on threads and synchronization primitives.
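As a sketch of the Java approach described above, the snippet below runs four workers through an ExecutorService and keeps a shared counter thread safe with AtomicInteger while the workers still run in parallel; the workload itself is invented for illustration.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadSafeCounterDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // AtomicInteger keeps the shared state thread safe without a coarse lock,
        // so all four workers can make progress concurrently.
        AtomicInteger counter = new AtomicInteger();

        Callable<Integer> worker = () -> {
            for (int i = 0; i < 1_000; i++) {
                counter.incrementAndGet();
            }
            return counter.get();
        };

        List<Future<Integer>> futures = pool.invokeAll(List.of(worker, worker, worker, worker));
        for (Future<Integer> f : futures) {
            f.get(); // block until each worker has finished
        }
        System.out.println("Final count: " + counter.get()); // always 4000
        pool.shutdown();
    }
}
```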
What is your strategy for addressing and mitigating technical debt in a SQL-driven application? Today most applications are database driven, be it a SQL or a NoSQL database, and almost no project is entirely free of small pieces of technical debt: when we push new features against tight deadlines, there is always a chance of slip-overs or shortcuts. In that case, recording the technical debt is the key factor, so that there is a constant reminder of the debt we are carrying. That can be captured through the issue-tracking process, for example as Jira items or in a similar tracker, and then those technical items can be taken up one by one, implementing the most efficient solution to upgrade the application so that it performs better and the debt does not linger. So that is what I feel: keep track of what is going wrong and then fix it one item at a time. Whether the application is SQL driven or NoSQL driven, I think the key lies in prioritizing those items.
How can you leverage Azure DevOps build and release pipelines to roll back a failed release? There is a mechanism in Azure DevOps to build your jobs, deploy, and then release your changes to a particular environment, and in case of any failure there should be a rollback feature available as well. I have not personally used Azure DevOps, so I am not exactly sure of the name of that particular mechanism, but there should be a way to roll the release back. Otherwise, there can be a separate pipeline that redeploys the last known-good change from the master branch if merging or releasing a new change into the environment fails.
A SQL query is running slow. What steps would you take to diagnose and optimize its performance? If a SQL query is running slow, EXPLAIN ANALYZE would be the first step, to figure out what is actually making it slow. Then, if the query has a WHERE clause or other filtering, adding proper indexes on those columns helps make it more efficient. If the query involves a lot of input/output round trips, some of those reads and writes can be reduced by moving logic into stored procedures. Beyond that, over a period of time, reindexing your tables and rebuilding your indexes helps a lot. And in some cases partitioning or sharding helps; for example, if there are billions of records and a column has only two or three distinct values, organizing the table so that a request hits only the relevant partitions will also improve performance.
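As a small sketch of the first two steps, assuming a hypothetical PostgreSQL table named orders with a customer_id filter column, a JDBC snippet could run the plan and then add the index:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SlowQueryDiagnosis {
    public static void main(String[] args) throws SQLException {
        // Connection details, table name, and column name are assumptions for this sketch.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/appdb", "app_user", "secret")) {

            // Step 1: EXPLAIN ANALYZE shows the real execution plan and timings.
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                         "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1)); // one line of the plan per row
                }
            }

            // Step 2: if the plan shows a sequential scan on the filter column,
            // an index on that column is usually the first optimization to try.
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer_id ON orders (customer_id)");
            }
        }
    }
}
```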
In an ASP.NET C# code snippet for an API that processes a payment, a developer is using a single class for multiple operations. Explain why this does not follow best practice in terms of SOLID principles. Okay, so first it is trying to get the payment details, then a validation result: it gets the payment details, validates them, and if they are valid it processes the payment. Firstly, this is a violation of the single responsibility principle: one class is doing many things. It is fetching the payment info, it is also validating the payment info, and then it is also processing the payment. These could be three different behaviors in three different classes, but here they are all done in one class. Secondly, there is a very generic try/catch block that does not explain much: we are just logging an error and then throwing a new exception, and we are not even preserving or printing the stack trace. So the second problem lies in the try/catch block, where everything is too generic and the stack trace of the original exception is lost.
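A rough Java sketch of the refactoring described above; every class and method name here is hypothetical, and the point is only the split of responsibilities and the exception handling, not a real payment API.

```java
// Hypothetical supporting types for the sketch.
class PaymentDetails {}
class ValidationResult { final boolean valid; ValidationResult(boolean v) { this.valid = v; } }
class PaymentProcessingException extends RuntimeException {
    PaymentProcessingException(String msg, Throwable cause) { super(msg, cause); }
}

class PaymentRepository {            // responsibility 1: fetch the payment data
    PaymentDetails getPaymentDetails(String id) { return new PaymentDetails(); }
}

class PaymentValidator {             // responsibility 2: apply the validation rules
    ValidationResult validate(PaymentDetails details) { return new ValidationResult(true); }
}

class PaymentProcessor {             // responsibility 3: execute the payment
    private final PaymentRepository repository = new PaymentRepository();
    private final PaymentValidator validator = new PaymentValidator();

    void process(String paymentId) {
        PaymentDetails details = repository.getPaymentDetails(paymentId);
        if (!validator.validate(details).valid) {
            throw new IllegalArgumentException("Payment " + paymentId + " failed validation");
        }
        try {
            // call the payment gateway here
        } catch (RuntimeException e) {
            // wrap and rethrow, keeping the original exception (and stack trace) as the cause
            throw new PaymentProcessingException("Could not process payment " + paymentId, e);
        }
    }
}
```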
During a code review, you notice that the following code snippet, which is meant to display a list of user roles, can throw a null reference exception. Can you explain why this is happening and what fix you would suggest? Okay, so it calls GetUserRoles, then checks that the user roles are not null and that the count is greater than zero, and loops over the roles for that user ID, and it is still throwing a null reference exception. Why would it do that? Firstly, the GetUserRoles method itself might return null or throw an exception, so that call should be wrapped in a try/catch and its result checked before anything else uses it. If that method does not throw, then it either returns the user roles or it does not. The rest of the code looks fine: the if condition handles the case where the user roles are not null and not empty, since it checks that the count is greater than zero, and only then displays them; otherwise it takes the other branch. So GetUserRoles is the culprit here, and we should guard that call.
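A small Java sketch of the guard suggested above; getUserRoles here is a stand-in for the real service call and deliberately returns null to show the handling.

```java
import java.util.Collections;
import java.util.List;

public class UserRoleReport {
    // Stand-in for the real service call; it may legitimately return null or throw.
    static List<String> getUserRoles(String userId) {
        return null;
    }

    static void displayUserRoles(String userId) {
        List<String> roles;
        try {
            roles = getUserRoles(userId);
        } catch (RuntimeException e) {
            System.err.println("Could not load roles for " + userId + ": " + e.getMessage());
            roles = null;
        }
        if (roles == null) {
            roles = Collections.emptyList(); // normalise null so the loop below can never throw
        }
        if (roles.isEmpty()) {
            System.out.println("No roles assigned to user " + userId);
            return;
        }
        for (String role : roles) {
            System.out.println(userId + " -> " + role);
        }
    }

    public static void main(String[] args) {
        displayUserRoles("u-123");
    }
}
```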
How would you optimize a .NET application that has to handle large volumes of data with complex transactions? Today, a large volume of data is not by itself a big problem; a lot of distributed systems and data platforms process such large volumes of data with complex transactions. Proper retry mechanisms should be in place, and any failures should be recorded. We should also choose a suitable data store: if the volumes are large, a decision should be made about whether to store this data in a NoSQL system or an RDBMS, and the frequency of reads and writes should also be taken into account. If both the data volume and the frequency are huge, large-volume processing systems like Storm or Flink can be considered.
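As a small illustration of the retry mechanism mentioned above, a generic retry helper with exponential backoff might look like this; the batch-write operation inside it is a made-up placeholder.

```java
import java.util.concurrent.Callable;

public class RetryingWriter {
    // Run a transactional operation up to maxAttempts times, backing off between tries.
    static <T> T withRetries(Callable<T> operation, int maxAttempts) throws Exception {
        long backoffMs = 200;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.call();
            } catch (Exception e) {
                last = e;                                   // record the failure
                System.err.println("Attempt " + attempt + " failed: " + e.getMessage());
                Thread.sleep(backoffMs);
                backoffMs *= 2;                             // exponential backoff before retrying
            }
        }
        throw last;                                         // give up after maxAttempts
    }

    public static void main(String[] args) throws Exception {
        int written = withRetries(() -> {
            // placeholder for one batch of records written inside a transaction
            return 1_000;
        }, 3);
        System.out.println("Wrote " + written + " records");
    }
}
```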
How would you use reflection if you have to do metadata extraction, and what is the potential impact on application performance? Metadata extraction: I am not sure exactly what is meant by metadata extraction here, but reflection classes are generally used to take more control of what we have at runtime, for example inspecting a class's structure. In general I would not recommend using reflection heavily, because reflective access is slower than direct access, but again, the specifics would depend on what kind of metadata needs to be extracted.
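As one assumed reading of "metadata extraction", the sketch below uses java.lang.reflect to read a class's field names and types at runtime; the Customer class is invented for the example.

```java
import java.lang.reflect.Field;

public class MetadataExtractor {
    // A plain class whose structure we inspect at runtime.
    static class Customer {
        private long id;
        private String name;
    }

    public static void main(String[] args) {
        // Reflection reads metadata about a class (its fields and their types)
        // without compile-time knowledge of them. The trade-off is that reflective
        // access is slower than direct access and bypasses some compile-time checks,
        // so it is best kept out of hot paths.
        for (Field field : Customer.class.getDeclaredFields()) {
            System.out.println(field.getName() + " : " + field.getType().getSimpleName());
        }
    }
}
```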
Asynchronous programming patterns are used when we want to do things efficiently. Instead of actions being performed one after another, if we want to run operations in parallel, where that is possible and the operations are not dependent on each other, asynchronous programming can be of great help. It makes the application more efficient and faster, and we can detect failures early and take the next course of action accordingly, instead of waiting for all the operations to complete and only then responding with a failure message.
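As a minimal Java sketch of running two independent operations in parallel rather than sequentially, using CompletableFuture; the "profile" and "orders" payloads are placeholders.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    public static void main(String[] args) {
        // Two independent operations started in parallel instead of one after the other.
        CompletableFuture<String> profile = CompletableFuture.supplyAsync(() -> "profile data");
        CompletableFuture<String> orders  = CompletableFuture.supplyAsync(() -> "order history");

        // Combine the results once both finish; a failure in either future
        // surfaces immediately and is handled here instead of after a long wait.
        String page = profile.thenCombine(orders, (p, o) -> p + " + " + o)
                             .exceptionally(e -> "fallback page: " + e.getMessage())
                             .join();

        System.out.println(page);
    }
}
```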