Ramesh Verma is a skilled Technical Lead with extensive experience in software engineering and project management. He has excelled at delivering timely releases, overseeing codebase migrations, and transitioning projects to Agile methodologies. His contributions as a Senior Software Engineer included implementing time-bound setups for desktop applications and collaborating on a Leave Management System. His proficiency extends to ASP.NET Web API, Blazor, and Unity frameworks, and he has contributed to open-source projects.
Technical Lead
Nirvana Solutions - Technical Lead
Nirvana Solutions India Pvt. Ltd. - .NET Software Developer (Remote)
LemonEdge - Assistant System Engineer
Tata Consultancy Services - Software Engineer
Nirvana Solutions - Senior Software Engineer
Nirvana Solutions - Assistant Systems Engineer
Tata Consultancy Services - Senior Software Engineer
Nirvana Solutions India Pvt. Ltd. - Software Engineer
Nirvana Solutions India Pvt. Ltd.
FTP server
Git
REST API
Jira
Visual Studio Code
Microsoft Teams
Visual Studio
Visual Studio 2019
Microsoft SQL Server
SQL Server Management Studio 2022
Jenkins
Azure Pipelines
Azure DevOps Server
GitLab
GitHub
SVN
C#
.NET Core
AWS (Amazon Web Services)
Docker
Postman
JavaScript
Python
WinForms
WCF
Blazor
Dapper
Entity Framework
VS Code
Azure Boards
Kafka
HTML
CSS
JS
About The Role
We are rapidly growing our team to meet the needs of our expanding client base and to scale out our growth in the coming years. We have the technology, funding, experience, and capability to scale our business quickly.
We are actively looking for an experienced developer who can work within a small, focused team to deliver against the product architecture and its ongoing expansion. The successful candidate will have demonstrable experience in a high-growth, high-complexity environment where they have played a role in developing software product solutions, worked within development teams, and delivered through the full software lifecycle.
About You
What can we offer you?
Engagement Type:
Direct-hire on the Playroll India Pvt Ltd payroll on behalf of LemonEdge
Job Type: Permanent
Location: Remote
Working time: 1:30 PM to 10:30 PM IST
Interview Process - 3 Rounds
Round 1 - Assessment Test
Round 2 - Discussions with CTO
Round 3 - Meeting Team Members
Okay. So I have been doing .NET development for 7+ years. I graduated with a BTech in computer science and engineering in 2016. First I joined Tata Consultancy Services as an Assistant System Engineer and worked there for around 1 year; my responsibilities were mainly SQL and C related. There were two parts to the job: one was an MVC application on AWS that I had to manage, and the other was data warehousing, for which I used SQL. After 1 year I switched to a product-based company, Nirvana Solutions, where I started as a software developer working in C#, and I have been there for the past 6+ years. During these years I worked on their core enterprise module, a desktop application built in WPF and WinForms. The main domain is finance: the enterprise module is for hedge funds to manage their portfolios, order management system, general ledger, allocations, closing, all of that, and I have worked on all of these modules throughout my career. I have been promoted twice, once from Software Developer to Senior Software Engineer, and currently I am working as a Technical Lead, managing around 5 people. My day-to-day tasks revolve around a 3-week sprint: I do the initial analysis of the requirements, create the user stories and tasks, calculate the story points, and so on. After that there is development and basic dev testing of those requirements, and I also do code review for everyone working under me. In terms of skills, since I have worked in C# for 6+ years, that is my main skill set.
I have not worked that much on web applications, but I have worked on Web APIs. In terms of cloud, I have mostly worked on AWS, not much on Azure or Google Cloud. So, yeah, that is all about me.
Okay. So when it comes to optimizing any SQL query, the first step is to go through the query and find the actual bottleneck: is there a view or a join in it that is the root cause of the performance issue? Just applying standard optimizations blindly might not yield great results, because you will not be targeting the core problem. So when I have to optimize a SQL query, I first go through the entire query, check which part is taking the longest, and see how that part can be improved. For example, if there is a join to a very large table, instead of joining it directly you can stage the rows you need into a temp table. Or maybe a WHERE clause filters on a column multiple times but there is no index on it; you can add an index to the table. After that, I check how the execution plan for the query is being generated, because you might find further things to optimize for a better plan, which results in a more optimized query and better performance, whether the query is being used from a .NET application or anywhere else.
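A minimal T-SQL sketch of the two techniques mentioned in this answer; the table, column, and index names are hypothetical, and whether either helps depends on the actual execution plan:

```sql
-- Hypothetical schema: dbo.Trades is very large, dbo.Accounts is
-- filtered repeatedly on Region in WHERE clauses.

-- 1. Add an index on a column that WHERE clauses hit repeatedly:
CREATE NONCLUSTERED INDEX IX_Accounts_Region ON dbo.Accounts (Region);

-- 2. Stage only the needed rows into a temp table instead of
--    joining the full Trades table directly:
SELECT TradeId, AccountId, Amount
INTO #RecentTrades
FROM dbo.Trades
WHERE TradeDate >= DATEADD(DAY, -30, GETDATE());

SELECT a.AccountName, SUM(t.Amount) AS Total
FROM #RecentTrades t
JOIN dbo.Accounts a ON a.AccountId = t.AccountId
WHERE a.Region = 'EMEA'
GROUP BY a.AccountName;
```

In SQL Server Management Studio, "Include Actual Execution Plan" before and after a change like this shows whether the plan actually improved.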
Okay. So in my career I have not actually implemented a CI/CD pipeline through AWS; I have done it through Jenkins and Azure DevOps, but the steps cannot be that far off. The crux of implementing a CI/CD pipeline is to come down to the manual process first: if you had no automation, how would you do it by hand? You list down all of the steps, like a piece of documentation, for example copying data to a particular location or building a particular project. Then you go through those steps one by one and find a built-in task or an extension in your tool that performs each of them. All of these steps were available in Jenkins when I had to build that pipeline, and after we switched to Azure Pipelines, all of these steps were there too. If you cannot find a ready-made task for a step, batch files work really well as a fallback; you can perform a lot of things from there. For a .NET application, most of the common steps such as MSBuild, rebuild, or clean solution are fairly easily available in all of these pipeline tools. So if I had to implement a CI/CD pipeline for a .NET application in AWS, these would be the steps I would take; the crux is basically running MSBuild and generating the EXE, which should be fairly easy in any tool available.
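As a rough illustration of the "list the manual steps, then map each to a task" approach, a minimal Azure Pipelines definition for a .NET build might look like this; the solution name and trigger branch are hypothetical:

```yaml
# Minimal azure-pipelines.yml: restore, build, publish artifacts.
trigger:
  - main

pool:
  vmImage: 'windows-latest'

steps:
  - task: NuGetToolInstaller@1

  # Manual step "restore packages" mapped to a built-in task.
  - task: NuGetCommand@2
    inputs:
      restoreSolution: 'MyApp.sln'

  # Manual step "build the project" -> MSBuild via the VSBuild task.
  - task: VSBuild@1
    inputs:
      solution: 'MyApp.sln'
      configuration: 'Release'

  # Manual step "collect the output" -> publish the build artifacts.
  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: 'drop'
```

Jenkins or AWS CodePipeline would express the same checklist with different task syntax, which is the point of the answer: the step list is the portable part.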
So, basically, how Docker helps is that if your .NET application has dependencies, it provides a container that bundles all of them. Testers and developers often hit the classic problem where the application works fine on the developer's machine but not on the testing machine, and even after spending multiple hours on it, you find out that some particular software or tool was simply not installed on the testing machine. Docker minimizes these kinds of issues because you ship your application along with all of its dependencies. When handing the application to the QA team for testing, you can give them an isolated container with everything included. If they face any bug, you can just get a copy of that entire Docker container, run it on your own system, and know exactly what is happening, instead of trying to reconstruct their exact steps, debugging again and again, and sometimes facing a scenario where you cannot reproduce the issue even with QA's help. When you ship a .NET application inside a Docker container, you know that, at least environment-wise, nothing that should be available on the testing machine is missing. So whatever bug remains should be related to the code, and you should be able to reproduce it easily.
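A minimal multi-stage Dockerfile for a .NET application along these lines; the project name is hypothetical and the base-image tags assume .NET 8:

```dockerfile
# Build stage: compile and publish with the full SDK image.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyApp.csproj -c Release -o /app/publish

# Runtime stage: only the runtime plus the published output, so every
# dependency the app needs ships inside the image QA receives.
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

QA and developers then run the same image, which is what removes the "missing tool on the testing machine" class of bugs.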
Okay. So in .NET, a SQL transaction basically works the same as a transaction inside a SQL procedure. It is very good to use when, based on one update or insert, you also have to update a different table or database, because I want the earlier change to be rolled back as well in case the second one fails. For example, say there is an online ordering portal, kind of like Amazon. There are two parts: one records that a particular customer has ordered a product, and the second updates a separate store that tracks how much of that product we have in the warehouse. If the order is placed but the count is not updated, it will create an issue later on: you will be showing that you have the product, but it will be missing from your inventory. Or, to take a general-ledger example: there are multiple places where you have to update two things at the same time. If you record a purchase of 10 units of an equity, the cash balance has to be reduced in the same operation. If you don't use a SQL transaction with a rollback of the first change, there will be a discrepancy between the two entries and your final tally will not match up, which is very bad.
So in those cases you have to use a SQL transaction and implement a rollback in case the second update, or you can say insertion, fails, so that the first one is not committed either. That way the two sides always stay consistent, and you can do the whole transaction again later to match up.
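A minimal ADO.NET sketch of the ordering example above; the connection string, table, and column names are hypothetical:

```csharp
using Microsoft.Data.SqlClient;

public static class OrderService
{
    // Both statements commit together or not at all (hypothetical schema).
    public static void PlaceOrder(string connectionString,
                                  int customerId, int productId, int qty)
    {
        using var conn = new SqlConnection(connectionString);
        conn.Open();
        using var tx = conn.BeginTransaction();
        try
        {
            using (var order = new SqlCommand(
                "INSERT INTO Orders (CustomerId, ProductId, Qty) " +
                "VALUES (@c, @p, @q)", conn, tx))
            {
                order.Parameters.AddWithValue("@c", customerId);
                order.Parameters.AddWithValue("@p", productId);
                order.Parameters.AddWithValue("@q", qty);
                order.ExecuteNonQuery();
            }

            using (var stock = new SqlCommand(
                "UPDATE Inventory SET Count = Count - @q " +
                "WHERE ProductId = @p", conn, tx))
            {
                stock.Parameters.AddWithValue("@q", qty);
                stock.Parameters.AddWithValue("@p", productId);
                stock.ExecuteNonQuery();
            }

            tx.Commit();   // both changes become visible together
        }
        catch
        {
            tx.Rollback(); // undoes the order insert if the stock update fails
            throw;
        }
    }
}
```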
So if you want to run a .NET application on AWS with high availability and disaster recovery, there are multiple system-design concepts that can be used, most of them related to scalability. In terms of availability, the main one is to have redundant backup instances of the application hosted, so that if one of them goes down, you can shift traffic to another. Other than that, you should be creating database backups as well, so that in case anything happens to your primary database, you have a backup available to restore from and keep your application highly available. You also have to check where your customers are: if they are spread across different continents or countries, you should try to get servers in those locations as well, which improves the performance of your application for them. Keeping database backups on a separate continent means that a regional disaster does not impact your data. And even if you have an AWS-hosted .NET application, it is always a good idea to keep a local copy of your database and your application, so that even if there is an AWS outage, you can use your local servers on a temporary basis to keep the application available to your customers.
Okay, given the SQL code snippet: this is an UPDATE statement, UPDATE Employees SET Salary = ... WHERE EmployeeId = ..., followed by SELECT * FROM Employees WHERE EmployeeId = that same ID. The end goal of the statement is not mentioned, so I am not sure what issue would be faced when executing it; as long as all of the column names are fine and the employee ID is unique, we are just incrementing the salary by 5,000. The main thing I can see is SQL injection through the passed ID: if this query is being concatenated directly in your .NET application or somewhere else, there is a chance of SQL injection. If that is the case, I would definitely move to a stored procedure or something like that, to ensure no injection can be done on the actual query. One more issue I notice is that 5,000 is a hard-coded value. Depending on currency, one employee might be paid in USD and another in GBP; in those cases I think it would be better to pass in some kind of check for the current currency and conversion, so the salary can be updated accordingly.
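A minimal sketch of the parameterized version suggested above, with the raise amount passed in rather than hard-coded; the connection string and method shape are hypothetical, and the table and column names follow the snippet:

```csharp
using Microsoft.Data.SqlClient;

public static class SalaryUpdater
{
    // Parameterized update: the ID and amount are bound as parameters,
    // never concatenated into the SQL text, so injection through them
    // is not possible.
    public static void GiveRaise(string connectionString,
                                 int employeeId, decimal raise)
    {
        using var conn = new SqlConnection(connectionString);
        conn.Open();
        using var cmd = new SqlCommand(
            "UPDATE Employees SET Salary = Salary + @raise " +
            "WHERE EmployeeId = @id", conn);
        cmd.Parameters.AddWithValue("@raise", raise);
        cmd.Parameters.AddWithValue("@id", employeeId);
        cmd.ExecuteNonQuery();
    }
}
```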
Okay, so this is a very well-known problem. Whenever you lazily create a single instance, it is possible that two threads try to access it for the first time at once: both check whether the instance is null, both find that it is, and both go inside and create a new instance. One of those two instances gets lost, so it is not purely a singleton, because two instances were created even though only one is used. That is the problem usually faced with this type of singleton implementation. If you have to make it thread-safe without using .NET's Lazy infrastructure, the straightforward answer is double-checked locking. First you check whether the instance is null; if it is, you take a lock on a private object, and inside that lock you check again whether the instance is still null. The lock guarantees only one thread can create the instance, and the second check covers the race where both threads saw null: the second thread waits for the lock, and when it gets inside, it checks again. If the instance is still null inside the lock, that thread is the first one there and it creates the instance; the next thread to enter the lock checks again, but this time the instance will not be null, because the previous thread has already created it. So in any race condition, you can always be sure only one instance is created.
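A sketch of the double-checked locking just described; one detail worth adding beyond the answer is marking the field volatile, so the unsynchronized first check reads a fully published instance:

```csharp
public sealed class Singleton
{
    private static volatile Singleton _instance;
    private static readonly object _lock = new object();

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            if (_instance == null)           // first check: no lock taken
            {
                lock (_lock)                 // only one thread enters at a time
                {
                    if (_instance == null)   // second check: inside the lock
                    {
                        _instance = new Singleton();
                    }
                }
            }
            return _instance;
        }
    }
}
```

In modern C#, Lazy&lt;T&gt; with its default thread-safety mode achieves the same guarantee with less ceremony, which is why the question excludes it.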
Okay, we have a .NET application on AWS experiencing latency issues, and the question is which Docker configuration could potentially minimize latency. Sorry, I don't know how to minimize latency there; I will have to read up on that.
Okay, a real-time data analytics platform. When we are doing data analytics, the first step is always about creating a data warehouse with your historical data. So if we have to do data analytics for a .NET application, first I would create an ETL architecture inside AWS, where we extract the data from our main transactional DB, because we can't do the analytics directly on the transactional DB while it is being used in day-to-day operations. So first we set up an extraction service there. After that, we transform the data into the format needed for warehousing and analytics, say converting the daily data into combined or unioned data, so that you can do performance analysis or other kinds of analysis on it. Finally, when all of this conversion is done, we can use something like AWS Lambda to query the data into different formats, or use Elasticsearch for another kind of extraction over more data. And I'm not sure which visualization tool AWS provides, but there might be other visualization tools we can link to our AWS data to create a data analytics platform for our application.
Okay, so I have not worked a lot with Azure cloud, so I don't have any idea what Azure Cognitive Services is.