As a Senior Consultant, I am responsible for designing and implementing secure, scalable, and highly available cloud-based solutions using Microsoft Azure. My expertise in Azure infrastructure and services enables me to assess business needs, recommend the optimal cloud architecture, and work closely with stakeholders to develop technical solutions aligned with their goals. As an Azure Solution Architect, I have a deep understanding of Azure's capabilities, can recommend the right mix of services, and ensure solutions are built with security best practices. My ability to communicate complex technical concepts to diverse audiences helps ensure stakeholder alignment and successful delivery of cloud-based solutions that meet business needs.
Senior Solution Architect
Advantmed India LLP
Senior Consultant
Infosys
Technical Architect
Cygnet Infotech
Software Engineer
LinkSture Technologies
Senior Software Engineer
C-Metric Solutions
Technical Lead
Cygnet Infotech
Software Engineer
Sufalam Technologies
Visual Studio Code
Visual Studio
Microsoft Azure SQL Database
MySQL
Azure Functions
Azure DevOps Server
Jira
Bitbucket
RabbitMQ
FTP server
Azure DevOps
Bitbucket
SonarQube
Google PageSpeed Insights
Azure DevOps
I am writing to highly recommend Jaimin Soni for his outstanding work as a Technical Lead. He has been an invaluable member of our team, consistently demonstrating a deep understanding of development, strong technical skills, and a dedication to achieving excellence.
In addition to his technical skills, he is also an excellent communicator and team player. He is always willing to share his knowledge and expertise with others and has been a mentor to many of his colleagues.
Overall, I have been extremely impressed with Jaimin's work as a Technical Lead and would highly recommend him to any company looking for a talented and dedicated individual to join their team.
Jaimin has demonstrated a strong work ethic and dedication to his job. He consistently meets deadlines and is always willing to put in extra effort to ensure that projects are completed on time and to a high standard.
I highly recommend him for his exceptional attention to detail and his ability to deliver high-quality work.
Okay. So, I'm Jaimin Soni. I come from an IT background with around 10+ years of experience in the IT industry, working mainly with .NET, .NET Core, and MVC architecture. In terms of roles and responsibilities, I currently work as a lead as well as in an architect role. I'm also familiar with the software development life cycle and hold a couple of certifications on the Azure side, such as Azure Fundamentals (AZ-900) and AZ-305, and I plan to take further certifications on top of that, because my long-term goal is to become an architect. That is my overall background. I also have more than 5 to 6 years of experience on the Azure side, working with different Azure services like Azure DevOps, Logic Apps, App Services, AKS at some level, Azure storage accounts, and many more.
So dependency injection is mainly used in .NET, and the .NET Core framework, as well as the latest .NET versions, provides built-in support for it. The main purpose of using dependency injection is to create loosely coupled code, and it lets us write test cases against each service layer; for example, if we are using the repository pattern, we can write unit test cases for each service layer as well. We can create a separate service based on our requirement and inject it at any point where it's needed. For example, if we have two different applications, say one web application and one mobile application or Web API project, we can create the services once and inject them into a particular controller or base class. That loosely coupled structure is what we implement with the help of dependency injection. It also helps at the enterprise level: if we have an event-driven application or a publish/subscribe kind of mechanism, this approach makes it easy for developers to write code in a loosely coupled manner and keep development moving at a good pace. That is the benefit.
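As a rough sketch of the register-and-inject flow described above, assuming .NET 6+ minimal hosting with the built-in container (the IOrderRepository, SqlOrderRepository, and OrdersController names are hypothetical, added only for illustration):

using Microsoft.AspNetCore.Mvc;

var builder = WebApplication.CreateBuilder(args);

// Register the abstraction once; any consumer (web app, Web API, background job)
// receives the implementation through constructor injection.
builder.Services.AddScoped<IOrderRepository, SqlOrderRepository>();
builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();

// Hypothetical repository abstraction (repository pattern).
public interface IOrderRepository
{
    Task<IReadOnlyList<string>> GetOrdersAsync();
}

public class SqlOrderRepository : IOrderRepository
{
    public Task<IReadOnlyList<string>> GetOrdersAsync() =>
        Task.FromResult<IReadOnlyList<string>>(new[] { "order-1", "order-2" });
}

// The controller depends only on the interface, so it stays loosely coupled
// and can be unit tested with a fake IOrderRepository.
[ApiController]
[Route("api/[controller]")]
public class OrdersController : ControllerBase
{
    private readonly IOrderRepository _orders;

    public OrdersController(IOrderRepository orders) => _orders = orders;

    [HttpGet]
    public async Task<IActionResult> Get() => Ok(await _orders.GetOrdersAsync());
}

Because the controller only knows the interface, swapping SqlOrderRepository for another implementation or a test double needs no change to the controller itself.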
Okay. So for SQL query optimization, I would prefer to run the SQL query execution plan in SQL itself, if we have that permission on the developer side. If we don't, then we check whether the complex query is following SQL best practices or not: for example, whether we are returning unnecessary columns in our SELECT statement, whether there is unnecessary looping, or whether a cursor or some background operation is running as part of that query. We try to avoid those and optimize the query. And if we are returning larger data from that SQL query, we try to implement pagination and apply indexing on the particular tables or columns. These are the ways we can improve query performance.
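As one small, hedged illustration of the pagination point above, a parameterized OFFSET/FETCH query run through ADO.NET (the table and column names are made up for the example, and a supporting index on the ORDER BY column is assumed):

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient; // NuGet: Microsoft.Data.SqlClient

public static class CustomerQueries
{
    // Returns one page of rows instead of the whole result set,
    // and selects only the columns it actually needs.
    public static async Task<List<(int Id, string Name)>> GetCustomersPageAsync(
        string connectionString, int pageNumber, int pageSize)
    {
        const string sql = @"
            SELECT Id, Name
            FROM dbo.Customers
            ORDER BY Id
            OFFSET @Offset ROWS FETCH NEXT @PageSize ROWS ONLY;";

        var results = new List<(int, string)>();

        await using var connection = new SqlConnection(connectionString);
        await using var command = new SqlCommand(sql, connection);
        command.Parameters.AddWithValue("@Offset", (pageNumber - 1) * pageSize);
        command.Parameters.AddWithValue("@PageSize", pageSize);

        await connection.OpenAsync();
        await using var reader = await command.ExecuteReaderAsync();
        while (await reader.ReadAsync())
            results.Add((reader.GetInt32(0), reader.GetString(1)));

        return results;
    }
}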
Okay. So for data integrity, I would consider a transaction-based approach. Let's say we are calling one stored procedure, or multiple stored procedures, from the business logic layer and trying to insert or update data into multiple tables. Integrity is very important in that case because we are dealing with multiple tables, so a transaction will be the more reliable option. If any exception occurs at any stage of that business logic, we can roll back the transaction; that can be managed from the business layer as well as from the SQL side. If we are doing everything from a single stored procedure, we can use TRY...CATCH inside the stored procedure itself, write a log in the CATCH block, and roll back the transaction there. That way we can maintain data integrity in SQL.
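A minimal sketch of the application-side transaction described above, assuming two hypothetical stored procedures (usp_InsertOrderHeader, usp_InsertOrderLines) that must both succeed or both roll back:

using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

public static class OrderWriter
{
    public static async Task SaveOrderAsync(string connectionString, int orderId)
    {
        await using var connection = new SqlConnection(connectionString);
        await connection.OpenAsync();

        // Both stored procedure calls run inside one transaction.
        await using var transaction = (SqlTransaction)await connection.BeginTransactionAsync();
        try
        {
            await using (var insertHeader = new SqlCommand("dbo.usp_InsertOrderHeader", connection, transaction))
            {
                insertHeader.CommandType = System.Data.CommandType.StoredProcedure;
                insertHeader.Parameters.AddWithValue("@OrderId", orderId);
                await insertHeader.ExecuteNonQueryAsync();
            }

            await using (var insertLines = new SqlCommand("dbo.usp_InsertOrderLines", connection, transaction))
            {
                insertLines.CommandType = System.Data.CommandType.StoredProcedure;
                insertLines.Parameters.AddWithValue("@OrderId", orderId);
                await insertLines.ExecuteNonQueryAsync();
            }

            await transaction.CommitAsync();   // commits only if both calls succeeded
        }
        catch
        {
            await transaction.RollbackAsync(); // any failure undoes both writes
            throw;
        }
    }
}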
So a NoSQL database will be used when we follow a JSON-based document structure, when no transaction-based approach is required, or when we are dealing with non-relational data where no normalization is required. In that case we have the freedom to add columns or data based on our requirement at any point, without making changes on the database or schema side. There is no fixed schema to maintain; we just maintain each record as a document, and every time we can update or add the required fields in that document, so it's essentially a JSON-based document. I had some experience with that in the past: we worked with providers where we had DocumentDB, and on the .NET side we can also use an open-source database like MongoDB, which is very helpful in that case. And if we are working with a multi-tenant application, for example a customer base at a global level, I would prefer a NoSQL database. It has very good throughput in terms of performance, so we can do insert and update operations on a particular document or record within milliseconds, and it also provides good latency without performance issues in that kind of database. In those scenarios we can use NoSQL.
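A small sketch of the schema-less document idea, using the MongoDB .NET driver; the database, collection, and field names here are hypothetical:

using System.Threading.Tasks;
using MongoDB.Bson;     // NuGet: MongoDB.Driver
using MongoDB.Driver;

public static class CustomerProfileStore
{
    public static async Task UpsertCustomerProfileAsync(string connectionString)
    {
        var client = new MongoClient(connectionString);
        var database = client.GetDatabase("crm");
        var profiles = database.GetCollection<BsonDocument>("customerProfiles");

        // Documents in the same collection do not need to share a schema;
        // new fields can simply appear on newer documents.
        var document = new BsonDocument
        {
            { "customerId", "cust-1001" },
            { "name", "Contoso Ltd" },
            { "preferences", new BsonDocument { { "newsletter", true } } }
        };

        await profiles.InsertOneAsync(document);

        // Later, add a field that older documents never had.
        var filter = Builders<BsonDocument>.Filter.Eq("customerId", "cust-1001");
        var update = Builders<BsonDocument>.Update.Set("loyaltyTier", "gold");
        await profiles.UpdateOneAsync(filter, update);
    }
}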
Okay. So this is basically the concept of throttling, and we can use a rate limiter for it. Either we can create a custom action filter or handle it at the middleware layer, or we can use API Management on the Azure side (or the equivalent on AWS). If we are hosting our API in a cloud environment, we can use API Management, which provides a rate-limit policy there as well. So either we use that, or we create our own custom filter and apply throttling with a rate-limiter approach. This is mainly from a security perspective: if we are exposing our API to the outside world and multiple third parties are using it, and someone tries to flood or overload the API instance, then without any rate limit it will create problems for the other customers. It is also a business approach that people follow nowadays; for example, in the standard plan you can make 100 requests per minute, while in the premium plan you can make 1,000 requests per minute. That way we can add this protection to our application and maintain the overall performance of the application or API instance most of the time.
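A hedged sketch of that tiered rate-limit idea, assuming .NET 7 or later where ASP.NET Core ships a built-in rate-limiting middleware (the policy name, limits, and route are illustrative only):

// Program.cs
using System.Threading.RateLimiting;
using Microsoft.AspNetCore.RateLimiting;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRateLimiter(options =>
{
    options.RejectionStatusCode = StatusCodes.Status429TooManyRequests;

    // "standard" tier: 100 requests per one-minute window.
    options.AddFixedWindowLimiter("standard", limiterOptions =>
    {
        limiterOptions.PermitLimit = 100;
        limiterOptions.Window = TimeSpan.FromMinutes(1);
        limiterOptions.QueueLimit = 0; // reject instead of queueing
    });
});

var app = builder.Build();
app.UseRateLimiter();

// Hypothetical endpoint protected by the "standard" policy.
app.MapGet("/api/orders", () => Results.Ok(new[] { "order-1" }))
   .RequireRateLimiting("standard");

app.Run();

A "premium" policy with a higher PermitLimit could be registered the same way and applied to the endpoints exposed to premium customers.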
Okay. So in this code, there is a list of user roles returned from the GetUser method, and then a not-null check plus a count-greater-than-zero check before executing the loop. If a null reference exception is being thrown from that particular logic, we can handle it in a couple of ways. Instead of writing both the not-null check and the count check, we can use the Any() method with the null-conditional operator, something like userRoles?.Any() == true, which covers both whether the reference is null and whether there are any records. Or we can wrap that particular piece of code in a try/catch block and handle the NullReferenceException in the catch block. That way we can avoid the null reference exception in these cases.
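A tiny sketch of the null-safe variant mentioned above; the GetUserRoles method and the role values are hypothetical:

using System;
using System.Collections.Generic;
using System.Linq;

public static class RoleCheckDemo
{
    // Hypothetical lookup that may return null when the user is unknown.
    private static List<string>? GetUserRoles(int userId) =>
        userId == 1 ? new List<string> { "Admin", "Editor" } : null;

    public static void PrintRoles(int userId)
    {
        var userRoles = GetUserRoles(userId);

        // Null-conditional + Any(): safe even when GetUserRoles returns null.
        if (userRoles?.Any() == true)
        {
            foreach (var role in userRoles)
                Console.WriteLine(role);
        }
        else
        {
            Console.WriteLine("No roles found.");
        }
    }
}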
Okay. So here, we don't need to write a separate try/catch block for each and every operation; scattering try/catch like that doesn't follow the single-responsibility idea from the SOLID principles. If we want to follow SOLID, we can create a single-responsibility class as an exception filter and link that filter to each controller method, or register it globally or on a base controller. That becomes the one single point where we handle exceptions for our web application or Web API, so we don't need to write try/catch in each and every action method. That way we handle exception handling in our case with a single-responsibility class acting as an exception filter.
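A minimal sketch of that single-responsibility exception filter, registered once for all controllers (the filter name and response shape are illustrative, not taken from the original code):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Logging;

// Single-responsibility class: its only job is turning unhandled
// exceptions into a consistent API response.
public class GlobalExceptionFilter : IExceptionFilter
{
    private readonly ILogger<GlobalExceptionFilter> _logger;

    public GlobalExceptionFilter(ILogger<GlobalExceptionFilter> logger) => _logger = logger;

    public void OnException(ExceptionContext context)
    {
        _logger.LogError(context.Exception, "Unhandled exception");

        context.Result = new ObjectResult(new { error = "An unexpected error occurred." })
        {
            StatusCode = StatusCodes.Status500InternalServerError
        };
        context.ExceptionHandled = true;
    }
}

// Registration in Program.cs: every controller action is covered,
// so individual actions no longer need their own try/catch.
// builder.Services.AddControllers(options =>
//     options.Filters.Add<GlobalExceptionFilter>());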
Okay. So to handle a large volume of data with complex transactions in that application, there are a couple of options we have. We can use a cache or a distributed cache to store some of the data and improve the performance of our application. Then we can use RabbitMQ or another message-queue mechanism, or Azure Service Bus. Whenever some event occurs at a particular stage, we publish that data to a queue, and that queue will have a subscriber, or a topic with multiple subscribers, that consume the data and proceed further to the next stage. That creates a good rhythm at the transaction level of the application: instead of handling everything in a single request, we maintain a chain or sequence of transactions for this large volume or complex mechanism. In the first stage the event is published to a particular queue, and then it is consumed by the next queue or by multiple timer triggers or functions, so the whole flow runs in an asynchronous manner. We can also use the async/await keywords on each and every layer, from the application layer to the business layer to the data access layer. If real-time data is not required, we maintain it via the queue; if it is real time, it is consumed by a real-time consumer on the application side. That is the way we can manage it.
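As a hedged sketch of the publish side of that queue-based flow, assuming Azure Service Bus and the Azure.Messaging.ServiceBus client (the queue name and event type are hypothetical):

using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus; // NuGet: Azure.Messaging.ServiceBus

// Hypothetical event payload published when an order reaches a new stage.
public record OrderStageChanged(string OrderId, string Stage);

public static class OrderEventsPublisher
{
    // Publish the event to a queue; a separate subscriber (or an Azure Function)
    // picks it up asynchronously and carries on with the next stage.
    public static async Task PublishAsync(string connectionString, OrderStageChanged evt)
    {
        await using var client = new ServiceBusClient(connectionString);
        ServiceBusSender sender = client.CreateSender("order-stage-events");

        var message = new ServiceBusMessage(JsonSerializer.Serialize(evt))
        {
            ContentType = "application/json"
        };

        await sender.SendMessageAsync(message);
    }
}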
So the asynchronous programming pattern is something everyone uses nowadays to improve the performance of the application; that is its biggest advantage. At each and every stage, whether in the action method, the business layer, or the data access layer, I would prefer to use the async and await keywords. Whenever multiple requests come in from the application or user side, the server can handle them concurrently instead of blocking threads and processing strictly one after another, so users don't have to wait as long for a response from the application. That is the main purpose. Previously, in traditional frameworks and legacy systems, we were using the synchronous approach, but with the modernization happening on the .NET and .NET Core framework side, we can use the async and await keywords. That is the good approach, and everyone should follow it.
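A short sketch of that async-all-the-way idea, using an async action method that awaits an async database call (the connection string key, table, and route are made up for illustration):

using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Configuration;

[ApiController]
[Route("api/[controller]")]
public class ReportsController : ControllerBase
{
    private readonly string _connectionString;

    public ReportsController(IConfiguration configuration) =>
        _connectionString = configuration.GetConnectionString("Default")!;

    // Async all the way down: the request thread is released while the
    // database call is in flight, so it can serve other incoming requests.
    [HttpGet("count")]
    public async Task<IActionResult> GetReportCountAsync()
    {
        await using var connection = new SqlConnection(_connectionString);
        await connection.OpenAsync();

        await using var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Reports", connection);
        var count = (int)(await command.ExecuteScalarAsync() ?? 0);

        return Ok(new { count });
    }
}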
So from the architecture perspective, the application should be scalable and maintainable, and we consider that as part of our design approach. If we are going with a cloud provider, the application can scale horizontally or vertically whenever required, depending on the business requirement; that is the main purpose. From the maintainability perspective, if there is any issue with downtime, we should follow the agreement we have with the cloud provider, and based on that we can pick the right services and integrate them into the architecture. For example, if we have replication running across multiple regions, we can create a disaster-recovery scenario for one region or for multiple regions. We can also estimate downtime based on the SLA; if any patching process happens on the server side, that is also addressed from the cloud provider side, and we can inform our user base in advance. So from the architecture side I would consider a multi-region disaster-recovery approach for the production environment. And from the security side we would consider different controls, such as how we have configured the firewall on the application side, how a request travels from the front-end layer to the back-end layer, and the network-level configuration. These are the main scenarios we should cover.