Senior Software Developer - Pearl Technologies
Senior Software Developer - Upwork
Project Manager - SigmaSolve
Senior Software Developer - Freemind Technologies
Senior Software Developer - ProAcumen, UK
Team Leader - BexCode Services
Software Developer - SigmaSolve IT Tech Pvt Ltd
Git
SVN
VSS
Azure App Services
Azure Storage
Azure Service Bus
Azure Event Hub
Azure Data Factory
Azure Functions
Apache Kafka
Stripe
PayPal
Twilio
SendGrid
MailChimp
CSS
CSS3
Bootstrap
jQuery
Highcharts
Docker
Power BI
Power Automate
Node.js
SQL Server
MySQL
Entity Framework
Excellent work from Mit, highly recommended. His quality of work is phenomenal; he understood the project from the very first communication and delivered a solution better than what we had planned. We'll be doing more work together in the near future, Mit. Godspeed.
Mit is reliable and we will continue to work with him.
Hello there. My name is Mick Johan. I have some 12 years of experience in Microsoft .NET and related technologies, including .NET, .NET Core, SQL Server, and the broader Microsoft stack, along with brief experience in Node.js. On the front-end side I have good experience in AngularJS and Angular, and I'm comfortable with JavaScript, jQuery, and Bootstrap, so the entire front end. For the last three and a half years I've been working mostly on microservices and web APIs. The domains I've worked in are pharma, logistics (I did a lot of work in logistics), and e-commerce; that's the majority of my background. Since 2018 I've been working remotely, so I have solid experience working remotely and directly with clients, and I've also worked as a dedicated or leased developer from one client to another, for example with C.H. Robinson. Apart from that, I have very good knowledge of low-code tools, namely Jitterbit and SSIS; I've done a lot of integration work and ETL jobs in both. Lately I've been working in Azure, with Azure Functions and Azure message queues, and I've also worked with RabbitMQ. Currently I'm working with Apache Kafka to handle communication between microservices. So that is my technical background, and that's what I bring to the table if I'm hired.
Okay, so here's how I did it last time. It wasn't .NET 6; I think it was .NET 3.1. On Azure we were using deployment slots. We would target a deployment slot that was inactive and deploy everything to it. That slot was connected to a URL where we would do UAT testing. We tested everything there, and we had a good understanding of who the test users were, so it would not significantly affect the actual live database. We would run all the parameterized tests that had been defined, do a few manual tests, and if any UI was involved, do the UI testing as part of UAT. Once that was done, we would swap the deployment slots. So the slot where the latest code had been pushed from master is at that point still the inactive slot: fully tested, but not yet serving the front end. Once we swap, the latest master-branch code becomes live, and the previously live slot becomes the inactive one, so it no longer serves production and holds the older version. That's how we did it. Apart from that, there is a good example of using Kubernetes pods: you can have a priority or default pod and rotate which pod is primary, which makes AKS a powerful tool for near-zero downtime. But there is a caveat. If you're using caching, you have to ensure that whenever these swaps happen you clear your cache, so users actually see the fresh site. We ran into this issue with this approach: we were using Redis caching, the cache stayed the same, and users kept seeing a lot of older information that wasn't relevant in the updated context. What we did was clear the cache for that user: the entire cache for that particular user (all the experiences, featured products, and so on that had been cached) is cleared and then re-cached. That way the whole experience is fresh, and you never actually see the downtime.
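To make that cache-clearing step concrete, here is a minimal sketch, assuming StackExchange.Redis and a hypothetical "user:{id}:*" key convention (neither is from the original project):

using StackExchange.Redis;

public class UserCacheInvalidator
{
    private readonly ConnectionMultiplexer _redis;

    public UserCacheInvalidator(string connectionString) =>
        _redis = ConnectionMultiplexer.Connect(connectionString);

    // Clears every cached entry for one user after a slot swap, so the next
    // request re-caches fresh data instead of serving the pre-swap version.
    public void ClearUserCache(string userId)
    {
        var server = _redis.GetServer(_redis.GetEndPoints()[0]);
        var db = _redis.GetDatabase();
        // SCAN-based enumeration (server.Keys) avoids blocking Redis the way
        // a raw KEYS command would on a busy production instance.
        foreach (var key in server.Keys(pattern: $"user:{userId}:*"))
            db.KeyDelete(key);
    }
}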
On managing schema migration in a .NET application that needs to support multiple database providers: let me tell you what we have done. Right now, for the logistics partner we're working with, we are migrating one database from Azure SQL to MongoDB. The way we've done it is by creating providers behind an interface (an IProvider). That provider can have multiple instances; they are not singletons, so at any point you can switch the database and it will start serving from the other store, which is something we've been testing very rigorously. The database context works as a singleton entity for the database connection, and that abstraction gives me entities that are generic across the system. Using that approach, the entity stays the same, but the queries (the mediator) yield something different per provider. That's how we support multiple database providers. I worked with nopCommerce earlier, which is one of the most popular e-commerce systems in .NET at the moment, and it's built in a similar way; the only difference is that in nopCommerce the database context is a singleton, so you cannot use multiple database contexts there. But I would suggest always having multiple providers behind an interface abstraction, with the underlying results staying the same. That way, without damaging the architecture or injecting bugs, you can implement your database migration or a different strategy on a different database; it's easier. I'd like to share an earlier experience: at one point we used two DbContexts in one application and just switched the connection between them, and we had a lot of issues back then. So ideally it should be handled with a fair bit of abstraction to achieve what you need, while the underlying DB schema and the DB services that persist and fetch data stay the same and uniform. That gives you a better solution.
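A minimal sketch of that provider abstraction, with hypothetical names (Order, IOrderStore) and in-memory stand-ins for the real SQL Server and MongoDB implementations:

using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

public record Order(string Id, string Status);

public interface IOrderStore
{
    Task<Order?> GetAsync(string id);
    Task SaveAsync(Order order);
}

// Stand-in for a real SQL Server implementation.
public class SqlOrderStore : IOrderStore
{
    private readonly ConcurrentDictionary<string, Order> _rows = new();
    public Task<Order?> GetAsync(string id) =>
        Task.FromResult(_rows.TryGetValue(id, out var o) ? o : null);
    public Task SaveAsync(Order order)
    { _rows[order.Id] = order; return Task.CompletedTask; }
}

// Same contract, different backing store in a real system.
public class MongoOrderStore : SqlOrderStore { }

public static class StorageSetup
{
    // Callers depend only on IOrderStore; swapping providers is a config change.
    public static void AddOrderStore(this IServiceCollection services, string provider) =>
        services.AddScoped<IOrderStore>(_ =>
            provider == "Mongo" ? new MongoOrderStore() : new SqlOrderStore());
}

Because callers never see which store is active, you can run the two providers side by side during a migration and test them rigorously against each other.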
Okay. Middleware components are important because they sit on the request pipeline. To make them reusable, we have to use a lot of generics. I'll give you an example from the project I'm currently working on: we have written about five generic middlewares (generic base services that back the middleware) and put them into a NuGet package. There are, I think, six microservices in the picture at the moment: truck, carrier, service, trip, customer, and transportation. All of the services use the same middleware, as shown in the sketch below. What those middlewares do is, first of all, ensure the payload is serialized uniformly, so the front-end applications consuming those microservices, and any microservice contacting another microservice, can expect the same result shape. The second thing is that when we write middleware we have to ensure it is backward compatible. By backward compatible I mean: the truck service uses the latest .NET Core framework, but the trip and customer services are slightly behind on the framework version, so we try to keep the code as backward compatible as we can, and that applies to the middleware as well. The way we use middleware is to ensure the pipeline has proper authentication responses and proper error handling. And third, whenever a suspicious request is found, or a request is auditable, we randomly choose an auditable request and send it out to the audit department, so they can see what kind of interaction took place. Using that, we also achieve uniformity. Again, there is never a single rule of thumb about this: you always go back and look at what requirements you have and what reusability you predict and foresee, and based on that you make additions. But as I said, having a generic way of doing it is always easier, because you get forward and backward applicability of those middlewares.
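A sketch of what one of those packaged middlewares might look like; IAuditPublisher and the 1-in-100 sampling rate are assumptions, not the real package:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

public interface IAuditPublisher { Task PublishAsync(string method, string path); }

public class AuditAndErrorMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<AuditAndErrorMiddleware> _logger;

    public AuditAndErrorMiddleware(RequestDelegate next, ILogger<AuditAndErrorMiddleware> logger)
    { _next = next; _logger = logger; }

    public async Task InvokeAsync(HttpContext context, IAuditPublisher audit)
    {
        // Randomly sample requests for the audit department (1 in 100 here).
        if (Random.Shared.Next(100) == 0)
            await audit.PublishAsync(context.Request.Method, context.Request.Path);

        try
        {
            await _next(context);
        }
        catch (Exception ex)
        {
            // Uniform error shape, so every consuming service sees the same contract.
            _logger.LogError(ex, "Unhandled exception");
            context.Response.StatusCode = StatusCodes.Status500InternalServerError;
            await context.Response.WriteAsJsonAsync(new { error = "Internal server error" });
        }
    }
}

// Registration in each service: app.UseMiddleware<AuditAndErrorMiddleware>();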
Okay, so monolithic to microservices. There are many rules floating around the market, but here are the ones I actually use. First, plan the decomposition. Let's assume you're on an older framework and moving to a new one. Decomposition is the most important part, because a monolith doesn't need to worry about messaging: it's a single application, and everything is there. But when you go to microservices, you break the monolith into pieces, and those microservices have to communicate between themselves, so the messaging should be planned very well. Second, choosing the correct platform to host and run the application is equally important. Third, ensure that the common code you're going to reuse is pushed into secure and easily accessible modules, such as NuGet packages; otherwise you end up with a lot of redundant code that isn't required. For example, DTOs don't change a lot. They stay the same across the platform because that's what they are, a replica of the database shape, so they are one of the things to share. Fourth, in a monolith you don't have much of a distributed-caching concern, but caching can be very distributed in a microservices system, so you have to plan the cache well and put in the effort, because there can be delays in communication now that the pieces aren't one application anymore. Of course, fault tolerance is one of the advantages we're trying to exploit, but we have to plan for that as well. Next, build strong telemetry around the communication that's happening and create auditable interactions, because telemetry is what tells you where the fault is arising. The initial stage is always difficult, because the messages aren't planned perfectly, we don't expect certain user behavior, and we don't expect certain data to come in in a different shape, so solid telemetry is very important. And last but not least, security has to be the cornerstone of the entire establishment. In our case we were using the third-party provider Okta, and the same database was being moved out into different services, so it was difficult to keep using one access database; using the same database across services is the end of microservices. So what we did was use Kafka to spread out the events: any table belonging to a microservice stays in that service's own schema, and any table a microservice needs that actually belongs to another service got a separate replica schema. Whenever the data was emitted, we used a Kafka event to sync the databases, so we eliminated manual database syncing as well. So those are the items I would take into account to convert a monolith to microservices. It took a long time, which is why I have experience with it. Thank you.
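To illustrate that Kafka-based sync, a minimal sketch using the Confluent.Kafka client; the topic name, event shape, and broker address are assumptions:

using System.Text.Json;
using System.Threading.Tasks;
using Confluent.Kafka;

public record CustomerChanged(string CustomerId, string Name);

public class CustomerEventPublisher
{
    private readonly IProducer<string, string> _producer =
        new ProducerBuilder<string, string>(
            new ProducerConfig { BootstrapServers = "localhost:9092" }).Build();

    // The owning service emits the change; services holding a replica schema
    // consume it and update their local copy instead of querying a shared DB.
    public Task PublishAsync(CustomerChanged evt) =>
        _producer.ProduceAsync("customer-changed", new Message<string, string>
        {
            Key = evt.CustomerId,
            Value = JsonSerializer.Serialize(evt)
        });
}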
On securing communication between your microservices using Azure Service Bus: it's quite straightforward; we use Entra ID. What we did was put our entire authorization and authentication framework on Microsoft Entra (what used to be Azure Active Directory). On top of that, we created a matrix for message receiving and message broadcasting. Let's take a simple example: even though the ProBuild order is available across applications, ProBuild data is owned by the ProBuild service's own database. In that case we simply use the matrix to control it: the service can broadcast ProBuild events, but other services cannot ingest ProBuild data as their own, because in any other microservice's database those tables are just metadata, and we don't want to consume them as source data. That is how we were using Azure Service Bus. Actually, we used Azure Service Bus; now almost everything is on Kafka. I think only one application is still using Service Bus at this moment, I believe the carrier service. The carrier service uses Service Bus to create queues, and those queues are consumed by the different microservices right now, but we are not broadcasting Service Bus messages from the other applications anymore; we mostly use Kafka. Apart from that, you have to choose what kind of encryption you want to use, if you're interested in that: you can use private/public key based encryption of messages. There is also a hash-check NuGet package available to verify a message's integrity. So there are a few mechanisms in place, but I did not play an active part in building them; I'm using the framework that was already built before I joined. These are the items I'm aware of.
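For the remaining Service Bus piece, a minimal sender sketch using the Azure.Messaging.ServiceBus client; the queue name is an assumption:

using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public class CarrierQueueSender
{
    private readonly ServiceBusSender _sender;

    // One client per application is the usual pattern; it is thread-safe.
    public CarrierQueueSender(string connectionString) =>
        _sender = new ServiceBusClient(connectionString).CreateSender("carrier-requests");

    // Drops a message onto the queue consumed by the other microservices.
    public Task SendAsync(string payload) =>
        _sender.SendMessageAsync(new ServiceBusMessage(payload));
}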
Okay. The report generator should be broken into single responsibilities. There should be a PDF service: if the report generator finds that the document type is a PDF report, the PDF service does that work for you. If it's a Word document, the Word service does it. The report generator requests a report generation and breaks the work into deliverables; its own deliverable is always to communicate with the right service, deliver the document, and log or audit it. There should be a separate document service per format: the Word service always works on Word, the PDF service always works on PDF, and likewise for Excel, CSV, and JSON. For every different type of report, it is advisable to have a different service. Why are these separate services required? Because you can always unplug one. For example, there were reports of a macro that would run on our Excel output, and we had a couple of clients complaining about macro issues. Even though we didn't otherwise change anything for a while, we simply discontinued sending Excel files until we resolved the bug that was producing potentially malicious code running in Excel. So you can always plug a format in or out. If it's all one codebase, it reduces testability so badly that you have to retest all the document types even though you made just one change in the PDF logic. Instead, with separate services, if you make a change in the PDF service, you only test the PDF service and PDF-related functionality.
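A sketch of that per-format split; names like IDocumentRenderer are hypothetical, and a real PdfRenderer would wrap a PDF library:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

public record ReportData(string Title, IReadOnlyList<string> Rows);

public interface IDocumentRenderer
{
    string Format { get; }          // "pdf", "docx", "xlsx", ...
    byte[] Render(ReportData data);
}

// Stand-in renderer; the real one would use a PDF library.
public class PdfRenderer : IDocumentRenderer
{
    public string Format => "pdf";
    public byte[] Render(ReportData data) =>
        Encoding.UTF8.GetBytes($"{data.Title}\n{string.Join("\n", data.Rows)}");
}

// The generator only dispatches; unplugging a format (e.g. Excel during the
// macro incident) is just removing one renderer from the registered set.
public class ReportGenerator
{
    private readonly Dictionary<string, IDocumentRenderer> _renderers;
    public ReportGenerator(IEnumerable<IDocumentRenderer> renderers) =>
        _renderers = renderers.ToDictionary(r => r.Format);

    public byte[] Generate(string format, ReportData data) =>
        _renderers.TryGetValue(format, out var r)
            ? r.Render(data)
            : throw new NotSupportedException($"No renderer for '{format}'.");
}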
So, in this Azure Function code excerpt there seems to be a potential issue that could affect the scaling behavior; can I explain it? Okay. The reason we use Azure Functions is that they are small units, like microservices; they should not take a lot of time. If I were to fix this, I would simply return a token to the caller, create the record, and generate the actual task in the background using a background worker. What that background worker does is generate or process the queue item, and once the queue item is complete, it delivers the acknowledgment to the consuming service. That's easier, and here's why it scales: the function itself doesn't need much processing, because it's just a handler. It creates a log entry, and the processing happens in the background. You can keep the handler on a low scale, while the service that actually performs the task can be scaled up independently. It becomes easier to process the queue, the handler just does the log entry, and this lazy hand-off doesn't block. The way it's written now, it blocks the entire Azure Function; you're consuming the function for the wrong reason. Apart from that, what I see in the excerpt is a Service Bus connection, the trigger, myQueueItem, and the logger. The logger is one of the things you should use here: just record the entry of the task into the queue, then continue processing elsewhere. So I would segregate this into two pieces: a background service that is scaled as needed and does the real-time work, and this function acting as a simple gateway. That way it is more scalable, and it's also more resilient: if for some reason the queue processing fails, the function as written would choke and start choking other functions as well, whereas with a separate queue, if processing chokes you still have the queue entries and can always reprocess them. I think I used this approach somewhere, though I don't exactly recall the project.
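A hedged sketch of that split using the in-process Azure Functions model; the queue names and the connection setting name are assumptions:

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderFunctions
{
    // Thin handler: just log and forward. It returns quickly, so it never
    // blocks the Service Bus pump or drags down the scale controller.
    [FunctionName("AcceptOrder")]
    [return: Queue("orders-work")]
    public static string AcceptOrder(
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")] string myQueueItem,
        ILogger log)
    {
        log.LogInformation("Order accepted: {Item}", myQueueItem);
        return myQueueItem; // the output binding writes this to "orders-work"
    }

    // The heavy work runs here and scales independently; if it chokes,
    // items stay on the queue and can be reprocessed later.
    [FunctionName("ProcessOrder")]
    public static void ProcessOrder(
        [QueueTrigger("orders-work")] string workItem,
        ILogger log)
    {
        log.LogInformation("Processing {Item}", workItem);
        // ... real processing here ...
    }
}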
Okay. Microservices are hungry for SOLID. The reason monolithic architecture went away is that it was one huge architecture with everything inside it: logging, caching, database interaction, the front end, the authentication services. We broke it apart, and that is the S of SOLID applied at the architecture level: single responsibility. Then we use inversion of control. How do we use it? Inversion of control means everything is brought into the context on demand. You register everything in your container, but an object is only activated when your particular service is involved; the container performs the operation and controls the life cycle based on whether you registered a scoped, transient, or singleton object. The open/closed idea is also very easily observed: your core code stays the same even though it is distributed and replicated. When I say it stays the same, I mean you put one chunk of shared code into one repository, that repository will not change a lot, and you always extend it based on whatever requirement you have rather than modifying it. Then you have the Liskov substitution principle, which lets you decide what responsibility you want to delegate where. As in the previous question, responsibility should be delegated, so you have a PDF service, a document service, and so on; that dictates a fair bit of delegation across the services. And the interface segregation and dependency inversion principles round it out. So, SOLID in short: single responsibility (a microservice will always have a single responsibility) and inversion of control (your concrete objects are not the principal, your abstractions are; using abstraction you can inject, create, use, and discard objects according to the life cycle you expect). These principles interact with microservices very easily. And lastly, there are hybrid setups that use a monolith and microservices together; on one of my previous projects we used a kind of hybrid microservices architecture, reusing a lot of monolithic pieces to shorten the time to market. That is how I see it.
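A minimal sketch of that inversion-of-control point, using Microsoft.Extensions.DependencyInjection; the IGreeter names are hypothetical:

using System;
using Microsoft.Extensions.DependencyInjection;

public interface IGreeter { string Greet(string name); }
public class Greeter : IGreeter { public string Greet(string name) => $"Hello, {name}"; }

public static class Demo
{
    public static void Main()
    {
        var services = new ServiceCollection();
        // Lifetime choices: AddSingleton (one per app), AddScoped (one per
        // request/scope), AddTransient (new per resolve). The container, not
        // the caller, creates and discards the object.
        services.AddScoped<IGreeter, Greeter>();

        using var provider = services.BuildServiceProvider();
        using var scope = provider.CreateScope();
        var greeter = scope.ServiceProvider.GetRequiredService<IGreeter>();
        Console.WriteLine(greeter.Greet("world"));
    }
}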
Okay. As I said, I used Hangfire for a fairly short amount of time, but I understand that you can schedule a task with it, create cron-based jobs, and also use dependency injection for the services it consumes. That is one way of doing it. We moved to the background tasks of the .NET Core framework, hosted services that run in the background themselves, because the Hangfire tier we needed was paid; we had licensing and payment issues, so we stopped working with Hangfire. How were we using it? I'm talking about the specific case of happy.com, which is Variable Smart now. The way we used Hangfire earlier was this: whenever an order was placed, we created a scheduled background task to check whether the user had activated their membership. Based on that activation, we used Mailchimp to send them the required information, because that helped the user adopt the product, and since it was a subscription, we expected the customer to become an advocate very soon. That's what we used Hangfire for, but when we moved to microservices we started using the built-in background task. The background service would simply check whether there had been any interaction on the customer's side with the library, the libraries of songs or tracks that we had. Then we moved that to an Azure Function to make the approach even leaner and avoid running a lot of boilerplate for such a simple task. So those are the ways you can use background tasks, and they are very important. For example, you can simply call Task.Run, which escapes your async function and keeps working in the background; that's called the fire-and-forget approach. Background tasks save you a lot: customer interaction and similar work can all be moved to the background. You can schedule a background task, or on Azure you can set up a cron job or create a queue based on your requirement. Whenever a queue is used, you can pass a message through it and get the result back; for example, when was the last time the customer played a track, and what was the last location. Those are some of the advantages of background tasks. But I would suggest that if you have the option of a thinner, pre-provisioned function, use Azure Functions, or Lambda on AWS if there's the opportunity, because there is not a lot of boilerplate involved.
The second thing is that your application becomes slim: you don't carry a lot of extra code, and whenever you have to make changes to those background tasks, you don't have to worry about execution downtime or triggering the pipeline again to deliver the entire project. So in my view that is the good approach, but there could be cases in which Hangfire is the better fit that I'm not aware of.
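A minimal sketch of the hosted background task we replaced Hangfire with; IMembershipChecker and the 15-minute interval are assumptions:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public interface IMembershipChecker
{
    Task CheckPendingActivationsAsync(CancellationToken ct);
}

public class MembershipCheckService : BackgroundService
{
    private readonly IMembershipChecker _checker;
    public MembershipCheckService(IMembershipChecker checker) => _checker = checker;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Poll on a fixed interval; a cron-style schedule would also work.
        while (!stoppingToken.IsCancellationRequested)
        {
            await _checker.CheckPendingActivationsAsync(stoppingToken);
            await Task.Delay(TimeSpan.FromMinutes(15), stoppingToken);
        }
    }
}

// Registration: services.AddHostedService<MembershipCheckService>();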
Okay, high availability for Redis caching. On Azure you can have an App Service or a Kubernetes pod that runs a Redis distribution in the cloud. In my view, the best way to do it is to have a replicated Azure cache: one or two pods that sync with one another and are load balanced among themselves. That ensures high availability of the cache. There are a few other popular caches available, but I would try to use anything JSON-friendly, such as Redis. To ensure high availability, first make sure the pod or App Service running it can scale vertically; if possible, use an elastic plan so it scales up and down according to traffic. Second, if we're going to use an App Service, for example, we have to ensure the right zone is selected: the geographical zones it is divided across should be properly replicated among themselves, and the cache should be available in multiple zones, so that the cache nearest to the request becomes the serving choice. Third, replication should be good; replication is the most essential feature if you're running Redis across multiple zones or multiple Kubernetes pods. I think those are the approaches. Again, I have not done a lot of the DevOps or back-end side of Azure like this, where I had to actually diagnose the condition, try a couple of solutions, and see which one works best, because there is no rule of thumb; there could be any number of scenarios affecting the availability issues you're trying to solve. So those are my inputs. I don't have a lot more, and I won't beat around the bush, but those are the things I'm aware of that should be taken care of. I think that's a generic answer for other applications as well, but for Azure Redis, that's it. One more thing: Redis, as I know it, is used JSON-based, so try not to use Newtonsoft.Json; use the default JSON serializer available in .NET Core nowadays, System.Text.Json. It is faster than Newtonsoft.Json and largely compatible, and there are a lot of middlewares available for it that can even manipulate the pipeline. So the best choice is System.Text.Json instead of Newtonsoft.Json if you're on a .NET application. And with this, I rest my case.
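To make the serializer point concrete, a small sketch with System.Text.Json (built into .NET Core 3.0 and later); the CachedProduct type is hypothetical:

using System.Text.Json;

public record CachedProduct(string Id, string Name, decimal Price);

public static class CacheSerialization
{
    // Round-trips a cache payload without any Newtonsoft.Json dependency.
    public static string ToJson(CachedProduct p) => JsonSerializer.Serialize(p);
    public static CachedProduct? FromJson(string json) =>
        JsonSerializer.Deserialize<CachedProduct>(json);
}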
I think I have done fairly well. Thank you very much for the questions; they were very good and thought-provoking. See you in the next round.