
I have been a software developer since 2016, working in product-based companies and helping build their products, ranging from content management and content synchronization to regulatory tech. I love exploring various stacks, and it's fun for me to mingle with problem-solving work.
Freelance Developer
Freelance - SDE3
Zee Entertainment Enterprises - Professional 2
Capgemini - Software Development Engineer (Full Stack)
Astrika Infotech - Java Developer
SugarBox Networks - Java Developer
Cognext.AI
Google Cloud Platform

Argo

Redis

Splunk

Design Patterns

Git

Spring Framework
So basically my core technology remains Java, and mostly I have worked on Spring Boot microservices architectures, including driving event-driven designs. Apart from this, I have been a core backend developer since the beginning, around 7.4 years. Mostly I have worked in startups, product-based startups, ranging from media to fintech. I spent 4 to 5 years in the media domain, working on everything from content syncing to user registrations to gathering user journeys in the mobile apps; that is where I spent most of my development time. In the fintech domain I used to develop mathematical models using Java and the R language. There, a regulatory-tech startup's work had to be converted into software: all the financial reforms, or you can say the frameworks defined by big institutions for lending money, have to go through proper rules, regulations, and calculations, and those rules and calculations needed to be converted to software. So basically it was a replacement of a consultant with software. Apart from that, most recently I worked at ZEE5 in the payments and subscriptions team, where I integrated third-party payment APIs, from deducting the amount from a customer's bank account to the refund flow; everything there was developed by me and my team. I also worked on the renewals part of ZEE5, where subscriptions and renewals happen regularly on a monthly basis or as per the plan's expiration period. So that's it from the latest.
So basically, with Redis there is a certain time interval that needs to be fixed so that the cache can be cleared and then revalidated. At specific intervals we group all the plan IDs and get the tentative dates when each plan ends; before those dates, the cached data needs to be flushed out. We did that flushing in the code itself: we scheduled a fixed time, and at that time the cache is flushed and the data is reloaded into Redis. We used Redis both for fetching plan details and for validating the rules before a plan or pack is attached to the ZEE5 account.
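The scheduled flush-and-reload cycle described above can be sketched as follows. This is a minimal illustration, not the actual ZEE5 code: a plain in-memory map stands in for Redis, the loader function stands in for the plan-details lookup, and in the real system flush() would be invoked by a scheduler (e.g. Spring's @Scheduled) at the fixed time.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a read-through plan cache with an explicit scheduled flush.
// All names are illustrative.
class PlanCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> loader; // stand-in for the DB/API lookup

    PlanCache(Function<String, String> loader) {
        this.loader = loader;
    }

    // Read-through: serve from cache, loading from the source on a miss.
    public String getPlan(String planId) {
        return cache.computeIfAbsent(planId, loader);
    }

    // Invoked on the fixed schedule (before tentative plan-end dates):
    // clear everything so subsequent reads revalidate against the source.
    public void flush() {
        cache.clear();
    }
}
```

After flush(), the next read reloads fresh data, which is exactly the revalidation behavior the scheduled job enforces.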
So basically, whenever data moves from one service to another service and the state has to be maintained, it can be maintained using the saga pattern, where the state is tracked in an orchestrator such as Conductor. Whenever the data's state changes, the orchestrator gets updated, and finally the state is persisted in the database. That is one pattern we used to maintain the payment state in the ZEE5 ecosystem. So the saga pattern can be really helpful in maintaining data consistency across microservices. Without a saga, we have to rely on ordinary rollback mechanisms, but we cannot rely on them much: consistency can only partially be achieved, and only where a database is attached. And when the orchestrator simply delegates the call from one service to another, any network failure or other abrupt failure makes keeping the data consistent considerably more difficult. So it's better to use the saga pattern, because it has a separate orchestration service that tracks the latest updated state of the data and updates the database; if anything goes wrong, we get a chance to roll back the transactions, or to process the transaction as per our requirement.
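The rollback behavior described above can be sketched as a minimal orchestrated saga: each step carries a forward action and a compensating action, and on any failure the orchestrator undoes the steps already completed, in reverse order. This is an illustrative sketch, not the payment-service code; a real engine like Conductor would also persist the saga state and retry steps.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Minimal orchestrated-saga sketch: run steps in order; on failure,
// compensate the completed steps in reverse order.
class SagaOrchestrator {
    interface Step {
        void execute() throws Exception; // forward action (e.g. debit payment)
        void compensate();               // undo action (e.g. refund payment)
    }

    // Returns true on full success; on any failure, rolls back and returns false.
    static boolean run(List<Step> steps) {
        Deque<Step> completed = new ArrayDeque<>();
        for (Step step : steps) {
            try {
                step.execute();
                completed.push(step);
            } catch (Exception e) {
                // Roll back everything that already ran, newest first.
                while (!completed.isEmpty()) {
                    completed.pop().compensate();
                }
                return false;
            }
        }
        return true;
    }
}
```

The key design point is that each service owns its own compensation, so consistency is restored without a distributed transaction.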
The circuit breaker pattern is what gives us a fallback mechanism. In our ecosystem it hardly ever triggered: since all the services were internal, the failure rate was very low. But still, in case a failure happens, we can redirect to a separate web page, where we either tell the customer that there is some error, or notify them that something needs to be updated. That's it for the circuit breaker pattern from my side. As for the implementation: at the core there has to be a fallback mechanism, and in the dependencies we should have the Netflix (Hystrix) dependency, which wraps the method and, if there is any failure in the call, delegates the call to a certain fallback. In that fallback we can redirect towards a dedicated web page or any other fallback mechanism.
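The idea can be sketched as a small state machine, stripped of any framework: after a threshold of consecutive failures the breaker opens, and every call goes straight to the fallback (the dedicated error page) without touching the remote service at all. This is an assumption-laden sketch; real libraries such as Hystrix or Resilience4j add a half-open state and a reset timeout on top of this.

```java
import java.util.function.Supplier;

// Minimal circuit-breaker sketch: CLOSED until `threshold` consecutive
// failures, then OPEN (short-circuits to the fallback). Illustrative only.
class CircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;

    CircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    public <T> T call(Supplier<T> remote, Supplier<T> fallback) {
        if (consecutiveFailures >= threshold) {
            return fallback.get();       // OPEN: don't even try the remote call
        }
        try {
            T result = remote.get();
            consecutiveFailures = 0;     // success closes the breaker again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;       // record the failure
            return fallback.get();       // delegate this call to the fallback
        }
    }
}
```

The point of opening the breaker is to stop hammering a service that is already failing, which both protects it and keeps our own response times bounded.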
So basically, I have not used RabbitMQ; I've used Kafka for this. With Kafka, producer idempotence is handled implicitly via Spring Boot configuration. And if I want to handle idempotency on the consumer side myself, I just make sure to check whether the incoming data is already present in the DB or not: if it is present, I won't update it; if it is not present, I'll persist the data.
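That consumer-side check can be sketched in a few lines. Here a HashSet stands in for the database lookup, and the names are illustrative; in a real Kafka listener the same check-then-insert would run against the DB (ideally with a unique constraint on the event ID so concurrent redeliveries also collapse to one row).

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of an idempotent consumer: skip any event ID we have already persisted.
class IdempotentConsumer {
    private final Set<String> persistedIds = new HashSet<>(); // stand-in for the DB

    // Returns true if the event was newly persisted, false if it was a duplicate.
    public boolean onMessage(String eventId, String payload) {
        if (persistedIds.contains(eventId)) {
            return false;            // already processed: do not update again
        }
        persistedIds.add(eventId);   // in real code: insert payload into the DB
        return true;
    }
}
```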
I have not used it; I have no idea about that particular topic.
So basically, first of all, we should have a dedicated exception handler, an exception controller. We first have to determine what type of exception it is and handle it accordingly: whether it's a database-not-available exception, a data-not-found exception, or something else, we write a dedicated handler method for it inside that exception controller, annotated with the specific exception class it handles. So there is one exception handler per exception type, defining what error message needs to be returned and how to interpret whatever exception was thrown. Apart from this, we should also log the incoming ID, and we should log with log.error, not log.info, because excessive info logging will delay the responses, since the execution time increases. So it's better to use log.error inside the catch blocks: whenever we need to, we can see what the incoming data was, and we also log the message explaining why the exception occurred.
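The per-exception-type mapping described above can be sketched as plain Java so it stands alone. In Spring this logic would live in a @RestControllerAdvice class with one @ExceptionHandler method per exception type; here it is a single method with one branch per type, and the class, field, and message names are all illustrative.

```java
import java.util.NoSuchElementException;

// Sketch of a central exception handler mapping exception types to
// HTTP status codes and error messages, with the incoming ID included.
class ApiExceptionHandler {
    static final class ErrorResponse {
        final int status;
        final String message;
        ErrorResponse(int status, String message) {
            this.status = status;
            this.message = message;
        }
    }

    // One branch per exception type, mirroring dedicated @ExceptionHandler methods.
    // In real code, each branch would also call log.error(...) with the incoming ID.
    public ErrorResponse handle(Exception ex, String incomingId) {
        if (ex instanceof IllegalArgumentException) {
            return new ErrorResponse(400, "Bad request for id " + incomingId + ": " + ex.getMessage());
        }
        if (ex instanceof NoSuchElementException) {
            return new ErrorResponse(404, "Data not found for id " + incomingId);
        }
        return new ErrorResponse(500, "Internal error for id " + incomingId);
    }
}
```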
It's the singleton design pattern. So basically, the purpose of this design pattern is to create a single object across the application. The object is created only once in the application's life cycle, and that one instance is shared; for example, a database connection object cannot be created again by any other thread apart from the existing one. That is the thing.
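A minimal thread-safe sketch of the pattern, using the initialization-on-demand holder idiom: the JVM guarantees the holder class (and thus the instance) is initialized exactly once, so no second thread can ever create another object. The class name is illustrative, not code from a real project.

```java
// Singleton via the holder idiom: lazy, thread-safe, no explicit locking.
class ConnectionManager {
    private ConnectionManager() { }              // block outside instantiation

    private static final class Holder {
        static final ConnectionManager INSTANCE = new ConnectionManager();
    }

    public static ConnectionManager getInstance() {
        return Holder.INSTANCE;                  // the same object for every caller
    }
}
```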
First of all, we need to go through what type of application it is: whether it is event-driven, or more of an execution type where we feed in an input, it gives an output, and that output goes into the next processor. We need to understand that first, and also what exactly the integration involves: whether the monolith needs Kafka or some other message broker, or whether any other microservice patterns need to be implemented. We need to analyze that properly first. Apart from that, if we are trying to break the monolith, we should identify the separation of concerns: where the state gets changed, where the data needs to be persisted, or, you can say, where one flow ends and the next flow starts. For example, suppose we have an e-commerce application designed as a monolithic architecture. We should look at what is in the cart, then whether the payment is done, and we should also ensure that the products are in the inventory before the payment is done. So we have to make the separation of concerns crystal clear. We should also be able to analyze how much user traffic there would be and whether any load balancers need to be used. Apart from that, if any database changes need to be done, we should decide whether to use MySQL or NoSQL, depending on the requirement and the type of application, and we should also implement caching if any is required. That's it.
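The e-commerce separation-of-concerns point above can be sketched like this: once cart, inventory, and payment sit behind their own interfaces, each can later become its own service when the monolith is split, and the checkout flow enforces the inventory check before payment. All the names and the simplified flow are illustrative assumptions.

```java
import java.util.Map;

// Sketch of clearly separated concerns inside a checkout flow.
class Checkout {
    interface Inventory { boolean inStock(String sku, int qty); }   // future inventory service
    interface Payments  { boolean charge(String userId, double amount); } // future payment service

    private final Inventory inventory;
    private final Payments payments;

    Checkout(Inventory inventory, Payments payments) {
        this.inventory = inventory;
        this.payments = payments;
    }

    // Returns true only if every cart item is in stock AND the payment succeeds.
    public boolean placeOrder(String userId, Map<String, Integer> cart, double total) {
        for (Map.Entry<String, Integer> item : cart.entrySet()) {
            if (!inventory.inStock(item.getKey(), item.getValue())) {
                return false; // never charge the customer for unavailable items
            }
        }
        return payments.charge(userId, total);
    }
}
```

Because the boundaries are interfaces, extracting inventory or payments into a separate microservice later only changes the implementation behind each interface, not the checkout flow itself.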
No. I'm not a front-end guy, so I have no idea about it.