
Software professional and accomplished delivery lead with over 16 years of experience in IT services and the banking industry. Presently engaged in a transformative project focused on a Fixed Income Settlements platform. My responsibilities span the entire project lifecycle, from planning and estimation to design, build, delivery, and refinement/refactoring, all executed within an Agile environment.
Previously, I worked on successfully modernizing legacy systems into cloud-native distributed applications, using technologies such as Kubernetes, DevOps, microservices, and Kafka. I have also worked on SOA, Business Rules Management Systems, and IVR. During my tenure in IT services I worked across the Banking, Insurance, Healthcare, and TnH domains.
A lifelong learner, I keep myself up to date and am interested in working with cutting-edge technologies, including Machine Learning, Blockchain, and more.
VP Sr. Lead Engineer (Trade Management Athena), JPMorgan Chase
VP Sr. Engineering Lead (Settlements Processing Platform), JPMorgan Chase
Architect, Cloud Transformation, Wipro
Project Engineer, Wipro
Consultant & Technical Analyst, Cognizant
Senior Consultant, Cognizant
Kubernetes
DevOps
Microservices
Kafka
SOA
IVR
Splunk
AWS
iReport
JUnit
Mockito
Bamboo
IBM BPM
T-SQL
Docker
Git
VS Code
Trello
Slack
Google Cloud
Hi, my name is Rajkesh Bhattacharjee. I have a total of 17 years of experience and have worked for multiple companies: I started my career with Wipro, after that I worked for Cognizant, and finally I joined JPMorgan back in 2019. My most recent experience is working as an engineering manager. I manage a team of six people, and we are working on a big transformation program in which we are moving from a vendor platform to a more in-house platform to deal with settlements. That is what I am doing now. Technology-wise, throughout my career I have been more of a back-end and middleware developer. I have worked with Java and J2EE, and right now we are working with Spring Boot microservices. I have experience with DevOps, including end-to-end DevOps pipelines and deployment. I have also worked a little on the business rules side, with tools such as IBM ODM, so I have experience there as well. In my earlier days I also worked on SOA platforms such as IBM BPEL and other BPM platforms. My most recent experience, though, is working on microservices, building cloud-ready applications, working on a transformation program, and building the solution architecture for how to execute the transformation plan over a multi-year program. That is where my current experience lies.
A rollback mechanism for a distributed payment processing system. Okay. When we are designing a rollback mechanism for a distributed payment processing system, I would first break it down into steps. The first step is to present the user with an API or a UI to collect the payment information. From there, there should definitely be a validation layer of logic that validates what information the user has provided. After validation, there would be a connection to a third-party provider, whether Visa, Mastercard, UPI, or any other, to which we pass the credit card or payment information, and that process should then finally connect to the payer's bank to deduct the money. So the first step is validation of the payment or payment-mode information provided by the payer; the second step is to contact the issuing bank, that is, the payer's bank, and deduct the money; the third step is to credit the money to the receiver's account; and the fourth step is to provide an acknowledgment once the entire process is done. In my experience it should all be a synchronous system, not an asynchronous one: first the information is validated, then the money is deducted, then the money is credited. Now, an outage can occur at any of these steps. The first is on the validation side: if validation fails, there is basically nothing to roll back, so we just stop there and respond that we cannot validate the payment because some servers are down or something like that, please try again after some time. The second point of failure is while deducting the money from the payer: since the deduction itself has not gone through, we should again let the user know, stop the process there, and let the user retry the payment. The third case is when the money has been deducted from the payer but has not been received by the receiver; that is the only case where we actually need to roll back, with a compensating reversal of the debit on the payer's account.
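A minimal sketch of how this step-wise rollback could look in Java, assuming hypothetical gateway interfaces for the validate, debit, and credit steps described above; only a debit that succeeded without a matching credit triggers a compensating reversal:

```java
import java.util.UUID;

// Hypothetical interface covering the three steps described above.
interface PaymentGateway {
    boolean validate(PaymentRequest req);     // step 1: validate payment info
    String debitPayer(PaymentRequest req);    // step 2: returns a debit reference
    void creditReceiver(PaymentRequest req);  // step 3: credit the receiver's bank
    void reverseDebit(String debitRef);       // compensating rollback of step 2
}

record PaymentRequest(UUID id, String payerAccount, String receiverAccount, long amountCents) {}

class PaymentProcessor {
    private final PaymentGateway gateway;

    PaymentProcessor(PaymentGateway gateway) { this.gateway = gateway; }

    /** Runs the steps synchronously; rolls back only when the debit
     *  succeeded but the credit did not. */
    String process(PaymentRequest req) {
        // Validation failure: nothing to roll back, ask the user to retry later.
        if (!gateway.validate(req)) return "VALIDATION_FAILED";

        String debitRef;
        try {
            debitRef = gateway.debitPayer(req);
        } catch (RuntimeException e) {
            // Debit never went through: nothing to roll back, user can retry.
            return "DEBIT_FAILED";
        }

        try {
            gateway.creditReceiver(req);
        } catch (RuntimeException e) {
            // Money left the payer but never reached the receiver:
            // the only case that needs a compensating reversal.
            gateway.reverseDebit(debitRef);
            return "CREDIT_FAILED_DEBIT_REVERSED";
        }
        return "ACKNOWLEDGED";  // step 4
    }
}
```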
A feature flag system to control rollouts in a payment service. Sorry, I was still on the last question; I didn't realize it was time-bound. So the strategy to implement a feature flag system for controlled rollouts in a payment service. A feature flag means I want to enable or disable a feature; here, I want to enable a new feature in my payment service in a controlled way. The main strategy I see is a canary release. For that canary release, we would first test it with a subset of users, maybe users who visit our payment service very frequently. Or, in a payment service, we could roll out the new feature only for low-value transactions, if the use case permits. Based on some criterion, either a randomly selected user base or the payment amount, we should configure our load balancer so that that subset of users is directed to the system with the new feature enabled, whereas most of the other users still point to the old payment system. That is what my feature flag would be.
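A minimal sketch of the amount-based and percentage-based flag check described above, with hypothetical names; a real system would more likely use a feature flag platform than hand-rolled logic:

```java
import java.util.concurrent.ThreadLocalRandom;

/** Decides whether a given payment is routed to the new (canary) flow. */
class FeatureFlag {
    private final boolean enabled;
    private final int rolloutPercent;   // 0..100, share of eligible traffic on the canary
    private final long maxAmountCents;  // only low-value transactions qualify

    FeatureFlag(boolean enabled, int rolloutPercent, long maxAmountCents) {
        this.enabled = enabled;
        this.rolloutPercent = rolloutPercent;
        this.maxAmountCents = maxAmountCents;
    }

    boolean routeToNewFlow(long amountCents) {
        if (!enabled || amountCents > maxAmountCents) return false;
        // Randomly sample the configured percentage of eligible traffic.
        return ThreadLocalRandom.current().nextInt(100) < rolloutPercent;
    }
}

class PaymentRouter {
    public static void main(String[] args) {
        // 10% of payments under $50 go to the new feature; everyone else stays on the old path.
        FeatureFlag flag = new FeatureFlag(true, 10, 5_000);
        String target = flag.routeToNewFlow(2_500) ? "new-payment-flow" : "old-payment-flow";
        System.out.println("Routing to: " + target);
    }
}
```

Hashing a stable user ID instead of sampling randomly would keep each user consistently on one side of the rollout, which matches the "subset of users" variant of the answer.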
Methods to efficiently implement distributed transactions across microservices in a payment ecosystem. Since payment ecosystems are mostly central systems, I would assume there is an orchestrator service with two domain services: one connecting to the payer's bank and one connecting to the receiver's bank. The entire operation should be done in one logical transaction and in sequence: the first service connects to the payer's bank, deducts the money, and informs the orchestrator; only on a successful response to the orchestrator is the second transaction executed. If the first transaction was never executed, the orchestrator just signals a failure. If the second transaction was not executed successfully, the orchestrator will know that and request a reversal from the payer's bank. Now we have to think about outages in different scenarios, even for the orchestrator service itself, for example if the communication between the orchestrator and one of the banks does not complete properly. Since these are synchronous calls, there should always be a timeout. If the timeout occurs while contacting the payer's bank, the orchestrator closes the feedback loop there. If the timeout occurs on the receiver's bank, the orchestrator should stop the transaction at that point and time out, but it should then keep retrying, querying the status by transaction ID, and as soon as the status is known, update it asynchronously. So it should be based entirely on the orchestrator service, which makes sure both transactions happen. And after a timeout, the orchestrator service should not allow another transaction for a while.
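A compact sketch of this orchestration, assuming hypothetical bank client interfaces and assuming the debit ID doubles as an end-to-end correlation ID; the timeout-then-poll behavior on the receiver side is modeled with a simple status query loop:

```java
import java.util.concurrent.TimeoutException;

/** Hypothetical client for a bank's payment API. */
interface BankClient {
    String debit(String account, long cents) throws TimeoutException;   // returns a transaction ID
    String credit(String account, long cents) throws TimeoutException;
    String queryStatus(String txnId);  // "SUCCESS", "FAILED", or "PENDING"
    void reverse(String txnId);        // compensating reversal
}

class PaymentOrchestrator {
    private final BankClient payerBank;
    private final BankClient receiverBank;

    PaymentOrchestrator(BankClient payerBank, BankClient receiverBank) {
        this.payerBank = payerBank;
        this.receiverBank = receiverBank;
    }

    String execute(String payerAcct, String receiverAcct, long cents) {
        String debitId;
        try {
            debitId = payerBank.debit(payerAcct, cents);  // leg 1: debit the payer
        } catch (TimeoutException e) {
            return "FAILED";  // debit never confirmed: close the loop, nothing to reverse
        }

        try {
            receiverBank.credit(receiverAcct, cents);     // leg 2: credit the receiver
            return "SUCCESS";
        } catch (TimeoutException e) {
            // Credit status unknown: poll by correlation ID before deciding to reverse.
            return resolveCredit(debitId);
        }
    }

    private String resolveCredit(String debitId) {
        for (int attempt = 0; attempt < 5; attempt++) {
            String status = receiverBank.queryStatus(debitId);
            if ("SUCCESS".equals(status)) return "SUCCESS";
            if ("FAILED".equals(status)) {
                payerBank.reverse(debitId);  // second leg failed: reverse the payer debit
                return "REVERSED";
            }
            // PENDING: keep retrying; a real system would back off and update asynchronously.
        }
        return "PENDING_MANUAL_REVIEW";
    }
}
```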
Generally, for payment reconciliations, what happens is we do a journal-based posting: we do a ledger posting of all the transactions from our transactional system, and that feeds into some reconciliation mechanism. The major components of the data model should be the internal account number, the external account number, the parties, and the payment amount. Then we should get another feed, at end of day or at some frequency, from both the payer bank and the receiver bank, containing their external account numbers and the amounts. Our ledger should then be matched against their statements, the payer bank statement and the receiver bank statement. I probably didn't understand the entire system, but what I would think is that the data model has to be a non-RDBMS data model, which should help us with file-based reconciliation. Conceptually, we reconcile our transaction ledger against the statements from the external entities. Mostly that is the internal bank account, the external bank account, and the payment amount, plus the transaction IDs and the external transaction IDs, and they should go into some kind of data lake where the reconciliation process runs. So we should have ID-based reconciliation for all the records; any breaks should go into a suspense account, and from there they have to be handled daily by operations.
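A small sketch of the ID-based matching described above, with hypothetical record fields; unmatched entries land in a suspense list for operations to work daily:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** One ledger or statement entry; the external transaction ID is the match key. */
record LedgerEntry(String externalTxnId, String internalAccount,
                   String externalAccount, long amountCents) {}

class Reconciler {
    /** Matches internal ledger entries against an external bank statement by
     *  transaction ID and amount; everything that fails goes to suspense. */
    static List<LedgerEntry> reconcile(List<LedgerEntry> ledger, List<LedgerEntry> statement) {
        Map<String, LedgerEntry> byId = new HashMap<>();
        for (LedgerEntry e : statement) byId.put(e.externalTxnId(), e);

        List<LedgerEntry> suspense = new ArrayList<>();
        for (LedgerEntry ours : ledger) {
            LedgerEntry theirs = byId.get(ours.externalTxnId());
            boolean matched = theirs != null && theirs.amountCents() == ours.amountCents();
            if (!matched) suspense.add(ours);  // break: reviewed daily by operations
        }
        return suspense;
    }
}
```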
So we should employ a number of tracking tools. One very useful tool, which I use in my current system, is definitely logs, Splunk-based logs, along with tools like Dynatrace. With Splunk-based logs we can build all our custom dashboards, which can efficiently track memory performance: we can create dashboards that provide memory usage in real time by analyzing heap dumps, thread dumps, and application logs. They should also provide a proper summary of the number of payments going through. At the same time, tools like Dynatrace can help us monitor our CPU usage, our memory usage, and the like.
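As a small illustration, structured application logs make those Splunk dashboards straightforward to build; a sketch using SLF4J, assuming the log lines are shipped to Splunk and the event names are hypothetical:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class PaymentLogger {
    private static final Logger log = LoggerFactory.getLogger(PaymentLogger.class);

    /** Emits a key=value log line that a Splunk dashboard can aggregate,
     *  e.g. counting payments by status or summing amounts over time. */
    static void logPayment(String txnId, String status, long amountCents, long latencyMs) {
        log.info("payment_event txnId={} status={} amountCents={} latencyMs={}",
                 txnId, status, amountCents, latencyMs);
    }

    public static void main(String[] args) {
        logPayment("TXN-123", "SETTLED", 250_000, 42);  // hypothetical example event
    }
}
```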
For example, in this logic, suppose there are only two records. Then low is 0 and high is 1, so mid is (0 + 1) / 2, which is 0. Now suppose x is not in the array. If arr[mid] is less than x, low becomes mid + 1; sorry, I see here it is greater than x, so high becomes mid - 1, which makes high -1 while low is still 0, and the loop ends. So if there are two elements in the array, say with values 2 and 4, and we are searching for a value that is not there and is less than the first element of the array, that is the case the trace covers.
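The trace above is easier to follow against the code itself; a standard iterative binary search, stepping through the two-element case with a target smaller than the first element:

```java
class BinarySearch {
    /** Returns the index of x in a sorted array, or -1 if absent. */
    static int search(int[] arr, int x) {
        int low = 0, high = arr.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;  // avoids overflow of (low + high)
            if (arr[mid] == x)      return mid;
            else if (arr[mid] < x)  low = mid + 1;
            else                    high = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        // arr = {2, 4}, x = 1: low=0, high=1, mid=0; arr[0]=2 > 1 so high becomes -1;
        // the loop exits and the search correctly reports the value as absent.
        System.out.println(search(new int[]{2, 4}, 1));  // prints -1
    }
}
```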
Examine the following C# code, a generic repository checked into the repository that implements IDisposable. Not disposable, sorry, IDisposable. I'm not much aware of C#, but I'm trying. I notice that the dispose pattern is being used here to ensure proper resource management. There is a GenericRepository<T> : IRepository<T> where T : class, a DbContext, and a Dispose method that checks if it is not already disposed before disposing the context. Sorry, I don't fully understand this.
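The C# snippet itself is not recoverable from the transcript, but the pattern being asked about, a repository whose Dispose is guarded by a disposed flag, has a close Java analogue in AutoCloseable; a hedged sketch under that assumption, not the interview's actual code:

```java
import java.sql.Connection;
import java.sql.SQLException;

/** Java analogue of the C# dispose pattern: an idempotent close() guarding a resource. */
class GenericRepository<T> implements AutoCloseable {
    private boolean closed = false;  // mirrors the C# 'disposed' flag
    private final Connection context;  // stands in for the C# DbContext

    GenericRepository(Connection context) { this.context = context; }

    @Override
    public void close() throws SQLException {
        if (!closed) {  // dispose only once, even if close() is called repeatedly
            if (context != null) context.close();
            closed = true;
        }
    }
}
// Usage: try (GenericRepository<String> repo = new GenericRepository<>(conn)) { ... }
// guarantees close() runs, just as C#'s 'using' statement guarantees Dispose().
```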
So there are multiple ways to do that. One is the blue-green deployment strategy, which we sometimes take. Blue-green is basically a zero-downtime rollout of new features where the blue cluster is the existing cluster and the green cluster is the new one. We deploy the new features to the green cluster and then switch traffic via the load balancer: the load balancer initially points to the blue cluster, and once all testing has been done on the green cluster, we just switch the load balancer to the green cluster. That is how we can achieve a no-downtime strategy for rolling out new payment features.
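Since Kubernetes is listed among the skills above, the load-balancer switch can be illustrated with a Service selector flip; a minimal sketch with hypothetical labels, where changing `version: blue` to `version: green` re-points all traffic at once:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: payment-service
spec:
  selector:
    app: payments
    version: blue   # flip to "green" once the green cluster passes testing
  ports:
    - port: 80
      targetPort: 8080
```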
How to manage technical debt. Generally, in our systems, we first try to make sure we introduce as little technical debt as possible. That said, in real scenarios it is not always possible, given our timelines and the pressure of testing and meeting deadlines. So the first thing is to capture technical debt properly. Then, during sprint planning, sprint reviews, or retrospectives, we try to prioritize those technical debt items alongside the feature stories as much as possible. Sometimes we know that some feature has been done very hastily and has created technical debt; if it is a small item, we tag it along with feature stories that are going to touch the same feature again, where possible. Otherwise, if the technical debt grows to a very high level, we may take a break between releases and dedicate one full sprint to covering the technical debt, once a scheduled release has gone out. The main point is that we always keep proper track of technical debt items and monitor them very closely during each sprint review.
So there are quite a few things we do. The first, obviously, is to arrange a standard training process. The second is to set up Sonar scans, or any other scans for that matter, which take care of code coverage, test coverage, and coding guidelines. One more thing we always try to adhere to is a standard formatting pattern for classes and methods, so that review is always easier. Then we implement strict guidelines for all code reviewers: every review should go through a checklist of required items, for example code coverage first, test coverage second, third ensuring there are sufficient test cases along with unit test evidence, and fourth running the change through performance tests and other integration test guidelines to make sure it meets all of them. That is how we intend to do it.