Engineering Manager skilled in AI and ML technologies, with extensive experience delivering blockchain development projects. I am a Web3 Architect at The Hashgraph Group, where I design and build innovative Web3 solutions on Hedera Hashgraph, focusing on DeFi and identity. My career combines technical expertise with leadership, driving innovation in an evolving tech landscape. I am passionate about leveraging blockchain and decentralized technologies to craft transformative digital experiences.
Engineering Manager - Uptroop Ltd
Associate Manager - Accenture
Lead Blockchain Developer - Accenture
Blockchain Developer - Accenture
AWS
Azure
Kubernetes
Docker
Jenkins
Hyperledger Fabric
Microsoft Teams
Slack
Azure App Services
Integrating AI-driven learning solutions into workplace education. I focus on crafting and implementing strategies that embed advanced interactive learning tools into daily workflows, aiming to significantly enhance both employee skill development and overall performance. This initiative is geared towards reshaping traditional learning paradigms, making them more efficient and aligned with modern business needs.
Leading the blockchain engineering team and defining the technology architecture for permissioned blockchain applications. Contributing to the technical aspects of software development, such as designing and implementing algorithms and systems, and hands-on development of cloud-native solutions.
Setting goals and objectives, determining project timelines and priorities, and providing guidance and support to team members. Working with other departments and stakeholders to ensure that projects are completed on time and within budget.
Led Agile Scrum backend development teams and supported systems architecture design to build highly available, secure, robust, and scalable cloud-native, blockchain-based solutions. Also responsible for monitoring and evaluating team performance and implementing strategies to improve efficiency and productivity.
Delivered multiple client projects using an assortment of technologies: blockchain platforms (Hyperledger Fabric, R3 Corda, DAML, QLDB); programming languages (Java, Node.js, TypeScript, Go); CI/CD tooling (Jenkins, Docker, Kubernetes); cloud platforms (AWS, Azure); and queuing mechanisms such as AWS SQS, RabbitMQ, and Apache Kafka.
Built and presented PoCs and PoVs to senior management and clients in the Liquid Studio to showcase the potential application and value of blockchain/DLT-based solutions across different industries.
Role: Blockchain Architect, Associate, and Team Leader
Objective: Enhance and architect a GoQuorum-based ERC20 and ERC721 gold token trading platform for a leading commodity trading firm.
Problem Statement: Improve the platform's transaction processing speed and integrate with existing ERP and CRM systems to ensure seamless operations.
Impact:
Results:
Technologies Used: GoQuorum, AWS Lambda, SQS, Solidity (ERC20, ERC721), AWS Cloud Platform
Role: Led an 8-member blockchain backend development team.
Objective: Develop a cloud-native, event-based track and trace application.
Problem Statement: Existing product traceability processes were slow, taking up to two days for turnaround, lacking efficiency and real-time tracking capabilities.
Solution:
Impact:
Results:
Technologies Used: Hyperledger Fabric, Azure Cloud Platform, AWS Cloud Platform, PostgreSQL, Java Spring Boot, Azure Event Hubs.
Role: Blockchain Developer
Objective: Develop a secure and efficient invoice financing platform to address fraudulent activities by vendors.
Problem Statement: Vendors were exploiting the system by financing the same invoice with multiple banks.
Solution: Implemented a decentralized platform on R3 Corda to provide a single, transparent system for invoice financing, including onboarding and notification services to manage pre-approved credit limits for vendors.
Impact:
Technologies Used: R3 Corda, Kotlin, Swagger, PostgreSQL, SQS, AWS Lambda, Kubernetes.
I'm a computer science graduate from NMIMS University in Mumbai, and after graduating I joined Accenture as a blockchain developer back in 2018. I worked on multiple production-grade projects there.

The first was an anchor-led invoice financing platform built on R3 Corda, where my primary responsibilities included backend development of the onboarding and notification services as well as the entire infrastructure management for the project. Multiple financial institutions came together and formed a consortium; each institution onboarded onto the platform gave credit limits to individual vendors, and whenever an invoice was raised, the vendors could finance it through the platform and repay it after some time. This project helped reduce fraudulent activity where the same invoice was being financed by multiple financial institutions.

My next project was with a major pharmaceutical company, where we built an end-to-end pharmaceutical product track and trace application. The idea was that the end user, the patient being administered a vaccine, could scan the barcode on the vaccine and see its entire lineage, verifying that no temperature conditions had been violated. It was a truly multi-tenant architecture, with the different entities on different cloud platforms such as AWS and Azure, and we used Hyperledger Fabric to build it. I led both the backend development team and the infrastructure team on this project.

After that I worked on a grant management system built on QLDB (Quantum Ledger Database). It was more of a centralized system that offered the features of a blockchain network, but with a central party responsible, and it is currently used by over 2.2 million people.

More recently, I worked on a very high-performance CBDC implementation using R3 Corda. This project had a unique challenge: a throughput requirement of around 20,000 transactions per second. We were able to scale up the network and achieve that throughput through various performance tuning techniques. We took a cluster-based approach: we identified the notary cluster as the bottleneck, divided the network into multiple zones, and gave each zone its own notary cluster. We also used concepts like sharding and implemented features such as online wallets and both online and offline transactions. That sums up my blockchain experience.

Most recently, I have started expanding my knowledge of AI and ML. At my current company, Uptroop, we implemented a retrieval-augmented generation pipeline that ingests thousands of pages of PDFs to create personalized learning journeys for different users. We were able to reduce content creation time by almost 90%, and user engagement increased by up to 80%, which is a very high number in this industry.
Generally, a race condition occurs when multiple entities, in this case multiple Python microservices, try to update the same set of data at the same time. One way to address this is multi-version concurrency control (MVCC). Say there are two microservices, one updating a record while the other reads it. Instead of updating the data in place, the writer first creates its own new version and applies its changes there. While that is happening, if the other service reads the same data, it still sees the previously committed version, because the first service has not committed its changes yet. As soon as the first service commits, the current state of the data advances to the newer version, and all subsequent reads and writes happen on that version. That is how I would address this situation.
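To make this concrete, here is a minimal sketch of the version-checked (optimistic) update described above, written in Go since that is the language used on most of these projects; the accounts table, its version column, and the DSN are illustrative assumptions, not from a real system:

```go
package accounts

import (
	"database/sql"
	"errors"
	"fmt"

	_ "github.com/lib/pq" // Postgres driver; table and column names below are illustrative
)

var ErrConflict = errors.New("row was modified by another service, retry")

// updateBalance applies a change only if the version we read is still the
// current one; otherwise another microservice committed first and we retry.
func updateBalance(db *sql.DB, accountID string, delta int64) error {
	var balance, version int64

	// Read the committed snapshot of the row.
	err := db.QueryRow(
		`SELECT balance, version FROM accounts WHERE id = $1`, accountID,
	).Scan(&balance, &version)
	if err != nil {
		return fmt.Errorf("read account %s: %w", accountID, err)
	}

	// Write the new value as a new version, guarded by the version we read.
	res, err := db.Exec(
		`UPDATE accounts
		    SET balance = $1, version = version + 1
		  WHERE id = $2 AND version = $3`,
		balance+delta, accountID, version,
	)
	if err != nil {
		return fmt.Errorf("update account %s: %w", accountID, err)
	}
	if n, _ := res.RowsAffected(); n == 0 {
		return ErrConflict // someone else committed a newer version in between
	}
	return nil
}
```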
To implement data synchronization between a Hyperledger Fabric blockchain and an Ethereum network, I would again apply a multi-version concurrency control style of approach. When the Fabric side picks up a particular set of data to update, it does not directly overwrite the existing data. Instead, it creates a new version, applies its changes to that version, and only pushes the data at commit time, once all other tasks are complete. If any other entity or network tries to access the same information in the meantime, it keeps reading the previous version until the new version is actually committed.
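As a rough illustration of this versioned hand-off, here is a sketch of a relay loop built on the same idea; FabricReader and EthereumWriter are hypothetical interfaces standing in for the actual Fabric SDK query and Ethereum client call, which are not shown:

```go
package relay

import (
	"context"
	"log"
	"time"
)

// VersionedRecord is the unit of data mirrored from Fabric to Ethereum.
type VersionedRecord struct {
	Key     string
	Value   []byte
	Version uint64
}

// FabricReader and EthereumWriter are hypothetical stand-ins for the real
// Fabric SDK query and Ethereum contract call.
type FabricReader interface {
	Latest(ctx context.Context, key string) (VersionedRecord, error)
}
type EthereumWriter interface {
	Push(ctx context.Context, rec VersionedRecord) error
}

// relay only forwards a record when its committed version has advanced, so
// the Ethereum side never sees an uncommitted or stale intermediate state.
func relay(ctx context.Context, src FabricReader, dst EthereumWriter, key string) {
	var lastSynced uint64
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			rec, err := src.Latest(ctx, key)
			if err != nil {
				log.Printf("read %s from Fabric: %v", key, err)
				continue
			}
			if rec.Version <= lastSynced {
				continue // nothing new committed yet
			}
			if err := dst.Push(ctx, rec); err != nil {
				log.Printf("push %s to Ethereum: %v", key, err)
				continue // retry on the next tick
			}
			lastSynced = rec.Version
		}
	}
}
```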
So in order to ensure integrity and consistency,
To debug a failed transaction in a Hyperledger Fabric blockchain network, there are multiple ways to go about it. In the ledger of a Fabric network, all transactions are recorded whether they are valid or invalid, so the first option is to inspect the ledger itself. The second is to look at the logs: ideally you keep a centralized logging system where the logs for every transaction taking place in the network are collected, and you can inspect them to figure out what went wrong. You can also use tools like a block explorer, which gives you a real-time view of what is and is not making it into the blockchain network.
When choosing between a RESTful API and a RabbitMQ implementation, the most important criterion is whether I want to decouple the two architectures. Say there are two services, service 1 and service 2, and service 1 needs to communicate with service 2. If you need a synchronous response and cannot tolerate any delay, you would probably choose RESTful APIs. That comes with its own drawbacks: if service 2 is down, you might not get any response at all, and if you are trying to create data while service 2 is down, you have to implement mechanisms of your own to ensure the data is not lost. With RabbitMQ, a messaging system sits in between, so it is an asynchronous process: you do not get an immediate response and may have to wait, depending on whether service 2 is able to pick the message up and process it right away. But the data is not lost, because if service 2 has not picked it up and processed it, the message still resides on the queue; when service 2 comes back up, it still has access to all the data and can process it.
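To make the "data is not lost" point concrete, here is a minimal sketch in Go of publishing through RabbitMQ with a durable queue and persistent messages, assuming the github.com/rabbitmq/amqp091-go client; the queue name and payload are illustrative:

```go
package main

import (
	"context"
	"log"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatalf("connect to RabbitMQ: %v", err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatalf("open channel: %v", err)
	}
	defer ch.Close()

	// Durable queue: survives a broker restart.
	q, err := ch.QueueDeclare("task_queue", true, false, false, false, nil)
	if err != nil {
		log.Fatalf("declare queue: %v", err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Persistent delivery mode: the message is written to disk, so it is still
	// on the queue when service 2 comes back up and starts consuming again.
	err = ch.PublishWithContext(ctx, "", q.Name, false, false, amqp.Publishing{
		ContentType:  "text/plain",
		DeliveryMode: amqp.Persistent,
		Body:         []byte("invoice created"),
	})
	if err != nil {
		log.Fatalf("publish: %v", err)
	}
	log.Println("message queued; the consumer can process it whenever it is up")
}
```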
This code snippet has an error. In the first line we create a mock stub, and in the second line we read the state for some key. One thing I notice is that the key is hard-coded. The checks are: first, if the error is not nil, we return an error saying we failed to get the state for the key; second, if the value is nil, meaning no data was found for that key, we return "key not found"; otherwise we just print the value. After that there should be a line to return the value, so that if there is data associated with the key, the caller actually gets it.
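For reference, a minimal sketch of how the corrected lookup might read in a Go chaincode, assuming the fabric-chaincode-go shim; the function name and the idea of passing the key in as a parameter (rather than hard-coding it) follow the review above:

```go
package chaincode

import (
	"fmt"

	"github.com/hyperledger/fabric-chaincode-go/shim"
)

// readKey is the corrected version of the reviewed lookup: the key is a
// parameter instead of being hard-coded, err != nil means the ledger read
// itself failed, value == nil means the key simply does not exist, and the
// value is returned to the caller rather than only being printed.
func readKey(stub shim.ChaincodeStubInterface, key string) ([]byte, error) {
	value, err := stub.GetState(key)
	if err != nil {
		return nil, fmt.Errorf("failed to get state for key %q: %v", key, err)
	}
	if value == nil {
		return nil, fmt.Errorf("key %q not found", key)
	}
	return value, nil
}
```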
This piece of code first imports amqplib and connects to the broker at amqp://localhost. It then runs a function that creates a channel, throwing the error if channel creation fails. The queue name is defined as task_queue and the message is "Hello, World". It checks that the task_queue exists, then calls sendToQueue with the queue name and the message as a buffer, and if everything is successful it logs that the message has been sent. After that there is a setTimeout of 500 milliseconds, so every half second it tries to do the same thing again. The potential issue is that we do not need to create a new channel every single time we want to send a message; I would approach it so that the channel is created once and sendToQueue is called on it for each message.
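The snippet under review is Node.js with amqplib; as a sketch of the same channel-reuse idea in Go (assuming the github.com/rabbitmq/amqp091-go client), the connection and channel are created once up front and every send reuses them. The task_queue name, the "Hello, World" message, and the 500 ms interval come from the snippet described above; everything else is illustrative:

```go
package main

import (
	"context"
	"log"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

// publisher wraps one long-lived channel instead of opening a new one per message.
type publisher struct {
	ch    *amqp.Channel
	queue string
}

func (p *publisher) send(body string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	return p.ch.PublishWithContext(ctx, "", p.queue, false, false, amqp.Publishing{
		ContentType: "text/plain",
		Body:        []byte(body),
	})
}

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Create the channel and declare the queue once, up front.
	ch, err := conn.Channel()
	if err != nil {
		log.Fatalf("open channel: %v", err)
	}
	defer ch.Close()
	if _, err := ch.QueueDeclare("task_queue", true, false, false, false, nil); err != nil {
		log.Fatalf("declare queue: %v", err)
	}

	p := &publisher{ch: ch, queue: "task_queue"}
	for range time.Tick(500 * time.Millisecond) {
		if err := p.send("Hello, World"); err != nil {
			log.Printf("send: %v", err)
		} else {
			log.Println("sent Hello, World")
		}
	}
}
```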
To upgrade chaincode on a live Hyperledger Fabric network, you start by going to the peer and fetching the channel details, then write the new chaincode and package it. That package needs to be approved by the different endorsing organizations on the Fabric network, so the next step is to collect approval signatures from each of them for the upgrade. Once you have collected all the endorsement signatures, you go to your client library, increment the sequence number of the chaincode, bump its version, and submit the commit transaction to the orderer. The orderer then makes it part of a block and distributes it to all the validating peers.
RabbitMQ can be implemented in different ways. RabbitMQ follows AMQP, the Advanced Message Queuing Protocol. In AMQP there is a message exchange connected to different queues. When a producer wants to send a message, it first sends it to the exchange, and based on how the exchange is configured, the exchange routes it to the correct queue. Consumers are connected to these queues; they take the messages and process them. That is how RabbitMQ is implemented.

The exchange can be configured in different ways: fanout, where the exchange sends the message to all the queues it knows about; topic-based, where it sends to the queues bound to a given topic; header-based, where it routes based on whatever is specified in the message header; and finally direct routing via a binding key, where the message is sent to the queue bound with that particular key.

To ensure transaction messages are processed in order, the queue is consumed in a first-in, first-out manner, so the messages produced first are consumed first. Once messages are published in sequential order, that is the order in which the consumer picks them up. And there is no loss in a distributed blockchain environment, because if the consuming service has downtime, the messages keep accumulating in the queue, and when the service comes back up it picks them up from the queue and processes them.
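Here is a minimal sketch of the exchange-and-binding wiring described above, again assuming the github.com/rabbitmq/amqp091-go client; the exchange, queue, and binding-key names are illustrative. A direct exchange routes on an exact binding-key match, and a single consumer on the bound queue then sees the messages in first-in, first-out order:

```go
package main

import (
	"context"
	"log"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()
	ch, err := conn.Channel()
	if err != nil {
		log.Fatalf("open channel: %v", err)
	}
	defer ch.Close()

	// A direct exchange routes on an exact binding-key match; "fanout",
	// "topic" and "headers" are the other exchange types mentioned above.
	if err := ch.ExchangeDeclare("tx_exchange", "direct", true, false, false, false, nil); err != nil {
		log.Fatalf("declare exchange: %v", err)
	}
	q, err := ch.QueueDeclare("tx_queue", true, false, false, false, nil)
	if err != nil {
		log.Fatalf("declare queue: %v", err)
	}
	// Bind the queue to the exchange with the key producers will publish on.
	if err := ch.QueueBind(q.Name, "blockchain.tx", "tx_exchange", false, nil); err != nil {
		log.Fatalf("bind queue: %v", err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Messages published in order land on tx_queue in order; a single consumer
	// on this queue then processes them first-in, first-out.
	err = ch.PublishWithContext(ctx, "tx_exchange", "blockchain.tx", false, false, amqp.Publishing{
		ContentType:  "application/json",
		DeliveryMode: amqp.Persistent,
		Body:         []byte(`{"txId":"0001"}`),
	})
	if err != nil {
		log.Fatalf("publish: %v", err)
	}
}
```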
If I needed to perform a complex join of data from PostgreSQL and MongoDB for a blockchain application, I would probably use an AWS Lambda function. The Lambda function would hold the query logic and execute it; once it gets the results back from the databases, it converts them into a format that can be consumed by the blockchain application. Blockchain systems are generally considered slow compared to traditional systems, so I would not want to put the additional burden of this complex join, or any other transactional activity, on the blockchain application itself; I would rather use a separate service. We could even use an ETL tool like AWS Glue, but I think a service like AWS Lambda would be the best fit here.
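A compressed sketch of what that Lambda might look like, assuming the aws-lambda-go runtime, the lib/pq PostgreSQL driver, and the official MongoDB Go driver; the table, collection, and field names are purely illustrative and not from a real project:

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"os"

	"github.com/aws/aws-lambda-go/lambda"
	_ "github.com/lib/pq"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// InvoicePayload is the blockchain-friendly shape handed off after the join.
type InvoicePayload struct {
	InvoiceID string  `json:"invoiceId"`
	Amount    float64 `json:"amount"`
	VendorKYC string  `json:"vendorKyc"`
}

func handler(ctx context.Context, invoiceID string) (InvoicePayload, error) {
	out := InvoicePayload{InvoiceID: invoiceID}

	// Relational side: invoice amount from PostgreSQL.
	pg, err := sql.Open("postgres", os.Getenv("POSTGRES_DSN"))
	if err != nil {
		return out, fmt.Errorf("open postgres: %w", err)
	}
	defer pg.Close()
	if err := pg.QueryRowContext(ctx,
		`SELECT amount FROM invoices WHERE id = $1`, invoiceID,
	).Scan(&out.Amount); err != nil {
		return out, fmt.Errorf("query postgres: %w", err)
	}

	// Document side: vendor KYC record from MongoDB.
	mc, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv("MONGO_URI")))
	if err != nil {
		return out, fmt.Errorf("connect mongo: %w", err)
	}
	defer mc.Disconnect(ctx)
	var doc struct {
		KYCStatus string `bson:"kycStatus"`
	}
	if err := mc.Database("vendors").Collection("kyc").
		FindOne(ctx, bson.M{"invoiceId": invoiceID}).Decode(&doc); err != nil {
		return out, fmt.Errorf("query mongo: %w", err)
	}
	out.VendorKYC = doc.KYCStatus

	// The joined payload is what gets submitted to the blockchain application,
	// keeping the heavy lookups off the chain itself.
	return out, nil
}

func main() {
	lambda.Start(handler)
}
```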