Senior Software Engineer, Coffeebeans Technologies
Senior Software Developer, Cloudworx Studio
Software Engineering Associate, Amdocs
Git
Docker
Linux
Nginx
AWS (Amazon Web Services)
AWS Lambda
gRPC
Redis
InfluxDB
Kubernetes
systemd
Gorilla Web Toolkit
Go-Swagger
Flask
REST API
MQTT
Kafka
IoT
NATS
Blockchain
Fintech
Node.js
API Creation
Hey there, I'm Kiran Kumar B. I'm a senior software developer currently working at CoffeeBeans, Bengaluru. I have around 6 years of professional experience. For the initial 2 years, I worked as an associate technical instructor at a software venture that my friends and I started while we were graduating. Later, I worked with Amdocs as an associate engineer on back-end systems like an order management system. After that I joined Cloudworx, a startup, where I got exposure to a wide range of technologies and tool stacks and transformed myself into a complete back-end developer, working with several programming languages: starting with Python, then moving on to Node.js, and finally specializing in Golang. Along with programming languages, I have worked with various databases: PostgreSQL and MySQL on the relational side, and MongoDB for many of the projects that needed a Node.js-backed stack. I also have good hands-on experience building REST servers that support high user loads, and gRPC services for distributed, microservice-architecture systems that can scale robustly to larger customer bases. After Cloudworx, I have been working with CoffeeBeans, where my primary focus is building blockchain systems. I have developed expertise in Hyperledger Fabric, working with the source code itself, and we have implemented various techniques to increase the system's TPS, up to threefold over the existing figures.
I have spearheaded many chaincode development programs and am currently leading a team of 4 in improving the efficiency of the existing Hyperledger Fabric system and writing the chaincodes, the business contracts, that implement them in the applications of a very big Fintech client. Along with that, I have experience building other systems: a Google Smart Home application, Alexa systems, and even simple command-line interfaces. My expertise is mainly in back-end development, covering various programming languages and databases. I also have good knowledge of networking and of containerization using Docker, and some knowledge of Kubernetes, which we use for Docker container orchestration. I have worked with AWS and Azure as cloud providers, and I have an understanding of GCP, which I used for Google's model development. Beyond that, I understand front-end basics, HTML, CSS, and some Vue.js, which I picked up while coordinating with front-end developers, and I can coordinate with mobile application developers, having worked alongside Flutter developers. These are the various tech stacks I have worked with.
How do you implement a circuit breaker pattern in Node.js to prevent failures while interacting with blockchain systems? We can implement circuit breakers mainly by monitoring the application we are communicating with. In a blockchain system, the client communicates directly over a gRPC network; specifically, I'm referring here to the communication between a Node.js client and the Hyperledger Fabric gateway server, which is a gRPC service. The beauty of the Hyperledger Fabric SDK is that we have control over each stage of the transaction: preparing the transaction payload, submitting it for endorsement, then submitting it for ordering, which further goes to validation. Because these stages are segregated, we can look at the errors we face at each stage, and that is where I take the call to break the circuit. Apart from that, in permissioned blockchain networks like Hyperledger Fabric, whenever the system receives a request from a client, it has the identity of the requester, essentially its public key, since every data request from the client is signed by a specific user. We can use this to rate-limit the requests being sent from a given client, and hence break communication with that client if something mischievous is happening or if the request rate is exceeded.
What would be the best strategy to handle data synchronization between a Hyperledger-based private blockchain and a public Ethereum network? Hyperledger Fabric is designed with cross-network scenarios in mind, so it offers good support for synchronization between multiple networks. Since Fabric is built around pluggable modules, we can design the system to interact with an external network like Ethereum, or any other blockchain, and set up intercommunication between the networks. Hyperledger also supports inter-chain and inter-channel triggering, so the chaincode currently being executed can trigger a dedicated chaincode that is responsible for communicating with the rest of the blockchain networks. By these means, we can enable cross-chain communication in Hyperledger.
Can you outline a strategy for implementing the repository pattern in an application that communicates with both MongoDB and PostgreSQL? Yes. Usually, whenever we design a back-end application, or any other application, it communicates with a repository. A repository is essentially a database layer where we store our objects or other application data for the long run. Let's consider an example in Go. In Go, or any language with interface support, we can define the repository layer as an interface that exposes a list of functionalities: read, write, update, and delete, the basic CRUD operations. The application layer, or core layer, can then call any implementation of this interface without knowing what is implemented under the hood. In our case, we can write a PostgreSQL client that implements this interface and provides these four CRUD operations; internally, the implementation issues the specific PostgreSQL queries to talk to the database. Similarly, instead of PostgreSQL, we can come up with a MongoDB client that implements the same four functionalities defined in the interface, but under the hood talks to the MongoDB server. One very important thing to look into here is modeling the data. Say I'm writing a repository layer for one of the databases; first I will have to define the models.
Say I'm writing it for books: I have to define the model for a book, that is, the structure that has to be passed when we create one, the structure we get back when we read one, and similarly for update and delete. The internal implementation then takes care of transforming this generic model into the database-specific format. Most of the time it is similar, but with a relational database we have to respect the specific table schema. This gives the application layer the freedom to choose any database at runtime, irrespective of the under-the-hood implementation. The repository pattern then acts as the layer after the routing and the core of the application: we have routing in back-end applications that provide APIs for the front end, while non-networked applications may have only the core and repository layers. These core layers are sometimes also called business layers.
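The interface-plus-implementations shape described above can be sketched as follows. The `Book` model, `BookRepository` interface, and in-memory implementation are all illustrative names of my own; the in-memory map stands in for a real PostgreSQL or MongoDB client, which would translate the same four calls into SQL queries or driver calls.

```go
package main

import (
	"errors"
	"fmt"
)

// Book is the generic model shared by every repository implementation.
type Book struct {
	ID    string
	Title string
}

// BookRepository is the abstraction: the core/business layer depends only
// on this interface, never on PostgreSQL or MongoDB directly.
type BookRepository interface {
	Create(b Book) error
	Read(id string) (Book, error)
	Update(b Book) error
	Delete(id string) error
}

// memoryRepo stands in for a real postgresRepo or mongoRepo.
type memoryRepo struct{ store map[string]Book }

func newMemoryRepo() *memoryRepo { return &memoryRepo{store: map[string]Book{}} }

func (r *memoryRepo) Create(b Book) error { r.store[b.ID] = b; return nil }
func (r *memoryRepo) Read(id string) (Book, error) {
	b, ok := r.store[id]
	if !ok {
		return Book{}, errors.New("not found")
	}
	return b, nil
}
func (r *memoryRepo) Update(b Book) error { r.store[b.ID] = b; return nil }
func (r *memoryRepo) Delete(id string) error { delete(r.store, id); return nil }

func main() {
	// The core layer only sees the interface; swapping databases means
	// swapping this one constructor call.
	var repo BookRepository = newMemoryRepo()
	repo.Create(Book{ID: "1", Title: "Fabric in Action"})
	b, _ := repo.Read("1")
	fmt.Println(b.Title)
}
```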
What are the key considerations when designing microservices that interact with Dockerized blockchain nodes? Deploying blockchain applications in a Dockerized environment is in fact one of the most convenient ways to run them, because a single blockchain network consists of many nodes, and whether we are trying something out locally or running in production, the containerized environment gives us robustness. One of the main things to consider when designing microservices that interact with this containerized blockchain environment is the network setup that enables communication between the microservices and the blockchain network. We either set up a common network or, if the containers live in two different networks, put proxy layers in between to provide common domain names, and we have to allow communication between the containers by configuring the proper network settings and firewalls. When Docker containers talk within the same network, we usually use the container names for communication. If TLS is enabled, we then have to make sure the certificates generated for each node's identity match the container names, because those names are the domain names the nodes present; otherwise the certificates must carry entries for whatever names are actually used. Communication inside a container might happen over IP addresses and ports, but whenever one container talks to another, it goes by the container name registered in the network.
Apart from the networking concerns, we have to take care of orchestration. If we go for a container-orchestration environment like Kubernetes, we should account for parameters like clustering: two containers residing in two different clusters need extra steps, such as an external proxy like HAProxy, to talk to each other. And we should absolutely have retry mechanisms wherever communication is set up between two systems. It is entirely possible that containers go down for any number of reasons, application failures, resource constraints, or whatnot; the orchestration tools will bring those containers back to keep the network up and running. So any communication between two containers should always have retry mechanisms to keep the system robust and the communication intact.
What criteria would you use to choose between a RESTful API and a RabbitMQ implementation for a blockchain application? Let's first look at the major features and differences between the two. With a RESTful API, communication between client and server happens in a broadly synchronous manner; it is stateless, and REST already defines a set of request methods and response statuses. RabbitMQ, on the other hand, suits an event-driven architecture, where it acts mainly as a message broker between multiple systems. REST communication usually happens between two entities: the client makes a request to a server and the server sends back the response; we can put load balancers in between so that multiple server instances answer a single client. With RabbitMQ the scenario is completely different: there is a shared, buffer-like channel into which the requester pushes a data event, and at the other end, listeners pick messages off the queue, handle them, and push the response into another dedicated queue. For a blockchain application, the choice mainly depends on the part of the system we are building. If we need blockchain notifications, we can go for the RabbitMQ implementation. Say our blockchain (DLT) service sits in one place and clients sit at the other end: if an operation is very time-consuming or long-running, we can go with RabbitMQ, where the client pushes an event to the queue and waits for the response, an asynchronous communication in which it is not actively blocked waiting.
Here the DLT system, the blockchain system, processes the request, and once the transaction is executed, endorsed, ordered, validated, and finally committed into the ledger, it sends an event back to a RabbitMQ queue, which is picked up and relayed to the client. Until then, the client can simply show a "processing" status. That is not the case with RESTful APIs. REST is suitable when processing is not too long: a client makes a request, the system immediately accepts and processes it, and returns a response with a success or failure status. So it depends entirely on the use case. If the system requires immediate responses and we can respond to the end user quickly, we go with REST APIs; if the system has to be designed asynchronously because the processing is time-consuming, we go with the RabbitMQ implementation.
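The request-queue/response-queue flow described above can be modelled in a few lines of Go, using buffered channels as an in-memory stand-in for the two RabbitMQ queues. This is only a sketch of the shape of the interaction; `txRequest`, `txResult`, and the "committed" status are illustrative, and a real system would use a RabbitMQ client library with correlation IDs to match responses to requests.

```go
package main

import "fmt"

// Two channels stand in for RabbitMQ queues: one carries transaction
// requests to the DLT worker, one carries results back to the client.
type txRequest struct{ id, payload string }
type txResult struct{ id, status string }

func main() {
	requests := make(chan txRequest, 10) // "submit" queue
	results := make(chan txResult, 10)   // "response" queue

	// Worker: consumes requests, does the slow endorse/order/commit work,
	// and publishes the outcome instead of holding a connection open.
	go func() {
		for req := range requests {
			// ...long-running blockchain processing would happen here...
			results <- txResult{id: req.id, status: "committed"}
		}
	}()

	// Client: fire-and-forget publish, then pick up the result later.
	requests <- txRequest{id: "tx-100", payload: "transfer 10 tokens"}
	res := <-results
	fmt.Println(res.id, res.status)
	close(requests)
}
```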
An Ethereum smart contract function intended to transfer ERC-20 tokens between accounts is failing for some transactions. The error reported is "ERC-20: transfer amount exceeds balance". Based on the Solidity code snippet, can you determine why this might be occurring and propose what checks a developer should include to debug this issue? Here the transfer function takes an address and a value; its first line requires the sender's balance to cover the value, reverting with the "transfer amount exceeds balance" message otherwise, then it updates balances, emits a Transfer event, and returns true. So, as the code explains in its very first line of implementation, we require the balance of the sender to cover the value they intend to send to the receiver, and the failure means the sender's balance is less than the amount they want to send. As a pre-check on the client side, before the request even reaches the contract, we should absolutely check the sender's balance: at the query level, before sending the transfer request to the blockchain network, we first query the sender's balance, compare it with the intended transaction amount, and if the queried balance is less than the amount to be sent, we discard the transaction right there and tell the user the account balance is low and hence the transaction cannot be done. Along with that, we can add a few more client-level checks to verify the receiver's details.
Since the function takes the receiver's address, at the client level we can also query whether that address is a valid address on the Ethereum network and whether a transaction can be made to it; that is a pre-check we can put in place. To debug this issue in a production system, the developer should have log lines that clearly state the user's balance before actually attempting the transfer, along with the sender address, the intended receiver address, and the value to be transferred; logging these with other contextual data would help the developer debug. It is also very good practice to use a unique request/response ID, log it, and share it with the requester, so that with that identifier the requester can dig into what exactly happened, the reason for the failure, and where in the code it failed. We can also use a stack trace to know exactly where the failure happened, and monitoring tools to report the error.
During the development of Hyperledger Fabric chaincode in Go, a ledger query in the code snippet is not returning the expected result for certain keys; can you spot a potential logical issue that might be causing the mismatch in query results? Here we create a mock stub with shim.NewMockStub for the example chaincode, and then try to get something out of it by passing a particular key, call it "somekey". If the error is non-nil, there might be many reasons: the collection itself may not exist, or there may be I/O failures in the communication between the system and the caller, so we return a shim error, "failed to get state for key". If the read value is nil, the key did not exist in the world-state database at all, and we again return a shim error. But even when everything goes well and we read the value from the state database by calling GetState, we are not returning the value: if we look at the snippet, we are just printing the value, not returning it. That is the major issue. Along with that, since we are trying to fetch data belonging to one particular chaincode namespace, we should build a composite key, concatenating the collection we want to read with the key we want, so we would have to combine "examplecc" and "somekey" into a composite key and then make the query to get the expected result.
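The two fixes discussed above can be sketched in plain Go. This is an illustrative stand-in, not real shim code: the `worldState` map replaces `stub.GetState`, and the `namespace + NUL + key` concatenation only imitates what Fabric's `CreateCompositeKey` produces; the names `queryKey`, `examplecc`, and `somekey` are my own.

```go
package main

import (
	"errors"
	"fmt"
)

// worldState stands in for the Fabric state database reached via
// stub.GetState in real chaincode.
var worldState = map[string][]byte{
	"examplecc" + string(rune(0)) + "somekey": []byte("value-1"),
}

// queryKey fixes both issues: it builds a composite key (namespace +
// separator + key) and it RETURNS the value instead of only printing it.
func queryKey(namespace, key string) ([]byte, error) {
	composite := namespace + string(rune(0)) + key // simplified composite key
	val, ok := worldState[composite]
	if !ok {
		return nil, errors.New("failed to get state for key: " + key)
	}
	return val, nil // the original bug: the value was printed, never returned
}

func main() {
	v, err := queryKey("examplecc", "somekey")
	fmt.Println(string(v), err)
}
```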
The Hyperledger network supports upgrading chaincode by using the chaincode version. So here it goes: say we have a running Hyperledger Fabric network with an established chaincode inside it, and due to some business requirement or other aspect we now have to upgrade the logic this chaincode carries. First we prepare the new chaincode we want to deploy, and we package it; any one organization in the network can package the chaincode. It then has to be installed on all the peers of the different organizations in the network, so the packaged chaincode is shared with all the organizations and each installs it on its peers, which yields a unique identifier for that installed chaincode package. Then we approve the chaincode: when approving we give a particular human-readable string called the chaincode version, and apart from that there is an incremental value called the sequence number, which must be higher than the previous chaincode's. The chaincode name stays the same, but the chaincode version and the sequence number change and increment.
Once the chaincode is installed, at least the minimum number of organizations required must approve this chaincode definition with the updated chaincode version and sequence number. Once a sufficient number of organizations have approved, any one organization can trigger a commit transaction for the new chaincode version. That transaction goes into a block, and the block gets committed on all the peers; from that commitment onwards, the new chaincode version is used. That is, if the commit for the new chaincode version lands as, say, transaction number 100, then every peer, on committing that 100th transaction, starts using the updated chaincode version and discards the previous one. This almost entirely eliminates downtime and data inconsistency, because all peers switch at the same point, when they commit transaction number 100, and the new version is already pre-installed on them: the containers holding the updated chaincode are already warmed up and ready for execution, so there is no additional boot-up time. However, to change only the endorsement policy of a chaincode, we do not need to package a new chaincode at all; we can reuse the existing chaincode package, and the required organizations just have to approve the updated endorsement policy.
What considerations would you take into account when integrating machine learning algorithms with a blockchain application for predictive analysis? Whenever we design a blockchain system, we first have to understand that the DLT, the core blockchain system, is one part of a bigger business need. Let's take Hyperledger Fabric as the example here. In Fabric, a transaction is successful only after commitment, not during endorsement and not even during ordering. So we can have a machine-learning or AI-oriented system running in parallel to our DLT system: whenever a particular transaction gets committed, we can emit chaincode events, or any custom notification, which can be read by the system that feeds the ML model. This post-commit event should contain enough data about the transaction that happened, perhaps the from/to information, depending on the business logic; basically, data sufficient for the ML model to be trained on and to understand the type of transaction that occurred. That is one way, using chaincode events. Another way: any client that submits a transaction to the blockchain system, via the SDK or any other interface, can watch the status of the transaction it submitted, synchronously or asynchronously, after handing it to the orderer. Once the transaction is committed, the client knows its status: whether it was committed as valid or invalid, and if invalid, for what reason.
So the client has access to the completed transaction. The client can then package this commit status together with the initial transaction request and send it all to an ML model for further processing. This, again, involves communication between the client and the ML model, and the approach depends on the system. A system where the ML model is tightly associated with the DLT network can communicate directly with the network using chaincode events. But if we want the ML model to decide something at the client applications' end, without watching every transaction happening in the network, then we implement the ML communication at the client or SDK level: once the transaction is committed, we feed the same data into the ML model for predictive analysis.
What method would you use to manage and orchestrate multiple Docker containers running blockchain nodes in different environments? Docker containers have already helped developers, operations teams, and combined DevOps teams a great deal in managing the application lifecycle at production scale, and container-orchestration tools like Kubernetes have helped us even more in using these container environments robustly. We can define our own deployment environment and have the orchestration tools watch over the different containers running across various networks. As I mentioned while answering one of the previous questions, when node containers run in different environments, in container terms that means running in different clusters; yet even if the actual hardware hosting these containers sits in different geographical locations, they can still be configured to run in a single cluster, so that the physical network complexities are abstracted away at a lower level and the containers talk to each other as if they were on a common network. They are virtually on a common network, though not physically. Apart from orchestration tools like Kubernetes, the cloud providers offer managed container environments, for example Amazon's container-management and Kubernetes offerings, which provide the next level of functionality: we can specify the number of nodes to run, the minimum number to maintain, and how to scale up in case of increased load. These cloud environments provide those functionalities too, which helps us maintain a robust system.
We can maintain multiple containers for multiple nodes, since multiple nodes increase robustness, spinning up a sufficient number of containers for a fail-safe setup. Modern container-orchestration tools also ship with mechanisms like reverse proxies and HAProxy to set up communication between containers seamlessly, taking care of all the virtual networking. Along with that, they provide security rules over the containers, so we can block communication between certain containers if we want to, and we can keep them under a common subnet, where communication happens at layer 3 of the network stack; hence we don't see much network latency, and we can create a system that is both robust and fast.