Sr Staff Engineer, Elpha Secure
Software Architect, Reliance Jio Digital Health
Senior Software Engineer, Uber
Software Architect, Picsean Media
Lead Fullstack SDE, Bankbazaar
Senior Software Development Engineer, Microsoft
Lead Software Engineer, Samsung
SDE Intern, Intel India Pvt Ltd
Kafka
Github
Hive
Spark
HDFS
Docker
Golang
Java
Jenkins
Amazon S3
MS Azure
Redis Cache
Azure DevOps
Azure Datafactory
Azure Synapse
PowerBI
Git
JSON
YAML
Azure
Spring Boot
Hibernate
MySQL
AWS
Bitbucket
ffmpeg
Valgrind
Coverity
Perl
Unix
HTML
CSS
Cassandra
Vision:
A platform through which health and wellness programs are configured for users and corporates.
Track users' health vitals, activities, and nutrition intake.
Capability to set individual user health goals and reconcile them against the enrolled program.
Nudge users about their health conditions/performance and about scheduled/missed consultations.
Provide specialised doctor consultations for each program to help enrolled users achieve their health goals.
Complexities Identified:
User nutrition intake computation is a complex problem to solve, with many SME dependencies.
Specialised doctor consultation support for each program is another complex problem to solve.
Health Data Provider Onboarding Platform Vision:
To onboard any health data provider within 24 to 48 hours, depending on the health data type.
To onboard all pharmacies, clinics, hospitals, and labs available in the country.
To sync all health data from the data providers.
To enable providers' business workflows from the system, such as searching/booking consultations and medicines.
Complexities Identified:
Integration of providers' business workflows into our system.
Integration pipeline to support health data ingestion from different health data providers and data sharing/migration between providers.
Hi, I'm Amit Kumar, and I have 13-plus years of experience in software development and design. I have been working at Reliance Digital Health for more than three years, and here I designed three major platforms. My key roles and responsibilities are high-level system design, interacting with stakeholders, gathering their raw requirements, and refining them into the engineering tasks and features we will be developing. I also mentor junior and other engineers, including software engineers and senior software engineers, and help them grow. Beyond that, I work on the LLD and code reviews, and I design the systems from a high-level perspective. On the tech side, I work with Java, Golang, and microservices. The major platforms I designed from scratch are completely event-driven architectures; we use Kafka heavily for asynchronous communication between the services and to scale the services and systems. Apart from that, we use MongoDB as a NoSQL DB, because the data is not fixed schema-wise, it is quite variable in schema. So overall, my background is 13-plus years of experience, strong knowledge of system design, designing systems from scratch, and three major platforms completed from scratch in my previous company. That's all about my background.
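To give a flavour of that event-driven communication, here is a minimal Kafka producer sketch in Java; the topic name, key, and payload are illustrative assumptions rather than details of the actual platforms.

```java
// Minimal sketch: one service publishes a domain event to a Kafka topic so that
// other services can consume it asynchronously and independently.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class HealthEventPublisher {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keyed by user ID so all events for one user land on the same partition (ordering).
            producer.send(new ProducerRecord<>("user-vitals-recorded", "user-42",
                    "{\"type\":\"HEART_RATE\",\"value\":72}"));
            producer.flush();
        }
    }
}
```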
Yeah. So to predict payment fraud in real time, we can build a data streaming pipeline, and Apache Flink is a good framework for identifying and capturing patterns in payment-specific orders or events. We need real-time processing of the payment information, and at scale, so a real-time data streaming and processing approach fits here. With Flink we can identify fraud in near real time based on scenarios like pattern matching or any other signals we already have about fraud. The processed data can then also be used to train ML/AI models to predict fraud in a near-real-time system. I'm assuming here that fraud detection should take place in near real time; it's not that the fraud happens and then we try to work out afterwards what was done. The best system captures it as soon as the fraud happens, or at an early stage, so that we can take immediate action on the fraudulent payment. On top of that, we can use user profiling data as a reference, matching against their payment and credit histories: a reference system that already holds payment information about the users and how their payments are usually made, so that if we see an abnormality in the real-time payment data against that profile, we can flag it as fraud. Yeah, I think that's the approach.
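To make that concrete, here is a minimal sketch of the kind of pattern check I have in mind, assuming payments arrive as a Flink keyed stream; the PaymentEvent class, the amount threshold, and the 60-second window are illustrative assumptions, not requirements from the question.

```java
// Minimal sketch: key payments by card and flag two high-value payments within 60 seconds.
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class FraudPatternJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(
                new PaymentEvent("card-1", 12_000.0, 1_000L),
                new PaymentEvent("card-1", 15_000.0, 20_000L),
                new PaymentEvent("card-2", 50.0, 30_000L))
           .keyBy(event -> event.cardId)
           .process(new HighValueBurstDetector())
           .print(); // in a real pipeline this would feed an alerting sink / ML feature store

        env.execute("fraud-pattern-demo");
    }

    /** Flags a card when two payments above a threshold arrive within 60 seconds. */
    static class HighValueBurstDetector
            extends KeyedProcessFunction<String, PaymentEvent, String> {

        private static final double THRESHOLD = 10_000.0;
        private transient ValueState<Long> lastHighValueTs;

        @Override
        public void open(Configuration parameters) {
            lastHighValueTs = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("last-high-value-ts", Long.class));
        }

        @Override
        public void processElement(PaymentEvent event, Context ctx, Collector<String> out)
                throws Exception {
            if (event.amount < THRESHOLD) {
                return;
            }
            Long previous = lastHighValueTs.value();
            if (previous != null && event.eventTimeMs - previous < 60_000L) {
                out.collect("possible fraud on " + event.cardId
                        + ": two high-value payments within 60s");
            }
            lastHighValueTs.update(event.eventTimeMs);
        }
    }

    public static class PaymentEvent {
        public String cardId;
        public double amount;
        public long eventTimeMs;
        public PaymentEvent() {}
        public PaymentEvent(String cardId, double amount, long eventTimeMs) {
            this.cardId = cardId; this.amount = amount; this.eventTimeMs = eventTimeMs;
        }
    }
}
```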
How can you design a secure communication protocol between microservices? Between microservices we can use secure connections like HTTPS, which is one level of security since the connection itself is already secured. On top of that, if we want an additional layer of security, we can apply full data encryption between the systems, so the services exchange encrypted payloads over the HTTPS connection and the microservices work on that encrypted data. The encryption keys should be maintained in a secure key store in the cloud; Azure Key Vault is one of the best solutions for that. So, as a secure communication protocol: HTTPS, which I already mentioned, gives us the SSL/TLS certificate layer and secures the connection itself, and all data should additionally be encrypted end to end, not only between the microservices but also from any client-facing application sending data to the back-end services. The encryption and decryption keys should be stored in the secure vault. That is how we can make communication between the services in a payment ecosystem secure.
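As a rough illustration of that second layer, here is a minimal sketch of application-level payload encryption on top of HTTPS, assuming AES-256-GCM and a key generated locally for the demo; in a real deployment the key would be loaded from a secure store such as Azure Key Vault rather than created in the code.

```java
// Minimal sketch: encrypt a payload with AES-GCM, prepend the IV, and decrypt it again.
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;

public class PayloadCrypto {

    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;

    // Encrypts the payload and prepends the random IV so the receiver can decrypt.
    static byte[] encrypt(SecretKey key, String payload) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        byte[] ciphertext = cipher.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        byte[] message = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, message, 0, iv.length);
        System.arraycopy(ciphertext, 0, message, iv.length, ciphertext.length);
        return message;
    }

    static String decrypt(SecretKey key, byte[] message) throws Exception {
        byte[] iv = Arrays.copyOfRange(message, 0, IV_BYTES);
        byte[] ciphertext = Arrays.copyOfRange(message, IV_BYTES, message.length);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator generator = KeyGenerator.getInstance("AES");
        generator.init(256); // in production: load this key from the vault instead
        SecretKey key = generator.generateKey();

        byte[] wire = encrypt(key, "{\"orderId\":\"ord-42\",\"amount\":199.0}");
        System.out.println(decrypt(key, wire)); // round-trips the payload
    }
}
```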
In what ways would you manage team knowledge sharing, branching, and consistent code quality across various payment systems? I would definitely have a CI/CD pipeline in place for continuous integration and continuous deployment. On the consistent code quality side, we can set standards: proper naming conventions and a defined protocol for writing code, where functions, variables, and methods are properly commented, roughly 40 to 60% of the code documented, so the code is readable and clearly describes what each function does. Code should be modular, not one very long file, and we should extract common components into shared libraries; if a function is already exposed through a standard library, we should not rewrite it, since using standard libraries makes development faster. We should avoid code repetition and have proper test cases written, with code coverage above 80%, ideally around 90%. These standards can be shared across the team to keep code quality consistent. I would also use a monorepo rather than having all the services hanging separately: all the code and services live in a single repository with their dependencies, so when someone writes code in that monorepo style, the shared components are easy to reuse across teams. That is one of the best ways to keep the code consistent across the team.
What steps would you take to ensure a new feature in a payments product aligns with both technical and business objectives? The steps are: get a proper sign-off from the product team on the expected outcome of the feature we are building; have a proper test plan stating what the expected outcome of that feature is; and run proper UAT testing against the feature we build. The stakeholders should give us the plan for what the expected output of the payment product is, and we reconcile the engineering outcome against that business objective. On the technical front, we make sure we have proper UAT, proper integration tests, and proper unit tests written, and if there is a testing team, the testing scenarios from the stakeholders should be validated through them so we can identify any unexpected output. We should also have proper infrastructure for deploying the product to production; I would suggest a proper CI/CD pipeline, so that as soon as the feature is tested and the code is submitted, an automated Jenkins build compiles it, runs the unit and integration test cases, checks that the expected outputs are met, and finally promotes the code to production. These are the steps I would take.
Imagine consolidating several payment services into a unified platform. To minimize the risk, the payment services should have proper authentication in place; in fact, every service-to-service communication should have proper authorization and authentication. We should not log any PII anywhere in the system or in the logs, and we should always follow an encrypted communication protocol such as HTTPS, with encryption between the services. Nowhere should we expose PII, passwords, or usernames, and we should avoid logging any payment-specific information to the console or to whatever logging system we use. We have to be very careful that PII is not easily trackable or guessable anywhere in the unified platform. All communication should go over secure, properly encrypted protocols, and we should never log secure keys: if we are using encryption keys for secure connections, those keys must not appear in the console, the logs, or the configuration. Everything should sit behind a secure API gateway with proper whitelisting of the APIs; no API endpoint should be exposed directly from the services, and we should have proper rate limiting and proper whitelisting of requests in the system. These are the steps I would follow.
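As a small illustration of the "never log PII" point, here is a minimal sketch of a log-masking helper; the regex and the example message are assumptions for the demo, not details of the platform above.

```java
// Minimal sketch: mask card numbers before any log statement so raw PII never reaches logs.
import java.util.logging.Logger;
import java.util.regex.Pattern;

public class SafeLogging {

    private static final Logger logger = Logger.getLogger(SafeLogging.class.getName());

    // Matches 13-19 consecutive digits (a naive card-number pattern for the demo).
    private static final Pattern CARD_NUMBER = Pattern.compile("\\b\\d{13,19}\\b");

    // Replaces all but the last four digits so operators can still correlate events.
    static String maskPii(String message) {
        return CARD_NUMBER.matcher(message).replaceAll(match -> {
            String digits = match.group();
            return "****" + digits.substring(digits.length() - 4);
        });
    }

    public static void main(String[] args) {
        String raw = "payment failed for card 4111111111111111, retrying";
        logger.info(maskPii(raw)); // logs: payment failed for card ****1111, retrying
    }
}
```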
Looking at this program, can you spot the technical error that would prevent it from correctly determining whether a number is prime? So, consider the case where the number is 2, right? So the logic... yeah.
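The snippet under discussion isn't reproduced in this transcript, so the following is only a hypothetical sketch of the kind of error the n = 2 case usually exposes, a loop bound that lets the number divide itself, together with a corrected check.

```java
// Hypothetical illustration (the original interview snippet is not shown here).
public class PrimeCheck {

    // Buggy variant of the kind of logic discussed above.
    static boolean isPrimeBuggy(int n) {
        for (int i = 2; i <= n; i++) {      // includes i == n, so n % i == 0 always fires
            if (n % i == 0) return false;   // 2 is incorrectly reported as composite
        }
        return true;                        // also wrongly returns true for 0 and 1
    }

    // Corrected version.
    static boolean isPrime(int n) {
        if (n < 2) return false;            // 0, 1, and negatives are not prime
        for (int i = 2; (long) i * i <= n; i++) {
            if (n % i == 0) return false;
        }
        return true;                        // 2 falls through the loop and is prime
    }

    public static void main(String[] args) {
        System.out.println(isPrimeBuggy(2)); // false -- the bug
        System.out.println(isPrime(2));      // true
    }
}
```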
So here the logger is initialized as null. In processTransaction it is not guaranteed that the logger has been set anywhere; we are not passing anything into it, and it looks like a local field. So when this code executes, the logger being null will throw an exception (a NullPointerException), so it should be initialized properly.
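The original snippet isn't included in the transcript either, so this is a hypothetical reconstruction of the issue being described, a logger field left null and then dereferenced in processTransaction, together with the straightforward fix of initializing it.

```java
// Hypothetical reconstruction: a null logger field causes a NullPointerException on first use.
import java.util.logging.Logger;

public class PaymentProcessor {

    // Bug: declared but never assigned, so it stays null.
    // private Logger logger = null;

    // Fix: initialize the logger eagerly when the class is loaded.
    private static final Logger logger = Logger.getLogger(PaymentProcessor.class.getName());

    public void processTransaction(String transactionId, double amount) {
        // With the null field, this line would throw a NullPointerException.
        logger.info("Processing transaction " + transactionId + " for amount " + amount);
        // ... actual processing ...
    }

    public static void main(String[] args) {
        new PaymentProcessor().processTransaction("txn-123", 49.99);
    }
}
```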
Yeah. So we are doing payment processing and we want to dynamically adapt the payment-processing thresholds based on real-time analytics. Somewhere we have those real-time analytics, and based on them we want to adjust the thresholds dynamically, so we definitely need a proper real-time streaming data pipeline in place. I would suggest Apache Flink here; it is one of the best real-time data processing frameworks and it can be used to adapt the payment-processing thresholds. If, based on real-time analytics, we want to set some conditions, capture or analyze a pattern, or take action on patterns that are already configured, and those can change over time, then we need real-time data processing on the payment system so that we can change the behaviour of the pipeline. So I would use Flink as the data processing pipeline: it works on the real-time data, analyzes it, and adopting or adjusting a payment-processing threshold is easy with Flink. It is also distributed and works at scale, so we can set thresholds based on the real-time analytics data and have them applied inside the processing pipeline through Flink itself.
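A minimal sketch of how that could look with Flink's broadcast state, assuming a payments stream plus a separate stream of threshold updates coming from the analytics side; the class names, demo values, and default threshold are illustrative assumptions.

```java
// Minimal sketch: threshold updates are broadcast to all parallel payment processors.
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class DynamicThresholdJob {

    // Broadcast state: threshold name -> current value, updated at runtime.
    static final MapStateDescriptor<String, Double> THRESHOLDS =
            new MapStateDescriptor<>("thresholds", String.class, Double.class);

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<PaymentEvent> payments = env.fromElements(
                new PaymentEvent("txn-1", 120.0), new PaymentEvent("txn-2", 15_000.0));
        DataStream<Double> thresholdUpdates = env.fromElements(10_000.0, 5_000.0);

        BroadcastStream<Double> broadcastThresholds = thresholdUpdates.broadcast(THRESHOLDS);

        payments.connect(broadcastThresholds)
                .process(new BroadcastProcessFunction<PaymentEvent, Double, String>() {
                    @Override
                    public void processElement(PaymentEvent event, ReadOnlyContext ctx,
                                               Collector<String> out) throws Exception {
                        Double limit = ctx.getBroadcastState(THRESHOLDS).get("high-value");
                        double effective = limit != null ? limit : 10_000.0; // default
                        if (event.amount > effective) {
                            out.collect("flag " + event.id + " above threshold " + effective);
                        }
                    }

                    @Override
                    public void processBroadcastElement(Double newLimit, Context ctx,
                                                        Collector<String> out) throws Exception {
                        // Analytics-driven update applied to all parallel instances.
                        ctx.getBroadcastState(THRESHOLDS).put("high-value", newLimit);
                    }
                })
                .print();

        env.execute("dynamic-threshold-demo");
    }

    public static class PaymentEvent {
        public String id;
        public double amount;
        public PaymentEvent() {}
        public PaymentEvent(String id, double amount) { this.id = id; this.amount = amount; }
    }
}
```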
So basically, to handle backward compatibility, we can have API versioning in place. We can also configure proxies, or the API gateway itself, so that in a failure scenario, if a request against the new or updated API fails, it is routed back to the older API. So with versioning, plus proxies or API gateway configuration that routes failed requests to the older system or the older API, we can handle backward compatibility.
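A minimal sketch of the versioning part using Spring Boot, which is already in the stack listed above; the paths and response fields are illustrative assumptions, and a gateway or proxy in front would handle routing clients between the two versions.

```java
// Minimal sketch: the old contract stays available under /v1 while /v2 evolves,
// so existing clients keep working.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import java.util.Map;

@RestController
public class PaymentStatusController {

    // v1: original response shape, kept unchanged for existing consumers.
    @GetMapping("/v1/payments/{id}/status")
    public Map<String, Object> statusV1(@PathVariable String id) {
        return Map.of("id", id, "status", "SETTLED");
    }

    // v2: new response shape; a gateway can route clients here and fall back to v1 on failure.
    @GetMapping("/v2/payments/{id}/status")
    public Map<String, Object> statusV2(@PathVariable String id) {
        return Map.of("id", id, "state", "SETTLED", "settledAt", "2024-01-01T00:00:00Z");
    }
}
```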
What would be your design? I would suggest having a source-of-truth system in place: as soon as data enters the platform, we log it there. Then, after processing the final transactions, we keep the source of truth as one database and the final transaction storage as another. On top of that we can build a platform that reconciles on a schedule; we can create buckets, per minute, hourly, daily, weekly, or monthly, and reconcile each bucket between the final transaction storage and the source-of-truth storage. That way we can implement a scalable payment reconciliation system.
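A minimal sketch of reconciling one such bucket, with in-memory maps standing in for the two stores; the transaction IDs and amounts are made-up demo data.

```java
// Minimal sketch: compare one time bucket between the source-of-truth store and the
// final transaction store and report anything missing or mismatched.
import java.util.HashSet;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

public class ReconciliationJob {

    // Reports transaction IDs that are missing on either side or whose amounts differ.
    static void reconcileBucket(Map<String, Double> sourceOfTruth,
                                Map<String, Double> finalStore) {
        Set<String> allIds = new HashSet<>(sourceOfTruth.keySet());
        allIds.addAll(finalStore.keySet());

        for (String id : allIds) {
            Double expected = sourceOfTruth.get(id);
            Double actual = finalStore.get(id);
            if (expected == null) {
                System.out.println(id + ": present in final store but not in source of truth");
            } else if (actual == null) {
                System.out.println(id + ": recorded at entry but never settled");
            } else if (!Objects.equals(expected, actual)) {
                System.out.println(id + ": amount mismatch " + expected + " vs " + actual);
            }
        }
    }

    public static void main(String[] args) {
        // One hourly bucket's worth of data from each store.
        Map<String, Double> sourceOfTruth = Map.of("txn-1", 100.0, "txn-2", 250.0, "txn-3", 80.0);
        Map<String, Double> finalStore = Map.of("txn-1", 100.0, "txn-2", 260.0);
        reconcileBucket(sourceOfTruth, finalStore);
    }
}
```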