
Engineering Lead / Head of Technology, SC Ventures
Senior Tech Lead, Paytm
Technical Lead, Rakuten India
Senior Software Engineer, IGATE Corporation
Technical Lead, UGAM Solutions
Maven, Gradle, Nexus, Jenkins, Git, NginX, Kubernetes, Docker, Apache Storm, Kafka, Zookeeper, Terraform, MySQL, Redis, Elastic Search, Hadoop, AWS, GCP

Maven, Nexus, AWS, GCP, Sonar, Terraform, REST API, Kafka, AWS EKS, AWS Lambda, AWS ECS, AWS SNS, AWS SQS, AWS Cognito, AWS Textract, AWS ELB
I carry 13 years of experience, and I have worked mainly as a backend engineer throughout my career. I started my career with Patni Computers, where I worked for three years, and then joined Ugam Solutions, where I spent four years as part of building an analytical solution backed by big data technologies such as Hadoop, Apache Storm, Kafka, and Zookeeper. I then joined Rakuten and was there for four years, where I was part of building API gateways used by many services across Rakuten, and also worked on Rakuten Music and the Rakuten Live application. At Rakuten I worked both as a senior software engineer and as a senior technical lead, so I stayed fully hands-on while also taking on some team management activities. After four years at Rakuten I joined Paytm as a senior technical lead, responsible for leading a team of 10 to 12 members handling the merchant side of the application, called RTD, which covers the merchant passbooks and settlement-related details; those were our team's responsibility. Most recently I worked as Head of Technology with SC Ventures, where we were building a product called credit scoring as a service, used by a lot of SMEs in the Asia-Pacific region.
Yeah, I think irrespective of the technology we use, the key is making each request stateless. If you build an API as a stateless system, it can be handled in a distributed environment: any request coming into the distributed service can be served by any of the instances we deploy. Making it stateless helps us deploy and horizontally scale our applications. So how do we efficiently handle sessions? Keep the application stateless and keep the state in a centralized database or a centralized caching layer. Your application instances are distributed, but the persistent storage, whether it is a cache or a database, should be a centralized system. Those stores can themselves be scaled, for example with primary/secondary or read/write replica configurations, which are handled at the database and caching level.
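A minimal sketch of that idea, assuming an Express service with an ioredis client (the endpoint, header name, and TTL below are illustrative, not from the answer): the instance keeps no session data in process memory, so any horizontally scaled replica can serve the request.

```typescript
// Stateless handler: session state lives only in a shared, centralized Redis,
// never in the local process, so any instance behind the load balancer works.
import express from "express";
import Redis from "ioredis";

const app = express();
const redis = new Redis({ host: process.env.REDIS_HOST ?? "localhost" });
const SESSION_TTL_SECONDS = 1800; // illustrative value

app.get("/profile", async (req, res) => {
  const sessionId = req.header("x-session-id");
  if (!sessionId) {
    return res.status(401).json({ error: "missing session" });
  }

  // Look the session up in the shared cache, not in local memory.
  const raw = await redis.get(`session:${sessionId}`);
  if (!raw) {
    return res.status(401).json({ error: "session expired" });
  }

  // Sliding expiry is also handled on the centralized store.
  await redis.expire(`session:${sessionId}`, SESSION_TTL_SECONDS);
  return res.json({ user: JSON.parse(raw) });
});

app.listen(3000);
```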
I think there are a couple of things here. First, check the steps and find the reason the Jenkins pipeline is failing. It may be that Jenkins never received the webhook from source control: once we commit the changes to the Git repository, the changes may not have been propagated and the webhook that triggers Jenkins may not have fired. If that is the case, fix the webhook call. Second, check whether the application is building properly. Before the build step there is usually a static code analysis stage; if we have set rules for static analysis and the failure percentage has gone above the threshold, that may be the cause of the Jenkins failure. Next is the build itself: check whether all the dependencies are being resolved in the build stage. The build may fail because of compilation issues or dependency issues; if it is a dependency problem, check whether any dependencies have expired, cannot be downloaded from the internet, or cannot be found in the local repository, and fix them. If the build is failing because of compilation issues, find and fix those. If the build succeeds, the next stage is usually deployment. If that is also part of our pipeline, check whether the required infrastructure is ready to deploy, whether we are able to start the server, whether any port configuration is missing or the port is already in use by a running application, and whether there are memory issues, and so on.
I think irrespective of the language we use, the build script and the build steps are much the same across languages, and it is similar for TypeScript. Pull the source code from the repository as soon as the webhook is triggered, bring it onto the Jenkins server, and then execute the build script: resolve all the necessary dependencies, run static code analysis, and then build the TypeScript project. It is fairly simple.
Yeah, so on the application side, for Node.js, the answer is again fairly generic irrespective of the language we use. Node.js, being asynchronous in nature, should be able to handle a large volume of concurrent requests, so the questions to keep in mind are: is my service scalable, and have I done enough to handle the large load? It also matters what type of load we are getting, whether it is more memory-bound, CPU-bound, or I/O-bound. Node.js does really well when the load is I/O-heavy, because it can handle a lot of asynchronous requests. If memory or CPU is what is causing the load, we probably need to scale horizontally with more resources: deploy multiple instances of the Node.js application, which means containerizing it and doing horizontal scaling with an orchestration tool. That should help.
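A small sketch of the scaling idea on a single machine, using Node's built-in cluster module (no external packages; the port is illustrative). The same pattern extends to multiple containers behind an orchestrator, which is what the answer refers to.

```typescript
// One worker process per CPU core, so CPU-bound work does not pile up behind a
// single event loop; I/O-bound work is already handled well by Node's async model.
import cluster from "node:cluster";
import { cpus } from "node:os";
import http from "node:http";

if (cluster.isPrimary) {
  for (let i = 0; i < cpus().length; i++) {
    cluster.fork(); // spawn one worker per core
  }
  cluster.on("exit", () => cluster.fork()); // replace a crashed worker
} else {
  http
    .createServer((_req, res) => {
      // Keep handlers non-blocking; push heavy CPU work to queues or workers.
      res.end(`handled by pid ${process.pid}\n`);
    })
    .listen(3000);
}
```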
K. It's a rectangle with number construct with the number. It's it's width and height. So it's a big class square, extends a rectangle. Okay. Uh, construct size, set to width, no width, and that is equals to same. Yeah. I think so this one would be, uh, single responsibility inwards and move. Uh, open, close. I think, uh, so this cross substitution is the one which is you know, we we cannot substitute our square class with, you know, rectangle objects. Right? So that is one thing, and it's not an extendable one as well because it, uh, it's it's violates the open close principle as well because, uh, square class cannot be extended from the rectangle because rectangle is a is a different shape and no sky is a different shape. Right? So open, close, and this goes substation. It should be, you know, 2 of the principle, I would think. Yeah.
What's wrong with that? The hedge block is being used. How do you rewrite this correctly? And the other g I. Let data what is it? What's the block? See what else? Try catch block. Used. It is oh, wait. Okay. So look at the JavaScript function. What's wrong with the cache block is being used? How do you rewrite this correctly to handle the errors in a more synchronous context? Okay. I think, uh, since we had put it as an await, you know, we the external system will be waiting. It makes up as a synchronous call. So so you know? K. No idea on this.
Handling database schema migrations in a continuous delivery system, right? I think it is genuinely difficult, because in a live environment with production traffic coming in, migrating those systems is one of the more critical operations in the industry. To handle it, I would build a two-part solution. First, we build migration scripts that move the data from the old system to the new one; we can write our own scripts or use industry-standard tools, depending on whether it is a migration across databases or a schema migration within the same database, and on what type of migration is actually required. While the migration script is running, a delta is created by the live traffic still coming in. So, second, within the application we deploy code with switch logic: based on a flag, each live request is routed either to the new database or to the old one. The flag can be based on the timestamp of the request or on the users, depending on how we have decided the migration strategy. We deploy the application with that flag so it does the proper traffic redirection. At the end of the bulk data migration, most of the data will have been migrated to the new system and the traffic will already be redirected to the new database. If any data is still left at the end, it can be handled at the database level or at the application level; for example, any writes that were not yet migrated or committed can be picked up from the database commit logs and replayed into the new system. So it is multi-level: a database migration script, traffic redirection in the application based on some strategy (user-based, location-based, or time-based), and finally the last delta reconciled directly from the database update logs.
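A sketch of the switch/flag routing described above. The interface and the names oldDb, newDb, and isMigrated are illustrative assumptions for this example, not the actual design from the answer.

```typescript
// While the migration script backfills the new schema, the application routes
// live traffic to the old or new store based on a per-record cutover flag.
interface MerchantStore {
  getSettlement(merchantId: string): Promise<unknown>;
  saveSettlement(merchantId: string, settlement: unknown): Promise<void>;
}

class MigrationRouter implements MerchantStore {
  constructor(
    private readonly oldDb: MerchantStore,
    private readonly newDb: MerchantStore,
    // Flag source: per-merchant cutover table, timestamp window, or config.
    private readonly isMigrated: (merchantId: string) => Promise<boolean>,
  ) {}

  async getSettlement(merchantId: string) {
    const target = (await this.isMigrated(merchantId)) ? this.newDb : this.oldDb;
    return target.getSettlement(merchantId);
  }

  async saveSettlement(merchantId: string, settlement: unknown) {
    // Dual-write during the cutover window keeps the delta small; the final
    // delta is reconciled from the database commit/update logs afterwards.
    await this.newDb.saveSettlement(merchantId, settlement);
    if (!(await this.isMigrated(merchantId))) {
      await this.oldDb.saveSettlement(merchantId, settlement);
    }
  }
}
```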
How do you weigh the trade-offs between using Java and Python for microservice deployment with respect to ecosystem advantages? Very good question. With Java you can write a proper enterprise-level application: it has a lot of support for that in terms of frameworks, industry use cases, and representing the problem domain as object-oriented solutions. With Java we can build very low-latency systems and scale them up to millions of requests per second. Python, on the other hand, I have personally used more for analytical purposes: text parsing, ML applications, and data-analytics kinds of workloads. It serves a different use case than building RESTful APIs at enterprise scale. Python does have some good web frameworks, and we can obviously containerize and deploy it, but it falls a bit short on some of the frameworks needed for things like authentication and for building large-scale enterprise applications, so that part is a little more difficult with Python. For applications at enterprise scale, Java would be the better choice. If I am doing text parsing, data parsing, data analytics, or formatting-style tasks, the large library ecosystem makes Python very helpful. So the two have different use cases, and if I were building a larger system I would prefer to use both: a lot of the data-analytics style applications built with Python, and a lot of the APIs built with Java and related technologies.
What techniques would you use to reduce the complexity of a large Spring Boot application? I think a large Spring Boot application should be split into smaller chunks, because a big application with a lot of interdependent injections, with the container handling all of the instance and object creation, becomes a very complex process and takes a lot of ramp-up time just to boot the application. So one straightforward approach is to break it into smaller services based on the use case and the business logic: instead of one large monolithic Spring Boot application, divide it into smaller microservices, and define the communication strategy between those microservices where required, whether that is Kafka or another queue-based mechanism, HTTP/REST, blocking or asynchronous. All of those can be implemented. I am left with one more question: I submitted the third question without the session being recorded, so I hope I get an option to record that third question again, since I really cannot go back to it. I would really appreciate your help with answering that third question.
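Returning to the decomposition idea in the answer above, here is a hedged sketch of the asynchronous, queue-based communication between the split-out services. It is written with kafkajs to keep these examples in one language, even though the services in question are Spring Boot; the broker address, topic, and group names are assumptions for illustration.

```typescript
// Two sides of an event-driven hand-off between microservices over Kafka.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "orders-service", brokers: ["localhost:9092"] });

// Producer side: the service that owns the business event publishes it and
// moves on, instead of making a blocking HTTP call into the other service.
export async function publishOrderCreated(orderId: string): Promise<void> {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "order-created",
    messages: [{ key: orderId, value: JSON.stringify({ orderId, at: Date.now() }) }],
  });
  await producer.disconnect();
}

// Consumer side: the downstream service reacts to the event on its own
// schedule, which keeps the two deployments independently scalable.
export async function startSettlementConsumer(): Promise<void> {
  const consumer = kafka.consumer({ groupId: "settlement-service" });
  await consumer.connect();
  await consumer.subscribe({ topic: "order-created", fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value?.toString() ?? "{}");
      console.log("settling order", event.orderId);
    },
  });
}
```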