Principal Software Developer, Aptos India Pvt. Ltd.
Solution Architect, Zensar Ltd.
Software Developer, Rise and Shine Printech Pvt. Ltd.
Git
Agile Methodologies
NewRelic
Jira
SonarQube
Jenkins
Okay, could you help me understand more about your background by giving a brief introduction about yourself? Okay. My name is Piyush Jaiswal. I have around 12 years of experience, and for the past seven years I have been working with Node.js, React.js, TypeScript, and microservices. At Aptos India I was leading the fiscalization lane. Fiscalization is the feature needed when you want to launch the product in different countries: you have to follow certain rules and regulations specific to each country. Aptos is a product organization in the retail domain, and in retail all sales and return transactions need to be submitted to the government for tax verification; when I say government, I mean via the tax authorities. I have completed integrations for Peru, Colombia, Costa Rica, Italy, Germany, Poland, and Portugal, which are some of the countries I have fiscalized, and I have also worked on fiscalization for Japan. Prior to Aptos I was with Zensar, working for a client called HS Blocks, and prior to that with INVShare. Before joining Zensar, I was part of an organization called Rise and Shine Printech, where I worked on an e-commerce website for a print management system. My initial five years were on .NET; at Zensar I got the opportunity to work on Node.js for HS Blocks, and from there my journey with Node.js and React.js started.
What criteria would you use to decide between deploying a Node.js application and...? Okay, so I'm really not sure what the criteria need to be based on, because I have only worked on AWS. On AWS we have worked with AWS Lambda, although AWS also supports EC2 and Fargate. Since we have worked mostly on microservices, and basically on a serverless model, we decided to go that way because Lambda charges based on computation. So the decision was based on computation cost, flexibility, and the number of services AWS provides, for example S3, Firehose, EventBridge, SNS, and SQS; there are more than 200 services AWS offers. So depending on performance and the availability of that range of services, I would go with AWS. I'm not sure about Azure and GCP; I have worked only on AWS, so that looks pretty good to me as of now.
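For illustration, a minimal sketch of the kind of Lambda handler such a serverless microservice might expose, assuming an API Gateway proxy integration (the event shape, handler name, and field names are illustrative, not from the discussion above):

```typescript
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Minimal handler sketch: Lambda bills per invocation and compute time,
// which is the "charge based on computation" point made above.
export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const orderId = event.pathParameters?.orderId;

  if (!orderId) {
    return {
      statusCode: 400,
      body: JSON.stringify({ message: "orderId is required" }),
    };
  }

  // Business logic would go here (e.g. look up the sale transaction).
  return {
    statusCode: 200,
    body: JSON.stringify({ orderId, status: "ok" }),
  };
};
```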
The process of setting up a CI/CD pipeline for a cloud-based Node.js application. I'm not sure I fully understand the question, but if you're asking how we set up a CI/CD pipeline, there are certain steps we follow within Jenkins. What I understand is: checking out the code, building it, and then running the unit tests. Once the build passes and the unit tests pass, it publishes. Before publishing there are some additional steps to follow, but I would really need to check Jenkins; we did this as a one-time setup and I don't remember the steps exactly.
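As a rough illustration of the steps described (checkout, build, unit tests, publish), a minimal declarative Jenkinsfile for a Node.js service could look like the following; the stage names and npm scripts are assumptions, not taken from the actual setup:

```groovy
pipeline {
  agent any
  stages {
    stage('Checkout') {
      steps { checkout scm }
    }
    stage('Install & Build') {
      steps {
        sh 'npm ci'
        sh 'npm run build'
      }
    }
    stage('Unit Tests') {
      steps { sh 'npm test' }
    }
    stage('Publish') {
      when { branch 'master' }
      steps {
        // The publish step is deployment-specific (pushing an artifact,
        // deploying the Lambda package, etc.); shown here as a placeholder.
        sh 'npm run deploy'
      }
    }
  }
}
```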
How would you handle branching strategy in Git for parallel feature development and hotfixes? Okay. For parallel feature development, the way we have worked is that we had one master branch, and every change gets pushed into a specific branch named after the Jira ticket number; it gets reviewed, approved, and merged into master. Once we have a release, we tag it with the release version, and that release version is branched out. Hotfixes are then based on the release-version branch that was shipped to the customer: if there are bugs in that release which need to be fixed, we check out that branch and fix the code there, so the fix goes into the release branch along with the master branch rather than only on master. A sketch of this flow is shown below.
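The branch and tag names here are illustrative; the actual convention would follow the Jira ticket number and release version:

```sh
# Feature work: one branch per Jira ticket, merged to master after review
git checkout -b JIRA-1234 master
git push origin JIRA-1234          # reviewed, approved, merged via pull request

# Release: tag master with the release version and branch it out
git checkout master
git tag v2.5.0
git checkout -b release/2.5.0 v2.5.0

# Hotfix: fix on the release branch, then bring the fix back to master
git checkout -b hotfix/JIRA-1300 release/2.5.0
# ...commit the fix on the hotfix branch...
git checkout master
git cherry-pick <hotfix-commit-sha>
```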
How would you introduce a new technology or library into the team's Node.js tech stack without disrupting the current workflow? A new technology or library generally gets introduced with a new feature. But if I have to work on an existing feature, I would create another version. Let's talk about a microservice: say it has four endpoints, the four routes GET, POST, PATCH, and DELETE. If I want to introduce a new technology or a new library, what I would do is create a new version for a particular endpoint. I'd start with the GET endpoint, though it depends on what we are trying to do with the library. So I would have endpoint versions, like v1 and v2: v1 remains on the existing code, and the new v2 version is where the experiment is done. But I wouldn't do that on live code directly; I would first do a POC to check the flexibility, feasibility, and performance of the new library. It's very hard to introduce a new library into existing code which is working just fine, so we need a POC, and only once the POC is successful can we introduce the new library into that particular piece of the application.
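A minimal sketch of the versioned-endpoint idea, assuming an Express service; the route paths and lookup functions are hypothetical stand-ins for the old and new implementations:

```typescript
import express from "express";

const app = express();

// v1 keeps the existing, proven implementation untouched.
app.get("/v1/orders/:id", async (req, res) => {
  const order = await legacyOrderLookup(req.params.id);
  res.json(order);
});

// v2 is where the new library/technology is trialled after a successful POC;
// callers opt in explicitly, so the current workflow is not disrupted.
app.get("/v2/orders/:id", async (req, res) => {
  const order = await newLibraryOrderLookup(req.params.id);
  res.json(order);
});

// Hypothetical implementations standing in for the old and new code paths.
async function legacyOrderLookup(id: string) {
  return { id, source: "v1" };
}
async function newLibraryOrderLookup(id: string) {
  return { id, source: "v2" };
}

app.listen(3000);
```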
High-performance background jobs in a Node.js application. Basically, there are a few ways. You can have a cluster: clusters can be set up at the infrastructure level, or at the code level. You can have a child process in place, and you can have worker threads in place. It depends on the requirement, what exactly we are trying to achieve with the background jobs. If we are talking about a job where I have to constantly update the main thread about the progress, I would go with a worker thread. If I'm talking about a child process, the child process has a parent and child, where the parent keeps track: the two are running entirely separately, but the parent knows if the child has crashed, so that it can restart it. Again, it depends on what we are trying to achieve by creating the child process; if it is just a task of running a background job, it can be achieved through a worker thread.
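A small sketch of the worker-thread option using Node's built-in worker_threads module; the job itself is a placeholder loop, and the sketch assumes the file is compiled to CommonJS so __filename resolves to the built .js file:

```typescript
import { Worker, isMainThread, parentPort, workerData } from "worker_threads";

if (isMainThread) {
  // Main thread: spawn the background job and listen for progress updates.
  const worker = new Worker(__filename, { workerData: { items: 1_000_000 } });
  worker.on("message", (msg) => console.log("progress:", msg));
  worker.on("error", (err) => console.error("job failed:", err));
  worker.on("exit", (code) => console.log("job finished with code", code));
} else {
  // Worker thread: do the heavy work and report progress to the main thread.
  const { items } = workerData as { items: number };
  for (let done = 0; done < items; done++) {
    if (done % 100_000 === 0) {
      parentPort?.postMessage({ done, total: items });
    }
  }
  parentPort?.postMessage({ done: items, total: items });
}
```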
Given this TypeScript code sample, which SOLID principle is being violated, and how might you refactor it to adhere to the principle? So SOLID: single responsibility, open/closed, Liskov substitution, interface segregation. I'd say this is violating interface segregation: why does the Square class have to extend Rectangle? Here Rectangle has a constructor with setters and an area method, that is fine, but Square is extending Rectangle and calling super. Why are you calling super? That does not look good; it is far too tightly coupled. So this touches single responsibility and also substitution, but I believe the two being violated are single responsibility and interface segregation. The way I would fix it is to create a separate class for calculating the area, something like an area-calculation class, and then create a Rectangle class and a Square class that work with that area-calculation class, passing in the height and width. That way it follows something like the abstract factory design pattern while keeping SOLID principles such as single responsibility and interface segregation in place.
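The code sample itself is not reproduced in the transcript, but assuming it is the classic Rectangle/Square inheritance example, one common refactor is to stop making Square extend Rectangle and instead have both implement a shared shape abstraction that owns its own area calculation:

```typescript
// Shared abstraction: each shape is responsible only for its own area,
// and Square no longer inherits Rectangle's width/height contract.
interface Shape {
  area(): number;
}

class Rectangle implements Shape {
  constructor(private width: number, private height: number) {}
  area(): number {
    return this.width * this.height;
  }
}

class Square implements Shape {
  constructor(private side: number) {}
  area(): number {
    return this.side * this.side;
  }
}

// Callers depend only on the Shape interface, so any Shape can be
// substituted without surprising behaviour.
function totalArea(shapes: Shape[]): number {
  return shapes.reduce((sum, s) => sum + s.area(), 0);
}

console.log(totalArea([new Rectangle(2, 3), new Square(4)])); // 22
```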
Yeah, what's wrong with how the catch is being used? How would you rewrite this to correctly handle errors in an asynchronous context? Okay, there's an async function fetchData with let response = await fetch. You're not returning anything in this one, but that's just the try block. Then catch (error): why are you only logging the error to the console? You need to throw the error here. We should be throwing the error, not consoling it; why are you consoling the error? Otherwise it effectively goes unhandled even though you're catching it. Beyond that I'm not really sure what exactly you're looking for.
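The snippet being discussed isn't shown in the transcript, but a sketch of the corrected pattern, rethrowing instead of only logging, might look like this (the URL and function name are placeholders):

```typescript
async function fetchData(url: string): Promise<unknown> {
  try {
    const response = await fetch(url);
    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }
    // Return the parsed body so callers actually receive the data.
    return await response.json();
  } catch (error) {
    // Log for observability, then rethrow so the caller can decide how to
    // handle the failure instead of it being silently swallowed.
    console.error("fetchData failed:", error);
    throw error;
  }
}

// Caller handles (or propagates) the rejection explicitly.
fetchData("https://example.com/api/items").catch((err) => {
  console.error("unhandled in caller:", err);
});
```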
Describe an efficient way of handling database schema migration in a continuous delivery environment. Okay. For a database schema migration, the way we approach it is: if there are changes in the schema, first of all we try to understand what we are actually doing. Are we adding new columns, or are we moving data from one source to another? Without that context it's difficult to say. But let's say I'm trying to include one more column, or modify a column's values; say I have a strict object as a column and I'm trying to modify it. What I'll do is create a migration script. In the migration script I'll create a temporary table, and the modified schema will be part of that temporary table. I'll read the data from the existing table and, once I know the number of records and so on, start inserting it into the new schema table. During the migration, if a new column is a mandatory parameter I'll generate a value for it; if it is optional, I'll keep it empty. Then I start copying data from the primary table to the new temporary table. Once the data has been copied, there is a command that lets you retire the older table and start using the new one as the main table by changing the table name or something; I don't remember exactly what it is.
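A rough sketch of that copy-and-rename style of migration, assuming PostgreSQL via the node-postgres (pg) client; the table and column names are made up for illustration:

```typescript
import { Client } from "pg";

async function migrateOrdersTable(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    await client.query("BEGIN");

    // 1. Temporary table carrying the modified schema (new mandatory column).
    await client.query(`
      CREATE TABLE orders_new (
        id BIGINT PRIMARY KEY,
        total NUMERIC NOT NULL,
        currency TEXT NOT NULL DEFAULT 'USD'
      )
    `);

    // 2. Copy data across, filling the mandatory column with a default value.
    await client.query(`
      INSERT INTO orders_new (id, total, currency)
      SELECT id, total, 'USD' FROM orders
    `);

    // 3. Swap: retire the old table and rename the new one into its place.
    await client.query("ALTER TABLE orders RENAME TO orders_old");
    await client.query("ALTER TABLE orders_new RENAME TO orders");

    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    await client.end();
  }
}
```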
In what ways would transitioning from a Java to a Python backend system affect the team's delivery timelines? The learning curve is the primary thing: if the existing code is in Java, someone who understands Java now needs to understand Python. But I don't have the full context here, so I have a couple of cross-questions to verify what exactly we are doing. Are we trying to migrate the existing system from Java to Python, or is it a case of putting Java developers onto a Python backend system? Either way, this is something which has to be pre-planned; it's not something that can be done immediately. There have to be training sessions conducted and proper guidelines provided for what needs to be done. It has to be done with proper guidelines and proper planning; it cannot be done overnight.
How would you harmonize your Node.js API design with a third-party payment system integration? Okay. Again, I don't have the complete context, but when you say third-party payment system integration, we are trying to integrate with a third party. For that third party we need to understand the request and response bodies, the wait time, the success scenario, the possible failure scenarios, tokens, the number of requests the third-party payment system can handle, and the timeout scenarios: in which situations do those failures happen? And let's say something goes wrong after making a payment: how would a refund work, as an alternative or compensating transaction or compensating action? There are certain things which need to be understood and planned properly. There are certain parameters here which I don't know, so I'm not even sure what more I can say about this.
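To illustrate a few of the concerns mentioned (timeouts, failure scenarios, and a compensating refund), a sketch against a hypothetical payment provider client; none of these names come from a real provider's API:

```typescript
// Hypothetical third-party client interface; a real provider's SDK would differ.
interface PaymentProvider {
  charge(req: {
    orderId: string;
    amount: number;
    idempotencyKey: string;
  }): Promise<{ chargeId: string }>;
  refund(chargeId: string): Promise<void>;
}

async function takePayment(
  provider: PaymentProvider,
  orderId: string,
  amount: number
): Promise<string> {
  // Idempotency key guards against double charges on retries after timeouts.
  const idempotencyKey = `${orderId}-charge`;

  // Bound the wait time so a slow provider doesn't block the request forever.
  let timer: NodeJS.Timeout | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("payment provider timed out")), 5000);
  });

  let chargeId: string;
  try {
    ({ chargeId } = await Promise.race([
      provider.charge({ orderId, amount, idempotencyKey }),
      timeout,
    ]));
  } finally {
    if (timer) clearTimeout(timer);
  }

  try {
    await fulfillOrder(orderId); // hypothetical downstream step
  } catch (err) {
    // Compensating action: if fulfilment fails after a successful charge,
    // refund the charge rather than leaving the customer billed.
    await provider.refund(chargeId);
    throw err;
  }
  return chargeId;
}

async function fulfillOrder(orderId: string): Promise<void> {
  // Placeholder for the internal order-fulfilment logic.
}
```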