Designed and developed scalable backend architectures for robust data-processing systems following REST and GraphQL standards. If you are looking for someone to design a backend capable of handling millions of users, I can build it for you using test-driven development.
Sr. Software Engineer, Scaletech (oxolo.com product)
Sr. Software Engineer, Dotsquares
Software Engineer, Renosys
PHP
Python
Terraform
TypeScript
RabbitMQ
JavaScript
TDD
Docker
React
MQTT
MariaDB
MongoDB
Firestore
ElasticSearch
AWS
GCP
EC2
S3
RDS
Route53
Lambda
SNS
EBS
VPC
IAM
SQS
CloudWatch
Kubernetes
Git
CI/CD
Webstorm
VS Code
Apache
Nginx
Linux
Ubuntu
macOS
Used PHP and Node.js to develop end-to-end solutions with frameworks such as Laravel, Lumen, NestJS, Express.js, CodeIgniter, and CakePHP, among other technologies. Stored data in SQL databases such as MySQL, Postgres, and SQLite, as well as NoSQL databases like MongoDB and DynamoDB. Mainly used microservices architectures to develop projects.
Could you help me understand more about your background by giving a brief introduction about yourself? So, hey, I'm Sony, and currently I am working with a remote company from Germany. I have around 8 years of experience in this field. Currently I work as a senior engineer and tech lead for the backend team. My role includes a lot of aspects, but essentially it is about writing code, helping the junior peers, syncing with the senior peers, getting the flow right from the direction of the product owners, and making sure that our production system is up and running all the time without any hiccups or code leakage. I am experienced in a lot of technologies: I started my career in PHP, then moved to Node.js, and I also have experience in Python and other technologies, for example Postgres, MySQL, and SQLite. I also work in TypeScript. I have some DevOps experience as well; I just like to automate whatever I can, for example setting up CI/CD pipelines and provisioning infrastructure with Terraform. I have experience with approximately all the major cloud providers, including AWS, Azure, and GCP, and also some Alibaba and Oracle, not much, but some. That is me in a nutshell. Outside of work I love to play cricket, and otherwise I am always on the computer. Let me know if you need anything else.
What method do you use to ensure that an Express server can handle a certain number of requests on a given infrastructure? Do load testing. That is the best way to do it; there is no better alternative to make sure whether it can handle a certain rate of requests or not. Without load testing you cannot say, "Hey, my code will hold up." With the help of load testing you can make sure that, with a particular amount of RAM, a particular CPU, and a particular amount of memory, the service will perform well up to this many requests. I usually use k6 to do the load testing, though I sometimes also use Postman and other tools, but k6 is the best. Usually I do the load testing offline, but sometimes I also run it against my AWS environment: if I know that some kind of event is coming up, or we are running a campaign at a certain time and need to make sure the service holds up, we always do the load testing.
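A minimal k6 script along these lines; the endpoint URL, virtual-user count, and duration are placeholder assumptions for illustration:

```js
// Hypothetical k6 load test: 50 virtual users hit a health endpoint for 30 seconds
// and every response is checked for HTTP 200.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 50,          // concurrent virtual users
  duration: '30s',  // total test duration
};

export default function () {
  const res = http.get('https://api.example.com/health'); // assumed endpoint
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```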
What techniques would you recommend for automated code quality checks in a TypeScript project? Unit tests, as many as possible. A unit test confirms that your code is ready, and it catches the case where a sudden modification comes in and breaks your code: you know which flow is not working and you can go there and fix it. Obviously it requires a lot of work up front, but in the end it pays off. Let me give you a picture. Without tests, you write code, deploy it, a bug comes in, you fix it, a new bug comes in, you fix it, and it keeps going. With testing, the effort at the beginning is much higher, but over time it goes down. You can say: this is a test case for this particular story, I wrote this code, and with these particular inputs it works. If some change comes in later, whether a syntax error or a logical error, the flow breaks, the test fails, and you know that someone (or you yourself) broke the code, without deploying it to production and without manual user-story testing. Maybe I am repeating myself, but in short, adding proper unit testing is the way to go, especially in backend projects.
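A small sketch of the kind of unit test meant here, written with Jest; the function and its behaviour are made up for the example and defined inline so the sketch runs on its own:

```js
// Hypothetical Jest unit test. calculateTotal is defined inline for the sketch;
// in a real project it would be imported from the module under test.
function calculateTotal(items) {
  return items.reduce((sum, { price, qty }) => sum + price * qty, 0);
}

describe('calculateTotal', () => {
  test('sums item prices multiplied by quantity', () => {
    const items = [{ price: 10, qty: 2 }, { price: 5, qty: 1 }];
    expect(calculateTotal(items)).toBe(25);
  });

  test('returns 0 for an empty cart', () => {
    expect(calculateTotal([])).toBe(0);
  });
});
```

If someone later changes the calculation logic, these assertions fail in CI before the change ever reaches production.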
How would you leverage AWS services to enhance the fault tolerance of your REST backend? There is no single answer to this; there are many. As in the last question, do the load testing, and I will say it again: do the load testing, whether against AWS or locally. Then, once you deploy the code to AWS, you need to do two or three things. You need to deploy it behind a load balancer and set up proper metrics checks, so that if a certain spike occurs, a new node is created and the load is transferred to it. That way you make sure the service is fault tolerant. For example, say requests go straight to a single node: if 1,100 requests arrive at once, the node crashes and the server restarts, which is bad. What you need to do instead is create a load balancer. Behind the load balancer everything stays the same as before, but on the load balancer you can check things like how many concurrent requests you are getting, and you can run a health check on each node: the average load of the node, CPU usage, memory usage, input/output, and so on. Combining all of those metrics, you can say that if, for example, concurrent HTTP requests go above 1,000 per second, a new node comes up and part of the load is transferred to that new endpoint. The balancing can be round robin: the first request goes to the first node, the second request to the second, the third to the third, and so on. This way you make sure your service stays healthy. Plus, you need to keep doing the load testing and keep everything behind the load balancer.
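To make the per-node health check concrete, here is a rough Express sketch of an endpoint a load balancer could poll; the path, port, and threshold are assumptions, not anything prescribed by AWS:

```js
// Hypothetical health-check endpoint for a load balancer target: reports basic
// node metrics and answers 503 when memory pressure looks too high.
const express = require('express');
const os = require('os');

const app = express();

app.get('/health', (req, res) => {
  const freeMemRatio = os.freemem() / os.totalmem();
  const [load1] = os.loadavg();          // 1-minute load average
  const healthy = freeMemRatio > 0.1;    // assumed threshold, for illustration only

  res.status(healthy ? 200 : 503).json({
    loadAverage: load1,
    freeMemRatio,
    uptimeSeconds: process.uptime(),
  });
});

app.listen(3000);
```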
Can you suggest a way to improve read performance in MongoDB for a system that is under heavy data pressure? That is a heavy question, actually. There are many ways to improve it. The basic answer I can give you comes from my experience with SQL, and MongoDB has its own equivalent way to do it. In SQL, what you can do is create read replicas: all the writes go to the primary, and all the read operations go to the read replica. This way the load is properly split and everything stays balanced.
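The MongoDB equivalent of that read/write split is a replica set with reads routed to secondaries via the read preference. A small sketch with the Node driver, where the URI, database, and collection names are assumptions:

```js
// Hypothetical sketch: writes still go to the primary, while reads from this client
// prefer secondary members of the replica set when one is available.
const { MongoClient } = require('mongodb');

const client = new MongoClient(process.env.MONGO_URI, {
  readPreference: 'secondaryPreferred', // route reads to replicas, fall back to primary
});

async function findRecentEvents() {
  await client.connect();
  const events = client.db('analytics').collection('events'); // assumed names
  return events
    .find({ createdAt: { $gt: new Date(Date.now() - 86_400_000) } }) // last 24h
    .toArray();
}
```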
What techniques would you select to enforce ACID properties (meaning atomicity and so on) in a distributed transaction system using Postgres and MySQL? So what you can do is, first, use proper indexing to make sure that every query you run behaves correctly and quickly. The second thing you can do is use views properly, so the queries hitting the select path do not take much time and the results effectively come from a pre-defined, cached query. There was a third thing, but these are the two I can remember off the top of my head. Basically, in Postgres and MySQL the queries are similar, but behind the scenes the engines are different. Postgres is very simple to use; it is a heavy, resilient SQL database that helps you manage the data in a structured way with the help of schemas, indexing, and so on. MySQL is much the same as Postgres but with multiple storage engines. I cannot give you any hard answer on this, only two things: use views properly and use indexing properly.
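A small sketch of the two things mentioned, indexing and views, applied through the node-postgres client; the table, columns, and view are assumptions for illustration:

```js
// Hypothetical setup script using the pg client: adds an index for frequent lookups
// and a view that pre-defines a common read query.
const { Pool } = require('pg');
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function setupReadOptimisations() {
  // Index the column used in frequent WHERE clauses so those selects stay fast.
  await pool.query(
    'CREATE INDEX IF NOT EXISTS idx_orders_customer_id ON orders (customer_id)'
  );

  // A view exposing only the columns and rows the read path actually needs.
  await pool.query(`
    CREATE OR REPLACE VIEW recent_orders AS
    SELECT id, customer_id, total, created_at
    FROM orders
    WHERE created_at > now() - interval '30 days'
  `);

  await pool.end();
}

setupReadOptimisations().catch(console.error);
```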
Look at the following Express middleware function, which is supposed to check for the presence of a sessionId property on the request object; however, it always redirects to the login page even when the sessionId exists. Let me check it for the obvious errors. So the check is on whether the request has its own sessionId property. Request is an object, but I am not sure the hasOwnProperty check will work here, because this is a session: it should be stored on the session property, and you need to check whether that session contains the sessionId or not. So what you can do is use optional chaining, req?.session?.sessionId, and if it is there, pass control to next(). I am not sure about the hasOwnProperty function in this case: the request is not a simple plain object, hasOwnProperty is a prototype function, and as far as I know it may not behave the way you expect on the request object, although it should be possible; I have never used that syntax myself. In modern JavaScript, write it with proper optional chaining: if req?.session?.sessionId is missing, redirect to the login page, and if it is present, continue to next(). So if you want to check whether the key exists, just check req.session?.sessionId: if it is falsy, redirect to login, otherwise call next(). That is my answer, hope it makes sense.
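A minimal sketch of the fix being described, assuming an express-session style setup where the id lives at req.session.sessionId (both property names are assumptions taken from the question):

```js
// Hypothetical Express middleware: redirect to /login unless a session id is present.
// Optional chaining survives a missing session object and avoids hasOwnProperty entirely.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  if (req.session?.sessionId) {
    return next(); // session present, continue to the route handler
  }
  return res.redirect('/login'); // no session id, send the user to login
});

app.get('/dashboard', (req, res) => res.send('logged-in area'));
app.listen(3000);
```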
Take a look at the following JavaScript code, used for a database in a NoSQL environment. What issues can you spot in the function's error handling? So it destructures name, age, and image from the data. The syntax looks good. Basically, it is a function that takes an object and tests whether those properties exist on it: if they are present, do one thing, else do the other. First of all, I do not like the syntax; it is not to my taste. But all three checks are there. Apart from not using some kind of validation library (a Joi-style API) to validate this, I could not find any actual issue. The way I look at it, it has a proper try/catch around reading from the data. Yeah, it looks good; I did not find any issue.
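A rough sketch of the kind of function being discussed; the property names (name, age, image) come from the question, everything else is assumed:

```js
// Hypothetical validation helper for a document headed to a NoSQL store.
// Returns the cleaned document, or null when validation (or anything inside) fails.
function validateUser(data) {
  try {
    const { name, age, image } = data ?? {};
    // Explicit undefined checks so falsy-but-valid values like age = 0 still pass.
    if (name === undefined || age === undefined || image === undefined) {
      throw new Error('name, age and image are all required');
    }
    return { name, age, image };
  } catch (err) {
    console.error('Validation failed:', err.message);
    return null;
  }
}

console.log(validateUser({ name: 'Asha', age: 30, image: 'avatar.png' })); // document
console.log(validateUser({ name: 'Asha' }));                               // null
```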
Can you describe a high-level design for a Node.js system that leverages the stack discussed in the first section? Actually, yes. Let's say you want to create something like Google Analytics. What do you need? You need to make sure your system can handle millions of transactions per second; essentially that is the only hard requirement, because it is an analytics tool: events are coming from a lot of websites, and they are all being pushed to the backend. For that, I am going to use Node.js, a NoSQL database, and ECS with load balancing. For that huge number of tasks we will also use PM2 as the runtime for Node.js, because every instance will have multiple CPUs, say 4 or 5, and Node.js is single-threaded, so PM2 makes sure the forking happens and all the CPUs in that machine are used. For example, say I want 20 replicas working at all times: 20 replicas times 4 CPUs means 80 Node processes are always running. Then just write a small Node service where you submit two or three things, a name, a delta value, and some kind of description, push that information to the database, and that is all you need. So you deploy two things, the backend service and the database. For the database we can use DocumentDB instead of running MongoDB ourselves. So we deploy two services on AWS: DocumentDB, and the Node.js service on ECS behind a load balancer, with that many replicas to handle billions of transactions. But I will not use Express; I will use Koa instead, because I know it well, it is much closer to Node.js itself, while Express is a very high-level layer on top. So I will use Koa instead of Express. I just made this answer up on the spot, so let me know if it makes sense.
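A minimal sketch of the ingestion endpoint described above, using Koa and the MongoDB Node driver (which also speaks to DocumentDB); the event fields, collection names, and port are assumptions:

```js
// Hypothetical Koa ingestion service: accepts analytics events over HTTP and writes
// them to a DocumentDB/MongoDB-compatible collection. All names are illustrative.
const Koa = require('koa');
const Router = require('@koa/router');
const bodyParser = require('koa-bodyparser');
const { MongoClient } = require('mongodb');

const client = new MongoClient(process.env.DB_URI); // assumed connection string
const events = client.db('analytics').collection('events');

const app = new Koa();
const router = new Router();

router.post('/events', async (ctx) => {
  const { name, value, description } = ctx.request.body ?? {};
  await events.insertOne({ name, value, description, receivedAt: new Date() });
  ctx.status = 202; // accepted for processing
});

app.use(bodyParser());
app.use(router.routes());

client.connect().then(() => app.listen(3000));
```

In the setup described, each container would run this under PM2 (for example `pm2 start server.js -i max`) so every CPU in the instance gets its own Node process.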
What considerations would you make when creating a Dockerized microservice solution involving Express.js and Node.js? First of all, the service should follow the single responsibility principle. Inside that microservice you should have what it needs: the database migrations, if required, and all the proper utility libraries. For example, if you depend on some CLI tool that is not available by default on the particular base image, say you are on an Alpine or Ubuntu image or some other base image and not all the CLI tools you need are present, you make sure those CLI applications get installed. Once all the required things are installed, what you need to do next is copy package.json and package-lock.json first, and then run the npm install command, so the dependencies get installed first and that layer can be cached. Then you can copy the rest of the things, like the source folder, the test folder, et cetera. It is also very important to define a .dockerignore file that lets you ignore all the unnecessary things, for example the .git folder, the node_modules folder if you want a clean install, .vscode, Jest caches, and anything else that is not required there, so the final image does not become too heavy. Since the unnecessary files are ignored, I copy everything else over, and if the project is written in TypeScript, I also need to run the build command. Once the build has run, I define the start command in CMD. I also define the ENTRYPOINT; it is important so that wherever the image is deployed, the runtime knows how to start it, and it is important to define the default port to expose with EXPOSE. For example, the ENTRYPOINT can be node and the CMD the startup file, for example start.js. These are the things I would take into consideration while Dockerizing a microservice solution involving Express and Node.js.
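A minimal Dockerfile sketch following the steps just described; the base image, output folder, file names, and port are assumptions for illustration:

```dockerfile
# Hypothetical Dockerfile for a Node.js/Express microservice, assuming a TypeScript
# build that outputs to dist/ and a service listening on port 3000.
FROM node:20-alpine

WORKDIR /app

# Copy the manifests first so the npm install layer is cached between builds.
COPY package.json package-lock.json ./
RUN npm install

# Copy the rest of the sources (a .dockerignore should exclude .git, node_modules, .vscode, etc.).
COPY . .

# Build step for a TypeScript project; skip if the service is plain JavaScript.
RUN npm run build

EXPOSE 3000

ENTRYPOINT ["node"]
CMD ["dist/start.js"]
```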
How could you implement an automated testing pipeline for distributed microservices within AWS infrastructure? First, I need to understand what is meant by automated testing for distributed microservices: before deployment or after deployment? After deployment there are no tests left to run; apart from the security testing, which should be done before deployment, and apart from the load testing, which should also always be done before deployment, all the tests run before the code is deployed. So we can add a CI/CD pipeline in GitHub Actions or GitLab CI to do the required checks, for example security and compliance checks, the test suite, code coverage, or anything else we need to make sure the code is healthy, and run that pipeline as part of the process. We can use AWS infrastructure to do the testing as well: for example, we can use ECR to pull the images, and pipeline runner instances to run the automated tests and everything else. All of those steps are written in the pre-deployment stage of the CI/CD pipeline, before the branch gets deployed, and then it is pushed. That is how I would answer this question, thank you.
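A rough sketch of what such a pre-deployment stage could look like in GitHub Actions; the job names, scripts, and the assumption that lint, audit, and test commands exist in package.json are all illustrative:

```yaml
# Hypothetical GitHub Actions workflow: run quality checks and tests before any deployment.
name: microservice-ci
on:
  pull_request:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint                    # static code quality checks
      - run: npm audit --audit-level=high    # basic dependency/security check
      - run: npm test -- --coverage          # unit tests with code coverage
      # Deployment (building the image, pushing to ECR, updating the ECS service)
      # would only run after this job succeeds.
```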