
Senior Software Engineer, Unient India
Senior Software Engineer (Backend), Blue Blaze Earth
Senior Software Engineer, rithmXO
PHP Web Developer, Aleoy Software
Senior Full Stack Developer, Innosolv Consultancy Services
Technical Lead, Microexcel
Faculty (Dept Of IT), R.K. Degree College
Visual Studio Code

WampServer
Jira

Postman

Git

Docker

Jenkins

Redis

.NET Framework

FlaskAPI

ExpressJS

MySQL

MSSQL

MongoDB

DynamoDB

PostgreSQL

Redis

GraphQL

Docker Swarm

SVN

ReactJS
I have worked with Sammsul in the recent past and I can only recommend him as the great developer, leader and colleague he is. He was always supportive and helpful, bringing new and useful ideas to the team while also being open to embracing ideas from other co-workers.
March 27, 2024, Andrés worked with Sammsul Hoque on the same team
I have had the pleasure of working with Sam for 5 years now. We have worked together on applications of varying sizes and complexity. Sam is full of great ideas and not afraid to share them. Sam is driven and always willing to take on the hard tasks. When given a task, Sam can be trusted to get the job done in a timely manner. On top of being a hard worker, Sam has been an excellent mentor to me and to the other members of the team. I would recommend Sam to anyone looking for a trustworthy, hardworking individual.
March 21, 2024, Dave was senior to Sammsul Hoque
I have had the pleasure of working with Sam on multiple projects at multiple companies. I have always been impressed with Sam's problem solving skills. He is someone who is not afraid to jump in and make things happen. I've seen him come up with elegant and efficient solutions in a number of languages and technologies. I've seen him solve problems and build features across the full stack. Whether it's front end, API, data layer, you name it, he has the skills to get the features across the finish line. It's been an honor to work with Sam and I hope I have the opportunity to work with him again in future endeavors.
March 19, 2024, Jeff managed Sammsul Hoque directly
Project Details:
A leading SaaS product that revolutionizes business orchestration with its microservices architecture, optimizing operations and project management efficiency for organizations.
Responsibilities:
Project Details:
The web app brings patients and doctors together, enabling patient registration, addition of family members, and document uploads.
Patients request consultations online/offline, search for doctors via name, specialty, or location, and make online payments.
Prescriptions are stored in patient profiles. Doctors manage consultation requests and access previous prescriptions, utilizing patient intake forms.
Responsibilities:
Hi, my name is Sammsul Hoque. I did my graduation and post-graduation in computer science, and after that I taught for almost two years in a college, where I used to teach graduate and post-graduate students. Then I moved to Bangalore and started working with a startup in real estate, where I got my initial training in web development, testing, AWS infrastructure, and all the elements required for development. I worked there for one year; unfortunately the company closed down, and then I moved to a consultancy where I worked on products as well as projects. That gave me the opportunity to work on different kinds of projects, which helped me gain more technical skills and learn different tech stacks like .NET. I was working with PHP, then I also learned .NET, and towards the end I started learning Node.js using Express. Then I started working with Microsoft, from 2017, and since then I have been working remotely with every company I have joined. I used to collaborate with teams based in the United States and the UK; initially I worked in their time zone, and later I shifted to my own time zone so we could collaborate better. During all of this I also learned a couple more tech stacks, including Python, and I gradually shifted my focus mostly towards the back end, because I wanted to expand my knowledge and expertise there more than on the front end.
That doesn't mean I'm not interested in front-end work. I'm familiar with Angular (the last version I worked with was Angular 6), then Vue, and recently React; I worked on React just six months back. I have used MySQL, MSSQL, and MongoDB; with my current project I'm working with AWS DynamoDB as well, and PostgreSQL too. Apart from that, I have experience with microservices architecture and serverless architecture. I have participated in different kinds of client interactions and in designing applications from scratch, especially SaaS products. With my previous employer I built a whole SaaS product from scratch along with seven other engineers: the whole architecture was laid out, there were a couple of iterations to evolve it, and we built it with .NET along with a couple of modules in Python. Right now my main motivation is to join a company where I can stay for a long time, be part of a team for a long time, and grow with them personally as well as professionally. Apart from that, I'm interested in learning new things: I'm learning AI/ML, and I have started learning blockchain as well, which will add more skills to my profile and help me propose better solutions in the future. Thank you.
I have not worked hands-on with automating this process in Jenkins, but I can tell you the approach. Whatever the project is, TypeScript, Python, or .NET, it doesn't matter: first we create a Docker image, which encapsulates the whole project and all its dependencies in one single place. Then we declare all the other dependencies, like the database, internal networking, volumes, and so on, in a docker-compose file. Once these changes are committed, we can create a CI/CD pipeline that pulls the latest changes, builds the Docker container, runs the tests, and validates everything. Once it's validated, we publish it using Jenkins to the respective servers. Jenkins acts as a mediator, or orchestrator: it sees what has changed, whether any actions have been triggered, takes care of all these steps, and collects the metrics. So basically I rely on the CI/CD pipelines, with Jenkins orchestrating them. I can do these things manually in the Jenkins application, but I don't remember exactly how to automate it step by step. Thank you.
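As a rough sketch of the flow described above, a declarative Jenkinsfile could look like the following. This is an assumption, not a pipeline from the project: the stage names, the `myapp` image tag, the registry hostname, and the compose file name are all illustrative.

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }  // pull the latest changes
        }
        stage('Build image') {
            // encapsulate the project and its dependencies in one image
            steps { sh 'docker build -t myapp:$GIT_COMMIT .' }
        }
        stage('Test') {
            // bring up the declared dependencies (DB, networks, volumes) and run tests
            steps { sh 'docker compose -f docker-compose.test.yml up --abort-on-container-exit' }
        }
        stage('Publish') {
            when { branch 'main' }
            // publish to the respective servers once everything is validated
            steps { sh 'docker push registry.example.com/myapp:$GIT_COMMIT' }
        }
    }
}
```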
Okay, so there are a couple of dimensions to cover when talking about securing any RESTful API. First, we have to understand that a RESTful API is going to access some kind of resource in the back end: it could be a database or a file, or it could be talking to some cloud resource. So we have to be very specific about what that particular API is meant for and what versioning we are going to follow, so that in the future we can implement the security measures across all the different versions. Apart from that, we have to make sure it always communicates over a secure layer like SSL/TLS: HTTPS rather than HTTP. For development purposes plain HTTP is fine, but for production, HTTPS is the first thing you should have in place. Then there should be authentication and authorization. Authentication can be done in a number of ways; you can go for JWT authentication, for example, and it can be implemented at an API gateway or at the load balancer. On top of that we add the authorization process based on role-based access, checking whether a particular call is permitted or not. We can also implement rate limiting and throttling; otherwise there can be bottlenecks or denial-of-service situations, which we should avoid. That also adds security. And whatever data we are fetching, we should not be open to returning everything.
We should be very specific about which resource we are accessing, with what intent, what filters apply, and what kind of data we are fetching. All of these things make the data as well as the RESTful API more secure, because we are not exposing anything unnecessary to the outside world. We also need to implement a CORS policy, which is very important: if we know that a particular API is only going to be accessed from certain domains, we should incorporate that into our CORS policy in the back end, and any access that does not come from an identified origin should not be entertained. These are a couple of things we can do to make sure the APIs are secure and work smoothly whenever we interact with them, whether from a mobile application or a front-end application. Thank you.
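As one concrete piece of the rate-limiting idea mentioned above, here is a minimal token-bucket sketch in plain JavaScript. The capacity and refill rate are illustrative; in production this would typically live at a gateway or be backed by a shared store such as Redis rather than an in-process Map.

```javascript
// Minimal token-bucket rate limiter: each client gets `capacity` tokens,
// refilled at `refillPerSec` tokens per second; each request spends one token.
class RateLimiter {
  constructor(capacity, refillPerSec) {
    this.capacity = capacity;
    this.refillPerSec = refillPerSec;
    this.buckets = new Map(); // clientId -> { tokens, last }
  }

  allow(clientId, now = Date.now()) {
    let b = this.buckets.get(clientId);
    if (!b) {
      b = { tokens: this.capacity, last: now };
      this.buckets.set(clientId, b);
    }
    // Refill proportionally to elapsed time, capped at capacity.
    const elapsedSec = (now - b.last) / 1000;
    b.tokens = Math.min(this.capacity, b.tokens + elapsedSec * this.refillPerSec);
    b.last = now;
    if (b.tokens >= 1) {
      b.tokens -= 1;
      return true; // request allowed
    }
    return false; // request throttled
  }
}
```

Wired into an Express-style middleware, `allow(req.ip)` would decide whether to pass the request on or answer with HTTP 429.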
Okay, so when we talk about sessions, there are different things we can do. One is cookies: cookies can be stored and can carry what the session is all about. Then we have JWT; when implementing JWT, apart from the access token we can have refresh tokens. The access token allows you to access a resource or a particular API within a certain time window. The refresh token is used by the front end to check whether the session can continue: when the access token's lifetime is over, it asks for a fresh access token, and the back end renews the access token for the next interval of time. These tokens are stored in a database in the back end, or you can store them in Redis or some other caching mechanism to make it much faster. Since we are storing them somewhere, we always have the option to kick a session out of that storage, either from Redis or the database, whichever is the preference, and that really helps. But there are certain constraints we need to keep in mind. Redis is much faster, but if there is one Redis instance where every session is stored, that instance becomes a bottleneck; if there are multiple instances, performance improves overall, but then some turnaround time between them comes in, which is its own trade-off.
So it depends on the size of the project and the kind of interaction we are talking about: how much traffic there will be, how frequently the sessions change, how they are managed, and how long they persist. All of these factors have to be kept in mind while designing sessions for a particular application. In my previous project as well as my recent one, we are using a single Redis instance that keeps track of all the JWT sessions. We have a key-value pair where the key is basically the refresh token and the value is the access token. Whenever a refresh token expires, the client goes back to the back end, which runs the renewal process, updates the refresh token and access token, sends back the access token, and updates the Redis cache as well. From then on, checks happen against Redis rather than coming back to the back end every time. Thank you.
Okay, so this is about branch naming, and hotfixes come into it too. Basically, we create an individual task for each feature, and also for the bugs and fixes we are working on. The practice I follow is to first mention the environment, as in dev, UAT, or production, then a hyphen, then the ticket number, then a hyphen, and then a short description. That helps us do things in parallel, since different tickets are allotted to different issues: even if parallel features are going on, each has a separate ticket in Jira, and the same goes for hotfixes. Whenever there is a hotfix that has to reach production, we mention that in the branch name, and in Git, when we create the pull request, there are labels that convey the priority. So whoever reviews that branch will know from the label that it is high priority, and they will review accordingly. I do code reviews, and sometimes peer reviews as well; based on the label I check the priority first. If it's high priority, it goes into review first; if it's normal or low priority, I take it up accordingly. So that's the whole structure: the environment name, the ticket number, and the description in the branch name; if it's a hotfix, that is mentioned at the end, capitalized, saying it's a FIX, a BUG, or a HOTFIX. Thank you.
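The convention described above could be checked mechanically, for example in a pre-push hook. Here is a sketch with an assumed pattern `env-TICKET-description[-SUFFIX]`; the exact environment names, ticket format, and suffix words are illustrative, not the team's real rules.

```javascript
// Validates branch names of the form: env-TICKET-description[-SUFFIX]
// e.g. "uat-JIRA123-fix-login-HOTFIX". Pattern details are assumptions.
const BRANCH_RE = /^(dev|uat|prod)-([A-Z]+\d+)-([a-z0-9-]+?)(?:-(FIX|BUG|HOTFIX))?$/;

function parseBranch(name) {
  const m = BRANCH_RE.exec(name);
  if (!m) return null; // does not follow the convention
  return { env: m[1], ticket: m[2], description: m[3], suffix: m[4] || null };
}
```

A reviewer tool could then sort pull requests so that branches with a `HOTFIX` suffix surface first, matching the priority-first review order described.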
Okay, so when we talk about a tight deadline, we first have to be very sure what "tight" means: whether it's a matter of days or of hours. Basically, I practice test-driven development. With TDD, for the feature or the bug we are fixing, I write the tests first; those cover the main aspects of the whole feature. Then, based on the changes and refactoring I did and whatever new methods I may have created, I add a few more tests; those are the secondary tests. I know coverage drops if the code goes into deployment without those secondary tests, but that can always be improved later. The tests written for the feature itself, however, are non-negotiable; that's why I write them first, and then I write the feature against them. Since my tests cover all the scenarios of the requirement, there are no loopholes and no mistakes. That also gives me an advantage: even if the deadline is nearing, I'm not missing the crucial aspects of the development, because the functionality and its tests are already there. After refactoring I might have created a few more functions, and if testing those gets skipped in order to meet the deadline, that's acceptable; it can be taken up in the next release or cycle. I make a note of it and do it later, because it's not critical compared to the feature itself. The priority is always the feature's functionality and its tests, and that is met in the first place.
So nothing can really go wrong there. And code coverage is always tracked: I check my code coverage every 15 days to make sure it's somewhere around 75 to 80%, depending on the standard the team is following. My personal rule is that at least three-quarters should always be covered, and it should not fall below that; if it does, that is something we have already missed, and it needs to be addressed in the retrospective meeting. Thank you.
It clearly shows that we are first fetching all the users and then trying to find within that collection. Instead of doing that, we can do something like findById and pass the specific ID to return exactly the data we're looking for, instead of searching the whole dataset again. So first of all, the line doing `await User.find()` is giving us all the data; I don't know how many records there will be, 10,000 or 100,000, all pulled from the database. And then on top of that we call the find method, which again iterates over the whole list in the dataset returned by the previous line. These two steps become the bottleneck. The best way is to have an index on the ID field; if it's not there, it should be created. Then, doing findById and passing the ID, the database returns just the specific record we are interested in. That will definitely improve the whole turnaround time, and the bottleneck will go away.
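The difference can be mimicked in plain JavaScript: a linear scan over the full collection (like `find()` plus an in-memory search) versus a keyed lookup (like an indexed `findById`). The collection here is fabricated for the illustration.

```javascript
// Fabricated collection standing in for the users table.
const users = Array.from({ length: 10000 }, (_, i) => ({ id: i, name: `user${i}` }));

// What the original code effectively does: fetch everything, then scan it.
function slowLookup(all, id) {
  return all.find(u => u.id === id); // O(n) over the whole result set
}

// What an index gives the database: a direct keyed lookup.
const byId = new Map(users.map(u => [u.id, u])); // the "index", built once
function fastLookup(index, id) {
  return index.get(id); // O(1) on average
}
```

Both return the same record; the difference is that the indexed path never touches the other 9,999 rows, which is exactly what `findById` against an indexed `_id` buys in the database.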
So what I can see here is that this will capture just a general exception: if something goes wrong while reaching the resource, the catch swallows the whole thing. But we are not checking what the response status code from that particular API is, which is very important. We should only return data if the status code is acceptable: 200, 201, 202, whatever we decided at the time of designing the API contract. If it's something else, like a 4xx or 5xx (400, 500, 502, 503, it could be anything), anything other than a success code should produce a proper error message. And if you want to handle it in such a way that it always goes to the catch part, then on encountering a status code other than 2xx we can throw, and the catch will receive it there. So yes, I think that should handle everything.
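A minimal sketch of the throw-on-bad-status pattern described, written against a fetch-style response object; the error shape (attaching `status` to the Error) is an assumption, not a library convention.

```javascript
// Throws for any non-2xx response so the caller's catch handles it.
// Works with any fetch-style response exposing `ok`, `status`, and `json()`.
async function checkStatus(response) {
  if (!response.ok) {
    const err = new Error(`Request failed with status ${response.status}`);
    err.status = response.status; // let the catch branch on the code
    throw err;
  }
  return response.json();
}
```

Used as `const data = await checkStatus(await fetch(url));` inside a try/catch, both network failures and non-2xx responses land in the same catch, but with distinguishable errors.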
Now, I'm not very sure about cloud-native Node.js applications specifically; I've never handled exactly that kind of thing. But as I understand the problem, there are predictable and unpredictable failure scenarios, and this comes down to how we design the whole system. So let me take it from that perspective rather than specifically cloud-native Node.js; I'm giving a rough idea based on my understanding. The predictable failure scenarios are already listed when we design the system, so we know how to deal with them: maybe it's a failure of data consistency, maybe it's a failure of data availability. For those we can create replicas, create multiple instances, and scale them, so that is already in place. The unpredictable failure scenarios are mostly about availability: we should always make sure there is more than one instance running of whatever application we are talking about. If there is only a single instance, then whenever there is high network traffic or high load, that instance may not be able to handle it. We can always scale horizontally: have at least, say, three instances whenever there's a possibility that our application will be hosted in such an environment and exposed to that kind of high traffic. In that case we can be ready, be cautious about the whole scenario, and have a few more resources already running; that can be done with Kubernetes, or a load balancer with AWS, and so on. These are the general measures that can be used for handling such scenarios.
Apart from that, for the predictable scenarios, meaning we are already aware of the exceptions and errors that might happen and could take the application down, the application should handle those errors and exceptions gracefully rather than exiting entirely; that should never happen. For the unpredictable scenarios, if something goes wrong, there should be another instance running to make sure the application stays available. Now, in both cases, one key aspect is that we should have logs. Without logs we cannot tell what went wrong. Even if we have listed every scenario and error, when something does fail, the logs are the go-to place to check what went wrong and how to mitigate it immediately. That can be done in many ways; there are different levels of logging and different types of logging, so it's up to how we design the system and what measures we take to ensure logging happens without fail. So yes, that's how I would like to handle this whole set of scenarios.
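One common concrete tactic for surviving transient failures gracefully, instead of letting an error take the process down, is retrying with exponential backoff. This is a generic sketch; the attempt count, delays, and logging are illustrative choices, not from the original answer.

```javascript
// Retry an async operation with exponential backoff. Only after the final
// attempt fails is the error surfaced to the caller, so a single transient
// failure never takes the application down, and every failure is logged.
async function withRetry(fn, { attempts = 3, baseDelayMs = 100 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Log every failure so we can tell afterwards what went wrong.
      console.error(`attempt ${attempt + 1} failed: ${err.message}`);
      if (attempt < attempts - 1) {
        await new Promise(r => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError; // all attempts exhausted: now it is a real failure
}
```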
So when we talk about middleware, we are basically delegating a couple of redundant tasks, things that would otherwise be done by each process separately, and once those are done correctly, we grant access to the back-end resource, whichever API or microservice it is. What middleware really helps with is the authentication and authorization I explained in the previous questions. All of that middleware-level work, authentication, authorization, security, caching, can go to the middleware layer, where the general and redundant things are handled; they don't have to be specific to any particular service or back-end resource, because these are general elements that have to happen on every interaction. That's why we always implement OAuth 2.0 in the middleware, along with all the error handling for the different scenarios of what OAuth 2.0 may respond with, since we are integrating a couple of third-party providers: it could be Google, it could be LinkedIn, it could be Facebook, anything. Each has its own signature, its own request/response cycle, its own status codes, its own response structure. So based on the integrations we're doing, we handle all of this in middleware. That also ensures that if tomorrow we need to change, introduce, or remove anything, it will be done in that one single place rather than in all the other services, microservices, or back-end servers we are running.
They will remain unaffected, and all the security-level things are handled in the middleware, so the changes pertaining to them always stay in one place. One change, one place, and that reduces the complexity of the middleware. It also ensures that whatever changes are made are followed by all our services, since everything goes through that common layer. Thank you.
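The one-place-for-cross-cutting-concerns idea can be sketched as a minimal Express-style middleware chain in plain JavaScript. The `authenticate` check here is a stand-in for a real OAuth 2.0 flow, and the token value is invented for the example.

```javascript
// A minimal middleware pipeline: each middleware receives (ctx, next) and
// decides whether to pass control on — the same shape Express uses.
function compose(middlewares) {
  return async function run(ctx) {
    let i = 0;
    async function next() {
      if (i < middlewares.length) {
        const mw = middlewares[i++];
        await mw(ctx, next);
      }
    }
    await next();
  };
}

// Cross-cutting concern lives once, in middleware, not in every handler.
const authenticate = async (ctx, next) => {
  if (ctx.token !== 'valid-token') { // stand-in for a real OAuth 2.0 check
    ctx.status = 401;
    return; // stop the chain: the handler never runs
  }
  await next();
};

const handler = async (ctx) => {
  ctx.status = 200;
  ctx.body = 'hello';
};

const app = compose([authenticate, handler]);
```

Swapping the auth provider means changing `authenticate` only; every handler behind the chain stays untouched, which is the "one change, one place" property described.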
How do I see securing payment system integrations? I have no practical knowledge of payment system integrations. I have done integrations with ERP APIs, SAP APIs to be precise, so I can speak in those terms; I'm not sure how much carries over, because payments are a totally different domain. Based on my experience integrating with SAP APIs: first we have to understand the documentation, go through it, and see what request and response architecture they have given, what contracts, and what limitations in terms of request patterns and response structure. Based on that, we define our own APIs, because we cannot go beyond the scope of the API design they have given; whatever they expose, we keep within that boundary. We can have our own level of security on top, which is obviously feasible, but we also have to comply with the security measures the third party mandates. I believe these are common guidelines we can follow for any third-party integration, be it payments, SAP, or some other application, so I think they apply here as well. Now, payment integration can have a few more things, because monetary transactions are involved and there are additional compliance aspects, so it can go to different levels. Again, you have to study how the whole application works; based on that, maybe in the near future I can give you a better answer. Thank you.
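The stay-within-the-vendor-contract idea above could be sketched as a thin client that validates responses against the documented contract before letting them into our system. Everything here is invented for the example: the `/invoices` endpoint, the field names, and the injected transport are assumptions, not any real payment or SAP API.

```javascript
// Thin wrapper around a third-party API: the transport is injected so the
// vendor contract (fields, status handling) is enforced in one place.
function createVendorClient(transport) {
  return {
    async getInvoice(id) {
      const res = await transport('GET', `/invoices/${id}`);
      if (res.status !== 200) {
        throw new Error(`vendor returned status ${res.status}`);
      }
      // Validate the documented contract before the data enters our system.
      const { invoiceId, amount, currency } = res.body;
      if (typeof invoiceId !== 'string' ||
          typeof amount !== 'number' ||
          typeof currency !== 'string') {
        throw new Error('response violates the documented contract');
      }
      return { invoiceId, amount, currency }; // expose only documented fields
    },
  };
}
```

Because the transport is injected, the wrapper can be exercised against fakes in tests, and the vendor's contract rules live in exactly one place.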