Vetted Talent

Avijit Mitra

An experienced and passionate Software Engineer skilled in developing high-visibility, scalable Java/J2EE applications.

  • Role

    Java Developer

  • Years of Experience

    9 years

Skillsets

  • Spring - 8 Years
  • Java - 9 Years
  • Spring Boot - 6 Years
  • Kubernetes
  • Docker
  • Spring MVC
  • Gradle
  • Maven
  • Core Java
  • Amazon EKS
  • AWS Identity and Access Management (AWS IAM)
  • Helm Charts
  • Liquibase
  • AWS - 4 Years

Vetted For

16 Skills
  • Roles & Skills
  • Results
  • Details
  • Senior Software Engineer (Java Spring Boot), AI Screening
  • 73%
  • Skills assessed: Git, Hibernate, MySQL, HTML, Spring, Azure Cloud Services, Go Lang, PostgreSQL, Java, Spring Boot, CSS, Vue JS, JavaScript, Angular, MongoDB, React
  • Score: 66/90

Professional Summary

9 Years
  • Sep, 2021 - Present (4 yr)

    Technical Lead

    RDALabs LLC
  • Nov, 2019 - Sep, 2021 (1 yr 10 months)

    Technical Lead

    HCL Technologies
  • Apr, 2019 - Nov, 2019 (7 months)

    Senior Software Engineer

    TEK Systems (Client: United Health Group)
  • May, 2017 - Apr, 2019 (1 yr 11 months)

    Software Engineer (Application Development Analyst)

    Accenture Services Private Limited
  • Dec, 2014 - May, 2017 (2 yr 5 months)

    Associate Software Engineer

    Erevmax Technologies Private Limited

Applications & Tools Known

  • Eclipse
  • GitLab
  • SQL Developer
  • SonarQube
  • SVN
  • Rally
  • Confluence

Work History

9 Years

Technical Lead

RDALabs LLC
Sep, 2021 - Present (4 yr)
    Developing a full web-based project that helps manage large volumes of client data; requirement analysis, documentation, unit testing, go-live activities, and defect fixing.

Technical Lead

HCL Technologies
Nov, 2019 - Sep, 2021 (1 yr 10 months)
    Developing a full web-based project that helps manage large volumes of client data; requirement analysis, documentation, unit testing, go-live activities, and defect fixing.

Senior Software Engineer

TEK Systems (Client: United Health Group)
Apr, 2019 - Nov, 2019 (7 months)
    Developing a full web-based project that helps manage large volumes of client data; requirement analysis, documentation, unit testing, go-live activities, and defect fixing. Upgraded product performance and technologies.

Software Engineer (Application Development Analyst)

Accenture Services Private Limited
May, 2017 - Apr, 2019 (1 yr 11 months)
    Developing a full web-based project that helps manage large volumes of client data; requirement analysis, documentation, unit testing, go-live activities, and defect fixing. Upgraded product performance and technologies.

Associate Software Engineer

Erevmax Technologies Private Limited
Dec, 2014 - May, 2017 (2 yr 5 months)
    Developing a full web-based project that helps manage large volumes of client data; requirement analysis, documentation, unit testing, go-live activities, and defect fixing.

Achievements

  • Received recognition from stakeholders twice in six months for excellent performance.

Major Projects

1 Project

Assurant

Sep, 2021 - Present (4 yr)
    Developing a full web-based project that helps manage large volumes of client data; requirement analysis, documentation, unit testing, go-live activities, and defect fixing.

Education

  • Bachelor in Technology in Information Technology

    Camellia Institute of Technologies, WBUT (2014)

Certifications

  • Erevmax internal technical certification in Core Java

  • Professional training in Struts 2 at Erevmax Technologies Private Limited

  • Six months of successful training at Erevmax

AI-interview Questions & Answers

Hi. My name is Avijit Mitra, and I have around 9.6 years of experience in Java and related technologies, including Spring Boot and microservices. I also have knowledge of Hibernate, which is the ORM tool I use to connect to the database. As for architecture, I have been following the microservices architecture since 2019, so it has been about five years that I have been working with it. Apart from that, I have knowledge of certain AWS resources such as AWS SQS and the AWS S3 bucket; these two I have mainly used to store data securely in the cloud. The messaging service I use is Kafka, which I use to make server-to-server interaction asynchronous. I also have knowledge of SQL, SQL Server 2012, and the MySQL database. The NoSQL tool I use is MongoDB, to maintain large volumes of data without imposing a fixed structure, so that we can keep a large amount of data in the cloud without maintaining a rigid table schema. I have also used Elasticsearch for extensive searching: I use it for those services that are mainly read for search and are not updated much. And currently I have started using GraphQL, which came up as a requirement later on because some of our services act as a single service with a lot of attributes in the response.
Based on the criteria and requirements of a specific client, they ask for certain attributes, and with GraphQL we can provide exactly those without making any changes in the code and without introducing any new endpoint in the service itself. So GraphQL is something I have started working on recently.

Yes. Spring Batch is something that Spring already provides. There is a JobLauncher interface that we use; a job has certain steps, and those steps use certain processors, which are scheduled with cron expressions. For a large volume of data, like exporting a report from a database, this is a scenario where we run such a job on a daily basis: say Monday to Friday, we use a cron that is triggered Monday to Friday at a certain time in a given time zone. That batch runs every day at that specific time and triggers an email with the exported report to all the stakeholders of the application.
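The weekday-at-a-fixed-time scheduling described above can be sketched in plain Java. This is not an actual Spring Batch setup (which would use a JobLauncher and a cron expression such as `0 0 6 * * MON-FRI`); the class and names below are illustrative, computing only the next Monday-to-Friday trigger time so the logic can be checked.

```java
import java.time.DayOfWeek;
import java.time.LocalDateTime;
import java.time.LocalTime;

// Illustrative sketch of a "run Monday-Friday at a fixed time" trigger.
public class WeekdayTrigger {
    private final LocalTime fireAt;

    public WeekdayTrigger(LocalTime fireAt) {
        this.fireAt = fireAt;
    }

    /** Next Monday-Friday occurrence of fireAt strictly after 'now'. */
    public LocalDateTime nextRun(LocalDateTime now) {
        LocalDateTime candidate = now.toLocalDate().atTime(fireAt);
        if (!candidate.isAfter(now)) {
            candidate = candidate.plusDays(1); // today's slot already passed
        }
        // skip weekend days until we land on a weekday
        while (candidate.getDayOfWeek() == DayOfWeek.SATURDAY
                || candidate.getDayOfWeek() == DayOfWeek.SUNDAY) {
            candidate = candidate.plusDays(1);
        }
        return candidate;
    }
}
```

A scheduler (or Spring's @Scheduled support) would then sleep until `nextRun` and launch the export job there.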

Yes. The Hibernate SessionFactory relates to the persistence context. We basically use Spring Data JPA to connect to the database, and Hibernate takes care of the session factory underneath; the ORM we use, Spring Data JPA, is implemented on top of Hibernate. For concurrent applications we configure the session factory with all the connection information and create a connection pool, making it as robust as possible based on the amount of traffic the application has. An unnecessarily large pool engages resources that are not optimized: if we dedicate too many resources to one section and the application's volume grows, we may run into trouble allocating resources to other sections. So the connection pool is sized accordingly, and the session factory is created from it; if traffic is higher we can allocate more sessions or threads, so we do not starve the threads waiting in the queue. And since we are using Spring Data JPA, the HQL queries we write are highly optimized to serve specific criteria.

A good thing about this ORM is its caching mechanism: whenever we query data, Hibernate first checks the persistence context, and only if the data is not there does it go to the database, which minimizes database traffic from the application side. Pulling data from the persistence context is very useful in scenarios like a multithreaded environment or a high volume of user traffic.
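The "check the persistence context before the database" behaviour described above can be shown with a minimal plain-Java sketch. This is not Hibernate's actual implementation; it is a cache-aside lookup where the loader function stands in for a real database call and is invoked only on a cache miss.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative first-level-cache sketch: hit the "database" only on a miss.
public class FirstLevelCache<K, V> {
    private final Map<K, V> context = new HashMap<>();
    private int loads = 0; // counts simulated database hits

    public V find(K id, Function<K, V> loader) {
        return context.computeIfAbsent(id, key -> {
            loads++;               // cache miss: go to the database
            return loader.apply(key);
        });
    }

    public int databaseLoads() {
        return loads;
    }
}
```

Repeated lookups of the same id then cost no database round trips, which is the traffic reduction the answer describes.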

What is the best way to implement it? Okay. In our application we also use a config service. We implement a config service to keep all configuration-related material outside of the applications: we externalize the application property files so that any application can look up the config service based on its requirements and fetch what it needs directly. We use GitLab as our repository for version control, with a specific repository for maintaining all the information in our property files; that property file is pulled into the config service. So whenever an application or service tries to access some information that is part of the application property file, it asks the config service, which pulls the config data from the repository and provides it to the application. The other way we do it is with environment variables passed from the values.yaml file, since we use Helm charts to maintain the Kubernetes deployments. In the environments section of values.yaml we provide environment variables that are specific to a particular service rather than generic properties. That is how we do it in our application.
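The two-source lookup described above (a service-specific environment variable from Helm overriding a shared property from the config repository) can be sketched in plain Java. The class, key names, and precedence rule here are illustrative assumptions, not the Spring Cloud Config API.

```java
import java.util.Map;
import java.util.Optional;

// Illustrative resolver: env var (e.g. Helm-injected) wins over repo property.
public class ConfigResolver {
    private final Map<String, String> environment;    // Helm-injected env vars
    private final Map<String, String> repoProperties; // properties pulled from Git

    public ConfigResolver(Map<String, String> environment,
                          Map<String, String> repoProperties) {
        this.environment = environment;
        this.repoProperties = repoProperties;
    }

    /** Service-specific env var overrides the shared repository property. */
    public Optional<String> get(String key) {
        String envKey = key.toUpperCase().replace('.', '_'); // db.url -> DB_URL
        if (environment.containsKey(envKey)) {
            return Optional.of(environment.get(envKey));
        }
        return Optional.ofNullable(repoProperties.get(key));
    }
}
```
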

Okay. In a distributed system we still follow the ACID properties. If a system connecting to a database is a monolith, the transactional boundaries are quite simple to maintain, because all the interactions between components happen within the same application. When it comes to a distributed system, there are certain mechanisms we follow. One of them is 2PC, the two-phase commit: when an application interacts with the database, it first gets an acknowledgment, and only if the acknowledgment has been provided does it go for the commit. If another application is waiting for a signal from the first, it checks whether the commit is done and only then proceeds with its own commit. If the acknowledgment is not properly provided, or the commit runs into trouble, we roll back all the transactions entirely. Suppose a service calls another service through a REST API and waits for its response, and that response is used in a third service's transaction: this really helps, because each stage waits for the acknowledgment or a successful commit message, and if there is any issue in the second part of the transaction, we completely roll back all the related transactions.

If three or four consecutive transactions happen within the same request, we roll back all the corresponding transactions that already happened prior to the one that has issues. That is how we follow it in microservices. There is also a three-phase commit, but it is more complicated and needs a high amount of attention, so we do not go for it. Apart from that, one architecture we follow for microservices is Saga, which is an event-based transaction mechanism. It has two variants, orchestration and choreography; we use orchestration, where an orchestrator looks at all the messages so that the next or corresponding transactions can follow based on them.
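The two-phase commit flow described above can be sketched in plain Java: the coordinator first collects prepare acknowledgments from every participant, and only if all of them vote yes does it commit; otherwise everything prepared so far is rolled back. This is a minimal sketch under simplifying assumptions; real 2PC also needs durable logs and timeout handling, which are omitted here.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative 2PC coordinator: all-or-nothing across participants.
public class TwoPhaseCommit {
    public interface Participant {
        boolean prepare();   // phase 1: vote yes/no (the "acknowledgment")
        void commit();       // phase 2a: make the change permanent
        void rollback();     // phase 2b: undo the prepared change
    }

    /** Returns true if the global transaction committed. */
    public static boolean run(List<Participant> participants) {
        List<Participant> prepared = new ArrayList<>();
        for (Participant p : participants) {
            if (p.prepare()) {
                prepared.add(p);
            } else {
                // one "no" vote: roll back everyone prepared so far
                for (Participant done : prepared) done.rollback();
                return false;
            }
        }
        for (Participant p : prepared) p.commit();
        return true;
    }
}
```
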

Yes. On memory leaks: being a Java developer, the main benefit we get out of Java is garbage collection, which we get by default from the JVM. Spring Boot does not differ from Java here, since it is based on Java, and Spring takes care of the beans' life cycle. If beans are created in singleton scope, they get destroyed in the disposable-bean phase, because a bean has a specific life cycle from InitializingBean to DisposableBean. So for singleton scope, the DisposableBean callback and @PreDestroy methods are called by default when the bean has done its work, and singleton beans are disposed of and become eligible for garbage collection. But if the scope is prototype, or request, or any other scope that requires continuous creation of instances of a particular bean, I don't think Spring does the cleanup job for us. In that case we have to implement the destruction explicitly, so that the instances created on each request are dereferenced and can be taken care of by the garbage collector. We have to make sure they hold no lingering references, so that they do not end up in the old generation or survivor space of the heap. So that is pretty much what Spring provides; Java takes care of it mostly, which is a good part of the platform.
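The cleanup point above can be illustrated with a plain-Java sketch: if some registry keeps strong references to per-request (prototype-like) instances and never releases them, they can never be garbage collected. The class and method names are made up for illustration, not Spring internals; "release" plays the role of the explicit destroy step the answer describes.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative leak scenario: strong references pin per-request instances.
public class PrototypeRegistry {
    private final Map<String, Object> live = new HashMap<>();

    public Object create(String requestId) {
        Object instance = new Object(); // stands in for a prototype bean
        live.put(requestId, instance);  // strong reference: potential leak
        return instance;
    }

    /** Explicit release, analogous to invoking a destroy method ourselves. */
    public void release(String requestId) {
        live.remove(requestId); // drop the reference so GC can reclaim it
    }

    public int liveCount() {
        return live.size();
    }
}
```

Without the `release` call, `live` grows with every request, which is exactly the kind of leak the garbage collector cannot fix on its own.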

Okay, I find two issues here. First, the double lock is not there: if one thread checks at the point where the instance is null and another thread checks it at the same time, there can be ambiguity. So the critical section, the `if (instance == null)` check, should be inside a synchronized block: we should write `synchronized (Singleton.class)` inside the getInstance method, with a second null check of the instance inside the block, which is beneficial in the long term for multithreaded projects. The other thing I find is that the static singleton instance field, the class variable, should be declared volatile. Volatile makes that reference variable visible across threads: its value will be picked up from main (heap) memory rather than a thread's local cache, because every thread has its own cache, and if the field is read from a stale cache there might be different data. So the instance variable should be made volatile, so that every time it is read for the null check, it is read from the heap instead of the thread cache.
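The corrected pattern this answer describes is classic double-checked locking: a volatile instance field plus a second null check inside a synchronized block. The class name below is illustrative.

```java
// Double-checked locking singleton with both fixes from the answer applied.
public class Singleton {
    // volatile: the write to 'instance' is visible to all threads and cannot
    // be reordered to publish a half-constructed object
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check: no locking cost
            synchronized (Singleton.class) {     // lock only on the slow path
                if (instance == null) {          // second check under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```
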

If we are extending a class and not making any changes in the overridden method, I don't think we should override it just to call super; this is part of the inheritance concept. If there is no new implementation in the child class, it is not a good idea to implement the override at all: we can keep it simple, leave the method out, and keep the behavior as it is, because calling only super is not recommended since the parent class already does the same thing. If there is some new implementation we are looking for, then it is fine; but otherwise I don't think an override that only calls super is a good idea.
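The point above can be shown with a tiny example (class names are made up): an override whose body only calls super is behaviorally identical to not overriding at all, while an override is justified when it actually changes or extends the parent's behavior.

```java
// Redundant vs meaningful overrides.
class Parent {
    public String greet() {
        return "hello";
    }
}

class RedundantChild extends Parent {
    @Override
    public String greet() {
        return super.greet(); // adds nothing: same as not overriding
    }
}

class UsefulChild extends Parent {
    @Override
    public String greet() {
        return super.greet() + ", world"; // extends the parent's behaviour
    }
}
```
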

Okay. Spring Boot is something we already use for microservice applications, and we already follow the Saga pattern, but if we are talking about Spring Boot only, there are certain fallback mechanisms that can be implemented within it. There is a library called Hystrix, maintained by Netflix, from which we get the benefit of circuit breaking for a particular request: if there is a failing response from a connected service, we can retry that service a certain number of times, and if the service is still not up, we can provide some default response to that request, which Hystrix takes care of as part of its fallback mechanism. Apart from that we can apply the bulkhead pattern as well: we can limit a certain thread pool to serving certain requests, so that if one application is not responding, all the threads don't pile into the same section, wait in the queue, and bring down the whole system. If a certain number of threads are already engaged for a specific service, no new thread should be allocated to that section of code. That is another way we can provide more seamless failover.

In the event of a system crash, in an event-driven architecture, we can follow asynchronous processes as well. If we are waiting for certain data from another service, I think it is always better to go for the pub-sub model; Kafka is something we can use so that no thread is waiting for a long period of time to get a certain amount of data. It is always better to release the existing thread and admit new ones, so that if the application grows and requests come in higher volume, we can accommodate more threads. Apart from that there is reactive programming, which I am currently trying to learn; that is also something we can build a solution with on Java 17. Reactive programming likewise does not allow a thread to wait for a long period of time, so the change in mindset of adopting it is also good.
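The retry-then-fallback behavior described above can be sketched in plain Java. This is not the Hystrix API; the class below is an illustrative assumption showing only the idea: try the remote call a bounded number of times, and if it keeps failing, serve a default response instead of letting the failure propagate.

```java
import java.util.function.Supplier;

// Illustrative retry-with-fallback helper (not Hystrix).
public class FallbackCaller {
    /** Try 'call' up to maxAttempts times; on repeated failure return fallback. */
    public static <T> T callWithFallback(Supplier<T> call, int maxAttempts, T fallback) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                // swallow and retry; a real circuit breaker would also track
                // failure rates and open the circuit to skip doomed calls
            }
        }
        return fallback; // default response once retries are exhausted
    }
}
```

A real circuit breaker adds state (closed/open/half-open) on top of this, so that a failing dependency is skipped entirely for a while instead of being retried on every request.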

How would you transition a Spring Boot service to a non-blocking model using WebFlux, and what are the key performance indicators to monitor during the transition? So, that is something I am trying to learn currently, because as Java developers we mostly rely on the blocking part of the program: we usually wait for certain responses and do certain activities based on them. But this non-blocking mechanism helps a lot in releasing the current thread and accommodating more threads in the application, so that the application can be faster in producing responses to clients. As for non-blocking with WebFlux, I haven't actually used WebFlux as of now, but I can talk about CompletableFuture, which I have used. It is not completely non-blocking; it waits for certain things to happen. But if there are two services we are calling from our service, we can at least do supplyAsync with a method reference, get each response whenever it is ready for both service calls, and then do CompletableFuture.allOf so that when both responses are ready we can combine them and produce the result to the controller. In that scenario we still use join, though, and join is something that waits for a certain period of time, so it is not fully non-blocking. The truly non-blocking construct is Mono; I have heard about and read about Mono, and it does not wait for a particular object to be returned.

So that is something we can use. Non-blocking, reactive programming boosts performance in terms of thread management: more work can be accommodated in a single application, and we can reduce the queue of waiting threads as requests come in.
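The CompletableFuture approach described above can be shown concretely: fire two independent "service calls" with supplyAsync, then merge both results once they are ready. The service calls here are simulated locally (the names are illustrative), and, as the answer notes, the final join() still blocks the calling thread, unlike a fully reactive Mono.

```java
import java.util.concurrent.CompletableFuture;

// Two simulated service calls combined asynchronously.
public class CombineCalls {
    static String callUserService() {
        return "user:42";   // stands in for a remote REST call
    }

    static String callOrderService() {
        return "orders:3";  // stands in for a second remote call
    }

    public static String fetchCombined() {
        CompletableFuture<String> user =
            CompletableFuture.supplyAsync(CombineCalls::callUserService);
        CompletableFuture<String> orders =
            CompletableFuture.supplyAsync(CombineCalls::callOrderService);
        // thenCombine merges both results once each future completes;
        // join() is the remaining blocking step a WebFlux Mono would avoid
        return user.thenCombine(orders, (u, o) -> u + "|" + o).join();
    }
}
```
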

In this scenario, yes: if the application is really large and is used by end users, like an e-commerce site getting traffic on a day-to-day basis at a large volume, say Amazon or Flipkart, then adopting reactive programming is beneficial, because the more threads and requests they can accommodate in the application, the better, so that the user does not have to wait a long period of time to get a response from the application. Thank you.