Vetted Talent

Farrukh Masroor


Professional Software Engineer with a strong background in Java and the Spring Framework. Experienced in developing RESTful APIs with Spring Boot in both monolithic and microservice architectures. Proficient in MySQL database management and React JS for creating dynamic user interfaces. Familiar with the Kafka messaging system.

  • Role

    Software Engineer

  • Years of Experience

    4.10 years

Skillsets

  • Java
  • AWS
  • Spring Boot
  • Spring JPA
  • Spring Batch
  • Ext JS
  • Kubernetes
  • Mongo DB
  • NoSQL
  • SQL

Vetted For

11 Skills
  • Intermediate Java Spring Boot Developer (AI Screening): 69%
  • Skills assessed: CSS, Hibernate, RESTful API, Spring Boot, Spring MVC, AWS, Git, HTML, Java, JavaScript, SQL
  • Score: 62/90

Professional Summary

4.10 Years
  • Jul, 2024 - Present (1 yr 2 months)

    Software Engineer

    Deutsche Telekom Digital Lab
  • Sep, 2020 - Jun, 2024 (3 yr 9 months)

    Senior Software Engineer

    Adeptia India Pvt Ltd

Applications & Tools Known

  • Ext JS
  • Spring Batch
  • Kubernetes
  • CI/CD pipeline
  • Maven
  • Google Kubernetes Engine
  • JSON
  • Apache Maven
  • REST API
  • GitHub
  • Oracle
  • Jenkins
  • ReactJS
  • MySQL
  • Java
  • J2EE

Work History

4.10 Years

Software Engineer

Deutsche Telekom Digital Lab
Jul, 2024 - Present (1 yr 2 months)
    Working on the Customer Data Platform (CDP) project, helping create a unified 360-degree customer view to ensure data consistency across teams. Utilized event-driven architecture, NoSQL databases, and reactive programming principles to enhance system performance and optimize API response times.

Senior Software Engineer

Adeptia India Pvt Ltd
Sep, 2020 - Jun, 2024 (3 yr 9 months)
    Worked as a full stack developer in the core product development team, managing the GUI and backend framework. Designed and developed the License Microservice for the product, leveraging Spring Scheduler and integrating with the Kubernetes API. Led a successful refactoring effort of the old Logs cleanup implementation using Spring Batch, achieving a 3x reduction in cleanup time and improving system efficiency. Implemented Git repository cloning functionality to enhance product features, enabling product forms to be dynamically populated from properties defined in Git repository files.

Achievements

  • Spearheaded the development of full-stack solutions.
  • Innovated by introducing stored procedure support feature.
  • Engineered advanced solutions for transaction execution.
  • Pioneered the improvement of product functionality for event triggering.
  • Led successful refactoring effort of the old Logs cleanup implementation.
  • Drove the implementation of Git repo cloning for product enhancement.
  • Orchestrated the seamless transition of the product from monolithic to Microservices architecture.

Major Projects

2Projects

Customer Data Platform

Jul, 2024 - Present (1 yr 2 months)
    Created a unified 360-degree customer view to ensure data consistency across teams. Utilized event-driven architecture and reactive programming principles to enhance performance.

Adeptia Connect

Sep, 2020 - Jun, 2024 (3 yr 9 months)
    Designed and developed the License Microservice, implemented Git repository cloning functionality, and refactored logs cleanup implementation to reduce execution time.

Education

  • B.Tech (CSE)

    Teerthanker Mahaveer University (2020)

Certifications

  • Java

    Udemy (Aug, 2018)
  • Problem Solving Basics

    HackerRank
  • Problem Solving Intermediate

    HackerRank
  • Java Certification

    HackerRank

Interests

  • Badminton

AI Interview Questions & Answers

    Hi, I am Farrukh Masroor. I have been working as a senior software engineer at Adeptia India Private Limited for almost 3.5 years as part of the core product development team, working on both the front end and the back end. Adeptia is a product company; our product, Adeptia Connect, is a business integration tool. Using this tool, our clients can integrate within their own environment and with other external clients. The major technology stack used in Adeptia Connect is the Ext JS framework on the front end and Java-related technologies like Spring Boot, Hibernate, and Spring JPA, with a microservices architecture, on the back end. On the database side the product supports all major databases, but for development purposes we use MySQL. I have been working on migrating the monolithic architecture to a microservices architecture; we ended up with roughly 12 to 13 microservices. I have also been maintaining the core of the application and have delivered many functionalities for the product. The latest one: we migrated our cleanup process, a microservice used to clean the data logs and data files generated by the application. Earlier it used property files and plain JDBC to perform this task, but now we use the Spring Batch API for batch and bulk operations. This improved the performance of the task by more than 3x, and the boilerplate code became much smaller. Besides that, I have worked on the license microservice.
    The license microservice enforces how much CPU a client can use. For example, if a client's license allows 100 CPU cores, we track that in the Kubernetes cluster: the client can run multiple instances of the microservices, say 2 of one service and 5 of another, but the combined usage of all the microservices must not exceed the 100 CPU cores allotted to that client. For that we used the Kubernetes API client, along with a scheduled job that checks whether there is any quota violation, that is, whether the usage has exceeded the limit. Beyond that, I have delivered many other features on both the front end and the back end, and I handle the day-to-day bug fixes that arise in our clients' product environments. Recently I investigated a performance issue where, after analyzing our code, we found that the client's database was slow rather than our code. That's a summary of my work at Adeptia.

    How do you architect a scalable database schema that supports Hibernate's lazy-loading and eager-fetching strategies? In Hibernate, suppose I have a User table and a Product table, with a mapping where a user can have multiple products, i.e., a one-to-many mapping. With lazy loading we are saying that when I fetch the User object I don't want the Product objects yet; I only want them when I specifically call getProducts() on that user. Lazy loading helps us utilize resources, because we only ask for things when we actually need them, which improves performance. Eager loading, on the other hand, loads all the data in one call: when I call for the user, it loads all the user details along with all the products that user has. Eager loading can be faster in that it fetches everything in one go, while with lazy loading we have to make additional calls to the database, because when we later need the products we issue another query. Both have their own advantages and disadvantages. That covers the lazy- and eager-loading strategies in Hibernate.
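    The one-to-many mapping described above can be sketched with standard JPA annotations. A minimal, hedged illustration (the User/Product entity names follow the answer's example, not a real codebase; this is a fragment that assumes a running JPA provider):

```java
import jakarta.persistence.*;
import java.util.List;

@Entity
class User {
    @Id
    @GeneratedValue
    private Long id;

    // LAZY: products are loaded only when getProducts() is first accessed,
    // which triggers a separate query at that point.
    @OneToMany(mappedBy = "user", fetch = FetchType.LAZY)
    private List<Product> products;

    public List<Product> getProducts() { return products; }
}

@Entity
class Product {
    @Id
    @GeneratedValue
    private Long id;

    // EAGER: the owning user is fetched together with the product.
    @ManyToOne(fetch = FetchType.EAGER)
    private User user;
}
```

    Note that LAZY on a collection is Hibernate's default for @OneToMany; accessing the collection outside an open session raises LazyInitializationException, which is the usual trade-off to keep in mind.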

    How do you handle exceptions globally in a Spring Boot application while following RESTful standards? We can have one class to centralize global exception handling and annotate its methods with @ExceptionHandler, passing the type of exception each method should handle. Concretely, we create a class annotated with @ControllerAdvice and put our exception-handler methods inside it. In these methods we handle the different kinds of exceptions arising from our controllers: if I want a single catch-all method, I can declare it for the Exception class, so every exception thrown in our controllers goes to it. If I want finer control, say runtime exceptions going to one method and my custom exceptions going to another, then in the same @ControllerAdvice class I add multiple methods with different signatures, each handling a different exception type in our Spring Boot application.
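    A minimal sketch of the pattern described above, assuming a Spring Boot web application on the classpath (NotFoundException is an invented application exception for the demo, not a Spring class):

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

// Hypothetical application-defined exception used for the 404 case.
class NotFoundException extends RuntimeException {
    NotFoundException(String msg) { super(msg); }
}

// One class handles exceptions thrown by any controller in the application.
@ControllerAdvice
class GlobalExceptionHandler {

    // Specific handler: the custom exception maps to a RESTful 404 response.
    @ExceptionHandler(NotFoundException.class)
    ResponseEntity<String> handleNotFound(NotFoundException ex) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND).body(ex.getMessage());
    }

    // Catch-all handler: anything unhandled maps to a 500 response.
    @ExceptionHandler(Exception.class)
    ResponseEntity<String> handleGeneric(Exception ex) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                             .body("Unexpected error");
    }
}
```

    Spring dispatches to the most specific matching handler, so the NotFoundException method wins over the Exception fallback.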

    How would you implement a data access layer using the ORM provided by Hibernate? For the data access layer we can have a repository interface that extends JpaRepository. With this interface, Spring Boot auto-configures everything else for us: if we define the database properties in application.properties, have this interface on the classpath, and use @SpringBootApplication, Spring Boot handles the rest. From the interface extending JpaRepository we get predefined methods to work with the database: to save an object we use the repository's save() method, to retrieve everything we use findAll(), and to get by ID we use findById(). JpaRepository also gives us derived queries based on "find by" criteria: if my entity class has a field called name, I can declare a method findByName(), which returns all matching rows in the table with the given name. Alternatively we can extend CrudRepository, but JpaRepository is preferred because it adds JPA-specific functionality such as pagination and sorting through the Pageable interface.
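    A sketch of such a repository, assuming Spring Data JPA is on the classpath and a User entity with a "name" field exists (both are assumptions for illustration):

```java
import java.util.List;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.JpaRepository;

// Spring Data generates the implementation at runtime; no SQL is written
// for the methods below.
interface UserRepository extends JpaRepository<User, Long> {

    // Derived query: the method name is parsed into
    // "select u from User u where u.name = ?1".
    List<User> findByName(String name);

    // Pagination and sorting come from the Pageable argument.
    Page<User> findByNameContaining(String fragment, Pageable pageable);
}
// Inherited from JpaRepository: save(entity), findAll(), findById(id),
// delete(entity), count(), and more.
```

    CrudRepository would give only the basic save/find/delete operations; extending JpaRepository adds the paging, sorting, and batch variants mentioned in the answer.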

    How do you validate user input before processing it in a web application? To validate user input before processing, we can use the Bean Validation annotations Spring supports, like @NotNull on our entity class so a field cannot hold null values. We can also use regexes: for example, to enforce an email format we can attach a regex that validates the field as an email (or use the @Email annotation directly). There are annotations for numeric constraints as well, so we can validate that a field holds a number.
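    The annotations express these rules declaratively inside Spring; the underlying regex idea can be shown in plain Java. A minimal sketch (the pattern below is a common simplified email regex, not the full RFC rule, and the class name is invented for the demo):

```java
import java.util.regex.Pattern;

public class EmailValidator {
    // Simplified email pattern: local part, '@', domain, dot, TLD of 2+ letters.
    private static final Pattern EMAIL =
            Pattern.compile("^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$");

    public static boolean isValidEmail(String input) {
        // Reject nulls first (the @NotNull part of the check), then match.
        return input != null && EMAIL.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidEmail("user@example.com")); // true
        System.out.println(isValidEmail("not-an-email"));     // false
        System.out.println(isValidEmail(null));               // false
    }
}
```

    In a Spring controller the same rule would be written as @Email on the DTO field plus @Valid on the handler parameter, with the framework running the check before the method body executes.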

    How do you optimize a Spring Boot application's configuration for efficient database connection pooling? When we use Spring Boot we get HikariCP connection pooling by default. We can define a configuration class that returns our own DataSource bean, and on that data source we set the configuration for the pool, like the maximum pool size, the minimum number of idle connections, and the maximum idle timeout. With these settings we can optimize the application's connection pooling. Alternatively, if we don't want Hikari, we can plug in other pooling libraries such as C3P0.
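    A sketch of tuning the default HikariCP pool programmatically; the setters are HikariCP's real configuration methods, but the URL, credentials, and values are illustrative only:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class DataSourceConfig {

    @Bean
    DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/app"); // illustrative URL
        config.setUsername("app");
        config.setPassword("secret");
        config.setMaximumPoolSize(10); // hard cap on total connections
        config.setMinimumIdle(2);      // idle connections kept warm
        config.setIdleTimeout(30_000); // ms before an idle connection is retired
        return new HikariDataSource(config);
    }
}
```

    The same knobs are also exposed declaratively as spring.datasource.hikari.* properties in application.properties, which avoids the custom bean when no extra logic is needed.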

    Review the following Java snippet with Spring annotations. Explain why this code might lead to a runtime issue, and how would you resolve it? (The snippet is a @Service class with an @Autowired field `UserRepository userRepository` and a method `findUser(String username)` delegating to the repository.) One thing is the derived query method findByUsername: the part after "findBy" must match the entity field name in camel case, so a mismatch there can cause a problem when the repository is created. Apart from that, the @Autowired private field injection can cause issues; we can resolve it by adding a constructor to the UserService class that takes a UserRepository argument and initializes the field with this.userRepository = userRepository. Spring will then provide the UserRepository object through the constructor by itself.
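    The constructor-injection fix can be demonstrated without Spring at all. A plain-Java sketch (UserService/UserRepository mirror the snippet's names; the record, the Demo class, and the in-memory stub are invented for the demo):

```java
import java.util.Map;

// Single-method interface, so a lambda can stand in for the real repository.
interface UserRepository {
    User findByUsername(String username);
}

record User(String username) {}

class UserService {
    private final UserRepository userRepository;

    // Constructor injection: the dependency is explicit, final, and trivially
    // replaceable with a stub in tests. Spring autowires a single constructor
    // automatically, with no @Autowired annotation needed.
    UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    User findUser(String username) {
        return userRepository.findByUsername(username);
    }
}

public class Demo {
    public static void main(String[] args) {
        Map<String, User> store = Map.of("farrukh", new User("farrukh"));
        UserService service = new UserService(store::get); // stub repository
        System.out.println(service.findUser("farrukh").username()); // farrukh
    }
}
```

    Field injection, by contrast, leaves the field null when the class is constructed outside the container (e.g. in a unit test with `new UserService()`), which is the runtime issue the question is pointing at.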

    In this snippet, the generated ID field is of type Long, which is an object type. My point is that in the entity class we should not declare the ID with the Long wrapper class but with the default primitive long type.

    How do you improve application startup time and resource consumption when deploying on a cloud platform? To improve startup time, we should make sure that whatever resources our application uses, we have allocated adequate resources on the cloud. For example, if I deploy my application with a replica set of minimum 2, I should make sure the cluster environment provides enough CPU and memory so that the application can start easily and we get a faster startup time. If we provide very little memory or CPU on the cloud, the application will take more time to start because it doesn't have enough resources available. To keep resource consumption in check, we can use optimization techniques: monitoring with Spring Actuator, and database optimization, making our queries fast enough that we get quick response times from the database.

    We use the Ext JS framework in our application, so we don't use Thymeleaf. I don't have much exposure to the Thymeleaf API, which is why I'm not able to answer this question.

    How do you optimize JavaScript's performance when serving dynamic web content? To optimize JavaScript execution, we can use a modern JavaScript framework like React, because React is a highly optimized framework: when we build a React application, the updates to the real DOM are kept very small, so content loads faster and more smoothly in our application. That's how we can use JavaScript efficiently.