Sudipta Chanda

Vetted Talent
Lead Software Engineer, currently working with Oracle, with 7+ years of hands-on experience in the software industry across the complete product lifecycle: requirements, analysis, data modeling, design, development, testing, documentation, and delivery. Demonstrates responsible leadership qualities.
  • Role

    Project Lead

  • Years of Experience

    9 years

Skillsets

  • RESTful APIs
  • Vue.js
  • VS Code
  • React.js
  • OAuth 2.0
  • Node.js
  • MongoDB
  • Microservices
  • IntelliJ
  • Eclipse
  • CI/CD Pipelines
  • AWS
  • AngularJS
  • Agile Scrum
  • SQL Injection Prevention
  • Kubernetes
  • MySQL
  • Jira
  • HTML5
  • Helm
  • GitLab CI/CD
  • CSS3
  • Python - 2 Years
  • Spring Boot - 6 Years
  • Jenkins
  • JavaScript
  • Docker
  • Java - 9 Years

Vetted For

6 Skills
  • Staff Engineer - AI Screening
  • Result: 57%
  • Skills assessed: CI/CD, Python, Java, Microservices, Spring Boot, System Design
  • Score: 51/90

Professional Summary

9 Years
  • Jan, 2021 - Present 4 yr 11 months

    Project Lead

    Oracle India Pvt. Ltd
  • Sep, 2020 - Dec, 2020 3 months

    Big Data Engineer

    Equifax
  • Jan, 2020 - Oct, 2020 9 months

    Project Engineer

    Sath InfoTech
  • Feb, 2019 - Jan, 2020 11 months

    Principal Software Engineer

    Nextera Energy / Florida Power Light
  • Feb, 2018 - Feb, 2019 1 yr

    Senior Software Engineer

    The Home Depot
  • Sep, 2017 - Feb, 2018 5 months

    Senior Full Stack Developer

    Maritz Motivation
  • Mar, 2016 - Sep, 2017 1 yr 6 months

    Full Stack Developer

    JP Morgan Chase & Co

Applications & Tools Known

  • Eclipse
  • IntelliJ
  • VS Code
  • Maven
  • Gradle
  • Git
  • SVN
  • AWS
  • Azure
  • GCP
  • OCI

Work History

9 Years

Project Lead

Oracle India Pvt. Ltd
Jan, 2021 - Present 4 yr 11 months
    Enhanced file management processes by designing backend and frontend projects; authored detailed design documents; standardized data modeling; developed microservices using Helidon; deployed them with Docker in Kubernetes pods; and coordinated technical teams for successful implementations.

Big Data Engineer

Equifax
Sep, 2020 - Dec, 2020 3 months
    Developed big data applications using Apache Beam and Kafka, streamlined DevOps with GitLab, deployed stacks to GCP using Kubernetes and Helm, employed MongoDB as the database solution.

Project Engineer

Sath InfoTech
Jan, 2020 - Oct, 2020 9 months
    Led a team of developers, designed and developed microservices using Spring Boot, streamlined DevOps with Kubernetes and GitLab, secured the application using Spring Security, deployed using AWS.

Principal Software Engineer

Nextera Energy / Florida Power Light
Feb, 2019 - Jan, 2020 11 months
    Developed microservices with Spring Boot, integrated front-end interfaces, implemented logging for debugging, deployed applications on AWS, and streamlined CI processes.

Senior Software Engineer

The Home Depot
Feb, 2018 - Feb, 2019 1 yr
    Resolved QA defects, developed microservices with Spring Boot, configured AWS EC2 instances, built AngularJS components, and implemented Spring framework modules.

Senior Full Stack Developer

Maritz Motivation
Sep, 2017 - Feb, 2018 5 months
    Developed UML diagrams for design, implemented Spring Boot microservices, built dynamic user interfaces with AngularJS, configured AWS EC2 instances.

Full Stack Developer

JP Morgan Chase & Co
Mar, 2016 - Sep, 2017 1 yr 6 months
    Conducted application design, developed batch applications using Core Java, resolved SQL injection threats, and automated tasks with Korn-shell scripts on UNIX.

Achievements

  • Recipient of a Doctoral Research award. M.Tech in Geo-Exploration from the Indian Institute of Technology Bombay, Mumbai, Maharashtra. Worked on a geotechnical project with the Indian Government as an intern.

Education

  • Doctorate in Engineering Science

    Southern Illinois University (2016)
  • Master of Technology in Geo Exploration

    Indian Institute of Technology, Bombay (2012)
  • M.Sc. in Earth Science

    Presidency University (2010)
  • B.Sc. in Geology

    Presidency University (2008)

Certifications

  • Oracle Cloud Infrastructure 2023

  • Oracle Cloud Infrastructure 2023 - Certified Architect

  • Oracle Certified Foundations Associate, Java

AI-interview Questions & Answers

Hello, my name is Sudipta Chanda, and I have been working as a developer for the last nine years. Currently I work for Oracle; my designation is Project Lead. I have been using microservices on the back-end side for quite some time, almost six years now. I used Spring Boot earlier, and currently I am using the Helidon framework to build microservices. Our architecture is basically event driven, so we use Oracle Streaming Service, which is very similar to Kafka, for messaging; the communication between our services happens through messages. Our build tool is Maven, we deploy through Jenkins, we have Dockerfiles created for all of our services, and we use Helm charts in Kubernetes to deploy to Oracle Cloud. The Java versions I am using right now are Java 20 and 21, and in some places 17 as well, and I am quite well versed in Java. So that's pretty much it; I have been working as a back-end developer for quite a long time.

Okay, so the question, if I understand correctly, is how we can integrate Python into Java-based microservices and what the potential pitfalls, meaning the negative sides, are. Services written in different languages that talk to each other over the network are basically called a polyglot architecture. If we have two different microservices, one written in Java and one written in Python, we can communicate between them over HTTP or HTTPS. That is pretty simple: on the Java side we can use Spring Boot, and on the Python side we can expose a normal HTTP API with the usual routing. The potential pitfalls I can think of: first, network latency, because the service-to-service communication adds latency. Second, error handling, such as timeouts and socket timeouts; since the services are written in different languages, we have to add that handling manually in each service. Third, data serialization can be a big factor, because data serialized on the Java side and deserialized in Python can take more time and the formats have to stay compatible. And lastly, versioning: the API versions on both sides have to match. We can also integrate using message queues, which can be easier: one microservice produces a message and the other one listens for that message and consumes it. That can be RabbitMQ, Kafka, OSS (Oracle Streaming Service), and so on. That's it.
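
Below is a minimal, illustrative Java sketch of the HTTP-based integration described in this answer. It assumes a hypothetical Python service reachable at http://python-service:8000/process that accepts and returns JSON; the URL, payload shape, and timeout values are placeholders, not details from the original answer.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class PythonServiceClient {

        // Hypothetical endpoint exposed by the Python microservice.
        private static final String PYTHON_SERVICE_URL = "http://python-service:8000/process";

        private final HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))   // guards against the latency pitfall
                .build();

        public String callPythonService(String jsonPayload) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(PYTHON_SERVICE_URL))
                    .timeout(Duration.ofSeconds(5))      // request-level timeout for error handling
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(jsonPayload))
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                throw new IllegalStateException("Python service returned " + response.statusCode());
            }
            return response.body();                      // JSON keeps serialization language-neutral
        }
    }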

How would you design a distributed caching mechanism in microservices using Spring Boot? The first thing to say is that caching makes services respond much faster. There are two different types of caching we can use. One is local caching, where the cache lives inside the application memory directly: if we have ten microservices, each service can have its own cache. The pro is that it is easy to configure; the con is that if we are running a distributed system with multiple instances of each service, the cache is not shared between them, so it is much less useful. The better approach in that scenario is distributed caching with something like Redis, which is an in-memory system and therefore very fast. There are other alternatives like Memcached and Hazelcast, but Redis is well known and well documented, so I think it is the easiest to use. We basically need a Redis service running in our cluster; whatever data we need to cache, we push into Redis, and all the different services first try to get the data from the cache and, if it is not there, fall back to the usual mechanism. So in short: local caching is easy to configure but not useful in a distributed setup, so a distributed Redis cache is the better choice. In Spring Boot we basically need a configuration class for Redis and then use it, and that's pretty much it.
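
Below is a minimal Spring Boot sketch of the Redis-backed, cache-aside setup described in this answer. The cache name, TTL, and the Product lookup are hypothetical placeholders, and it assumes spring-boot-starter-data-redis and spring-boot-starter-cache are on the classpath.

    import java.io.Serializable;
    import java.time.Duration;

    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.cache.annotation.EnableCaching;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.redis.cache.RedisCacheConfiguration;
    import org.springframework.data.redis.cache.RedisCacheManager;
    import org.springframework.data.redis.connection.RedisConnectionFactory;
    import org.springframework.stereotype.Service;

    // The "configuration class for Redis" mentioned in the answer.
    @Configuration
    @EnableCaching
    class CacheConfig {

        @Bean
        public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
            RedisCacheConfiguration defaults = RedisCacheConfiguration.defaultCacheConfig()
                    .entryTtl(Duration.ofMinutes(10));   // illustrative TTL
            return RedisCacheManager.builder(connectionFactory)
                    .cacheDefaults(defaults)
                    .build();
        }
    }

    @Service
    class ProductService {

        // Cache-aside: try Redis first, fall back to the usual lookup on a miss.
        @Cacheable(value = "products", key = "#id")
        public Product findProduct(String id) {
            return loadFromDatabase(id);                 // hypothetical slow lookup
        }

        private Product loadFromDatabase(String id) {
            return new Product(id);
        }

        record Product(String id) implements Serializable {}
    }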

Okay, so if we have an AI model in Python and we need to integrate it into a Spring Boot application, how can we do that? I think the best way to approach it is event driven, but the easiest way I can think of is a normal RESTful API; we can also use a messaging queue, and gRPC connections are possible too. We could even use a JVM-based Python runtime, but that is not very useful. So I think the three best ways to connect are a normal REST API call, gRPC, and a message queue. The simplest approach is: from the API we send the data to the Python side, get back whatever prediction the AI model produces, and return it through the API. The same thing works with a message queue: the Spring Boot service sends a message to the queue, the Python side reads it, the AI model does its work, creates a JSON body with the prediction, and sends it back on a queue, and then a Java client reads it and does further processing. gRPC works the same way: the call comes from the Java service, goes to Python, and Python returns the answer. It's as simple as that. Python also has some specific libraries for this kind of interoperability, like Jython and Py4J, which can be useful.
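
Below is a rough Spring Kafka sketch of the message-queue variant described in this answer. The topic names (prediction-requests, prediction-results), the JSON payloads, and the assumption that a Python consumer sits on the other side are all illustrative.

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Service;

    // Java side of the queue-based integration: publish the input for the model,
    // then listen for the prediction the Python side writes back.
    @Service
    public class PredictionBridge {

        private final KafkaTemplate<String, String> kafkaTemplate;

        public PredictionBridge(KafkaTemplate<String, String> kafkaTemplate) {
            this.kafkaTemplate = kafkaTemplate;
        }

        // The Spring Boot service sends the input features as a JSON message.
        public void requestPrediction(String requestId, String featuresJson) {
            kafkaTemplate.send("prediction-requests", requestId, featuresJson);
        }

        // The Python AI model is assumed to consume the request, run inference,
        // and publish a JSON body with the prediction to this topic.
        @KafkaListener(topics = "prediction-results", groupId = "spring-boot-app")
        public void onPrediction(String predictionJson) {
            // Continue processing with the model's output here.
            System.out.println("Received prediction: " + predictionJson);
        }
    }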

Outline a method using Java to prevent overloading a microservice during peak traffic time. I would build a rate limiter: basically a separate service (or filter) that checks how many calls a particular service is getting per minute, and if the count goes above the threshold, say 30 or 40 or whatever the requirement is, we stop the call and respond with something like "too many requests, retry after a certain period of time". It is fairly simple: for the rate limiter we keep a Redis cache or something similar and store the call counts there along with the time, one counter per minute. The rate limiter constantly checks whether the count is above the threshold; if it is, it cuts off further calls to that particular service for the window. I think that is scalable and easy to implement, so it is the best approach. We obviously need some extra memory for it: a store for the configured thresholds and the caching layer for the per-minute counters. The service reads the threshold from its configuration and checks the counter in Redis, and because the check hits the cache it stays fast.
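
Below is a minimal in-memory sketch of the per-minute rate limiting idea described in this answer. The threshold and key format are illustrative; in the distributed setup the answer describes, the counters would live in Redis rather than a local map so that all instances share them.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    // Fixed-window limiter: count calls per client per minute and reject requests
    // once the threshold is exceeded. Expired windows should be evicted
    // periodically; that housekeeping is omitted here for brevity.
    public class FixedWindowRateLimiter {

        private final int maxRequestsPerMinute;
        private final ConcurrentHashMap<String, AtomicInteger> counters = new ConcurrentHashMap<>();

        public FixedWindowRateLimiter(int maxRequestsPerMinute) {   // e.g. 30 or 40
            this.maxRequestsPerMinute = maxRequestsPerMinute;
        }

        public boolean allowRequest(String clientId) {
            long currentMinute = System.currentTimeMillis() / 60_000;
            String windowKey = clientId + ":" + currentMinute;       // one counter per client per minute
            AtomicInteger count = counters.computeIfAbsent(windowKey, k -> new AtomicInteger());
            return count.incrementAndGet() <= maxRequestsPerMinute;  // false => respond "try again later"
        }
    }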

How to implement a Python-based feed to automate health checks across a network of Spring Boot services?

Java snippet below. Can you identify the problem with the singleton design pattern implementation here and suggest how to fix the issues in a multithreaded environment? Yes: in this particular snippet we are not using a synchronized block to take a lock in the getInstance method, so if multiple threads are running at the same time it can create multiple instances, because the method is not thread-safe. To fix it, we basically have to use a synchronized block. We can put synchronized on the method signature itself, or, better, after the first null check we add a synchronized block, check again inside it whether the instance is null, and only then create and return the class's instance. With that, it will be thread-safe in a multithreaded environment, and I think that is the best approach for a singleton. Other than that, everything is fine, because we have a private constructor and we return the instance directly. So basically, just use a synchronized block inside the method, or on the method itself, for getInstance.
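
Since the original snippet is not reproduced here, the following is a generic sketch of the fix this answer describes: double-checked locking with a synchronized block inside getInstance(). The class name is a placeholder.

    // Thread-safe singleton using a synchronized block with a double check.
    public class AppConfig {

        // volatile makes the fully constructed instance visible to all threads.
        private static volatile AppConfig instance;

        // Private constructor prevents direct instantiation.
        private AppConfig() {
        }

        public static AppConfig getInstance() {
            if (instance == null) {                     // first check, no lock
                synchronized (AppConfig.class) {        // lock only when initialization may be needed
                    if (instance == null) {             // second check inside the lock
                        instance = new AppConfig();
                    }
                }
            }
            return instance;
        }
    }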

Review the block of Java code for a RESTful service. Can you spot the issue that might arise from the exception handling?

Devise a method to ensure consistent data across multiple microservices. What I understand from this is that we have multiple microservices and the data is flowing from one service to another and then another, and we need to make sure the data across the different databases stays consistent and correct: if something fails in one of the microservices, that has to be reflected in the databases of all the others. I think the proper way to handle that is the Saga design pattern for microservices, which fits an event-driven architecture and can be choreographed or orchestrated. In this situation, when data goes from one service to the next, we have a success or failure outcome for each step. If the next microservice successfully processes the data, it sends a success message to a messaging service; the previous microservice reads that message and commits its data. If it fails, the previous microservice rolls back whatever it had committed earlier. That way we have an easy path to go back across microservices if something fails. The other option is an orchestration layer on top of all the services: we gather the data from all the services in that layer and do the processing there in a single transaction, so if something fails we can roll back easily and there is no inconsistent state. So the first one is the two-phase style, where we commit first, let the data flow onwards, keep listening on a messaging service for the success or failure information, and do the processing based on that; and the orchestration layer is much simpler, because in a single transaction block we get all the data from all the different microservices and do the transaction. Yeah, that's it.
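
Below is a rough choreography-style sketch of the success/failure messaging this answer describes, using Spring Kafka. The service names, topics, JSON shape, and compensation step are illustrative assumptions, not details from the original answer.

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Service;

    // One participant in a saga: it commits its local change, announces the event,
    // and compensates (rolls back) if the downstream service reports failure.
    @Service
    public class OrderSagaHandler {

        private final KafkaTemplate<String, String> kafkaTemplate;

        public OrderSagaHandler(KafkaTemplate<String, String> kafkaTemplate) {
            this.kafkaTemplate = kafkaTemplate;
        }

        public void placeOrder(String orderId) {
            saveOrderLocally(orderId);                                  // local transaction commits first
            kafkaTemplate.send("order-created", orderId, "{\"orderId\":\"" + orderId + "\"}");
        }

        // The downstream service publishes a success or failure result for the order.
        @KafkaListener(topics = "payment-result", groupId = "order-service")
        public void onPaymentResult(String resultJson) {
            if (resultJson.contains("\"status\":\"FAILED\"")) {
                compensateOrder(resultJson);                            // roll back the earlier commit
            }
        }

        private void saveOrderLocally(String orderId) { /* persist the order */ }

        private void compensateOrder(String resultJson) { /* mark the order cancelled */ }
    }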