Vetted Talent

Saikiran Vallepalli

Java Developer with 8.5+ years of experience across all stages of the Software Development Life Cycle (SDLC), including requirements analysis, design, development, integration, and maintenance. Skilled in Java 8 and above, with strong expertise in features such as Lambdas, Functional Interfaces, and the Stream API. Proficient in Agile, Waterfall, and Test-Driven Development (TDD) methodologies, with a strong command of the Spring Framework, Hibernate, and both RESTful and SOAP web services. Experienced in leveraging tools like Jenkins, Maven, and Git for continuous integration and version control.


  • Role

    System Analyst

  • Years of Experience

    8.8 years

Skillsets

  • Hibernate
  • HTML
  • Java - 8 Years
  • JavaScript
  • jQuery
  • Oracle
  • REST
  • Spring Boot - 4 Years
  • Spring MVC
  • ExtJS
  • JSP
  • Git - 6 Years

Vetted For

12 Skills
  • Roles & Skills
  • Results
  • Details
  • Java Developer (AI Screening)
  • 59%
  • Skills assessed: Java, Spring Boot, Microservices, Kubernetes, CI/CD, Git, Docker, Python, Node.js, AWS, SQL, NoSQL
  • Score: 53/90

Professional Summary

8.8 Years
  • Feb, 2023 - Present (2 yr 10 months)

    System Analyst

    Ivy
  • Sep, 2021 - Jan, 2023 (1 yr 4 months)

    IT Analyst

    Tata Consultancy Services
  • Feb, 2019 - Sep, 2021 (2 yr 7 months)

    Programmer Analyst

    Cognizant Technology Solutions
  • Dec, 2015 - Jan, 2019 (3 yr 1 month)

    Associate Software Engineer

    Virtusa

Applications & Tools Known

  • Jenkins
  • Maven
  • Git
  • IntelliJ
  • SVN
  • PuTTY
  • Postman
  • Linux
  • Sonar

Work History

8.8 Years

System Analyst

Ivy
Feb, 2023 - Present (2 yr 10 months)
    Integrated multiple suppliers' games into the Ivy technology system, delivering a seamless and immersive gaming experience across various platforms. Led the integration of multiple game suppliers into the Ivy system and developed backend API services using Java and the Spring Framework to facilitate smooth integration of supplier games. Collaborated with product owners to innovate and deliver new features, ensuring seamless game integration throughout the development process. Performed thorough testing to ensure optimal performance. Delivered proactive support and troubleshooting for live game integrations, quickly resolving issues to minimize downtime and ensure a high-quality user experience. Documented best practices and processes to streamline future integrations, improving team efficiency.

IT Analyst

Tata Consultancy Services
Sep, 2021 - Jan, 2023 (1 yr 4 months)
    Spearheaded the migration of United Parcel Service (UPS) legacy Oracle Forms to modern web applications using Spring Boot and microservices, enabling UPS planners to efficiently monitor package delivery details. Participated in sprint story grooming sessions with the Product Owner, thoroughly analyzing and translating business requirements into technical solutions. Developed and optimized backend API services using Java, Spring Boot, and a microservices architecture, significantly improving the functionality and performance of the web application. Wrote and maintained JUnit test cases, ensuring 80% test coverage and maintaining code quality. Managed Continuous Integration (CI) pipelines using Jenkins and deployed applications on OpenShift across development and testing environments, ensuring seamless integration, reliability, and scalability.

Programmer Analyst

Cognizant Technology Solutions
Feb, 2019 - Sep, 2021 (2 yr 7 months)
    Developed and maintained backend services for Verizon's Automated Customer Support System (ACSS), streamlining customer query management. Analyzed requirements and validated and implemented system solutions to fulfill business needs. Developed and optimized Java backend code to support application enhancements and new functionality. Created interactive front-end components using JavaScript, ExtJS, and jQuery to improve user engagement. Designed and implemented CSS styles to ensure a consistent UI/UX across the application, enhancing layout and visual appeal. Performed unit and regression testing on both UI and backend Java components to uphold code quality and reliability.

Associate Software Engineer

Virtusa
Dec, 2015 - Jan, 2019 (3 yr 1 month)
    Enhanced and maintained Java code for Citi Bank's Satellite Application for Messaging (SAM), a critical system for managing financial messaging that enables end-users to create, repair, and monitor transactions across multiple modules. Analyzed the application's interactions with other systems to ensure smooth and secure data flow. Worked with SWIFT codes to facilitate accurate and secure bank-to-bank financial messaging. Developed client-side validation scripts using JavaScript and jQuery to ensure data accuracy and enhance the user experience. Prepared key project documentation, including Approach Documents, Change Control Board (CCB) Documents, Unit Test Plans, and Release Notes, ensuring alignment with project standards.

Major Projects

3 Projects

UPS Legacy Oracle Forms Migration

Sep, 2021 - Jan, 2023 (1 yr 4 months)
    Spearheaded the migration of United Parcel Service (UPS) legacy Oracle Forms to modern web applications using Spring Boot and microservices, enabling UPS planners to efficiently monitor package delivery details.

Verizon's Automated Customer Support System (ACSS)

Feb, 2019 - Sep, 2021 (2 yr 7 months)
    Developed and maintained backend services for Verizon's Automated Customer Support System (ACSS), streamlining customer query management.

Citi Bank Satellite Application for Messaging (SAM)

Dec, 2015 - Jan, 2019 (3 yr 1 month)
    Enhanced and maintained Java code for Citi Bank's Satellite Application for Messaging (SAM), a critical system for managing financial messaging that enables end-users to create, repair, and monitor transactions across multiple modules.

Education

  • Bachelor of Technology in Computer Science Engineering

    JNTU Hyderabad (2015)

AI-interview Questions & Answers

I have 8.8 years of experience, and currently I'm associated with Ivy, where I integrate games from different suppliers into the Ivy technology system. I work with the product owners as well as the various suppliers on requirements gathering; after that I start the development activities, writing the Java code and the API services, and as a senior developer I take care of each and every integration end to end, from development to deployment, until it goes live. If any production issue occurs, I jump into the production environment, check the logs to see where the issue happened, fix it if necessary, and redeploy to production. Our current tech stack is Java, the Spring Framework, and an Oracle database, and I also have good expertise in Spring Boot and microservices. In the current project we use SonarQube for code quality, Jenkins for deployment activities, and Tomcat servers.
As a senior developer I also mentor two to three junior members: reviewing the code they have written and suggesting what needs to be fixed and what needs to be enhanced. These are the kinds of activities I do in my current role.

Yes. While committing code into the Git repository, we work with feature branches. Once the review of a feature branch is completed, we merge it into the parent branch, first checking whether any merge conflicts were introduced by earlier merges and resolving them if there are any. After pushing to Git, we run SonarQube scans on the feature branch; based on the scan results for the commits we have made, we modify the changes and fix any reported vulnerabilities, and only after the SonarQube scans pass do we merge into the actual parent repository. Before that, we commit to the feature branch and run the scans there to ensure high code quality.
We also check things like unnecessary object creation and, if thread pool management is used, whether the thread pool was properly shut down, before committing into Git.

ConcurrentHashMap works in a multi-threaded environment. Compared to HashMap, ConcurrentHashMap is thread safe; it is based on segment locking. For read operations in ConcurrentHashMap no locking is required, but when we update while iterating, the thread locks only the particular segment being modified: ConcurrentHashMap is divided into segments, so to update something the thread locks that one segment, and this is how ConcurrentHashMap ensures thread safety. HashMap, by contrast, is not thread safe. We can make a HashMap thread safe using Collections.synchronizedMap, but with that locking mechanism the entire map gets locked, whereas in ConcurrentHashMap only the particular segment we are iterating and modifying gets locked. ConcurrentHashMap also has a fail-safe style iterator: even when we are iterating and modifying at the same time, it does not throw ConcurrentModificationException. Moreover, unlike HashMap, ConcurrentHashMap does not allow null keys or null values.
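A quick Java sketch of the behaviors described above: ConcurrentHashMap tolerates modification during iteration and rejects null keys, while a plain HashMap's iterator fails fast.

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapDemo {
    public static void main(String[] args) {
        Map<String, Integer> safe = new ConcurrentHashMap<>();
        safe.put("a", 1);
        safe.put("b", 2);

        // Weakly consistent iterator: adding during iteration does not throw.
        for (String key : safe.keySet()) {
            safe.put("c", 3);
        }
        System.out.println(safe.size()); // 3

        // ConcurrentHashMap rejects null keys (and null values).
        try {
            safe.put(null, 0);
        } catch (NullPointerException e) {
            System.out.println("null key rejected");
        }

        // A plain HashMap fails fast when structurally modified mid-iteration.
        Map<String, Integer> plain = new HashMap<>(safe);
        try {
            for (String key : plain.keySet()) {
                plain.put("d", 4); // new key -> modCount changes
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("HashMap iterator failed fast");
        }
    }
}
```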

The question is about monolithic versus microservice architecture. A monolithic architecture is one where all the services are packed into a single system, so if there is a change in one service, we need to redeploy all the services again and bring the whole application back up; even for a small change we must redeploy the entire application. When we migrate a monolith to microservice architecture, we decouple those services into individual, independent services. Then even if one service fails, the application does not go down: only that particular service shows as failed. This also lets us reduce boilerplate code, and maintainability is easier compared to a monolithic architecture. When migrating we follow certain conventions, such as creating a service registry and connecting all our microservices to it.
Through the service registry we check whether all the services are up and running. We also implement an API gateway: rather than clients hitting each and every microservice directly, all incoming requests hit a common API gateway, where we write different routing configurations, and based on the request it routes to the appropriate microservice. For fault tolerance we write fallback methods, so if any service goes down, the fallback methods let us know which services are up and running and which have gone down. Microservices can also be implemented with several design patterns, such as the API gateway pattern, the circuit breaker pattern, and even event-driven microservice architectures; we decide which microservice design pattern to migrate to based on our requirements and feasibility.
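The fallback idea mentioned above can be sketched in plain Java (the method and strings here are invented for illustration; in a real project this would typically come from a library such as Resilience4j or Spring Cloud):

```java
import java.util.function.Supplier;

// Minimal sketch of a fallback: try the primary service call,
// and if it throws, serve a degraded response instead of failing.
public class FallbackDemo {
    static String callWithFallback(Supplier<String> primary, Supplier<String> fallback) {
        try {
            return primary.get();
        } catch (RuntimeException e) {
            // Service call failed: fall back rather than propagate the error.
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        // Healthy service: the primary result is returned.
        System.out.println(callWithFallback(() -> "live data", () -> "cached data"));

        // Failing service: the fallback result is returned.
        System.out.println(callWithFallback(
                () -> { throw new RuntimeException("service down"); },
                () -> "cached data"));
    }
}
```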

The question is about configuring Kubernetes to ensure autoscaling of a Java-based service. Basically, we integrate Kubernetes with our Spring Boot applications: we create the Docker images, deploy them into Kubernetes, and then identify the load on those services. Based on that load we do upscaling and downscaling; for example, if a spike of thousands of users hits the service, then based on that Kubernetes does the autoscaling. I'm aware of Kubernetes at a basic level, not the entire Kubernetes architecture.

The question is how to implement error handling and retries when a microservice call fails. In the Java application we write several fallback methods. In a fallback method, if a service is down, we check the service registry to see which services are down and which are running, and we implement the handling for any failures that happen while connecting to a service there. For error handling itself, we create our own custom exceptions, and to handle them globally in the Spring Boot application we use @ControllerAdvice annotations and write ResponseEntityExceptionHandler classes.
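A plain-Java sketch of the global-handler idea (the exception, field names, and messages here are invented for illustration; in Spring Boot this role is played by a @ControllerAdvice class with @ExceptionHandler methods returning ResponseEntity):

```java
// Hypothetical custom exception, as described in the answer.
class OrderNotFoundException extends RuntimeException {
    OrderNotFoundException(String id) { super("Order not found: " + id); }
}

// Simple error payload, standing in for a ResponseEntity body.
class ErrorResponse {
    final int status;
    final String message;
    ErrorResponse(int status, String message) { this.status = status; this.message = message; }
}

public class GlobalHandlerDemo {
    // Models what a global exception handler does: map a thrown
    // exception to one consistent error response for the client.
    static ErrorResponse handle(RuntimeException e) {
        if (e instanceof OrderNotFoundException) {
            return new ErrorResponse(404, e.getMessage());
        }
        return new ErrorResponse(500, "Internal error");
    }

    public static void main(String[] args) {
        ErrorResponse r = GlobalHandlerDemo.handle(new OrderNotFoundException("42"));
        System.out.println(r.status + " " + r.message); // 404 Order not found: 42
    }
}
```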

Look at this Python course in many of the microservices. Hold on. Is there a solid principle being more like today? So this should be made.

The list was already initialized with Arrays.asList, so it is a fixed-size list backed by the array. When we try to add a new element to this list, we get that runtime exception. The same kind of exception can also come up when we try to iterate over the list and add an element to it at the same time.
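This can be shown directly: the Arrays.asList view throws UnsupportedOperationException on add, while set (which does not change the size) is allowed, and copying into an ArrayList makes the list growable.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FixedSizeListDemo {
    public static void main(String[] args) {
        // Arrays.asList returns a fixed-size view backed by the array.
        List<String> fixed = Arrays.asList("a", "b", "c");
        try {
            fixed.add("d"); // structural change not supported
        } catch (UnsupportedOperationException e) {
            System.out.println("UnsupportedOperationException on add");
        }

        // set() works because it does not change the list's size.
        fixed.set(0, "z");
        System.out.println(fixed); // [z, b, c]

        // Copying into a real ArrayList makes it growable.
        List<String> growable = new ArrayList<>(fixed);
        growable.add("d");
        System.out.println(growable); // [z, b, c, d]
    }
}
```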

The question is about refactoring a monolithic application to microservices with zero downtime. We can migrate the monolithic architecture to microservice architecture using different patterns, such as the circuit breaker pattern and the API gateway pattern. In the API gateway pattern, we make a common gateway that handles all the requests. In the circuit breaker pattern, when the services are healthy the circuit is in the closed state; if a service goes down the circuit opens, and while it recovers it is in the half-open state. We can also use the saga pattern, which has orchestration as well as choreography variants. By implementing these, along with event-driven architectures, we can ensure zero downtime.

The question is about a continuous testing strategy for a Java application using JUnit across multiple stages. We write the JUnit test cases, and we integrate the SonarQube tool into our CI/CD pipeline. Whenever we commit the code and a deployment starts, the pipeline goes through the SonarQube stage and the scans run there: SonarQube tests the code coverage and quality, and it also scans for blockers, major issues, and minor issues in the code. First we commit to the development environment, where SonarQube runs in the CI/CD pipeline; if that is satisfied, we move to the next stage, such as the QA stage, where once again the SonarQube scans run. With these code scans at each integration step up to production, we ensure code quality by integrating SonarQube into the CI/CD pipeline. We can also configure a code coverage threshold, say 80% or 90%, and based on that coverage gate we check at each stage before proceeding from development to QA to production.
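A minimal sketch of the kind of unit check such a pipeline gates on. PriceCalculator and its tax method are invented for illustration; the assertion is shown in plain Java, whereas in the real pipeline it would be a JUnit @Test method whose coverage SonarQube reports on.

```java
// Hypothetical production class under test.
class PriceCalculator {
    static double withTax(double net, double rate) {
        return net * (1 + rate);
    }
}

public class UnitTestSketch {
    public static void main(String[] args) {
        // Arrange / act / assert: the structure of each JUnit test case.
        double gross = PriceCalculator.withTax(100.0, 0.2);
        if (Math.abs(gross - 120.0) > 1e-9) {
            throw new AssertionError("tax calculation broken");
        }
        System.out.println("all checks passed");
    }
}
```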

Relational databases use schemas and tables. With a NoSQL database, we can use something like MongoDB, where there is no fixed structure: we can dump JSON documents directly into a collection and query them there. So for processing different kinds of data, we might use MongoDB.