Vetted Talent

Abhishek Bande

I have over 12 years of experience in IT, with a focus on full-stack development across the banking, financial services, travel, and construction domains. I am proficient in .NET technologies (C#, .NET Framework, .NET Core, WCF, Web Services, MVC), React, Java, Angular, and frontend technologies such as HTML, CSS, and JavaScript. I also have extensive experience with AWS and Azure cloud services and CI/CD tools such as Jenkins, TeamCity, and Docker, and have worked on e-commerce platforms including AbanteCart, OpenCart, and nopCommerce. Familiar with Agile/Scrum methodologies, I have led teams and played a key role in client interactions, gathering requirements, and delivering solutions in a collaborative environment.

  • Role

    Sr Java, Full Stack & RAG Engineer

  • Years of Experience

    13.9 years

Skillsets

  • Angular
  • Bootstrap
  • Design patterns
  • HTML
  • Java - 12 Years
  • JavaScript - 9.0 Years
  • Jenkins
  • Jira
  • jQuery - 9.0 Years
  • JSF
  • JSP
  • LINQ
  • MySQL - 12 Years
  • SignalR
  • Soap UI
  • TDD
  • Webhooks
  • Docker - 8.0 Years
  • C#
  • CSS
  • Entity Framework
  • GraphQL
  • Microservices
  • MongoDB
  • PostgreSQL
  • RabbitMQ
  • Unit Testing
  • Agile
  • AI
  • AJAX
  • Android
  • Apache Kafka
  • ASP.NET Core
  • AWS
  • Azure
  • Azure Storage
  • Bamboo
  • Bitbucket
  • Blazor
  • C++
  • Camunda
  • CentOS
  • CI/CD
  • CircleCI
  • CloudFormation
  • Confluence
  • Cosmos DB
  • Dynamics 365
  • EC2
  • EDI
  • Event bus
  • Express.js
  • Figma
  • Flutter
  • Git
  • GitHub
  • GitLab
  • Golang
  • Hubspot
  • JUnit
  • LangChain
  • Linux
  • LLM
  • Logic Apps
  • Material UI
  • Mockito
  • Moq
  • MS Dynamics CRM
  • MVC
  • n8n
  • Next.js
  • Node.js
  • NoSQL
  • Notification hub
  • NUnit
  • Nuxt.js
  • OpenAI
  • Oracle DB
  • pandas
  • PHP
  • Postman
  • Power Apps
  • Power Automate
  • Power BI
  • Primefaces
  • Python
  • RAG
  • ReactJS
  • Redis
  • Redux
  • REST
  • Ruby
  • Rust
  • S3
  • Sage 300
  • Scala
  • Silverlight
  • SOAP
  • SOLID
  • Spring WebFlux
  • SQL
  • SQL Server
  • SSIS
  • SSRS
  • Stripe
  • SVN
  • Swagger
  • T-SQL
  • TeamCity
  • TFS
  • TypeScript
  • Ubuntu
  • Unity
  • VB.NET
  • Waterfall
  • WCF
  • Web Services
  • Windows
  • XUnit

Vetted For

14 Skills
  • Principal Software Engineer (AI Screening)
  • 88%
  • Skills assessed: CI/CD Pipelines, Cloud Monitoring, Real Estate, Real-Time Processing, Team Leadership, Third-Party Integration, API Design, IoT Integration, Kafka/RabbitMQ, Microservices Architecture, MQTT/LoRaWAN, NestJS (Node.js), AWS, PostgreSQL
  • Score: 79/90

Professional Summary

13.9 Years
  • May 2022 - Jan 2025 (2 yr 8 months)

    Contractor / Consultant (Individual Contributor)

    Senior Java UI/Full Stack Developer (Remote)

    Riskspan
  • Jul 2020 - Nov 2021 (1 yr 4 months)

    Technical Lead

    Globant India
  • Jun 2013 - Jul 2013 (1 month)

    Programmer Analyst Trainee

    Cognizant Technology Solutions
  • Apr 2015 - Feb 2018 (2 yr 10 months)

    Senior Software Engineer

    Tavisca Solutions
  • Mar 2018 - Jul 2020 (2 yr 4 months)

    Technical Lead

    Velotio Technologies
  • Mar 2012 - Apr 2015 (3 yr 1 month)

    Programmer Analyst

    Cognizant Technology Solutions

Applications & Tools Known

  • TypeScript
  • ReactJS
  • Java
  • Spring Boot
  • PostgreSQL
  • MySQL
  • HTML, CSS, and JavaScript
  • AWS (Amazon Web Services)
  • Node.js
  • C#
  • Jenkins
  • GitLab
  • JSON
  • Azure
  • Docker
  • Kubernetes
  • TeamCity
  • CircleCI
  • Jira
  • Bamboo
  • Agile
  • Waterfall
  • CI/CD
  • NoSQL
  • SVN
  • Git
  • GitHub
  • TFS
  • Bitbucket
  • AWS S3
  • AWS EC2
  • AWS CloudFormation
  • Azure Cosmos DB
  • Azure Notification Hub
  • Oracle SQL Developer
  • SSMS
  • SSRS

Work History

13.9 Years

Contractor / Consultant (Individual Contributor)
May 2022 - Jan 2025 (2 yr 8 months)

Senior Java UI/Full Stack Developer (Remote)

Riskspan

    Job Overview:

    As a Senior Java/JSF/PrimeFaces/JavaScript Developer, you will design, develop, and maintain scalable web applications using Java, JavaServer Faces (JSF), and PrimeFaces for UI components. You will also leverage JavaScript for front-end interactivity and work with popular UI libraries like Highcharts, AG Grid, and Wijmo. This role requires strong expertise in component-based UI development and experience integrating complex JavaScript components into enterprise applications.

    Key Responsibilities:

    • Design, develop, and maintain robust web applications using Java, JSF, and PrimeFaces for complex UI components.
    • Build interactive front-end features using JavaScript and integrate them seamlessly with JSF and PrimeFaces.
    • Collaborate with product managers, UI/UX designers, and backend engineers to define and implement new features and enhancements.
    • Optimize applications for maximum speed, scalability, and performance.
    • Write clean, efficient, and maintainable code.
    • Troubleshoot, debug, and resolve issues across the full stack.
    • Document development processes, architectural decisions, and system integrations clearly.
    • Stay updated with the latest trends and best practices in Java, JSF, PrimeFaces, and JavaScript development.

    Qualifications:

    • Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
    • 5+ years of experience in software development, with a strong focus on Java, JSF, and web application development.
    • Proven expertise in PrimeFaces, including complex UI customization, theming, and performance tuning.
    • Strong proficiency in JavaScript, with experience in modern JavaScript frameworks (e.g., Angular, React, or Vue.js) and vanilla JS fundamentals.
    • Experience with HTML5, CSS3, AJAX, and responsive design principles.
    • Familiarity with RESTful API design and integration.
    • Proficiency in version control systems (e.g., Git), build tools (e.g., Maven, Gradle), and CI/CD pipelines.
    • Strong understanding of object-oriented programming and design patterns.
    • Excellent problem-solving skills, strong attention to detail, and the ability to work independently and collaboratively.

    Preferred Qualifications:

    • Experience with Highcharts, AG Grid, and Wijmo components for data visualization and complex UI grid implementations.
    • Experience with additional JSF libraries such as RichFaces or ICEfaces.
    • Knowledge of Spring Boot and microservices architecture.
    • Understanding of web security best practices and performance optimization.
    • Familiarity with UI/UX design principles and enhancing user experience using PrimeFaces components.

    Experience: 7-10 years

    Engagement Type: Full-time, direct hire on the Riskspan payroll
    Job Type: Permanent
    Location: Remote
    Working time: 11 AM to 8 PM IST
    Interview Process: 4 rounds


Technical Lead

Globant India
Jul 2020 - Nov 2021 (1 yr 4 months)

Technical Lead

Velotio Technologies
Mar 2018 - Jul 2020 (2 yr 4 months)

Senior Software Engineer

Tavisca Solutions
Apr 2015 - Feb 2018 (2 yr 10 months)

Programmer Analyst Trainee

Cognizant Technology Solutions
Jun 2013 - Jul 2013 (1 month)

Programmer Analyst

Cognizant Technology Solutions
Mar 2012 - Apr 2015 (3 yr 1 month)

Achievements

  • Winner of Xceed11 Web Development Contest

Major Projects

22 Projects

Supply Chain Intelligence

Dell
Mar 2021 - Present (5 yr)

    Supply Chain Intelligence

    Role

    Technical Lead

    Duration

    March 2024 - Present

    Team Size

    6

    Environment: ASP.NET Core, REST APIs, Angular 12+, Azure SQL, Docker, Highcharts widgets, Java, JSF, JSP, jQuery, AWS, PostgreSQL, MySQL, HTML, CSS, Git, Bitbucket, GitHub, Bamboo, Jira, Agile Scrum, Jenkins, CI/CD, Windows, Azure, C#, Azure Functions, Azure Storage, Microservices, Azure SQL DW, AAS, Power BI, ASP.NET, Angular 10-16, Kubernetes, JavaScript, Python, Bootstrap, MSSQL, Redis, Apache Kafka, Spring WebFlux, Spring Boot, Redux, TypeScript, HTML5, CSS3, GitLab, Python 3, MongoDB, Swagger, Confluence, GraphQL, Postman, MS Dynamics CRM, Dynamics 365, Logic Apps, Power Automate, Power Apps, RabbitMQ, Linux, ReactJS, Node.js, NestJS, EF Core, TDD, REST API services

    Hangfire for background jobs, Email service, Twilio for mobile SMS, Google Map Location service.


    Description

    The Supply Chain Intelligence (SCI) project, an AI-driven solution by SymphonyAI, focuses on improving supply chain visibility, agility, and decision-making. The middleware REST APIs serve as the bridge between the GraphQL endpoints and the Angular-based frontend. These APIs fetch reporting, metrics, and dimensions data from GraphQL, transforming the responses to align with the widget configurations required for the UI.   

    Role & Responsibilities

    • Developed Dimension and Metrics APIs to meet widget-specific requirements.
    • Actively participated in Agile ceremonies, including planning, daily stand-ups, reviews, and retrospectives.
    • Conducted peer reviews for tasks completed by team members.
    • Mentored junior developers and provided knowledge transfer sessions for other teams on widget configuration.
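
    The middleware transform described above (reshaping GraphQL metrics responses to match widget configurations) can be sketched roughly as follows. This is an illustrative sketch only: the field names (`metrics`, `dimension`, `value`, `categories`, `series`) are hypothetical, not the actual SCI schema.

    ```python
    # Hypothetical sketch: flatten a GraphQL-style metrics payload into the
    # categories/series shape a charting widget typically expects.
    # All field names are illustrative, not the actual SCI schema.

    def to_widget_config(graphql_response: dict) -> dict:
        """Reshape a GraphQL metrics payload into a widget config."""
        rows = graphql_response["data"]["metrics"]
        return {
            "categories": [row["dimension"] for row in rows],  # x-axis labels
            "series": [{
                "name": graphql_response["data"]["metricName"],
                "data": [row["value"] for row in rows],        # y-axis values
            }],
        }

    response = {
        "data": {
            "metricName": "On-Time Shipments",
            "metrics": [
                {"dimension": "Q1", "value": 91.2},
                {"dimension": "Q2", "value": 93.5},
            ],
        }
    }
    config = to_widget_config(response)
    print(config["categories"])  # ['Q1', 'Q2']
    ```

    Keeping this mapping in a thin middleware layer, as described, lets the frontend widgets stay agnostic of the GraphQL schema.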

Partner Supplies Sales

Aug 2018 - Present (7 yr 7 months)
    Quota tracking tool for channel partner (wholesale & resale) supplies, OPSS and regional supplies.

Paragon Routing and Scheduling

Nov 2022 - Present (3 yr 4 months)
    Aptean Routing & Scheduling controls all aspects of planning and routing delivery logistics; planning is carried out in schedules.

AI search application for leading energy regulator company

Apr 2024 - Aug 2024 (4 months)
    This project leverages Azure AI Search to provide advanced search functionality, enabling users to find documents through various filters and embedding-based semantic search, and to analyze selected documents for insights and data extraction.
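
    The embedding-based semantic search at the heart of this kind of application comes down to ranking documents by vector similarity to the query embedding. A minimal, library-free sketch of that idea (the real project uses Azure AI Search, not this code; the document names and 2-dimensional vectors are made up for illustration):

    ```python
    import math

    def cosine(a, b):
        """Cosine similarity between two equal-length vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    def semantic_search(query_vec, docs, top_k=2):
        """Rank (doc_id, embedding) pairs by similarity to the query embedding."""
        scored = [(doc_id, cosine(query_vec, emb)) for doc_id, emb in docs]
        return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

    # Toy corpus: real systems would use high-dimensional embeddings
    # produced by an embedding model, stored in a vector index.
    docs = [
        ("tariff-rules.pdf", [0.9, 0.1]),
        ("grid-outage.pdf", [0.1, 0.9]),
        ("licence-faq.pdf", [0.7, 0.3]),
    ]
    print(semantic_search([1.0, 0.0], docs, top_k=1))  # tariff-rules.pdf ranks first
    ```

    In the production setup this ranking is delegated to the search index; the application only supplies filters and query embeddings.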

E-Commerce for leading health and wellness products

Feb 2021 - Apr 2024 (3 yr 2 months)
    This project covers development and maintenance of the client's e-commerce website. The client is a network marketing company specializing in health and wellness products, including dietary supplements, meal replacement shakes, cleanse programs, and skincare items.

EY - Helix

Globant
Jul 2020 - Mar 2021 (8 months)

    EY - Helix

    Role

    Technical Lead

    Duration

    July 2020 - March 2021

    Team Size

    12

    Environment

    Java, JSF, JSP, Angular 12+, jQuery, AWS, PostgreSQL, MySQL, REST APIs, HTML, CSS, Git, Bitbucket, GitHub, Bamboo, Jira, Agile Scrum, Jenkins, CI/CD, Windows, Azure, C#, Azure SQL, Azure Functions, Azure Storage, Microservices, Azure SQL DW, AAS, Power BI, ASP.NET, Angular 10-16, Umbraco 8, Nuxt.js, Kubernetes, JavaScript, Python, PHP, Node.js, Golang, Scala, Ruby on Rails, Perl, React Native, Flutter, Bootstrap, MSSQL, Redis, pandas, Apache Kafka, Spring WebFlux, Spring Boot, Redux, TypeScript, HTML5, CSS3, GitLab, Python 3, MongoDB, Swagger, Confluence, GraphQL, Postman, RabbitMQ, MS Dynamics CRM, Dynamics 365, Logic Apps, Power Automate, Power Apps, SQL, Linux, ReactJS, ASP.NET Core, Express.js, ES6, JSON, AWS S3, ASP.NET MVC, Web API, WebForms, NestJS, EF Core, TDD, REST API services

    Hangfire for background jobs, Stripe payment gateway, SparkPost for email service, Twilio for mobile SMS, Google Map Location service.

    Description

    EY Helix is an integral element of audits. It is a global analytics platform that includes a suite of data capture and analytics tools, dramatically increasing not only the depth and breadth of captured data but also the value of insight derived from it. The EY Helix library of analyzers supports the audit from risk assessment to execution, addressing a business's entire operating cycle.

    Role & Responsibilities

    • Preparing low-level technical designs.
    • Implementing new features.
    • Analyzing requirements and writing user stories.
    • Running the code review process.
    • Bug fixes and maintaining code quality using QC gates.
    • Preparing and maintaining CI/CD pipelines.

Enterprise List & Generic Tile Framework

Sep 2016 - Feb 2021 (4 yr 5 months)
    Enterprise List and Generic Tile are highly configurable application platforms that allow organizations to create and configure web apps at runtime. These web apps are used to store, organize, share, and access information for multiple clients, with features such as data analytics, item-level and field-level security, and notifications, and are hosted in the My Insight Store using the Generic Tile.

Jobility

Velotio
May 2019 - Jul 2020 (1 yr 2 months)

    Jobility

    Role

    Technical Lead

    Duration

    May 2019 - July 2020

    Team Size

    9

    Environment

    ASP.NET, React Native, Java, JSF, JSP, Angular 12+, jQuery, AWS, PostgreSQL, MySQL, REST APIs, HTML, CSS, Git, Bitbucket, GitHub, Bamboo, Jira, Agile Scrum, Jenkins, CI/CD, Python, Docker, SCSS, Spring WebFlux, Spring Boot, Redux, TypeScript, Bootstrap, MSSQL, RabbitMQ, Angular 8/9/10, REST, CSS3, JavaScript, SQL, Node.js, Apache Kafka, Figma, HTML5, GitLab, Python 3, Microservices, MongoDB, JSON, GraphQL, Postman, MS Dynamics CRM, Dynamics 365, Logic Apps, Power Automate, Power Apps, Material UI, Power BI, EF Core, TDD, REST API

    Third-party system integrations:

    Viewpoint, BambooHR, InEight, NewForma, Sage.

    Services

    Hangfire for background jobs, Stripe payment gateway, SparkPost for email service, Twilio for mobile SMS, Google Map Location service.

    Description

    Jobility is a platform for companies looking for on-demand workers and for workers who want to work based on their availability.

    It helps workers work when and where they want, and helps companies find workers on demand.

    Jobility makes it easy for both workers and companies to find exactly what they're looking for. People are matched to jobs through the matching engine.

    The Jobility platform consists of three separate portals: Worker, Company, and Admin.
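
    The matching-engine idea mentioned above (pairing workers with jobs) can be sketched as a simple scoring function. This is a toy illustration, not the actual Jobility engine: the `skills`, `availability`, and `shifts` fields are invented for the example.

    ```python
    def match_score(worker: dict, job: dict) -> float:
        """Toy matching score: skill overlap, gated on availability.

        Returns 0.0 if the worker shares no shift with the job; otherwise
        the fraction of the job's required skills the worker covers.
        Field names are hypothetical, for illustration only.
        """
        if not (set(worker["availability"]) & set(job["shifts"])):
            return 0.0  # no overlapping shift, never a match
        covered = set(worker["skills"]) & set(job["skills"])
        return len(covered) / len(job["skills"])

    worker = {"skills": ["forklift", "packing"], "availability": ["sat", "sun"]}
    job = {"skills": ["packing"], "shifts": ["sun"]}
    print(match_score(worker, job))  # 1.0
    ```

    A production engine would rank many candidates by such a score (plus distance, ratings, and history) rather than evaluating one pair at a time.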

    Role & Responsibilities

    • Worked on application performance improvements.
    • Implemented new features.
    • Analyzed requirements.
    • Implemented unit test cases.
    • Developed a chat system for workers and companies.
    • Bug fixes and maintaining code quality.

Ryvit

Velotio
Mar 2018 - May 2019 (1 yr 2 months)

    Ryvit

    Role

    Technical Lead

    Duration

    March 2018 - May 2019

    Team Size

    4

    Environment

    Jenkins, CI/CD, .NET Core, C#, Entity Framework Core, SQL Server, Angular 6/7, Cosmos DB, Event Hub, Service Bus, Azure Storage, ASP.NET, React Native, Java, JSF, JSP, jQuery, AWS, PostgreSQL, MySQL, REST APIs, HTML, CSS, Git, Bitbucket, GitHub, Bamboo, Jira, Agile Scrum, Python, Docker, SCSS, Spring WebFlux, Spring Boot, Redux, TypeScript, Bootstrap, MSSQL, RabbitMQ, CSS3, JavaScript, SQL, Node.js, Apache Kafka, Figma, HTML5, GitLab, Python 3, Microservices, MongoDB, JSON, GraphQL, Postman, MS Dynamics CRM, Dynamics 365, Logic Apps, Power Automate, Power Apps, Material UI, Power BI, TDD, REST API

    Third-party system integrations:

    Viewpoint, eSub, Procore, BambooHR, ZLien, InEight, NewForma, Sage.

    Description

    The Ryvit Platform enables access and connectivity to the leading construction ERP solutions. Ryvit was founded on the fundamental observation that technology can empower and accelerate the use and adoption of software within the construction industry. To that end, the Ryvit Platform enables access to the leading ERP solutions in the industry by providing connectivity that otherwise does not exist.

    Ryvit's iPaaS solution delivers a connected construction ecosystem of configurable, turnkey integrations, enabling the seamless flow of data between the most popular on-premises and SaaS applications used in the construction industry.


    • Allows your construction applications to talk, no coding required
    • Helps you preserve your most valuable resource: time
    • Enables your company to optimize for more profitability
    • Ensures your data is clean and valid for accurate reporting and decision making.


    Role & Responsibilities

    • Developed backend (APIs) for partner, test and admin portals.
    • Worked on developing backend for partner and test portal which is used to synchronize the data from one system/partner to another.
    • Involved with developing test framework.
    • Development and designing of project architecture. 
    • Business logic designing and coding.
    • Analysis of the requirements/production issues. 
    • Writing unit tests and integration tests. 
    • Delivering weekly releases in stringent deadlines. 
    • Bug fixes/ Maintaining code quality. 
    • CI/CD integration. 
    • New enhancement/ feature development.
    • Daily Code reviews.

PlatformOne

May 2013 - Aug 2018 (5 yr 3 months)
    PlatformOne (PONE) is a partner compensation system (internal HP only) for the APJ region, designed to handle program creation, target/quota setting, performance, rebates, payments, and approval workflow.

Point Bank Api

Tavisca
Feb 2016 - Feb 2018 (2 yr)

    Point Bank APIs

    Role

    Senior Software Engineer

    Duration

    February 2016 - February 2018

    Team Size

    4

    Environment

    .NET Core, C#, Entity Framework Core, MySQL, ReactJS, Angular 2-5, Unity, ASP.NET, Ruby on Rails, Perl, React Native, Flutter, AWS, Kubernetes, CI/CD, Java, JSF, JSP, jQuery, PostgreSQL, REST APIs, HTML, CSS, Git, Bitbucket, GitHub, Bamboo, Jira, Agile Scrum, Jenkins, JavaScript, Python, Golang, Docker, TypeScript, SCSS, Spring WebFlux, Spring Boot, Redux, MSSQL, Redis, RabbitMQ, REST, CSS3, HTML5, SQL, Node.js, Linux, WordPress, Apache Kafka, MS Dynamics CRM, Dynamics 365, Logic Apps, Power Automate, Power Apps, Unit Testing, GitLab CI/CD, Python 3, Ubuntu, CentOS, Microservices, MongoDB, Scrum, Swagger, Code review, JSON, Confluence, GraphQL, Postman, Trello, AWS EKS, AWS S3, AWS SQS, AWS EC2, AWS Lambda (serverless), AWS CloudFormation, AWS Auto Scaling, AWS DynamoDB, TDD, REST API

    Description

    Worked with a proven rewards/loyalty specialist company to administer the program and provide a seamless agent experience, including automatic upload of booking data that can be viewed in a real-time dashboard and an automated e-gift-card redemption process.


    Role & Responsibilities

    • Development and design of the project architecture.
    • Business logic design and coding.
    • Analysis of requirements and production issues.
    • Writing unit tests and integration tests.
    • Delivering weekly releases under stringent deadlines.
    • Engaged with the Linux-based release process through Jenkins.
    • Bug fixes and maintaining code quality.
    • CI integration.
    • New enhancement and feature development.

Reservoir Navigation Services & WITSML Composite Log

Sep 2014 - Aug 2016 (1 yr 11 months)
    Reservoir Navigation Services (RNS) software is a fully integrated, forward, LWD response-modeling package. The software creates a graphical model of a petroleum reservoir for the purposes of improving productivity and making decisions regarding development of the oil field. The WITSML feed and Composite Log applications combine multiple drilling-based logs in the Advantage system (SQL DB) and send them to a WITSML server.

Hotel Search

Tavisca
Oct 2015 - Jan 2016 (3 months)

    Hotel Search

    Role

    Senior Software Engineer

    Duration

    October 2015 - January 2016

    Team Size

    3

    Environment

    Java, JSF, JSP, jQuery, AWS, PostgreSQL, MySQL, REST APIs, HTML, CSS, Git, Bitbucket, GitHub, Bamboo, Jira, Agile Scrum, Jenkins, CI/CD, .NET Core, C#, Entity Framework Core, ReactJS, Unity, ASP.NET, Kubernetes, JavaScript, Python, Golang, Docker, TypeScript, SCSS, Spring WebFlux, Spring Boot, Redux, MSSQL, Redis, RabbitMQ, REST, CSS3, HTML5, SQL, Node.js, Linux, WordPress, Apache Kafka, Unit Testing, GitLab CI/CD, Python 3, Ubuntu, CentOS, Microservices, MongoDB, Scrum, Swagger, Code review, JSON, Confluence, GraphQL, Postman, Trello, AWS EKS, AWS S3, AWS SQS, AWS EC2, AWS Lambda (serverless), AWS CloudFormation, AWS Auto Scaling, AWS DynamoDB, TDD, REST API

    Description

    The goal of this project was to define APIs for the hotel search service for the client.

    Search: This operation is used by consumers to search hotels for a particular location with the available amenities.

    Plug-ins: Implemented plug-ins for the Filtering, Paging, and Sorting handlers.

    Role & Responsibilities

    • Development of Hotel Search Operation.
    • Delivering work in stringent deadlines.
    • Attending daily meeting calls to give task status, coordinating with onsite team for completion of tasks.
    • Business Logic Designing and Coding.
    • Analysis of the requirements/production issues.
    • Bug fixes.
    • Unit testing operation.

Car Search

Tavisca
Jun 2015 - Sep 2015 (3 months)

    Car Search

    Role

    Senior Software Engineer

    Duration

    June 2015 - September 2015

    Team Size

    3

    Environment

    .NET Core, C#, Entity Framework Core, MySQL, ReactJS, Unity, Java, JSF, JSP, jQuery, AWS, PostgreSQL, REST APIs, HTML, CSS, Git, Bitbucket, GitHub, Bamboo, Jira, Agile Scrum, Jenkins, CI/CD, ASP.NET, Kubernetes, JavaScript, Python, Golang, Docker, TypeScript, SCSS, Spring WebFlux, Spring Boot, Redux, MSSQL, Redis, RabbitMQ, REST, CSS3, HTML5, SQL, Node.js, Linux, WordPress, Apache Kafka, Unit Testing, GitLab CI/CD, Python 3, Ubuntu, CentOS, Microservices, MongoDB, Scrum, Swagger, Code review, JSON, Confluence, GraphQL, Postman, Trello, AWS EKS, AWS S3, AWS SQS, AWS EC2, AWS Lambda (serverless), AWS CloudFormation, AWS Auto Scaling, AWS DynamoDB, TDD, REST API

    Description

    The goal of this project is to define APIs for the car search service for client.

    Search: This operation is used by consumers to search the Cars for particular location with the available amenities, Car agencies and Car classes.

    Plug-ins: Implemented Plug-in for Filtering, Paging and Sorting Handlers.

    Role & Responsibilities

    • Development of the Car Search operation.
    • Delivering work in stringent deadlines.
    • Attending daily meeting calls to give task status, coordinating with onsite team for completion of tasks.
    • Business Logic Designing and Coding.
    • Analysis of the requirements/production issues.
    • Bug fixes.
    • Unit testing operation.

Investment Goal Service (9PLN)

Cognizant
Jul 2014 - Apr 2015 (9 months)

    Investment Goal Service (9PLN)

    Role

    Programmer Analyst

    Duration

    July 2014 - April 2015

    Team Size

    12

    Environment

    Java, JSF, JSP, jQuery, AWS, PostgreSQL, MySQL, REST APIs, HTML, CSS, Git, Bitbucket, GitHub, Bamboo, Jira, Agile Scrum, Jenkins, CI/CD, ReactJS, ASP.NET, Kubernetes, JavaScript, Python, Node.js, C#, Docker, Bootstrap, MSSQL, Redis, RabbitMQ, REST, HTML5, SQL, Apache Kafka, Spring WebFlux, Redux, TypeScript, Spring Boot, Unit Testing, .NET Core, Spring Framework, Hibernate, GitLab CI/CD, Python 3, Ubuntu, CentOS, Microservices, Swagger, GraphQL, ASP.NET MVC, ASP.NET Web API, WebForms, Oracle DB, TDD, REST API

    Description

    9PLN Service:

    The goal of this project is to deliver an online investment guidance solution that helps investors create, fulfill, and manage a basic investment strategy.

    Update operation: This operation is used by consumers to update investment profiles and the selected fund details. Once the user completes a purchase, the operation is also used to insert the purchased fund details into the database and to update beta suitability information for the associated account.

    Role & Responsibilities

    • Development of Update Operation.
    • Delivering work in stringent deadlines.
    • Attending daily meeting calls to give task status, coordinating with onsite team for completion of tasks.
    • Business Logic Designing and Coding.
    • Analysis of the requirements/production issues.
    • Bug fixes.
    • Unit testing operation.

Viacom ARC Migration

Oct 2013 - Aug 2014 (10 months)

Test Harness

Cognizant
Mar 2014 - Jun 2014 (3 months)

    Test Harness

    Role

    Programmer Analyst

    Duration

    March 2014 - June 2014

    Team Size

    12

    Environment

    .NET/C#, ASP.NET MVC, ASP.NET Web API, WebForms, Java, JSF, JSP, jQuery, AWS, PostgreSQL, MySQL, REST APIs, HTML, CSS, Git, Bitbucket, GitHub, Bamboo, Jira, Agile Scrum, Jenkins, CI/CD, ReactJS, Kubernetes, JavaScript, Python, Node.js, Docker, Bootstrap, MSSQL, Redis, RabbitMQ, REST, HTML5, SQL, Apache Kafka, Spring Boot, Unit Testing, .NET Core, Spring Framework, Hibernate, GitLab CI/CD, Python 3, Ubuntu, CentOS, Microservices, Swagger, GraphQL, TDD, REST API

    Description

    Test Harness is a web application used to test the E-Signature Service. Some features of this web application:

    • Test the WCF Service operations.
    • Provide user maintenance screens for configuration data within the 9DOC database schema.
    • Provide user query access screens to audit transaction and error data.
    • Provide user query access screens to audit e-Signature transaction data within the database schema.

    Role & Responsibilities

    • Development of Front End.
    • Delivering work in stringent deadlines.
    • UI Related change requests.
    • Attending daily meeting calls to give task status, coordinating with onsite team for completion of tasks.
    • Business Logic Designing and Coding.
    • Analysis of the requirements issues.
    • Bug fixes.
    • Unit testing for each screen.

Electronic Signature Service (E-sign)

Aug 2013 - Feb 2014 (6 months)

    Electronic Signature Service (E-sign)

    Role

    Programmer Analyst

    Duration

    August 2013 - February 2014

    Team Size

    12

    Environment

    .NET/C#, ASP.NET MVC, ASP.NET Web API, WebForms, Oracle DB, REST APIs, Java, JSF, JSP, jQuery, AWS, PostgreSQL, MySQL, HTML, CSS, Git, Bitbucket, GitHub, Bamboo, Jira, Agile Scrum, Jenkins, CI/CD, ReactJS, Angular 6/7/8, Kubernetes, JavaScript, Python, Node.js, C#, Docker, Bootstrap, MSSQL, Redis, RabbitMQ, REST, HTML5, SQL, Apache Kafka, Spring Boot, Unit Testing, .NET Core, Spring Framework, Hibernate, GitLab CI/CD, Python 3, Ubuntu, CentOS, Microservices, Swagger, GraphQL, TDD

    Description

    A task service containing a set of operations to facilitate the processing of a client signature for an electronically generated form. The goal is to abstract, as much as possible, the details of form generation, signature status, and storage of the form in an image repository, allowing simple, streamlined processing of document requests.

    Role & Responsibilities

    • Development of the complete operation.
    • Delivering work under stringent deadlines.
    • Attending daily status calls and coordinating with the onsite team to complete tasks.
    • Business logic design and coding.
    • Analysis of requirements/production issues.
    • Bug fixes.
    • Unit testing of each operation.

ERP System & School Management System

Nov, 2011 - Oct, 2013 1 yr 11 months

Traffic Master

Jun, 2009 - Apr, 2013 3 yr 10 months
    The main objective of this project is to automate the car parking system. The end user parks the car in the loading bay, and an automated guided vehicle (AGV) collects the car and parks it in the Traffic Master system.

Internal Management System

Jul, 2010 - Nov, 2011 1 yr 4 months

Learning Management Systems

Jun, 2008 - Mar, 2009 9 months
    The Learning Management System is an e-learning web application used by millions of users. It was built mainly for the Department of Transportation, but can be used for any kind of e-learning activity.

Education

  • B.E. Computer Engineering

    Maharashtra Academy Of Engineering, Alandi (2012)
  • Diploma (I.T.)

    Government Polytechnic, Nanded (2009)
  • S.S.C.

    Gurudev Vidya Mandir, Mukhed (2005)

Certifications

  • BFS L0 certification

  • L1 certification (investment banking and brokerage)

  • NAC certificate in .NET Framework programming

  • NAC certificate in Java programming

AI-interview Questions & Answers

Could you help me understand more about your background by giving a brief introduction of yourself? I'm Abhishek Bande. I have eight-plus years of experience specializing in distributed systems, API integration, and scalable back-end architecture. Over the last couple of years I have led the design and implementation of microservices-based platforms using Node.js and NestJS, with PostgreSQL as the primary database. I have hands-on experience with messaging systems like Kafka, RabbitMQ, and MQTT for building real-time, event-driven systems. I'm also familiar with LoRaWAN protocols for low-power IoT communication, and I have designed end-to-end pipelines that integrate edge devices with cloud infrastructure on AWS, using services such as S3, API Gateway, and IoT Core. Recently I've been focusing on applying event-processing architecture patterns to build scalable, fault-tolerant IoT ecosystems that can handle high-throughput telemetry data while ensuring maintainability and observability. In short, I enjoy solving complex architectural challenges, mentoring team members, and staying up to date with modern design patterns and cloud-native practices. I'm currently looking to take on a principal engineer or tech lead role where I can lead system-design efforts, make architectural decisions, and drive a technology strategy aligned with business goals.

How would you optimize a Kafka or a RabbitMQ consumer in a microservice environment where IoT devices are constantly streaming data? Optimizing Kafka and RabbitMQ consumers in a microservice IoT environment requires careful attention to scalability, throughput, and fault tolerance.

For Kafka, I tune the consumer configuration for throughput and parallelism: increase consumer-group parallelism by running multiple consumers in the same group, tune fetch.min.bytes and fetch.max.wait.ms to balance latency against throughput, and use max.poll.records wisely to limit how many messages each poll pulls and prevent processing overload. I ensure topic partitions are evenly distributed across consumers, and I design for idempotency to avoid duplicates in case of retries.

For RabbitMQ, I set the prefetch count (basic.qos) to control message flow and avoid memory pressure, multiplex consumer threads over a single connection using multiple channels for efficiency, and use manual ack mode, which allows retrying or dead-lettering failed messages while keeping control over redelivery. I also distribute the workload across queues to avoid bottlenecks on a single queue.

For message-processing optimization, I use asynchronous processing with worker pools or task queues inside the consumer microservices, process IoT messages in batches where ordering allows, and monitor downstream databases and APIs, applying rate limiting for backpressure management.
I add exponential backoff with jitter, timeouts, and retry policies, using retry libraries (for example Bull on the NestJS side, or Resilience4j). For scaling and fault tolerance, I use container orchestration like Kubernetes with autoscaling based on CPU and custom metrics such as Kafka consumer lag, and alerting via Prometheus, Grafana, or Kafka UI tools to ensure consumers keep up. I redirect failed messages to a dead-letter queue for later analysis or reprocessing. I also ensure idempotent processing logic so that duplicate messages don't corrupt state, and use correlation or deduplication IDs in the messages for tracking and retries. In summary, I focus on horizontal scalability, backpressure handling, partition-aware consumer parallelism, idempotent processing, and resource-efficient patterns, while maintaining an observable pipeline that can absorb telemetry bursts and still ensure delivery.
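
The "exponential backoff with jitter" retry policy mentioned above can be sketched in a few lines of Node.js. This is a minimal illustrative sketch, not a specific library's API; the names `backoffDelay` and `retryWithBackoff` are invented for the example.

```javascript
// Delay for attempt n: a random value in [0, min(cap, base * 2^n)]
// ("full jitter"), so retrying consumers don't stampede the broker in sync.
function backoffDelay(attempt, baseMs = 100, capMs = 30000, rand = Math.random) {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(rand() * exp);
}

// Retry an async operation (e.g. a message handler) with jittered backoff.
// After maxAttempts the error propagates, so the caller can dead-letter it.
async function retryWithBackoff(fn, maxAttempts = 5) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err; // give up -> route to DLQ
      await new Promise((res) => setTimeout(res, backoffDelay(attempt)));
    }
  }
}
```

Injecting the random source (`rand`) keeps the delay calculation deterministic in tests; in production the defaults apply.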

What steps would you take to identify and fix performance bottlenecks in a microservice architecture? I take a systematic, layered approach.

First, I establish observability: before optimizing anything, I make sure the system is observable through distributed tracing, centralized logging, metrics collection, and service-mesh observability.

Next, I isolate the bottleneck. Once observability is in place, I look at the high-latency endpoints and trace them back to specific services or database calls. I check consumer lag in Kafka or RabbitMQ, which indicates slow consumers, and identify overloaded services with high CPU, memory, or network I/O. I also look for slow database queries — N+1 queries, missing indexes, excessive locking, or retries under load — as likely culprits.

Then I optimize layer by layer. At the application layer, I profile the code — on the Node.js/NestJS side, with profiling tools such as clinic.js or the built-in profiler — cache repeated computations or API responses in Redis or in-memory caches, and avoid synchronous, blocking code in an event-loop runtime like Node.js. At the database layer, I add indexes, use connection pools, avoid chatty DB calls, and offload heavy reads to caches or read replicas. At the network layer, I enable compression (gzip or deflate), use HTTP/2 or gRPC for low latency, reduce payload sizes, and enable request-level caching.
At the messaging layer, I tune Kafka and queue settings so consumers have appropriate prefetch and poll configuration, and make sure retry logic doesn't amplify the pressure. At the infrastructure level, I auto-scale services using the Kubernetes HPA or cloud auto-scaling groups, place latency-sensitive services closer together with zone placement, and tune resource limits and requests in Kubernetes. For long-term fixes, I refactor long, fat services into smaller units where needed, and apply rate limiting, throttling, backpressure handling, edge caching, and service meshes.
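
The "cache repeated computations or API responses" step above can be illustrated with a tiny in-memory TTL cache. In production this would typically be Redis; this is a minimal sketch with invented names (`TtlCache`) and an injectable clock so expiry is testable.

```javascript
// Minimal in-memory cache with per-entry time-to-live.
class TtlCache {
  constructor(ttlMs, now = Date.now) {
    this.ttlMs = ttlMs;
    this.now = now;          // injectable clock (for tests)
    this.store = new Map();  // key -> { value, expiresAt }
  }

  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() > entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries on read
      return undefined;
    }
    return entry.value;
  }

  set(key, value) {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }
}
```

A service would check the cache before recomputing an expensive result or re-calling a downstream API, cutting load on the bottlenecked dependency.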

How would you work with other architects on the same project? This has been the scenario in my current project as well. Working with other architects on the same project requires strong collaboration, clarity, and a systems-thinking approach to ensure architectural consistency and long-term maintainability.

I start by aligning on goals, ensuring we all understand the business and technical objectives — scalability, resilience, time to market. I define guiding principles, for example event-driven architecture, cloud-native design, twelve-factor apps, and security first. Then I push for a shared vocabulary: agreed terminology and architecture-documentation standards, because defining a common architecture language and decision framework up front reduces ambiguity and design drift.

I also divide responsibilities with clear ownership: splitting the architecture by domain — for example IoT ingestion, API layer, storage, observability — assigning system boundaries, and giving each architect autonomy within their own subsystem. We use design-governance and sync mechanisms: weekly or biweekly architecture syncs to discuss design changes, integration impacts, and performance considerations, and versioned architecture documents maintained in Confluence. I involve the other architects early through whiteboarding and collaborative tools like Miro or Lucidchart, co-create POCs and API contracts before handing them off to the teams, and make sure design reviews act as alignment checkpoints rather than gatekeeping.
For cross-cutting concerns, I work with the other architects to define shared standards for security, observability, auto-scaling, resilience, DevOps, and data storage. I host internal tech talks and brown-bags to share architecture insights and learn from others, and I mentor senior engineers to grow future architects. These are the things I usually do.

Below is a Node JS code snippet that attempts to implement a binary search algorithm on the sorted array. There is a subtle bug in the implementation. Please review the code and explain the issue along with how you would correct it. The issue is on the line where `end` is initialized: it is set to the array's length, which is one index beyond the valid range under JavaScript's zero-based indexing. It should be `arr.length - 1`. Using `arr.length` allows `mid` to exceed the valid index range, which leads to comparisons with `undefined` and causes the search to fail to find valid targets at the end of the array. That is the only issue I see.
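
Since the original snippet isn't reproduced in the transcript, here is a minimal sketch of the corrected implementation described in the answer (function and variable names are illustrative):

```javascript
// Binary search over a sorted array using an inclusive [start, end] range.
// With a `start <= end` loop condition, `end` must start at the last valid
// index (arr.length - 1); starting at arr.length lets `mid` reach an index
// past the array, comparing the target against `undefined`.
function binarySearch(arr, target) {
  let start = 0;
  let end = arr.length - 1; // the fix: last valid index, not arr.length
  while (start <= end) {
    const mid = Math.floor((start + end) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) start = mid + 1;
    else end = mid - 1;
  }
  return -1; // target not present
}
```

The buggy variant fails precisely on targets at the end of the array, which is the symptom called out in the answer.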

How would you design a system to handle a sudden surge in device traffic without impacting performance? To handle device traffic efficiently without impacting performance, the system needs an asynchronous, event-driven, scalable architecture that separates the critical path from heavy-lifting tasks.

First, define critical versus non-critical activities. Critical activities must be processed in real time: device authentication, telemetry alerts, heartbeats. Non-critical activities are heavy and can be delayed: analytics, aggregation, logging, enrichment, backups.

As an architectural overview: IoT devices hit an API gateway, MQTT broker, or ingestion layer; a real-time processing layer validates, filters, and routes; messages then flow into Kafka, RabbitMQ, MQTT, or AWS IoT rules; and from there, immediate actions are handled in real time while background workers and analytics jobs consume the rest.

Asynchronous offloading via messaging means using Kafka or RabbitMQ for the non-critical workloads — data storage, analytics, enrichment. With an event-driven architecture, I use event routing (for example AWS EventBridge) to decouple services and fan-out consumers for logging.
The real-time path should always stay lean: only alerts, thresholds, and critical device-control messages go through the real-time execution path. I also leverage edge processing where applicable, pushing filtering, throttling, and even basic ML scoring to edge gateways or device firmware. For scalability, I auto-scale microservices using the Kubernetes HPA or Fargate, and handle backpressure with prefetch and max.poll.records. Heavy, long-running, or compute-intensive jobs — for example model training or reporting — go into batch-processing pipelines. For monitoring and fail-safe mechanisms, we have dead-letter queues, circuit breakers, structured logs, and metrics.
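
The "keep the critical path lean, offload the rest" idea above can be sketched in-process. This is a minimal illustrative sketch — in production the queue would be Kafka or RabbitMQ, and all names (`TaskQueue`, `handleTelemetry`) are invented for the example.

```javascript
// A trivial in-process stand-in for a message queue: the hot path enqueues
// work in O(1) and returns immediately; a background worker drains it later.
class TaskQueue {
  constructor() { this.tasks = []; }
  enqueue(task) { this.tasks.push(task); }

  // Background worker: run deferred tasks one by one.
  async drain() {
    const results = [];
    while (this.tasks.length > 0) {
      results.push(await this.tasks.shift()());
    }
    return results;
  }
}

function handleTelemetry(reading, queue) {
  // Critical path: validate in real time and respond fast.
  if (typeof reading.value !== 'number') throw new Error('invalid reading');
  // Non-critical: defer enrichment/analytics to the background worker.
  queue.enqueue(async () => ({ ...reading, enriched: true }));
  return { accepted: true };
}
```

The device-facing handler returns as soon as validation passes; enrichment happens asynchronously, so a traffic surge queues up work instead of slowing the real-time path.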

How would you build and manage a team in APAC? Considering the diverse region and their unique challenges, where would you start? This is not the first project where I have managed a team. Building and managing software engineering teams in the APAC region requires a thoughtful approach that balances regional diversity, cultural nuances, time-zone issues, and the varying availability of software engineers and leaders.

I start with purpose and local context: understanding the business goals and the regional context. Are we building an R&D hub, a delivery center, a support team, or a new product for a local market? I tailor the team design to the market and its maturity, and map regional strengths — India, Vietnam, and the Philippines, for example, have strong delivery capability and talent pipelines, while Singapore, Japan, and Australia are strong in product design, tech leadership, and regulation-sensitive work.

Next, build the right team structure. I prefer a hybrid model: start with a core of senior engineers — local or relocated — who set the foundation, then gradually add local hires and junior talent, balanced with experienced mentors: engineering managers or tech leads, back-end and front-end developers, DevOps, and product owners. Recruit strategically, region by region, following local trends: partner with local universities and bootcamps, hire interns, and tap regional tech communities.
Then establish a strong cultural foundation: foster an inclusive culture, respect language barriers and cultural holidays, encourage one-on-ones and skip-levels, and use collaborative tools. Handle time-zone diversity by establishing core overlapping hours — three to four hours a day when everyone collaborates — use a follow-the-sun model for support and hand-offs when needed, and ensure communication through weekly syncs, async updates, and daily Slack, with a documentation-first mindset. Finally, invest in growth and retention: mentoring programs with global counterparts, tech talks and workshops, and adapting to APAC-specific challenges such as regulatory ambiguity and compliance. In the end, the team has to be aligned with its purpose and its regional context.

What strategies would you use to ensure that messages between IoT devices and microservices using RabbitMQ are processed in order despite network hiccups? First, I leverage message grouping by routing key or queue: use one queue per device or device group (logical sharding), ensuring all messages for a given device go to the same queue, which guarantees ordering within each device's scope.

I use prefetch = 1 to enforce strict serial processing: configuring the RabbitMQ consumer with channel.prefetch(1) means only one message is processed per consumer at a time, preventing messages from being pulled and processed in parallel, which can cause out-of-order execution. I use publisher confirms on the producer side to ensure the broker has received and persisted each message, and manual acknowledgments from consumers, sent only after processing completes.

Even with strict ordering, network hiccups and connection resets can lead to retries and duplicate deliveries, so I ensure the microservices are idempotent: processing the same message multiple times must produce the same output. I also leverage timestamps and sequence numbers — including a sequence number in the message payload from the IoT device lets the consumer service detect out-of-order messages and buffer them until the missing one arrives. I add alerting for dropped or out-of-order packets, and implement dead-letter queues with retry policies so a bad message doesn't block the pipeline.
Finally, when full ordering isn't scalable — ordering across all messages doesn't scale well — I fall back to partial ordering: partition by device group, maintain local ordering where it is needed, and allow concurrent processing across unrelated device groups.
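
The sequence-number strategy above can be sketched as a small per-device reorder buffer. This is a minimal illustrative sketch, not a RabbitMQ API; the class name `ReorderBuffer` and its shape are invented for the example.

```javascript
// Releases messages strictly in sequence order per device, buffering any
// that arrive early and dropping duplicates/old retries (idempotency).
class ReorderBuffer {
  constructor() {
    this.nextSeq = new Map(); // deviceId -> next expected sequence number
    this.pending = new Map(); // deviceId -> Map(seq -> message)
  }

  // Returns the messages now deliverable in order (possibly empty).
  push(deviceId, seq, msg) {
    if (!this.nextSeq.has(deviceId)) this.nextSeq.set(deviceId, 0);
    if (!this.pending.has(deviceId)) this.pending.set(deviceId, new Map());
    const buf = this.pending.get(deviceId);
    if (seq < this.nextSeq.get(deviceId)) return []; // duplicate/old: drop
    buf.set(seq, msg);
    const ready = [];
    let next = this.nextSeq.get(deviceId);
    while (buf.has(next)) { // release the contiguous run from `next` onward
      ready.push(buf.get(next));
      buf.delete(next);
      next++;
    }
    this.nextSeq.set(deviceId, next);
    return ready;
  }
}
```

A production version would also cap the buffer size and time out on gaps (routing stuck messages to a DLQ), which is omitted here for brevity.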

Can you explain how you would implement an event driven system using Kafka or RabbitMQ? Yes — I have set this up multiple times.

First, I define the domain events: identify the domain events in the system, for example "device registered," "sensor data received," or "threshold exceeded." Each event should be immutable and represent something that has already happened.

Then I choose the right broker. I use Kafka when I need high throughput, the ability to replay messages or retain them for analytics, and strong ordering guarantees per partition. I use RabbitMQ when I need low-latency, real-time communication, complex routing via topic or other exchanges, or per-message acknowledgments and dead-lettering.

On the producer side, each microservice or IoT gateway publishes events to Kafka topics or RabbitMQ exchanges. I use serialized formats like JSON, Protobuf, or Avro, and add metadata headers such as timestamps, source device, and correlation ID. On the consumer side, I use Kafka consumer groups or RabbitMQ queues to distribute load; consumers process events asynchronously and perform the business logic. I make sure to acknowledge processed messages, handle retries gracefully, and rely on idempotency to avoid duplicates. For microservice integration, each microservice subscribes only to the events it is interested in, with event routing through Kafka Streams or an event bus where needed.
As for scalability and fault tolerance: I use multiple partitions for high throughput, with replication and acks=all for durability; Kafka Streams for stream processing; and on the RabbitMQ side, HA/quorum queues, message persistence, publisher confirms, and prefetch limits to manage consumer load.
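
The publish/subscribe flow with idempotent consumers described above can be sketched with an in-memory bus standing in for Kafka/RabbitMQ. All names here (`EventBus`, `makeIdempotentHandler`, the topic string) are illustrative, not a real broker API.

```javascript
// A trivial in-memory event bus: producers publish immutable events to
// topics, consumers subscribe per topic.
class EventBus {
  constructor() { this.subscribers = new Map(); } // topic -> [handler]

  subscribe(topic, handler) {
    if (!this.subscribers.has(topic)) this.subscribers.set(topic, []);
    this.subscribers.get(topic).push(handler);
  }

  publish(topic, event) {
    for (const handler of this.subscribers.get(topic) || []) handler(event);
  }
}

// Wrap a handler so redelivered events (retries, at-least-once delivery)
// are processed exactly once, keyed by a deduplication/event id.
function makeIdempotentHandler(fn) {
  const seen = new Set();
  return (event) => {
    if (seen.has(event.id)) return; // duplicate delivery: ignore
    seen.add(event.id);
    fn(event);
  };
}
```

With a real broker, the dedup set would live in Redis or a database so it survives consumer restarts; the pattern is the same.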