
As a dedicated developer with a passion for solving problems, I bring strong skills in both front-end and back-end development. I am always eager to expand my knowledge and expertise by learning new technologies and frameworks. Currently, I am seeking an opportunity to begin my career as a software developer within a reputable, technology-driven company.
Backend Developer, X-YUG Technologies
Freelance Part-time AI Trainer, RemoTasks
Full Stack Developer, Reachvel
Full Stack Developer, Chahal Academy Pvt. Ltd.
ReactJS
Redux
NodeJS
ExpressJS
MongoDB
HTML
CSS
NextJS
TypeScript
SQL DB
PHP
Java
Python
CodeIgniter
Chakra UI
Git
MaterialUI
RazorPay
AWS
FastAPI
GraphQL
Zoom
PostgreSQL
Redis
Hi. I'm from India, and currently I'm based in Hyderabad. I completed my graduation in 2021, and after that I did a full stack web development certification from a boot camp, which I joined right after graduating. In the boot camp I trained on full stack development; the tech stack I used was React, Node, Express, and MongoDB, with Spring Boot and Java on the backend and MySQL as the database.

After completing the boot camp I joined Chahal Academy, where I worked as a full stack web developer using React, Node, Express, and MongoDB. There I worked on an e-learning website where students can enroll, purchase a course, take tests, and pay through EMI. I worked on both the user side and the admin side of that project, integrated third-party APIs such as the Razorpay payment gateway, and built RESTful APIs. I worked there for one year. After that I joined Reachvel, where I worked on a donation website where users can donate an amount toward different causes. There I was a senior full stack developer leading a team of four to five people, using React, Node, Express, and MongoDB, with AWS and Apollo for deployment. After that project was completed the company closed, and I joined X-YUG Technologies as a backend developer, working with Node, Express, and MongoDB and using AWS for deployment. There we deployed many sites, and I led the backend team on an e-commerce platform (goldheart.com, goldbox.com, and goldcenter.com) where users can purchase gold and silver in quantities they choose, such as a gram or a hundred grams. Including the training and boot camp period, I have around three years of experience. Mostly I use React, Node, Express, MongoDB, Apollo, and Socket.IO for live prices and real-time features. That is all about my background. Thank you.
Discussing how I would scale a Socket.IO-based messaging service in Node.js to handle sudden spikes in users: scaling such a service requires a combination of strategies to ensure the system remains responsive, reliable, and scalable. First, we scale horizontally with multiple Node.js instances. Socket.IO is built on WebSockets and maintains long-lived connections between the server and clients, so to handle a large number of simultaneous users we need to distribute connections across multiple Node.js instances. One option is the Node.js cluster module, which runs multiple Socket.IO server instances on a single machine and leverages multiple CPU cores. Beyond that, we deploy Socket.IO on multiple servers so each server handles a portion of the traffic; the challenge is ensuring that all instances stay in sync on user events and messages.

Next, we enable sticky sessions. Since WebSockets maintain persistent connections, when using a load balancer we need each client to be consistently routed to the same server instance. We enable sticky sessions at the load balancer level (for example in Nginx), which ensures that connections from the same user always reach the same backend instance. We also use Redis for pub/sub and state synchronization. Once we scale to multiple instances, each server has its own set of connected users who still need to exchange messages in real time; since WebSocket connections are tied to individual servers, we need a mechanism to broadcast events across all server instances. Redis pub/sub serves that purpose: Redis acts as a central message broker and ensures that messages published from one server are propagated to all servers. We install the Redis adapter for Socket.IO and configure Socket.IO to use it. With this setup, when a user on one server sends a message, it is broadcast to users on all Socket.IO instances through Redis.
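A minimal sketch of the Redis adapter setup described above, assuming the @socket.io/redis-adapter and redis packages and a Redis instance at localhost:6379 (the package names, URL, and event names are illustrative assumptions, not part of the original project):

```typescript
import { createServer } from "http";
import { Server } from "socket.io";
import { createClient } from "redis";
import { createAdapter } from "@socket.io/redis-adapter";

const httpServer = createServer();
const io = new Server(httpServer);

// Two Redis connections: one for publishing, one (duplicate) for subscribing.
const pubClient = createClient({ url: "redis://localhost:6379" });
const subClient = pubClient.duplicate();

await Promise.all([pubClient.connect(), subClient.connect()]);
io.adapter(createAdapter(pubClient, subClient));

io.on("connection", (socket) => {
  socket.on("chat:message", (msg) => {
    // Emitting here reaches clients connected to every instance,
    // because the Redis adapter relays the event via pub/sub.
    io.emit("chat:message", msg);
  });
});

httpServer.listen(3000);
```

Each Node.js instance runs this same setup behind the sticky-session load balancer, so any instance can emit and every connected client receives the event.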
In Express.js applications, a good pattern for secure and structured error handling is the centralized error-handling pattern, often combined with custom error classes and middleware, which ensures all errors are properly handled, logged, and communicated securely to the client. For custom error classes, we create classes to represent different error types (for example client errors, server errors, and authentication errors); this adds clarity and structure when dealing with errors. We also add centralized error-handling middleware: this middleware catches all errors, including those thrown asynchronously, and ensures consistent error responses. It also distinguishes between operational errors (like a bad request) and programming errors (like an undefined variable) for better logging and security. After that, we use try/catch for asynchronous code: we use async/await and wrap route handlers in a catch wrapper (a higher-order function) that passes any error to the centralized error handler. Then we add a global handler for uncaught exceptions and unhandled promise rejections, which are programming errors and should be logged before safely shutting the application down. We also make error responses safe for production: in production we avoid leaking stack traces or internal error details to clients and only show a generic message for critical errors, while logging the detailed error for debugging purposes. For 404 Not Found, we create a middleware to catch routes that do not exist and throw a 404 error. For logging, we record all errors using a logging library like Winston or Bunyan, or a cloud-based service like Sentry, so errors are monitored and tracked correctly. Through all of this we ensure errors are handled consistently and securely, and debugging becomes easier: define custom error classes, use try/catch blocks and async/await to handle errors at the route level, centralize error handling with middleware that sends structured responses, handle uncaught exceptions and promise rejections globally, log errors securely, and hide sensitive error details in production environments to avoid exposing internal application logic. That is how I would do error handling in an Express application.
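A minimal sketch of the centralized error-handling pattern described above; the AppError class, asyncHandler wrapper, and /users/:id route are illustrative names I am introducing, not an existing API:

```typescript
import express, { Request, Response, NextFunction } from "express";

// Custom error class: operational errors carry a status code and a client-safe message.
class AppError extends Error {
  constructor(public statusCode: number, message: string, public isOperational = true) {
    super(message);
  }
}

// Higher-order wrapper that forwards async errors to the centralized handler.
const asyncHandler =
  (fn: (req: Request, res: Response, next: NextFunction) => Promise<unknown>) =>
  (req: Request, res: Response, next: NextFunction) =>
    fn(req, res, next).catch(next);

const app = express();

app.get("/users/:id", asyncHandler(async (req, res) => {
  const user: { id: string; name: string } | null = null; // placeholder lookup
  if (!user) throw new AppError(404, "User not found");
  res.json(user);
}));

// 404 for unknown routes.
app.use((_req, _res, next) => next(new AppError(404, "Route not found")));

// Centralized error-handling middleware (must take four arguments).
app.use((err: Error, _req: Request, res: Response, _next: NextFunction) => {
  const status = err instanceof AppError ? err.statusCode : 500;
  const message =
    err instanceof AppError && err.isOperational ? err.message : "Internal server error";
  console.error(err); // swap for Winston/Sentry in production
  res.status(status).json({ status: "error", message });
});

app.listen(3000);
```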
What steps would I take to verify the integrity of data transmitted over WebSocket connections in a real-time app? To ensure the integrity of data transmitted over WebSocket connections, I can implement a range of security measures to protect the data from tampering, interception, or corruption during transmission. First, we use WebSocket over TLS (wss): we ensure all WebSocket connections are established over a secure transport layer by using wss (WebSockets over TLS/SSL). This provides encryption, preventing man-in-the-middle attacks and ensuring the confidentiality and integrity of transmitted data. Concretely, we use wss:// instead of ws:// when connecting, and we obtain and configure an SSL/TLS certificate on the server. After that, we use message authentication codes (MACs) to verify that the data has not been tampered with: I can append a cryptographic hash (for example an HMAC) to each message, calculated using a shared secret between the client and the server. Upon receiving a message, the server recalculates the hash and compares it to ensure data integrity. So we create a hash of the message using HMAC with a secret key, send the message along with the hash, and verify the hash on the server before processing the message. Next, for data validation and schema enforcement, we make sure the structure and format of incoming data is correct by implementing validation on the server side, using libraries like Zod or Yup to enforce schemas for incoming messages and prevent malformed or malicious data from being processed: we define a schema for the expected message format and validate each received message against it. We can also use JSON Web Tokens (JWTs) for message authentication: for additional security, I can sign each message with a JWT, which ensures the sender is authenticated and the message was not tampered with; the client signs the message with a secret key and sends it to the server, and the server verifies the JWT before processing the message. We also add sequence numbers or timestamps for replay-attack prevention: to prevent replay attacks, where an attacker resends an already-sent message, we include a sequence number or timestamp with each message, and the server tracks sequence numbers or validates that the timestamp falls within an acceptable range. On top of that we can apply rate limiting and throttling, and implement checksums or hash verification for data integrity. So by using wss, HMACs, schema validation, timestamps, rate limiting, and checksums, we can verify the integrity of data transmitted over WebSocket connections in a real-time app.
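A small sketch of the HMAC signing and verification step described above, using Node's built-in crypto module; the SignedMessage shape, the WS_HMAC_SECRET variable, and the 30-second freshness window are assumptions for illustration:

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Shared secret; in practice load it from an environment variable or secret manager.
const SECRET = process.env.WS_HMAC_SECRET ?? "dev-only-secret";

interface SignedMessage {
  payload: string;   // JSON-encoded application data
  timestamp: number; // used for replay-attack checks
  signature: string; // hex HMAC over timestamp + payload
}

function sign(payload: string, timestamp: number): string {
  return createHmac("sha256", SECRET).update(`${timestamp}.${payload}`).digest("hex");
}

export function wrapMessage(data: unknown): SignedMessage {
  const payload = JSON.stringify(data);
  const timestamp = Date.now();
  return { payload, timestamp, signature: sign(payload, timestamp) };
}

export function verifyMessage(msg: SignedMessage, maxAgeMs = 30_000): boolean {
  // Reject stale messages to prevent replays.
  if (Math.abs(Date.now() - msg.timestamp) > maxAgeMs) return false;
  const expected = Buffer.from(sign(msg.payload, msg.timestamp), "hex");
  const received = Buffer.from(msg.signature, "hex");
  // Constant-time comparison; lengths must match before timingSafeEqual.
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```

The client would call wrapMessage before emitting, and the server would call verifyMessage (and then schema validation) before processing anything.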
Ensuring that API endpoints developed with Express are strongly typed when using TypeScript: I need to integrate TypeScript's type checking and enforce types for requests, responses, middleware, and route handlers. First, we set up a TypeScript project by installing the necessary dependencies and configuring tsconfig.json; after creating tsconfig.json we set the module option, enable strict mode, and set the other required options. After that, we define strong types for requests and responses. We use the built-in Request and Response types from Express, but for custom data (for example the body, query params, and route params) we declare interfaces to strongly type those objects, defining custom types for request parameters, body, and query. Middleware in Express can also be strongly typed, especially when I need to type additional properties that the middleware adds to the request object. We also write typed custom error handlers: we create custom error classes and ensure error handlers are strongly typed to handle the various error types. We use typed route handlers when defining routes, which ensures handlers are properly typed, particularly when dealing with route parameters, query parameters, or request bodies. After that, we ensure type safety for API responses: we make sure API responses are consistently structured by declaring types for the response payloads and using those types when sending responses. We also use TypeScript utility types for generic cases; TypeScript provides utility types like Partial, Pick, and Omit, which help enforce more flexible or partial typing when needed. Finally, we can validate requests using a TypeScript-friendly validation library like Zod or Joi for request bodies, parameters, or query strings. So through request and response typing, middleware typing, typed error handling, utility types, and validation, and by following these steps, I can ensure our Express APIs are strongly typed, which leads to more predictable and maintainable code.
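A short sketch of a typed Express handler as described above, using Express's Request/Response generics plus Zod for runtime validation; the /users route and its interfaces are hypothetical names introduced for illustration:

```typescript
import express, { Request, Response } from "express";
import { z } from "zod";

// Explicit types for the request body and response payload.
interface CreateUserBody { name: string; email: string; }
interface UserResponse { id: string; name: string; email: string; }

// Runtime schema that mirrors the compile-time type.
const createUserSchema = z.object({ name: z.string().min(1), email: z.string().email() });

const app = express();
app.use(express.json());

// Request<Params, ResBody, ReqBody, Query>: the generics keep the handler strongly typed.
app.post(
  "/users",
  (
    req: Request<{}, UserResponse | { error: string }, CreateUserBody>,
    res: Response<UserResponse | { error: string }>
  ) => {
    const parsed = createUserSchema.safeParse(req.body);
    if (!parsed.success) {
      return res.status(400).json({ error: "Invalid request body" });
    }
    const user: UserResponse = { id: "u_1", ...parsed.data };
    return res.status(201).json(user);
  }
);

app.listen(3000);
```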
Suggesting a strategy for implementing role-based access control (RBAC) in a Node.js API using Passport: to implement RBAC with Passport, I need to integrate user roles and permissions into the authentication process. First, we set up authentication with Passport and assign roles to users (for example admin, editor, or user). After that, we add middleware for role-based access control, and then we protect routes based on the user's role. For the authentication setup with Passport.js, we first configure Passport for user authentication (for example using a JWT or local strategy). Using JWT, we install the necessary dependencies and configure Passport with the JWT strategy. After that, we assign roles to users: we ensure each user has a role in our data model, and this role can be stored as a field in the database, for example in the user schema. When a user is authenticated, their role should be part of the JWT payload so Passport can read it, and we create a middleware for role-based access control that checks whether the authenticated user has the required role to access a particular route. After that, I protect routes based on user roles: I can protect our API routes by applying the role-based access middleware on top of Passport authentication. For the JWT payload, when we sign a JWT during authentication we ensure the user's role is included in the payload, and we handle token generation accordingly. For testing access control: on an admin-only route, only users with the admin role should be able to access the admin dashboard; on an editor route, both editor and admin should be able to create content; and on an authenticated route, any logged-in user with the admin, editor, or user role can access their profile. So to implement role-based access control in a Node.js API: authenticate users with Passport.js and JWT, store roles in the user model, create a middleware to check roles and apply it to routes, protect API endpoints based on user roles, and include roles in the JWT payload to make role-based decisions during authorization. This allows for secure, scalable, and flexible access control across our API.
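A rough sketch of the Passport JWT plus role-middleware strategy described above, assuming the passport and passport-jwt packages; the JwtPayload shape, requireRole helper, and routes are illustrative, not an existing codebase:

```typescript
import express from "express";
import passport from "passport";
import { Strategy as JwtStrategy, ExtractJwt } from "passport-jwt";

// The JWT payload is assumed to carry the user's role.
interface JwtPayload { sub: string; role: "admin" | "editor" | "user"; }

passport.use(
  new JwtStrategy(
    {
      jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
      secretOrKey: process.env.JWT_SECRET ?? "dev-secret",
    },
    (payload: JwtPayload, done) => done(null, payload) // payload becomes req.user
  )
);

// Middleware factory that checks the authenticated user's role.
const requireRole =
  (...roles: JwtPayload["role"][]): express.RequestHandler =>
  (req, res, next) => {
    const user = req.user as JwtPayload | undefined;
    if (user && roles.includes(user.role)) return next();
    return res.status(403).json({ error: "Forbidden" });
  };

const app = express();
app.use(passport.initialize());
const auth = passport.authenticate("jwt", { session: false });

app.get("/admin/dashboard", auth, requireRole("admin"), (_req, res) => res.json({ ok: true }));
app.post("/content", auth, requireRole("admin", "editor"), (_req, res) => res.status(201).json({ ok: true }));
app.get("/profile", auth, (req, res) => res.json({ user: req.user }));

app.listen(3000);
```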
In a given Socket.IO-based real-time application, there are frequent performance issues and optimizations that can be applied for better scalability and efficiency. For inefficient event-listener setup, the fix is to limit event emissions: we make sure the server emits events selectively, since emitting unnecessary events, especially to all connected clients, can cause performance issues. For handling large amounts of data efficiently, we compress data and use binary formats: compression or a binary format like MessagePack instead of JSON reduces the size of the data being transmitted. For potential memory leaks, we ensure proper resource cleanup: all resources like database connections, timers, and event listeners are cleaned up on disconnect, so nothing persists in memory after the client disconnects. For single-server scaling limitations, we scale Socket.IO with Redis or another adapter: the Redis adapter lets Socket.IO scale across multiple instances, working across a cluster of Node.js processes or even multiple servers. Where rate limiting or throttling is missing, we add it: we limit the number of events a client can emit in a given period to prevent abuse, using a library like express-rate-limit or our own rate-limit logic. Where namespaces and rooms are not used, we use them: Socket.IO provides namespaces and rooms for better organization and efficiency, and by splitting clients into groups I can reduce unnecessary event broadcasts. Namespaces can be used for logically separating different types of clients, and rooms for grouping clients, for example a chat room or a game room. For error handling and logging, we implement proper error handling and log important events for monitoring and debugging. We can also emit less frequently or batch data: instead of emitting data continuously, we batch events or only emit changes that are significant. All of these strategies help ensure that our Socket.IO-based real-time application performs well and scales under heavy load.
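A brief sketch showing rooms plus per-socket resource cleanup, two of the optimizations mentioned above; the subscribe/unsubscribe events, price rooms, and heartbeat timer are hypothetical examples, not the original application's events:

```typescript
import { createServer } from "http";
import { Server, Socket } from "socket.io";

const httpServer = createServer();
const io = new Server(httpServer);

// Track per-socket resources so they can be released on disconnect.
const heartbeatTimers = new Map<string, NodeJS.Timeout>();

io.on("connection", (socket: Socket) => {
  // Rooms: only clients subscribed to a symbol receive its updates,
  // instead of broadcasting every tick to every connected client.
  socket.on("subscribe", (symbol: string) => socket.join(`price:${symbol}`));
  socket.on("unsubscribe", (symbol: string) => socket.leave(`price:${symbol}`));

  // Batched, low-frequency emission: at most one heartbeat per second per socket.
  const timer = setInterval(() => socket.emit("heartbeat", Date.now()), 1000);
  heartbeatTimers.set(socket.id, timer);

  // Cleanup prevents timers (and similar resources) from leaking after disconnect.
  socket.on("disconnect", () => {
    const t = heartbeatTimers.get(socket.id);
    if (t) clearInterval(t);
    heartbeatTimers.delete(socket.id);
  });
});

// Elsewhere, a price feed emits to the relevant room only:
// io.to("price:GOLD").emit("price", { symbol: "GOLD", value: 6420 });

httpServer.listen(3000);
```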
To integrate rate limiting in a Node.js application using Express, I can use middleware to restrict the number of requests a client can make within a specific time window. A common package for this is express-rate-limit. This helps protect our API from abuse such as denial-of-service (DoS) attacks or brute-force attacks. To implement it, I install express-rate-limit, configure the rate-limiting middleware, and apply the middleware to specific routes or globally. In the configuration, we set the time window in milliseconds and the maximum number of requests allowed within that window. For the message option, we return a custom error message when the limit is exceeded, and we enable the headers option so the response includes the rate-limit headers (limit, remaining, and reset) to inform the client about the limits. Then we apply the middleware either globally or to specific routes. For advanced use, we can vary the rate limit by role or user, applying different limits based on role, user type, or specific criteria; for example, a stricter rate limit on the login route. For security and monitoring purposes, we can also log rate-limit violations. So with express-rate-limit we integrate rate limiting easily, configure the maximum number of requests per time window, apply the limiter globally or on specific routes, customize the response, and add logging for violations. By applying rate limiting in our Express application, I can prevent abuse, protect our servers from overload, and ensure better security for critical routes like login. That is how to add rate limiting in Node.js using Express to prevent abuse of the APIs.
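A minimal sketch using express-rate-limit as described above; the option names follow recent versions of the package and may differ slightly across major versions, and the specific limits and routes are illustrative:

```typescript
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();

// Global limiter: 100 requests per 15-minute window per IP.
const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  standardHeaders: true, // send the RateLimit-* headers
  legacyHeaders: false,  // disable the older X-RateLimit-* headers
  message: { error: "Too many requests, please try again later." },
});
app.use(apiLimiter);

// Stricter limiter just for the login route: 5 attempts per 15 minutes.
const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 5,
  handler: (req, res) => {
    console.warn(`Rate limit hit on /login from ${req.ip}`); // simple violation logging
    res.status(429).json({ error: "Too many login attempts, try again later." });
  },
});

app.post("/login", loginLimiter, (_req, res) => {
  res.json({ ok: true }); // placeholder authentication logic
});

app.listen(3000);
```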
Migrating a complex database schema without downtime using TypeORM and Node.js requires careful planning and execution. This process is often referred to as a zero-downtime migration, where I apply changes to our database while ensuring the application keeps running smoothly. First, we use backward-compatible migrations: the core principle of zero-downtime migration is to make backward-compatible changes, so the new schema works with both the old and the new versions of our code. That means avoiding breaking changes like dropping columns or changing column types until the application is fully migrated, and adding new columns or tables additively so we can migrate gradually. We also plan for phased migrations: instead of applying all changes at once, we break the migration into phases, which lets us migrate the schema and update the application step by step. The phases are: add the new columns or tables; update the application code so it writes to both the old and the new schema; backfill data from the old columns into the new columns; switch reads to the new schema; and finally remove the old columns once they are no longer in use.

Then we handle the schema migration with TypeORM, which provides a migration system for making schema changes in a structured way. TypeORM lets us create migrations using CLI commands: we generate a migration file, which is created under the migrations folder, and in that file I define the schema change, such as adding new columns or modifying existing ones. Once the migration is created, we apply it with the migration:run command. We also keep the application backward compatible: in the application code, the new column is only used after verifying that the migration has completed and the column exists. For data backfill: if I am introducing new columns or tables that depend on existing data, I must backfill the data after the new schema changes are in place but before switching reads and writes to the new schema. For example, I can add a new column to the user table and create a migration or script that populates the new column with data from the old column, running it separately to avoid any performance impact on the application. After that, we switch the application to use the new schema and remove the old columns or tables, we test in a staging environment, and we monitor after the migration. Through all these steps we ensure minimal disruption and can migrate our database schema with effectively zero downtime.
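A sketch of one additive migration phase in TypeORM as described above; the table name, column names, and timestamped class name are hypothetical, and the backfill query assumes PostgreSQL-style identifier quoting:

```typescript
import { MigrationInterface, QueryRunner, TableColumn } from "typeorm";

// Phase 1 of a zero-downtime change: add the new column alongside the old one
// and backfill it, without dropping anything yet. The old column is removed
// in a later migration, once reads have switched over.
export class AddFullNameToUser1700000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    // Additive, backward-compatible change: the old application code keeps working.
    await queryRunner.addColumn(
      "user",
      new TableColumn({ name: "full_name", type: "varchar", isNullable: true })
    );

    // Backfill from the old column; on very large tables this would run in batches.
    await queryRunner.query(
      `UPDATE "user" SET "full_name" = "name" WHERE "full_name" IS NULL`
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.dropColumn("user", "full_name");
  }
}
```

This file would be created with the TypeORM CLI and applied with the migration:run command mentioned above, while the application code writes to both columns until the switch is complete.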
A strategy I would recommend for implementing custom validation logic in TypeORM that is not supported out of the box: when using TypeORM, I might encounter situations where the out-of-the-box validation features, like column constraints, are not sufficient for our needs. To implement custom validation logic in TypeORM, I can take one of these approaches. First, we use class-validator: we can use the class-validator package in combination with our TypeORM entity classes to define custom validation logic. This lets us keep validation logic separate from the database schema, and it supports both built-in and custom validation decorators. We install class-validator and class-transformer; after installing both, they work alongside TypeORM entities and we can define custom validation logic. We define a custom validation decorator by implementing the ValidatorConstraint class from class-validator, for example to check that a user's password meets certain security criteria. After that, we use the custom decorator in our entity classes, and we validate before persisting data: I need to explicitly validate entities before saving them to the database. We can also use lifecycle hooks for validation: TypeORM provides entity listeners like @BeforeInsert and @BeforeUpdate, where I can run custom validation logic directly within the entities. This method couples the validation with the persistence lifecycle but can be effective for schema-specific rules, for example ensuring that an email field on the user entity is unique. We can also use a middleware or service layer for business-logic validations, and database constraints for critical validations. So in summary: use class-validator to define custom validation logic with decorators in our entities; use lifecycle hooks like @BeforeInsert and @BeforeUpdate for persistence-bound validations; use a service-based validation layer for complex, multi-entity business rules; and use database constraints for critical data integrity that must always be enforced. By combining these methods, I can implement robust custom validation logic that meets our application needs while keeping the code clean and maintainable. That is all.
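A condensed sketch of a custom class-validator decorator plus lifecycle-hook validation in a TypeORM entity, as described above; the IsStrongPassword rule and the User entity are illustrative names, not part of the original project:

```typescript
import {
  registerDecorator,
  validateOrReject,
  ValidationOptions,
  ValidatorConstraint,
  ValidatorConstraintInterface,
} from "class-validator";
import { Entity, PrimaryGeneratedColumn, Column, BeforeInsert, BeforeUpdate } from "typeorm";

// Custom constraint: at least 8 chars, one uppercase letter, one digit.
@ValidatorConstraint({ name: "isStrongPassword", async: false })
class IsStrongPasswordConstraint implements ValidatorConstraintInterface {
  validate(value: unknown): boolean {
    return typeof value === "string" && value.length >= 8 && /[A-Z]/.test(value) && /[0-9]/.test(value);
  }
  defaultMessage(): string {
    return "Password must be at least 8 characters and include an uppercase letter and a digit";
  }
}

// Decorator factory wrapping the constraint, per the class-validator custom-decorator pattern.
function IsStrongPassword(options?: ValidationOptions) {
  return (object: Object, propertyName: string) =>
    registerDecorator({ target: object.constructor, propertyName, options, validator: IsStrongPasswordConstraint });
}

@Entity()
export class User {
  @PrimaryGeneratedColumn()
  id!: number;

  @Column()
  @IsStrongPassword()
  password!: string;

  // Lifecycle hooks: run validation right before the entity is persisted or updated.
  @BeforeInsert()
  @BeforeUpdate()
  async validate(): Promise<void> {
    await validateOrReject(this);
  }
}
```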
How would I manage asynchronous processing in Node.js when integrating multiple third-party APIs? Managing asynchronous processing well is crucial for optimizing performance and improving the user experience. First, I consider how asynchrony works in Node.js: Node.js operates on a single-threaded event loop, making it efficient for I/O-bound operations like API calls, so understanding how to leverage asynchronous programming (callbacks, promises, and async/await) is key to optimizing API integrations. We use promises and async/await: instead of callbacks, we use promises and async/await for better readability and error handling, which makes the code easier to maintain and understand. For concurrent requests, I use Promise.all when making multiple independent API calls, so we can optimize performance by running them concurrently; this reduces overall waiting time because all requests are sent simultaneously. After that, we implement rate limiting: when integrating multiple APIs, each may impose rate limits, so we limit our own call rate to avoid exceeding them, for example using a library like p-limit to control concurrency. For error handling and retries, we integrate robust error handling and a retry mechanism for failed requests, using a library like axios-retry to automatically retry failed requests. We also use caching for frequently accessed data: caching can significantly improve performance by reducing the number of API calls for data that does not change often, using in-memory caching like Redis or local caching strategies. We batch API requests when possible: some APIs support batch requests, allowing us to send multiple requests in a single API call, which reduces the number of network round trips and improves performance. We add monitoring and logging to track API performance, error rates, and response times, which helps identify bottlenecks and optimize our integrations further. And we use asynchronous queues for heavy processing: for operations that involve heavy processing after fetching data, we can use a job queue (for example Bull or RabbitMQ) to handle those operations asynchronously, offloading them from the main request/response cycle. So in summary: use async/await for clarity and error handling, leverage Promise.all for concurrent requests, implement rate limiting with a library like p-limit, add retry logic for failed requests with a library like axios-retry, use caching to minimize API calls, batch requests when the APIs support it, monitor and log performance and errors, and consider job queues for heavy post-processing tasks. Following these strategies, I can effectively manage asynchronous processing in Node.js, optimize performance, and ensure seamless integration with multiple third-party APIs. That is all. Thank you.
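A small sketch combining Promise.all, p-limit, and axios-retry as described above; the endpoint URLs, concurrency cap, and retry settings are assumptions for illustration:

```typescript
import axios from "axios";
import axiosRetry from "axios-retry";
import pLimit from "p-limit";

// Retry failed requests up to 3 times with exponential backoff.
axiosRetry(axios, { retries: 3, retryDelay: axiosRetry.exponentialDelay });

// Cap concurrency so we stay under the third-party APIs' rate limits.
const limit = pLimit(5);

// Hypothetical endpoints; real URLs depend on the providers being integrated.
const endpoints = [
  "https://api.example-prices.com/gold",
  "https://api.example-prices.com/silver",
  "https://api.example-rates.com/usd-inr",
];

async function fetchAll(): Promise<unknown[]> {
  // Promise.all runs the (concurrency-limited) requests in parallel
  // instead of awaiting them one after another.
  const tasks = endpoints.map((url) => limit(() => axios.get(url).then((r) => r.data)));
  return Promise.all(tasks);
}

fetchAll()
  .then((results) => console.log("Fetched", results.length, "responses"))
  .catch((err) => console.error("One of the API calls failed:", err.message));
```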