Vetted Talent

Sirish Kumar J

Accomplished Cloud Solutions Architect with over 17 years of hands-on experience in crafting and managing resilient cloud solutions. Demonstrated leadership in optimizing infrastructure, driving scalability, and achieving cost efficiency. Adept at aligning technological initiatives with business objectives to deliver cutting-edge cloud services.

  • Role: Senior Cloud Solutions Architect / Platform Engineering
  • Years of Experience: 17 years

Skillsets

  • Cost Optimization
  • Windows
  • Team Leadership
  • PowerShell
  • Linux
  • Golang Programming
  • Data Analytics
  • CloudFormation
  • Strategic IT Planning
  • ML Services
  • Cloud Security and Compliance
  • IaC - 2 Years
  • Hybrid Cloud Solutions
  • Performance Optimization
  • Generative AI
  • Disaster Recovery Planning
  • Cloud Migration Strategies
  • AI
  • Python - 3 Years
  • Cloud Architecture Design - 7 Years
  • CI/CD Pipelines - 5 Years
  • DevOps - 6 Years

Vetted For

8 Skills
  • Role: AWS Solutions Architect (Remote), AI Screening
  • Result: 68%
  • Skills assessed: .NET, CI/CD, AWS Services, IaC, Networking, Docker, Kubernetes, Security
  • Score: 61/90

Professional Summary

17 Years
  • Apr 2020 - Present (5 yr 8 months)

    Senior Cloud Solutions Architect/Platform Engineering

    GoDigit General Insurance
  • Jun 2018 - Apr 2020 (1 yr 10 months)

    Architect Cloud

    Microland Limited
  • Jan 2013 - May 2018 (5 yr 4 months)

    Infra Consultant

    NTT Data Global Delivery Services Limited
  • Jul 2011 - Jan 2013 (1 yr 6 months)

    Technical Lead

    Cognizant Technology Solutions
  • Aug 2007 - Jun 2011 (3 yr 10 months)

    Resource Specialist

    ASAP-Y Sourcing Solutions Pvt. Ltd

Applications & Tools Known

  • Imperva
  • Akamai
  • Kong
  • Rancher
  • AWS Lambda
  • Gradle
  • GitHub
  • Docker
  • Kubernetes
  • Terraform
  • Route53
  • S3
  • RDS
  • DynamoDB
  • SNS
  • SQS
  • AWS CodePipeline
  • CloudFormation
  • CloudWatch
  • AWS IAM
  • EKS
  • AKS
  • Google Cloud Platform (GCP)
  • Golang
  • Jenkins
  • AWS QuickSight
  • Amazon Q
  • Python
  • Windows
  • Linux
  • AWS
  • Azure
  • Dynatrace
  • Google Maps
  • SageMaker
  • Redshift
  • AWS CLI
  • ServiceNow
  • EC2
  • VMware

Work History

17 Years

Senior Cloud Solutions Architect/Platform Engineering

GoDigit General Insurance
Apr 2020 - Present (5 yr 8 months)
    Led the design and deployment of multi-cloud solutions on AWS and Azure, significantly enhancing scalability and flexibility. Architected comprehensive cloud solutions, implemented Kubernetes, enhanced observability, designed disaster recovery strategies, and automated cloud infrastructure.

Architect Cloud

Microland Limited
Jun 2018 - Apr 2020 (1 yr 10 months)
    Designed and administered AWS cloud platforms, executed automation solutions, performed cost analysis, migrated systems to ServiceNow, and managed containerization with Docker and Kubernetes.

Infra Consultant

NTT Data Global Delivery Services Limited
Jan 2013 - May 2018 (5 yr 4 months)
    Implemented self-healing infrastructure, designed monitoring solutions, automated AWS environments, managed hybrid cloud environments, and facilitated serverless computing solutions.

Technical Lead

Cognizant Technology Solutions
Jul 2011 - Jan 2013 (1 yr 6 months)
    Managed VMware ESXi deployments, troubleshooting, and administration of virtual environments.

Resource Specialist

ASAP-Y Sourcing Solutions Pvt. Ltd
Aug 2007 - Jun 2011 (3 yr 10 months)
    Worked on staffing solutions for direct clients and in-house projects, along with Windows administration and troubleshooting.

Achievements

  • Architected comprehensive AWS and Azure cloud solutions, ensuring optimal performance and scalability.
  • Implemented Kubernetes, DevOps, and DevSecOps practices, enhancing deployment efficiency and security.
  • Enhanced system observability by integrating advanced monitoring and logging tools.
  • Utilized Google Maps and GCP for basic cloud services, enabling seamless API integration and functionality.
  • Designed and executed multi-cloud disaster recovery strategies using cloud-agnostic tools and solutions, ensuring business continuity.
  • Established a greenfield Azure cloud environment for a subsidiary company, providing a robust and scalable infrastructure.
  • Developed self-healing cloud infrastructure, automating recovery processes to maintain uptime and reliability.
  • Implemented Dynatrace for advanced application performance monitoring and optimization.
  • Deployed security tools such as Web Application Firewalls (WAF) and Network Firewalls, along with API Gateway configurations, to safeguard cloud environments.
  • Led multi-cloud cost optimization initiatives, leveraging AWS and Azure cost management tools to reduce expenditures.
  • Managed FinOps and team activities, fostering collaboration and ensuring the successful delivery of cloud projects.
  • Managed cloud budgets, optimizing costs and aligning expenditures with organizational goals.
  • Automated cloud infrastructure using Golang, built CI/CD deployment automation, and automated Kubernetes cluster upgrade/migration activities.

Major Projects

5 Projects

AWS and Azure Cloud Platform Solutions

    Architected and deployed multi-cloud infrastructure, implemented Kubernetes and DevOps practices, integrated observability, and executed disaster recovery strategies.

Sentiment Analysis Solution Redesign

    Redesigned solutions for optimal response time and cost-effective infrastructure, collaborated with AI & ML teams, and provisioned necessary infrastructure.

SERCO, UK

Jun 2018 - Apr 2020 (1 yr 10 months)
    Delivered IaaS (Infrastructure as a Service) for an AWS cloud environment with 200+ AWS instances across multiple availability zones.

Forethought Financial Group Acquisition by Global Atlantic

Jan 2013 - May 2018 (5 yr 4 months)
    Automated, configured, and deployed instances on AWS; designed and deployed a variety of applications on the AWS stack, including EC2, Route53, S3, RDS, DynamoDB, SNS, SQS, and Lambda, with a focus on high availability, fault tolerance, and auto scaling via AWS CloudFormation.

Comcast Converged Products (CCP)

Jul 2011 - Jan 2013 (1 yr 6 months)
    Installed, configured, administered, and troubleshot VMware ESXi 4.x/5.x and vCenter environments.

Education

  • Bachelor of Technology in Electronics & Communication Engineering

    JNTU Hyderabad (2005)

Certifications

  • AWS Certified Solutions Architect - Associate (AWS SAA)

  • AWS Certified Solutions Architect - Professional (AWS SAP)

  • ITIL Foundation Certificate in IT Service Management (ITIL)

AI Interview Questions & Answers

Could you walk me through your background and experience?

Yes, of course. I have around 17 years of experience in infrastructure. I started my career as an administrator, moved into virtualization with VMware, and after a couple of years, as client requirements and the technology landscape shifted, I moved into cloud infrastructure. For the past 7 years I have been managing and architecting public cloud environments. My current organization is a product-based fintech company, and I manage its complete cloud infrastructure, which is multi-cloud. Recently we brought in a new company, and for it we built a complete greenfield environment end to end, applying the experience and lessons we had accumulated from our existing infrastructure. I work on both AWS and Azure. We also had a requirement for multi-cloud DR (disaster recovery): I completed the research, prepared a multi-cloud DR assessment, and we are moving in that direction. Of our two companies, one runs in AWS and the other in Azure, so the plan is that each company's DR will sit in the other cloud, giving us a true cross-cloud DR. We also run a near-DR for smaller applications that need quick recovery and high availability; in AWS, or any public cloud, high availability comes from using multiple availability zones, so we built the near-DR for a few applications we identified as critical in nature, and for compliance purposes we created the multi-cloud DR. On skills: I have worked with AWS for almost 7 years and Azure for 3 to 4 years, and I have initiated and delivered projects such as a self-healing mechanism for an application, a complete monitoring setup for the company, and DR designs for both companies shaped by budget and compliance requirements.

What strategy would you apply to ensure disaster recovery for critical AWS applications?

There are multiple strategies depending on business need and budget. One is a near-DR. In India, for example, we have the Mumbai region, and a new region in Hyderabad has recently come up. Earlier there was no second in-country region, and country regulations require that our data not reside outside it, so we had no near-DR for our infrastructure; instead, for the most critical workloads such as the core application and database, we built a multi-cloud DR. Now that the Hyderabad region is fully functional, we plan a near-DR for the core application there. To get there, we listed all the critical applications and the AWS services they use. For the database, we run RDS on PostgreSQL and made it multi-region. There are two options. You can keep a read replica in another region and use it as the DR instance, with the primary in one region and the replica in the other, but then every read call crosses regions, which adds latency. So instead we kept the primary and secondary in one region, enabled multi-region replication, and ran a standby instance in the other region. In a DR event, we promote the standby in the other region to primary, and the original pair become secondary and standby. For the applications, ours is a completely loosely coupled architecture: around 80% of the workload runs as microservices and the rest under auto scaling. We copy the images to the other region and keep the setup ready; since everything lives in Bitbucket with CodePipeline configured, in a DR event we simply scale up servers in the other region and deploy. The other strategy is multi-cloud DR: to achieve it, we worked on removing dependencies on the provider's native services, moving them to third-party tools, and on that basis built the multi-cloud setup.
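
The cross-region failover step described above, promoting the standby replica to primary, can be sketched with boto3. This is a minimal sketch, not a production runbook: the instance identifier is a made-up example, error handling is omitted, and the injectable client exists only so the sketch can be exercised offline. `promote_read_replica` is the real RDS API for this step.

```python
def promote_dr_replica(db_instance_id: str, rds_client=None):
    """Promote a cross-region RDS read replica to a standalone primary."""
    if rds_client is None:
        import boto3  # deferred so tests can inject a stub client
        rds_client = boto3.client("rds")
    # After promotion, application endpoints still need to be repointed
    # (e.g. via a Route 53 record change), which is out of scope here.
    return rds_client.promote_read_replica(DBInstanceIdentifier=db_instance_id)
```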

How do you ensure data encryption compliance for data at rest and in transit within AWS?

For encryption in transit we terminate SSL/TLS at the load balancer level. Our servers run under auto scaling from launch templates, with the code in Bitbucket, so there are essentially no standalone, independent servers running applications. Our domain is hosted in Route 53, and through ACM we provision the domain certificate and attach it to our services; that is one level of encryption. The other is at the EBS volume level: all our volumes are encrypted, which covers data at rest. Every communication between services, and everything from the public internet, goes over SSL/TLS. That is how we achieve encryption at rest and complete transaction encryption in transit.
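
One way to verify the at-rest side of this compliance claim is to sweep the account for unencrypted EBS volumes. A minimal sketch, operating on the response shape of EC2's `describe_volumes` call (the helper name is my own):

```python
def unencrypted_volume_ids(describe_volumes_response: dict) -> list:
    """Return IDs of EBS volumes that lack at-rest encryption.

    Operates on the EC2 describe_volumes response shape, whose
    Volumes entries carry an Encrypted boolean.
    """
    return [
        vol["VolumeId"]
        for vol in describe_volumes_response.get("Volumes", [])
        if not vol.get("Encrypted", False)
    ]
```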

What strategy would you use to automate security audits within an AWS environment?

AWS has many security services for audit and compliance purposes. To name a few: VPC Flow Logs record network-level activity, so you can see traffic in and out, what requests arrive, and whether they were accepted or rejected and where they failed. CloudTrail, enabled by default, stores every API call and management activity against the account. AWS Config tracks configuration changes on each service, so every change is recorded. All of these logs can be aggregated in Security Hub, which audits them and scores you against the compliance standard you select, from AWS's own default benchmark to standards such as ISO 27001, so you can see which workloads fall short of client, workload, or company requirements. Another tool, and a must-have that is already available in AWS, is Trusted Advisor, which surfaces security loopholes and findings service by service, for example public IPs left open. With this combination of services you can identify the security findings, audit them, and take action based on them.
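
The kind of finding Trusted Advisor reports, such as ports open to the whole internet, can also be checked directly. A sketch over the response shape of EC2's `describe_security_groups` (function name and finding format are illustrative):

```python
def world_open_rules(describe_security_groups_response: dict) -> list:
    """Flag ingress rules open to 0.0.0.0/0, returned as (group_id, port).

    Walks the EC2 describe_security_groups response shape; the kind of
    finding Trusted Advisor also reports.
    """
    findings = []
    for sg in describe_security_groups_response.get("SecurityGroups", []):
        for rule in sg.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append((sg["GroupId"], rule.get("FromPort")))
    return findings
```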

How do you leverage AWS CloudTrail and AWS Config for maintaining historical records and auditing changes to AWS infrastructure?

CloudTrail is an AWS service that is enabled by default. One stream of its events can go to CloudWatch Logs log groups, and another can be configured to S3, where historical data is retained for the audit or compliance requirement, 3 months, 6 months, or a year depending on the need. To monitor the trail you can use CloudWatch queries and alarms, for example to flag security findings, investigate a breach, or find the root cause of a failure. AWS Config records the configuration changes that happen to a particular service. Take EC2 as an example: if you enable Config for it, every change is recorded, when an instance was stopped, who triggered the stop, who took a snapshot of it, which tags were added, and so on. These logs can be forwarded to S3 for storage, queried with Athena, and fed into a QuickSight dashboard integrated with Athena to build a very good security dashboard.
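
Once CloudTrail records land in S3 or come back from the API, the audit step often reduces to filtering them by API action. A toy sketch (the record shape mirrors CloudTrail's `eventName` field; the helper name is my own, and in practice this is the kind of query you would run in Athena):

```python
def filter_events(records: list, event_name: str) -> list:
    """Filter CloudTrail records down to one API action for audit review.

    Each CloudTrail record carries an eventName field (e.g. StopInstances).
    """
    return [rec for rec in records if rec.get("eventName") == event_name]
```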

What is your process for securely managing secrets and sensitive configuration?

Secrets are critical for any application and must be managed in a highly secure environment. AWS offers Secrets Manager, where you store an application's secrets and retrieve them at runtime with an API call. KMS is another service, for managing the encryption keys you integrate with AWS services. And if an audit or regulatory requirement demands it, you can use a dedicated CloudHSM: it is costlier, but it gives you dedicated hardware security modules with a very high level of security. On sensitive configuration, from my own experience: for our microservices running on EKS, the dev team had stored the application secrets in the code itself, which created a security loophole, so we removed all secrets from the code. For example, one application needed to push data to S3; previously it ran on a server using IAM credentials stored in the code, which was a security risk we identified, so we moved it to an IAM role with VPC and S3 endpoints. Similarly, for applications that need to talk to third-party services, we moved the secrets into Kubernetes secret configuration in the EKS environment. Whenever an application needs that information, it makes an API call to the secrets service, retrieves the secret, and carries the request forward. That is the setup we implemented.
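
The runtime retrieval described above, an API call to Secrets Manager instead of a secret in the code, might look like the following sketch. `get_secret_value` is the real Secrets Manager API; the module-level cache and the injectable client are my own additions for illustration and offline testing.

```python
_secret_cache: dict = {}

def get_secret(secret_id: str, client=None) -> str:
    """Fetch a secret at runtime instead of hard-coding it in the source.

    The cache avoids repeating the API call on every invocation.
    """
    if secret_id in _secret_cache:
        return _secret_cache[secret_id]
    if client is None:
        import boto3  # deferred so tests can inject a stub client
        client = boto3.client("secretsmanager")
    value = client.get_secret_value(SecretId=secret_id)["SecretString"]
    _secret_cache[secret_id] = value
    return value
```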

This IAM policy written in JSON has a mistake that prevents it from executing as intended. Can you spot and explain the error?

The Version and the Statement look fine: the Effect is Allow, the Action grants full s3:* access on the my-bucket resource and its objects, and there is a Condition with StringNotEquals on s3:prefix listing "", "home/", and "home/${aws:username}". At first I suspected the condition, but the real problem is structural: counting down, on the eighth line there is a closing bracket that does not fit the structure; the closing should come after the Condition block, not there. The content of the policy is correct, but the format, the bracket placement and indentation, is wrong.
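
The interview does not reproduce the policy text, so the following is a hypothetical reconstruction from the elements mentioned (s3:* on the bucket, a StringNotEquals condition on s3:prefix); the bucket name and ARNs are assumptions. Parsing it with `json.loads` is a quick way to prove the misplaced bracket is fixed, since a stray bracket raises a parse error:

```python
import json

# Hypothetical reconstruction of the policy under discussion; the bucket
# name and resource ARNs are assumptions, not the interview's exact text.
POLICY = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "s3:prefix": ["", "home/", "home/${aws:username}"]
        }
      }
    }
  ]
}
"""

# A misplaced bracket like the one in the original snippet would raise a
# ValueError here; a clean parse shows the structure is now well-formed.
policy = json.loads(POLICY)
```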

Given this AWS Lambda function written in Python, can you identify what error it might throw during execution?

The handler is defined as lambda_handler(event, context); it creates an S3 client with boto3.client('s3'), calls get_object with Bucket='my-bucket', and returns response['Body'].read(). What the code is trying to do is read an object stored in the S3 bucket called my-bucket, taking the object key from the incoming event, which carries a key/value pair. The error is in the Key argument: the snippet passes the key itself rather than looking up its value, so there is no value where one is needed and the call fails. It should be Key=event['key'], taking the value from the event. That is the issue, and it can be corrected.
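
A corrected version of the handler described above might look like this sketch. The bucket name and event shape are assumptions from the transcript, and the injectable client is my own addition so the function can be exercised without AWS:

```python
def lambda_handler(event, context, s3_client=None):
    """Read an object from S3, taking the object key from the event.

    The bug in the original snippet was the Key argument: it passed the
    key name without looking up its value. Key=event["key"] fixes it.
    """
    if s3_client is None:
        import boto3  # deferred so tests can inject a stub client
        s3_client = boto3.client("s3")
    response = s3_client.get_object(Bucket="my-bucket", Key=event["key"])
    # .read() returns bytes; decode so the Lambda result is JSON-serializable.
    return response["Body"].read().decode("utf-8")
```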

How could you optimize cost when scaling an application using EC2 Auto Scaling and Spot Instances?

EC2 Auto Scaling scales an application based on hardware utilization. Say an Auto Scaling group runs a minimum of 3 servers; as more requests arrive, the servers consume more CPU and memory, and I scale on those metrics, for example adding one more server when average CPU crosses 60 to 70 percent. Spot Instances are not Reserved Instances; they are spare capacity sitting idle at AWS, offered at a steep discount, often 60 to 70 percent off. In the Auto Scaling group you can configure the baseline your workload always needs, the 3 minimum servers in my example, to run On-Demand, while anything scaled above that, up to the 4 or 5 servers needed at peak hours, comes from Spot. With this you can reduce cost, but you need the application to recover from failure: a Spot Instance can be reclaimed at any time, with no guarantee it stays for 30 minutes, an hour, or two. So the application must not hold sticky sessions or any session state at the instance level. If you have that loosely coupled architecture, you can use Spot in production; that is one production use case. For development and lower environments you can freely use Spot for scaling servers, EKS and Kubernetes infrastructure, and so on, for massive cost savings.
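
The On-Demand-baseline-plus-Spot split described above maps to the MixedInstancesPolicy of an Auto Scaling group. A sketch that builds just the InstancesDistribution fragment; the key names match the boto3 `create_auto_scaling_group` parameters, while the helper function itself is illustrative:

```python
def mixed_instances_policy(on_demand_base: int, spot_share_above_base: int) -> dict:
    """Build the InstancesDistribution fragment of a MixedInstancesPolicy.

    on_demand_base servers always run On-Demand; of the capacity scaled
    above that baseline, spot_share_above_base percent comes from Spot.
    """
    return {
        "InstancesDistribution": {
            "OnDemandBaseCapacity": on_demand_base,
            "OnDemandPercentageAboveBaseCapacity": 100 - spot_share_above_base,
            # Prefer Spot pools with spare capacity to reduce interruptions.
            "SpotAllocationStrategy": "capacity-optimized",
        }
    }
```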

How would you deploy an application on AWS considering both a CI/CD pipeline and AWS security best practices?

AWS has the tools needed to configure CI/CD deployments: CodeDeploy for pushing code to servers or a microservices environment, and ECR for storing Docker images; we build a Docker image and push it to ECR. On top of that, for security, you scan the images: does the image resting in ECR carry a vulnerability that could bring down the application? Amazon Inspector can scan a server or an image in ECR, so after building the image and before deploying the code to production you run a scan; if anything is found, you stop the deployment, fix the vulnerability, and retrigger the deployment. Many services can be integrated to achieve this kind of setup: CloudWatch for monitoring irregularities, Inspector for scanning on the security side, and GuardDuty for detecting threats against the platform. With a combination of these you get a CI/CD setup with security built in.
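
The scan-then-gate step can be reduced to a small decision function over the `findingSeverityCounts` map that ECR's `describe_image_scan_findings` returns. A sketch; the function name and the severity threshold are my own choices, not AWS defaults:

```python
def should_block_deployment(finding_counts: dict,
                            blocked=("CRITICAL", "HIGH")) -> bool:
    """Decide whether an image scan should stop the pipeline.

    finding_counts has the shape of the findingSeverityCounts map in
    ECR's describe_image_scan_findings response, e.g. {"HIGH": 2, "LOW": 5}.
    """
    return any(finding_counts.get(severity, 0) > 0 for severity in blocked)
```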

In which scenarios would you choose AWS Fargate over AWS EC2 instances for running containers?

Fargate is a managed service from AWS: you hand it your container workload, and it provisions the compute and runs the service for you. An EC2 instance is a server you create and manage yourself: you deploy the code, and you handle the resources, monitoring, deployments, security scanning, patching, everything on the server; it is infrastructure you operate. Fargate, being managed, takes care of the servers, the scaling, the scanning, and all the infrastructure-related work, so any small application can easily run in a containerized environment without that burden. It is mostly used with the container orchestration services. EKS is Elastic Kubernetes Service, open-source Kubernetes managed by AWS, where the control plane is run by AWS; ECS is AWS's own container service, not open source, built as a managed service for containers and microservices. With Fargate on top of those, you do not create or manage the container hosts at all: you run your code, AWS provisions the infrastructure, and you use the service. Thank you.