
Senior Associate, J P Morgan Chase & Co.
Assistant Consultant, Tata Consultancy Services
Senior Implementation Engineer, ThoughtWorks
Associate Consultant, Virtusa (Polaris Consulting & Services Limited)
Development Support Professional, Kofax/Hyland
Senior Software Development Engineer, Euclid Innovations
Software Developer, Tech Mahindra
AWS
Jenkins
Ansible
Python
Bash
Git
Docker
ECS
ECR
Kubernetes
EKS
SQL
NoSQL
Linux
Splunk
Grafana
Terraform
CloudFormation

AWS
SQL
NoSQL
Linux
DataDog
Terraform
Hello. In total I have 13 years of experience in IT, and for the last 8 years I have been working with AWS and infrastructure as code. Initially I was in a Linux environment, working for clients through Tech Mahindra and Polaris. After that I moved to Kofax and then ThoughtWorks, maintaining infrastructure and implementing solutions for customers who use the product we provide as a service. Whenever new instances, new VMs, or new servers are required, we manage the infrastructure and provide the details to them. For anything related to applications hosted in AWS, whether application issues, databases in RDS or DynamoDB, or access problems, we are the first point of contact and we work on resolving those issues. Besides that, when decommission requests come in for old servers or projects being retired, we clean up those systems. Everything happens in an agile development process; we work sprint-wise. We also write new Terraform scripts as well as CloudFormation templates to automate whatever infrastructure the client requires. In one of my last organizations I wrote Terraform scripts for the ONIX client, a major client in Bitcoin mining, and we deployed the instances through JUULES, which is essentially Jenkins. So yeah.
So in AWS, when data is in transit we can use SSL/TLS with X.509 certificates to encrypt it. For data at rest there are two ways: customer managed or AWS managed. With customer-managed encryption the customer has full control over encryption and decryption and only uses AWS services for storage. With KMS, key management is handled by AWS, and AWS takes care of automatic rotation of the KMS keys every once in a while, and we can configure that. At rest, for example, if we are storing data in S3, encryption is enabled by default, but we can also enable encryption with keys managed through KMS. Another option is SSE-C, where the customer provides and manages the encryption keys and AWS has nothing to do with them.
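As a rough illustration, a minimal Terraform sketch of SSE-KMS default encryption on an S3 bucket with automatic key rotation might look like this (the bucket name and key description are placeholders, not from any real setup; enforcing TLS in transit would additionally need a bucket policy denying non-HTTPS requests):

# Customer-managed KMS key with automatic rotation enabled
resource "aws_kms_key" "data" {
  description         = "Key for encrypting objects at rest"
  enable_key_rotation = true
}

resource "aws_s3_bucket" "data" {
  bucket = "example-data-bucket"   # placeholder name
}

# Default encryption: new objects are encrypted with the KMS key (SSE-KMS)
resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.data.arn
    }
  }
}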
VPC peering is communication between two VPCs. Suppose we have VPC A and VPC B; we can establish connectivity between them by creating a VPC peering connection. For security there are two layers. One is security groups, which are stateful and only allow the traffic we define. On top of that, to provide additional security at the subnet level, we can use network access control lists (NACLs), which are stateless, so allow and deny rules for both inbound and outbound traffic have to be defined explicitly. So in VPC peering, communication between two different VPCs can be secured using security groups plus NACLs at the subnet level.
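A minimal Terraform sketch of those pieces, assuming two existing VPCs referenced as aws_vpc.a and aws_vpc.b and an existing NACL aws_network_acl.app (all names, ports, and CIDRs are illustrative assumptions):

# Peering connection between VPC A and VPC B (same account and region here)
resource "aws_vpc_peering_connection" "a_to_b" {
  vpc_id      = aws_vpc.a.id
  peer_vpc_id = aws_vpc.b.id
  auto_accept = true
}

# Route from VPC A to VPC B over the peering connection
resource "aws_route" "a_to_b" {
  route_table_id            = aws_vpc.a.main_route_table_id
  destination_cidr_block    = aws_vpc.b.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}

# Stateful security group: only allow HTTPS from the peer VPC
resource "aws_security_group" "from_peer" {
  vpc_id = aws_vpc.a.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.b.cidr_block]
  }
}

# Stateless NACL rule at the subnet level; return traffic needs its own rule
resource "aws_network_acl_rule" "allow_peer_in" {
  network_acl_id = aws_network_acl.app.id
  rule_number    = 100
  egress         = false
  protocol       = "tcp"
  rule_action    = "allow"
  cidr_block     = aws_vpc.b.cidr_block
  from_port      = 443
  to_port        = 443
}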
So in CodePipeline we can use CodeDeploy, which handles deploying to the instances, and there are a number of ways to deploy without any downtime, for example a canary deployment or rolling in batches. With a canary deployment, another set of instances is created and part of the traffic is redirected to it; once everything checks out, the full traffic is redirected to the newly created instances and the older ones are terminated. With a rolling deployment, a batch of instances gets the latest code, and once that batch is healthy the next batch is replaced, and so on. If we deploy all at once, there will be some downtime, so we avoid doing that.
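A rough Terraform sketch of a CodeDeploy deployment group that rolls through a fleet in batches rather than all at once (the app name, Auto Scaling group, and IAM role are placeholders assumed to exist elsewhere):

resource "aws_codedeploy_app" "web" {
  name = "web-app"   # placeholder
}

resource "aws_codedeploy_deployment_group" "web" {
  app_name              = aws_codedeploy_app.web.name
  deployment_group_name = "web-prod"
  service_role_arn      = aws_iam_role.codedeploy.arn   # assumed to exist

  # Roll through the fleet half at a time so capacity always stays in service
  deployment_config_name = "CodeDeployDefault.HalfAtATime"

  autoscaling_groups = [aws_autoscaling_group.web.name]  # assumed to exist

  auto_rollback_configuration {
    enabled = true
    events  = ["DEPLOYMENT_FAILURE"]
  }
}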
What CloudTrail does is store information about users accessing the APIs. It is best for figuring out which user accessed which API, and for auditing whether anyone is trying to access something they are not supposed to. That can all be done using CloudTrail. As for AWS Config, right now I don't recall what AWS Config does.
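For reference, a minimal Terraform sketch of a trail that records API activity to an S3 bucket (the trail name is a placeholder, and the log bucket plus the bucket policy that lets CloudTrail write to it are assumed to exist):

resource "aws_cloudtrail" "audit" {
  name                          = "account-audit-trail"
  s3_bucket_name                = aws_s3_bucket.trail_logs.id   # assumed to exist
  is_multi_region_trail         = true
  include_global_service_events = true   # capture IAM and other global API calls
}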
Version control can be done in Git or Bitbucket, whichever we want to use. Reusable modules mean we don't have to rewrite a resource every time. Suppose an EC2 instance is being created with Terraform or CloudFormation; we don't need to rewrite that code every time we want another EC2 instance. Instead we can have one small module that creates it with the right set of permissions, and that module can be reused. That is what a reusable module is, and the same concept applies in CloudFormation.
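As a small illustration of what a reusable module means in practice, here is a hypothetical ./modules/ec2_instance module and two calls to it (the module path, variable names, and instance values are made up; the AMI lookup is assumed to exist inside the module):

# modules/ec2_instance/main.tf  (the reusable module)
variable "name" {
  type = string
}

variable "instance_type" {
  type    = string
  default = "t3.micro"
}

resource "aws_instance" "this" {
  ami           = data.aws_ami.amazon_linux.id   # AMI data source assumed in the module
  instance_type = var.instance_type

  tags = {
    Name = var.name
  }
}

# Root configuration: the same module reused twice instead of copy-pasting the resource
module "app_server" {
  source        = "./modules/ec2_instance"
  name          = "app-server"
  instance_type = "t3.small"
}

module "batch_worker" {
  source = "./modules/ec2_instance"
  name   = "batch-worker"
}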
In any IaC, the database, or anything stateful, should not be part of the stack itself; it should be handled externally, and only the connection endpoint should be provided to the IaC, regardless of whether it's CloudFormation or Terraform. The username and password should not be hardcoded in the template itself. There is a concept of parameters, where the username and password can be supplied as parameters when the template is applied. Better still, the password can be stored in Secrets Manager or an SSM parameter and accessed from there, and while storing it we make sure it is KMS-encrypted. So the username and password should never be hardcoded in this case.
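A hedged Terraform sketch of that idea, pulling the database password from an SSM SecureString parameter instead of hardcoding it (the parameter path, identifiers, and sizing are placeholders):

# SecureString parameter, decrypted via KMS at read time
data "aws_ssm_parameter" "db_password" {
  name            = "/prod/app/db_password"   # placeholder path
  with_decryption = true
}

resource "aws_db_instance" "app" {
  identifier          = "app-db"
  engine              = "postgres"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "app_user"
  password            = data.aws_ssm_parameter.db_password.value   # never a literal in the template
  skip_final_snapshot = true
}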
Effect: Allow, Action on S3, Resource: my-bucket, Condition: StringNotEquals on home/${aws:username}. The resource section here is my-bucket with a wildcard, which means everything inside my-bucket would be accessible. Then in the condition section the check is against home/${aws:username}; ${aws:username} resolves to the specific user who is making the request, so each user can only access what they store and cannot see other users' objects. So the effect is Allow with a StringNotEquals condition, but as written it only references home; instead it should reference my-bucket/home/${aws:username}, or alternatively the resource section itself can be scoped to home/${aws:username}.
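The intent being described is the common per-user home-folder pattern. One hedged way to render it in Terraform, scoping the resource rather than using a condition, might look like this (the bucket name and actions are placeholders, not the exact policy from the question):

resource "aws_iam_policy" "s3_home_only" {
  name = "s3-home-folder-only"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "AllowOwnHomePrefixOnly"
        Effect = "Allow"
        Action = ["s3:GetObject", "s3:PutObject"]
        # Each user may only touch objects under home/<their own username>/
        # ($$ escapes Terraform interpolation so the literal ${aws:username} reaches IAM)
        Resource = "arn:aws:s3:::my-bucket/home/$${aws:username}/*"
      }
    ]
  })
}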
Well, for a monolithic application, the approach would be to first sort out connectivity, whether we want to use SSO login or SAML 2.0; we need to figure that out first. Once we have that, the monolithic application really requires major changes: we should move to microservices instead of putting everything on a single EC2 instance, or a fleet of EC2 instances behind a load balancer across AZs. For that we can utilize Kubernetes and Docker containerization, which will definitely require major development changes. If we want to use the monolithic application directly, then for the static content the application serves we can redirect users to CloudFront and serve it straight from S3; everything else goes through API Gateway, then the load balancer, and underneath that the EC2 instances, with each EC2 instance hosting an individual standalone component. So to get to a microservices approach we need to make the development changes so that each individual component can be deployed into a pod, meaning on EKS. So yeah.
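For the static-content path mentioned above, a rough Terraform sketch of a CloudFront distribution in front of an S3 origin (the bucket is assumed to exist; in practice an origin access control and a bucket policy would also be needed):

resource "aws_cloudfront_distribution" "static" {
  enabled             = true
  default_root_object = "index.html"

  origin {
    domain_name = aws_s3_bucket.static.bucket_regional_domain_name   # assumed bucket
    origin_id   = "s3-static"
  }

  default_cache_behavior {
    target_origin_id       = "s3-static"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}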
Well, when I'm unsure how many EC2 instances will be created, because in ECS the containers are created based on the task definition, and we don't want to manage the underlying resources or we don't know the load and how many containers will end up running, in that case it's better to hand it over to AWS with Fargate. AWS will take care of all the underlying resources; we don't need to worry about how many EC2 instances or how much capacity we need. AWS handles the scaling of the underlying capacity: if more is required, Fargate provides more, and if the request volume drops, it scales back down. Everything happens in the background; we don't manage the resources, AWS takes care of it.
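A minimal Terraform sketch of handing the capacity question to Fargate (the container image, subnets, security group, and names are placeholders assumed to exist; an execution role would normally be attached as well):

resource "aws_ecs_cluster" "app" {
  name = "app-cluster"
}

resource "aws_ecs_task_definition" "web" {
  family                   = "web"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  container_definitions = jsonencode([
    {
      name         = "web"
      image        = "nginx:latest"   # placeholder image
      essential    = true
      portMappings = [{ containerPort = 80 }]
    }
  ])
}

# No EC2 instances to manage: AWS provisions the underlying capacity
resource "aws_ecs_service" "web" {
  name            = "web"
  cluster         = aws_ecs_cluster.app.id
  task_definition = aws_ecs_task_definition.web.arn
  launch_type     = "FARGATE"
  desired_count   = 2

  network_configuration {
    subnets         = var.private_subnet_ids      # assumed variable
    security_groups = [aws_security_group.web.id] # assumed to exist
  }
}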
Well, for Lambda we can send the Lambda logs to CloudWatch, and CloudWatch is the best place to see the log information. There is another way as well: we can use Kinesis Data Firehose to send the data to third-party observability tools like Splunk or Datadog. We also need to see whether the Lambda function is invoked synchronously or asynchronously. If it's synchronous, we know right away when we are not getting a 200 or a success message. With asynchronous invocation, for example something writing to an S3 bucket or sending to an SQS queue, we get a response right away but the work still happens underneath; whenever an issue happens, the event is retried with exponential backoff and eventually sent to the dead-letter queue. If things are still not working and everything has failed, we need to look at CloudWatch and the CloudWatch logs, and we can test the Lambda, for example invoking it from another Lambda function, to check whether everything is working or not. Lambda is meant for short-running work; we don't write a big, complex program in it. Most tasks that can complete within 15 minutes should be handled with Lambda, and the rest should stay in the main application itself. So yeah.
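A hedged Terraform sketch of the failure-handling side: an asynchronously invoked Lambda with an SQS dead-letter queue and an explicit CloudWatch log group (the function name, artifact path, runtime, and execution role are placeholders):

resource "aws_sqs_queue" "lambda_dlq" {
  name = "orders-lambda-dlq"   # placeholder
}

resource "aws_lambda_function" "orders" {
  function_name = "orders-handler"
  role          = aws_iam_role.lambda_exec.arn   # assumed to exist
  runtime       = "python3.12"
  handler       = "app.handler"
  filename      = "build/orders.zip"             # placeholder build artifact
  timeout       = 60                             # well under the 15-minute hard limit

  # Events that still fail after the async retries land here for inspection
  dead_letter_config {
    target_arn = aws_sqs_queue.lambda_dlq.arn
  }
}

# Lambda writes its logs to this group; keep them around for troubleshooting
resource "aws_cloudwatch_log_group" "orders" {
  name              = "/aws/lambda/${aws_lambda_function.orders.function_name}"
  retention_in_days = 30
}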