Software Engineer
SanData System / RedCloud Computing Pvt. Ltd.
Jr. DevOps Engineer / System Administrator
SLK Techlabs Pvt. Ltd.
Git
Python
Docker
Kubernetes
AWS (Amazon Web Services)
Google Cloud Platform
Azure
Azure Active Directory
Terraform
Jenkins
Helm
Spinnaker
Zabbix
Terraform
Ansible
Veeam
GitHub
Rancher
AWS
GCP
OpenStack
Ubuntu
CentOS
Windows
Tomcat
Nginx
ArgoCD
Hyper-V
VMware ESXi
vSAN
Terraform
ELK Stack
Prometheus
Grafana
So here, for services that need persistent data, we use a persistent storage system: we define a StorageClass in Kubernetes, which persists the data and maintains data consistency. The basic requirement is to define the StorageClass and then set up PersistentVolumes and the PersistentVolumeClaims behind them for every service that requires persistent data.
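A minimal sketch of what that could look like, assuming a GKE cluster with the GCE persistent-disk CSI provisioner; the names `fast-ssd` and `app-data` and the sizes are placeholders, not values from the original answer:

```yaml
# Hypothetical StorageClass backed by GCE persistent disks.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
reclaimPolicy: Retain               # keep the disk even if the claim is deleted
volumeBindingMode: WaitForFirstConsumer
---
# PersistentVolumeClaim that a stateful service mounts for its data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 20Gi
```

With `WaitForFirstConsumer`, the volume is only provisioned once a pod using the claim is scheduled, so it lands in the same zone as that pod.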
So while migrating from ECS to GKE, the first thing is to ensure that the stateful application is always up. The consideration here is that if we have instances running in ECS, they cannot all be stopped at once during the migration, because that would cause sudden downtime for the application. Instead, we gradually decrease the number of compute nodes on the ECS side while increasing the number of nodes on the GKE side, so that as capacity goes down in ECS, the same pods and the same application are already coming up on the GKE side and there is no downtime during the migration.
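A rough sketch of the GKE side of such a cutover, assuming the application is packaged as a Deployment; the name `app`, the image, and the port are placeholders. The replica count would be stepped up in the same increments that the ECS desired count is stepped down, and the readiness probe keeps traffic off pods that are not yet serving:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2                          # increased gradually as ECS capacity is drained
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: gcr.io/my-project/app:1.0   # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:                    # only receive traffic once the app responds
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
```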
Container resource limits. Here we define the container's resource limits for memory and CPU. Using sidecar containers or monitoring agents, we can continuously monitor the resource usage of the application, which tells us the current consumption of the container. Based on that data, we can decide what resource limits to set for the container, so that we get optimal performance relative to cost, which can lead to cost savings as well.
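A hedged example of how those observed numbers end up in a pod spec; the image and the values are illustrative assumptions, not recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.25          # placeholder image
      resources:
        requests:                # what the scheduler reserves, based on typical observed usage
          cpu: 250m
          memory: 256Mi
        limits:                  # hard ceiling derived from observed peak usage plus headroom
          cpu: 500m
          memory: 512Mi
```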
For message processing in a RabbitMQ queue, we ensure that data is coming in on the producer side only, and that it is properly accessible on the consumer side. When multiple consumers are reading, parallelism can be set up, with several consumers working the same queue, so that latency is reduced.
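One way to get that consumer-side parallelism in Kubernetes is simply to run several consumer replicas against the same queue. This is a sketch under assumed names (`order-consumer`, the image, and the AMQP URL are placeholders); each consumer process would additionally set its own prefetch (QoS) limit in its client code:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-consumer              # placeholder name
spec:
  replicas: 3                       # three parallel consumers on the same queue
  selector:
    matchLabels:
      app: order-consumer
  template:
    metadata:
      labels:
        app: order-consumer
    spec:
      containers:
        - name: consumer
          image: registry.example.com/order-consumer:1.0    # placeholder image
          env:
            - name: AMQP_URL                                # assumed env var read by the consumer
              value: amqp://rabbitmq.default.svc.cluster.local:5672
```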
For setting up autoscaling in GCP, we collect the metrics: we monitor CPU and RAM utilization, and the load balancing and scaling policies are applied according to that utilization. CPU, RAM, and disk storage are the three basic metrics that can be obtained, and the scaling can be adjusted based on them.
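In GKE that metric-driven scaling can be expressed as a HorizontalPodAutoscaler keyed on CPU utilization (memory works the same way with a second metric entry); the target Deployment name and the thresholds below are assumptions:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app                       # placeholder Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # scale out above ~70% average CPU
```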
Here, the second task in that playbook, the engine installation, is a potential point of failure, and it can cause the entire playbook run to fail.
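Since the original playbook is not shown here, this is only a guessed-at sketch of how such an installation step could be made resilient with a `block`/`rescue`; the package name, retry values, and task names are assumptions:

```yaml
- name: Install the engine package without aborting the whole playbook
  block:
    - name: Install engine
      ansible.builtin.apt:
        name: some-engine           # placeholder package name
        state: present
        update_cache: yes
      register: install_result
      retries: 3                    # retry transient repo/network failures
      delay: 10
      until: install_result is succeeded
  rescue:
    - name: Report the failure and let the rest of the play continue
      ansible.builtin.debug:
        msg: "Engine installation failed on {{ inventory_hostname }}"
```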
So in this container, I can see that the memory is limited to 512 megabytes and the CPU is limited to 200 millicores or 2 CPUs. Once the utilization of this pod goes above those limits, it can lead to failure of the pod: a container that needs more memory than its limit is OOM-killed, and CPU usage beyond the limit is throttled, so if the container requires resources beyond these limits, the pod will tend to fail.
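Reconstructed from that description (the actual manifest is not shown, so the image is a placeholder and whether the CPU limit is 200m or 2 full cores is an assumption), the relevant part of the spec would look roughly like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app                               # placeholder name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      resources:
        limits:
          memory: 512Mi    # exceeding this gets the container OOM-killed
          cpu: "2"         # or 200m, depending on how the original limit was written
```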
For inter-service communication in Istio, the first thing to consider is that all the services need to be of the ClusterIP service type; there is no need to use NodePort or any other LoadBalancer service type while using Istio. The second thing would be to set up NGINX as the single ingress endpoint, so that all external traffic enters through only one endpoint.
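A minimal sketch of that pattern, assuming an NGINX ingress controller is installed; `app`, the ports, and `app.example.com` are placeholder names. Each service stays ClusterIP, and only the Ingress exposes a single external endpoint:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  type: ClusterIP            # internal-only; the mesh handles service-to-service traffic
  selector:
    app: app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com             # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```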