Hafiz Syed

Vetted Talent

A multi-cloud certified expert with 9 years of experience across Google Cloud, AWS, Kubernetes, DevOps, and Terraform. Experienced across different verticals and industries, migrating applications and databases from on-premises environments or other clouds to AWS and GCP, building CI/CD pipelines, and helping clients adopt a DevOps culture. Refactoring, replatforming, and re-architecting infrastructure to adopt and leverage cloud capabilities. Leading teams of cloud engineers to deliver on business and client expectations.

  • Role

    Cloud Engineer

  • Years of Experience

    9 years

Skillsets

  • Debian
  • Gradle
  • Windows
  • Bitbucket
  • Groovy
  • Maven
  • Shell
  • ELK
  • Google Cloud
  • CentOS
  • Fedora
  • Helm Charts
  • Asana
  • Bicep
  • CloudFormation
  • Confluence pages
  • GitHub Actions
  • Red Hat Linux
  • OpenShift
  • Technical design documents
  • Jenkins - 2 Years
  • AWS - 6 Years
  • Kubernetes - 4 Years
  • Terraform - 3 Years
  • Azure - 1 Year
  • Python - 1 Year
  • Docker
  • Git
  • Ansible
  • Ubuntu
  • PowerShell
  • Go
  • Azure DevOps
  • Jira

Vetted For

8 Skills
  • AWS Solutions Architect (Remote) - AI Screening
  • Result: 59%
  • Skills assessed: .NET, CI/CD, AWS Services, IaC, Networking, Docker, Kubernetes, Security
  • Score: 53/90

Professional Summary

9 Years
  • Jan, 2024 - Present (1 yr 8 months)

    IT Tutor

    Aspire2 International
  • Jan, 2021 - Mar, 2023 (2 yr 2 months)

    Consultant

    Atos Syntel
  • Mar, 2020 - Dec, 2020 (9 months)

    Sr. Consultant

    Virtusa Consulting
  • Mar, 2018 - Dec, 2019 (1 yr 9 months)

    Cloud Engineer

    Searce Cosourcing
  • Jun, 2011 - Jul, 2015 (4 yr 1 month)

    Quality Engineer

    Serco

Applications & Tools Known

  • Git
  • AWS
  • Azure
  • Google Cloud Platform
  • Kubernetes
  • Docker
  • GitHub
  • GitHub Actions
  • Jenkins
  • Ansible
  • Terraform
  • Helm
  • BigQuery
  • Cloud Composer
  • Cloud SQL
  • OpenShift
  • VPC
  • Cloud Functions
  • Jira
  • Confluence
  • Bitbucket
  • ECS
  • GKE
  • RBAC
  • Windows
  • Ubuntu
  • CentOS
  • Debian
  • Maven
  • Gradle
  • Groovy
  • ELK
  • Azure DevOps
  • Asana
  • Shell
  • PowerShell
  • Python
  • CloudFormation
  • Bicep
  • Go

Work History

9 Years

IT Tutor

Aspire2 International
Jan, 2024 - Present (1 yr 8 months)
    Deliver engaging and interactive lectures, workshops, and tutorials to students. Develop and update course materials, administer assessments, and provide personalized academic support.

Consultant

Atos Syntel
Jan, 2021 - Mar, 2023 (2 yr 2 months)
    Led multi-cloud strategies, developed automation frameworks, implemented CI/CD pipelines, managed cloud infrastructures, and collaborated with development teams for automation and deployment.

Sr. Consultant

Virtusa Consulting
Mar, 2020 - Dec, 2020 (9 months)
    Assessed AWS infrastructure for vulnerabilities and cost optimization, deployed applications on OpenShift, configured VPNs, and implemented DevOps practices.

Cloud Engineer

Searce Cosourcing
Mar, 2018 - Dec, 2019 (1 yr 9 months)
    Designed CI/CD pipelines, collaborated with teams to implement DevOps practices, deployed landing zones using Terraform, and migrated applications and databases to cloud environments.

Quality Engineer

Serco
Jun, 2011 - Jul, 2015 (4 yr 1 month)
    Authenticated business information for Google Maps across various international markets, and served as Quality Analyst and Acting Team Lead, providing updates and training.

Achievements

  • Gold Award: Received for best performance and being a team player
  • Silver Award: Received twice for being a reliable backup team lead and mentor

Major Projects

4 Projects

Platform Engineer | Lloyds Banking Group

    Application Build and Deployment on Kubernetes.

DevOps Engineer | Health Corporation of America

    Teradata Migration to BigQuery.

Lead Cloud Engineer | Google PSO

    Landing Zone Design, Automation, and IaC.

Cloud Engineer | Searce

    Application and Database Migrations.

Education

  • Diploma in Computing Level-7

    NZSE (2016)
  • Bachelor of Technology in Computer Science Engineering

    JNTU (2010)

Certifications

  • AWS Certified Solutions Architect - Professional

  • Google Cloud Certified Professional Cloud Architect

AI-interview Questions & Answers

Hi. I have done my engineering in computer science and have around 9 years of experience in cloud and DevOps. I have worked with very big clients like Google PSO, Al Jazeera Media Network, Lloyds Banking Group, Health Corporation of America, and on a lot of other projects. Over my tenure I have worked primarily with AWS and Google Cloud, and a bit with Azure. I have worked on projects migrating applications from on-premises data centers to the cloud, migrating databases and a Teradata warehouse to Google Cloud BigQuery, and building CI/CD pipelines and implementing DevOps practices for data projects. I have good hands-on experience of around 6+ years in cloud and DevOps, and my overall experience is 9 years.

For the data we have in AWS, say we are storing data in an S3 bucket, we have the option of enabling encryption, and we can use customer-managed keys or AWS-managed keys to encrypt that data at rest. We can either create our own KMS keys within AWS or upload our own key material to manage that data.
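
A minimal boto3 sketch of that approach; the bucket name, object key, and KMS key alias are placeholders, not values from this profile:

    import boto3

    s3 = boto3.client("s3")

    # Enforce SSE-KMS as the default encryption for every new object in the bucket.
    s3.put_bucket_encryption(
        Bucket="my-data-bucket",
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": "alias/my-customer-managed-key",
                    },
                    "BucketKeyEnabled": True,
                }
            ]
        },
    )

    # Individual uploads can also request SSE-KMS explicitly.
    s3.put_object(
        Bucket="my-data-bucket",
        Key="reports/q1.csv",
        Body=b"sensitive data",
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/my-customer-managed-key",
    )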

What strategy would you use to monitor user activity? We can enable CloudTrail on the AWS account so that we can monitor the logs and see what activities a particular user has performed on the account.
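
A rough boto3 sketch of enabling a trail and querying recent activity for one user; the trail name, bucket, and username are placeholders:

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Create a trail that records account activity across all regions and
    # delivers the log files to an existing S3 bucket.
    cloudtrail.create_trail(
        Name="account-activity-trail",
        S3BucketName="my-cloudtrail-logs-bucket",
        IsMultiRegionTrail=True,
    )
    cloudtrail.start_logging(Name="account-activity-trail")

    # Look up recent management events performed by a specific IAM user.
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "alice"}],
        MaxResults=20,
    )
    for event in events["Events"]:
        print(event["EventName"], event["EventTime"])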

How would you securely manage secrets and sensitive information? We can use something called Secrets Manager: say we have a database and want to store the username, password, or other sensitive credentials; we can keep them in Secrets Manager and use API calls to read that information without exposing the actual content.
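
A short boto3 sketch of that pattern; the secret name and values are placeholders for illustration only:

    import json
    import boto3

    secrets = boto3.client("secretsmanager")

    # Store database credentials once, outside of code and config files.
    secrets.create_secret(
        Name="prod/orders-db",
        SecretString=json.dumps({"username": "app_user", "password": "example-only"}),
    )

    # At runtime the application reads the secret through the API.
    response = secrets.get_secret_value(SecretId="prod/orders-db")
    credentials = json.loads(response["SecretString"])
    db_user, db_password = credentials["username"], credentials["password"]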

I haven't worked much on AWS Config, but whatever CloudTrail logs are generated can be stored in an S3 bucket, and based on that we can analyze the log data using the different analytics tools available.
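
One such analytics option is Athena over the S3-stored CloudTrail logs; a sketch assuming a CloudTrail table named cloudtrail_logs already exists and with a placeholder results bucket:

    import boto3

    athena = boto3.client("athena")

    # Ask Athena which IAM identities called DeleteBucket recently.
    query = """
        SELECT useridentity.arn, eventtime, awsregion
        FROM cloudtrail_logs
        WHERE eventname = 'DeleteBucket'
        ORDER BY eventtime DESC
        LIMIT 50
    """
    result = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "security_audit"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
    )
    print("Query started:", result["QueryExecutionId"])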

What design would you suggest to build a fault-tolerant connection between an on-premises data center and AWS? When we set up a VPN, we can set up a site-to-site VPN, which has two different tunnels working as an active/passive pair, to ensure that even if one of the tunnels goes down there is still connectivity to the on-premises servers. And if we are dealing with, say, a database server, we can set it up in a primary/replica configuration to ensure it is highly available, so in case of any event we can switch either server to primary.
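
A rough boto3 sketch of the site-to-site VPN piece, assuming the customer gateway and virtual private gateway already exist (the IDs are placeholders); AWS provisions the two tunnel endpoints automatically:

    import boto3

    ec2 = boto3.client("ec2")

    # Create the site-to-site VPN connection; AWS returns configuration for two
    # tunnels, which the on-premises router brings up as an active/standby pair.
    vpn = ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId="cgw-0123456789abcdef0",   # placeholder ID
        VpnGatewayId="vgw-0123456789abcdef0",        # placeholder ID
        Options={"StaticRoutesOnly": False},         # dynamic routing via BGP
    )
    print("VPN connection:", vpn["VpnConnection"]["VpnConnectionId"])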

What change would you recommend to ensure that an EC2 instance running a daily workload is not unintentionally terminated? We need to enable termination protection, which will prevent the resource from accidental deletion.
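
A one-call boto3 sketch of enabling that flag; the instance ID is a placeholder:

    import boto3

    ec2 = boto3.client("ec2")

    # Turn on termination protection so an accidental TerminateInstances call
    # is rejected until the flag is cleared again.
    ec2.modify_instance_attribute(
        InstanceId="i-0123456789abcdef0",
        DisableApiTermination={"Value": True},
    )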

Given the Lambda function snippet written in Python, can you identify any errors it might throw during execution, and explain why? The handler receives an event and a context, and we would probably get an error at the response = s3_client.get_object(...) call: I'm not sure the S3 client has the permissions to read that bucket. To call get_object we need certain permissions, so we would probably need credentials such as an access key ID and secret access key (or an IAM role) in order to access that particular bucket and get the data.
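
The original snippet is not shown here; the following is a hedged reconstruction of that kind of handler with the likely failure modes called out, using placeholder bucket and key names:

    import boto3
    from botocore.exceptions import ClientError

    s3_client = boto3.client("s3")

    def lambda_handler(event, context):
        try:
            response = s3_client.get_object(Bucket="my-input-bucket", Key="data.json")
            return response["Body"].read().decode("utf-8")
        except ClientError as err:
            code = err.response["Error"]["Code"]
            # AccessDenied: the execution role lacks s3:GetObject on the bucket/key.
            # NoSuchKey / NoSuchBucket: the object or bucket name is wrong.
            print(f"get_object failed with {code}")
            raise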

How would you optimize cost when scaling an application using EC2 Auto Scaling and Spot Instances? When you create an Auto Scaling group for EC2, there are policies we can set up saying that if the load on the application servers goes above 80% or 90%, it should scale out more VMs, and whenever there is a drop in traffic or load on the application it should scale back in, reducing the VMs to a smaller number, maybe keeping only a minimum of 2 or 3.
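
A sketch of a target-tracking policy on an existing Auto Scaling group; the group name and size limits are placeholders:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep average CPU near 80%: scale out under load, scale back in when idle.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="app-asg",
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 80.0,
        },
    )

    # Floor and ceiling for the fleet: never fewer than 2, never more than 10 VMs.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="app-asg",
        MinSize=2,
        MaxSize=10,
    )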

In which scenario would you choose AWS Fargate over Amazon EC2 for running containers? AWS Fargate is a container management service. Say I don't want to bother with building a whole cluster of Docker servers to deploy the containers, and I don't want the burden of maintaining and administering a cluster; in that case I would simply go ahead and use AWS Fargate instead of EC2 instances, where we can just deploy the Docker image and pass some parameters, like port numbers, to access that particular container.
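
A sketch of launching a container on Fargate, assuming the cluster, task definition, subnet, and security group already exist (all names and IDs are placeholders):

    import boto3

    ecs = boto3.client("ecs")

    # Run the task on Fargate: no EC2 instances to manage, just the task
    # definition, networking, and the container settings (e.g. exposed ports)
    # declared in that definition.
    ecs.run_task(
        cluster="demo-cluster",
        taskDefinition="web-app:1",
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "ENABLED",
            }
        },
    )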

AWS EKS offers a lot of flexibility for maintaining security. One thing is that we can create a VPC-native EKS cluster so that we can control external access to that cluster and isolate services, and we can also install Istio as a sidecar on the Kubernetes cluster so that traffic reaches the services behind it through the Istio service mesh.
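
A sketch of the cluster-level part of that setup, assuming the IAM role, subnets, and security group already exist (placeholder ARNs/IDs); the Istio sidecar injection itself is configured inside the cluster rather than through this API:

    import boto3

    eks = boto3.client("eks")

    # Create a cluster whose API endpoint is reachable only from inside the VPC,
    # so external access must come through controlled network paths.
    eks.create_cluster(
        name="private-cluster",
        roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",  # placeholder
        resourcesVpcConfig={
            "subnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
            "securityGroupIds": ["sg-0123456789abcdef0"],
            "endpointPublicAccess": False,
            "endpointPrivateAccess": True,
        },
    )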