Vetted Talent

Bhavesh Patel


I am a seasoned Staff Engineer with a background in leading complex projects, optimizing infrastructure, designing robust architecture, driving innovation, and fostering cross-functional collaboration. With expertise in Go, Java, Python, and cloud services, I bring 11 years of proven success in ensuring project excellence.

At Funding Societies, I orchestrated a project ensuring compliance and streamlined multi-region deployment. My role involved driving AI initiatives and optimizing platform services.

During my tenure at Jeavio, I specialized in next-gen features and migration projects, providing expert guidance and defining key metrics.

At PubMatic, I excelled as a Product Owner, developing high-capacity data processing platforms and real-time reporting. My work also included QPS programs and efficient data analytics integration.

At SAP Ariba, I led migration projects and developed performance-enhancing tools, leveraging my expertise in SAP HANA.

While at Ola Cabs, I led backend system development, enhanced system performance, and facilitated seamless integration.

My experience at Ubona Technologies involved leading the development of Cloud Telephony solutions and enhancing customer engagement.

I have a strong foundation in technology, a knack for problem-solving, and a passion for innovation. I look forward to contributing my skills and experiences to your team.

  • Role

    Backend & Asterisk Engineer

  • Years of Experience

    12 years


Skillsets

  • Containerization - 8 Years
  • LLMs - 3 Years
  • FastAPI - 7 Years
  • Ruby on Rails - 8 Years
  • API development - 11 Years
  • Relational Database - 12 Years
  • Artificial Intelligence - 3 Years
  • API - 11 Years
  • DynamoDB - 5 Years
  • Docker - 9 Years
  • SQL - 12 Years
  • MySQL - 12 Years
  • Neo4j - 10 Years
  • RESTful APIs - 12 Years
  • Redis - 10 Years
  • PostgreSQL - 9 Years
  • Backend
  • Lambda - 9 Years
  • Terraform - 8 Years
  • Node.js - 9 Years
  • JavaScript - 9 Years
  • Kafka - 8 Years
  • Kubernetes - 6 Years
  • Git - 12 Years
  • Jenkins - 13 Years
  • Python - 8 Years
  • AWS - 8 Years
  • Quality Assurance - 12 Years
  • Java - 12 Years
  • Architecture - 8 Years
  • Data Engineering

Vetted For

14 Skills
  • Principal Software Engineer - AI Screening
  • Skills assessed: Infrastructure as Code (IaC), k8s, Loki, Prometheus, Team management, ArgoCD, BuildKite, Grafana, Terraform, AWS, Go Lang, Kafka, Kubernetes, Leadership
  • Score: 52/100

Professional Summary

12 Years
  • Jan, 2024 - Present (2 yr)

    Staff Software Engineer

    CLOUDERA
  • Mar, 2022 - Nov, 2023 (1 yr 8 months)

    Staff Software Engineer

    FUNDING SOCIETIES
  • Sep, 2020 - Feb, 2022 (1 yr 5 months)

    Senior Principal Engineer

    JEAVIO
  • Jul, 2018 - Sep, 2020 (2 yr 2 months)

    Principal Engineer

    PUBMATIC
  • Aug, 2016 - Jul, 2018 (1 yr 11 months)

    Developer

    SAP ARIBA
  • Mar, 2015 - Aug, 2016 (1 yr 5 months)

    Software Development Engineer - II

    OLA CABS
  • Aug, 2014 - Feb, 2015 (6 months)

    Software Analyst

    UBONA TECHNOLOGIES
  • Dec, 2012 - Jul, 2014 (1 yr 7 months)

    Software Engineer

    PERSISTENT SYSTEMS

Applications & Tools Known

  • Docker
  • Kubernetes
  • Rancher
  • Helm
  • PostgreSQL
  • Prometheus
  • Grafana
  • gRPC
  • Spring Boot
  • Node.js
  • Lua
  • OpenAI
  • SQS
  • DynamoDB
  • Lambda
  • ECS
  • Snowflake
  • Airflow
  • Redis
  • Consul
  • Terraform
  • CircleCI
  • Kafka
  • FastAPI
  • MySQL
  • Memcached
  • HDFS
  • Hive
  • Ansible
  • Maven
  • Jenkins
  • JMeter
  • Selenium
  • Tomcat
  • RabbitMQ
  • Git
  • EC2
  • RDS
  • Asterisk
  • SIP
  • Objective-C

Work History

12 Years

Staff Software Engineer

CLOUDERA
Jan, 2024 - Present (2 yr)
    Implemented various features, including integration of a Docker registry, smooth upgrade of cluster functionalities, and precheck validation for CDP Private Cloud Data Services. Collaborated with cross-functional teams to implement features for Hybrid Cloud Data Services. Provided on-call support for customer issues, improving customer experience by monitoring metrics and reducing the average resolution time from 11 days to 5 days.

Staff Software Engineer

FUNDING SOCIETIES
Mar, 2022 - Nov, 2023 (1 yr 8 months)
    Led the Indonesia Data Localization project, ensuring compliance through multi-region services and infrastructure for easy tech stack deployment. Initiated AI projects leveraging generative AI across the organization, with strong support from the Data Science team. Managed key initiatives for the Kong gateway, upgrading to version 2.8.X and transitioning to Fargate for improved maintenance, cost efficiency, and deployment.

Senior Principal Engineer

JEAVIO
Sep, 2020 - Feb, 2022 (1 yr 5 months)
    Engineering Delivery Owner for SevOne Solution product features, specializing in Wi-Fi, SDN, and SDWAN collectors. Led a critical migration project, transitioning collectors from Python to GoLang and streamlining onboarding for new collectors.

Principal Engineer

PUBMATIC
Jul, 2018 - Sep, 2020 (2 yr 2 months)
    Led Data Ingestion and Real-time/Ad-hoc Reporting as Product Owner and Scrum Master. Developed a high-capacity data processing platform (100+ TB/day) using Apache Spark, Kafka, Oozie, Avro, and Parquet, increasing capacity from 60 billion to 110+ billion requests/day.

Developer

SAP ARIBA
Aug, 2016 - Jul, 2018 (1 yr 11 months)
    Spearheaded the Ariba on HANA project, overseeing the migration of Ariba applications (Sourcing, Buyer, and Ariba Network) to the SAP HANA database from Oracle, leading the project from inception to successful execution in production.

Software Development Engineer - II

OLA CABS
Mar, 2015 - Aug, 2016 (1 yr 5 months)
    Led development as a Lead Developer in the KP/Auto team, driving innovation and enhancing system efficiency. Spearheaded the creation of scalable components, including the Demand Engine and Demand Dispatcher, improving system performance and response times.

Software Analyst

UBONA TECHNOLOGIES
Aug, 2014 - Feb, 2015 (6 months)
    Led the Ubona Enterprise team as Lead Developer, focusing on creating Ubona Cloud Telephony solutions for enhanced customer interaction automation.

Software Engineer

PERSISTENT SYSTEMS
Dec, 2012 - Jul, 2014 (1 yr 7 months)
    Played a key role in the txtWeb project, a global SMS-based platform for mobile internet access. Contributed to the Cycle30 project, a cloud-based billing service for communication service operators, utilities, and machine-to-machine services.

Education

  • Bachelor of Technology (B.Tech.), Information Technology

    Nirma Institute of Technology (2012)
  • Diploma Engineering, Information Technology

    B and B Institute of Technology, V V Nagar (2009)

AI-interview Questions & Answers

I have around 11 years of experience working with backend technologies, primarily Java and Python. I am currently working as a Staff Engineer at Funding Societies, a Singapore-based fintech startup that provides lending products, where I help the lending teams with any technical problems they are facing. Throughout my career I have worked at a mix of startups and MNCs, mostly focused on backend and big data technologies, and I am not tied to one language: I have worked with Java, Ruby on Rails, Scala, Golang, Python, and Node.js. That's about me.

For a deployment across multiple regions, what would you consider? I would set up a primary and a secondary cluster, with the two clusters in different data centers in altogether different regions. If something goes wrong in one region, failover to the other region happens automatically, which provides better disaster recovery. That is one of the main considerations I would think about.

How would you monitor Kafka consumer lag? This is a common problem: whenever producers are producing a lot of messages compared to what consumers can process, the lag gradually increases. There are a couple of approaches. First, there are open-source monitoring tools for Kafka that let you track how far the consumers are lagging behind the producers. Second, you can publish a dummy (heartbeat) message and, on the consumer side, check the time difference between when a message was produced and when it was consumed. If something goes wrong, you can raise alerts through channels like a phone call, email, or a Slack bot. As for what action to take: first I would scale the consumer group by increasing the number of consumers; I would also check whether the number of partitions suffices and increase it if needed, so consumers can process in parallel; and lastly I would consider scaling the Kafka cluster itself.
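The lag check described above can be sketched in Python. This is a minimal, broker-free sketch: in a real setup the end offsets and committed offsets would come from the Kafka consumer/admin API; here they are passed in as plain dicts keyed by (topic, partition), and the alert threshold is an assumed example value.

```python
def consumer_lag(end_offsets: dict, committed: dict) -> dict:
    """Per-partition lag = latest produced offset - last offset committed
    by the consumer group. Growing lag means consumers are falling behind."""
    return {tp: end_offsets[tp] - committed.get(tp, 0) for tp in end_offsets}

def total_lag(end_offsets: dict, committed: dict) -> int:
    """Sum of lag across all partitions, useful as an alert metric."""
    return sum(consumer_lag(end_offsets, committed).values())

# Example: producers are at offsets 120/80, the group has committed 100/50.
end = {("orders", 0): 120, ("orders", 1): 80}
done = {("orders", 0): 100, ("orders", 1): 50}
print(consumer_lag(end, done))  # {('orders', 0): 20, ('orders', 1): 30}

# Alert when total lag crosses a threshold (phone call / email / Slack bot).
if total_lag(end, done) > 40:
    print("ALERT: consumer group lagging by", total_lag(end, done))
```

If the lag keeps growing, the remediations mentioned above apply: more consumers in the group (up to the partition count), more partitions, or more brokers.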

You are given a Kubernetes deployment YAML in which the replicas field is not defined. What happens when the YAML is deployed? If replicas is not declared, it defaults to 1, so the deployment automatically creates only one replica of that instance.
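The defaulting behaviour can be illustrated with a small sketch over a parsed manifest dict (the actual defaulting is performed by the Kubernetes API server, not client code):

```python
def effective_replicas(deployment: dict) -> int:
    # The Kubernetes API server defaults spec.replicas to 1 when omitted.
    return deployment.get("spec", {}).get("replicas", 1)

# A Deployment manifest with no replicas field, and one with replicas: 3.
no_replicas = {"kind": "Deployment", "spec": {"selector": {}, "template": {}}}
with_replicas = {"kind": "Deployment", "spec": {"replicas": 3}}

print(effective_replicas(no_replicas))    # 1 -- a single pod is created
print(effective_replicas(with_replicas))  # 3
```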

For resilience across providers, I would create a multi-cloud deployment, with Kubernetes clusters across different cloud providers. That is one of the things I would take care of. The second thing with multi-cloud Kubernetes is that the challenges are mostly networking problems, so I would focus on how to set up the networks between the clusters.

How would you make Kafka highly available? One of the things I would prefer is a multi-region, or at least multi-AZ, deployment: each Kafka broker should be in a different availability zone. That is one of the things I would take care of. I would use EBS (Elastic Block Store) volumes for broker storage, and there would definitely be auto scaling, so if a broker goes down a new broker is started automatically. I would try those things, and I would definitely add monitoring and alerting.

Reviewing the Terraform code: the double quotes are escaped with backslashes, which is not required in Terraform, so the escapes can be removed. The second improvement is to add tags, at least a Name tag, so you can easily identify in the AWS console which instance was created by which Terraform configuration. Apart from that, the provider has to be set up: in this case it is AWS, so there should be a provider block with the region. I think that's it for this one.

os.system is a function, so the command has to be passed as an argument inside the round brackets of the call. There is definitely an error because the command is separated from os.system by a space; the fix on line number 3 is to put the command inside the parentheses of os.system(...).
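The fix described above, sketched with a simple placeholder command (the original question's code is not shown, so the command here is an assumption):

```python
import os

# Broken (SyntaxError): os.system is a function, not a statement keyword,
# so a space-separated form like the following does not parse:
#     os.system "echo hello"
# Fixed: the command string goes inside the parentheses of the call.
exit_code = os.system("echo hello")  # returns the command's exit status
print(exit_code)  # 0 on success
```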

Idempotency means the configuration can be applied multiple times and the result will be the same. Tools like Terraform and AWS CloudFormation are declarative, which helps with idempotency. You should keep the code under version control so you can keep track of changes, and the configuration should be parameterized with clean dependency management; all of that helps maintain idempotency.
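The declarative, converge-to-desired-state idea behind that idempotency can be sketched in a few lines (a hypothetical in-memory resource model, not Terraform itself):

```python
def ensure_instance(state: dict, name: str, desired: dict) -> dict:
    """Converge `state` so that resource `name` matches `desired`.
    Applying this once or many times yields the same final state."""
    if state.get(name) != desired:
        return {**state, name: desired}  # create or update the resource
    return state                         # already converged: no-op

s1 = ensure_instance({}, "web-1", {"size": "t3.micro"})          # creates it
s2 = ensure_instance(s1, "web-1", {"size": "t3.micro"})          # re-apply
print(s1 == s2)  # True -- the second apply changes nothing
```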

For managing Terraform across multiple accounts: first, you can follow your organization's structure, separating each account into its own directory, creating common modules, and parameterizing them with .tfvars files so they are easy to manage. Second, store the Terraform state remotely, for example in S3 with DynamoDB for state locking, so you can always recover the last known state. You can also publish shared modules to a Terraform module registry, or reference them from Git, and maintain versioning for them, along with the parameterized configuration.

What approach would you use for stateful services on Kubernetes? First, you can use StatefulSets, along with Kubernetes functionality like pod anti-affinity, and provide persistent volumes for all the stateful services, so whatever state they last had is stored in the persistent volumes. You can use a load balancer, health checks, and Kubernetes autoscaling (horizontal pod autoscaling), so if one of the instances goes down a replacement is started. And you should implement the backup and restore mechanism well, so that whenever a restart happens the service does not come back up in an inconsistent state.

How do you ensure safe infrastructure changes? First, infrastructure as code, with changes going through peer review. You can use immutable infrastructure, which helps ensure the current environment does not receive in-place changes. Then there are blue-green deployments, where you maintain a blue and a green environment: you spin up the new infrastructure in the idle environment, and once it is verified, it becomes the live one. That kind of deployment strategy can be used, along with feature flags or canary releases.
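The blue-green switch described above can be sketched as a router that flips live traffic only after the idle environment passes its health check (hypothetical names and data model):

```python
def blue_green_switch(router: dict, envs: dict) -> dict:
    """Flip live traffic to the idle environment iff it is healthy;
    otherwise keep serving from the current one (safe rollback)."""
    idle = "green" if router["live"] == "blue" else "blue"
    if envs[idle]["healthy"]:
        return {**router, "live": idle}  # cut over to the new environment
    return router                        # health check failed: stay put

envs = {"blue": {"healthy": True}, "green": {"healthy": True}}
print(blue_green_switch({"live": "blue"}, envs))   # {'live': 'green'}

envs["blue"]["healthy"] = False  # the new (blue) deployment fails its check
print(blue_green_switch({"live": "green"}, envs))  # {'live': 'green'} -- no flip
```

Because the previous environment stays intact until the new one is verified, rollback is just declining (or reversing) the switch.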