Dhairya Verma

Vetted Talent

Experienced Software Engineer with a demonstrated history of working in the software industry. Skilled in Java, AWS, and distributed systems. Strong engineering professional with a Bachelor's degree in Computer Science.

  • Role

    Golang Developer

  • Years of Experience

    6 years

  • Professional Portfolio

    View here

Skillsets

  • Python
  • TypeScript
  • Terraform
  • System Design
  • Svelte
  • SQS
  • MongoDB
  • FFmpeg
  • Golang
  • Microservices
  • AWS
  • Java
  • Amazon S3
  • Redis
  • DynamoDB
  • SNS
  • Spring
  • Kafka

Vetted For

9 Skills
  • Role: Media Engineer (Python and Golang) - Remote (AI Screening)
  • Result: 60%
  • Skills assessed: Akamai, Fastly, media streaming, Terraform, AWS, Docker, Golang, Kubernetes, Python
  • Score: 54/90

Professional Summary

6 Years
  • Aug 2024 - Present (1 yr 1 month)

    Senior Software Engineer

    Jacks Club
  • May 2024 - Jul 2024 (2 months)

    Backend Engineer

    Freelance Developer
  • Mar 2021 - May 2024 (3 yr 2 months)

    Founding Backend Engineer

    Fanclash
  • Aug 2019 - Mar 2021 (1 yr 7 months)

    Software Development Engineer

    Amazon

Applications & Tools Known

  • TypeScript
  • Golang
  • Kafka
  • Redis
  • Lambda
  • MongoDB
  • Node.js
  • CockroachDB
  • AWS
  • Java
  • AWS Glue
  • S3
  • Spring
  • JSP

Work History

6 Years

Senior Software Engineer

Jacks Club
Aug 2024 - Present (1 yr 1 month)
    Designed scalable backend solutions for processing events, integrated third-party casino APIs, and developed a Telegram chat moderation bot.

Backend Engineer

Freelance Developer
May 2024 - Jul 2024 (2 months)
    Built a 24/7 live streaming platform and implemented real-time game state management.

Founding Backend Engineer

Fanclash
Mar 2021 - May 2024 (3 yr 2 months)
    Key founding engineer contributing to user base growth, built an esports RAG chatbot, reduced CPU usage, and developed a highlight generation SaaS and real-time esports fantasy platform.

Software Development Engineer

Amazon
Aug 2019 - Mar 2021 (1 yr 7 months)
    Integrated a Java-based rule engine and implemented shadow mode workflow for API migration.

Achievements

  • Implemented a non-transactional approach and optimized performance with Redis integration
  • Architected a multi-region betting backend using CockroachDB, Kafka, and AWS, emphasizing GDPR compliance and scalability
  • Scaled data pipelines for daily jobs that generated customer reports
  • Implemented a shadow-mode workflow for the migration of the refund API, automating daily mismatch reports using AWS Kinesis Streams, AWS Glue, and S3
  • Key founding engineer at FanClash, which grew to a user base of 3 million
  • Reduced CPU usage by 10-20% with a serverless, event-driven architecture
  • Reduced concurrent slot-booking conflicts by 99% using Redis
  • Developed a GPT-based chatbot for esports knowledge
  • Engineered a real-time esports fantasy platform
  • Led development and design of app features including payments and taxation
  • Built social graphs with ArangoDB for user engagement
  • Revamped the application for B2B capabilities

Major Projects

2 Projects

Real-time esports highlight generation SaaS

    Built a SaaS to automate esports highlight generation using OBS Studio, RTMP servers, OpenCV, FFmpeg, and GPU acceleration.

RAG Chatbot for esports knowledge

    Developed a chatbot using GPT, Pinecone, Langchain, and AWS Lambda for message processing and summarizing.

Education

  • Bachelor of Technology in Computer Science and Engineering

    IIT Mandi (2019)

Interests

  • Skateboarding
  • Drama Club
AI-Interview Questions & Answers

    I am a backend developer. I started working in 2019 at Amazon, where I spent about one and a half years working mostly with Java and AWS services. After Amazon, I moved to a very early-stage fantasy gaming startup, seed-funded at the time, and joined as a founding backend engineer. We scaled the fantasy gaming app to 3 million users, with hundreds of thousands of concurrent users, and I contributed to the Series A and Series B raises that followed. After a few years, government regulations around the taxation of fantasy gaming led us to discontinue the app, and we shifted focus to AI applications. Most recently we built automatic highlight generation using OpenCV, Python, and FFmpeg with hardware acceleration. Organizers of esports streams, for games like CS:GO, spend a lot of time cutting clips and uploading them to their social media handles, so we give them a live highlight platform: highlights are generated automatically within minutes, and they were able to reduce their video posting time from about an hour to 10 minutes. Currently I am working with an Amsterdam-based startup, a crypto casino, building a live video stream of a game that users will bet on: the stream plays, and users bet on the people fighting in it. For that I am using Golang and FFmpeg for headless streaming. At my previous company, Fanclash, I used TypeScript, Golang, Python, Kafka, AWS Lambda, and many other AWS services.

    Take an example function that, say, posts something. This is really the single responsibility principle: I would break the function into multiple functions, each performing exactly one responsibility. If there is a function postComment, it should only post the comment and nothing else. There might be a flow where, when posting a comment, I also send an event to Kafka, just an event indicating that a comment was posted on a certain post. Then there should be two functions: one that posts the comment, and another that sends the event to Kafka. That is the idea: break the function into multiple functions, each with a single responsibility.
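    A minimal Go sketch of the split described above, assuming a hypothetical comment service; the names postComment, sendCommentEvent, and the EventProducer interface are illustrative, not code from any project mentioned here.

    package main

    import (
        "context"
        "fmt"
    )

    // EventProducer abstracts the event sink so the posting logic never
    // touches Kafka directly; a real implementation might wrap a Kafka client.
    type EventProducer interface {
        Publish(ctx context.Context, topic string, payload []byte) error
    }

    // postComment has one responsibility: persist the comment.
    func postComment(ctx context.Context, postID, text string) error {
        // ... write the comment to the database here ...
        fmt.Printf("comment stored on post %s: %q\n", postID, text)
        return nil
    }

    // sendCommentEvent has one responsibility: emit the "comment posted" event.
    func sendCommentEvent(ctx context.Context, p EventProducer, postID string) error {
        return p.Publish(ctx, "comment-posted", []byte(postID))
    }

    // handlePostComment composes the two single-responsibility functions.
    func handlePostComment(ctx context.Context, p EventProducer, postID, text string) error {
        if err := postComment(ctx, postID, text); err != nil {
            return err
        }
        return sendCommentEvent(ctx, p, postID)
    }

    // logProducer is a stand-in producer so the sketch runs without Kafka.
    type logProducer struct{}

    func (logProducer) Publish(ctx context.Context, topic string, payload []byte) error {
        fmt.Printf("event on %s: %s\n", topic, payload)
        return nil
    }

    func main() {
        if err := handlePostComment(context.Background(), logProducer{}, "42", "nice play"); err != nil {
            fmt.Println("error:", err)
        }
    }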

    For processing a live video stream, I think the most useful data structure is a hash map, or more likely a buffer: you store the incoming video continuously and flush the buffer as it fills. On the algorithm side I don't have a strong answer for live streams specifically. I have mostly worked with OpenCV, where I read the video frame by frame and we had certain logic defined to run on each frame.
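    A rough Go sketch of that frame-by-frame loop. The answer describes OpenCV from Python; this uses the gocv bindings instead, and the input path and per-frame logic are placeholder assumptions.

    package main

    import (
        "fmt"

        "gocv.io/x/gocv"
    )

    func main() {
        // Placeholder source; could be a file, a device ID, or a stream URL.
        capture, err := gocv.OpenVideoCapture("stream.mp4")
        if err != nil {
            panic(err)
        }
        defer capture.Close()

        frame := gocv.NewMat()
        defer frame.Close()

        // Read the stream one frame at a time and run logic on each frame,
        // e.g. detecting moments worth cutting into a highlight.
        for n := 0; capture.Read(&frame); n++ {
            if frame.Empty() {
                continue
            }
            fmt.Printf("frame %d: %dx%d\n", n, frame.Cols(), frame.Rows())
        }
    }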

    The first thing in Go: I would definitely use buffers for reading live video streams. The buffer can be a fixed size of some megabytes, and you keep reading into it and pushing the data out, reading and pushing, which definitely improves memory management. Beyond that, if subscribers are reading the live video directly from our service, I would have to manage those connections; but I would consider moving the stream to a CDN and having viewers read the video from there, which takes a lot of load off the origin servers.
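    A minimal sketch of that fixed-size read-and-push loop, assuming the incoming stream is an io.Reader and the downstream sink an io.Writer (both stand-ins); the standard library's io.CopyBuffer does essentially the same thing.

    package main

    import (
        "io"
        "log"
        "os"
    )

    // relay copies a live stream through one reusable fixed-size buffer, so
    // memory use stays constant no matter how long the stream runs.
    func relay(dst io.Writer, src io.Reader) error {
        buf := make([]byte, 4<<20) // 4 MiB scratch buffer, reused on every read
        for {
            n, err := src.Read(buf)
            if n > 0 {
                if _, werr := dst.Write(buf[:n]); werr != nil {
                    return werr
                }
            }
            if err == io.EOF {
                return nil // stream ended
            }
            if err != nil {
                return err
            }
        }
    }

    func main() {
        // Stand-in source and sink; in practice these might be an RTMP ingest
        // connection on one side and a CDN origin upload on the other.
        if err := relay(os.Stdout, os.Stdin); err != nil {
            log.Fatal(err)
        }
    }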

    CloudWatch basically shows you metrics as graphs. One thing I would use CloudWatch for is logs: print every log line that shows an error, or any debug or info message relevant to our performance. Another useful thing is definitely the graph metrics. One would be processing time: each time we receive live video, how long did our logic take to process it and move it along; that is definitely one metric. Another would be errors: how many errors we have seen in the past one, two, or five-minute window, with a metric over that. And then latency over any crucial part of the code, for example when we send the processed video stream on to someone else, or upload the video to S3. So: logs, timing and latency metrics, and error metrics.
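    A sketch of publishing one such processing-time metric with the AWS SDK for Go v2; the namespace and metric name are invented examples, not values from the project described.

    package main

    import (
        "context"
        "log"
        "time"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/cloudwatch"
        "github.com/aws/aws-sdk-go-v2/service/cloudwatch/types"
    )

    func main() {
        ctx := context.Background()
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        cw := cloudwatch.NewFromConfig(cfg)

        start := time.Now()
        // ... process one chunk of live video here ...
        elapsed := time.Since(start)

        // Publish the per-chunk processing time; CloudWatch can graph it and
        // alarm on error or latency metrics published the same way.
        _, err = cw.PutMetricData(ctx, &cloudwatch.PutMetricDataInput{
            Namespace: aws.String("LiveVideo/Pipeline"), // example namespace
            MetricData: []types.MetricDatum{{
                MetricName: aws.String("ProcessingTimeMs"),
                Unit:       types.StandardUnitMilliseconds,
                Value:      aws.Float64(float64(elapsed.Milliseconds())),
            }},
        })
        if err != nil {
            log.Fatal(err)
        }
    }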

    I would opt for a serverless architecture on AWS if the stream is not running 24 hours a day, say once a day for an hour or two; then I would definitely go serverless. In my current project, where I stream a live game to users, we use AWS IVS, which is essentially serverless: it gives you an RTMP endpoint, you send your video to that RTMP URL, and AWS gives you back a playback URL, an M3U playlist that is very easy to play without much effort. So if you want to deploy fast and the stream is not 24/7, serverless is really good. With a 24-hour stream it gets expensive, and then we should start looking at other solutions.
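    A rough Go sketch of the headless FFmpeg push described: run ffmpeg as a subprocess and send a source to an RTMP ingest endpoint. The URL, input file, and encoder flags are illustrative assumptions, not the actual IVS configuration.

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Placeholder ingest URL; AWS IVS hands you one per channel.
        ingest := "rtmp://live.example.com/app/STREAM_KEY"

        // -re paces reads at the input's native rate, so a file is pushed
        // as if it were a live source.
        cmd := exec.Command("ffmpeg",
            "-re", "-i", "game.mp4", // example input; could be a capture device
            "-c:v", "libx264", "-preset", "veryfast",
            "-c:a", "aac",
            "-f", "flv", ingest,
        )
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr

        if err := cmd.Run(); err != nil {
            log.Fatalf("ffmpeg exited: %v", err)
        }
    }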

    I think the availability problem is mostly related to the memory or CPU assigned to the container. It might be that the container does not have all the memory it needs, or the CPU allocation does not match the requirement. And with only 2 replicas, both containers might be exiting before the new one is spawned.

    For a stateful application, I would use a deployment where you save the state, spawn a replica of it somewhere else, and switch over; essentially a blue-green deployment. You spawn a new instance of the stateful application, and only when it is fully ready, and we are sure it matches the one we are upgrading, do we destroy the previous one and send new traffic to the new one.

    For a strategy, round robin is fine, but it falls short if one instance is heavily loaded. So round robin, or better, something based on CPU or on the number of outstanding requests, where each request always goes to the container with the least CPU utilization.
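    A toy Go sketch of that least-loaded strategy, using in-flight request counts as a cheap proxy for CPU load; the backend addresses are made up, and real load balancers expose this as a "least connections" policy.

    package main

    import (
        "fmt"
        "sync"
    )

    // backend tracks in-flight requests as a stand-in for CPU utilization.
    type backend struct {
        addr     string
        inFlight int
    }

    type leastLoaded struct {
        mu       sync.Mutex
        backends []*backend
    }

    // pick returns the backend with the fewest in-flight requests and bumps
    // its counter; done must be called once the request completes.
    func (lb *leastLoaded) pick() *backend {
        lb.mu.Lock()
        defer lb.mu.Unlock()
        best := lb.backends[0]
        for _, b := range lb.backends[1:] {
            if b.inFlight < best.inFlight {
                best = b
            }
        }
        best.inFlight++
        return best
    }

    func (lb *leastLoaded) done(b *backend) {
        lb.mu.Lock()
        b.inFlight--
        lb.mu.Unlock()
    }

    func main() {
        lb := &leastLoaded{backends: []*backend{
            {addr: "10.0.0.1:8080"},
            {addr: "10.0.0.2:8080"},
        }}
        for i := 0; i < 3; i++ {
            b := lb.pick() // lb.done(b) would run when each request finishes
            fmt.Println("routing request to", b.addr)
        }
    }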

    Yes, CI/CD definitely. I have used Jenkins pipelines, where you just deploy and, based on the strategy we have defined, the pipeline aborts the current containers and keeps spawning new ones while ensuring high availability: if there are 4 replicas, it exits 2 of them, spawns 2 new ones, and only once those are up does it exit the other two. So CI/CD with a Jenkins pipeline or AWS CodePipeline; I am fine with either.

    I think the benefit is simply that it is a service already available in AWS, and implementing it on our own would take time; building it ourselves would mean reinventing the wheel. So if we have to deliver the project fast, use the managed service; in the future we can always replace it with something of our own and build on that.