Vetted Talent

DIVANSHU BANSAL

Tackling engineering challenges with creativity, simplicity, and a fresh perspective is what drives me. I thrive on learning and applying new concepts and technologies to develop impactful solutions.


Areas of Expertise:

Algorithm design & analysis, data structures, optimization of space and time complexity, design patterns, OOP

Concurrency models: resource sharing, message passing, actor model, reactive model, CSP, co-routines

Distributed systems design: HA, eventual consistency, CQRS, event sourcing, distributed transaction management (saga)

Microservices: choreography and orchestration, cloud design patterns

Operating systems, computer architecture

Programming languages: Java, Python

Databases: SQL (Oracle, MySQL), No-SQL (HBase, Cassandra, MongoDB)

Message brokers: Kafka, RabbitMQ

Security: OAuth2, OpenID Connect

Frameworks: Spring Framework, Spring Boot, Spring Cloud

  • Role

    Lead Engineer, Growth Backend Team

  • Years of Experience

    5 years

Skillsets

  • Java - 5 Years
  • Python - 3 Years
  • Redis
  • Spring Boot
  • HBase
  • Cassandra
  • Restful APIs - 5 Years
  • Spring - 5 Years
  • Docker - 5 Years
  • Kubernetes - 5 Years
  • SQL - 5 Years
  • GCP - 5 Years

Vetted For

14 Skills
  • Senior System Engineer - Backend (AI Screening)
  • Result: 63%
  • Skills assessed: Apache Kafka, DevOps, RabbitMQ, Web, Hibernate, Restful APIs, Spring, Android, Docker, Git, iOS, Java, Kubernetes, SQL
  • Score: 57/90

Professional Summary

5 Years
  • Jul, 2021 - Present (4 yr 2 months)

    Lead Engineer, Growth Backend Team

    Meesho
  • Jun, 2019 - Jul, 2021 (2 yr 1 month)

    Member Of Technical Staff

    Salesforce
  • May, 2018 - Jul, 2018 (2 months)

    Software Engineer (Intern)

    GE Digital

Applications & Tools Known

  • Kafka
  • RabbitMQ
  • K8s
  • Jenkins
  • Docker

Work History

5 Years

Lead Engineer, Growth Backend Team

Meesho
Jul, 2021 - Present (4 yr 2 months)
  • Loyalty Points System - Designed and implemented a loyalty system that cuts across multiple services internally. This involved changing price calculations across all user feeds in the app and building high-throughput APIs (40k rps).
  • AWS to GCP Migration - Migrated around 15 services to GCP along with their underlying datastores, with close to zero downtime and no data loss.
  • Growth Marketing Service - Re-architected the growth marketing service, responsible for sending order data to Meta and Google for running performance marketing campaigns.
  • Community Service - Created the Community service from scratch, a social media platform for Meesho handling posts, comments, likes, and shares. Developed a post ranking model based on the popularity of posts.
  • Infrastructure Cost Reduction - Owned the infrastructure cost reduction project for the team, identifying data sources that could be decommissioned and implementing other optimizations such as compression and serialization while storing data.

Member Of Technical Staff

Salesforce
Jun, 2019 - Jul, 2021 (2 yr 1 month)
  • Ledger Reconciliation Jobs - Designed and implemented the points calculation job for the Loyalty solution recently launched by Salesforce, with processing done on an abstraction layer written on top of HDFS.
  • Voucher Management - Designed and implemented the APIs and entities needed for voucher management, including the end-to-end flow to issue a voucher to a customer and a cron job to expire vouchers using Salesforce's in-house MQ framework.
  • Amazon DynamoDB Integration - Did a POC on moving the product from a relational Oracle DB to a NoSQL DB to support large-volume customers, and designed a solution covering the important use cases.
  • Web Components - Developed UI modals using HTML5 and wired them using Lightning Web Components, a wrapper written on top of Web Components.
  • Programming Tests for Freshers - Authored 2 programming tests on HackerRank for college hiring, took sessions helping freshers understand the Salesforce ecosystem, and supported their smooth onboarding.

Software Engineer (Intern)

GE Digital
May, 2018 - Jul, 2018 (2 months)
    Chatbot - Developed a chatbot for the GE Digital APM team using a DNN model. Built a web application using Django for managing trained and untrained datasets. Also integrated it with Cortana.

Achievements

  • Won the Heritage Award, given to 10 people from each year, for 2 consecutive years (2016 and 2017) for excellent all-round performance.
  • CodeChef rating - 1889 (handle: divanshu1996)
  • Among the top 10 in India in a programming hackathon organised by Publicis.Sapient (2018)
  • Finalist (top 50) in a programming contest organised by InterviewBit (2018)

Major Projects

5 Projects

Hackathon (Midjourney implementation)

    Came up with an idea to support the creatives team, which creates all the banners manually, by using Midjourney to create images from text. Deployed the model on GCP.

B.Tech Project (Regression ANN for a chemical industry use case)

    Applied an SVM classifier for single-walled vs. multi-walled carbon nanotubes (SWNT vs. MWNT) and a regression ANN for predicting the growth rate of carbon nanotubes.

Hackathon Finalist (Chrome Extension)

    Developed a new Chrome extension for Workbench (Salesforce's API client) to make it easy for developers to do simple HTTP requests. The extension was login-less, as we did authentication using the session ID stored in the browser's cookies, and it also had a history feature, similar to Postman, to store previous REST calls.

Hackathon (Covid Tracker)

    Built a COVID tracker using the capabilities of Einstein Analytics (Salesforce's AI/ML solution) and developed dashboards showing probable future hotspots.

Taxi Aggregator Service

    Built a basic taxi service Android app using quad trees to store data points and show nearby available taxis to the customer, as part of Microsoft's Code.Fun.Do hackathon in college (2017).

Education

  • B-Tech in Chemical Engineering

    Indian Institute of Technology Roorkee (2019)

Certifications

  • Design Patterns - by University of Alberta (Coursera)

AI-interview Questions & Answers

I'm currently working as a software developer at Meesho, and I've been here for the last 3 years. In my role, I've been managing multiple services, especially growth-related services, which are responsible for the acquisition and activation of new customers on the platform. These include the offers platform, the loyalty platform, a referral service, and a growth marketing service that sends data to Google and Facebook so that we can show ads on their platforms; I also manage that service. Prior to Meesho, I was working with Salesforce for around 2 years, where I was part of the loyalty team. We built the loyalty program product from scratch; it was a multi-tenant product that our customers can use. I have been working with microservices architecture for the last 5 years, and I have explored various SQL and NoSQL databases.

Okay. So to configure this Docker container, first of all I will have a Jenkins job, which will do the build of my code, and I will have a Dockerfile in my code. Using that Dockerfile, Docker will know what requirements I need for this build to complete, so Docker will resolve those resources and create a build; the memory and CPU requirements are also specified there, and Docker will create a container which has all those capabilities. Now, to pull the code from the repository, in my Jenkins file I will give an S3 link; basically, my Git repository will be uploaded to that S3 bucket, and I will pull the code from that S3 link.
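A minimal multi-stage Dockerfile along these lines might look like the following sketch. Image tags and the project layout are illustrative, and note that CPU/memory limits are normally set by the container runtime or orchestrator at run time, not in the Dockerfile itself:

```dockerfile
# Build stage: resolve Maven dependencies and compile the service
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn -q dependency:go-offline
COPY src ./src
RUN mvn -q package -DskipTests

# Runtime stage: slim image with just the JRE and the built jar
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```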

Okay. So basically Kubernetes will help here. There is an HPA which I will use, along with Envoy proxy. Envoy proxy will basically work as a load balancer, and the HPA will watch the resources for my pods, like the current resource utilization of the pods; if required, the HPA will scale out or scale in the pods as per the current load of my system. In this way, I can build persistent storage using Kubernetes. And also, if this question is about databases, then basically I will utilize any of the cloud providers like GCP or AWS, and I can connect to, let's say if I'm using MySQL, the RDS service for MySQL, and I can store my data there.
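A HorizontalPodAutoscaler manifest matching this idea might look like the sketch below. Note the HPA is a cluster-level controller rather than a sidecar; all names and numbers are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: growth-api-hpa          # hypothetical service name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: growth-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # scale out above ~60% CPU
```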

Okay. To optimize the API, first of all I will make sure I have proper capacity, so that I can run my pods at around 60% CPU and 60% memory. Basically, I need to define thread pools inside my service so that I can parallelize the work to maximize the throughput. Also, what I can do is delay processing: I can push the processing to Kafka and do it async, freeing my HTTP thread so that it can handle other requests. I can also move to reactive Java. In reactive Java, as soon as a request comes in, the netty thread starts the processing and then gets free; basically, there will be a callback when my response is ready, my controller will give that callback, and after that the netty thread will pick up the response and give it to the client. Otherwise it is free to take up other requests.
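A minimal Java sketch of the thread-pool/async idea described above, with an in-process executor standing in for the Kafka hand-off; class names and pool sizes are illustrative, not from the original:

```java
import java.util.concurrent.*;

// Sketch: offload heavy work to a dedicated pool so request threads stay free.
public class AsyncProcessing {
    // Bounded queue + bounded pool so load spikes degrade gracefully.
    private static final ExecutorService WORKERS =
            new ThreadPoolExecutor(4, 16, 60, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<>(1000));

    // The HTTP thread just enqueues the work and returns immediately; in the
    // real setup the payload would be pushed to Kafka instead of this pool.
    public static CompletableFuture<String> handleRequest(String payload) {
        return CompletableFuture.supplyAsync(() -> process(payload), WORKERS);
    }

    private static String process(String payload) {
        return "processed:" + payload; // placeholder for the heavy work
    }

    public static void main(String[] args) throws Exception {
        String result = handleRequest("order-42").get(5, TimeUnit.SECONDS);
        System.out.println(result); // processed:order-42
        WORKERS.shutdown();
    }
}
```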

There will be a master branch which will go into production. There will be a develop branch which will be used for testing in pre-prod environments; it will be a copy of the master branch, but let's say a new feature comes, it will first go to develop, it will be tested on the pre-prod environment, and then it will get merged to master. All the feature branches will be pulled from this develop branch.
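The branching flow described above can be sketched with plain git commands; the repository and branch names are illustrative:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name dev
git checkout -q -b master
git commit -q --allow-empty -m "initial"            # master: production
git checkout -q -b develop                          # develop: pre-prod testing
git checkout -q -b feature/loyalty-points           # features fork from develop
git commit -q --allow-empty -m "feature work"
git checkout -q develop && git merge -q --no-edit feature/loyalty-points
git checkout -q master && git merge -q --no-edit develop   # promote to prod
git log --oneline
```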

I will be maintaining connection pools for MySQL in the code, so that I do not create a connection every time a request comes in to read or write. I will specify the minimum connections that I will need, and I will also specify the maximum connections, in case the requirement is there at runtime during high load.
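A minimal illustration of the pooling idea, using plain strings in place of real JDBC connections so the sketch is self-contained; in a real service a library such as HikariCP would manage this, including min/max sizing:

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch: reuse a fixed set of "connections" instead of creating one per request.
public class SimplePool<T> {
    private final BlockingQueue<T> idle;

    public SimplePool(List<T> connections) {
        this.idle = new ArrayBlockingQueue<>(connections.size(), true, connections);
    }

    // Borrow an existing connection; fail fast if the pool is exhausted.
    public T borrow(long timeoutMs) throws InterruptedException {
        T conn = idle.poll(timeoutMs, TimeUnit.MILLISECONDS);
        if (conn == null) throw new IllegalStateException("pool exhausted");
        return conn;
    }

    // Return the connection so other requests can reuse it.
    public void release(T conn) {
        idle.offer(conn);
    }

    public static void main(String[] args) throws Exception {
        SimplePool<String> pool = new SimplePool<>(List.of("conn-1", "conn-2"));
        String c = pool.borrow(100);
        System.out.println("borrowed " + c);
        pool.release(c);
    }
}
```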

Looking at this code: if the ProcessBuilder fails to start, then in the catch block I'm only printing the stack trace, which is wrong. I should have logged the error instead, because a logging library, let's say SLF4J, handles that in the background, asynchronously, whereas e.printStackTrace() causes the stack trace to be printed to my console in real time, which takes a lot of time. So I really should be using a logger and log files here. Also, we can have some alerts here to notify the developers that the process failed to start. And there are no retries here; in the catch block there should be retries as well.
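A sketch of the fixes suggested above, using java.util.logging so the example stays self-contained (a real service would typically use SLF4J); the retry helper and all names are illustrative:

```java
import java.util.concurrent.Callable;
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: log failures through a logger (not printStackTrace) and retry.
public class ProcessLauncher {
    private static final Logger LOG = Logger.getLogger(ProcessLauncher.class.getName());

    // Runs the action up to maxAttempts times, logging each failure;
    // rethrows the last exception if every attempt fails.
    public static <T> T withRetries(Callable<T> action, int maxAttempts) throws Exception {
        if (maxAttempts < 1) throw new IllegalArgumentException("maxAttempts < 1");
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                LOG.log(Level.WARNING, "attempt " + attempt + " failed", e);
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Illustrative: launch a process with retries instead of swallowing errors.
        Process p = withRetries(() -> new ProcessBuilder("java", "-version").start(), 3);
        p.waitFor();
        System.out.println("exit=" + p.exitValue());
    }
}
```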

The usual load balancing strategies are round robin, where I distribute the load equally in a round-robin fashion to all the servers, and that is pretty scalable. Then, if there are multiple requests from the same user, like some power users in our system, we can use sticky sessions for them and cache their responses in the pods. So for power users we can go with sticky sessions; otherwise, round robin usually works well. Also, if I want to use sticky sessions, we can use consistent hashing here to distribute the load to my servers in a balanced way.
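A minimal consistent-hashing sketch of the idea above; a production ring would use a stronger hash function and tuned virtual-node counts, and all names here are illustrative:

```java
import java.util.TreeMap;

// Sketch: a hash ring that maps keys (e.g. user ids) to servers, so that
// adding/removing a server moves only a fraction of the keys.
public class ConsistentHashRing {
    private static final int VNODES = 100; // virtual nodes per server
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    public void addServer(String server) {
        for (int i = 0; i < VNODES; i++) {
            ring.put(hash(server + "#" + i), server);
        }
    }

    public void removeServer(String server) {
        for (int i = 0; i < VNODES; i++) {
            ring.remove(hash(server + "#" + i));
        }
    }

    // Walk clockwise to the first virtual node at or after the key's hash.
    public String serverFor(String key) {
        Integer k = ring.ceilingKey(hash(key));
        return ring.get(k != null ? k : ring.firstKey());
    }

    // String.hashCode() is weak but fine for a sketch.
    private static int hash(String s) {
        return s.hashCode();
    }

    public static void main(String[] args) {
        ConsistentHashRing ring = new ConsistentHashRing();
        ring.addServer("server-a");
        ring.addServer("server-b");
        ring.addServer("server-c");
        System.out.println("user-42 -> " + ring.serverFor("user-42"));
    }
}
```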

Okay. So first of all, all the configs can be specified in YAML files in my code, and my Kubernetes cluster will basically consist of a load balancer, which will be handled through Envoy proxy. In my cluster, pods of multiple services will be running as shared resources of my Kubernetes cluster. Inter-service communication can also be enabled if one service wants to call another, so we should enable that as well.
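Hypothetical YAML manifests for one such service; all names, images, and resource figures are assumptions, not from the original:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: growth-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: growth-api
  template:
    metadata:
      labels:
        app: growth-api
    spec:
      containers:
        - name: growth-api
          image: registry.example.com/growth-api:1.0.0
          resources:
            requests: { cpu: "500m", memory: "512Mi" }
            limits:   { cpu: "1",    memory: "1Gi" }
---
# ClusterIP Service so other pods in the cluster can call this one.
apiVersion: v1
kind: Service
metadata:
  name: growth-api
spec:
  selector:
    app: growth-api
  ports:
    - port: 80
      targetPort: 8080
```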

If there are peaks in the requests, so there are some key times where I'm getting a spike in requests, such an architecture can scale on the fly automatically; we do not need to do the scaling on our side. So it usually works well for spiky loads.

Yeah, Grafana. So basically, in the usual setup, Grafana reads data from a Prometheus server. I can have some custom metrics which my pods will send to the Prometheus server, and in Grafana I can set up various alerts; let's say if, in a minute, the number of 5xx responses goes above a particular threshold, then alerts go out to the developers. Also, I should have graphs for different JMX metrics and for different application metrics like 5xx, p99, and the number of requests. These kinds of metrics are very essential here.
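A hypothetical Prometheus alerting rule matching the 5xx-threshold idea above; the metric name, labels, and thresholds are assumptions, not from the original:

```yaml
groups:
  - name: api-alerts
    rules:
      - alert: High5xxRate
        # Fire when 5xx responses exceed 10/s (averaged over 1m) for 2 minutes.
        expr: sum(rate(http_requests_total{status=~"5.."}[1m])) > 10
        for: 2m
        labels:
          severity: page
        annotations:
          summary: "5xx responses above threshold for 2 minutes"
```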