Vetted Talent

Bhupesh Pandey

With over 15 years of experience in software development and engineering, I am currently a Backend Developer at IBM, where I work on creating innovative and scalable cloud solutions using Golang and Docker. I am passionate about leveraging cloud technologies to deliver value to clients and stakeholders, and to solve complex business challenges.


As a certified ScrumMaster and a team leader, I have successfully led and managed multiple projects throughout the software development life cycle, using Agile methodology and best practices. I have also mentored and coached junior developers and facilitated cross-functional collaboration and communication. I have a strong technical background and a working knowledge of Node.js, TypeScript, docker-compose, Snap-Telemetry, and the Modbus REST API. I am always eager to learn new skills and tools, and to explore new domains and opportunities.

  • Role

    Cloud Developer

  • Years of Experience

    18 years

Skillsets

  • Prometheus
  • JUnit
  • Linux
  • Logdna
  • Microservices
  • Modbus
  • MQTT
  • Network Security
  • Pprof
  • JFace
  • RabbitMQ
  • ReactJs
  • SWT
  • Sysdig
  • system security
  • TravisCI
  • TypeScript
  • Opensensorhub
  • Cloud Security
  • Kubernetes - 4 Years
  • Docker
  • Eclipse
  • AWS
  • Azure
  • Azure Blob Storage
  • Azure Event Hub
  • CI/CD
  • Github
  • Consul
  • Corejava
  • Cryptography
  • EMF
  • Golang
  • gRPC
  • Helm

Vetted For

8 Skills
  • Roles & Skills
  • Results
  • Details
  • Senior Golang Engineer (Remote) - AI Screening
  • 57%
  • Skills assessed: Dart/Flutter, GCP/Docker, GraphQL, Rust, MongoDB, Golang, Kubernetes, PostgreSQL
  • Score: 51/90

Professional Summary

18 Years
  • May, 2022 - Present 3 yr 11 months

    Cloud Developer

    IBM
  • Oct, 2021 - May, 2022 7 months

    Principal Software Engineer

    F5
  • Nov, 2020 - Oct, 2021 11 months

    Senior Technical Lead

    Infinite Computer Solutions
  • May, 2012 - Dec, 2013 1 yr 7 months

    Senior Engineer

    Continental Automotive Components
  • Oct, 2015 - Feb, 2020 4 yr 4 months

    Project Lead

    L&T Technology Services
  • Aug, 2020 - Nov, 2020 3 months

    Project Lead

    Persistent Systems
  • May, 2012 - Aug, 2012 3 months

    System Analyst

    Atos India
  • May, 2008 - May, 2012 4 yr

    Senior Software Engineer

    KPIT Cummins Infosystems

Applications & Tools Known

  • DEX
  • Docker
  • Jenkins
  • Angular
  • GitHub
  • Linux
  • AWS Kinesis
  • Docker Swarm
  • Prometheus

Work History

18 Years

Cloud Developer

IBM
May, 2022 - Present 3 yr 11 months
    As a senior engineer and individual contributor on the team, my responsibilities included: 1. Developing robust code using Golang, Docker, and Kubernetes, and writing test cases with the built-in testing and mock frameworks. 2. Updating the code to add observability via Sysdig, Prometheus, and LogDNA. 3. Developing synthetic test cases in Node.js to monitor the production environment. 4. Ensuring that the APIs are stable and that any issues arising in the code, in the APIs, or from excessive error conditions are caught effectively.

Principal Software Engineer

F5
Oct, 2021 - May, 2022 7 months
    SCIM provides a standard schema and definitions for users and can be used to manage the standard CRUD operations (create, read, update, delete). VoltConsole is a platform for Distributed Cloud Services - a SaaS-based offering that automates deployment, security, and operations of distributed apps and infrastructure across multi-cloud and edge environments. As a senior engineer on the team: 1. Wrote the CRUD APIs, using Golang, Docker, and Kubernetes, that let the multi-cloud services interact and work with VoltConsole (F5's legacy cloud computing app).

Senior Technical Lead

Infinite Computer Solutions
Nov, 2020 - Oct, 2021 11 months
    Crypto Services: The services inside Crypto Services allow users to create and store their keys. When they have to log in to a service using a key, they can query their keys via these services. The keys are encrypted and stored in a Redis cache. As a senior engineer on the team: 1. Wrote a certificate expiry verifier that notifies a Slack channel about upcoming expiry, using Golang, Docker, and Kubernetes. 2. Modified the CI/CD pipeline using Helm charts and TravisCI.

Project Lead

Persistent Systems
Aug, 2020 - Nov, 2020 3 months
    Intersight Orchestrator: Cisco's Intersight Orchestrator is a tool designed to allow users to create workflows, tasks, and stages for executing automated workflows. The engagement ended before I got to work on the requirements. Environment: Golang, Docker, pprof, unit test cases, Windows 10, GitHub, Linux (Alpine) for Docker, ReactJS.

Project Lead

L&T Technology Services
Oct, 2015 - Feb, 2020 4 yr 4 months
    Data Exchange Platform (DEX): As a senior engineer: 1. Developed RabbitMQ, MQTT, gRPC, and REST connectors using Golang, Docker, Kubernetes, Azure, and AWS Kinesis. 2. Explored services such as AWS Kinesis to see whether they would fit into our application for streaming data, since the next level of development would involve analyzing data patterns, and Kinesis would be the perfect way to start.
    Modbus REST Connector: 1. Wrote a REST service to read and write data to the Modbus holding registers using Golang and Docker. 2. Wrote connectors on OpenSensorHub, in Core Java, for the Modbus to read high-frequency data that could later be written to or read from the Modbus via the above REST service.
    Telemetry Data Collection Tool: 1. Wrote a Golang collector that connects to RabbitMQ, which receives data containing multiple signals at a rate of 20K messages/sec. 2. Created a Golang processor that applies signal-specific algorithms. 3. Wrote a Golang publisher that slows the data down to 1/sec and publishes it on a different channel.
    Middleware for the Rig Activities: Wrote a TypeScript application that serves as middleware, created containerized apps using Docker, and tested them via Portainer.
    Connectors for Kepware: 1. Created a connector for connecting to Kepware using Core Java.
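The collector/processor/publisher pipeline of the telemetry tool can be sketched with Go channels. A minimal, hypothetical version: the "signal-specific algorithm" is a stand-in that just scales values, and the 20K/sec-to-1/sec slow-down is reduced to keeping the latest processed value per signal.

```go
package main

import "fmt"

// Sample is one telemetry reading for a named signal.
type Sample struct {
	Signal string
	Value  float64
}

// process stands in for the signal-specific algorithms; here it just scales.
func process(s Sample) Sample {
	s.Value *= 2 // hypothetical algorithm
	return s
}

// downsample drains a burst of high-frequency samples and keeps only the
// most recent processed value per signal, mimicking the slow-down stage.
func downsample(in <-chan Sample) map[string]float64 {
	latest := map[string]float64{}
	for s := range in {
		p := process(s)
		latest[p.Signal] = p.Value
	}
	return latest
}

func main() {
	in := make(chan Sample, 8)
	go func() {
		defer close(in)
		for i := 1; i <= 4; i++ {
			in <- Sample{Signal: "rpm", Value: float64(i)}
		}
		in <- Sample{Signal: "temp", Value: 21.5}
	}()
	out := downsample(in)
	fmt.Println(out["rpm"], out["temp"]) // latest rpm sample (4) doubled to 8
}
```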

Senior Engineer

Continental Automotive Components
May, 2012 - Dec, 2013 1 yr 7 months
    Core Java, Eclipse plugin development, JUnit; the project was based on basic plugin requirements dealing with the generation of NvM & Diagnostic Configuration.

System Analyst

Atos India
May, 2012 - Aug, 2012 3 months

Senior Software Engineer

KPIT Cummins Infosystems
May, 2008 - May, 2012 4 yr
    Core JAVA, EMF, SWT, JFace.

Achievements

  • Developed service-broker REST Server components
  • Enhanced observability and performance metrics for cloud services
  • Handled production issues and implemented autoscale suspension
  • Created Monitoring Dashboards
  • Built SCIM application for VoltConsole
  • Developed UI for data exchange application
  • Implemented connectivity endpoints using RabbitMQ and MQTT
  • Developed telemetry tools and plugins for Snap-Telemetry
  • Designed middleware application in TypeScript and Node.js

Major Projects

1 Project

Data Exchange Platform (DEX)

Oct, 2015 - Feb, 2020 4 yr 4 months
    A Golang application for data exchange with MQTT, RabbitMQ, gRPC, and REST support, built to replace Halliburton's legacy C# application, INSITE.

Education

  • B.E: Information Technology

    Bansal Institute of Science And Technology (B.I.S.T) (2007)

Certifications

  • Certified ScrumMaster

  • IBM Cloud Advocate Essentials - IBM

  • IBM Certified Advocate - Cloud v2 - IBM

Interests

  • Watching Movies

AI-interview Questions & Answers

    I have approximately 16 years of experience in the industry. Out of that, for almost 8 years I've been working with Golang, Kubernetes, Docker, and Sysdig, and recently I've gained exposure to Google Cloud. These are the major technologies I've been working with. Other than this, the major projects I've worked on with Golang are mostly of two types: either ones with high-frequency data coming in, where we transform the data into some form or another, or REST services.
    As for reading large volumes of data from MongoDB without performance problems: for MongoDB we have a method of indexing the data. If a large volume of data is coming into MongoDB, we can index it, and based on the index we can identify the data very quickly, so there are no performance issues and we can continue operations without any lag on the application side.
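The indexing idea can be illustrated in plain Go, without the MongoDB driver: an index is one up-front pass over the data so each later lookup is a direct hit instead of a full collection scan (MongoDB's `createIndex` does the equivalent server-side). The types here are illustrative.

```go
package main

import "fmt"

// Record stands in for a MongoDB document.
type Record struct {
	ID   string
	Name string
}

// buildIndex does conceptually what a database index does: a single pass
// over the collection so later lookups are O(1) instead of a linear scan.
func buildIndex(records []Record) map[string]Record {
	idx := make(map[string]Record, len(records))
	for _, r := range records {
		idx[r.ID] = r
	}
	return idx
}

func main() {
	records := []Record{{ID: "u1", Name: "Ada"}, {ID: "u2", Name: "Linus"}}
	idx := buildIndex(records)
	fmt.Println(idx["u2"].Name) // direct lookup, no scan
}
```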

    When designing a Go service to handle a large throughput of data, incorporating both MongoDB and caching, the application has to be designed so that it can get the data from the cache rather than from MongoDB where possible. Before even going to MongoDB, you query the cache; once you have a cache hit, you respond with that data, and only on a miss do you go to MongoDB. On the MongoDB side you can also optimize your queries based on indexing - some sort of indexing or hashing mechanism can be implemented - and that helps in handling a high throughput of data with MongoDB.
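A minimal cache-aside sketch of the read path described above, with in-memory maps standing in for Redis and MongoDB:

```go
package main

import "fmt"

// cachedStore pairs a fast cache with a slower backing database.
type cachedStore struct {
	cache  map[string]string // stands in for Redis
	db     map[string]string // stands in for MongoDB
	dbHits int               // counts how often we fell through to the DB
}

// Get implements cache-aside: answer from the cache when possible, fall
// back to the database and populate the cache on a miss.
func (s *cachedStore) Get(key string) (string, bool) {
	if v, ok := s.cache[key]; ok {
		return v, true
	}
	s.dbHits++
	v, ok := s.db[key]
	if ok {
		s.cache[key] = v
	}
	return v, ok
}

func main() {
	s := &cachedStore{
		cache: map[string]string{},
		db:    map[string]string{"user:1": "Ada"},
	}
	s.Get("user:1")       // miss: goes to the database
	s.Get("user:1")       // hit: served from the cache
	fmt.Println(s.dbHits) // prints 1
}
```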

    How would you structure a complex Go code base to follow SOLID principles while taking advantage of Kubernetes for scalable solutions? A complex Go code base can be structured so that the SOLID principles are followed. S is single responsibility: an interface can be defined that captures a single responsibility, that responsibility is assigned to a specific struct, and the struct implements the methods of the interface. L is Liskov substitution: you can replace a struct with the interface it implements. Say I have a DB interface; I can have ten DB implementations in my code and just use the interface instead of their structs, call the methods on it, and achieve the same operation with any DB implementation in place. D is dependency inversion: depend on the interface rather than the implementation. (O, the open/closed principle, and I, interface segregation, I could not recall during the interview.)
    On the Kubernetes side, after implementing the Go code base you define the deployment files, the resource files, the service files, and the environment-variable files (ConfigMaps), along with the ingress and egress definitions for the application. With those you can deploy the application to a cluster in a specific namespace. Using the egress and ingress properties and kube-proxy, other applications are able to communicate with your application, and you can define how many instances of the application run by default.
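The interface substitution described above, a small `Store` interface with interchangeable implementations, might look like this in Go; all names are illustrative:

```go
package main

import "fmt"

// Store is the narrow, single-responsibility interface; callers depend on
// it, never on a concrete database (dependency inversion), and any
// implementation can substitute for another (Liskov substitution).
type Store interface {
	Save(key, value string) error
}

type memStore struct{ data map[string]string }

func (m *memStore) Save(k, v string) error { m.data[k] = v; return nil }

// loggingStore wraps any other Store, adding behavior without changing it.
type loggingStore struct{ inner Store }

func (l *loggingStore) Save(k, v string) error {
	fmt.Printf("saving %s\n", k)
	return l.inner.Save(k, v)
}

// register sees only the interface, so either implementation works unchanged.
func register(s Store, user string) error { return s.Save("user:"+user, "active") }

func main() {
	base := &memStore{data: map[string]string{}}
	var s Store = &loggingStore{inner: base} // swap implementations freely
	register(s, "ada")
	fmt.Println(base.data["user:ada"]) // prints active
}
```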

    To avoid a race condition, whenever you're writing to a MongoDB collection you can put the write inside a mutex section - mutex Lock, then mutex Unlock - and that helps you avoid race conditions when writing to the collection. Second, when running the application you can pass the `-race` flag, which tells you where in your application the race conditions are, and you can then resolve them. Another way of resolving this is to pass the data through a channel: you define a channel for the data, keep listening on that channel, and write the data you receive from it to MongoDB. That way the writes are mutually exclusive by default, and race conditions are avoided.
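Both techniques from the answer - a mutex around the critical section, and a single owner goroutine fed by a channel - in a minimal runnable form. An in-memory counter stands in for the MongoDB write; run with `go run -race` to confirm the detector stays quiet.

```go
package main

import (
	"fmt"
	"sync"
)

// mutexCounter guards shared state the way the answer describes guarding a
// MongoDB write: Lock/Unlock around the critical section.
type mutexCounter struct {
	mu sync.Mutex
	n  int
}

func (c *mutexCounter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

// channelCounter is the alternative: one goroutine owns the state and
// everyone else sends on a channel, so writes are serialized by design.
func channelCounter(inc <-chan struct{}, done chan<- int) {
	n := 0
	for range inc {
		n++
	}
	done <- n
}

func main() {
	c := &mutexCounter{}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); c.Inc() }()
	}
	wg.Wait()
	fmt.Println(c.n) // prints 100

	inc := make(chan struct{}, 5)
	done := make(chan int)
	go channelCounter(inc, done)
	for i := 0; i < 5; i++ {
		inc <- struct{}{}
	}
	close(inc)
	fmt.Println(<-done) // prints 5
}
```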

    Outline a strategy for automatically scaling your MongoDB replica set in Kubernetes based on different workload patterns. You can create a MongoDB replica set that runs, say, three instances by default, and then define at what point new replicas are created and at what point synchronization between the replicas happens; those frequencies have to be defined. Since you're running a database, it has to have some mechanism for deciding how the data is written. So you deploy MongoDB on Kubernetes, and then, based on the memory or data consumption on the node, you define when to scale up or scale down; that is how replicas of MongoDB get created at runtime. You then keep monitoring the MongoDB metrics via Prometheus or some similar metrics-monitoring system - Kubernetes can also do that - and you can run load testing to validate the scaling behavior.
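One hedged way to express the "scale on memory consumption past a threshold" idea is a HorizontalPodAutoscaler manifest. Note that in practice a MongoDB replica set on Kubernetes is usually managed by an operator, which also reconfigures replica-set membership when members are added; this sketch covers only the scaling trigger, and all names are hypothetical.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mongodb-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: mongodb              # replica-set members running as a StatefulSet
  minReplicas: 3               # the default three members mentioned above
  maxReplicas: 7
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80   # scale up past the 80% threshold
```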

    How would you use Go channels and goroutines to build a scalable event-driven microservice architecture? You can create an application that spins up goroutines based on the load on the application: if I have to process a large volume of data, I can keep increasing the number of goroutines. Or, the other way around, you can have a set of goroutines running in parallel, sized for the system where they are deployed, and then scale the application itself: if the load on the Go application crosses a defined threshold - say 80 or 85% - it creates one more replica and scales up, and after scaling up the new replica starts handling part of the load being sent to the application. That way you can build an application with Go channels and goroutines in place that increase or decrease based on the load on the system.
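A worker-pool sketch of the goroutine-based scaling described above; the per-event work is hypothetical, and scaling up is a matter of raising `workers` (or, at the cluster level, adding replicas once utilization crosses the threshold).

```go
package main

import (
	"fmt"
	"sync"
)

// processAll fans jobs out to a fixed pool of goroutines over a channel
// and aggregates their results.
func processAll(jobs []int, workers int) int {
	in := make(chan int)
	var wg sync.WaitGroup
	var mu sync.Mutex
	total := 0
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range in {
				mu.Lock()
				total += j * j // hypothetical per-event processing
				mu.Unlock()
			}
		}()
	}
	for _, j := range jobs {
		in <- j
	}
	close(in) // no more events: workers drain the channel and exit
	wg.Wait()
	return total
}

func main() {
	fmt.Println(processAll([]int{1, 2, 3, 4}, 3)) // prints 30
}
```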

    Which pattern would you apply to ensure that Go services can communicate effectively with different database technologies in a polyglot persistence setup? If we have multiple database technologies in place inside a Go microservice, you can always have an interface in place and hide the database implementations behind it; that ensures you are not communicating with a specific database but with the interface. Polyglot means many, so for many database technologies in a persistence setup you can build separate repositories for the different databases - MongoDB, Postgres, Cassandra - each set up as its own repository. When the application starts up, you create those repositories by means of a factory, and then you can use multiple databases in your application. I believe that is what the question meant.
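The repository-plus-factory pattern from the answer, in a minimal Go form; the concrete repositories are stubs, where a real one would wrap the corresponding driver.

```go
package main

import "fmt"

// Repository is the single abstraction the service talks to, regardless of
// which database sits behind it.
type Repository interface {
	Name() string
}

type mongoRepo struct{}

func (mongoRepo) Name() string { return "mongodb" }

type postgresRepo struct{}

func (postgresRepo) Name() string { return "postgres" }

// newRepository is the startup-time factory the answer describes: pick the
// concrete repository from configuration.
func newRepository(kind string) (Repository, error) {
	switch kind {
	case "mongodb":
		return mongoRepo{}, nil
	case "postgres":
		return postgresRepo{}, nil
	default:
		return nil, fmt.Errorf("unknown database %q", kind)
	}
}

func main() {
	repo, err := newRepository("postgres")
	if err != nil {
		panic(err)
	}
	fmt.Println(repo.Name()) // prints postgres
}
```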

    What is your approach to setting up a service mesh architecture in Kubernetes to provide controlled network traffic for Golang microservices? Controlled network traffic means egress and ingress. When you set up your application on Kubernetes, you define egress and ingress properties. Ingress defines which applications can communicate with your application - the incoming requests - and egress defines where your application can go and communicate - the outgoing requests. So incoming traffic is restricted by means of ingress and outgoing traffic by means of egress, and that is how you restrict the traffic on the application, if that is what the question means.
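On plain Kubernetes (without a mesh such as Istio or Linkerd, which add sidecar proxies and mTLS on top), the ingress/egress restriction described above maps to a NetworkPolicy; the labels and names here are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-orders-svc    # hypothetical policy for one Go microservice
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: orders              # the pods this policy protects
  policyTypes: [Ingress, Egress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: gateway     # only the gateway may call in
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: mongodb     # outgoing traffic only to the database
```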

    How can you apply a domain-driven design approach in Go to effectively work with a Neo4j graph database? As the name says, Neo4j is a graph database, and it has a data format it understands. So you can define domain data models, and those data models can be leveraged by the Neo4j graph database for storage; they can be defined in a separate repository. Then, in your application, you use that Neo4j repository and simply call its APIs to store the data.
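A small DDD-flavored sketch of the above: the domain defines the entity and a repository port, and a Neo4j-backed adapter (with its Cypher queries) would live behind that port. Here an in-memory graph stands in so the example runs without a database; all names are illustrative.

```go
package main

import "fmt"

// Person is the domain entity; a friendship is the relationship we would
// store as a graph edge in Neo4j.
type Person struct{ Name string }

// PersonRepository is the domain-facing port; a production implementation
// would wrap the Neo4j driver, and the domain would never see Cypher.
type PersonRepository interface {
	AddFriend(a, b Person)
	Friends(p Person) []string
}

// memGraph is an in-memory stand-in for the Neo4j-backed adapter.
type memGraph struct{ edges map[string][]string }

func (g *memGraph) AddFriend(a, b Person) {
	g.edges[a.Name] = append(g.edges[a.Name], b.Name)
}

func (g *memGraph) Friends(p Person) []string { return g.edges[p.Name] }

func main() {
	var repo PersonRepository = &memGraph{edges: map[string][]string{}}
	repo.AddFriend(Person{"Ada"}, Person{"Grace"})
	fmt.Println(repo.Friends(Person{"Ada"})) // [Grace]
}
```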

    How would you utilize interfaces in Go to abstract common functionality for services that interact with different types of databases, like MongoDB and PostgreSQL? In this case you can define interfaces for the specific purposes: you will have certain structured data that needs to go into PostgreSQL and, possibly, some unstructured data that goes into MongoDB, so you can have two different interfaces in place, one for the structured database and one for the unstructured one. Or, if everything is going to just one of the DBs, you can define an interface for that DB, define the methods on top of it, and provide the optimization inside those methods. Then you just use the interface wherever you need it - say you're creating a REST service that has to store data to the database: you use the interface instance and store the data via the interface methods instead of calling a specific struct for MongoDB or PostgreSQL. That is how you can leverage interfaces.
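Two illustrative interfaces, one per data shape, with in-memory fakes standing in for the MongoDB and PostgreSQL drivers; every name here is hypothetical.

```go
package main

import "fmt"

// DocumentStore covers schemaless payloads, the MongoDB side of the answer.
type DocumentStore interface {
	Insert(doc map[string]any) string
}

// RowStore covers structured records, the PostgreSQL side.
type RowStore interface {
	Insert(table string, cols map[string]string) string
}

type fakeMongo struct{ count int }

func (m *fakeMongo) Insert(doc map[string]any) string {
	m.count++
	return fmt.Sprintf("doc-%d", m.count)
}

type fakePostgres struct{ count int }

func (p *fakePostgres) Insert(table string, cols map[string]string) string {
	p.count++
	return fmt.Sprintf("%s-row-%d", table, p.count)
}

// saveEvent is the REST-handler-level code: it holds interfaces only, so
// either fake can be swapped for a real driver-backed implementation.
func saveEvent(docs DocumentStore, rows RowStore) (string, string) {
	docID := docs.Insert(map[string]any{"kind": "raw-sensor"})
	rowID := rows.Insert("events", map[string]string{"kind": "audit"})
	return docID, rowID
}

func main() {
	a, b := saveEvent(&fakeMongo{}, &fakePostgres{})
	fmt.Println(a, b) // doc-1 events-row-1
}
```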