
KUSH SHARMA

Vetted Talent

Motivated and passionate software engineer with 8+ years of experience developing highly scalable and secure software systems using a variety of technologies, including Java, Spring Boot, Python, microservices, Jenkins, REST APIs, GraphQL APIs, and PostgreSQL. Certified Disciplined Agile Scrum Master and a Security Champion.

  • Role

    Senior Member Technical Staff

  • Years of Experience

    9.8 years

  • Professional Portfolio

    View here

Skillsets

  • MongoDB
  • CircleCI
  • GraphQL
  • Hibernate
  • JavaScript
  • Jenkins
  • Jest
  • JUnit
  • LWC
  • Mockito
  • C#
  • MySQL
  • Prometheus
  • pytest
  • ReactJs
  • Redis
  • Rest APIs
  • SAPUI5
  • SQL Server
  • Kafka
  • Java - 8 Years
  • Python - 2 Years
  • Cloud Foundry
  • Grafana
  • Kibana
  • Kubernetes
  • Spring Boot
  • Docker
  • PostgreSQL
  • .NET Core
  • ArgoCD
  • Aura
  • AWS
  • Azure

Vetted For

8 Skills
  • Role: Senior Software Engineer (Onsite, Ahmedabad) - AI Screening
  • Result: 50%
  • Skills assessed: .NET, Azure DevOps, C#, Git, SQL, Strategic Thinking, Leadership, Problem Solving Attitude
  • Score: 45/90

Professional Summary

9.8 Years
  • Mar, 2024 - Present (1 yr 11 months)

    Senior Member Technical Staff

    Salesforce
  • Mar, 2022 - Mar, 2024 (2 yr)

    Computer Scientist

    Adobe
  • Jun, 2021 - Mar, 2022 (9 months)

    Senior Data Engineer

    Visa
  • Sep, 2018 - Jun, 2021 (2 yr 9 months)

    Developer

    SAP Labs
  • Jul, 2016 - Aug, 2018 (2 yr 1 month)

    Member Technical Staff

    First American India

Applications & Tools Known

  • Scrum
  • Spark SQL
  • Apache Spark
  • C++
  • Java
  • C#
  • HTML5
  • JavaScript
  • ASP.NET
  • jQuery
  • Python
  • MySQL
  • Linux Admin

Work History

9.8 Years

Senior Member Technical Staff

Salesforce
Mar, 2024 - Present (1 yr 11 months)
    • Architected and led the Proactive Campaign Service, Knowledge Feedback Management, and Personalisation services, enabling outreach to 200,000+ campaign members concurrently with zero data loss and <500ms p95 API latency.
    • Built agentic AI actions and portals for Service Cloud initiatives, integrating Java/Spring Boot services with LWC/Aura-based ReactJS frontends and GraphQL/REST APIs, improving service adoption by 25% and reducing case time-to-resolution by 40%.
    • Increased automated test coverage from 0% to 95% (JUnit/Mockito/Jest), cutting escaped defects by 80% and accelerating CI feedback cycles.
    • Owned design reviews and cross-team architectural alignment, and instituted coding standards and PR checklists.
    • Designed 14 LWC and 5 Aura components and promoted AI productivity tools such as Cursor, CodeGenie, and Windsurf.
    • Facilitated daily Scrum ceremonies and roadmap execution while maintaining delivery predictability of 90% sprint-goal attainment.

Computer Scientist

Adobe
Mar, 2022 - Mar, 2024 (2 yr)
    • Worked with the Digital Experience Platform team on the Lookalike Modeling service, identifying patterns and recommending similar customer segments from petabytes of data using Java 11, Spring Boot, REST APIs, Python, shell scripting, Jenkins, Azure, and data pipelines built on Airflow and Astronomer.
    • Engineered scalable DAGs handling up to 10,000 concurrent model-training tasks, achieving a 98% pipeline success rate and reducing average training latency through optimized resource allocation and parallel execution.
    • Built production-grade data pipelines on Azure and Airflow, with distributed computing through Spark jobs on Databricks.
    • Owned critical project modules, GitHub repository setup, the microservice structure, and automated CI/CD pipelines for DX services using Jenkins, SonarQube, Vault, and Azure Cloud, alongside serving as a Security Champion and a certified Scrum Master running daily Scrum.

Senior Data Engineer

Visa
Jun, 2021 - Mar, 2022 (9 months)
    • Designed a data platform from scratch using Java 8, Spring Boot, Python, REST APIs, shell scripts, GraphQL, Spark, Kubernetes, Prometheus, and Grafana.
    • Developed big data services for the platform using Spark, Hive, and HBase.
    • Increased code coverage to 85% using pytest.

Developer

SAP Labs
Sep, 2018 - Jun, 2021 (2 yr 9 months)
    • Developed multi-tenant SaaS applications from scratch using Java 8, Cloud Foundry, SAPUI5, Kafka, REST APIs, Swagger, and PostgreSQL in a multi-cloud model (Azure, AWS, AliCloud).
    • Worked in a microservice-oriented architecture with extensive use of the Kafka messaging queue for tenant provisioning.
    • Orchestrated resources across multi-cloud ecosystems (AWS/Azure) while meeting GDPR compliance, reducing infrastructure setup cost by 10,000 euros per setup.
    • Optimised latency using database indexes, tuned connection-pool properties, and thread pools for asynchronous requests.
    • Improved test coverage to 95% by implementing a comprehensive test pyramid of component-level tests, integration tests, and JUnit tests with the Mockito framework.

Member Technical Staff

First American India
Jul, 2016 - Aug, 2018 (2 yr 1 month)
    • Managed and enhanced features in the Closing Disclosure app using the .NET 4.5 framework.
    • Re-architected the Closing Disclosure application using .NET Core.
    • Developed WCF and WPF applications: Streamline and Escrow Lite.

Major Projects

1 Project

Tweet Sentiment Analysis Notification System

    Designed and implemented an automated sentiment detection pipeline for Twitter data using Java, Spring Boot, Azure Cognitive APIs, and Twilio SMS integration, following SOLID and TDD principles.

Education

  • Bachelor of Technology in Computer Science & Engineering

    (2016)

Certifications

  • Apache Spark (TM) SQL for Data Analysts

    Coursera
  • Gradle for Java-Based Applications and Libraries

    LinkedIn
  • Learning Gradle

    LinkedIn
  • Scala Essential Training

    LinkedIn

Interests

  • Playing Guitar
  • Gymming
  • Yoga
  • Dance
  • Watching Movies
  • Badminton
  • Travelling

AI Interview Questions & Answers

    Could you help me understand more about your background by giving a brief introduction of yourself? Hi, my name is Kush Sharma, and I am from Jammu, Rajasthan. I have about 7.5 years of experience developing enterprise-grade software and highly secure, scalable distributed systems. I have worked with a variety of technologies and tech stacks: Java, Spring Boot, Python, CI/CD with Jenkins, data pipelines, machine learning, and various algorithms and data structures. Currently I work at Adobe as a Computer Scientist, in the Digital Experience group, where we are developing a service called Lookalike Modeling. It is essentially a recommendation system: we capture the digital footprints and events happening across the web and then suggest lookalike segments to users. I am an individual contributor, and I also manage a team of three juniors. I have been with Adobe for two years now, and as a certified Disciplined Agile Scrum Master I also take lead responsibility for running the Scrum process and all Agile-related meetings. Before Adobe, I worked with companies like SAP and Visa. At SAP, I worked for three years on the cloud platform team, where we developed two services from scratch for the SAP Cloud Platform: one is Cloud Platform Integration, and the other is Integration Suite. I was very fortunate to be part of a team where projects were discussed and groomed and sound architectural decisions were taken, and I was part of those discussions. The tech stack there was much the same: Java, Spring Boot, microservices, RESTful APIs, GraphQL APIs, PostgreSQL, Azure, AWS, and a private cloud, AliCloud. There I had end-to-end ownership of complete systems: grooming requirements with product owners and technical leads or architects, capturing the requirements, implementing them, writing unit tests, and defining the test pyramid. The first layer was unit tests; the second was UI-level tests with Selenium; then end-to-end tests; then component-level tests with mocked APIs for microservices; and finally deployment. For customer issues we had a ServiceNow portal and a regular on-call schedule, with two people per sprint assigned to handle whatever came in from users, whether a new request or a production issue, which we would actively solve in collaboration with customers to get it resolved as soon as possible. So yes, I have a good understanding of leading design discussions.
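
A minimal sketch of what the unit-test layer of the test pyramid described above might look like, using JUnit 5 and Mockito (both named in the profile). The PaymentGateway and PaymentService types are hypothetical stand-ins, not code from any project mentioned here.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

// Hypothetical collaborator, mocked so the test stays at the unit level.
interface PaymentGateway {
    boolean charge(String accountId, long amountCents);
}

// Hypothetical service under test.
class PaymentService {
    private final PaymentGateway gateway;
    PaymentService(PaymentGateway gateway) { this.gateway = gateway; }
    boolean pay(String accountId, long amountCents) {
        if (amountCents <= 0) return false; // guard clause
        return gateway.charge(accountId, amountCents);
    }
}

class PaymentServiceTest {
    @Test
    void paysThroughGatewayForPositiveAmounts() {
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge("acct-1", 500L)).thenReturn(true);

        PaymentService service = new PaymentService(gateway);

        assertTrue(service.pay("acct-1", 500L));
        verify(gateway).charge("acct-1", 500L); // interaction check
    }
}
```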

    How would you use Azure DevOps to automate deployment pipelines and increase the release frequency for a .NET Core application? I have not personally worked with Azure DevOps pipelines, but I have worked with Jenkins. There we created many pipeline jobs for various purposes. For example, a pull-request job is triggered automatically whenever you create a pull request, and it runs the course of actions defined in the job. Similarly, there is a deployment job if you want to deploy your code to an environment. I have configured Jenkins pipelines and Spinnaker pipelines, but not Azure DevOps personally; the process should be similar to the actions I have already performed in the past. Apart from that, we can define branching strategies for the GitHub repository: which branches trigger pull-request jobs and what actions those jobs perform. Those jobs should also run tests, code coverage, and so on; all of these things can be automated using CI/CD jobs.

    Which technique would you use in C# to ensure your objects are thread safe while maximizing concurrency? I worked with C# at my first employer, First American, for two years using the .NET Framework 4.5 at that time, so I am not very clear right now on the exact syntax or keywords used there for thread safety. But I have worked with parallelism and concurrency in my other roles with Java. There we achieved thread safety using asynchronous jobs with executor services, callables, futures, and runnables, or even by creating a new thread and running it directly; in that case, we had to manage the entire lifecycle of that particular thread ourselves. In C#, I don't remember exactly how it is done, but it should similarly be based on threads.
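
A small Java sketch of the approach the answer describes: thread-safe shared state plus executor-managed concurrency, so no thread lifecycles are managed by hand. The class and counter are illustrative only; the rough C# analogues would be Interlocked, lock, and the Task API.

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class SafeCounterDemo {
    // AtomicLong gives lock-free thread safety for a single counter,
    // allowing more concurrency than a coarse synchronized block.
    private static final AtomicLong counter = new AtomicLong();

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Submit Callables; each Future lets us wait for completion
        // without managing thread lifecycles by hand.
        List<Future<Long>> results = pool.invokeAll(
                Collections.nCopies(100,
                        (Callable<Long>) counter::incrementAndGet));

        for (Future<Long> f : results) f.get(); // propagate any failure
        System.out.println("final count = " + counter.get()); // always 100
        pool.shutdown();
    }
}
```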

    What is your strategy for addressing and migrating technical debt in a SQL-driven application? Today almost all applications are database-driven, whether SQL or NoSQL, and almost no project avoids accumulating some small technical debts: when pushing new features under tight deadlines, there is always a chance of slip-overs or technical debt. Recording those debts is the key factor, so there is a constant reminder that they exist; they can be captured in tracking tools such as Jira. Then the team can slowly take over those items, implementing the most efficient solution to upgrade the application so that it performs much better and the debt is retired. That is my approach: keep track of what is going wrong and then fix it one item at a time. Whether the application is SQL-driven or NoSQL-driven, I think the key resides in prioritizing those items.

    How can you leverage Azure DevOps build and release pipelines to roll back after a failure? I believe there is a mechanism in Azure DevOps to build your jobs, deploy, and then release your changes to a particular environment, and in case of any failure there should be a rollback feature available as well. I have not personally used Azure DevOps, so I am not exactly sure of the name of that particular mechanism, but there should be a way to roll back. Alternatively, there can be another pipeline that redeploys the last known-good change from the master branch if merging or releasing a new change to the environment fails.

    A SQL query is running slow. What steps would you take to diagnose and optimize its performance? If a SQL query is running slow, EXPLAIN ANALYZE would be the first step, to figure out what makes it slow. Then, if the query has a WHERE clause or other filtering, adding proper indexes would help make it more optimized. If the query is heavy on input/output operations, reads and writes can be reduced by writing stored procedures. Beyond that, reindexing your tables and indexes over a period of time helps a lot. And sharding (or table partitioning) can help in some cases: if there are billions of records and a column has only two or three distinct values, sharding organizes the tables in such a form that a request hits the right tables, which also optimizes performance.
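
A hedged illustration of the first two steps above (inspect the plan, then index the filter column), written as Java/JDBC against an assumed PostgreSQL database; the connection string, orders table, and customer_id column are all hypothetical.

```java
import java.sql.*;

public class ExplainSlowQuery {
    public static void main(String[] args) throws SQLException {
        // Connection details are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/app", "app", "secret");
             Statement st = conn.createStatement()) {

            // Step 1: ask the planner why the query is slow.
            try (ResultSet rs = st.executeQuery(
                    "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42")) {
                while (rs.next()) System.out.println(rs.getString(1));
            }

            // Step 2: if the plan shows a sequential scan on the filter
            // column, add an index and re-check the plan.
            st.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer "
                     + "ON orders (customer_id)");
        }
    }
}
```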

    In this ASP.NET C# code snippet for an API that processes a payment, the developer has put multiple operations in a single class. Explain why it does not follow best practice in terms of SOLID principles. First, the class gets the payment details, validates them, and, if they are valid, processes the payment. That is a violation of the Single Responsibility Principle: one class is doing many things. Getting the payment info, validating it, and processing it could be three different behaviors in three different classes, but here it is all done in one class. Second, there is a very generic try/catch that does not explain much: it just logs an error and rethrows the exception, without even printing the stack trace. So the second problem lies in the try/catch block, where everything is generic and the stack traces are not recorded.
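
A brief Java sketch of the refactor the answer implies: each concern in its own type, with a specific catch instead of a generic one. The original snippet was C#, and all names here are hypothetical.

```java
// Each concern lives in its own type (Single Responsibility Principle).
class ValidationException extends Exception {
    ValidationException(String msg) { super(msg); }
}

record PaymentDetails(String id, long amountCents) {}

interface PaymentRepository {           // fetching only
    PaymentDetails fetch(String paymentId);
}

class AmountValidator {                 // validation only
    void validate(PaymentDetails d) throws ValidationException {
        if (d.amountCents() <= 0)
            throw new ValidationException("non-positive amount: " + d.id());
    }
}

class PaymentGatewayClient {            // processing only
    void process(PaymentDetails d) { /* call the real gateway here */ }
}

class PaymentOrchestrator {
    private final PaymentRepository repository;
    private final AmountValidator validator = new AmountValidator();
    private final PaymentGatewayClient gateway = new PaymentGatewayClient();

    PaymentOrchestrator(PaymentRepository repository) { this.repository = repository; }

    void handle(String paymentId) {
        PaymentDetails details = repository.fetch(paymentId);
        try {
            validator.validate(details);
            gateway.process(details);
        } catch (ValidationException e) {
            // Specific, actionable handling; the stack trace is
            // preserved instead of being swallowed by a generic catch.
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        PaymentRepository repo = id -> new PaymentDetails(id, -1); // stub
        new PaymentOrchestrator(repo).handle("pay-1"); // hits validation path
    }
}
```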

    Reviewing a report, you notice that the following code snippet, which is meant to display a list of user roles, throws a NullReferenceException. Can you explain why this is happening and suggest a fix? Looking at the code: it checks that userRoles is not null and that its count is greater than zero before iterating. If it is still throwing a NullReferenceException, the getUserRoles method itself is probably the culprit, either throwing or returning null unexpectedly, so that call should be wrapped in a try/catch or made null-safe. If that method does not throw, the rest of the code looks fine: the if condition already handles the case where userRoles is null, and it also checks that the count is greater than zero before displaying. So getUserRoles is the culprit here, and it should be guarded.
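
A small Java illustration of the suggested fix, treating the possibly-null lookup result explicitly; getUserRoles here is a hypothetical stand-in for the method in the reviewed snippet.

```java
import java.util.List;
import java.util.Optional;

public class RoleDisplay {
    // Hypothetical lookup that may return null for unknown users.
    static List<String> getUserRoles(String userId) {
        return "admin".equals(userId) ? List.of("ADMIN", "AUDITOR") : null;
    }

    // Wrapping the possibly-null result makes the empty case explicit,
    // so no NullPointerException can escape this method.
    static void displayRoles(String userId) {
        List<String> roles = Optional.ofNullable(getUserRoles(userId))
                                     .orElse(List.of());
        if (roles.isEmpty()) {
            System.out.println("No roles for " + userId);
        } else {
            roles.forEach(System.out::println);
        }
    }

    public static void main(String[] args) {
        displayRoles("admin"); // prints the roles
        displayRoles("guest"); // prints "No roles for guest", no NPE
    }
}
```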

    How would you optimize a .NET application that has to handle large volumes of data with complex transactions? Large volumes of data are not a big problem today; a lot of distributed systems and data platforms are processing such large volumes of big data with complex transactions. Proper retry mechanisms should be in place, and any failures should be recorded. We should also choose a suitable data store: for large volumes, a decision should be made whether to store the data in a NoSQL system or an RDBMS, and the frequency of reads and writes should be taken into account. If both the data and the frequency are huge, distributed large-volume processing systems such as Storm or Flink can be used.
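
A minimal Java sketch of the retry mechanism mentioned above, with bounded attempts and exponential backoff; the task and limits are placeholders, not a production policy.

```java
import java.util.concurrent.Callable;

public class RetryingExecutor {
    // Runs a transactional task with bounded retries and exponential
    // backoff; a failure past the limit is recorded and rethrown.
    static <T> T withRetries(Callable<T> task, int maxAttempts) throws Exception {
        long backoffMs = 100;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e; // record the failure
                System.err.printf("attempt %d failed: %s%n", attempt, e.getMessage());
                Thread.sleep(backoffMs);
                backoffMs *= 2; // exponential backoff between attempts
            }
        }
        throw last; // propagate the final failure
    }

    public static void main(String[] args) throws Exception {
        int result = withRetries(() -> {
            if (Math.random() < 0.5) throw new IllegalStateException("transient");
            return 42;
        }, 5);
        System.out.println("committed: " + result);
    }
}
```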

    How would you use reflection for metadata extraction, and what is the potential impact on application performance? I am not sure exactly what is meant here by metadata extraction, but reflection classes are generally used to take more control over what we have at runtime. In general, though, I would not recommend relying heavily on reflection, given its performance cost.
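
For illustration, a short Java example of one plausible reading of "metadata extraction": using reflection to read a class's structure at runtime. The Customer class is hypothetical, and the comments note the performance caveat from the answer.

```java
import java.lang.reflect.Field;

public class MetadataExtractor {
    // Hypothetical domain class whose structure we inspect at runtime.
    static class Customer {
        private String name = "Ada";
        private int tier = 2;
    }

    public static void main(String[] args) throws IllegalAccessException {
        Customer c = new Customer();
        // Reflection reads field names, types, and values without
        // compile-time knowledge of the class. It bypasses access
        // checks and is slower than direct access, so use it sparingly.
        for (Field f : Customer.class.getDeclaredFields()) {
            f.setAccessible(true);
            System.out.printf("%s : %s = %s%n",
                    f.getName(), f.getType().getSimpleName(), f.get(c));
        }
    }
}
```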

    When are asynchronous programming patterns used? Asynchronous programming is used when we want to do something efficiently. Instead of actions being performed one after another, if we want parallel operations that are possible and not linked to each other, asynchronous programming can be of great help. It makes the application more efficient and faster, and we can get early failures and take the next course of action accordingly, instead of waiting for all the operations to complete and then responding with a failure message.
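
A compact Java example of the pattern described: two independent calls run concurrently with CompletableFuture and are combined at the end; the two fetch methods are stand-ins for real I/O.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class ParallelCalls {
    static int fetchFromServiceA() { return 2; } // stand-ins for I/O calls
    static int fetchFromServiceB() { return 3; }

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        // Independent operations run concurrently instead of sequentially.
        CompletableFuture<Integer> a =
                CompletableFuture.supplyAsync(ParallelCalls::fetchFromServiceA);
        CompletableFuture<Integer> b =
                CompletableFuture.supplyAsync(ParallelCalls::fetchFromServiceB);

        // Combine when both complete; if either source fails, the combined
        // future completes exceptionally, so the error surfaces at get()
        // instead of being silently lost.
        int sum = a.thenCombine(b, Integer::sum).get();
        System.out.println("sum = " + sum); // 5
    }
}
```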