Vetted Talent

Shailendra Sharma

Versatile leader with a rich background in both startup and Fortune 500 software companies. Expertise across diverse domains, including CRM, SCM, Smart City, Energy Utilities, and Generative AI.
  • Role

    Sr. Solution Architect, CTO Portfolio

  • Years of Experience

    27 years

Skillsets

  • React.js
  • Mistral
  • MLflow
  • Neo4j
  • Node.js
  • OpenStreetMap
  • OpenVINO
  • Oracle Fusion
  • Pinecone
  • PostgreSQL
  • Prometheus
  • Pub/Sub
  • Python
  • PyTorch
  • LLaMA
  • SageMaker
  • Snowflake
  • SOA
  • Spark
  • SRE
  • Tableau
  • TensorFlow
  • Terraform
  • Vector DB
  • Vertex AI
  • WSO2
  • YOLO
  • Data Analytics
  • AI
  • Airflow
  • Apache NiFi
  • AWS
  • Bedrock
  • BigQuery
  • BPM
  • Cassandra
  • CDK
  • ChatGPT
  • Claude
  • Cloud
  • Chroma
  • MLOps
  • Databricks
  • Dataproc
  • DevSecOps
  • GCP
  • GenAI
  • GIS
  • Grafana
  • JBPM
  • Kafka
  • Keras
  • Kubernetes
  • Lambda

Vetted For

8 Skills
  • Technical Manager - ML/AI Screening
  • Result: 46%
  • Skills assessed: Spark, Team Management, AWS, Docker, Kubernetes, Machine Learning, Problem Solving Attitude, Python
  • Score: 41/90

Professional Summary

27 Years
  • Oct 2024 - Present (1 yr 2 months)

    Sr. Solution Architect, CTO Portfolio

    Sasken Technologies
  • Mar 2024 - Jul 2024 (4 months)

    Director

    Spinsci Health-Tech (formerly Spinsci Technologies; since closed)
  • May 2020 - Oct 2023 (3 yr 5 months)

    DGM/Practice Head, Fusion Platform | Data & AI

    L&T Tech. (L&T Smart World)
  • Mar 2019 - Feb 2020 (11 months)

    VP of Product Development

    Columbus Global Services
  • Nov 2016 - Mar 2019 (2 yr 4 months)

    Director, Cloud Products & Data Systems

    Oracle
  • Sep 2012 - Oct 2016 (4 yr 1 month)

    Director Strategic Consulting

    IT Convergence, India
  • Mar 2012 - Aug 2012 (5 months)

    FMW Practice Leader

    Mahindra Satyam
  • Apr 2003 - Mar 2012 (8 yr 11 months)

    Sr. Development Manager, Cloud Products & Data Systems

    Oracle
  • Apr 2003 - Mar 2012 (8 yr 11 months)

    Sr. Software Development Manager

    Oracle India
  • Jun 2000 - Apr 2003 (2 yr 10 months)

    Sr. Consultant

    Techspan Inc USA and Techspan India
  • Jan 1997 - May 2000 (3 yr 4 months)

    Executive/Engineer

    Nucleus, Eurolink, IIS Infotech (Xansa Ltd)

Applications & Tools Known

  • Apache NiFi
  • Streamlit
  • Keras
  • PyTorch
  • OpenCV
  • Kafka
  • Cassandra
  • PostgreSQL
  • Grafana
  • HDFS
  • ELK
  • AWS Glue
  • Redshift
  • Android
  • React
  • Node.js
  • Jenkins
  • Ansible
  • Docker
  • Kubernetes
  • AWS
  • Google Cloud
  • Oracle Cloud
  • Python
  • Google Maps
  • GIS
  • API Gateways
  • SonarQube
  • Fortify
  • WebInspect
  • Hyperion
  • ETL
  • SOA
  • Salesforce CRM
  • Oracle Fusion HCM
  • Oracle EBS

Work History

27 Years

Sr. Solution Architect, CTO Portfolio

Sasken Technologies
Oct 2024 - Present (1 yr 2 months)
    Led customer engagement and solution strategy across platform engineering, cloud, data analytics, and AI, using AWS and GCP data and AI services. Delivered 50+ cloud-native solutions spanning cloud, data analytics, AI, and infrastructure modernization. Spearheaded development of GenAI-powered customer-support enterprise workflows using OpenAI and Google Vertex AI, achieving a 40% reduction in support response time. Enabled AI-assisted coding to boost development accuracy by 25%. Architected integrated agentic-AI automation workflows combining IoT, real-time event processing, and advanced analytics. Delivered scalable data lakehouse platforms using GCP, AWS, Snowflake, Databricks, Apache Iceberg, and Spark. Delivered a high-scale EV Charging Central Platform integrating IoT, smart meters, charging infrastructure, and electric vehicles, implementing the OCPP protocol and scalable WebSocket servers with IoT connectivity for metering, telemetry, and control.

Director

Spinsci Health-Tech (formerly Spinsci Technologies; since closed)
Mar 2024 - Jul 2024 (4 months)

DGM/Practice Head, Fusion Platform | Data & AI

L&T Tech. (L&T Smart World)
May 2020 - Oct 2023 (3 yr 5 months)
    Defined and executed the technical vision and strategy for the Fusion Product Suite, delivering transformative solutions across smart city, public safety, mobility, and utility domains using AI, geospatial intelligence, and cloud-native technologies. Led a 79+ member cross-functional team, establishing and managing OKRs, KPIs, and an AI Centre of Excellence. Delivered AI-driven solutions including video analytics, crime analytics, and smart city lakes. Oversaw product portfolio design and innovation, guiding strategic planning, execution, and alignment across business units and technology teams. Directed a mission-critical defence project focused on perimeter security, integrating robotics, computer vision, and multi-sensor fusion for autonomous threat detection, tracking, and response. Accelerated product development by 45% by standardizing reusable core platform services and automating workflows, significantly improving operational efficiency. Integrated GenAI into enterprise SaaS platforms in collaboration with product teams, increasing user engagement by 60% through personalized experiences and intelligent feature delivery. Fostered cross-functional collaboration with sales, marketing, legal, and procurement to ensure strategic alignment and efficient go-to-market execution. Established strategic partnerships with AWS and Intel, optimizing AI infrastructure performance and reducing operational costs by 30% through joint solution design and resource optimization. Managed multi-million-dollar budgets, consistently delivering high-impact projects on time and under budget.

VP of Product Development

Columbus Global Services
Mar 2019 - Feb 2020 (11 months)
    Directed product engineering and R&D, overseeing a global team of 110+ across India, Europe, and the US. As global leader, led the implementation of a 5-year strategic growth plan, aligning 20+ products on the Microsoft D365/Azure platform, with Data, AI, Mobile, and Web, to long-term business objectives, OKRs, and KPIs.

Director, Cloud Products & Data Systems

Oracle
Nov 2016 - Mar 2019 (2 yr 4 months)
    Defined and executed technical and product strategy, aligning engineering roadmaps with business objectives to drive innovation and delivery across key Oracle Cloud platforms, including MCP e-commerce, Oracle Payment Interface (OPI), and Oracle Hospitality Integration Platform (OHIP). Led cross-functional product development teams, delivering high-quality, scalable cloud products that enhanced operational efficiency and drove increased customer adoption. Mentored innovation programs within the business unit, fostering a culture of experimentation and continuous improvement. Built and scaled a global security testing team in India. Oversaw engineering, DevOps, Quality, big data analytics, and security testing functions across multiple product lines and verticals, ensuring cohesive execution and operational excellence.

Director Strategic Consulting

IT Convergence, India
Sep 2012 - Oct 2016 (4 yr 1 month)
    Part of the India leadership team overseeing IT, driving operational excellence and customer satisfaction. Served as a consulting advisor for Cummins, providing strategic consulting for assigned projects. Led consulting verticals, delivering innovative business solutions including the Order-to-Cash cycle on Oracle Fusion Cloud, AWS, Big Data, and Salesforce. Drove a 70% improvement in customer satisfaction and successfully rescued several high-priority projects, earning recognition from both customers and leadership.

FMW Practice Leader

Mahindra Satyam
Mar 2012 - Aug 2012 (5 months)

Sr. Development Manager, Cloud Products & Data Systems

Oracle
Apr 2003 - Mar 2012 (8 yr 11 months)
    Directed offshore development for Oracle Fusion Cloud Applications, specifically in SCM (Shipping, Distributed Order Orchestration, Inventory, Logistics), Fusion CRM, and E-Business Suite R12, leading the creation of BI analytics dashboards and shipping, inventory, and distributed order management solutions. Participated in the Fusion Cloud applications journey from the inception and strategy phase onward, driving the complex software development life cycle from inception to release for Fusion Cloud products.

Sr. Software Development Manager

Oracle India
Apr 2003 - Mar 2012 (8 yr 11 months)

Sr. Consultant

Techspan Inc USA and Techspan India
Jun 2000 - Apr 2003 (2 yr 10 months)

Executive/Engineer

Nucleus, Eurolink, IIS Infotech (Xansa Ltd)
Jan 1997 - May 2000 (3 yr 4 months)

Achievements

  • Appreciation Certificate for Fusion - Safe and Smart City Products, 2021, L&T Smart World
  • Outstanding Performance and Recognition Award, Feb 2008, from Steve Miranda, Exec. SVP, Oracle
  • Outstanding Performance and Recognition Award, Feb 2009, from Exec. GVP, Oracle
  • Merit Certificate as head of engineering and development center: setup of Hyderabad development centers (SWC)
  • Appreciation for delivery under complex and extreme conditions for a global expo in TX, USA
  • Merit Certificate, Executive Business Management Program, IIM Trichy, India
  • Recommended and worked as consulting advisor for a Fortune 100 client's business-unit head
  • Led the mobile development lab at the Fairfax, Virginia, USA office and won multiple projects from customers

Major Projects

6 Projects

GenAI-powered Customer Support Enterprise Workflows

Oct 2023 - Present (2 yr 1 month)
    Developed AI-assisted customer support workflows using OpenAI and Google Vertex AI, reducing support response time by 40%.

Fusion Product Suite

May 2020 - Oct 2023 (3 yr 5 months)
    Transformative solutions across smart city, public safety, mobility, and utility domains using AI, geospatial intelligence and cloud-native technologies.

EV Charging Central Platform

May 2020 - Oct 2023 (3 yr 5 months)
    Integrated IoT, Smart Meters, Charging Infrastructure, and Electric Vehicles with scalable WebSocket servers and IoT connectivity.

AI-driven Crime Analytics and Smart City Solutions

May 2020 - Oct 2023 (3 yr 5 months)
    Delivered AI-driven solutions including video analytics, crime analytics, lake management for smart cities.

AI Center of Excellence

May 2020 - Oct 2023 (3 yr 5 months)
    Led a team to deliver scalable AI-powered products across public safety, employee productivity, and smart city operations.

Oracle Fusion SCM Cloud

Apr 2003 - Mar 2012 (8 yr 11 months)
    Directed offshore development for Oracle Fusion Cloud Applications, specifically in SCM operations like Shipping and Logistics.

Education

  • Executive Business Management Program

    Indian Institute of Management, Trichy (2015)
  • Master of Computer Application

    Samrat Ashok Technological Institute, Vidisha (1996)

Certifications

  • Harvard Business School: Leading People, Delegating, Change Management, Career Management

  • TOGAF Certified Architect

  • Databricks: Delta Lake, Streaming

  • Google Cloud Platform: Big Data and Machine Learning

  • Fine-Tuning LLMs for Production, Prompt Engineering (OpenAI)

  • Kaggle: Computer Vision, GIS Analysis

  • Scaling Up (internal program, Netherlands)

AI-interview Questions & Answers

I'm a technical architect and engineering leader with a five-year background in AI and machine learning systems. I have built complex, world-class solutions for startups as well as Fortune 100 companies. I'm very good at designing complicated systems while keeping the customer's side of the picture in view. On the AI side I have covered data science, prediction, recommendation, and computer vision. Beyond that, I have dealt with high-volume, high-performance, high-speed communication systems, which typically require analytics and inferencing on the fly. I also come with rich experience in managing and leading teams. As for specific domains, I'm open to working in any domain.

To design an ETL pipeline, I first need to understand the use case. Depending on the source and target data, you use different components. For example, if you are reading from IoT devices, you may have socket-based protocols or IoT listeners written in Python that receive the data coming from those devices. You park that data in a staging area, and from the staging area a batch program picks it up and does the remaining ETL work: extracting the right information, summarizing it, and converting it to data frames. Those data frames can serve a variety of purposes, and finally the data is loaded into the target database, which could be a NoSQL or multidimensional store, where you can analyze the different aspects and use it for visualization. In addition, there are Java- and Python-based tools that let you build some of these flows with no code; which alternative you pick depends on the situation. It can get as complex as reading data from 50,000 meters for energy analytics: capturing the readings, understanding the frequency, and converting them into summaries over fixed windows, say 15-minute, 30-minute, or one-hour windows, before loading the summarized data into the target data warehouse or data lake. All of that can be done in Python.
With Python you have a number of libraries, including pandas and NumPy, for manipulating and formatting the input data. You can also filter invalid records out of the normal flow, give users the ability to correct that data later, and ingest it back into the main ETL pipeline. One more important aspect: when you are dealing with really high-volume data, plain Python may not be able to handle the flow, so you use divide and conquer, segmenting and partitioning the data, then ingesting it with distributed, Python-based frameworks. That's how I approach ETL processing with Python.
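
The windowed summarization step described above can be sketched in plain Python. This is a minimal illustration with synthetic meter readings; the function and field names are mine, not from any specific product:

```python
from collections import defaultdict
from datetime import datetime

def summarize_readings(readings, window_minutes=15):
    """Group raw (timestamp, meter_id, kwh) readings into fixed windows
    and return per-window, per-meter totals -- the summarize step of
    the meter-data ETL flow sketched above."""
    buckets = defaultdict(float)
    for ts, meter_id, kwh in readings:
        # Floor the timestamp to the start of its window.
        minute = (ts.minute // window_minutes) * window_minutes
        window_start = ts.replace(minute=minute, second=0, microsecond=0)
        buckets[(window_start, meter_id)] += kwh
    return dict(buckets)

# Three synthetic readings: two land in the 10:00 window, one in 10:15.
readings = [
    (datetime(2024, 1, 1, 10, 2), "m1", 1.5),
    (datetime(2024, 1, 1, 10, 11), "m1", 2.0),
    (datetime(2024, 1, 1, 10, 17), "m1", 0.5),
]
summary = summarize_readings(readings)
```

In a real pipeline the summaries would then be bulk-loaded into the warehouse or lake rather than kept in memory.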

You can have Docker images for the various ML systems in question, and those images are deployed as pods. Kubernetes then manages scaling up and scaling down. Kubernetes can also be used for a variety of other purposes: managing communication between different clusters, zones, or sub-clusters, and driving different kinds of rollouts when you are upgrading something or doing A/B testing. The time efficiency comes from having a ready-made container in place; it doesn't take long to spawn a new container to handle, for example, a surge in volume. On top of that, you can maintain your own repository of templates for the various Kubernetes-based deployments, and reusing those saves a lot of time when upgrading, downgrading, or rolling back. Another important aspect is that with Kubernetes, containers, and pods you can exercise the system in ways that aren't easy in a plain container environment; for example, you can run Chaos Monkey to see how your DC/DR setup would behave in a particular scenario. All of that can be done with the combination of Kubernetes and Docker (or any other) containers in a time-efficient way.
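
The scale-up/scale-down behavior mentioned above is usually declared rather than scripted. A minimal sketch of a HorizontalPodAutoscaler, assuming a Deployment named `ml-inference` (an illustrative name, as are the replica bounds):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ml-inference-hpa      # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ml-inference        # assumes a Deployment of the model container
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```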

Yes, you can utilize multiprocessing capabilities to improve performance. For model training you can do parallel training and distributed training. In that process a parent process acts as the coordinator, running all the tasks in which your ML model is trained on its particular share of the training dataset, and everything is combined at the end. You can use an off-the-shelf, Python-based library for this, for example Microsoft's DeepSpeed, which enables multi-GPU or multi-CPU training and delivers the combined result; on top of that result you can figure out how your model performed. This saves a lot of time compared with sequentially running a number of epochs to train your model: in practical terms, a model that takes a long time to train on a typical GPU can come down to a number of hours with distributed and multiprocessing capabilities. That is a serious advantage. It's the same parent-child coordinator mechanism you have already seen in Hadoop and other distributed processing engines; the same technique can be applied here to train your model in a distributed way.
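
The parent-child coordinator pattern described above can be sketched with Python's standard `multiprocessing` module. This toy example does data-parallel "training" of a one-parameter least-squares model and averages the per-shard parameters; it is a sketch of the idea, not a production trainer:

```python
from multiprocessing import Pool

def fit_slope(shard):
    """Fit y = w*x on one data shard by closed-form least squares.
    Stands in for one worker's share of a training pass."""
    num = sum(x * y for x, y in shard)
    den = sum(x * x for x, _ in shard)
    return num / den

def parallel_fit(data, workers=4):
    # Parent process coordinates: split the data into shards,
    # train in parallel, then combine (average) the parameters.
    shards = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        slopes = pool.map(fit_slope, shards)
    return sum(slopes) / len(slopes)

if __name__ == "__main__":
    # Synthetic data on the exact line y = 3x, so every shard agrees.
    data = [(float(x), 3.0 * x) for x in range(1, 101)]
    w = parallel_fit(data)
```

Libraries like DeepSpeed or PyTorch's DistributedDataParallel do the same coordination across GPUs, with gradient synchronization instead of simple parameter averaging.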

Kubernetes offers a lot of self-monitoring capabilities, which you can utilize as a starting point; it gives you basic statistics out of the box. On top of that you can collect your own metrics: for example, how many inference requests are coming in and going out; what the throughput and response time are from the moment a request arrives to when the output leaves the ML deployment, which is a very critical factor; and how many requests a particular pod is able to serve. All that data can be collected, and based on it you do fine-tuning to optimize. There are a number of optimizations possible at the model level, at the kernel level, and at the pod level; there are techniques that let you run a number of streams onto a single CPU or GPU, and you can fine-tune that at the model-serving layer. Second, you can create different groups based on different requirements. In your case you may be running a number of models, some lightweight and some heavyweight, so you create groups accordingly, manage and fine-tune each group independently, watch the input rate, response rate, and execution time for each, and then provision the underlying hardware per category.
So it requires a lot of fine-tuning at all the layers, not just Kubernetes, when you are dealing with optimization and performance.
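
The request-level metrics described above boil down to a few summary statistics per pod. A small sketch, assuming latencies have already been collected into a list (the names and the nearest-rank percentile choice are illustrative):

```python
import math
import statistics

def latency_report(samples_ms, window_s):
    """Summarize per-request inference latencies (ms) collected over a
    metrics window; p95 and throughput are the numbers you would tune
    pod counts and hardware against."""
    ordered = sorted(samples_ms)
    rank = max(1, math.ceil(0.95 * len(ordered)))  # nearest-rank p95
    return {
        "mean_ms": statistics.mean(ordered),
        "p95_ms": ordered[rank - 1],
        "throughput_rps": len(ordered) / window_s,
    }

# 100 synthetic latencies (1..100 ms) observed over a 10-second window.
report = latency_report(list(range(1, 101)), window_s=10)
```

In practice these numbers would come from Prometheus histograms scraped off the pods rather than in-process lists.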

Docker images can be version-controlled using typical version control systems; there are a number of them available, and that's the first part. The second and, in my experience, most important part is to have a deployment strategy built around a template that you fill in to capture the dependencies, starting from the Docker base images down to the software versions. Each image version is checked into your image repository, and those repository entries are linked to the different releases and versions of the software. During the DevOps cycle you then use these Docker images to build the final binary or deployable unit for the target platform. That's the technique we use for version control of Docker images.
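
One concrete way to make images traceable to releases, as described above, is to pin every version and label the image so a registry tag maps back to a release and a commit. A hedged sketch (the version numbers, label values, and paths are placeholders):

```dockerfile
# Pin the base image exactly so rebuilds are reproducible.
FROM python:3.11.9-slim

# Standard OCI labels tie the image to a release and a commit.
LABEL org.opencontainers.image.version="2.4.1" \
      org.opencontainers.image.revision="<git-sha>"

COPY requirements.txt .
# requirements.txt pins exact versions (package==x.y.z)
RUN pip install --no-cache-dir -r requirements.txt

COPY app/ /app
CMD ["python", "/app/serve.py"]
```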

Canary deployment can be done using Helm. You execute a command-line statement that describes your deployment, with options to specify the canary rollout, and that takes care of it.
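
Helm aside, the simplest Kubernetes-native canary is a second Deployment that shares the stable Service's selector, so traffic splits roughly by replica count. A sketch with illustrative names and image tags:

```yaml
# Replica-ratio canary: the Service selects `app: ml-api`, which matches
# pods from both the stable Deployment (e.g. 9 replicas) and this one,
# so roughly 10% of requests hit the new version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-api-canary
spec:
  replicas: 1
  selector:
    matchLabels: {app: ml-api, track: canary}
  template:
    metadata:
      labels: {app: ml-api, track: canary}
    spec:
      containers:
        - name: ml-api
          image: registry.example.com/ml-api:v2   # hypothetical image
```

For finer-grained traffic weights you would move the split into an ingress controller or service mesh instead of replica counts.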

Generally, for optimizing cost there are a number of things you can do with Docker, and a lot of it depends on the model. First, reduce or eliminate unnecessary installs in a typical Docker image: install only the things that are needed. Second, any base image you take from an operating system vendor carries a lot of unnecessary components by default, so optimize your image by removing them; that's another thing to look at carefully. Third, look at the different layers that make up a particular Dockerfile: you can write scripts that automate runtime behavior, for example starting certain services at the beginning, so you don't need to intervene too much afterwards. Another important aspect is that for ML workloads there are additional libraries that can be used to optimize the workload itself. For example, if you want a given image to use a combination of GPUs and CPUs, that optimization will not come directly from a typical installation; you can use tools like OpenVINO to optimize your model beforehand in the Dockerfile, so that some of the simpler, less heavyweight models can easily run on CPUs for inferencing purposes.
That way you optimize both the Dockerfile and the ML workload. Another important thing when looking at the Dockerfile is to understand the capacity allocated to a particular instance: you have to control the memory and RAM, and the total number of CPUs that a particular setup can utilize, and with the Docker configuration you can apply all these optimizations for your ML modules. I've seen throughput sometimes improve by a factor of four per CPU/GPU this way. There are other things, like parallel or distributed computing, that you can take advantage of as well.
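
The trimming described above is commonly done with a multi-stage build: install in a throwaway stage, then copy only the results into a slim runtime image. A sketch with illustrative paths and names:

```dockerfile
# Build stage: install dependencies, then discard everything else.
FROM python:3.11-slim AS build
COPY requirements.txt .
RUN pip install --prefix=/install --no-cache-dir -r requirements.txt

# Runtime stage: only the installed packages, the model, and the server.
FROM python:3.11-slim
COPY --from=build /install /usr/local
COPY model/ /model
COPY serve.py /serve.py
CMD ["python", "/serve.py"]
```

The capacity controls mentioned above are then applied at run time, e.g. `docker run --cpus=2 --memory=4g …`, or as Kubernetes resource limits.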

I haven't done this myself.