MOHAMMED AZHAR

Vetted Talent

Highly experienced front-end developer with 12 years of experience using MVC, JavaScript, Node.js, Kubernetes, Azure, Kafka, Redux, CI/CD, REST APIs, SQL Server, QA, microservices, and Cosmos DB. Skilled in developing high-performance web and mobile applications using the latest technologies.

  • Role

    Senior IoT Consultant

  • Years of Experience

    14 years

Skillsets

  • NoSQL
  • C#
  • Cosmos DB
  • CSS
  • Cypress
  • EC2
  • GitHub
  • Linux
  • MongoDB
  • MS SQL Server
  • NGINX
  • Node.js
  • AWS
  • PostgreSQL
  • Report Builder
  • REST
  • SSIS
  • SSRS
  • SVN
  • TFS
  • TypeScript
  • Visual Studio
  • WebSocket
  • Windows
  • MQTT
  • JavaScript - 10 Years
  • Azure - 6 Years
  • Kafka - 1.5 Years
  • Redux - 3 Years
  • CI/CD - 4 Years
  • .NET
  • Angular
  • Bootstrap
  • Docker
  • Entity Framework
  • HTML5 - 10 Years
  • Python - 7 Years
  • React - 8 Years
  • RxJS
  • Terraform
  • WCF
  • SQL - 6 Years
  • ADO.NET
  • AKS

Vetted For

17 Skills
  • Sr. Full Stack Developer/Architecture AI Screening - 67%
  • Skills assessed: Adaptability, Leadership Qualities, Proactive Approach, Team Collaboration, Cloud Services, Containerization Technologies, Database Systems, gRPC, Microservices, Node.js, Orchestration Tools, Pixel Perfect Coding, RESTful APIs, Testing Frameworks, Kafka, Quality Assurance, Time Management
  • Score: 101/150

Professional Summary

14 Years
  • Apr, 2021 - Present (4 yr 5 months)

    Senior IoT Consultant

    Avery Dennison India
  • Jul, 2020 - Apr, 2021 (9 months)

    Consultant

    Techneplus
  • Apr, 2017 - Jul, 2020 (3 yr 3 months)

    Technical Lead

    Expert Global IT Solutions
  • Feb, 2016 - Apr, 2017 (1 yr 2 months)

    Developer

    Capgemini India
  • Oct, 2011 - Feb, 2016 (4 yr 4 months)

    Software Engineer

    Syntel Limited

Applications & Tools Known

  • JavaScript
  • C#
  • Node.js
  • Apache Kafka
  • REST API
  • ReactJS
  • AWS (Amazon Web Services)
  • Azure
  • Visual Studio
  • SVN
  • Report Builder
  • MS SQL Server
  • Cosmos DB
  • MongoDB
  • NGINX
  • Docker
  • Terraform
  • GitHub Actions
  • Cassandra
  • PostgreSQL
  • AngularJS
  • Bootstrap
  • HTML5
  • CSS
  • ASP.NET
  • Entity Framework
  • TypeScript
  • SSRS
  • SSIS

Work History

14 Years

Senior IoT Consultant

Avery Dennison India
Apr, 2021 - Present (4 yr 5 months)
    Solution design and architecture for IoT implementation in digital manufacturing (Industry 4.0). Implemented Azure IoT solutions, including IoT Edge features. Developed and implemented API management, reviewed process standards, and wrote unit and integration tests. Built React/TypeScript templates and backend services, and led requirements gathering, documentation, and team best practices.

Consultant

Techneplus
Jul, 2020 - Apr, 2021 (9 months)
    Developed a device management system for printer solutions. Gathered requirements, implemented client needs, maintained support documentation, and provided project improvement suggestions.

Technical Lead

Expert Global IT Solutions
Apr, 2017 - Jul, 2020 (3 yr 3 months)
    Led and managed teams across multiple projects, including Siemens RFID Tracking, an OEE system for Coca-Cola, a scrubber system for Viswa Group, the Machine Talk platform, Alliance Laundry Systems IoT, and the Koverage tool for KPIT. Responsibilities included requirements gathering, application development, architecture, code review, Agile delivery, documentation, and optimization.

Developer

Capgemini India
Feb, 2016 - Apr, 2017 (1 yr 2 months)
    Developed dynamic and automated business websites and vendor portals for American Title. Participated in Agile processes: scrum meetings, planning, demos, and retrospectives. Maintained support documentation and contributed project improvement suggestions.

Software Engineer

Syntel Limited
Oct, 2011 - Feb, 2016 (4 yr 4 months)
    Worked on the Demand Management System and Workflow Development for Humana Inc. Developed web applications, console applications, web services, and forms in .NET. Involved in database activities, requirement analysis, deployment, testing, support, maintenance, and Agile delivery. Prepared unit test cases and participated in daily client calls.

Achievements

  • Employee of the Year 2017 at Expert Global IT Solutions
  • Simple Value Award by Syntel
  • Synergy Team Award for work on the Rainbow team

Major Projects

11 Projects

RBIS iCore

Jan, 2021 - Present (4 yr 8 months)
    RBIS iCore One: IoT implementation for digital manufacturing (Industry 4.0).

Siemens RFID Tracking

Dec, 2019 - Present (5 yr 9 months)
    RFID-based tracking solution for circuit breakers, drives, and sub-assemblies.

PSD Device Management

Jul, 2020 - Jan, 2021 (6 months)
    Device management system for printer solutions.

OEE System

Aug, 2019 - Dec, 2019 (4 months)
    OEE monitoring for Coca-Cola machines using the Machine Talk device and the Azure IoT Suite.

Scrubber

Nov, 2018 - Aug, 2019 (9 months)
    The Scrubber system is a digital solution for monitoring scrubber health. It regulates the SO2 and CO2 content of the gas emitted from the scrubber, displays real-time values from the different sensors, and includes the required analytics and reports.

Machine Talk

Dec, 2017 - Nov, 2018 (11 months)
    MachineTalk is an integrated M2M and IoT platform that aims to connect the ERP, machine, and automation layers of the connected manufacturing enterprise. The platform enables integration with machines, integration of real-time data with ERP, and manufacturing process orchestration through real-time tracking.

Alliance Laundry Systems Internet of Things

Aug, 2017 - Dec, 2017 (4 months)
    Alliance Laundry Systems is a manufacturing company based in the UK. This project, Alliance Laundry Systems Internet of Things, provides an interface between the ALS appliance and the cloud. It reads data from the appliance, stores it, and sends it to the cloud, where it is kept in a SQL Server database for dashboard reporting and analysis.

Koverage Tool

Apr, 2017 - Jul, 2017 (3 months)
    The Koverage tool is an application developed for KPIT to validate and assist machine learning in an advanced driver assistance software application. It comprises several modules: Annotation, Project Management, User Management, Application Workflow, and Dashboard. The Annotation module is used for marking and labeling target objects in video frames, so the software can make precise decisions when generating alert signals and warnings for efficient driver assistance. The Project Management module manages roles such as Data Engineer, Verification Engineer, Client, and Manager. The User Management module handles user creation, deletion, and updates. The Dashboard gives the overall application status for videos and workflow, and the Manager module assigns work to Data Engineers and Verification Engineers.

ION

Feb, 2016 - Apr, 2017 (1 yr 2 months)
    American Title, Inc. has operated nationwide for 20 years as a leader in real estate information services. For American Title we developed a highly dynamic, automated business website called ION, together with a vendor portal, to make the company's processes robust and paperless.

Demand Management System

Feb, 2015 - Feb, 2016 (1 yr)
    The Demand Management System is the internal HR tool of Syntel Limited that deals with the demand for resources on any project. It is a web application that uses C#, ASP.NET, and WCF to connect to other services.

Workflow Development

Dec, 2011 - Feb, 2015 (3 yr 2 months)
    The Workflow team handles all claims-related business in healthcare for the Humana client, using a tool called Macess that is used by more than 1,600 Humana associates. In this project we developed console applications, web services, scripts, and forms on the .NET Framework, and worked on an application called CORD, based on the MVC architecture to keep the UI code clean. Workflow also covers correspondence and claims processing and tracking for the business.

Education

  • B.Tech Information Technology

    Government College of Engineering Amravati (2011)
  • HSC

    Maharashtra Board (2007)
  • SSC

    Maharashtra Board (2005)

Certifications

  • React

    W3Schools (Sep, 2023)
    Credential ID : 1O44KLSINM

Interests

  • Swimming
  • Chess

AI-Interview Questions & Answers

    Could you help me understand more about your background by giving a brief introduction of yourself? Sure. I have around 12 years of experience in IT. I have been part of several companies, such as Capgemini and Expert Global, and I am currently working with Avery Dennison as a senior full stack developer. My core skills are React, Node.js, TypeScript, MongoDB, PostgreSQL, and NestJS. My background is in full stack development, my domain knowledge is in IoT, and I have worked with Azure as the cloud solution.

    How does container orchestration with Kubernetes enhance the deployment of microservices built with Node.js and gRPC? Kubernetes mainly helps with scaling and load balancing: it allows easy scaling of Node.js microservices. It also gives you a consistent environment for deployment, version control of releases with rolling updates, and good resource management. Finally, it provides isolation and security, which is critical in a microservice architecture for security and dependency management. So container orchestration helps with all of that.
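    As a hedged sketch of what the application side has to do to cooperate with that orchestration, assuming an Express-based Node.js service (the port, endpoint path, and file name are illustrative assumptions, not taken from the profile):

        // health.ts - sketch: a Node.js service that plays well with Kubernetes
        // probes and rolling updates (port and path are assumptions).
        import express from "express";

        const app = express();

        // Endpoint for Kubernetes liveness/readiness probes.
        app.get("/healthz", (_req, res) => res.status(200).send("ok"));

        const server = app.listen(8080, () => console.log("listening on 8080"));

        // Kubernetes sends SIGTERM before killing a pod during a rolling update:
        // stop accepting new connections and let in-flight requests finish.
        process.on("SIGTERM", () => {
          server.close(() => process.exit(0));
        });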

    What methodology would you use to implement automated testing in a microservice environment focused on Kafka and Node.js? We would focus mainly on unit testing; a framework like Jest helps there, and we can mock external dependencies like Kafka producers or consumers using Jest mocks. On top of that we can do integration testing, end-to-end testing, and load testing. For automation, you set up a CI/CD (continuous integration and continuous delivery) environment; that makes sure the pipeline stays stable.
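    As a hedged illustration of the Jest-mock idea (the module layout, topic name, and kafkajs usage are assumptions made for the sketch):

        // publisher.ts - thin wrapper around a kafkajs producer (assumed layout).
        import { Producer } from "kafkajs";

        export async function publishOrder(producer: Producer, orderId: string) {
          await producer.send({
            topic: "orders",
            messages: [{ key: orderId, value: JSON.stringify({ orderId }) }],
          });
        }

        // publisher.test.ts - unit test that replaces the producer with a Jest
        // mock, so no Kafka broker is needed to run it.
        import { publishOrder } from "./publisher";

        test("publishes the order to the orders topic", async () => {
          const producer = { send: jest.fn().mockResolvedValue(undefined) } as any;
          await publishOrder(producer, "42");
          expect(producer.send).toHaveBeenCalledWith(
            expect.objectContaining({ topic: "orders" })
          );
        });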

    Suppose you have a monolithic architecture running in Node.js and you are planning to break it into microservices using Kafka and gRPC. How would you plan this transition while ensuring minimum downtime? I would use the pattern of gradually migrating functionality from the monolith to microservices (the strangler-fig approach); that helps a lot. First, understand the monolith: map the functionality, dependencies, and data flow. Then identify the microservices, breaking the monolith into logical, independent, deployable services, and prioritize them by ease of extraction, business value, and the risk involved. Then set up Kafka, and the gRPC infrastructure we might also need, and do the migration iteratively. Once that is done, document everything and train the other individuals; defining a clear roadmap is also important.
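    A hedged sketch of the routing step in that gradual migration, assuming an Express edge layer with the http-proxy-middleware package (the service names and URLs are invented for illustration):

        // edge.ts - strangler-fig routing: extracted domains go to the new
        // microservice, everything else still hits the monolith (URLs assumed).
        import express from "express";
        import { createProxyMiddleware } from "http-proxy-middleware";

        const app = express();

        // "Orders" has been extracted: its traffic goes to the new service.
        app.use("/orders", createProxyMiddleware({ target: "http://orders-svc:3001" }));

        // Everything else stays on the monolith, so cutover is gradual and
        // rollback is a one-line change.
        app.use("/", createProxyMiddleware({ target: "http://monolith:3000" }));

        app.listen(8080);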

    Imagine a use case where your Node.js application is facing slow startup and response times after being containerized and deployed via Kubernetes onto the Azure platform. How would you approach troubleshooting and optimizing this? We can start with the logs, like the Kubernetes logs, looking for errors and warnings that might indicate issues during startup or at run time. We can analyze the startup process and perhaps optimize the container image. The most important thing after that is to review whether CPU and memory have been properly allocated to the container, and we can try horizontal pod autoscaling to manage load. You can also do performance profiling, using profiling tools to see what has been going wrong. If none of that works, we go through the Node.js code and try to optimize it: see how much time the database is taking, check the network latency, and check how the Azure network may be impacting things. We can also review the Kubernetes Service and Ingress, and if all of this fails, you can contact Azure support and consult the documentation.
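    A small hedged sketch of the startup-analysis step, using Node's built-in perf_hooks module to time initialization phases (the phase names and stubbed init functions are invented for the example):

        // startup-timing.ts - mark and measure startup phases to find where a
        // slow container spends its boot time.
        import { performance, PerformanceObserver } from "perf_hooks";

        // Hypothetical init steps, stubbed so the sketch runs standalone.
        const loadConfig = () => new Promise((r) => setTimeout(r, 50));
        const connectDatabase = () => new Promise((r) => setTimeout(r, 120));

        // Print each measurement as it completes.
        new PerformanceObserver((items) => {
          for (const e of items.getEntries()) {
            console.log(`${e.name}: ${e.duration.toFixed(1)} ms`);
          }
        }).observe({ entryTypes: ["measure"] });

        performance.mark("boot-start");
        await loadConfig();
        performance.mark("config-done");
        await connectDatabase();
        performance.mark("db-done");

        performance.measure("load config", "boot-start", "config-done");
        performance.measure("connect db", "config-done", "db-done");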

    When implementing a new feature, how do you ensure it integrates well with both Kafka event streaming and the existing Azure cloud infrastructure? We can do a series of things to ensure the performance and reliability of the application. First, understand the existing infrastructure: review the current Azure infrastructure and Kafka setup, and understand the existing configuration such as network setup, security rules, Kafka topics, and so on. Then clearly define the requirements, review the design, and set up the development environment. If the feature interacts with other services, we also need a feasibility test. If everything looks right, we ensure correct serialization of the Kafka messages, handle Kafka offsets and partitioning properly, and then implement the feature.

    Given a specific scenario where a fine-grained microservice architecture needs to be set up with Node.js, gRPC, and Kafka, how would you design the system for smooth intercommunication among the services and periodic data syncing with minimum latency? Assuming an event-driven architecture, the first thing we do is define the service boundaries. For smooth intercommunication you consider the communication channels and integration, and you add load balancing. A staging deployment environment also helps a lot: we deploy to staging, gather feedback, and only then move to production, so there is always an environment sitting between development and production.
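    For the gRPC intercommunication piece, a minimal hedged sketch with @grpc/grpc-js and @grpc/proto-loader (the sync.proto file, package name, and Ping method are assumptions invented for the example):

        // server.ts - minimal gRPC server sketch (proto file and names assumed).
        import * as grpc from "@grpc/grpc-js";
        import * as protoLoader from "@grpc/proto-loader";

        // sync.proto is assumed to define: package sync; service SyncService
        // with rpc Ping(PingRequest) returns (PingReply).
        const def = protoLoader.loadSync("sync.proto");
        const proto = grpc.loadPackageDefinition(def) as any;

        const server = new grpc.Server();
        server.addService(proto.sync.SyncService.service, {
          // Unary handler: echo a timestamp so peers can measure sync latency.
          ping: (_call: unknown, cb: grpc.sendUnaryData<{ at: number }>) => {
            cb(null, { at: Date.now() });
          },
        });

        server.bindAsync("0.0.0.0:50051",
          grpc.ServerCredentials.createInsecure(),
          () => server.start());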

    Lastly, what steps would you take to deploy a Node.js application on Azure utilizing both PaaS and IaaS offerings? First, prepare the Node.js application so it is ready for production, with source control in place, and set up the databases required; ensure the code is on GitHub, Bitbucket, or Azure DevOps. Then set up the Azure services: the PaaS offering is Azure App Service for hosting Node.js, and for IaaS you can consider Azure Virtual Machines, or Azure Kubernetes Service if you need a more controlled environment or specific configuration that the PaaS does not offer. You also need database, network, and security services; having a VNet configured helps a lot. Then you configure the Azure App Service and set up the IaaS environment; if you are opting for AKS, you configure Kubernetes according to your application needs. Finally, set up CI/CD pipelines, and both the PaaS and IaaS offerings will be taken care of.

    In a Kubernetes environment, how would you address pod scalability based on incoming traffic? You generally use the Horizontal Pod Autoscaler. The Horizontal Pod Autoscaler automatically scales the number of pods in a deployment or replica set based on CPU utilization or other metrics. You also configure resource requests and limits, specifying how much CPU each pod requests and what its limit is; together, that is what lets Kubernetes scale the environment.

    How would you debug performance bottlenecks in a system that uses Kafka for real-time processing and Node.js services? Generally, we first monitor and collect metrics, such as I/O operations and the CPU utilization of both the Node.js services and Kafka, to determine where the high latency or low throughput is coming from, and we check the network bandwidth to see if there is a lag. To profile the Node.js services we use built-in tools like the --inspect flag, and Clinic.js can profile the Node.js application to identify CPU-intensive operations or memory leaks. We then review our algorithms and code, and analyze Kafka producer and consumer performance, checking batch size, linger time, and compression settings. That way, with the logs and metrics we receive, you can debug the performance bottlenecks in a system that uses Kafka for real-time processing.
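    Alongside --inspect and Clinic.js, a tiny hedged sketch of in-process metrics collection with Node's built-in event-loop monitor (the sampling resolution and reporting interval are illustrative):

        // loop-lag.ts - watch event-loop delay, a common Node.js bottleneck
        // signal, using the built-in perf_hooks monitor.
        import { monitorEventLoopDelay } from "perf_hooks";

        const h = monitorEventLoopDelay({ resolution: 20 }); // sample every 20 ms
        h.enable();

        // Report p99 loop delay periodically; sustained high values mean the
        // loop itself is blocked (CPU-heavy work, sync I/O), not Kafka.
        setInterval(() => {
          console.log(`event-loop p99: ${(h.percentile(99) / 1e6).toFixed(1)} ms`);
          h.reset();
        }, 10_000);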

    How would you ensure data consistency across distributed services when applying changes in an event-driven architecture using Kafka? We can use strategies like event sourcing, Kafka's exactly-once semantics, or atomic transactions. We make sure idempotency is there, making operations idempotent, which means retrying the same operation does not change the state beyond the initial application. We can also use compensating transactions with the saga pattern, event ordering, separate read and write models, and change data capture. Good documentation and governance, plus graceful error handling, help ensure the distributed services apply changes consistently.
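    A hedged sketch of the idempotent, exactly-once producer idea using kafkajs (the broker address, topics, and transactionalId are assumptions):

        // txn-producer.ts - kafkajs transactional producer sketch: the records
        // in one transaction become visible to consumers together, or not at all.
        import { Kafka } from "kafkajs";

        const kafka = new Kafka({ brokers: ["localhost:9092"] });
        const producer = kafka.producer({
          idempotent: true,                   // broker de-duplicates retries
          transactionalId: "order-writer-1",  // required for transactions
          maxInFlightRequests: 1,
        });

        async function writeAtomically() {
          await producer.connect();
          const txn = await producer.transaction();
          try {
            await txn.send({ topic: "orders", messages: [{ value: "created" }] });
            await txn.send({ topic: "audit", messages: [{ value: "order event" }] });
            await txn.commit();               // both records or neither
          } catch (err) {
            await txn.abort();
            throw err;
          }
        }

        writeAtomically().catch(console.error);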

    Can you suggest an approach for monitoring and handling Kafka message delivery failures in a robust manner? There are some comprehensive strategies that help here. You can configure Kafka for reliability: set appropriate values for acknowledgments, retries, and retry backoff in the producer configuration, and use min.insync.replicas and the replication factor in the broker settings. Then monitor the Kafka clusters; monitoring is one strategy, along with proper logging and a proper alerting mechanism. Set up dead letter queues to capture undelivered messages; this ensures problematic messages are set aside for analysis and don't block processing. You can also use idempotency techniques so duplicate messages are not processed twice, and you should always have end-to-end tracing of the application, retry logic in the consumers, and error handling in the producers.
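    A minimal hedged sketch of the dead-letter-queue step with kafkajs (the topic names, group id, and handler are assumptions):

        // dlq-consumer.ts - park messages that fail processing on a dead letter
        // topic instead of letting them block the partition.
        import { Kafka } from "kafkajs";

        const kafka = new Kafka({ brokers: ["localhost:9092"] });
        const consumer = kafka.consumer({ groupId: "orders-app" });
        const producer = kafka.producer();

        // Hypothetical business handler; throws on bad payloads.
        async function handle(payload: string) { JSON.parse(payload); }

        async function run() {
          await Promise.all([consumer.connect(), producer.connect()]);
          await consumer.subscribe({ topics: ["orders"] });

          await consumer.run({
            eachMessage: async ({ message }) => {
              try {
                await handle(message.value?.toString() ?? "");
              } catch (err) {
                // Set the poison message aside for offline analysis.
                await producer.send({
                  topic: "orders.dlq",
                  messages: [{ value: message.value, headers: { error: String(err) } }],
                });
              }
            },
          });
        }

        run().catch(console.error);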

    In a case where a sudden surge in user requests causes your microservice architecture, orchestrated via Kubernetes, to fail, how would you debug, manage, and prevent such incidents in the future? For the immediate response and mitigation you can scale up the services, apply rate limiting, and divert traffic around the failure; that is how you debug and manage the situation. To prevent it in the future, identify the bottlenecks: monitor the Kubernetes metrics, analyze your logs, do a root cause analysis, and build a solution around it, such as load balancing, and do capacity planning to optimize for this kind of scenario. After the post-incident review you can find the solution and make sure that next time everyone knows how it was resolved; documenting your learning is important.
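    A hedged sketch of the rate-limiting mitigation, assuming an Express service with the express-rate-limit package (the window and limit values are illustrative):

        // rate-limit.ts - shed excess load at the service edge during a surge.
        import express from "express";
        import rateLimit from "express-rate-limit";

        const app = express();

        // At most 100 requests per client IP per minute; the overflow gets
        // HTTP 429 instead of queueing up and toppling the pod.
        app.use(rateLimit({ windowMs: 60_000, max: 100 }));

        app.get("/", (_req, res) => res.send("ok"));
        app.listen(8080);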

    How do you approach database schema changes in a non-disruptive way while ensuring compatibility with the microservices? First we use version control for the schema to ensure compatibility, and we make sure every change is backward compatible. We should also apply the expand/contract pattern: add a new element without removing the old one. Then you migrate the application gradually. There are database refactoring tools that help analyze the effect of this approach, and you can do a phased rollout; you should always have a fallback plan. This way you keep the change non-disruptive while ensuring compatibility with the microservices.
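    A hedged sketch of the "expand" half of that pattern as a Knex migration (the table and column names are invented; the old column would only be dropped in a later "contract" migration, once every service has moved over):

        // 20240101_expand_users.ts - expand/contract sketch with Knex: add the
        // new column alongside the old one so old and new service versions can
        // run against the same schema during the rollout.
        import type { Knex } from "knex";

        export async function up(knex: Knex): Promise<void> {
          await knex.schema.alterTable("users", (t) => {
            t.string("full_name").nullable(); // new shape; "name" stays for now
          });
          // Backfill so readers of the new column see complete data.
          await knex("users").update({ full_name: knex.ref("name") });
        }

        export async function down(knex: Knex): Promise<void> {
          await knex.schema.alterTable("users", (t) => t.dropColumn("full_name"));
        }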

    How would you recover and ensure minimum service disruption for a large-scale Kafka/Node.js system where a substantial number of Kafka brokers have failed? There are a few approaches. First, do an immediate impact assessment: quickly assess the impact of the broker failures on your system, and determine which Kafka topics and partitions are affected and how the services are affected. Then engage the disaster recovery plan; if you have one, now is the time to execute it, which may include switching to a standby Kafka cluster if one is available. Restart the failed Kafka brokers, and once they are up, run data integrity checks. You should always have replication and a failover mechanism so you can redistribute the load; that reduces the load and keeps service disruption to a minimum. There are some more things: do a root cause analysis, review and improve the disaster recovery plans, and always document the incident.

    In what ways have you used functional programming principles in Node.js to solve complex coding problems? The major things are immutable data structures and pure functions: functional programming works best with pure functions, which don't change state outside themselves, so there are fewer vulnerabilities and fewer chances of failure. You also have higher-order functions: for instance, map, filter, and reduce are integral to functional programming and are used to transform data without mutating the original array, which again is helpful. I have also used currying and partial application, and recursion; I have used recursion multiple times, and it often expresses a problem more cleanly than the traditional iterative approach.
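    A small illustration of those principles in TypeScript (the order data is made up for the example):

        // fp.ts - pure, higher-order transformations: no input is ever mutated.
        type Order = { id: string; amount: number; paid: boolean };

        const orders: Order[] = [
          { id: "a", amount: 120, paid: true },
          { id: "b", amount: 80, paid: false },
          { id: "c", amount: 200, paid: true },
        ];

        // map/filter/reduce each return new values; `orders` never changes.
        const paidTotal = orders
          .filter((o) => o.paid)
          .map((o) => o.amount)
          .reduce((sum, n) => sum + n, 0);

        console.log(paidTotal); // 320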

    How would you approach network security in an Azure cloud environment, especially when dealing with external APIs and services? We generally use Azure Virtual Networks (VNets); a VNet isolates our resources from third parties. Then you create network security groups for every VM, so that not everything is exposed: only what is needed is exposed to the external APIs. You can also set up an Application Gateway with a Web Application Firewall, which helps protect against attacks like SQL injection, and you can set up Azure Firewall. Azure API Management is another Azure service that helps you manage the external APIs by implementing authentication policies, rate limiting, and IP filtering. One more approach is to use private endpoints, and you can also use Azure's identity management services to help secure access.