Vetted Talent

Vishal Daga

  • A simple-thinking, innovative, results-oriented, and highly productive problem solver.
  • 10 years of industry experience.
  • Developed a number of software solutions from the ground up, including technology selection, development, and build cycles.
  • Specializes in quality code and standard coding practices; has published frameworks, libraries, and a paper.
  • Proven leadership skills; has successfully brought out the full potential of team members.


  • Role

    Lead Consultant

  • Years of Experience

    10 years

Skillsets

  • Leadership
  • Design patterns
  • DevOps
  • Scripting
  • Testing/coverage

Vetted For

10 Skills
  • Software Developer II - Express JS and Node JS (Onsite, Bangalore) · AI Screening
  • 50%
  • Skills assessed: CI/CD, DevOps, AWS, Docker, Express.js, MySQL, Node.js, PostgreSQL, Redis, Strong Attention to Detail
  • Score: 45/90

Professional Summary

10 Years
  • Aug 2022 - Present (3 yr 4 months)

    Lead Consultant

    Xebia IT Architects India Pvt. Ltd.
  • Jun 2018 - Oct 2021 (3 yr 4 months)

    Specialist Software Engineer

    Hewlett Packard Enterprise
  • May 2016 - Jun 2018 (2 yr 1 month)

    Senior Software Engineer

    Hotelsoft Inc.
  • Jul 2013 - Jul 2015 (2 yr)

    Software Engineer

    IBM India

Applications & Tools Known

  • Redis
  • WebSocket
  • Kafka
  • NodeJS
  • Python
  • Go
  • C/C++
  • Express.js
  • RabbitMQ
  • Vue.js
  • Angular.js
  • Webpack
  • Docker
  • Jenkins
  • Nginx
  • Kubernetes
  • Android
  • Ionic
  • React Native
  • AWS CloudFormation
  • CDK
  • Bash
  • PowerShell
  • MySQL
  • Postgres
  • MongoDB
  • Mocha
  • Chai
  • Webdriver
  • IntelliJ
  • vim

Work History

10 Years

Lead Consultant

Xebia IT Architects India Pvt. Ltd.
Aug 2022 - Present (3 yr 4 months)
    Online Banking Web Application

Specialist Software Engineer

Hewlett Packard Enterprise
Jun 2018 - Oct 2021 (3 yr 4 months)
    HPE AI analytics platform

Senior Software Engineer

Hotelsoft Inc.
May 2016 - Jun 2018 (2 yr 1 month)
    Product development

Software Engineer

IBM India
Jul 2013 - Jul 2015 (2 yr)

    Built automations to deploy to a Cloud Foundry based PaaS environment, with DevOps integration. Experience with Android application development, MEAN stack development, AngularJS, Node.js, Ember.js, JSP, web services, Java, Apache server configuration, and related backend technologies. Quick learner with excellent problem-solving and debugging skills.

Achievements

  • Published frameworks, libraries, and a paper
  • Received the Manager's Choice Award 2015 for 'Dare to Create Original Ideas'
  • Published an Android application with 50,000+ downloads worldwide
  • Winner at IIT Bombay Techfest - Appsurd event, 2012

Major Projects

1 Project

Multi-tenant SaaS solution with AWS MVP

Dec 2022 - Present (3 yr)
    Designed the application architecture for a multi-tenant SaaS application on AWS, which demanded a high standard of client data security and data isolation.

Education

  • Master of Computer Application

    National Institute of Technology Karnataka, Surathkal (2013)

Certifications

  • AWS Certified DevOps Engineer - Professional

  • AWS Certified Solutions Architect - Associate

  • AWS Certified Developer - Associate

AI Interview Questions & Answers

I was working as a Lead Consultant, in a full-stack JavaScript developer role. I am also AWS certified; I hold three AWS certifications. I have been involved in multiple projects, developing applications from the ground up all the way through to completion, so I have carried full responsibility for application performance and for the entire technology selection, from choosing the stack to guiding the team members. I have been leading teams and have been able to bring out their full potential.

Caching is one of the harder problems in computer science. It requires a lot of code changes, and it has to be done very thoughtfully. I have been involved in implementing caching, and I also devised a smart caching mechanism of my own, which I will come to later.

When it comes to caching, we first have to decide what kind of mechanism we will use. One option is lazy loading (cache-aside): we look in the cache first, and if the data is not there, we go to the database, or whatever the data source is, get the data, and then store the result in the cache so subsequent reads are served from it. The other approach is to populate the cache proactively, so that when the parameters of a query match, the result is returned immediately because it has been pre-computed beforehand.

The caching mechanism I devised is based on parameter values. We often observe that, across multiple queries with, say, three or four parameters, only one parameter changes while the other two or three stay the same, and that one parameter affects only a few statements in the computation logic of the API. In that case we do not have to re-run all of the logic; we only re-run the computation that the changed parameter actually affects, leaving aside the parts that depend on the unchanged parameters. That mechanism restricts the work to only the computation that is needed. This is what I call smart caching, which I implemented in one of my previous jobs.
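The lazy-loading (cache-aside) pattern described above can be sketched in plain JavaScript. This is a minimal illustration, not production code: a `Map` stands in for a real cache such as Redis, and `fetchFromDatabase` is a hypothetical, simulated data source.

```javascript
// Minimal cache-aside sketch. The Map stands in for a real cache (e.g. Redis);
// fetchFromDatabase is a hypothetical data source used only for illustration.
const cache = new Map();

let dbCalls = 0; // count trips to the "database" to demonstrate cache hits
async function fetchFromDatabase(key) {
  dbCalls += 1;
  return `value-for-${key}`;
}

async function getWithCacheAside(key) {
  if (cache.has(key)) {
    return cache.get(key);                 // cache hit: no database trip
  }
  const value = await fetchFromDatabase(key); // cache miss: go to the source
  cache.set(key, value);                   // store the result for later reads
  return value;
}
```

The second lookup for the same key never touches the data source, which is the whole point of the pattern.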

Say two processes write to the same database, the same Postgres table. In that case we make sure there is some kind of mutex mechanism that guarantees no two write operations happen on the same region of the table, or access the same table, at the same time. Whichever process needs to modify the table acquires the mutex, performs the operation, and then releases the mutex. That is one mechanism. Databases like Postgres also provide locking mechanisms of their own that can be used to prevent multiple processes from modifying the same region of the database at the same time. In any scenario like this, such as a race condition, there are other ways to handle it as well: we can wrap the work in a single transaction, so that the entire query runs in one go, no other execution of the same query interleaves with it, and if the query fails, the entire set of changes is rolled back.
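The mutex idea can be sketched in-process in JavaScript (inside Postgres itself this would more typically be an advisory lock or `SELECT ... FOR UPDATE` inside a transaction). This is an illustrative sketch, not a production lock:

```javascript
// Minimal promise-based mutex: each caller waits for the previous caller to
// finish before touching the shared resource, serializing the critical section.
class Mutex {
  constructor() {
    this.queue = Promise.resolve();
  }
  // Run fn exclusively; callers are chained one after another.
  runExclusive(fn) {
    const result = this.queue.then(() => fn());
    this.queue = result.catch(() => {}); // keep the chain alive after errors
    return result;
  }
}
```

Without the mutex, two concurrent read-modify-write operations could both read the old value and one update would be lost; with it, the updates are applied one at a time.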

Docker is useful for Node.js because with Docker we can completely package our application with all the required runtime and environmental dependencies, create an image, and then deploy that image, ensuring that we are not bound to a specific platform. We do not have to configure much by hand, and we can reliably deploy the application to run in a predefined environment, so that the application runs in testing and development the same way it runs when deployed to an external server or to production: the consistency of the environment is maintained. When it comes to distributed deployment across many instances, we also gain a lot of benefit by using Docker to create our application instances. Apart from that, Node.js is by default so-called single-threaded; before containers, the usual alternative was PM2 to make effective use of the compute power of the host machine. With Docker we can easily load-balance as well, by spawning a number of Node instances and ensuring that all of the Node application instances run in the same kind of environment.
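A minimal Dockerfile for a Node.js service along the lines described might look like this; the base image tag, port, and entry file name are illustrative assumptions, not taken from any real project here:

```dockerfile
# Pin the runtime so every environment runs the same Node version.
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so this layer is cached across code-only changes.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source and expose the (assumed) listening port.
COPY . .
EXPOSE 3000

CMD ["node", "server.js"]
```

Building this image once and running it everywhere is what gives the dev/test/prod consistency mentioned above.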

I can describe the strategy from the point of view of a cloud environment, but it applies on premises as well. The idea is that we can replicate within the same availability zone as well as to locations separated by a certain distance, so that if a disaster hits one of our server locations, or the server host itself, the impact is minimal because the replicas are far enough apart. That is how we would replicate Postgres geographically.

When it comes to the replication mode, we can use synchronous replication, which keeps all replicated instances in the same state, an active-active setup with consistent reads and writes, but it makes things slower because each write has to wait for the replication to complete before it is marked as committed. Then there is asynchronous replication. We can also have an active-passive configuration, where we maintain a read replica of our primary instance. With a read replica we get two benefits: we can route read-intensive workloads, the plain SELECT queries, to the replica, and we can also treat the read replica as our backup database in case of a disaster. So we can have active-passive as well as active-active configurations, with synchronous or asynchronous replication.
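In Postgres terms, the synchronous-versus-asynchronous choice above comes down to a few settings on the primary. A sketch of the relevant `postgresql.conf` lines follows; the standby name and values are illustrative assumptions:

```
# postgresql.conf on the primary (illustrative values)
wal_level = replica              # write enough WAL for streaming replication
max_wal_senders = 5              # allow standbys to connect and stream

# An empty list means asynchronous replication (the default: faster commits).
# Naming a standby makes commits wait for it: synchronous, slower but consistent.
synchronous_standby_names = 'standby1'
synchronous_commit = on
```

Leaving `synchronous_standby_names` empty gives the asynchronous, read-replica style setup; naming standbys gives the active-active-leaning synchronous behavior described above.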

I have not used redis.print and do not know much about it, as you can see here. I know that we can set a record in Redis with a key and a value, but then there is a third argument, redis.print, and I am not sure what redis.print actually does; I have not used it so far. Since I do not know it, I am just assuming that redis.print logs the operation's result, but that is a rough guess. The rest looks okay: we are creating the Redis client and then connecting to it. However, with redis.createClient we are not passing any credentials, so it does not say which instance we are going to connect to; that call on line number 2 does not look correct. Since I am not aware of what redis.print is, I cannot fully conclude what the exact issue is; the one issue I can see is that we are not passing any connection details.
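For reference, `redis.print` in the classic node_redis client is just a convenience callback that logs the reply or the error. It is roughly equivalent to the hand-rolled version below (a sketch of the behavior, not the library's exact source):

```javascript
// Roughly what node_redis's redis.print convenience callback does:
// log the error if there is one, otherwise log the reply.
function print(err, reply) {
  if (err) {
    console.log('Error: ' + err);
  } else {
    console.log('Reply: ' + reply);
  }
}
```

So `client.set('key', 'value', redis.print)` simply prints `Reply: OK` on success; it is a debugging aid, not part of the command itself.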

request.params.i should have been request.params.id. We can give the route parameter any name; id is just an arbitrary name, but we have to use that same name when we read the value back from params. So the issue is that, as I said, id is the arbitrary name given to the parameter, and we need to use the same name when retrieving the value from request.params: it should have been request.params.id, not request.params.i.
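The fix described above in Express terms: a route declared as `/users/:id` names the parameter `id`, so the handler must read `req.params.id`. A self-contained sketch follows, with the handler invoked directly against a mock request object rather than through a running server:

```javascript
// Express-style handler: the route pattern '/users/:id' names the parameter
// 'id', so the value must be read back as req.params.id (not req.params.i).
function getUserHandler(req, res) {
  const id = req.params.id; // matches the ':id' segment of the route
  res.send(`user ${id}`);
}

// In a real app this would be registered as: app.get('/users/:id', getUserHandler)
```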

I would profile the query. There are mechanisms within the SQL engine itself that help us profile a query: the query plan shows how the query is being planned, and from it we can see the inefficiencies in the plan and in how the query is structured. So the first step is looking at the query plan. The other diagnosis I can think of is that the data structure of the table also plays a role in query performance. A very simple example: if we want to store a gender that is just 'male' or 'female', we should use something like char(2); using a varchar of size 10 is unnecessary. How we structure the table also helps performance. But the first thing I look into is the query plan, and from there I check whether we are doing any redundant computation in the query and how we can avoid it. Beyond that, slowness mostly comes from poorly structured queries, because retrieving data can be done in multiple ways: either you scan the entire table and then compute in code, or you filter within the database itself. Ideally, the computation should be handed over to the database rather than done in application code; most of the computation should be done by the database itself, and that way overall performance improves.
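Inspecting the query plan, as described above, looks like this in Postgres; the table and column names are illustrative assumptions:

```sql
-- Ask Postgres to execute the query and report the actual plan and timings.
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.customer_id, SUM(o.total) AS spend
FROM orders o
WHERE o.created_at >= '2024-01-01'
GROUP BY o.customer_id;

-- If the output shows a "Seq Scan on orders" over a large table for this
-- filter, an index on the filtered column is the usual first remedy:
CREATE INDEX idx_orders_created_at ON orders (created_at);
```

`EXPLAIN (ANALYZE)` runs the query for real, so on a production system it is safest on read-only statements or inside a transaction that is rolled back.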

In the CI/CD pipeline, whenever code is committed or a branch is merged into a predefined branch, or whatever the deployment strategy is, per commit or per branch, and there is a need for a build and deployment, we build the artifact. With Node and Docker, we create the image and push it to the Docker registry, whether private, public, or wherever we host our images. Then we deploy that image to the deployment group, or to the servers where the application is going to run. After that, we monitor whether the deployment was successful; there are plenty of tools out there to manage automatic rollbacks, or even to do blue-green deployments, so that we can do some testing before actually moving to production, which is not usually needed during development.

A few other points: for production deployments we need some kind of manual approval, and we should deploy to a staging environment and let testers do their testing before the final deployment. We also need to ensure that the exact build artifact we tested and marked successful is the only one deployed. It should not happen that, while testing was underway, another build artifact got created later and we mistakenly deployed the untested artifact; that kind of safeguard should be considered very seriously.
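The flow above can be sketched as a declarative Jenkins pipeline (Jenkins and Docker both appear in the tools list). This is an illustrative sketch: the registry host, image name, and `deploy.sh` script are hypothetical, and tagging the image with the commit hash is what guarantees the tested artifact is the one promoted:

```groovy
// Illustrative declarative Jenkinsfile for the flow described: build the
// Docker image, push it, deploy to staging, gate production behind a manual
// approval. Registry, image name, and deploy.sh are assumptions.
pipeline {
  agent any
  stages {
    stage('Build image') {
      steps {
        // Tag with the commit hash so the tested artifact is the deployed one.
        sh 'docker build -t registry.example.com/myapp:$GIT_COMMIT .'
      }
    }
    stage('Push') {
      steps { sh 'docker push registry.example.com/myapp:$GIT_COMMIT' }
    }
    stage('Deploy to staging') {
      steps { sh './deploy.sh staging $GIT_COMMIT' } // hypothetical script
    }
    stage('Approve production') {
      steps { input message: 'Promote this exact image to production?' }
    }
    stage('Deploy to production') {
      steps { sh './deploy.sh production $GIT_COMMIT' }
    }
  }
}
```

Because every stage references the same `$GIT_COMMIT` tag, a newer untested build can never slip into the production deploy step.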

For this API there are two parts: one is a compute-intensive calculation, and then there are some database operations that are not at millisecond latency; they take a few seconds. We also need to establish a new connection to the database and then close that connection. With Redis, we can take the final result of the computation and put it into Redis, with the key set as a function of the parameters that are passed to the API. So the key is a function of the parameters, and the value is the final computed result that was sent as the response to the user. From then on, whenever the same API request is made with the same parameters, we can get the result right away without going into the Node.js calculation logic, or connecting to the database, fetching the data, and doing all the computation again, so the response time improves drastically. This does require a lot of code changes, that is the catch, but that is how we can improve the performance of an application significantly, using a cache between the back end and the database.