
Chandan PT

Vetted Talent

Seasoned technology leader with 15+ years of experience. Proficient at solving customer problems and delivering a simple, delightful experience to the end user by making the best use of technology. Architected, designed, and shipped platforms, large-scale distributed systems, and microservices from scratch that power SaaS products.

Recognized for delivering game-changing outcomes for customers and for boundaryless leadership. Proficient in Java, JavaScript, React, cloud technologies, and monitoring tools such as Splunk.

  • Role

    Staff Software Engineer - Payments Economics

  • Years of Experience

    16 years

Skillsets

  • SaaS Products
  • JavaScript
  • E-Commerce
  • Kubernetes
  • AWS
  • Java/J2EE
  • SQL
  • Oracle
  • HTML
  • SaaS
  • Java
  • .NET
  • PostgreSQL
  • React
  • Splunk
  • Spring
  • Frontend
  • Databases
  • Monitoring tools
  • Cloud

Vetted For

15 Skills

  • Staff Software Engineer - Payments Economics (AI Screening): 57%
  • Skills assessed: Collaboration, Communication, Payments systems, service-to-service communication, Stakeholder Management, Architectural Patterns, Architecture, Coding, HLD, LLD, Problem Solving, Product Strategy, SOA, Team Handling, Technical Management
  • Score: 51/90

Professional Summary

16 Years
  • Aug 2019 - Present (6 yr 1 month)

    Staff Engineer

    Intuit
  • Aug 2008 - Aug 2019 (11 yr)

    Senior Software Engineer

    Intuit
  • Jul 2006 - Aug 2008 (2 yr 1 month)

    Software Engineer

    GE Healthcare

Applications & Tools Known

  • Java
  • J2EE
  • Spring
  • ReactJS
  • HTML
  • Kubernetes
  • AWS
  • Oracle
  • SQL
  • Splunk
  • AppDynamics
  • Amplitude
  • QuickBooks
  • ASP.NET

Work History

16 Years

Staff Engineer

Intuit
Aug 2019 - Present (6 yr 1 month)

    Driving projects end to end, from inception through completion and post-completion.

    • Requirements: Involved from the requirements phase onwards through close collaboration with project managers.
    • Designing/creating the POC/MVP wherever necessary (when requirements are not clear) and getting it aligned with project stakeholders.
    • Collaboration: Working with cross-functional teams to understand cross-cutting dependencies and alignments.
    • Architectural design and planning: Creating the design and getting it aligned and approved with architects and cross-functional leads; creating plans and sharing them with leadership for approval.
    • Technical guidance: Working with the engineering team on development and implementation, reviewing code, and ensuring the solution meets the required quality standards.
    • Production readiness: Ensuring all required milestones and requirements are met, and creating/reviewing deployment documents and dashboards (for developers and leadership) before go-live.
    • Post-production monitoring: Closely monitoring dashboards to ensure everything is under control before rolling out to 100% of customers.
    • Analysing the voice of the customer post-production to understand customer sentiment on new features.

    Mentoring engineers: Beyond technical work, mentoring junior engineers on their personal growth and career development.

    Experienced, seasoned engineer in cloud technologies.

Senior Software Engineer

Intuit
Aug 2008 - Aug 2019 (11 yr)

    Worked on Intuit Market, a web-based e-commerce application/portal for office supplies: https://intuitmarket.intuit.com/.

    Roles & Responsibilities:

    • Design, code, test and debug new features.
    • Make changes/bug fixes to the admin tool used by care agents.
    • Code review and responsibility for quality assurance.
    • Guide and mentor junior team members.
    • Decomposed an ASP.NET application into microservices.
    • Migrated the frontend from ASP.NET to Backbone.js.
    • Technology/tools used: VB, C#, ASP.NET, Backbone.js, MSSQL

Software Engineer

GE Healthcare
Jul 2006 - Aug 2008 (2 yr 1 month)

    Worked on a patient care application called Centricity.

    Roles & responsibilities:

    • Bug fixing and testing of the application
    • Coding and testing of new features
    • Reviewing test cases created by the QA team
    • Providing necessary support to the Ops team for go-live
    • Technology/tools used: VB, VB.NET, MSSQL

Achievements

  • First prize winner in a 24-hour hackathon for creating an app to track chit fund investments.
  • Received 20+ spotlight awards from various leaders and teams between 2019 and 2022.

Major Projects

3 Projects

DT User Migration

Intuit
Mar 2023 - Oct 2023 (7 months)

    Introduction: The project aims at migrating existing desktop accounting users to online SaaS-based accounting.

    Roles & responsibilities: Was inducted as the first member of the team after the project was brought back from cold storage.

    • Onboarding new team members.
    • Operational excellence improvements, a few of which are listed below, carried out to understand/measure the current state:

    - Dashboard setup: Splunk, Wavefront, Amplitude.

    - End-to-end observability for debugging purposes.

    • Fixed a tricky choking problem and received appreciation from leadership.
    • Created a 360 pipeline: a pipeline built to expose user/migration details from desktop to online, providing a customised experience for users post-migration.

    Technology/Tools Used: Java, React, Oracle, Splunk, Amplitude, Wavefront

Self Service Account Maintenance

Intuit
Jan 2022 - May 2023 (1 yr 4 months)

    Introduction: A portal for self-service account maintenance. This portal enables 2.5+ million customers across the globe to self-manage their QuickBooks account:

    https://camps.intuit.com/.

    Roles and responsibilities:

    • Was responsible for introducing new features to drive call-centre call savings.
    • Technical POC for the whole project; led a team of 6+ engineers.
    • Performed technical analysis of the existing APIs and provided input to the project manager for introducing new features.
    • Led migration of the self-service portal from the Intuit data centre to AWS.
    • Led migration from AWS to Kubernetes.
    • Migrated the frontend from Backbone to React.

    Technologies/tools used: Java, Spring 2.x, AWS, React, Backbone.js

Quickbooks Self Employed

Intuit
Nov 2022 - Apr 2023 (5 months)

    Introduction: Trips was a new feature in the online version; this project introduced the feature in QuickBooks Desktop.

    Roles & responsibilities:

    • This was an individual contributor role.
    • Identify the synergy between the feature and the corresponding APIs; analyse and identify the changes required.
    • Design, and work with cross-functional teams and stakeholders for approvals.
    • Perform the necessary code changes and get them reviewed with the cross-functional team.
    • Post-production monitoring.

    Technology/tools used: Kotlin, Java, Splunk, DynamoDB

Education

  • Master of Computer Application

    Indira Gandhi Open University (2005)

Interests

  • Long Rides
  • Badminton
  • Driving

AI-interview Questions & Answers

    Hi, my name is Chandan. I have close to 15 years of experience, and I have been working in web technologies all along, from Java to JavaScript. Most of my work experience is in the accounting domain, and I am currently working as a staff engineer. Out of personal interest, I read a lot about design, what is happening on the JavaScript front end, and what is happening on the back-end side, so I keep myself updated on all these aspects whenever I get time. From a work standpoint, most of my experience is in web technologies, and I have worked with JavaScript from plain JavaScript through to React.

    Usually, in the case of services, whenever new versions need to be handled, there will be a new API with the version number appended to it. Say I have a payments API, such as get-payments, at v0. Over time, if a major change happens to the API and a new API is needed, the newer version goes onto a new endpoint such as /payments/v1, so the clients that were querying the old API continue to work against v0, while new clients consume v1. Later, the clients using the v0 API can gradually move over, so there is no downtime: new consumers are onboarded to the newer API, the older ones keep using the old API for the time being based on need, and they slowly migrate to the newer API. So in terms of versioning, it is a separate API with the version number appended to the path, like /v1, /v2 or /v3, and that way the downtime when consuming such applications is avoided.
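
    For illustration, here is a minimal sketch of the path-based versioning described above, assuming a Spring Web MVC service; the endpoint paths, response shapes, and placeholder values are hypothetical.

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical response shapes: v1 adds a currency field that v0 clients never see.
record PaymentV0(String id, double amount) {}
record PaymentV1(String id, double amount, String currency) {}

@RestController
class PaymentController {

    // Existing clients keep calling /payments/v0 with the old contract.
    @GetMapping("/payments/v0/{id}")
    PaymentV0 getV0(@PathVariable("id") String id) {
        return new PaymentV0(id, 100.0); // placeholder lookup
    }

    // New clients consume /payments/v1; old clients migrate at their own pace.
    @GetMapping("/payments/v1/{id}")
    PaymentV1 getV1(@PathVariable("id") String id) {
        return new PaymentV1(id, 100.0, "USD"); // placeholder lookup
    }
}
```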

    In a monolithic application, all the code sits in one place as a single bundle. To break that monolith down into a microservices architecture, we first need to identify which parts can be broken into smaller services, and we extract those into microservices. From there, we put an API gateway in between. Say I have broken the monolith up into a couple of microservices: when a client calls an API endpoint, the request goes to the API gateway and gets routed either to the newer version or to the older one, based on need. To start with, we can throttle the requests going to the new APIs to see if there are any problems, and once we see the new API scaling up fine, we slowly ramp up the traffic to it. What happens here is a dual state in which some requests go to the monolith and some go to the microservices, and that duality has to be accounted for. I think that should be one strategy that works without any downtime, because we have two APIs and the traffic is throttled between the monolith and the microservices.
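
    A minimal sketch of the throttled routing described above, assuming the gateway can pick a backend per request; the service hostnames and the percentage knob are hypothetical.

```java
import java.util.concurrent.ThreadLocalRandom;

// Gateway-side routing sketch: a configurable percentage of requests goes to the
// extracted microservice, the rest to the monolith, so the split can be ramped
// up (or rolled back) without downtime.
class StranglerRouter {

    private volatile int percentToNewService; // 0..100, adjustable at runtime

    StranglerRouter(int initialPercent) {
        this.percentToNewService = initialPercent;
    }

    void setPercentToNewService(int percent) {
        this.percentToNewService = percent;
    }

    String targetFor(String requestPath) {
        int roll = ThreadLocalRandom.current().nextInt(100);
        return roll < percentToNewService
                ? "http://payments-service/" + requestPath // new microservice (hypothetical)
                : "http://monolith/" + requestPath;        // legacy monolith (hypothetical)
    }
}
```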

    For managing transaction consistency, there are a couple of patterns we can use. One is the choreography pattern, where the transactions that need to be handled are coordinated by a choreographer: we ensure that only if a request is successful do we go ahead with the next request, and otherwise we roll back the requests that were already made. There is no single database transaction here, because these are separate microservices rather than a single database, so there has to be an orchestrator that does all of this in between.
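
    A minimal orchestration-style sketch of the rollback idea described above: each step carries a compensating action, and completed steps are undone in reverse order when a later step fails. The step and class names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// One unit of work in the distributed transaction, with its undo action.
interface SagaStep {
    void execute();     // e.g. reserve inventory, charge the card
    void compensate();  // undo this step if a later one fails
}

class SagaOrchestrator {

    void run(List<SagaStep> steps) {
        Deque<SagaStep> completed = new ArrayDeque<>();
        for (SagaStep step : steps) {
            try {
                step.execute();
                completed.push(step);
            } catch (RuntimeException failure) {
                // Roll back everything that already succeeded, newest first.
                while (!completed.isEmpty()) {
                    completed.pop().compensate();
                }
                throw failure;
            }
        }
    }
}
```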

    In a distributed system, we can approximate atomic transactions by using pub/sub: whoever is listening to those events goes and makes the corresponding changes. If there is an issue, say service one needs service two to be updated as well, then when the event is picked up through the pub/sub model the first service gets updated and has to wait for the other service to get updated; if that does not happen, it has to roll back the transaction.
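
    A small sketch of that pub/sub rollback idea, under the assumption that a broker delivers events to the listener; the event types and the publish callback are hypothetical stand-ins for a real producer client.

```java
import java.util.function.Consumer;

// Hypothetical events exchanged between the two services.
record PaymentProfileUpdated(String userId, String newCardToken) {}
record PaymentProfileUpdateFailed(String userId, String reason) {}

class BillingServiceListener {

    private final Consumer<Object> publish; // stand-in for a Kafka/SNS producer client

    BillingServiceListener(Consumer<Object> publish) {
        this.publish = publish;
    }

    // Called when the broker delivers the event to this service.
    void onEvent(PaymentProfileUpdated event) {
        try {
            updateLocalBillingRecord(event.userId(), event.newCardToken());
        } catch (RuntimeException e) {
            // Tell the originating service to roll back its own update.
            publish.accept(new PaymentProfileUpdateFailed(event.userId(), e.getMessage()));
        }
    }

    private void updateLocalBillingRecord(String userId, String cardToken) {
        // Write to this service's own database (omitted in this sketch).
    }
}
```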

    In terms of service metrics, the ones I would monitor are TP90 and TP99 latency. Those are two metrics we would definitely monitor, so that we know how the API is performing for 90% or 99% of requests. The second aspect is the failure rate, in terms of the number of requests versus the number of failures, as well as the uptime of the APIs. I think that should cover it.
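
    A minimal sketch of computing the TP90/TP99 and failure-rate figures mentioned above from a window of request samples; in practice these numbers would come from a monitoring tool such as Splunk or Wavefront, and the sample data here is made up.

```java
import java.util.List;

// One observed request: its latency and whether it failed.
record RequestSample(long latencyMillis, boolean failed) {}

class ServiceMetrics {

    // Nearest-rank percentile over the sampled latencies (e.g. pct = 90 for TP90).
    static long percentile(List<RequestSample> samples, double pct) {
        long[] latencies = samples.stream()
                .mapToLong(RequestSample::latencyMillis)
                .sorted()
                .toArray();
        int index = (int) Math.ceil(pct / 100.0 * latencies.length) - 1;
        return latencies[Math.max(index, 0)];
    }

    // Failures divided by total requests in the window.
    static double failureRate(List<RequestSample> samples) {
        long failures = samples.stream().filter(RequestSample::failed).count();
        return (double) failures / samples.size();
    }

    public static void main(String[] args) {
        List<RequestSample> window = List.of(
                new RequestSample(120, false),
                new RequestSample(250, false),
                new RequestSample(900, true),
                new RequestSample(180, false));
        System.out.printf("TP90=%dms TP99=%dms failureRate=%.2f%n",
                percentile(window, 90), percentile(window, 99), failureRate(window));
    }
}
```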

    I'm not very conversant with Python, but I'll try to look at this. Since the default currency is euro, it's better to default the exchange rate to the euro rate rather than making it 0. That way, even if there is an issue with a currency other than USD or euro, it still falls back to the default euro conversion rather than returning 0.
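
    The snippet under discussion was Python and is not shown here; purely as a hypothetical illustration of the point being made, here is a Java sketch that falls back to the default (EUR) rate instead of returning 0 for an unknown currency. The rates are illustrative only.

```java
import java.util.Map;

class CurrencyConverter {

    // Illustrative rates only; a real service would load these from a rate provider.
    private static final Map<String, Double> RATES_TO_EUR =
            Map.of("EUR", 1.0, "USD", 0.92, "GBP", 1.17);

    static double toEur(double amount, String currency) {
        // getOrDefault avoids a zeroed-out result for unexpected currency codes:
        // unknown currencies fall back to the default EUR rate (1.0).
        double rate = RATES_TO_EUR.getOrDefault(currency, 1.0);
        return amount * rate;
    }
}
```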

    I think there is no default return. Okay, interesting. I'm not clear on this one.

    In terms of single points of failure for a critical payment routing service: my assumption is that the routing service is a service that routes requests to different services. If that is the case, then usually the service will be hosted on multiple servers, so even if one instance is down, other pods or nodes will still serve the request. So I would host these APIs on multiple servers behind an API gateway; even if one instance is down, routing goes to the healthy servers, and the probability of all instances being down at once is very low. That avoids the single point of failure: instead of having the service hosted on one server, I would recommend putting it on multiple nodes behind an API gateway. Thinking about other aspects: multiple regions. If the service is hosted in multiple regions and one region goes down, you can still route requests to the other regions; the probability of multiple regions going down together is very low, so you can still avoid a single point of failure in such scenarios.
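
    A minimal sketch of the failover idea above, assuming the caller (or gateway) knows a list of replica endpoints across nodes or regions; the endpoint list and the request function are hypothetical.

```java
import java.util.List;
import java.util.function.Function;

// Tries each replica in turn so one unhealthy node or region is not a single
// point of failure for the routing call.
class FailoverClient {

    private final List<String> replicaEndpoints;

    FailoverClient(List<String> replicaEndpoints) {
        this.replicaEndpoints = replicaEndpoints;
    }

    <T> T call(Function<String, T> request) {
        RuntimeException lastFailure = null;
        for (String endpoint : replicaEndpoints) {
            try {
                return request.apply(endpoint); // first healthy replica wins
            } catch (RuntimeException e) {
                lastFailure = e;                // try the next replica/region
            }
        }
        throw new IllegalStateException("all replicas failed", lastFailure);
    }
}
```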

    In terms of feature toggles, you can use a configuration file that can be tweaked without a release. If you want to ramp up traffic, or switch a feature on or off, you can use configuration files or a config service; there are also plenty of third-party tools available if the application uses one. Overall, it is configuration driven: you implement it so that the property file is checked first to see whether the feature is turned on. If the feature is on, you divert the traffic to, say, the newer API, and if it is switched off, you route it to the older API. That is basically how a feature toggle works, and it can be done through property files, so you don't need a release to tweak it; tools such as ZooKeeper and other third-party options also do this and make it easy.
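
    A minimal sketch of the property-file toggle described above; the file path, flag name, and endpoint URLs are hypothetical.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Reads the flag from a properties file on each check, so flipping it needs no redeployment.
class FeatureToggle {

    private final String propertiesPath;

    FeatureToggle(String propertiesPath) {
        this.propertiesPath = propertiesPath;
    }

    boolean isEnabled(String flagName) {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(propertiesPath)) {
            props.load(in);
        } catch (IOException e) {
            return false; // missing config means the feature stays off
        }
        return Boolean.parseBoolean(props.getProperty(flagName, "false"));
    }
}

class PaymentRouter {

    private final FeatureToggle toggle = new FeatureToggle("/etc/app/features.properties");

    String endpointFor(String path) {
        // Divert traffic to the new API only while the flag is on.
        return toggle.isEnabled("payments.newApi.enabled")
                ? "https://api.example.com/v2/" + path
                : "https://api.example.com/v1/" + path;
    }
}
```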

    For synchronising data across microservices pertaining to user payment profiles: usually, when you have a database with replicas, you read the data from those replicas, so the data coming back from the microservices is synchronised and you get the same data. Since the payment profile is the user's payment data and this is about reads, we would read from a replica rather than the primary database, so if two APIs across microservices are called, they go to the same database when there is only one database. If, on the other hand, each microservice has its own database, then orchestration can be used here: that way we can keep the data going to the two microservices, and thereby to their databases, in sync. I am assuming each microservice has its own database; if it is a single database, this approach would not apply.
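
    A minimal sketch of keeping two services' own stores in sync via a published event, under the assumption that each microservice owns its database; the event, bus, and service names are hypothetical, and an in-memory map stands in for each service's database.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical event carrying the changed payment-profile data.
record ProfileChanged(String userId, String preferredCard) {}

interface ProfileSubscriber {
    void apply(ProfileChanged change);
}

// Stand-in for a message broker: fans the change out to every subscribed service.
class ProfileEventBus {

    private final List<ProfileSubscriber> subscribers = new ArrayList<>();

    void subscribe(ProfileSubscriber s) { subscribers.add(s); }

    void publish(ProfileChanged change) {
        subscribers.forEach(s -> s.apply(change));
    }
}

// Each service keeps its own copy of the payment profile in its own store.
class PaymentsService implements ProfileSubscriber {
    final Map<String, String> ownDb = new HashMap<>();
    public void apply(ProfileChanged c) { ownDb.put(c.userId(), c.preferredCard()); }
}

class InvoicingService implements ProfileSubscriber {
    final Map<String, String> ownDb = new HashMap<>();
    public void apply(ProfileChanged c) { ownDb.put(c.userId(), c.preferredCard()); }
}
```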