Vetted Talent

Lamtei M Wahlang

I'm a backend developer with 8.5+ years of experience across different domains, including Payments (Compliance), CRM, and Fitness (IM feature).

At FITTR, a leading fitness app, I develop backend services and APIs using Erlang, Java, Spring Framework and MySQL to create robust, scalable, and concurrent solutions that help users achieve their fitness goals.

At PayPal, I contributed to the development and testing of compliance services, with a focus on ensuring customer and merchant compliance with the US IRS, using Java, Spring, Oracle DB and IBM ODM.

At Kapture CRM, I designed and implemented features that integrated with their ticketing system to help clients improve customer satisfaction; this involved both backend and web development using Java, Bootstrap, Backbone.js and Highcharts JS.

I am passionate about learning new technologies and solving challenging problems that make a positive business impact.

  • Role

    Erlang Developer (SDE 3)

  • Years of Experience

    10 years

Skillsets

  • RabbitMQ
  • Git - 7 Years
  • TypeScript - 1 Year
  • APIs - 6 Years
  • Python - 2 Years
  • JavaScript - 5 Years
  • RESTful API - 6 Years
  • NoSQL - 1 Year
  • AWS - 3 Years
  • Spring Boot - 5 Years
  • Spring Framework
  • Redis
  • Java - 5 Years
  • Oracle DB
  • Distributed Systems - 2 Years
  • Erlang
  • SQL - 6 Years
  • Backend - 7 Years
  • Docker - 5 Years
  • MySQL - 6 Years

Vetted For

10 Skills
  • Lead Backend Developer (AI Screening)
  • Result: 76%
  • Skills assessed: Accessibility, HLD, LLD, Performance Optimization, Responsive Design, Hibernate, Microservices, Spring Boot, Git, Java
  • Score: 68/90

Professional Summary

10 Years
  • Dec, 2021 - Present (3 yr 10 months)

    Senior Software Engineer

    FITTR (Squats Fitness Pvt Ltd)
  • Sep, 2018 - Nov, 2021 (3 yr 2 months)

    Software Engineer 2

    PayPal India Pvt Ltd
  • Jun, 2015 - Sep, 2018 (3 yr 3 months)

    Software Developer

    Kapture CX
  • Jan, 2015 - May, 2015 (4 months)

    Java Developer (Internship)

    Kapture

Applications & Tools Known

  • MySQL
  • jQuery
  • Oracle
  • Git
  • Visual Studio Code
  • Confluence
  • AWS Cloud
  • JavaScript
  • Brand24
  • Docker
  • AWS
  • TypeScript
  • Python
  • Erlang

Work History

10 Years

Senior Software Engineer

FITTR (Squats Fitness Pvt Ltd)
Dec, 2021 - Present (3 yr 10 months)
    • Refactored the code for progress APIs written in NestJS to address design issues and improve performance and scalability.
    • Led the redesign and revamp of the Fittr messaging module, addressing design issues, refactoring, and adding new modules to improve scalability and performance. This was done by migrating dependent services to managed services, adding clustering support and distributed caching, and optimising queries to the backend services, which use Erlang, Java Spring Boot and RabbitMQ.
    • Contributed to the revamp of the Fittr Training Tool, collaborating with the design and app teams to improve the backend API for the tool.

Software Engineer 2

PayPal India Pvt Ltd
Sep, 2018 - Nov, 2021 (3 yr 2 months)
    • At PayPal, I played a key role in improving 1099K reporting, a time-critical process for IRS compliance, by identifying and fixing gaps in the reporting system to ensure reports are uploaded on time.
    • I contributed to the development of system enhancements for the CP2100 flow impacting merchants, which helped reduce backup withholding and improve compliance for PayPal business accounts.
    • I was also the sole contributor on a system upgrade impacting multiple compliance services, which reduced API calls and lowered transactions per minute, improving system performance.

Software Developer

Kapture CX
Jun, 2015 - Sep, 2018 (3 yr 3 months)
    1. Developed a mailing system, used mainly in the CRM's ticketing platform, using Java, Postfix and Dovecot.
    2. Integrated an in-house chat system based on XMPP and collaborated on the development of the Chat SDK (Web, Android and iOS), using Java and Erlang for the backend and Backbone.js and Strophe.js for the web client/SDK.
    3. Integrated APIs used for the ticketing system and lead generation in the platform.

Java Developer (Internship)

Kapture
Jan, 2015 - May, 2015 (4 months)
    Developed a robust Reporting Engine, a template-based, generic reporting tool that combines available metrics to create charts and tables saved as templates. Clients use the templates for automated ticketing and lead-generation reports.

Achievements

  • Collaborated in redesign and revamping the Fittr chat module to address design issues and add features.
  • Worked on the back-end API for the Fittr Training Tool and admin dashboard.
  • Improved 1099K reporting and system enhancement for CP2100 flow at PayPal.
  • Developed system upgrade at PayPal that reduced API calls.
  • Contributed to GCCVP craftsmanship to improve software quality at PayPal.
  • Collaborated in the development and integration of the Chat SDK for Kapture CRM.
  • Played a core role in refactoring progress APIs, restructuring data and schema to address scalability and performance issues across multiple services using NestJS and TypeScript.
  • Spearheaded the revamp of the in-house chat services, implementing caching, enhanced monitoring, and re-architecting the system for scalability, resulting in significantly increased user engagement and system performance.
  • Upgraded and migrated legacy upstream dependencies, including Redis and RabbitMQ, for improved reliability.
  • Collaborated in chat integration with various app features, developing automated push notification system for timely updates and seamlessly integrating OpenAI capabilities.
  • Contributed significantly to a system upgrade affecting multiple compliance services at PayPal, optimizing API calls and reducing latency.
  • Led collaboration to revamp the 1099K reporting system, significantly reducing 1099K corrections and enhancing tax reporting accuracy.
  • Played a significant role in critical enhancements to the CP2100 flow, substantially reducing backup withholding and IRS B Notices, improving tax compliance for PayPal US merchants.
  • Developed and launched a Web Chat SDK using Strophe JS and Backbone JS, enhancing customer support capabilities.
  • Developed a robust Reporting Engine, a template-based, generic reporting tool for creating charts and tables for automated ticketing and lead generation reports.

Testimonial

PayPal

Priya Srivastava

Lamtei has always proved himself best in terms of technical and professional communication skills. He is creative in terms of finding out the solution to a problem and has the ability to impress people with his outstanding analytical skills.

Major Projects

4 Projects

Chat Revamp

Fittr
Jun, 2023 - Present (2 yr 4 months)

    Collaborated with cross-functional teams to revamp Fittr's in-house chat services using Java Spring and Erlang, which included:

    • Improving caching with Redis
    • Refactoring and migrating all dependencies to managed services for enhanced security
    • Rearchitecting the system to support horizontal scaling
    • Refactoring the group chat module to scale to thousands of groups
    • Enhancing monitoring by adding metrics
    • Adding new features and fixing issues and bugs

Report Engine

    Design and implementation of the Report Engine, a template-based, generic reporting tool.

1099K and CP2100 enhancements

PayPal
Jan, 2020 - Nov, 2021 (1 yr 10 months)
    • Worked across teams on enhancements to CP2100 and 1099K reporting, reducing customer friction and improving compliance of US PayPal merchants. These enhancements span several compliance services in PayPal as well as dependencies on other teams, including the data, user and payments teams.
    • Demonstrated the ability to deliver by collaborating with the business and data teams to drive the 1099K and CP2100 processes to completion, ensuring regulatory compliance for US PayPal accounts on time.

Reporting Engine

Kapture CX
Jan, 2015 - May, 2015 (4 months)
    • Contributed to the design and implementation of the Reporting Engine at Kapture, a template-based versatile reporting tool utilising Java, Bootstrap, and Highcharts JS.
    • It dynamically generates report templates using data categorised as metrics customised according to client needs.
    • The templates are used by clients for automating ticketing and lead generation reports.

Education

  • Master of Computer Applications

    Indian Institute of Technology, Roorkee (2015)
  • BSc (Honours) in Computer Science

    North Eastern Hill University (2012)

Certifications

  • Hadoop Platform and Application Framework

    Coursera (Feb, 2019)
    Credential ID : 8QW43JSNHHEN

AI Interview Questions & Answers

Sure. I'm a backend developer with about 7 to 8 years of experience. I work mostly on backend development, with a little frontend knowledge as well. I mainly build Spring Java services, but lately I've also been using Node.js for some of the services we're working on, along with an Erlang application and a little Python. I've worked in different domains, from CRM to payment compliance, and currently I'm in the fitness domain. That's a brief introduction about myself.

To handle schema migrations without downtime, the best approach is to migrate the data incrementally. Start with historical data that users are not currently interacting with, say everything older than three months or a year, since that bulk of data is unlikely to be updated while we migrate. Rather than updating the existing DB in place, create new schemas and tables alongside the old ones. Then work forward in shrinking time windows: first migrate everything older than a month, then the last month down to the last seven days, and so on, until we reach the point where we switch over to the new schema, database, or tables. During the cutover there will still be a small delta of rows updated on the old schema by users or customers, so a final catch-up pass is needed for those changes; because the delta is small, it takes very little time and causes no downtime.
The main thing to be careful about is how large that delta of post-switch updates is allowed to get. This is the best approach I can think of, and it is also what we have used in migrations I have been part of.
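The windowed, oldest-first backfill described above can be sketched as a small planning function. This is an illustrative plain-Java sketch, not production migration code; the cutoffs (90, 30, and 7 days) and the `MigrationWindows` name are assumptions chosen for the example.

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the incremental backfill described above: migrate the oldest,
 * coldest data first, then work forward in shrinking time windows until only
 * a small "delta" of recent rows remains to copy at cutover time.
 */
public class MigrationWindows {

    /** A half-open time range [from, to) of rows to copy in one pass. */
    public record Window(Instant from, Instant to) {}

    /** Builds migration passes from a list of cutoffs, oldest-first. */
    public static List<Window> plan(Instant epoch, Instant now, List<Long> cutoffDays) {
        List<Window> passes = new ArrayList<>();
        Instant from = epoch;
        for (long days : cutoffDays) {            // e.g. 90, 30, 7 days ago
            Instant to = now.minus(days, ChronoUnit.DAYS);
            passes.add(new Window(from, to));     // bulk, rarely-updated data
            from = to;
        }
        passes.add(new Window(from, now));        // final small delta at cutover
        return passes;
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2025-01-01T00:00:00Z");
        Instant epoch = Instant.parse("2015-01-01T00:00:00Z");
        plan(epoch, now, List.of(90L, 30L, 7L)).forEach(System.out::println);
    }
}
```

Each pass copies a window whose data is progressively "hotter"; only the last window needs the careful catch-up described above.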

It depends on how you want to run the application. Take the example of a microservice that serves an API and also runs a background or batch task. You could split it into two applications, a Spring Boot microservice and a Spring Batch application. But if both use cases must live in one application, you can take a Spring Boot application and use an executor service to process data in the background while the main application keeps serving requests. Since both run in real time, they would share common entities and models, so whatever is updated in real time by the main application is also visible to the batch work, and vice versa. The main challenge is scaling: you have to decide how many threads each side gets, depending on the batch use case.
Say the batch work is file processing while the main API accepts data to be used with those files: you would want threads processing files in parallel while people upload them through the API, with the executor service running alongside. Size the thread pool to the use case, increasing threads when the anticipated load is high and scaling the application accordingly.
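The upload-plus-background-worker idea above can be sketched in plain Java. This is a minimal illustration, not a real Spring Boot service; in practice the pool would be a configured TaskExecutor, and the `UploadProcessor` name and in-memory queue are assumptions made for the example.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * The "API" thread enqueues uploads and returns immediately, while a fixed
 * pool of background workers drains the queue in parallel; both sides share
 * the same in-memory model, mirroring the shared-entities point above.
 */
public class UploadProcessor {
    private final BlockingQueue<String> uploads = new LinkedBlockingQueue<>();
    private final ConcurrentMap<String, String> results = new ConcurrentHashMap<>();
    private final ExecutorService workers;

    public UploadProcessor(int threads) {
        workers = Executors.newFixedThreadPool(threads); // sized to expected load
        for (int i = 0; i < threads; i++) {
            workers.submit(this::drain);
        }
    }

    /** Called by the "API": enqueue a file and return immediately. */
    public void accept(String fileName) {
        uploads.add(fileName);
    }

    private void drain() {
        try {
            while (true) {
                String file = uploads.take();            // blocks until work arrives
                results.put(file, "processed:" + file);  // stand-in for real work
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();          // clean shutdown
        }
    }

    public String resultOf(String fileName) { return results.get(fileName); }

    public void shutdown() { workers.shutdownNow(); }
}
```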

The first thing is to understand the requirements and have a proper product requirements document (PRD) for how we want to integrate the payment service. The key is that the requirement is clearly defined: what the endpoints are, what type of payment is needed, and so on. Once the requirements are clear, start the integration. I'll assume REST endpoints here, since most payment integrations use REST APIs, though it could be something else. Make sure the endpoints are clearly covered and that you have done proper testing on your side against the new requirements from the PRD. After the initial development cycle, test the payment flow against the provider's staging environment, making sure the transactions behave according to the requirements you have given, and track metrics such as how many transactions succeed and how many fail.
Before going to production, try a sandbox test with a few customers or a few transactions first, then slowly ramp the traffic, say from 20% to 30%, while monitoring transaction and failure counts to make sure the application integrated with the payment service has no issues. That would be the outline of the process for integrating with an external payment service provider.

There are a couple of ways. One is to define your own session management using some sort of authorization key passed to the stateless microservice, which the service resolves into the relevant context; in Spring, for example, you can define session management in the web application context, tied to a specific key with a defined session lifecycle. In most cases, though, we use the authentication mechanism for session management, for example a JWT. The client passes the JWT authorization token with every request, so the requests remain stateless, while the presence and validity of the token establishes the session for the particular user it belongs to. When the user signs out of the application, the existing token is discarded and a new one is issued on the next sign-in, which ends the session. So the application or UI calling your backend always passes its JWT to every API it calls, maintaining the session from the client side,
while the backend is able to resolve the token and understand which user or customer the particular session belongs to.
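The stateless-token idea above can be illustrated with a deliberately simplified sketch. A real service would use a standard JWT library; this just shows the principle that the server signs the user id with an HMAC secret and later verifies it without storing any session state. The `StatelessToken` class and its token format are assumptions made for the example.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

/** Simplified stateless token: base64(userId) + "." + base64(hmac(userId)). */
public class StatelessToken {
    private final byte[] secret;

    public StatelessToken(String secret) {
        this.secret = secret.getBytes(StandardCharsets.UTF_8);
    }

    /** Issue a signed token for a user; the server keeps no session record. */
    public String issue(String userId) {
        String payload = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(userId.getBytes(StandardCharsets.UTF_8));
        return payload + "." + sign(payload);
    }

    /** Verify the signature and return the user id, or null if invalid. */
    public String verify(String token) {
        int dot = token.lastIndexOf('.');
        if (dot < 0) return null;
        String payload = token.substring(0, dot);
        if (!sign(payload).equals(token.substring(dot + 1))) return null;
        return new String(Base64.getUrlDecoder().decode(payload), StandardCharsets.UTF_8);
    }

    private String sign(String payload) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            return Base64.getUrlEncoder().withoutPadding()
                    .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Each request carries the token; verification alone establishes "the session", which is exactly what makes the microservice stateless.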

A couple of things. First, profile the Java application or microservice with a profiler and try to isolate the particular use case where you suspect the memory leak. The second way is to take a heap or thread dump for the period where you see the leak, if that is possible in production. If not, the third way is to use an actuator: in Spring applications, Spring Boot Actuator can expose endpoints that give you metrics for your application, which you then watch through monitoring dashboards like Grafana backed by Prometheus, essentially exposing telemetry data. At the point where the leak shows up, you can also use tracing, for example OpenTelemetry together with Micrometer, to trace the part of the application that was running while the leak was happening. By monitoring the memory metrics and tracing down what was executing at the same time, you can narrow down the part of the code causing the leak, then diagnose and fix it in that particular class or use case.
That third approach is what I would use in production. For development or staging, I would go with profiling, since if you can reproduce the leak you can learn a lot from the profile.
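As a tiny illustration of the "expose memory telemetry" point above, the JDK's own MemoryMXBean is what such a gauge ultimately reads. In a real Spring Boot service, Actuator and Micrometer publish these numbers to Prometheus/Grafana automatically; the `HeapGauge` name here is just for the sketch.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

/** Reads current used heap, as a monitoring dashboard's memory gauge would. */
public class HeapGauge {

    public static long usedHeapBytes() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        return memory.getHeapMemoryUsage().getUsed();
    }

    public static void main(String[] args) {
        System.out.println("used heap: " + usedHeapBytes() + " bytes");
    }
}
```

Plotting this value over time is what makes a slow leak visible: used heap keeps climbing after each GC instead of returning to a baseline.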

Here we have a one-to-many relation. Ideally the Author entity should also declare the field mapping for its books; the many-to-one side is mentioned on the Book entity, but you would also want the corresponding mapping on the Author entity. Another problem is that with a one-to-many relation you should be careful about eagerly loading the dependency for the many books of the same author, especially since the books could have further mappings of their own. So the other fix is to change the fetch type from EAGER to LAZY, so the join is only executed when it is actually needed, and we do not cause any unnecessary loads from the DB.
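The benefit of switching from EAGER to LAZY can be shown with a conceptual plain-Java sketch: the expensive load runs only on first access, and only once. JPA's FetchType.LAZY works in a similar spirit (via proxies), but this is not JPA code; `LazyRelation` and its load counter are illustrative assumptions.

```java
import java.util.function.Supplier;

/** A memoized, on-demand relation: the loader runs only when accessed. */
public class LazyRelation<T> {
    private final Supplier<T> loader;
    private T value;
    private boolean loaded;
    private int loadCount;                     // visible for the illustration

    public LazyRelation(Supplier<T> loader) {
        this.loader = loader;
    }

    /** First access triggers the load; later accesses reuse the result. */
    public synchronized T get() {
        if (!loaded) {
            value = loader.get();              // e.g. the DB join, run on demand
            loaded = true;
            loadCount++;
        }
        return value;
    }

    public synchronized int loadCount() { return loadCount; }
}
```

With an eager relation, the equivalent load would happen whenever a Book is fetched, whether or not the author data is used.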

The problem here is with restTemplate.getForEntity, pointed at a URL from which we expect data. The response data could be anything, and here it is mapped to String.class; if it does not map cleanly to a String, or if the body is empty or null, that case is not handled. You are mapping the entity to a String but returning response.getBody(), expecting a body from the response, when ideally it should map to an appropriate object type. So you would want to use an appropriate class to map the entity to, and make sure to handle the scenarios where the entity is empty or null, so that the API does not break.
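The defensive handling suggested above can be sketched in plain Java. The `ApiResponse` record here is a stand-in for the HTTP client's response object (such as Spring's ResponseEntity), not a real library class; the status and body checks are the point of the example.

```java
import java.util.Optional;

/** Null-safe extraction of a response body, instead of a blind getBody(). */
public class SafeResponseHandler {

    /** Minimal stand-in for an HTTP response: status code plus raw body. */
    public record ApiResponse(int status, String body) {}

    /**
     * Returns the body only for a 2xx response with a non-empty body;
     * otherwise an empty Optional, forcing callers to handle the failure case.
     */
    public static Optional<String> extractBody(ApiResponse response) {
        if (response == null) return Optional.empty();
        if (response.status() < 200 || response.status() >= 300) {
            return Optional.empty();                 // non-success status
        }
        String body = response.body();
        if (body == null || body.isBlank()) {
            return Optional.empty();                 // empty or missing body
        }
        return Optional.of(body);
    }
}
```

The same pattern applies when mapping to a typed DTO rather than a String: check status and presence first, then deserialize.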

To maintain data integrity as well as consistency, I would use an abstraction over how we read the data from the database. The problem arises when we have multiple databases and we are trying to read from and update them. One way is to have separate calls: your update or insert queries, where we are writing, go to the write database, while all your reads, for example GET APIs, use the read database, which would not ideally have issues while writes are in flight. Any insert, update, or delete request then goes to the write database. This segregation gives at least some degree of data integrity while updates are happening. The second thing is that maintaining different databases by hand in your application would be tedious, so use a library to manage data access: in Spring, for example, Spring Data JPA lets you use JPA, which can be implemented by a third party like Hibernate, and this helps maintain consistency in how the service layer interacts with the database from the application layer.
It also makes it easier to configure your pooling mechanism, the number of threads or thread pools you want to use, and your timeouts, all in one place.
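The read/write segregation described above can be sketched as a small routing decision. In Spring this is typically done with a routing DataSource keyed off read-only transactions; the `ReadWriteRouter` class and its verb-based heuristic here are illustrative stand-ins, not a production router.

```java
/** Routes read-only statements to a replica and mutations to the primary. */
public class ReadWriteRouter {

    public enum Target { READ_REPLICA, PRIMARY }

    /** SQL verbs that must always go to the primary (write) database. */
    private static final String[] WRITE_VERBS = {"insert", "update", "delete"};

    /** Decide which database a statement should run against. */
    public static Target route(String sql) {
        String verb = sql.trim().toLowerCase();
        for (String w : WRITE_VERBS) {
            if (verb.startsWith(w)) {
                return Target.PRIMARY;        // mutations hit the write DB
            }
        }
        return Target.READ_REPLICA;           // selects hit the read DB
    }
}
```

In a real system the decision would come from transaction metadata rather than parsing SQL, but the split is the same: reads to the replica, writes to the primary.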

It's best to have some sort of centralized logging mechanism, and there are different ways to do it in a distributed microservice architecture. The traditional way, if you are on something like AWS Cloud, is to send your application logs to AWS CloudWatch and use CloudWatch for monitoring them. For newer microservices you can use newer tools such as Datadog or OpenTelemetry, or other third-party logging mechanisms, which not only collect your log data but can also expose different metrics, tracing, and application performance metrics alongside the logging, and let you define where the logs go. Most of us have heard of Grafana and Prometheus, or hosted systems like New Relic: you push your telemetry data there and can then fetch it from anywhere. This way logging is distributed and not tied to your application; it becomes plug-and-play, so you can use whichever third party you want with your application, logging becomes easier, and you get proper distributed logging for your microservice architecture.