Vetted Talent

Mahaligam

A dedicated Software Engineer with 3+ years of experience, focused primarily on backend development with Node.js, NestJS, and Hasura. Highly skilled in designing, developing, and maintaining robust software solutions that deliver secure, scalable, high-performance backend servers built on cutting-edge technologies. Seeking a full-time position that offers professional challenges and draws on strong interpersonal, time-management, and problem-solving skills.

  • Role

    Back End Developer

  • Years of Experience

    3 years

Skillsets

  • Sanity CMS - 1 Year
  • JavaScript
  • Jest - 2 Years
  • Lambda
  • NestJS - 3 Years
  • Node Js - 3 Years
  • PostgreSQL - 3 Years
  • React Js - 1 Year
  • REST - 3 Years
  • S3 - 3 Years
  • Java
  • SOAP APIs - 1 Year
  • SQS
  • Stripe
  • Hasura - 1 Year
  • AWS SAM - 0.5 Years
  • Sequelize - 0.5 Years
  • SES
  • PushAlert
  • Docker
  • Twilio - 1 Year
  • TypeORM - 3 Years
  • Express Js - 3 Years
  • Agile - 3 Years
  • AWS Cloud - 2 Years
  • Bitbucket
  • CSS
  • TypeScript - 3 Years
  • EC2
  • ECS
  • Firebase
  • Git - 3 Years
  • GitHub - 3 Years
  • GraphQL - 3 Years
  • HTML

Vetted For

14 Skills
  • Backend Developer (Remote) - AI Screening: 56%
  • Skills assessed: AWS SDK, dotenv, Hubspot.API, Mailgun.JS, OpenAI, Passport, Bcrypt, Node Js, Socket.IO, Twilio, TypeORM, Express Js, MySQL, TypeScript
  • Score: 50/90

Professional Summary

3 Years
  • Sep, 2021 - Present (4 yr 8 months)

    Backend Engineer

    Tringapps, Inc.

Applications & Tools Known

  • Twilio
  • SendGrid
  • PostgreSQL
  • Hasura
  • AWS Cloud
  • Docker
  • Docker Compose
  • Freshworks
  • Firebase
  • AWS SES
  • SonarQube
  • Git
  • Jira
  • NPM

Work History

3 Years

Backend Engineer

Tringapps, Inc.
Sep, 2021 - Present (4 yr 8 months)
    • Design and develop scalable backend servers using Node.js and NestJS.
    • Create and maintain efficient REST and GraphQL APIs; architect and manage databases to optimize performance and scalability.
    • Collaborate closely with frontend and DevOps teams to ensure smooth development and deployment processes.
    • Maintain high code-quality standards through rigorous unit testing and continuous improvement.
    • Stay up to date with emerging technologies and integrate third-party tools into backend systems.
    • Adhere to Agile principles and use version control systems effectively.

Achievements

  • Successfully led backend teams across 5+ projects within a 2-year period, consistently implementing cutting-edge technologies while adapting quickly to new tools and methodologies. This experience has honed my ability to deliver high-quality results under tight deadlines, manage multiple concurrent projects effectively, and thrive in fast-paced environments. Through these accomplishments, I've developed strong leadership skills, technological adaptability, and pressure handling abilities, allowing me to excel in demanding project scenarios and contribute significantly to innovative tech initiatives.

Major Projects

4 Projects

TPV Horizon

    Successfully integrated Twilio for customer interactions through SMS, IVR calls, and live calls. Incorporated SendGrid to enhance email campaigns. Developed and maintained REST and GraphQL APIs. Integrated additional SOAP APIs with the backend server.

QuickAsyst

    Created a backend server utilizing Hasura, an instant GraphQL API generator. Implemented Docker for the GraphQL engine and Hasura. Integrated Stripe for customer payment transactions.

Leaguemed

    Built a server with Sanity CMS using Groq queries and webhooks. Integrated Freshworks, a cloud-based CRM, for customer interactions. Incorporated PushAlert for both web and mobile push notifications.

Plantd

    Designed and implemented a subscription-based application using Stripe and Wise. Developed efficient and secure data models with PostgreSQL and TypeORM. Utilized Jest for unit testing, achieving high code coverage in SonarQube. Secured GraphQL APIs with JWT authentication and access control via AWS Cognito.

Education

  • B.E - Computer Science and Engineering

    Dr. Mahalingam College of Engineering and Technology, Pollachi

Interests

  • Cricket
  • Exploring Cities
  • Walking

AI-Interview Questions & Answers

    Hi, I'm Mahaligam. I'm looking for a software engineer position and have 3 years of experience, primarily in backend development. I've worked on several projects and have led the backend team in delivering them. I work with a variety of technologies and tools that help integrate systems and improve their performance, including Sanity CMS and other technologies that improve customer interaction as well as the backend system.

    To upgrade a configuration in a Node.js system, we usually have three environments: dev, QA, and prod. Configuration changes go to the dev environment first. We store configuration in AWS Parameter Store and maintain it in config files, so we update it there and run a sanity test with a demo before moving on. Before applying any configuration change, upgrade, downgrade, or revert, we confirm the versions and check whether the change will break any other functionality; only then do we push it to the production environment and test again. The main risk with configuration changes is that they take effect at runtime, so we have to make sure they won't affect existing functionality before applying them. And configuration should never be hard-coded: we keep it in .env files or AWS Secrets Manager so credentials are easy to manage, and we can expose endpoints to modify and update values in AWS, or use Docker images to update the configuration.
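The "never hard-code configuration" point above can be sketched in a few lines; this is a minimal illustration, and the key name `DB_HOST` is an assumption, not taken from the resume:

```typescript
// Resolve a configuration value from the environment with a safe
// fallback, so nothing is hard-coded in the source. In production the
// value would typically come from AWS Parameter Store or Secrets
// Manager and be injected as an environment variable.
function getConfig(key: string, fallback: string): string {
  const value = process.env[key];
  return value === undefined || value === '' ? fallback : value;
}

// Example: the database host differs across dev, QA, and prod.
const dbHost = getConfig('DB_HOST', 'localhost');
console.log(`connecting to ${dbHost}`);
```

Because the value is resolved at startup, promoting a change from dev to QA to prod is just a matter of setting the variable per environment.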

    Yes, we currently use a Passport strategy in NestJS. NestJS, a framework built on top of Node.js, provides Passport strategies by default. For role-based access we maintain a combination of application-level roles and DB-level roles. At the application level we create groups, such as user groups and admin groups; if the client has no separate group, we maintain the role at the DB level. We initially create users with AWS Cognito, then use the JWT strategy to extract the user and fetch their data, including role and access, from the database. Based on the roles we implement guards, which act as middleware on each call. We apply a guard to each API, and depending on the guard and the user's role we either allow or reject the request; if a guard is present and the user lacks the required role, we return an error response.
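A dependency-free sketch of the allow/reject decision such a guard makes; the Role union and User shape here are illustrative assumptions, not the project's real schema:

```typescript
// Role names and the User shape are hypothetical examples.
type Role = 'admin' | 'user';

interface User {
  id: string;
  roles: Role[];
}

// Returns true when the user holds at least one of the roles the route
// requires. A real NestJS guard would read the required roles from
// route metadata and the user from the JWT-authenticated request.
function canActivate(user: User, requiredRoles: Role[]): boolean {
  if (requiredRoles.length === 0) return true; // no guard => public route
  return requiredRoles.some((role) => user.roles.includes(role));
}
```

In NestJS this logic would live in a class implementing `CanActivate`, applied per route, so the check runs before the handler on every call.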

    We started with Sequelize, which provides models but requires writing schema files, creating models, and generating migration files by hand, and we found writing the schema files and maintaining the models difficult. TypeORM overcomes those issues. It works in Node.js and other environments, and it provides good documentation. It gives us clear entities: we can define database models as classes, including relational features, since it supports the decorator pattern, and it handles schema generation and migration generation for us, with clear documentation on repositories and the base entity class. Because it works with class-based entities, we can easily use inheritance: it provides abstract classes, so we can define one model and reuse it across several models stored in a single table, while each model still works as a separate entity and class, so we can modify one without affecting the others. It also integrates easily with NestJS and fully supports TypeScript, which helped when we moved from Sequelize to TypeORM. Finally, TypeORM provides a better way of accessing the database through repositories.
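The repository idea mentioned above can be illustrated without TypeORM itself; this in-memory version only shows the pattern (one generic base reused per entity), and the `Player` entity is a made-up example:

```typescript
// Every entity has an id; concrete entities extend this shape.
interface Entity { id: number; }

// Generic in-memory stand-in for a TypeORM repository: one base class,
// reused for every entity type instead of hand-written data access.
class Repository<T extends Entity> {
  private rows: T[] = [];

  save(row: T): T {
    this.rows = this.rows.filter((r) => r.id !== row.id); // upsert
    this.rows.push(row);
    return row;
  }

  findOneBy(id: number): T | undefined {
    return this.rows.find((r) => r.id === id);
  }
}

interface Player extends Entity { name: string; }

const players = new Repository<Player>();
players.save({ id: 1, name: 'Dhoni' });
```

In real TypeORM the same `save`/`findOneBy` calls run SQL against the database, and the entity class carries decorators that drive schema and migration generation.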

    Express doesn't restrict you to any particular code structure. For security and error handling we use middleware at the initial routing level: Express provides default middleware hooks, so we can add checks before a request even enters the route. Error handling likewise isn't limited to any code structure; we can handle errors at whatever level we choose, and Express provides an additional middleware layer that is easy to integrate with.
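A sketch of Express-style error-handling middleware, written against simplified stand-in types so it runs without the framework (in real Express, an error handler is identified by its four-argument signature and sits at the end of the middleware chain):

```typescript
// Simplified stand-ins for Express's Request, Response, and NextFunction.
type Req = { url: string };
type Res = { statusCode: number; body?: unknown };
type Next = (err?: Error) => void;

// Turns a thrown error into a uniform JSON-shaped response instead of
// letting it crash the server or leak a stack trace to the client.
function errorHandler(err: Error, _req: Req, res: Res, _next: Next): void {
  res.statusCode = 500;
  res.body = { error: err.message };
}
```

With real Express the same function is registered once with `app.use(errorHandler)` after all the routes.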

    I haven't worked much with Socket.IO, but I have some knowledge of it. It provides two-way interaction between the server and the client, so either side can send and receive messages at any time, similar to a subscription.

    I'm not very familiar with the Socket.IO code itself, but I know a little about the on and emit functionalities. Once a connection is established at runtime, we attach the client handlers inside the connection callback, so we can handle the disconnect functionality inside the io.on('connection') handler with something like socket.on('disconnect'). If any connection is disturbed, whether it drops or was never properly established, we can handle it there.
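The disconnect wiring described above, simulated with Node's built-in EventEmitter so it runs without the socket.io package; in real Socket.IO the shape is `io.on('connection', socket => socket.on('disconnect', reason => { ... }))`:

```typescript
import { EventEmitter } from 'node:events';

// A Socket.IO socket is itself an event emitter, so attaching a
// disconnect handler to a plain EventEmitter has the same shape.
function wireDisconnect(
  socket: EventEmitter,
  onGone: (reason: string) => void,
): void {
  socket.on('disconnect', onGone);
}
```

Socket.IO passes a reason string (such as 'transport close') to the handler, which is useful for telling a dropped connection apart from a deliberate client disconnect.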

    Yes, the main concern before saving is hashing. We use bcrypt to hash passwords before storing them. Hashing is a one-way method: once the data is stored as a hash we can't decrypt it, so only the hash of the password is kept in the database. Alternatively, we can sign data with jwt.sign and store the resulting token in the database. For passwords specifically, we usually don't store them in plain text at all; if a password has to be stored, we use the hashing approach.
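The answer names bcrypt; the sketch below swaps in Node's built-in scrypt so it runs with no dependencies, but the idea is the same: store only a salted one-way hash, never the plaintext.

```typescript
import { scryptSync, randomBytes, timingSafeEqual } from 'node:crypto';

// Hash a password with a fresh random salt. Only this salt:hash string
// is stored; the plaintext password never reaches the database.
// (bcrypt plays the same role in the project described above.)
function hashPassword(password: string): string {
  const salt = randomBytes(16).toString('hex');
  const hash = scryptSync(password, salt, 64).toString('hex');
  return `${salt}:${hash}`;
}

// Verify by re-hashing the candidate with the stored salt and comparing
// in constant time; hashing is one-way, so there is nothing to decrypt.
function verifyPassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(':');
  const candidate = scryptSync(password, salt, 64).toString('hex');
  return timingSafeEqual(Buffer.from(hash, 'hex'), Buffer.from(candidate, 'hex'));
}
```

The constant-time comparison (`timingSafeEqual`) matters because a naive `===` can leak information about the hash through timing differences.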

    We maintain SQL migrations. In TypeORM we can use DB sync, or we can use migrations, which are generated whenever we change the entity classes. We generate the migration file with the default generate command, and this keeps track of database changes at the schema level, so we can track schema changes over time. Each migration file contains up and down SQL queries, and we use it to migrate the database schema. Before migrating data we make sure we have a backup, and then we can move the data as well. If the latest migration doesn't work with the database, we can easily revert it with the revert command, which undoes the last applied migration. With migrations we keep a clear record of database schema changes.
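What a generated migration boils down to can be sketched like this; `QueryRunner` here is a stand-in interface so the sketch runs without the typeorm package (real TypeORM migrations implement `MigrationInterface` and are asynchronous), and the table and column names are made up:

```typescript
// Stand-in for TypeORM's query runner; real migrations are async,
// this one is simplified to keep the sketch dependency-free.
interface QueryRunner {
  query(sql: string): void;
}

// A migration pairs an `up` step with a `down` step that reverses it
// exactly; that symmetry is what makes `migration:revert` safe.
class AddNicknameColumn1700000000000 {
  up(runner: QueryRunner): void {
    runner.query('ALTER TABLE "player" ADD "nickname" varchar');
  }

  down(runner: QueryRunner): void {
    runner.query('ALTER TABLE "player" DROP COLUMN "nickname"');
  }
}
```

The timestamp in the class name is how TypeORM orders migrations and records, in its migrations table, which ones have already been applied.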

    We have to capture both positive and negative scenarios. In the negative scenarios, external APIs usually produce three kinds of outcome: success, an error, or downtime. Downtime can't be typed in advance, so we handle it with exceptions. For the other response types we create interfaces, classes, or types and assign them to the responses from the external APIs. Since TypeScript is strongly typed, we can assign an interface to the response variable and use the typed errors for further processing. We often run into rate-limiting errors, and most providers have downtime around midnight; one provider we recently integrated disables some functionality at midnight and doesn't throw a normal error, only connection timeouts, so we have to handle those with timeout and session handling.
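The three outcomes listed above (success, error, downtime) map naturally onto a TypeScript discriminated union; the exact shapes and messages here are illustrative assumptions:

```typescript
// One tagged type per outcome an external API call can produce.
type ApiResult =
  | { kind: 'success'; data: string }
  | { kind: 'error'; code: number; message: string }
  | { kind: 'downtime' };

// Narrowing on `kind` lets the compiler check that every outcome is
// handled; forgetting a branch becomes a type error, not a runtime bug.
function describe(result: ApiResult): string {
  switch (result.kind) {
    case 'success':
      return `ok: ${result.data}`;
    case 'error':
      return `failed (${result.code}): ${result.message}`;
    case 'downtime':
      return 'provider unavailable, retry later';
  }
}
```

A timeout caught in a try/catch around the call would be translated into the `downtime` variant before the result reaches the rest of the code.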

    Yes, the asynchronous behavior of Node.js helps us work with multiple clients: each request is handled without blocking the others, which increases the performance of the backend system. When integrating multiple third-party services, we usually attach our endpoints as webhooks to the third parties' APIs and use those endpoints to send and receive data. Thanks to the asynchronous behavior we can also use await to wait for a response and handle each request individually, which makes requests and responses much easier to manage than a synchronous approach would.
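The async fan-out described above can be sketched as follows; `fetchFromProvider` is a stand-in that resolves immediately, where a real call would hit the third-party API over the network:

```typescript
// Hypothetical third-party call; no real network I/O in this sketch.
async function fetchFromProvider(name: string): Promise<string> {
  return `response from ${name}`;
}

// Fire all calls concurrently and await them together, instead of
// blocking on each provider in turn; this is where Node's non-blocking
// model pays off when several integrations are involved.
async function fetchAll(providers: string[]): Promise<string[]> {
  return Promise.all(providers.map((p) => fetchFromProvider(p)));
}
```

With `Promise.all` the total wait is roughly the slowest single call, not the sum of all of them.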