
Preetam Sahu

Vetted Talent

Over the past 7.2 years, I've had the privilege of working extensively in Quality Assurance, where I've honed my skills across various testing processes. My tenure as a Technical Test Lead at Infosys has given me a robust foundation in both manual and automated testing, covering System, Integration, End-to-End, UAT, Regression, Sanity, Functional, and API testing.


One of my key strengths lies in automation testing, where I've leveraged tools and frameworks like Selenium, TestNG, PyTest, Rest Assured, Playwright-JS, and Node JS to craft hybrid frameworks tailored to project requirements. Notably, I've spearheaded the development of Node JS-based automation applications/tools for complex healthcare processes, significantly reducing manual effort and enhancing efficiency.


I've also led critical projects, such as the delivery of complex B2B healthcare transactions and the automation of various healthcare processes, including web and mobile automation.


I'm currently working as a QAE on the Amazon RING team.

  • Role

    Technical Test Lead

  • Years of Experience

    7 years

Skillsets

  • Automation Testing
  • Agile methodologies
  • Manual Testing
  • SDLC
  • API Testing
  • CI/CD
  • DevOps
  • Performance Testing
  • STLC
  • Defect cycle management
  • Front-end
  • Back-end
  • Project implementation

Vetted For

4 Skills
  • Role: Senior Test Automation Engineer (AI Screening)
  • Result: 52%
  • Skills assessed: Manual Testing, Selenium, Automation Testing, Jenkins
  • Score: 47/90

Professional Summary

7 Years
  • Mar 2017 - Present (9 yr 2 months)

    Technical Test Lead

    Infosys Ltd.

Applications & Tools Known

  • Selenium
  • TestNG
  • Node JS
  • Jenkins
  • AWS
  • GitHub
  • Cucumber
  • SOAP UI
  • Postman
  • JMeter
  • Confluence
  • Docker
  • Kubernetes
  • Prometheus
  • Grafana

Work History

7 Years

Technical Test Lead

Infosys Ltd.
Mar 2017 - Present (9 yr 2 months)
    Orchestrated System, Integration, End-to-End, UAT, Regression, Sanity, Functional, API, and Automation testing methodologies

Achievements

  • Developed Node JS-based automation applications/tools for complex healthcare processes, reducing manual effort by up to 60%
  • Elevated API robustness by 22%
  • Improved application robustness by 12% through JMeter performance testing
  • Reduced manual effort by 42% using Playwright-JS automation

Major Projects

3 Projects

EDI Attachments

Nov 2021 - Present (4 yr 6 months)
    Led the E2E team in delivering complex and critical B2B healthcare transactions

Web/Mobile Automation of Healthcare Transactions

Nov 2021 - Present (4 yr 6 months)
    Automated various healthcare processes using Selenium-Java/PyTest and Playwright-JS with the TestNG framework, including an Android-based mobile application automated with Appium

Web Development using MERN Stack

    React-based frontend for test data upload with a Node JS backend and MongoDB database integration, including a YouTube clone and the Movix movie database

Education

  • Bachelor of Electronics & Communication Engineering

    Biju Pattnaik University of Technology (2016)

Certifications

  • Software Development Engineer in Test

  • DevOps Professional

  • Tosca Certification

AI-interview Questions & Answers

API system into an existing system.

I handle dynamic elements as efficiently as possible using explicit waits. And if any test case fails while the suite is running, I also use retry mechanisms, since the flakiness could be caused by external factors. To make sure the failure isn't happening for some other reason, like a problem in the code itself, we can rerun the test case so that it may complete successfully on the retry.
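As a rough sketch of the retry idea in TestNG (the class name and retry count below are illustrative assumptions, not taken from an actual project):

```java
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

// Reruns a failed test a limited number of times to rule out flakiness
// caused by external factors before reporting a real failure.
public class RetryAnalyzer implements IRetryAnalyzer {
    private static final int MAX_RETRIES = 2;
    private int attempt = 0;

    @Override
    public boolean retry(ITestResult result) {
        if (attempt < MAX_RETRIES) {
            attempt++;
            return true;   // ask TestNG to run the failed test again
        }
        return false;      // give up after MAX_RETRIES attempts
    }
}
```

It would be attached to a test with @Test(retryAnalyzer = RetryAnalyzer.class), so only tests that keep failing after the retries surface as real failures.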

Yeah. So, one thing we need to ensure while creating those Selenium tests is that we don't hard-code any of the values being tested, so the tests stay scalable. If the data changes, the test shouldn't be restricted to one particular data set; test cases should be written so that any data can be fed through the same functionality. Whatever values are used should come into the test methods as arguments rather than being fixed inside them. We should also make sure the locators are generic, not tied to one specific element, so that elements added in the future can be handled as well. For example, if you're using XPath to locate an element, you can use a wildcard in the path so that a small change in the DOM can still be handled.
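A minimal sketch of that data-driven idea with a TestNG DataProvider; the URL, locators, and credentials here are made up purely for illustration:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LoginDataDrivenTest {

    // Test data lives outside the test logic, so new combinations
    // can be added without touching the test method itself.
    @DataProvider(name = "credentials")
    public Object[][] credentials() {
        return new Object[][] {
            {"standard_user", "secret1"},
            {"admin_user", "secret2"},
        };
    }

    @Test(dataProvider = "credentials")
    public void loginWorksForAnyUser(String username, String password) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login");   // hypothetical URL
            // Wildcard XPath: matches by attribute rather than tag name,
            // so a change in the element's tag still resolves.
            driver.findElement(By.xpath("//*[@name='username']")).sendKeys(username);
            driver.findElement(By.xpath("//*[@name='password']")).sendKeys(password);
            driver.findElement(By.xpath("//*[@type='submit']")).click();
        } finally {
            driver.quit();
        }
    }
}
```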

Synchronization issues mainly happen because of dynamic elements: the script tries to act on an element before it has been rendered in the DOM. The best option is to use an implicit wait for the test case as a whole, and where specific elements are created or loaded dynamically, an explicit wait targeted at that particular DOM element; that way we can fix synchronization issues efficiently. We can also use Thread.sleep, but that's not a mechanism I recommend, because it pauses execution outright. It can be handy while debugging a script, but once the script has been debugged and the proper wait is applied to the specific locator, you should get rid of it. So the best option is explicit waits for specific elements, with implicit waits as a general fallback.
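A small sketch of combining the two wait types; the URL, locator, and timeouts are assumptions for the example:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SyncExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Implicit wait: applies to every findElement call in the session.
            driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(5));
            driver.get("https://example.com/dashboard");   // hypothetical URL

            // Explicit wait: targets one dynamically rendered element and
            // polls until it becomes clickable (or the timeout expires).
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(20));
            WebElement report = wait.until(
                ExpectedConditions.elementToBeClickable(By.id("report-link")));  // hypothetical locator
            report.click();
        } finally {
            driver.quit();
        }
    }
}
```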

So the preferred way to manage this when working with CI/CD, for example in Jenkins, is to parameterize the jobs. You can parameterize the browser driver: if you want to run the test cases in a specific browser, you pass the browser as a parameter and push it down to the script itself. Whatever parameter comes in from the command line, the code should be ready to handle. That's the best approach in a CI/CD pipeline: the browser type is passed as a flag on the command line, that flag is read in the code, and depending on which browser was requested, the code creates the appropriate driver.
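For example, a hypothetical driver factory that reads the browser name from a JVM system property, which a Jenkins build parameter could supply (e.g. mvn test -Dbrowser=firefox):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// The browser name arrives from the CI job as a system property;
// the code falls back to Chrome when nothing is passed.
public class DriverFactory {
    public static WebDriver createDriver() {
        String browser = System.getProperty("browser", "chrome");
        switch (browser.toLowerCase()) {
            case "firefox":
                return new FirefoxDriver();
            case "chrome":
            default:
                return new ChromeDriver();
        }
    }
}
```

In the Jenkins job, the same value would come from a build parameter and simply be appended to the test command line.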

Yeah. So if a particular test is failing in Jenkins, I would first make sure that the specific test has been run locally, before the build is generated, so there are no avoidable job failures and the build succeeds. We can also use the retry mechanisms mentioned earlier when there is a job failure: it may happen because of sporadic issues, such as connectivity problems or other external factors during data fetches, which can make a test fail. With a retry mechanism you can rerun the same test case that failed. But before relying on that, we need to ensure the script itself handles that type of scenario properly and runs without errors.

Syntax-wise, the overall schema looks appropriate, but I think we're missing the specific credential details. The variables that need to be passed to the step definitions are missing; there are no variables here. The step should read something like "the user on the login page enters valid credentials username and password", with username and password as variables, so that the data used in the Gherkin step is passed through to the step definition, where you can implement whatever functionality you need with it. Adding the credential parameters should fix it.
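As a sketch of what the parameterized step definition might look like in Cucumber-Java (the step text and locators are assumed, since the original scenario isn't shown here):

```java
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginSteps {
    private final WebDriver driver = new ChromeDriver();

    // Matches a Gherkin step such as:
    //   When the user enters the credentials "standard_user" and "secret1"
    // The quoted values are captured by {string} and passed in as arguments,
    // so the credentials are never hard-coded in the step definition.
    @When("the user enters the credentials {string} and {string}")
    public void userEntersCredentials(String username, String password) {
        driver.findElement(By.name("username")).sendKeys(username);   // hypothetical locators
        driver.findElement(By.name("password")).sendKeys(password);
        driver.findElement(By.cssSelector("button[type='submit']")).click();
    }
}
```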

Yeah. So in this case, what we can do is use waits so that the elements get properly loaded. An implicit wait would cover it in general, but we can handle this with an explicit wait itself: put a wait with a duration of up to 20 seconds so that all the web elements get properly loaded and the code doesn't miss anything. We can place that wait just after the driver.get statement, so that all the DOM elements are fully loaded before we perform any operations on them.
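A small sketch of that pattern, waiting right after navigation until the dynamically loaded elements are visible (the URL and locator are invented for the example):

```java
import java.time.Duration;
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitAfterNavigation {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/results");     // hypothetical URL
            // Wait up to 20 seconds, immediately after navigation, until every
            // row in the results table is visible before interacting with it.
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(20));
            List<WebElement> rows = wait.until(
                ExpectedConditions.visibilityOfAllElementsLocatedBy(
                    By.cssSelector("table#results tr")));  // hypothetical locator
            System.out.println("Rows loaded: " + rows.size());
        } finally {
            driver.quit();
        }
    }
}
```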

So, the most optimal approach to a maintainable, scalable framework is to use the Page Object Model, where we structure the framework into the different pages we're testing in Selenium. Each page object holds everything related to that page: all the login functionality goes inside the login page, everything for the dashboard goes into the dashboard page, so each page is handled separately and becomes reusable. All the locators belonging to a specific page go into its page object, and you define functions there for whatever actions you want to perform. If a piece of functionality is generic, like a waiting mechanism, you can put it in a utility class: define a function that takes the driver and the specific By locator or web element, and any page object can inherit or call that utility and reuse it as many times as needed. Hard-coding is a big no in the framework; it should take values as arguments, the arguments should be clearly named, and the code should stay consistent so it scales. When building through Jenkins, you can also parameterize values such as the browser type, as described earlier. That's how we keep the framework scalable, properly tested, and stable.
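An illustrative sketch of the Page Object Model with a shared utility wait; the page name, locators, and timeouts are assumptions, not a specific project's code:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Generic utility shared by every page object: one reusable wait helper
// instead of duplicating wait logic across pages.
class BasePage {
    protected final WebDriver driver;

    protected BasePage(WebDriver driver) {
        this.driver = driver;
    }

    protected WebElement waitForVisible(By locator, Duration timeout) {
        return new WebDriverWait(driver, timeout)
                .until(ExpectedConditions.visibilityOfElementLocated(locator));
    }
}

// Page object: all login locators and actions live in one place,
// and the credentials arrive as arguments rather than being hard-coded.
class LoginPage extends BasePage {
    private final By usernameField = By.name("username");       // hypothetical locators
    private final By passwordField = By.name("password");
    private final By submitButton  = By.cssSelector("button[type='submit']");

    LoginPage(WebDriver driver) {
        super(driver);
    }

    public void login(String username, String password) {
        waitForVisible(usernameField, Duration.ofSeconds(10)).sendKeys(username);
        driver.findElement(passwordField).sendKeys(password);
        driver.findElement(submitButton).click();
    }
}
```

A test would then just call new LoginPage(driver).login(user, pass), keeping test logic separate from page plumbing.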

Yeah, so since it's an agile environment, it greatly shapes how I create resilient test frameworks, because it really depends on how the sprint moves. We do a lot of in-sprint automation, but it often happens that while manual or functional testing is going on in sprint N, the automation follows an N-1 approach: it goes one sprint behind, so that once the functionalities have been deployed properly, we can automate them. So, depending on the complexity and the story estimation, we have to decide how to structure the framework so that we can efficiently automate those functionalities within the timeline.

Mentoring junior testers into the automation of complex scenarios is really challenging, of course, and we need to be smart about it because we have strict timelines within which those changes have to be implemented. My approach would be to make sure the knowledge transfer for the changes is done properly, so that they understand the changes and the scope we need to automate; going in blindly won't help and would just drag us down. Secondly, since it's a complex test scenario, dividing it into smaller chunks and smaller functionalities is key to achieving maintainability and scalability. Once we've segregated the complex scenario into smaller modules, it becomes much easier for junior testers to understand how to implement those changes.