
6 years of experience in the software testing field, working as a manual, automation, performance, and API tester.
Worked across all phases of the software testing life cycle. Well versed in test planning, test case design, test execution, defect management, documentation, and closure with cross-cultural teams.
Proficient with Agile as well as Waterfall methodology, with active involvement in Agile ceremonies (sprint planning, backlog grooming, sprint review, sprint retrospective, etc.).
Active contribution in setting up the test environment on Azure.
Working knowledge of various tools and technologies: Postman (API testing), Telerik (automation and performance testing), Java with Selenium (automation testing), and JavaScript with Playwright (automation).
Strong experience with object-oriented programming (OOP) concepts.
Strong experience in manual testing of web-based applications, with a focus on functional, smoke, sanity, and regression testing.
Conducted cross-browser testing using Sauce Labs, exercising the product under varying conditions and analysing the system's behaviour.
Conducted accessibility testing using the NVDA screen reader under varying conditions and analysing the system's behaviour.
A team player with strong communication, leadership, organizational, and interpersonal skills.
Senior Test Engineer, Coforge Limited
Senior Test Engineer, NTT Data Services Pvt. Ltd.
Analyst, Optum Global Solutions Pvt. Ltd.
Process Associate, EXL Services Pvt. Ltd.
Test Plan
Defect Management
Azure
Postman
NVDA
Sauce Labs
SQL Server Management Studio
Hi, my name is Saurab Kumar Sarma, and I'm a senior software testing engineer with around 6 years of experience in manual, API, and automation testing. I have worked in multiple domains, like airline aviation, retail supermarket, healthcare, and legal. My current project is in retail. The client is Coles Supermarket, a very large retail supermarket based in Australia. It lists its own products to meet the daily needs of Australians, like milk, chocolate, beverages, and liquor, and it also lets other suppliers list products for its customers, like Amazon does in India; so it's similar to Amazon, but in Australia.

As for my role: we use an Agile methodology with a board to check the status of the sprint, how many tickets are about to start, how many tickets are in the sprint, and so on. We attend daily stand-up meetings, sprint reviews, sprint planning, retrospectives, and problem-solving meetings as well. Our team is split into very small squads; a squad is a group of 3 to 4 QAs, and every squad is responsible for a specific task. My squad is Digital Venture, and we are responsible for advertisements on the supermarket website. We make sure the banner reflects on every page, along with boosted products, single-tile product associations, and content associations. We use third-party software, Citrus and Adobe Experience Manager, to create a campaign, and the campaign then reflects on the supermarket site. That's the overview. My first responsibility is to assign tasks to the available QAs in my team and provide walkthroughs so they understand what they need to test; if they have any concerns, I handle those as well.
I'm also responsible for giving KT (knowledge transfer) sessions, application overviews, to new joiners in our team. I attend daily stand-up meetings with my client, overlapping my hours with Australian time, where I give the status of my work as well as my offshore team's. Then I connect with my offshore team to get their status: which tickets they are working on, what the concerns are, and so on. My second role is requirement review.
We analyze the requirements and, if necessary, take a walkthrough from the developer.

How would you test an application's response under peak load conditions, in terms of both manual and automated testing? Peak load is the highest user base the application is expected to handle. In manual testing, we explore every module or page and check the application's behavior: how it performs, whether a page crashes, and how much time it takes. For example, if the acceptable response time to open a page or perform an action is around 3 to 4 seconds and it takes more than that, we can identify the issue during manual testing. In automation, peak load is much easier to test with code: we execute the test cases and check how much time every script or step takes to run an action, which shows the application's behavior and reaction under peak load. If everything works fine, we then also exercise the hard actions, meaning the main or complex functionality of the application. In a retail application, the most complex functionality is serving campaigns or advertisements based on revenue, and because multiple third-party tools are involved, we also need to measure the integration timing. So we test the critical functionality under peak load, checking response times and behavior, and compare against a baseline run without peak load. That's how we test it manually and in automation, and how we identify on which pages the application starts to crash.

What we can check under peak load depends on the modules and the type of application we are working with, so it varies by project. My previous project, in airline aviation, was very complex because every step involved a timing calculation; missing even a single minute or second had a huge impact.
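A minimal sketch of the automated side of this idea, assuming a hypothetical `open_page` action standing in for a real page request; it fires the virtual users concurrently and reports timings against the 3-4 second budget mentioned above:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def open_page():
    # Hypothetical stand-in for a real page request; swap in an HTTP call.
    time.sleep(0.01)
    return 200

def measure_under_load(action, users=50):
    """Run `action` once per virtual user, all concurrently, and collect timings."""
    def timed(_):
        start = time.perf_counter()
        status = action()
        return time.perf_counter() - start, status

    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(timed, range(users)))
    times = sorted(t for t, _ in results)
    return {
        "p95_s": times[int(len(times) * 0.95) - 1],  # 95th-percentile response time
        "max_s": times[-1],
        "errors": sum(1 for _, status in results if status != 200),
    }

report = measure_under_load(open_page)
print(report)
```

The 3-4 second threshold from the answer above then becomes a simple assertion on `report["p95_s"]`.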
The question has changed: how do you ensure the accuracy of test data for executing test cases, especially when testing complex business scenarios? This depends on the project. In my current project, the test data is the products supplied by the supplier, which we push to our front page; the data is available, and we pick it per module. Take the payment scenario, the most complex one: we use different credit cards with valid numbers, valid CVVs, and valid expiry dates, and then apply boundary value analysis and try invalid figures as well. So the first thing to identify is whether the test data is good: whether it covers all the requirements and expectations, and whether the test data and test cases together cover all the scenarios we expect. This matters because a payment page involves a lot of data: the purchaser's address, the credit card information, whether all products are available, and edge cases such as optional fields left empty.
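The boundary-value idea for the CVV field can be sketched like this; the 3-to-4-digit rule and the validator are assumptions for illustration:

```python
def cvv_boundary_cases(min_len=3, max_len=4):
    """Generate boundary-value test data for a CVV field assumed to allow 3-4 digits."""
    cases = []
    for n in (min_len - 1, min_len, max_len, max_len + 1):
        value = "1" * n
        expected_valid = min_len <= n <= max_len
        cases.append((value, expected_valid))
    # Invalid-type cases beyond the pure length boundaries
    cases.append(("12a", False))  # non-numeric
    cases.append(("", False))     # empty
    return cases

def is_valid_cvv(value, min_len=3, max_len=4):
    """Validator under test: numeric and within the boundary lengths."""
    return value.isdigit() and min_len <= len(value) <= max_len

for value, expected in cvv_boundary_cases():
    assert is_valid_cvv(value) == expected, f"unexpected result for {value!r}"
```

The same pattern extends to card numbers and expiry dates: generate values just below, at, and just above each boundary, and record the expected verdict alongside the data.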
Has my previous answer already been recorded? Okay. Share a complex testing scenario you automated and the methods you used to validate the accuracy of automated test results. So far I have not written a script from scratch to automate a test scenario, but I have executed test cases in an already-available framework, so I'll answer from that experience. Whenever we run a test execution, we first try running in debug mode so we can check that it is verifying and validating everything and hitting every step. We step through to the next action, check whether each step works, and check whether any assertion actually validates the element it is supposed to; we validate every line of code this way, from start to end, one by one, and investigate any issue as it appears. That's what we do while debugging. When we are not debugging and simply run the test cases, we validate the accuracy of automated results by adding assertions and validations. For a complex scenario we can add try/catch to check whether a step is failing or passing, and add if/else conditions where needed. The main point is to validate accuracy by adding validations and assertions in the test.
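A rough sketch of the "assertions plus try/catch" point above, in Python; the page values are hypothetical stand-ins for data read from the application under test:

```python
def assert_step(description, condition):
    """Run one validation; report a failing step with context instead of aborting the run."""
    try:
        assert condition, description
        return True
    except AssertionError as exc:
        print(f"STEP FAILED: {exc}")
        return False

# Hypothetical values standing in for data read from the page under test
page_title = "Checkout"
item_count = 3

steps = [
    assert_step("title should be 'Checkout'", page_title == "Checkout"),
    assert_step("cart should not be empty", item_count > 0),
]
print("all steps passed" if all(steps) else "some steps failed")
```

Wrapping each validation this way lets a run report every failing check rather than stopping at the first one.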
When designing a high-level system for QA automation, what design patterns would you consider, and why? This relates to the framework, because we normally don't design the application; we design the framework. The design pattern we use is the Page Object Model. In this approach we have a pages folder containing a page object class for every page on the website, and in each class we keep the required elements as variables located by XPath, name, or ID locators. On top of that we follow a folder structure: under the project we have the pages folder with all the page objects, and a tests folder with a folder per squad. As I said, my squad is Digital Venture, and under it there are two folders, one for regression and one for feature tests. The regression folder holds all the regression test cases, around 15 to 20 per sprint, and we run those whenever a sprint is released; the feature folder has a test case for every feature's functionality. Under the root test folder we also have a utility folder holding the test data; we don't hard-code the data but read it from files, so we don't need to edit it in every test case. Then we have a reports folder with screenshots and video recordings, and finally the project configuration: with Playwright that's the playwright.config file, with Selenium the config files, and in a Maven-based project the pom.xml. So that's the design pattern and folder structure we use. I'm not sure whether there's a specific name for the structure itself, because I'm not very familiar with the naming conventions.
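A minimal Page Object Model sketch in Python; the selectors are assumptions, and `FakeDriver` is an illustrative stand-in for a real Playwright or Selenium driver so the example runs on its own:

```python
class FakeDriver:
    """Stand-in for a Playwright/Selenium driver; records actions instead of driving a browser."""
    def __init__(self):
        self.actions = []

    def fill(self, selector, value):
        self.actions.append(("fill", selector, value))

    def click(self, selector):
        self.actions.append(("click", selector))

class LoginPage:
    """Page object: locators live in one place; tests call intent-level methods."""
    USERNAME = "#username"            # assumed selectors
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill(self.USERNAME, user)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = FakeDriver()
LoginPage(driver).login("qa_user", "secret")
```

If a locator changes, only the page class is edited; every test that calls `login` stays untouched.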
How would you structure your Python code to ensure reusability and maintainability of your automated test scripts? You would structure it using object-oriented programming principles such as encapsulation, inheritance, and polymorphism. To give a specific example from my recent project: I had the opportunity to write the test cases for one of my tickets, and the scope was categories. We have three levels of categories: level 1, level 2, and level 3. There is a section showing all the level 1 categories; clicking a level 1 category opens the level 2 section, which holds another 5 to 10 categories, and clicking a level 2 category opens level 3, the last level available in our portal. When writing the web element for this, you don't need to hard-code any specific category. What I did was pick the complete section as one element and parameterize the category name, so whenever I need to pick a category I use the same element and just pass the category name as a parameter. The condition is that the name must exactly match the front-end portal; miss a spelling or a space and it won't work, but provide the name exactly as it appears on the portal and the code finds it in the section and clicks it.

At level 1 we have around 10 categories, at level 2 around 8 to 9, and at level 3 around 5 to 6, so counted out there would be more than 90 elements. Written this way, I only need 3 variables: one for category level 1, a second for level 2, and a third for level 3, and I just pass in the category name each time to hit any category. That's how we can reuse our code. To decide what is reusable, we check which functionality is hit repeatedly: in an end-to-end flow, whatever we click again and again can go into a loop or a shared method, and on that basis we write the script so we can reuse it and reduce the amount of automation code.
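The parameterized-locator idea above can be sketched as one function per pattern instead of ~90 hard-coded elements; the container id pattern here is an assumption, and, as noted, the category name must match the portal text exactly:

```python
def category_locator(level, name):
    """Build a locator for any category at the given level (1-3) by its visible name.

    The container id pattern is illustrative; the name must match the
    front-end text exactly (spelling and spaces included).
    """
    return (
        f"//div[@id='category-level-{level}']"
        f"//a[normalize-space(text())='{name}']"
    )

# One function now covers every category at every level
print(category_locator(1, "Dairy"))
print(category_locator(3, "Dark Chocolate"))
```

The test only ever supplies `(level, name)`, so adding or renaming categories on the site requires no locator changes.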
The function does not correctly update the DOM to display a user's profile because of the following issues:

1. It uses `document.getelementbyid` instead of `document.getElementById`; JavaScript identifiers are case-sensitive, and `document.getElementById` is the correct name.
2. It mixes up element-lookup styles from different contexts; in Selenium a page object model might wrap lookups, but in plain JavaScript `document.getElementById` is the standard way to get an element by its ID.
3. The logged-in check is wrong: the condition written as `if user.is logged in` does not match the actual function or variable that tracks login state.
4. The profile update for a logged-in user, dictated as `profileElement.innerText name as the user.name is`, is not a valid JavaScript statement.
5. The logged-out branch, dictated as `profileElement.innerText as profileElement.innerText`, is not a valid statement either.
6. Even when the user is logged in, the profile element is not updated correctly; the dictated line `profileElement.innerText as user profile` is likewise not valid JavaScript.

Here is the corrected JavaScript code:

```javascript
function displayUserProfile(profileElement) {
  if (user.isLoggedIn) {
    profileElement.innerText = user.name + ' is ' + user.age;
  } else {
    profileElement.innerText = 'Please log in';
  }
}

// Example usage
displayUserProfile(document.getElementById('profile'));
```

Note: I assumed that `user` is an object with properties `isLoggedIn`, `name`, and `age`. You may need to adjust the code to match your actual implementation.
The code shows a public class `DatabaseConnector` with a private static field `instance` initialized to null, a private constructor, and a public static `getInstance()` method: if `instance` is null, it is set to a new `DatabaseConnector`, and the instance is returned and used to connect to the database.
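The snippet being walked through is the classic lazy-initialized singleton. A rough Python equivalent of the described Java class (and, like the Java version as described, it is not thread-safe):

```python
class DatabaseConnector:
    """Lazy singleton: one shared connector, created on first use (not thread-safe)."""
    _instance = None

    def __init__(self):
        self.connected = False

    @classmethod
    def get_instance(cls):
        # Create the single instance only when first requested
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def connect(self):
        self.connected = True
        return self

first = DatabaseConnector.get_instance().connect()
second = DatabaseConnector.get_instance()
print(first is second)  # both names refer to the same instance
```

In multithreaded code the null-check would need a lock (or, in Java, a synchronized `getInstance` or an eagerly initialized field), otherwise two threads can each create an instance.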
What approach do you use to evaluate the quality of your code when writing test automation, and how do you ensure it adheres to best practices? As I already said, I'm not strong at writing code from scratch, but I'm strong on the business side, so my approach starts with understanding the business and the functionality. When we automate functionality or the web UI, we evaluate whether the code is reusable: if there is functionality we need to reuse, we put it in a loop or extract it into a single shared method, as in the category example from my earlier answer. We also reduce the code, checking whether any method or statement is not required and removing it. If two methods always run together, say one to find an element and a second to click it, we can merge them into one method that finds the element, clicks it, and presses Enter. More generally, every flow has preconditions and end conditions, and if the end condition is the same every time, regardless of which page you arrive from, we write one method for it so it can complete the end-to-end automation and be reused everywhere. We also evaluate quality by adding comment lines, logs (console logs in JavaScript, print statements in Java), and explicit validations: for example, getting a text value and validating whether it matches the expected condition. There are many practices for evaluating code quality, but it depends on the scenario too.
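The "merge find and click into one method" point, sketched with a hypothetical driver and a log line per action (the `Driver` class is a stand-in, not a real Selenium/Playwright API):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ui")

class Driver:
    """Hypothetical stand-in for a real Selenium/Playwright driver."""
    def find(self, selector):
        return {"selector": selector}

    def click(self, element):
        element["clicked"] = True

def find_and_click(driver, selector):
    """One reusable step instead of separate find + click calls scattered across tests."""
    element = driver.find(selector)
    driver.click(element)
    log.info("clicked %s", selector)
    return element

element = find_and_click(Driver(), "#submit")
```

Centralizing the pair also gives one place to add waits, retries, or screenshots later without touching the tests.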
Given the requirement to automate performance testing, the tool I would choose is JMeter, and to validate the reliability of performance results I would use metrics such as response time, throughput, and error rate. I have limited experience with performance testing, but in my second-to-last project, the airline project, we used JMeter to check the end-to-end functionality. We created a user base, fed pilot data into a job, fetched a flight, and then attached that job to the flight. We measured how much time each step took on the earlier version, then updated the version, took the data again, and compared the time taken to complete the end-to-end functionality and every line of action. If you're asking which tools I would recommend, I would point to JMeter because it lets us define a user base, iterate, and set time lapses: we can simulate 1,000 or 10,000 users, set conditions such as clicking an action or function after a 1- or 2-second delay, and re-iterate one, two, or ten times.
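The three metrics named above (response time, throughput, error rate) can be computed from a load run's samples; the sample tuples below are illustrative, not real measurements:

```python
def performance_metrics(samples, window_s):
    """Summarize a load run.

    `samples` is a list of (elapsed_seconds, ok) tuples, one per request,
    collected over a window of `window_s` seconds (illustrative inputs).
    """
    total = len(samples)
    errors = sum(1 for _, ok in samples if not ok)
    avg_response = sum(t for t, _ in samples) / total
    return {
        "avg_response_s": round(avg_response, 3),
        "throughput_rps": round(total / window_s, 2),
        "error_rate": round(errors / total, 3),
    }

samples = [(0.12, True), (0.30, True), (0.25, False), (0.18, True)]
print(performance_metrics(samples, window_s=2.0))
# 4 requests in 2 s -> 2.0 req/s; 1 failure in 4 -> 0.25 error rate
```

Comparing these numbers between the baseline run and the peak-load run is exactly the before/after comparison described in the airline example.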
What are the phases of the software development life cycle, and how do they impact our process? The software development life cycle, or SDLC, is the process by which we receive requirements and deliver the software on time, and it covers multiple phases. In the first phase, we receive the requirements in the form of a PRS, and our business team verifies and analyzes them, creates stories based on them, and gets approval from the client. Once we are good, the UI/UX team gets involved and creates layouts and charts, defining the functionality, third-party tools, and integrations; every layout is done by the UI/UX team, and once that is settled we have the expected data to test against. Then comes the development phase, where we get involved: the developer has a ticket with the required scenario, which we provide in the agreed format. While the developer works on the ticket, we write test cases and create a test plan; based on the test plan we write test scenarios, add the test cases to our test management tool with both positive and negative conditions, and set up a test execution cycle. Once the ticket is ready, we change its status to "In Progress" and start test execution, attaching artifacts such as video screen recordings of the tested scenario as we go. Once we are good, we add a comment saying all tests have passed and all artifacts are attached to the respective ticket, and we provide a walkthrough for UAT. We change the ticket to "In Review" and schedule a meeting with the client's product manager and our delivery lead. Once everyone is in the meeting, we explain the scenarios we covered, how much we tested, and how we tested it, and do a quick demo on the test environment. Once they are good and have no concerns, they feel that everything is covered, and I can close the ticket.
Once we close the ticket, we're ready to merge to preprod, the environment just prior to production. In preprod we perform the validation, and once we are good with it, we update the report on the Confluence page. Now the product is ready for production. Once the product is available in production, the SDLC cycle starts again with the first phase, where we receive the requirements.