Vetted Talent

Supriya Singh

As Head of Quality Assurance, I bring over 8 years of expertise in crafting and executing comprehensive quality strategies to ensure the seamless functioning of software products. My forte lies in leading diverse teams to deliver high-quality solutions by implementing robust testing methodologies. Specializing in Performance Testing, API Testing, Web Automation, and Security Testing, I am passionate about driving continuous improvement and fostering a culture of quality excellence within organizations.

  • Role

    QA Engineer

  • Years of Experience

    8 years

Skillsets

  • SOLID
  • Scrum
  • Testing
  • SQL
  • BDD
  • OWASP
  • Effective Communication
  • Jira
  • Leadership
  • Automation - 8 Years
  • Team Lead
  • Selenium
  • Automation Frameworks
  • Critical Thinking
  • Database
  • DevOps
  • JMeter
  • Automation Testing
  • Jenkins
  • Postman
  • Agile
  • Confluence
  • Performance Automation
  • Agile Methodologies
  • Manual Testing
  • Test Automation
  • Software Testing
  • Defect Management
  • Security Testing
  • Scrum Master
  • Continuous Improvement
  • QA Lead
  • API Automation
  • Web Automation
  • Quality Assurance
  • R
  • Design
  • QA - 8 Years
  • SQL Queries
  • API
  • Python
  • Security
  • Notion
  • C
  • ALM

Vetted For

8 Skills

  • Senior Quality Assurance Engineer (Hybrid - Gurugram) - AI Screening
  • Result: 56%
  • Skills assessed: Excellent Communication Skills, executing test cases, Manual Testing, Mobile Apps Testing, Python, QA Automation, test scenarios, writing test scripts
  • Score: 50/90

Professional Summary

8 Years
  • Jan, 2023 - Present (3 yr)

Head of Quality Assurance

    Core Maitri Pvt Ltd
  • Feb, 2024 - Feb, 2024

    Senior Quality Lead

    TridentShoxx Labs
  • Feb, 2024 - Feb, 2024

    Associate Quality Engineer

    Finastra Software Solution

Applications & Tools Known

  • Robot Framework
  • Python
  • Selenium
  • OWASP
  • Jenkins
  • HP ALM
  • JMeter
  • Postman
  • Notion
  • Confluence
  • ZAP

Work History

8 Years

Head of Quality Assurance

Core Maitri Pvt Ltd
Jan, 2023 - Present (3 yr)
    Spearheaded a team of QA engineers, overseeing API Automation, Web Automation, and Performance Automation initiatives. Acted as Scrum Master within the team.

Senior Quality Lead

TridentShoxx Labs
Feb, 2024 - Feb, 2024

Associate Quality Engineer

Finastra Software Solution
Feb, 2024 - Feb, 2024

Achievements

  • Spot Award for the Best Performance
  • Received Best Team Lead Award & Recognition

Education

  • Master of Technology

    MS RAMAIAH INSTITUTE (2015)
  • Bachelor of Engineering

    GNDEC BIDAR (2013)

AI-Interview Questions & Answers

Could you help me understand more about your background by giving a brief introduction? So my name is Supriya Singh. I completed my B.Tech in 2013, then started my M.Tech in 2013 and completed it in 2015. I started my first job in January 2016 at Finastra Software Solution in Bangalore, working on a payment-based application. Our clients were from Budapest, and we created a small interface for them where they can do transactions, which are called intraday transactions, foreign transactions. I was in that domain as an automation engineer, where I evaluated the manual test cases and, based on those manual test cases, created automation scripts using Robot Framework, which is built on Python libraries. After that, I joined TridentShoxx Labs in 2018. Currently, I am working with Core Maitri, which is the parent organization of TridentShoxx, and here I hold a Senior Quality Assurance Lead position. I have four members in my team and I manage them, plus I provide them all the support related to manual testing and also automation framework design. Currently we do web automation using Robot Framework, plus API automation, also with Robot Framework. Apart from that, we use JMeter for our performance testing, and other tools like Postman, which we have used for manually testing the APIs. So this is my overall background.

Give an overview of the test automation architecture you would use for a hybrid mobile application. Okay, so for mobile testing we have multiple platforms, like iOS and Android. In that case, what we are currently using is Robot Framework, which uses the AppiumLibrary in order to drive the mobile automation. The architecture is layered: first the test data, then the libraries are defined, then Robot Framework itself, then the tool and settings sections, and after that the execution. This architecture is basically data-driven. In Robot Framework we use keywords that drive all your automation, and the AppiumLibrary exposes a set of keywords to us, which we use directly in our test suites and which help with the automation.
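As a rough illustration of the data-driven layering described above, here is a minimal sketch using the Appium Python client (the AppiumLibrary mentioned in the answer wraps the same Appium protocol for Robot Framework). The capability values, element locators, and login data are hypothetical placeholders, not the actual project's configuration.

    # Minimal data-driven mobile test sketch using the Appium Python client
    # (pip install Appium-Python-Client). All names below are placeholders.
    # Classic desired-capabilities style is shown; newer client versions
    # pass an Options object instead.
    from appium import webdriver

    # Test data kept apart from test logic, as in the layered architecture above.
    LOGIN_CASES = [
        {"user": "alice", "password": "secret1", "should_pass": True},
        {"user": "alice", "password": "wrong", "should_pass": False},
    ]

    CAPS = {
        "platformName": "Android",
        "deviceName": "emulator-5554",
        "app": "/path/to/hybrid-app.apk",  # placeholder path
        "automationName": "UiAutomator2",
    }

    driver = webdriver.Remote("http://localhost:4723/wd/hub", CAPS)
    try:
        for case in LOGIN_CASES:
            # "accessibility id" is Appium's cross-platform locator strategy.
            driver.find_element("accessibility id", "username").send_keys(case["user"])
            driver.find_element("accessibility id", "password").send_keys(case["password"])
            driver.find_element("accessibility id", "login").click()
            landed = len(driver.find_elements("accessibility id", "home")) > 0
            assert landed == case["should_pass"], f"unexpected result for {case}"
    finally:
        driver.quit()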

Share a complex testing scenario you automated and the approach you used to validate the accuracy of the automated test result. So, basically, the complexity comes when there are sequential cases you try to execute. What that exactly means is, for example, you execute a condition and the output of that condition becomes an input to the other conditions, and that particular test case will provide you an output. When you are validating that against the database or against a third party, for example against Excel or the database, the validation and verification of that particular output becomes the tedious thing. The approach we used is that we marked our test data in such a way that the output of step one should have a defined shape: the output from the first test case should be in this form, and only then do you provide that input to the next test cases. By that, we had an easier approach to the database validations. This helped us sort out issues in the automation part.
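A minimal sketch of that chaining idea, with hypothetical step functions standing in for calls to the application under test: each step's output is shape-checked before it is fed to the next step, so a malformed intermediate result fails fast instead of corrupting the downstream database validation.

    # Hypothetical sketch: sequential test steps where each output is
    # shape-checked before becoming the next step's input.
    def create_order(user_id: str) -> dict:
        # Placeholder for a call to the application under test.
        return {"order_id": "ORD-1", "amount": 100, "status": "NEW"}

    def pay_order(order: dict) -> dict:
        # Placeholder for the follow-up call that consumes step one's output.
        return {"order_id": order["order_id"], "paid": True}

    def assert_shape(payload: dict, required: set, step: str) -> None:
        missing = required - payload.keys()
        assert not missing, f"{step}: output missing keys {missing}"

    order = create_order("user-42")
    assert_shape(order, {"order_id", "amount", "status"}, "create_order")

    receipt = pay_order(order)  # only reached if step one's shape was valid
    assert_shape(receipt, {"order_id", "paid"}, "pay_order")
    # A final check would compare receipt["order_id"] against the persisted
    # database record, as described in the answer.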

Explain how you would approach testing a mobile app's compatibility with different versions of operating systems. So, basically, currently we use DevOps to manage our test automation; the entire application is also managed using DevOps. In DevOps we have versioning, so every version will have a different set of scenarios and cases to be executed against it. What we currently have is a set of automation that targets one particular version, so the other versions are not lined up along with it: we have one piece of code that is only testing version A, then another piece of code testing version B. With that approach we do the CI/CD integrations, and because we are also using Git, which pulls from DevOps, every version gets pulled; if there is a bug or enhancement in a version, we pull that code into our system, make the change, and push it back into the same repository. Through CI/CD it continuously gets tested. So this is how we are testing the versions.
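One lightweight way to express that per-version split is a test matrix. The sketch below uses pytest parametrization with a hypothetical version list and a stubbed launch helper; in a real pipeline each version could equally be its own CI job, as the answer describes.

    # Hypothetical sketch: run the same smoke check against several OS versions.
    import pytest

    OS_VERSIONS = ["Android 12", "Android 13", "Android 14"]  # placeholder list

    def launch_app(os_version: str) -> bool:
        # Placeholder: a real helper would boot an emulator/device with this
        # OS version and start the app under test.
        return True

    @pytest.mark.parametrize("os_version", OS_VERSIONS)
    def test_app_launches(os_version):
        assert launch_app(os_version), f"app failed to launch on {os_version}"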

How do you ensure the accuracy of test data for executing tests, especially when testing complex business scenarios? See, every test data set will have a certain pattern: whether it should be JSON, or a dictionary type, or a list type. In that case, first you have to identify the requirement: what type of test data is it, what type of data should be provided, whether it is record-type data, JSON-type data, or dictionary-type data that has to be sent. So what we do is write a checklist for any test data: it has to be of this particular type. For example, if it should be a JSON type, our condition should check for that type of data. Only if the data is coming in the correct format will we take that particular data and provide it as input to our test case; we should not be accepting the data directly. This ensures that the data which is coming into the test case is correct.
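A minimal sketch of that gate, with a hypothetical required-key schema: the raw data must parse as JSON, be the expected type, and carry the expected keys before any test case consumes it.

    # Hypothetical sketch: reject test data that does not match the expected
    # JSON/dict shape before it reaches a test case.
    import json

    REQUIRED_KEYS = {"account_id", "amount", "currency"}  # placeholder schema

    def load_test_data(raw: str) -> dict:
        data = json.loads(raw)  # must be valid JSON at all
        if not isinstance(data, dict):
            raise TypeError("test data must be a JSON object (dict)")
        missing = REQUIRED_KEYS - data.keys()
        if missing:
            raise ValueError(f"test data missing keys: {missing}")
        return data

    payload = load_test_data('{"account_id": "A1", "amount": 250, "currency": "EUR"}')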

Describe a testing challenge you faced in QA automation in the past and the steps you took to overcome it. Okay, the major challenge I faced in testing, since I was in automation, was an issue with execution time. Basically, we have around 400 to 500 test cases, or test scripts, and running those test scripts overnight was what we were doing, but that was also problematic, because some of the cases would fail continuously, and coming in the next day, looking into those issues, and running everything again was a tedious job. How we resolved it was by incorporating parallel testing. Using parallel testing, we brought the overall execution down from around four to five hours to just about one hour. So by using parallel testing we reduced the total execution time, which helped a lot; it is very quick and fast.
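A small sketch of the idea: shard independent suites across worker processes so the wall-clock time approaches the longest shard rather than the sum (for Robot Framework suites this is typically done with a parallel runner such as pabot; the directory names and command below are placeholders).

    # Hypothetical sketch: run independent test suites in parallel processes.
    from concurrent.futures import ProcessPoolExecutor
    import subprocess

    SUITES = ["suites/api", "suites/web", "suites/perf", "suites/regression"]

    def run_suite(path: str) -> int:
        # Placeholder command; a real setup would give each worker its own
        # output directory so result files do not collide.
        return subprocess.call(["echo", "running", path])

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=4) as pool:
            codes = list(pool.map(run_suite, SUITES))
        print("failed suites:", sum(1 for rc in codes if rc != 0))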

Review the given JavaScript snippet and determine why the user profile does not update on the display. So, basically, in the let statement we have asked for the user profile: the profile element is fetched with document.getElementById using the user-profile ID. If that particular user hasn't logged in, we will not get this profile element, so line two is where it is not correct. Then, when user.isLoggedIn, profileElement.innerText is set from the name, as the logged-in name validation. But because in the let statement we called getElementById and the user-profile ID has not been provided there, this profileElement will not work, and due to that it will fail the case. Also, from the calling function we have only provided name and age; the user profile has not been sent, so there is no user detail. So this won't work: profileElement will not fetch any result, and this will fail the case.

Find the logic error in this Python snippet that causes the test to always pass, even when it is supposed to fail (test_login, with try: driver.find_element_by_id("login").click() inside). The issue is in the except block: in the print statement we have written "test passed with exception". So whenever there is a failure, it will still pass the test case and just show you the exception. Logically this is wrong, because the print gives you "test passed with exception e"; if any failure happens here, it prints the test case as passed and only reports the exception. Rather than printing, it should let the exception fail the test by re-raising it. Because we are printing "passed with exception e", this test will never fail; it will always give you a passed result.
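A minimal reconstruction of the anti-pattern being described, followed by the fix; the element ID and messages are assumptions, not the exact interview snippet.

    # Buggy version: the except block swallows the failure, so the test
    # "passes" no matter what happens.
    def test_login_buggy(driver):
        try:
            driver.find_element("id", "login").click()  # Selenium 4 style
            print("Test passed")
        except Exception as e:
            print(f"Test passed with exception {e}")  # bug: failure reported as a pass

    # Fixed version: re-raise so the test runner records a real failure.
    def test_login_fixed(driver):
        try:
            driver.find_element("id", "login").click()
        except Exception:
            print("Login click failed")
            raise  # let the test actually fail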

How do you integrate a Python-based automation framework with CI? We have Robot Framework, which is a Python-based automation framework; it is very robust and well defined, and it already works with CI systems. You can have DevOps integrated for the CI connection with Robot Framework, or you can use Jenkins to integrate your Robot Framework with CI, because it is Python-based and very robust. What we do is basically this: we have the automation scripts, and those scripts should be in a repository on DevOps. Then from DevOps you have to create a pipeline job and integrate it: you add all the configuration for your Python and library versions, point the pipeline to the location of your automation repository, and you also have to provide confidential variables like usernames and passwords inside the DevOps system. Once you provide those things, it starts to build the artifacts, then automatically goes to the repository location you pointed at, picks up the code, resolves the required variables, and continuously runs your job against the application. So continuous integration looks very easy and seamless in the DevOps setup we are currently using; only a one-time configuration is required. You keep adding your code into the repository, and the CI automatically picks it up and runs through it. If you are using Jenkins, it is likewise a one-time configuration you have to make.
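As a small sketch of that flow, here is a hypothetical CI entry-point script using Robot Framework's Python API: the pipeline injects the secrets as environment variables, the script runs the suite, and a non-zero exit code fails the build. The suite path and variable names are placeholders.

    # Hypothetical CI entry point for a Robot Framework suite.
    import os
    import sys
    from robot import run

    rc = run(
        "tests/",                # placeholder suite directory in the repository
        outputdir="results",
        variable=[
            # Credentials injected by the pipeline's secret store (placeholders).
            f"USERNAME:{os.environ['APP_USER']}",
            f"PASSWORD:{os.environ['APP_PASSWORD']}",
        ],
    )
    sys.exit(rc)  # robot.run returns the failure count; 0 marks the build green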

Explain the methodology you would apply for migration testing to ensure critical operations function post-migration. So whenever we are migrating, we first have to understand the issues we can face. For example, if we are migrating our automation from one framework to another, first we have to identify all the critical areas: what are the parameters it depends on, what are the variables and test data that the automation depends on. Identify the critical test cases and keep those test cases to one side. Start by migrating the easier ones first, and check how the migrations are going. Keep running your old critical cases on the old framework, and slowly migrate your cases across. Once the easy cases are stabilized and running fine, then you can slowly add the critical cases into the newly migrated framework as well, and build it up by running it continuously. Running the new framework continuously, alongside the old one, ensures your testing continues, and it also ensures that your migrations are working fine; it won't cause impact, and it lets you convert all your old code into the migrated new code. But always approach the simple cases first when migrating, because that is where we face a lot of issues and challenges; if those can be sorted out, we can easily migrate our critical cases and operations as well.

Given the requirement to automate performance testing, what tools would you choose? Performance testing can be done using multiple tools. Currently we have JMeter; there you can also use the JavaScript language to automate your web pages for performance. Otherwise, we can go for Robot Framework as well; it also lets us write automation code for performance checks. But currently we are using JMeter, where you can drive page-level performance by providing the HTTP(S) URL and applying load: you can also run stress with multiple users, say 10 users or 20 users, provide your browser credentials, provide your HTTP URL, and it will execute. It will give you the latency, the throughput, how much time it takes, and which pages failed. All of that we can do using JMeter, and there is also an editor in JMeter where you can write JavaScript code through which you can validate performance. Plus, through automation, you can use Robot Framework as well: there you can capture the start time and end time and just subtract them, and you get the performance validated against the KPIs defined for each page of the application; for example, it could be 2 seconds or 3 seconds, and based on that you can validate it. So currently we are using JMeter, and there is one more tool currently in the market which we are still exploring: Locust, which is also Python-based. We haven't got conclusive results yet, but Locust is also used for Python-based load testing.
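Since Locust is named as the Python-based option under evaluation, here is a minimal Locust sketch for comparison with the JMeter setup described above; the endpoint paths, task weights, and wait times are placeholders. It would be run with something like locust -f locustfile.py --host https://app.example.com.

    # Minimal Locust load-test sketch (pip install locust). Paths are placeholders.
    from locust import HttpUser, task, between

    class WebsiteUser(HttpUser):
        wait_time = between(1, 3)  # simulated think time between requests

        @task(3)
        def load_home(self):
            self.client.get("/")  # weighted 3x: most virtual users hit home

        @task(1)
        def load_reports(self):
            self.client.get("/reports")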