Vetted Talent

Sagar Mishra


Highly skilled Test Engineer with 5.6 years of experience and a background in manual testing, API testing, automation testing, and mobile/TV app testing. Proficient in test management tools such as JIRA and Target Process. Skilled in functional testing and Agile methodologies. Spent 1.6 years as the on-site SPOC in Belgium, working closely with clients and business analysts on requirement definition and on identifying design bottlenecks upfront, ahead of implementation. Aspires to continue enhancing testing skills and staying current with industry trends.

  • Role

    QA Engineer

  • Years of Experience

    6 years

Skillsets

  • C
  • PL/SQL
  • Confluence
  • Agile Methodologies
  • Business Analysis
  • Postman
  • MS Office
  • JMeter
  • Test Case Design
  • Project Management
  • Jira
  • Selenium
  • ALM
  • GitHub
  • Eclipse
  • Regression Testing
  • API Testing
  • Automation Testing
  • Telecom
  • FileZilla
  • Mobile App Testing
  • Functional Testing
  • Manual Testing

Vetted For

8 Skills
  • Senior Quality Assurance Engineer (Hybrid - Gurugram), AI Screening: 50%
  • Skills assessed: Excellent Communication Skills, executing test cases, Manual Testing, Mobile Apps Testing, Python, QA Automation, test scenarios, writing test scripts
  • Score: 45/90

Professional Summary

6 Years
  • Test Engineer

    Infosys Ltd. (January)
  • Senior Test Executive

    Infosys Ltd. (July)
  • Test Executive

    Infosys Ltd. (June)

Applications & Tools Known

  • Postman
  • Newman
  • JMeter
  • Selenium
  • Eclipse
  • MS Office
  • SoapUI
  • PL/SQL
  • Putty
  • GitHub
  • Confluence

Work History

6 Years

Test Engineer

Infosys Ltd. (January)
    Involved in the testing process for the European telecom client Proximus as both offshore and on-site SPOC, handling various testing and project-management activities.

Test Executive

Infosys Ltd. (June)
    Involved in the testing process for the US-based healthcare client AETNA, ensuring high-quality software solutions. Responsibilities included manual testing, executing test cases, and identifying and documenting software defects.

Senior Test Executive

Infosys Ltd. (July)
    Designed and executed test cases through both automation and manual testing for AETNA, and worked on enhancing the automation suite.

Achievements

  • Received a client award for Best Test Execution as a fresher with one year of experience
  • Received Insta Awards three times, in consecutive years, for best and consistent performance in the testing team and for collaboration as an Agile team member
  • Worked on automation enhancement and was featured on the client company's official page

Education

  • Bachelor of Computer Application

    Dr. Virendra Swarup Institute of Computer Studies

AI Interview Questions & Answers

Could you give me more insight into your background? Okay. So, my name is Sagar Mishra, and I'm from Kanpur, Uttar Pradesh. I have 5.6 years of work experience in the IT industry. I completed my graduation in 2018, and since then I've been working with Infosys as a tester. I started my career as a test executive and am currently working as a test engineer at Infosys. Regarding the different kinds of testing I have done so far: manual and automation testing, API testing, black-box testing, performance testing, regression testing, and end-to-end testing. For the last 2 years, I have been working on-site in Belgium, which has given me the opportunity to work with clients and build relationships with them. As for tools, the test management tools I'm familiar with are Jira, Target Process, and ALM. For API testing, I have worked with Postman, Newman, and Zapier. For mobile app testing, I have done the automation part with Appium. And for web automation, I've used Selenium with Java. So, that is a brief introduction about me.

So, the strategy for implementing continuous testing in the DevOps life cycle centers on the CI/CD process, which includes continuous integration and continuous deployment. The process starts when code is integrated by the development team and pushed to the testing environment, where it is tested and then deployed to the production environment. Different tools are used along the DevOps cycle. The main part is the Jenkins process: Maven is used to build the code and push it to GitHub, Jenkins then pulls the code, and testing is performed on all parts. The code is then pushed back to GitHub, which serves as version control. This forms an infinite loop in which development, testing, and deployment are always in progress. The concept is also part of the Agile methodology, which emphasizes collaboration between development and testing teams.

In Python, how would you implement a function to validate server response time within defined limits during performance testing? So, during performance testing, I can give an example of how we combine Python with Selenium. Selenium provides different kinds of waits, and we can use those. Suppose a server is taking x amount of time to return data or to load a web page. In that case, we can specify that until a particular element of the web page is visible, we should not go to the next step. There are different waits for this; here, we should use an explicit wait, so that we can set an expected condition first. Once that expected condition has been met, that is, we got a proper response from the server, we move on to the next step.
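The timing idea above can be sketched as a small Python helper. This is a minimal illustration, not from the transcript: the function and parameter names are made up, and the callable passed in would, in practice, wrap a real page load or API call.

```python
import time

def check_response_time(action, limit_seconds):
    """Time a callable (e.g. a wrapped page load or API call) and
    report whether it finished within the given limit."""
    start = time.perf_counter()
    action()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= limit_seconds

# Stand-in action: sleep briefly instead of making a real server call.
elapsed, within_limit = check_response_time(lambda: time.sleep(0.05),
                                            limit_seconds=2.0)
```

In a Selenium context, the `action` would typically be a `WebDriverWait(...).until(...)` call with an expected condition, so the measured time reflects when the element actually became visible.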

How would you design a high-level system for QA automation, and what considerations or design patterns come into play? For automation, there are certain things we need to look out for. First of all, suppose we have some 10,000 manual test cases and need to derive the automation candidates from them. We first check the business-level priority of the test cases and pick the high-priority ones. Then we check how many of those 10,000 cases can actually be automated; a few of them may not be automatable, and we cannot automate all 10,000, so we check compatibility with our framework. We also look at test data dependency: how many cases depend on test data, which team we need to get that data from, and how it will work. Next, whether it is web automation or mobile automation, we need to check the platforms on which the automation must run. If it is web only, that dictates one set of technologies; if it needs to cover web, API, and mobile, we choose our tools accordingly, because Selenium alone cannot automate the mobile part. So we need to be careful about the technologies and tools we pick. We also need to gather information about the project at the initial stage: with module and domain knowledge, it becomes easy to filter the automation cases and design a high-level system for QA automation. Those are the things we need to take care of while designing a high-level system for QA automation.
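The selection criteria described above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical field names (`priority`, `automatable`, `data_dependency` are not from the transcript): keep high-priority cases that are technically automatable, and surface any test-data dependencies up front.

```python
# Hypothetical candidate list; in practice this would come from the
# test management tool (e.g. an export from Jira or ALM).
test_cases = [
    {"id": "TC-001", "priority": "high", "automatable": True,  "data_dependency": None},
    {"id": "TC-002", "priority": "low",  "automatable": True,  "data_dependency": "billing team"},
    {"id": "TC-003", "priority": "high", "automatable": False, "data_dependency": None},
]

def automation_candidates(cases):
    """Filter for high-priority cases that can technically be automated."""
    return [c for c in cases if c["priority"] == "high" and c["automatable"]]

def data_dependencies(cases):
    """List the teams we must request test data from."""
    return sorted({c["data_dependency"] for c in cases if c["data_dependency"]})

candidates = automation_candidates(test_cases)   # only TC-001 qualifies
teams = data_dependencies(test_cases)
```

The same filtering idea scales from three cases to ten thousand; the point is that prioritisation and feasibility are checked before any automation code is written.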

Covering both manual and automation testing, how would you test an application's response under peak-load conditions? For the manual part first: a peak-load condition means we are testing functionality that n number of users are using. Suppose we have a web application and n users are trying to log in; we can log in as multiple users at the same time and check how the application responds. Or, manually, a better approach would be to push bulk data, for example customer records: we can prepare an Excel sheet holding data for n customers and then check how the system responds. For the automation part, we can test the login feature by pushing an Excel sheet with a number of customers and logging in at the same time from our automation framework, where we push the Excel with all the related login information. Then, when we run the test runner file via Maven, we check how the responses come back: whether any login fails, whether the server delays, or whether the server itself collapses. Those are the kinds of validations we can perform.
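The concurrent-login idea above can be sketched with Python's standard thread pool. `attempt_login` here is a hypothetical stand-in; a real test would call the application's login endpoint and record success, failure, or timeout.

```python
from concurrent.futures import ThreadPoolExecutor

def attempt_login(user):
    # Stand-in for a real login call against the application under test.
    return {"user": user, "status": "ok"}

def simulate_peak_load(users, max_workers=10):
    """Fire login attempts for all users concurrently and collect results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(attempt_login, users))

results = simulate_peak_load([f"user{i}" for i in range(50)])
failures = [r for r in results if r["status"] != "ok"]
```

For serious load levels a dedicated tool such as JMeter (listed in the tools above) is the better fit; a sketch like this is useful for quick, scripted concurrency checks inside an existing automation framework.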

What is the most complex scenario you have automated, and what approach did you take? The complex part was not the automation itself; in my last project, it was more exploratory testing. We didn't have the actual requirement defined, and no one was really knowledgeable about it, so we needed to reach out to the solution architect for the actual requirement. It was very hectic to automate until the solution architect could firm up the requirements. So we would automate a few parts of a particular feature, then check with the solution architect; they would give feedback, we would make certain changes, and then go back to the solution architect again. This process went on because the solution architect also needed to check with the product team or the POs, and they weren't fully aware of what the expected output should be. Those are the kinds of difficulties I have faced, but with collaboration across the team, we could work through them.

I'm not sure about the answer.

It's not directly related to what we're doing here.

So, basically, when we are doing the automation part, we need to make sure the XPaths we use to locate different elements are correct. We need to be specific about the XPath and use the most specific locators, such as ID or name, wherever possible, so we are 100% sure our locators are correct. Then, as far as possible, we need to maintain our common functions in utilities that can be reused many times across different projects: login, logout, taking a screenshot, enabling or disabling a web element, checking whether a particular element is enabled. Those kinds of things go into common functionality so they can be reused every time; whatever automation code we provide should be reusable, so a customer or anyone else can reuse it again and again. Also, the naming conventions we follow in our code should be proper and easy to understand, so that if my framework is used by someone else, they can easily follow what is going on just by reading the comment lines. And then the debugging and test data part: first of all, test data should not be hard coded. We need to parameterize it, so that any time the project comes back and someone asks for new testing, we just change the test data, not the code. We parameterize all the test data, or keep it in a separate Excel sheet, so that whenever the test data needs to change, we only edit the sheet. Those are the kinds of practices we need to follow.
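The parameterised-test-data practice above can be sketched in plain Python. The CSV is inlined via `StringIO` purely for the example; in a real framework it would be an external file (or Excel sheet, as described), so changing data never means changing code.

```python
import csv
import io

# Hypothetical credentials file; a real suite would load this from disk.
CSV_DATA = """username,password
alice,secret1
bob,secret2
"""

def load_test_data(fileobj):
    """Read test rows from a CSV file object into dictionaries."""
    return list(csv.DictReader(fileobj))

rows = load_test_data(io.StringIO(CSV_DATA))
# Each row can now drive one data-parameterised login test.
```

Adding a new login scenario then means appending a row to the data file, with no change to the automation code itself.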

I assess the testability of a new feature by reading the requirement document first. Then I have a call with my solution architect, or within my team with the lead or a senior, to say: this is my understanding of the feature; can you tell me if I'm lacking somewhere, or is there any feedback? Basically, going through the solution architect is the more reliable process. Once I have that, I draft the high-level scenarios and high-level test cases, send them back to my test lead, and have them verified. Once the high-level test cases have been verified by the test lead, we go for the detailed test steps with expected and actual results as per the SRS or the functional design document. Once that has been done, we start test execution, keeping in mind what can be manual testing and what can be automation testing; any automation test cases are marked as such. We start with manual testing of the basic functionality of the feature and then complete all the testing. During testing, we try to find any bug where the feature does not behave as expected and the actual results do not match; we push it to the development team and follow up regularly. Once we have the fix, we retest that particular functionality and close the bug. After testing the bug fixes, we do a round of regression to ensure that existing functionality is not broken by the fix, and then we close the feature. Before closing it, we demo to the solution architect or the client whatever we have automated and tested manually, and once we get confirmation from the client, we can close that feature.

How would you approach migration testing to ensure critical operations remain functional? For migration testing, basically, we need to make sure that the core functionality of the application is working. We also need to make sure that after migration the website or application works on all platforms: web, mobile, and the different variants within mobile, whether iOS or Android. Those are the kinds of precautions we can take, verifying that post-migration the application works functionally at all times and on all systems.
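The approach above can be sketched as a before/after comparison of the same critical smoke checks. The check names here are hypothetical; real checks would exercise the application's key flows (login, search, checkout, and so on) on each target platform.

```python
def run_smoke_checks(checks):
    """Run each named check and record pass/fail."""
    return {name: check() for name, check in checks.items()}

# Hypothetical critical operations; each lambda stands in for a real test.
critical_checks = {
    "login": lambda: True,
    "search": lambda: True,
    "checkout": lambda: True,
}

pre_migration = run_smoke_checks(critical_checks)
post_migration = run_smoke_checks(critical_checks)

# Anything that passed before migration but fails afterwards is a regression.
regressions = [name for name in critical_checks
               if pre_migration[name] and not post_migration[name]]
```

Running the identical check set per platform (web, iOS, Android) gives a per-platform regression list, which matches the cross-platform concern described above.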