Sagar Mishra

Vetted Talent

Highly skilled Test Engineer with 5.6 years of experience and a background in manual testing, API testing, automation testing, and mobile/TV app testing. Proficient in test management tools such as JIRA and Target Process. Skilled in functional testing and Agile methodologies. Worked as an on-site SPOC in Belgium for 1.6 years, collaborating closely with clients and business analysts on requirement definition and identifying design bottlenecks upfront at the implementation stage. Aspires to continue enhancing testing skills and staying updated on industry trends.

  • Role

    QA Engineer

  • Years of Experience

    6 years

Skillsets

  • C
  • PLSQL
  • Confluence
  • Agile
  • Business Analyst
  • Postman
  • MS Office
  • JMeter
  • Test cases
  • Project Management
  • Automation
  • Jira
  • Testing
  • Selenium
  • ALM
  • Agile methodologies
  • Github
  • Eclipse
  • Regression Testing
  • API
  • LinkedIn
  • Automation Testing
  • Telecom
  • Filezilla
  • Design
  • R
  • Automated Testing
  • Mobile App testing
  • Functional Testing
  • API Testing
  • Manual Testing

Vetted For

8 Skills
  • Senior Quality Assurance Engineer (Hybrid - Gurugram): AI Screening
  • Result: 50%
  • Skills assessed: Excellent Communication Skills, executing test cases, Manual Testing, Mobile Apps Testing, Python, QA Automation, test scenarios, writing test scripts
  • Score: 45/90

Professional Summary

6 Years
  • Test Engineer

    Infosys Ltd. (January)
  • Senior Test Executive

    Infosys Ltd. (July)
  • Test Executive

    Infosys Ltd. (June)

Applications & Tools Known

  • Postman
  • Newman
  • JMeter
  • Selenium
  • Eclipse
  • MS Office
  • SoapUI
  • PL/SQL
  • PuTTY
  • GitHub
  • Confluence

Work History

6 Years

Test Engineer

Infosys Ltd. (January)
    Involved in the testing process for the European telecom client Proximus as both offshore and on-site SPOC, handling various testing and project management activities.

Test Executive

Infosys Ltd. (June)
    Involved in the testing process for the US-based healthcare client AETNA, ensuring high-quality software solutions. Responsibilities included manual testing, executing test cases, and identifying and documenting software defects.

Senior Test Executive

Infosys Ltd. (July)
    Involved in designing and executing test cases through both automation and manual testing for AETNA, as well as enhancing the automation suite.

Achievements

  • Received a client award for Best Test Execution as a fresher with one year of experience
  • Received Insta Awards three times in consecutive years for best and consistent performance in the testing team and for collaboration as an Agile team member
  • Worked on automation enhancement and was featured on the client company's official page

Education

  • Bachelor of Computer Application

    Dr. Virendra Swarup Institute of Computer Studies

AI-interview Questions & Answers

Could you give me more insight into your background? My name is Sagar Mishra, and I am from Kanpur, Uttar Pradesh. I have 5.6 years of work experience in the IT industry. I completed my graduation in 2018 and have been working with Infosys as a tester ever since. I started my career as a Test Executive and am currently working as a Test Engineer at Infosys. The different kinds of testing I have done so far include manual and automation testing, API testing, set-top box testing, performance testing, regression testing, and end-to-end testing. For the last two years I have been working at the on-site location in Belgium, which gave me the opportunity to work directly with the client and build a client relationship. The test management tools I am familiar with are JIRA, Target Process, and ALM. For API testing, I have worked with Postman, Newman, and JMeter. For mobile app testing, I have handled the automation part with Appium, and for web automation I have used Selenium with Java. That is a brief introduction about me.

Discuss a strategy for implementing continuous testing in the DevOps life cycle from the role of a QA, and share the approach. The DevOps life cycle is built around the CI/CD process, that is, continuous integration and continuous deployment. It starts when code is integrated by the development team and pushed to the testing environment, where we test it; it is then deployed to the production environment, and the whole cycle starts again. Different tools are used across the DevOps cycle, and as testers or QA engineers we focus mostly on Jenkins and Maven. Maven helps build the code and push it to GitHub, Jenkins pulls the code, and we run the tests at that stage; the results are pushed back to GitHub, which acts as our version control. That is how the whole DevOps process works: development, testing, and deployment form a continuous loop, which is why a DevOps life cycle diagram is always drawn as an infinity symbol with continuous integration and continuous deployment going on and on. This also connects with Agile methodology, because in Agile we always focus on integrating, or rather on close collaboration between, the development and the testing work.
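
As a minimal illustration of the kind of automated check a CI stage can trigger after each deployment, here is a sketch in Python; the base URL, endpoints, and test names are hypothetical and not taken from the answer above.

    # smoke_test.py - a post-deployment smoke check that a Jenkins stage
    # could run with "pytest smoke_test.py" after every deployment.
    import requests

    BASE_URL = "https://staging.example.com"  # hypothetical environment URL

    def test_health_endpoint_is_up():
        # A failing assertion fails the pipeline stage, blocking promotion
        # of the build to the next environment.
        response = requests.get(f"{BASE_URL}/health", timeout=10)
        assert response.status_code == 200

    def test_login_page_loads():
        response = requests.get(f"{BASE_URL}/login", timeout=10)
        assert response.status_code == 200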

In Python, how would you implement a function to validate server response times within defined limits during performance testing? During performance testing, I can give an example of how we combine Python with Selenium. Selenium provides different kinds of waits that we can apply. Suppose a server takes a certain amount of time to return data or to load a web page; in that case we can specify that until a particular element of that page is visible, we should not move to the next step. Among the different waits, we should use an explicit wait here, so that we can define the expected condition first; once that expected condition is met, meaning we have received a proper response from the server, we move on to the next step.
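
A minimal sketch of such a check in Python, assuming the requests library; the URL and the 2-second threshold below are illustrative, not taken from the answer above.

    import time
    import requests

    def validate_response_time(url, max_seconds=2.0):
        """Return True if the server responds successfully within max_seconds."""
        start = time.perf_counter()
        response = requests.get(url, timeout=max_seconds + 5)
        elapsed = time.perf_counter() - start
        # Treat non-2xx responses as failures, not just slow ones.
        return response.ok and elapsed <= max_seconds

    # Example usage inside a test:
    # assert validate_response_time("https://staging.example.com/api/customers", 2.0)

For page loads driven through Selenium, the explicit wait mentioned in the answer (a WebDriverWait combined with an expected-conditions check) plays the same role on the UI side.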

When designing a high-level system for QA automation, what design considerations would you take into account? For automation there are certain things we need to look out for. First, suppose we have some 10,000 manual test cases and we need to derive the automation candidates from them. We start by checking the business-level priority of the test cases and pick the high-priority ones first. Then we check how many of those 10,000 cases can actually be automated; a few of them may not be automatable, and we cannot automate all 10,000, so we check their compatibility with our framework or whatever automation approach we are using. We also look at test data dependency: how many of the cases depend on test data, which team we need to get that data from, and how that will work. Next, we check the platforms the automation needs to run on, whether it is web automation or mobile automation. If it is only the web, that determines which technologies to use; if it needs to cover web, API, and mobile, we choose the automation tools accordingly, because Selenium alone cannot automate the mobile testing part. So we need to be careful about the technologies and tools we choose. Finally, we should gather information about the project at the initial stage: with module and domain knowledge it becomes much easier to filter the automation cases and design a high-level system for QA automation. Those are the things we need to take care of while designing such a system.
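
The selection criteria described above can be expressed as a small filter. The following Python sketch uses hypothetical attributes (priority, automatable, needs_external_data) purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        name: str
        priority: str              # "high", "medium", or "low" business priority
        automatable: bool          # compatible with the chosen framework/tooling
        needs_external_data: bool  # depends on test data owned by another team

    def select_automation_candidates(cases):
        """Pick high-priority, automatable cases and flag data dependencies."""
        candidates = [c for c in cases if c.priority == "high" and c.automatable]
        data_dependent = [c for c in candidates if c.needs_external_data]
        return candidates, data_dependent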

Covering both manual and automation testing, how would you test an application's response under peak load conditions? For the manual part first: a peak load condition means we are testing a functionality while n number of users are using it. Suppose we have a web application and n number of users are trying to log in; we can log in with multiple users at the same time and check how the application responds. A better manual approach is to push a large volume of data, for example an Excel sheet containing n number of customer records, and then check how the system responds. For the automation part, we can test the login feature by feeding an Excel sheet with n number of customers and logging them in at the same time from our automation framework, supplying all the related login information. Then, when we trigger the build via Maven and run the test runner file, we check how the responses come back: whether there are login errors, whether the server is delaying, or whether the server itself is collapsing. Those are the kinds of validations we can perform.
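
As a rough sketch of the automated side, the snippet below fires a configurable number of concurrent login requests and reports slow or failed responses; the endpoint, credentials, and thresholds are hypothetical, not taken from the answer above.

    import concurrent.futures
    import time
    import requests

    LOGIN_URL = "https://staging.example.com/login"  # hypothetical endpoint

    def attempt_login(user_id):
        """Send one login request and return (status_code, elapsed_seconds)."""
        payload = {"username": f"user{user_id}", "password": "test-password"}
        start = time.perf_counter()
        response = requests.post(LOGIN_URL, data=payload, timeout=30)
        return response.status_code, time.perf_counter() - start

    def run_peak_load(concurrent_users=100):
        # Launch all logins at roughly the same time and collect the results.
        with concurrent.futures.ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            results = list(pool.map(attempt_login, range(concurrent_users)))
        failed = [r for r in results if r[0] != 200]
        slow = [r for r in results if r[1] > 2.0]
        print(f"{len(failed)} failed logins, {len(slow)} responses slower than 2s")

    # run_peak_load(100)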

Share a complex scenario you automated and the approach you took. The complex part was not really related to the automation itself; in my last project it was more a matter of exploratory testing. We did not have the actual requirements defined, and no one was fully knowledgeable about them, so we had to reach out to the solution architect for the actual requirements. It was very hectic to automate anything until the requirements were clarified by the solution architect. So we would automate part of a particular feature and then review it with the solution architect; they would give feedback, and if we had to make certain changes we would make them, then go back to the solution architect and verify again. This cycle kept repeating because the solution architect also needed to check with the product team and the product owners, who were themselves not entirely sure what the expected output should be. Those are the kinds of difficulties I have faced, but with collaboration and integration within the team we could work through them.

Sorry. Not sure about the answer.

That function is not directly

Basically, when we are doing automation, we need to make sure the XPaths we use to locate the different elements are correct. We should be specific about each XPath and use the most specific locators, such as ID or name, wherever possible, so that we are completely sure our locators are correct. As far as possible, we should also keep our common functions in utilities that can be reused many times across different projects. For example, login and logout functionality, taking a screenshot, enabling or disabling a web element, or checking whether a particular element is enabled: those belong in common functions so they can be used every time. We need to make sure that whatever automation code we deliver is reusable, so that a customer or anyone else can reuse it again and again. The naming conventions we follow in the code should also be proper and easy to understand, so that if my framework is used by someone else, they can easily follow what a particular piece of code is doing just by reading the comments. Regarding test data, it should never be hard-coded; it should always be parameterized. That way, whenever the project changes or someone asks for new testing, we only need to change the test data, not the code. So we should parameterize all the test data, or keep it in a separate Excel sheet, so that any time the test data needs to change we simply update or read from the Excel file. Those are the practices we need to follow.
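
To illustrate the parameterized-test-data point, here is a short pytest sketch that reads login rows from an Excel file via openpyxl; the workbook name and column layout are hypothetical.

    import openpyxl
    import pytest

    def load_login_rows(path="login_data.xlsx"):
        """Read (username, password) pairs from the first worksheet, skipping the header row."""
        sheet = openpyxl.load_workbook(path).active
        return [(row[0].value, row[1].value) for row in sheet.iter_rows(min_row=2)]

    @pytest.mark.parametrize("username,password", load_login_rows())
    def test_login(username, password):
        # The actual login steps would live in a shared utilities module
        # (common functions such as login, logout, take_screenshot, etc.).
        assert username and password

Changing the test data then means editing the Excel file only, with no change to the test code.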

How do you assess the testability of a new feature and plan your testing? If a new feature comes into the picture, I first read the requirement document. Then I have a call with the solution architect, or with the lead or a senior within my team, to confirm my understanding of the feature; going through the solution architect is the more reliable route. We discuss my understanding and I ask whether I am missing anything or whether there is any feedback. Once I have that, I draft the high-level scenarios and high-level test cases and send them to my test lead for verification. When the high-level test cases have been verified by the test lead or the solution architect, we move on to detailed test steps with the expected and actual results as per the SRS document or the functional design document. Once that is done, we start test execution, keeping in mind what can be covered by manual testing and what can be automated; automation candidates are marked as such, but we start with manual testing first, beginning with the basic functionality of the feature and then completing all the testing. During testing, if we find a bug where the expected and actual results do not match, we raise it with the development team and follow up on it regularly. Once the bug is fixed, we retest that particular functionality and close the bug. After the bug fixes, we also run a round of regression to confirm that existing functionality has not broken because of the fix, and then we close the feature. Before closing it, we give a demo to the solution architect or the client of whatever we have automated and tested manually, and once we get confirmation from the client, we close the feature.

For migration testing, to ensure critical operations are functional, we basically need to make sure that the core functionality of the application is working. We also need to make sure that, after migration, the website or application works on all platforms, whether it is a web platform or a mobile platform, and across the different mobile variants, whether iOS or Android operating systems. Those are the precautions we can take. Functionality-wise, post-migration, everything should keep working across all the flows.