
Aakansha Tiwari

Vetted Talent

Experienced Software Test Engineer with a demonstrated history of working in the travel, payments, and banking domains. Skilled in test automation, Java, test planning, regression testing, Selenium, API testing, and test case development. Strong engineering professional with a Master of Technology (M.Tech) in Information Technology from C-DAC, Noida.

  • Role

    Principal Engineering Manager in Test, QA Engineer

  • Years of Experience

    6.9 years

Skillsets

  • SQL
  • Spring Boot
  • Regression Testing
  • Load Testing
  • Gradle
  • Functional Testing
  • Elasticsearch
  • Cypress
  • C++
  • REST Assured
  • Kubernetes
  • Kibana
  • IntelliJ
  • C#
  • Azure DevOps Server
  • Selenium - 3 Years
  • TestNG
  • Cucumber
  • Jira
  • JMeter
  • API Testing
  • JavaScript
  • Postman
  • Java
  • Visual Studio
  • Jenkins
  • Kafka
  • Redis
  • Maven

Vetted For

8 Skills

  • Role: Senior Quality Assurance Engineer (Hybrid - Gurugram) - AI Screening
  • Result: 54%
  • Skills assessed: Excellent Communication Skills, executing test cases, Manual Testing, Mobile Apps Testing, Python, QA Automation, test scenarios, writing test scripts
  • Score: 49/90

Professional Summary

6.9 Years
  • Jul, 2024 - Present (1 yr 3 months)

    Principal Engineering Manager In Test

    Freecharge
  • Oct, 2022 - Jul, 2024 (1 yr 9 months)

    Senior Quality Engineer

    Paytm Payments Bank
  • Nov, 2021 - Oct, 2022 (11 months)

    Quality Analyst Consultant

    Thoughtworks
  • Jun, 2020 - Nov, 2021 (1 yr 5 months)

    Quality Assurance Engineer

    OnceHub Technologies
  • Jan, 2019 - Apr, 2020 (1 yr 3 months)

    Software Engineer

    Fareportal India

Work History

6.9 Years

Principal Engineering Manager In Test

Freecharge
Jul, 2024 - Present (1 yr 3 months)
    Created and maintained API automation frameworks using HttpClient, TestNG and Gradle, ensuring efficient and reliable testing of backend features. Developed and implemented testing strategies, test plans, and frameworks to improve the efficiency and effectiveness of the QA process. Coordinated with cross-functional teams, including development, product, and operations, to define quality standards and ensure alignment with project goals. Led and managed a team of 10 QA engineers in a cross-functional team, overseeing the execution of both manual and automated testing across multiple platforms. Actively participated in sprint planning, reviews, and retrospectives to provide input on testing progress and suggest improvements. Experienced in maintaining UI automation using Cypress.

Senior Quality Engineer

Paytm Payments Bank
Oct, 2022 - Jul, 2024 (1 yr 9 months)
    Created and maintained API automation frameworks using Spring Boot, TestNG and Maven. Performed end-to-end backend testing, including integration testing, functionality testing, UAT, and performance testing. Performed Kafka testing to ensure reliable communication and data processing between microservices. Conducted Elasticsearch testing for effective data searching. Integrated automation frameworks with Jenkins for continuous integration. Conducted data validation and tested integrity using SQL queries. Identified, documented, and reported software defects.

Quality Analyst Consultant

Thoughtworks
Nov, 2021 - Oct, 2022 (11 months)
    Prepared and executed test cases, test scenarios, and test data to ensure comprehensive test coverage. Conducted API testing and contributed to REST API automation using REST Assured framework. Executed smoke, integration, and regression tests across various platforms. Created and implemented Test Plan and Test Strategy Document. Conducted root cause analyses on recurring defects, resulting in effective corrective actions.

Quality Assurance Engineer

OnceHub Technologies
Jun, 2020 - Nov, 2021 (1 yr 5 months)
    Created and executed test plans, test cases, and test scenarios. Designed and developed automation scripts using Java within the Cucumber framework. Conducted exploratory and automation testing using Selenium WebDriver, TestNG, Maven, and Cucumber. Managed API testing using the Postman tool. Maintained test reports, logging defects and prioritizing them. Conducted log monitoring and database testing.

Software Engineer

Fareportal India
Jan, 2019 - Apr, 2020 (1 yr 3 months)
    Prepared and executed test cases, test scenarios, and test data. Conducted sanity, smoke, and regression testing. Tested APIs and developed a hybrid automation framework using .NET, C#, and NUnit. Conducted DB testing using SQL queries and end-to-end testing both manually and via automation.

Major Projects

1 Project

Class Imbalance Problem

    To analyze and propose a solution for the class imbalance problem.

Education

  • Master of Technology (M.Tech), Information Technology

    C-DAC, Noida (2019)
  • Bachelor of Technology, Computer Science and Engineering

    PDM College of Engineering for Women, Haryana (2015)

AI-interview Questions & Answers

Hi, I'm Aakansha. I have around 5 years of experience in QA testing, be it manual or automation. I have been working with Paytm for the last 1.5 years. I have working experience in Selenium for UI automation, using Selenium with BDD, Cucumber, TestNG, and Maven, and for backend automation testing I am using REST Assured along with Spring Boot and Lombok. My end-to-end responsibility at Paytm Payments Bank is mainly to handle any API testing that we are doing, along with integration testing and UAT, and to automate all the scenarios that can possibly increase the coverage of the dev code.

Okay, so from a QA perspective, when we raise a bug we make sure that the bug is valid, and we mention the priority and the severity of the bug. We include the steps to reproduce, whether it is related to data or to some critical functionality, so that the dev can reproduce it. If we can identify the impact, then we mention the impact as well. So those are the things we make sure to highlight in the report: the steps to reproduce, the impact, and the description of the bug are the main key points of any bug report, whether I am raising it as a QA person or receiving one from someone else. How we prioritize the bugs depends, first of all, on the product. If it is a critical bug, then we need to prioritize it as P0. If it is something that does not have a very high impact on the application's functionality, then the product team can take a call on how to prioritize it, or we can go by the severity of the bug. Severity is one of the key factors we can consider to prioritize any work: if it is highly severe for the system, we need to fix it as soon as possible; if it is less severe, we can delay it. So depending on the severity and priority of the bugs, we take action on them.
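
As a rough sketch of the triage logic described above (the class, fields, and sample bugs here are hypothetical, not from the interview), a bug record can carry the key fields mentioned, with severity driving the default ordering:

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of severity-driven bug triage, as described above.
public class BugTriage {

    enum Severity { CRITICAL, HIGH, MEDIUM, LOW }   // impact on the system
    enum Priority { P0, P1, P2, P3 }                // urgency of the fix

    // A bug report carries the key fields mentioned in the answer:
    // description, steps to reproduce, impact, severity, and priority.
    record Bug(String description, List<String> stepsToReproduce,
               String impact, Severity severity, Priority priority) {}

    public static void main(String[] args) {
        List<Bug> backlog = List.of(
            new Bug("Payment fails on retry", List.of("Pay", "Retry"),
                    "Blocks checkout", Severity.CRITICAL, Priority.P0),
            new Bug("Tooltip typo", List.of("Hover on icon"),
                    "Cosmetic only", Severity.LOW, Priority.P3));

        // Most severe bugs are worked first; priority breaks ties.
        backlog.stream()
               .sorted(Comparator.comparing(Bug::severity)
                                 .thenComparing(Bug::priority))
               .forEach(b -> System.out.println(b.priority() + " " + b.description()));
    }
}
```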

Okay, so while testing any mobile application's compatibility, we make sure the different versions of that particular operating system are compatible. Sometimes we do not need to check every version of the device, or every Android OS version for a particular device, but we make sure the different OS versions are compatible with the new changes or with the app: the UI is not breaking and the functionality is not breaking on the current versions, basically. We can sometimes ignore the older versions where we are not providing support, but we verify that the current versions are working correctly with the new changes.

So, in my current organization, there was a feature for updating a user's primary document with a hash value. The complexity of the system is that we have to use two different kinds of databases with two different microservices and integrate them, and then a third service will use that particular hash number in its system. So to automate that, I needed to make a connection with all three services. Because not every service is up and running in the testing environment all the time, we had to mock one of the services so that we get a response every time and our test cases keep working in automation, along with one positive test scenario where we actually hit the service and it gives the correct answer every time. The mocking part of that one service was the challenging thing. So first we went through all the testing that we do manually and identified a scenario where we could actually hit the service all the time, so that we had a correct response; for the other scenarios we used mocking. That is how we proceeded to automate all the features, because it was not very feasible to keep all the services up and running all the time when we are working in a microservices fashion.
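
A minimal sketch of the stubbing approach described, assuming WireMock for the mock service and REST Assured for the client call; the port, endpoint, and payload are hypothetical:

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import org.testng.annotations.*;

public class DocumentHashTest {
    private WireMockServer hashService;

    @BeforeClass
    public void startStub() {
        // Stand-in for the downstream service that is not always up in the
        // test environment: always return a fixed, valid-looking response.
        hashService = new WireMockServer(8089);
        hashService.start();
        hashService.stubFor(get(urlPathEqualTo("/users/42/document-hash"))
                .willReturn(okJson("{\"hash\": \"abc123\"}")));
    }

    @Test
    public void primaryDocumentHashIsServed() {
        given()
            .baseUri("http://localhost:8089")
        .when()
            .get("/users/42/document-hash")
        .then()
            .statusCode(200)
            .body("hash", equalTo("abc123"));
    }

    @AfterClass
    public void stopStub() {
        hashService.stop();
    }
}
```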

Okay, so to test any service manually at a peak load is not very feasible, but we can do it using Postman: we can use a CSV file, if the system supports that, and run a particular API continuously for some time to see the variation in the response time. For automation purposes, we can use JMeter and see the performance of the API we are testing under a peak-load condition. What I verify in a peak-load condition depends on the system: its ability to handle the stress and the load, and the concurrent users at a particular time. To test those things manually is not very feasible, but for testing purposes we can take the help of the developer. Suppose I have an application which can concurrently handle a thousand users in production; for testing it is not feasible to have a thousand users hitting the application, so we can ask the developer to reduce the limit to 700, and then try it via automation or manually, or use both approaches in parallel, to test the load at peak hours and see how the application behaves. If I get errors, I can report them to the developer and he can fix them for a later build of the application. One more thing we can do is an A/B deployment: deploy that particular feature to one or two servers and test how the application behaves for that particular peak condition, to see the differences in the load.
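
As a rough illustration of driving concurrent requests from code (a hypothetical mini load driver; in practice JMeter would handle this), a fixed thread pool can fire simultaneous requests and print each response time:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.*;

// Hypothetical mini load driver: fire concurrent requests at one endpoint
// and print each response time, approximating a peak-load condition.
public class PeakLoadProbe {
    public static void main(String[] args) throws Exception {
        int concurrentUsers = 50;                   // scaled-down "peak"
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/api/health")).GET().build();

        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        CountDownLatch done = new CountDownLatch(concurrentUsers);

        for (int i = 0; i < concurrentUsers; i++) {
            pool.submit(() -> {
                long start = System.nanoTime();
                try {
                    HttpResponse<Void> resp =
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                    long ms = (System.nanoTime() - start) / 1_000_000;
                    System.out.println(resp.statusCode() + " in " + ms + " ms");
                } catch (Exception e) {
                    System.out.println("request failed: " + e.getMessage());
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
    }
}
```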

Okay, so in any automation framework we can ensure the reusability and maintainability of the code. For reusability, we can use the Page Object Model: in my last organization, in our UI automation framework we were using the Page Object Model, keeping similar code in page classes (Java classes) and reusing it again and again. Or we can create generic methods in our framework, so that you have a single method you can use everywhere according to your convenience, and you can change or parameterize the method according to your requirement; that will increase the reusability of the code. Maintenance depends on how frequently you are making changes in your application: you have to keep making the same changes in your automation framework as well, so that you are up to date with the new changes every time a release goes out and you are not missing any bug or any functionality that should not be breaking. And you can also use various plugins; for example, with Lombok you don't need to write the getters and setters, and you can directly use annotations to make it easy and reduce the lines of code, which is very helpful in an automation framework. So there are various things you can use in your automation scripts to reduce the code and reuse the same code, depending on your requirement.
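
A minimal Page Object Model sketch along those lines (the page name and locators are hypothetical): locators and actions live in one page class, and every test reuses them instead of duplicating WebDriver calls:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical page class: all locators and actions for the login page
// live here, so tests reuse one method instead of repeating driver calls.
public class LoginPage {
    private final WebDriver driver;

    private final By username = By.id("username");
    private final By password = By.id("password");
    private final By submit   = By.cssSelector("button[type='submit']");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // One generic action reused by every test that needs to log in.
    public void loginAs(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(submit).click();
    }
}
```

A test then just calls `new LoginPage(driver).loginAs("user", "pass")`; when a locator changes, the fix lands in one place.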

So in a singleton class, we use the basic concept of only one instance at a time. In the code that is given, there are three things we make sure of in a singleton class. We have a private constructor, which is being followed, so that will work fine. Then we check whether the instance is null: if the instance has not already been created, we create the instance; otherwise, we return the existing instance of the particular database connection. The one thing that might break this code, or may cause a potential bug, is the connection close: we are not closing the connection anywhere. What will happen is, suppose I am using that particular instance of the database connection in my application and another application is trying to access that connection; when my work is over, for some reason I am not releasing the connection, so the instance is consumed at my end and the other services are blocked. So closing the connection is one thing we can enhance here. It might be that it is closed somewhere else, but all three conditions are being followed here, and we have the static variable as well, so the singleton design pattern is being followed correctly. The only thing I can see, from my perspective, is that if we add one more piece where we close the connection once the work is over, it would work much better.
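
The snippet under discussion is not included in the transcript; a reconstruction of the pattern being described might look like the following, with the suggested close() enhancement added (the class name, JDBC URL, and method names are assumptions):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Reconstruction of the singleton under discussion (names are assumptions):
// private constructor, static instance, lazy null-check creation.
public class DatabaseConnection {
    private static DatabaseConnection instance;
    private Connection connection;

    private DatabaseConnection() throws SQLException {
        connection = DriverManager.getConnection("jdbc:h2:mem:testdb");
    }

    public static DatabaseConnection getInstance() throws SQLException {
        if (instance == null) {        // simple lazy null-check as described
            instance = new DatabaseConnection();
        }
        return instance;
    }

    public Connection getConnection() {
        return connection;
    }

    // The enhancement suggested in the answer: release the connection
    // when the work is over so other consumers are not blocked.
    public void close() throws SQLException {
        if (connection != null && !connection.isClosed()) {
            connection.close();
        }
    }
}
```

A production version would also need to address concurrent access (for example, a synchronized getInstance), but this sketch stays with the simple null-check the answer describes.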

Okay, so in the code displayed on the screen, in the first line we are initiating our WebDriver, which will open the Chrome browser, and we are launching a URL, the example login page, where that page will get displayed. Then we define a test case, testLogin, and in the try block I am finding an element and clicking on it. Then we have an assert condition where we are looking for a text in driver.getPageSource(). getPageSource() is a method that gives you the whole source of the web page that has been loaded, and on that particular web page we want to find a text. The condition we are giving in the assert is "Welcome user", but we are not getting that value from anywhere; it is just a hard-coded value here. You have to first find the element where that particular text is visible, check where it is present, and then search in the page source to see whether it is present or not. Also, if you look at the assert condition, we are not really checking anything; we are just passing a value. There is nothing like actual and expected values, and we are not using any assert function either, like assertTrue or assertFalse. It is just a verification, like a soft assert, which won't break your test case: it just sees whether the condition is true or false, and either way it continues, even if your condition fails. So we have to properly use the assert condition here, like assertTrue, assertFalse, or assertEquals, whatever is appropriate, to make sure it is a hard assert and will stop the execution of the test on failure.
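
A corrected version of the test being critiqued might look like this (the snippet itself is not included in the transcript, so the URL, locator, and structure are assumptions), using a hard TestNG assertion with an explicit condition:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.*;

public class LoginTest {
    private WebDriver driver;

    @BeforeMethod
    public void setUp() {
        driver = new ChromeDriver();               // opens the Chrome browser
        driver.get("http://example.com/login");    // hypothetical URL
    }

    @Test
    public void testLogin() {
        driver.findElement(By.id("loginButton")).click();  // hypothetical locator

        // Hard assert with an explicit condition: the run stops here on
        // failure, unlike the bare, hard-coded check criticized above.
        Assert.assertTrue(driver.getPageSource().contains("Welcome user"),
                "Expected 'Welcome user' after login");
    }

    @AfterMethod
    public void tearDown() {
        driver.quit();
    }
}
```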

Okay, the practices I follow in writing automation test cases: first, cover as many scenarios as I can while automating. Second, there should not be any flakiness in my test cases. Third is time: I do not put any unnecessary waits or sleep time in my test cases. Fourth, all the conditions are matched at run time, with no hard-coded values, so that everything is available at run time and the assertions are genuinely passing, true positives rather than false positives. The other practices I follow are clean code practices: I try to write code as clean as possible, and for reusability of code I try to make generic methods so that I can use them everywhere depending on my requirement. For dependencies, I try to ensure the sequence of dependent test cases is working properly before asserting values. I also check the status code in every scenario before checking the response body. And since I have mostly been working with backend testing, most of the time I make sure that whatever is happening in the DB is also correct, because we are dealing with APIs: sometimes you get the response, but the DB is not getting updated, or other services are not getting the correct data, for example whether the event we send on a DB write is working or not. All those conditions I try to cover in my automation, to follow best practices and get more code coverage.
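
A small REST Assured sketch of the "status code first, then response body" habit mentioned above (the service URL, endpoint, and fields are hypothetical):

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import io.restassured.response.Response;
import org.testng.Assert;
import org.testng.annotations.Test;

public class TransferApiTest {

    @Test
    public void transferCreatesDbRecord() {
        // 1. The status code is verified before anything in the body is trusted.
        Response response = given()
                .baseUri("http://localhost:8080")     // hypothetical service
                .contentType("application/json")
                .body("{\"amount\": 100}")
            .when()
                .post("/transfers")
            .then()
                .statusCode(201)                       // check status first
                .body("status", equalTo("CREATED"))    // then the payload
                .extract().response();

        // 2. The API said yes; the backing store must agree. A real test
        //    would query the DB here (e.g. via JDBC) instead of this stub.
        String transferId = response.jsonPath().getString("id");
        Assert.assertNotNull(transferId, "response should carry the new id");
    }
}
```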

Okay, so while testing any migration in a service, we make sure the current version of that particular application keeps working fine while the new migration is going on. For example, recently I had a feature where we were migrating users from one platform to another. While migrating, we make sure the current services are not blocked for the user: he is able to use them and make changes, and the same functionality is being provided on the other side as well. So we make sure the A side, the part that already exists, is working fine, and the B side is also working fine, until the full development on the B side is done. Once all the development on the B side is done and the user is migrated over, we make sure the user is able to do all the operations post-migration that he was doing on the A side: whatever was working fine for the user previously, with the old version or before he was migrated from one platform to another, should be the same over there. Sometimes, when we are migrating users, we restrict something; then we have to make sure that if we are restricting something in the new version, it is restricted for all users, not in a way where some new users are able to access it while older users are also able to access it. So whatever the requirement is, we make sure that post-migration of the users, everything is still working fine and there are no visible changes for the end user. Because you are migrating users on the backend, the user should not get impacted when you are migrating something from one technology to another, or one platform to another. Functionality should not break for any kind of user, either while the migration is in progress or after the migration. We make sure of that.

Okay, so we are using an agile methodology along with the SDLC. What happens is, whenever the sprint starts, we get the requirements from the product managers or product owners. They give us a brief introduction to the requirement. The developers go through all the requirements on their own and do the analysis, along with the impact of the changes they are making. That meeting also includes QA: if QA has knowledge of the existing environment, they can provide their input about the changes and the impact they feel that particular change can bring to the system. Then, as a QA, I start writing the test cases for that particular feature that I need to test in the upcoming days, and get all the test cases reviewed with the developers and the product manager. In that same meeting where we are discussing the test cases and the impact, we can raise any requirement that we have for the testing: suppose I am testing something and I need some new test bed or some kind of test data, or some other help, we can raise the dependency there if required. Otherwise, after the test case review meeting, we start creating the tests and the test data, if we can create them on our own. Once we get the changes, we start testing the scenarios. We maintain the traceability and all the testing artifacts with us, and provide a short demo to the developer and the product manager of the new feature that has been developed. Then we deploy those changes to a higher environment for integration testing, where we perform the integration testing; then it goes for UAT, where the product manager or product owner sees whether the changes are what they wanted or whether something different was developed. And after that, we deploy the changes to production. So that is the whole cycle. Because we are using agile, every requirement and every change is correctly deployed and verified, and the deployments are incremental: the changes are very small, so you can easily identify and monitor the new changes along with the old ones, and you can also perform regression to make sure nothing is breaking.