Sr. SDET — Netskope Inc., Taiwan
Sr. SDET — Binance Inc., Taiwan
Sr. SDET — Innova Solutions, Taiwan
QA Automation Engineer — Cybage Software Pvt. Ltd.
QA Automation Engineer — Xpanxion (UST Global)

Git
REST API
Python
Jira
Visual Studio Code
Slack
Figma
Postman
Microsoft Teams
AWS (Amazon Web Services)
Zephyr
Java
GitLab
Jenkins
Selenium
Groovy
SoapUI
TestRail
Cucumber
Specflow
REST Assured
pytest
C#
Best QA in the company.
Technical skills are outstanding.
Innovative and creative.
Scrum and Agile knowledge is very good.
What is the difference between white box and black box testing? In white box testing, everything is visible to you: the code, the design, the requirements, how the whole system looks. Nothing is hidden, so there is nothing you need to assume; you test the system knowing its internals. In black box testing, the system is a box whose internal details are not revealed to you. The documentation says that a piece of code will behave in a certain way, but you do not have access to that code, so you test by assuming the functionality works as specified. For example, say there is a system that takes two numbers, calculates their sum, and returns it. In black box testing we do not know how it calculates the sum; we only know that it performs this function, so we validate the output. In white box testing we can read the code to verify exactly which logic or approach adds the two numbers. The main definition: if you know the internals of a system and have access to its code, it is white box testing; otherwise it is black box testing.
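The sum example above can be sketched as a black-box check in Python. The `add` function here is a hypothetical stand-in for the hidden system under test; a black-box test only exercises its observable contract (inputs in, expected output out), never its internals:

```python
# Hypothetical system under test: from the black-box perspective, we only
# know the contract "add(a, b) returns their sum", not this implementation.
def add(a, b):
    return a + b  # stand-in for the hidden internal logic

def test_add_black_box():
    # Validate observable behavior only: inputs -> expected outputs.
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0

test_add_black_box()
```

A white-box test of the same system could additionally inspect branches or intermediate state inside `add`, which requires reading the source.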
What is regression testing and why is it important? Regression testing means that whenever we deploy new code to an environment, whether staging, production, or anywhere else, we verify that the existing functionality still works. Before we test a newly developed functionality or design on the QA environment, we first re-test the older functionality against the new build; that re-testing is regression testing. It is important because we need to make sure the new changes have not broken the older functionality. For example, suppose there is a function that performs addition, and I add a new capability so the function can also multiply, but by mistake I change the addition logic. When I re-test addition it will fail, and I will learn that my new code change affected the old behavior. So for every new code deployment we run regression testing to confirm that the previous functionality still works. That is why regression testing is important.
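The add/multiply example above can be sketched in Python. The calculator and suite names are illustrative; the point is that the old `add` checks re-run after the new `multiply` feature lands, so an accidental change to the addition logic would be caught:

```python
# Hypothetical calculator: `multiply` is the new feature; the regression
# suite re-runs the existing `add` checks to catch accidental breakage.
class Calculator:
    def add(self, a, b):
        return a + b

    def multiply(self, a, b):  # newly added functionality
        return a * b

def run_regression_suite(calc):
    """Re-test existing behavior after the new code is deployed."""
    return {
        "add_still_works": calc.add(2, 3) == 5,      # old functionality
        "multiply_works": calc.multiply(2, 3) == 6,  # new functionality
    }

results = run_regression_suite(Calculator())
```

If a change to `multiply` accidentally altered `add`, the `add_still_works` check would flip to `False` and the regression run would flag it before release.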
Can you describe a defect tracking system you have used in the past? I have used Jira. The QA engineer first writes test cases according to the business requirements received from the product manager or product owner, then tests the system against those cases. When they find a difference between the actual requirement and the implementation, they log a defect, meaning a ticket, in Jira. To create it, they click the Create button and fill in the summary, description, priority, severity, steps to reproduce, and the URL and credentials if relevant, and then briefly explain the expected versus actual behavior. They assign the ticket to the developer who built the functionality and add the time estimate. Once the ticket is created, a notification is sent to the assigned developer, who investigates first. If the QA logged something that is not actually a defect, the developer adds a comment, marks it NAB (not a bug), and moves the status from Open to NAB. If it is a valid defect, the status changes to In Progress; the developer fixes it, submits a PR for the fix, assigns the PR to a senior developer for code review, and reassigns the defect back to the QA who logged it. After retesting, the QA adds a comment and marks the defect as fixed and resolved.
What steps would you take to validate the functionality and performance of a software product? First, I read the requirements: the design document and the SRS document for that product. For example, say I need to test only the login functionality. First I check how the UI looks and verify all the fields are present: username, password, the login and sign-up buttons, a create-account option, and options like "Log in with Google" and "Log in with Facebook". Then I check the validations. If I enter only the username and click Login, it should not proceed; it should show an error for the missing password, and likewise for a missing username. If I enter wrong credentials, it should show an error such as "invalid credentials". If I provide the right username and password, it should log me in and land on my account page. That covers the main functionality of the login page. For performance, I run load and stress testing on top of that: with the help of a tool like JMeter, I generate traffic on the login page of around 500 or 1,000 concurrent users and check how the system performs. I validate that those users can all log in with their credentials and see the account page, and I check the graphs: whether logins succeeded, how long the login API takes to respond before the user is logged in, and that the UI does not crash. These are the things I would check for the functionality and performance of the software.
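The login validation checks above can be sketched in Python. Everything here is illustrative: `login` stands in for the real authentication endpoint, and the field names and error messages are assumptions, not the product's actual strings:

```python
# Hypothetical login handler standing in for the real system under test.
VALID_USER, VALID_PASS = "alice", "s3cret"

def login(username, password):
    if not username or not password:
        return {"ok": False, "error": "username and password are required"}
    if (username, password) != (VALID_USER, VALID_PASS):
        return {"ok": False, "error": "invalid credentials"}
    return {"ok": True, "redirect": "/account"}

# Negative cases: missing fields and wrong credentials must not log in.
assert not login("", "s3cret")["ok"]
assert not login("alice", "")["ok"]
assert login("alice", "wrong")["error"] == "invalid credentials"

# Positive case: correct credentials reach the account page.
assert login("alice", "s3cret")["redirect"] == "/account"
```

In a real suite these would be parametrized pytest cases driving the actual UI or API; the JMeter load test would then replay the positive case at 500 to 1,000 concurrent users.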
How have you worked with cross-functional teams in the past to align requirements and objectives? When we get a new functionality or a new product, our product managers take the overview and the work is divided across different teams. Let me give the example of my previous company, Binance. I was on the channel integration team, and for every channel we integrate we need to handle two flows: withdraw and deposit. One PM takes care of the withdraw flow and another PM takes care of the deposit flow. When we test the whole system end to end, my team handles withdraw, so I need to coordinate with the other team to make sure the messaging queue is delivering messages correctly across the distributed system, so that the deposit, the withdrawal, and each endpoint succeed. In the deposit and withdraw flows we also depend on services owned by different teams, such as the asset service and the KYC service for each user, and all of this comes together in the business flow. So for implementing any new channel we always coordinate across teams: we hold regular meetings and calls, run explanatory sessions, share our progress and ask about theirs, check the integration, and then do demos together to make sure we all understand the whole system properly. That is how I have worked with cross-functional teams in the past.
Can you provide an example of a time when you provided constructive feedback to your team? Every year we have a performance and bonus review. Since I lead my team, I need to provide feedback to each member. I set up one-on-one meetings and, based on their work and whether they achieved their goals, I give them feedback; I also pass my team's feedback up to my managers and higher management for the performance review. It is not only during the review cycle, though. In regular day-to-day work, if I notice a problem with a teammate's work, or if I see someone doing their work in a very effective manner, I give that feedback in our daily stand-ups and appreciate them in front of the team. So you could say I provide constructive feedback to my teammates weekly, or at least once or twice a month.
How would you ensure that our product meets regulatory compliance standards? The compliance team creates documentation for every product, and before we launch or ship we need to make sure all the compliance requirements are fulfilled. We take two approaches. If our timelines are very tight, we send the product documentation to the compliance team; they check it against their points and give us a sign-off noting what is lacking and what is not, and we change the product accordingly. Otherwise, we hold a meeting with the compliance manager: I sit with them and go through their document and ours item by item to confirm that everything meets both the compliance criteria and our product requirements. When there are no further requirement changes from the compliance side, we can say our product is compliance-ready and follows the compliance standards.
In what ways have you used test automation to improve efficiency in your previous job? In both my previous and current jobs we do two types of automation, UI automation and API automation, with a separate framework for each. For UI automation we use Python with Selenium, and pytest as the test framework to run and execute the tests. For the API side we do end-to-end testing; since our upstream APIs are not always responsive, we mock them. We configure the mock with the response we expect, send a request to the mock server, get the response back, and compare it against the expected one; if they match, the test passes. We added both automation test suites to our Jenkins server, so whenever new code is deployed to any environment these steps run automatically. If every test passes, the step passes and the code is deployed to the next environment; if any test fails, that step in the Jenkins pipeline fails, the code cannot move further, and we check the logs to see which test failed, fix it, and redeploy. In this way we run regression testing for all the UI and backend automation without human intervention, and it really increases efficiency: no human effort is required to re-test scenarios we have already covered.
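The mocking approach described above can be sketched with Python's standard `unittest.mock`. The function names, the URL, and the exchange-rate scenario are all illustrative assumptions; the real suite presumably mocks its own upstream services:

```python
from unittest import mock

def get_exchange_rate(pair, fetch):
    """Client code under test; `fetch` performs the real HTTP call in production."""
    data = fetch(f"https://api.example.com/rates/{pair}")
    return data["rate"]

# In tests, replace the transport with a Mock returning the canned response
# we expect the unreliable upstream API to produce.
fake_fetch = mock.Mock(return_value={"rate": 31.5})
mocked_rate = get_exchange_rate("USDTWD", fetch=fake_fetch)

assert mocked_rate == 31.5                                   # response matched -> test passes
fake_fetch.assert_called_once_with("https://api.example.com/rates/USDTWD")
```

Because the mock makes the test deterministic, it can run on every Jenkins deployment regardless of whether the upstream service is up.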
How would you approach the task of building an automated test framework from scratch? First I would determine what the framework is for: UI testing or API testing. Take an API automation framework as the example. I would start with the documentation of the APIs we need to automate: are they REST APIs or SOAP web services? What types of requests do they use? Do they require authorization or authentication? Do they need any special headers? Once I have read the requirements of all the APIs, I would discuss with my team which language and which tools are the better fit for them. With that finalized, we create a design: where our interfaces will live, where we get test data from, and how we send requests to the web server. Sometimes our upstream APIs do not respond reliably, so I would decide up front whether we need to mock responses or can hit the upstream APIs directly. API testing also requires database testing, because we need to match the actual response against the database: if it is a POST, we check that new entries were added to the database tables; if it is a GET, we check that the entries coming from the database tables match the response from the API. Once the design is finalized, I would divide the framework work into smaller tasks across the team, including myself; we build the pieces in parallel, integrate them, and then start testing.
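The POST/GET-versus-database checks described above can be sketched in Python, using an in-memory SQLite table as a stand-in for the real database and plain functions as stand-ins for the API layer. All names and the schema are illustrative:

```python
import sqlite3

# Stand-in database for the sketch; the real framework would connect to
# the product's actual database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def api_post_user(name):
    """Stand-in for a POST endpoint that creates a user."""
    cur = db.execute("INSERT INTO users (name) VALUES (?)", (name,))
    db.commit()
    return {"id": cur.lastrowid, "name": name}

def api_get_user(user_id):
    """Stand-in for a GET endpoint that fetches a user."""
    row = db.execute("SELECT id, name FROM users WHERE id = ?", (user_id,)).fetchone()
    return {"id": row[0], "name": row[1]}

# POST check: verify a new row was actually added to the table.
created = api_post_user("alice")
row = db.execute("SELECT name FROM users WHERE id = ?", (created["id"],)).fetchone()
assert row[0] == "alice"

# GET check: verify the API response matches what the database holds.
assert api_get_user(created["id"]) == {"id": created["id"], "name": "alice"}
```

In the real framework the two `api_*` functions would be HTTP calls (or calls to the mock server), while the database assertions stay essentially the same.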
Can you describe a situation where you had to rapidly adjust your testing plans or strategies due to a sudden change in product requirements? Yes, that has happened to me. Once the product plan was finalized, the client and the product manager agreed on the SRS document and the functionality to be delivered. They had asked for seven features, and we negotiated that five would be delivered in the first phase and the remaining two as add-ons in a second phase. At the last moment they changed their minds: they wanted all seven, telling us to extend the timeline and our bandwidth and take holidays later, but to test all seven. We had already written the test cases, and we tested in parallel while the developers built the system, so the timeline was very tight because the client changed the requirements so late. We adjusted, and somehow we delivered without bugs; the client was very happy and we received a lot of appreciation emails. But we made it clear to our VP that while we delivered this time, we need ample time going forward, because we do not want any product shipped with defects or failing in production; we want enough time to make sure all the functionality is well tested and well developed before it is delivered to the client.
Have you ever introduced a tool or process that significantly improved the QA process at your company? Yes. I have been acting as Scrum Master for my team, so I check in with all the QAs and developers on their tasks: what they are doing, what they have to do, and any problems they face in communication or collaboration. On the process side, I established practices around task completion and deadlines, which worked effectively in my team; everyone's performance improved by around 70 to 80 percent after those processes were in place, and we hold regular meetings to make sure everybody is on the same page. On the tooling side, we built frameworks to make automation testing reliable and to run all the tests without human intervention, increasing both reliability and speed. One tool I introduced to the company is TestRail. TestRail has a rich set of APIs that you can integrate into your framework: you push your test cases to it and update the results, so everybody can see how a build is doing and whether all the regression tests are passing. You can also create charts, which gives a visual view of the testing. Introducing TestRail noticeably improved the QA process in our team.
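The TestRail integration mentioned above works through its REST API; results are posted to the `add_result_for_case` endpoint. This sketch only builds the request (host, run and case IDs, and the build comment are placeholders); sending it would additionally require authentication with a TestRail user and API key:

```python
import json

TESTRAIL_HOST = "https://example.testrail.io"  # placeholder host
RUN_ID, CASE_ID = 42, 1001                     # hypothetical run and case IDs
STATUS_PASSED = 1                              # TestRail's status_id for "passed"

def build_add_result_request(run_id, case_id, status_id, comment):
    """Construct the URL and JSON body for TestRail's add_result_for_case."""
    url = f"{TESTRAIL_HOST}/index.php?/api/v2/add_result_for_case/{run_id}/{case_id}"
    payload = json.dumps({"status_id": status_id, "comment": comment})
    return url, payload

url, payload = build_add_result_request(
    RUN_ID, CASE_ID, STATUS_PASSED, "Regression suite passed on build 123"
)
```

A framework hook would call this after each test and POST the payload (for example with `requests`), so the TestRail dashboard reflects every automated run.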
What experience, if any, do you have with continuous integration and delivery pipelines? All of our automation for stable existing functionality runs under Jenkins. We created Jenkins pipelines to move code through different environments; in our company we use dev, QA, pre-prod, and prod, and every environment has a pipeline. Each pipeline has two or three steps related to QA: first UI automation, then API automation. In each step we configure the Git repository address for the suite, both UI and API. For the UI we use Python with pytest, so the step clones the master branch from the Git repository and then runs a command line starting with pytest, followed by the test suite name and the other arguments. When that step executes, it runs all the priority-1 UI automation test cases to make sure the UI is working fine. For API automation we again use Python with some libraries; we use a pytest-based plugin that gives us methods such as POST and GET, which we have tailored to work with our custom tokens. Here too we configure the Git repository and run a pytest command, and it executes all the API tests. When both test suites pass, both pipeline steps pass and the code is deployed to the next environment. This helps us run regression automatically and make sure the older functionality is not broken.