
Sr. SDET, Netskope Inc., Taiwan
Sr. SDET, Binance Inc., Taiwan
Sr. SDET, Innova Solutions, Taiwan
QA Automation Engineer, Cybage Software Pvt. Ltd.
QA Automation Engineer, Xpanxion (UST Global)
Git
REST API
Python
Jira
Visual Studio Code
Slack
Figma
Postman
Microsoft Teams
AWS (Amazon Web Services)
Zephyr
Java
GitLab
Jenkins
Selenium
Groovy
SoapUI
TestRail
Cucumber
SpecFlow
REST Assured
pytest
C#
Best QA in the company.
Technical skills are outstanding.
Innovative and creative.
Scrum and Agile knowledge is very good.
So white box testing and black box testing, what is the difference? In white box testing, you can see the code, the design, the requirements, everything. White box means everything is visible to you; nothing is hidden, so there is nothing you need to assume. Black box testing means the system is a box whose internals are not revealed to you: the documentation says this piece of code works a certain way, but you have no access to the code itself, so you test by assuming the functionality behaves as documented. For example, say there is a system that takes two numbers and returns their sum. In black box testing, we don't know how it calculates the sum, only that it performs that function. In white box testing, we can read the code to confirm that the two numbers are added by that specific logic. So the main definition is: if you know the internals of a system and have access to its code, it is white box testing; otherwise it is black box testing.
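A minimal pytest sketch of that distinction, using a hypothetical calculator module (the function names here are illustrative, not from a real system):

```python
def _do_add(a, b):           # internal helper, invisible to a black-box tester
    return a + b

def calculate_sum(a, b):     # the documented, externally visible interface
    return _do_add(a, b)

# Black-box test: only the documented contract is known ("send two numbers,
# receive their sum"), so we test inputs against expected outputs.
def test_sum_black_box():
    assert calculate_sum(2, 3) == 5
    assert calculate_sum(-1, 1) == 0

# White-box test: the code is visible, so internals can be exercised directly.
def test_sum_white_box():
    assert _do_add(2, 3) == 5   # targets the internal addition logic itself
```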
What is regression testing and why is it important? Regression testing means that when we deploy new code to an environment, staging or production, we verify that the older functionality still works. Whenever we develop a new feature or design and deploy it to a QA environment, before testing the new functionality we need to confirm that the existing functionality is intact; testing the older functionality against the newer code on the environment is regression testing. It is important because new changes must not break existing behavior. For example, suppose there is a function that adds. I add new functionality so it can also multiply, but by mistake I change the addition logic. When I test addition, it fails, and I learn that my new change broke the older behavior. For every new code deployment we run regression tests to make sure the previous functionality still works; that is why regression testing is important.
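A short pytest sketch of that example, with hypothetical function names:

```python
def add(a, b):
    return a + b

def multiply(a, b):      # the newly added functionality
    return a * b

# Regression test: guards the pre-existing behavior. If the new change had
# accidentally altered add(), this test would fail and flag the regression.
def test_add_still_works():
    assert add(2, 3) == 5

# New-feature test: covers the newly deployed functionality.
def test_multiply():
    assert multiply(2, 3) == 6
```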
I have used a defect tracking system in the past, specifically Jira. The quality assurance engineer tests the system against the business requirements that come from the product manager or product owner, writing test cases and executing them. When the QA finds a difference between the actual requirement and the implementation, they log a defect, meaning they create a ticket in Jira: click the Create button, fill in the summary, description, priority, severity, steps to reproduce, and the URL and credentials, and briefly explain what is expected versus what is actual. They assign the defect to the developer who implemented the functionality, add the time and other details, and create the ticket. A notification is sent to the assigned developer, who investigates first. If the developer finds the defect is not valid, for example if the QA logged it by mistake, they add a comment marking it NAB (not a bug), change the status from Open to NAB, and close the ticket. If it is a valid defect, they change the status to In Progress, fix it, submit a PR, send it to a senior developer for code review, and assign the defect back to the QA who logged it. The QA then retests it, adds a comment, and marks it Fixed.
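For illustration, defects can also be logged programmatically through Jira's REST API. A hedged sketch with the requests library; the instance URL, project key, and credentials below are hypothetical placeholders:

```python
import requests

JIRA_URL = "https://example.atlassian.net"          # hypothetical instance
AUTH = ("qa.engineer@example.com", "api-token")     # placeholder credentials

payload = {
    "fields": {
        "project": {"key": "QA"},                   # hypothetical project key
        "issuetype": {"name": "Bug"},
        "summary": "Login fails with valid credentials",
        "description": (
            "Steps to reproduce:\n"
            "1. Open login page\n"
            "2. Enter valid credentials\n"
            "3. Click Login\n\n"
            "Expected: user is logged in.\n"
            "Actual: 'invalid credentials' error is shown."
        ),
        "priority": {"name": "High"},
    }
}

# Create the ticket via Jira's issue-creation endpoint.
resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
resp.raise_for_status()
print("Created defect:", resp.json()["key"])        # e.g. QA-123
```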
What steps would you take to validate the functionality and performance of a software product? First, I read the requirements, the design document, and the SRS for that product. As an example, say I need to test only the login functionality. First I check how the UI looks: all the fields, including username, password, the create account and sign-up buttons, the login button, and options like "Log in with Google" and "Log in with Facebook". Then I check validation: if I enter only the username and click Login, it should show an error for the missing password, and likewise for a missing username. With wrong credentials it should show an error such as "invalid credentials". With the right username and password it should log me into my account page. That covers the main functionality of the login page. For performance, I run load and stress testing on top of that, using a tool like JMeter to generate traffic on the login page with a few hundred concurrent virtual users, say 500 to 1,000. I validate that those users log in with their credentials and can see the account page, check the graphs to confirm the logins succeeded, measure the time taken for the login API to respond and for the user to be logged in, and watch the UI to make sure it does not crash. Those are the checks I would run for the functionality and performance of the software.
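A minimal Selenium sketch of the functional login checks described above; the URL and element locators are hypothetical:

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

LOGIN_URL = "https://example.com/login"   # hypothetical login page

@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()

def test_missing_password_shows_error(driver):
    driver.get(LOGIN_URL)
    driver.find_element(By.ID, "username").send_keys("valid.user")
    driver.find_element(By.ID, "login").click()
    assert "required" in driver.find_element(By.ID, "error").text.lower()

def test_wrong_credentials_show_error(driver):
    driver.get(LOGIN_URL)
    driver.find_element(By.ID, "username").send_keys("valid.user")
    driver.find_element(By.ID, "password").send_keys("wrong-password")
    driver.find_element(By.ID, "login").click()
    assert "invalid credentials" in driver.find_element(By.ID, "error").text.lower()

def test_valid_login_reaches_account_page(driver):
    driver.get(LOGIN_URL)
    driver.find_element(By.ID, "username").send_keys("valid.user")
    driver.find_element(By.ID, "password").send_keys("correct-password")
    driver.find_element(By.ID, "login").click()
    assert "/account" in driver.current_url
```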
How have you worked with cross-functional teams in the past to align on requirements and objectives? When we get a new functionality or product, our product managers take the overview and the work is divided across different teams. To give an example from my previous company, Binance: on the channel integration team, every channel we integrate needs two flows, withdraw and deposit. One PM takes care of the withdraw flow and another PM takes care of the deposit flow. When we test the whole system end to end, we need to talk with the cross-functional teams. My team handles withdraw, so I coordinate with the other team to make sure the messaging queue delivers messages correctly across the distributed system, so that the deposit succeeds, the withdraw succeeds, and the endpoints behave correctly. In the deposit and withdraw flows we also depend on services owned by different teams, such as the asset service and the KYC service for particular users; this all comes together in the business flow. For implementing any new channel we always coordinate with those teams: we hold regular meetings and calls, run explanatory sessions, show our progress and ask about theirs, check the integration, and then do demos together to make sure we understand the whole system properly. That is how I have worked with cross-functional teams in the past.
Every year, we have a performance review cycle. I lead my team, so I need to provide feedback to them. First I set up one-on-one meetings, and depending on their work I assess whether they have achieved their goals and give them feedback. I also report on my team's performance to my managers and senior management for the review. It is not only about the review cycle, though: in regular day-to-day work, if I find a gap in a teammate's work, or see someone doing their job exceptionally well and effectively, I give them feedback during our daily stand-ups and appreciate good work in front of the team. So for me it happens weekly, or once or twice a month; I am always providing constructive feedback to my teammates.
How would you ensure that our product meets regulatory compliance standards? The compliance team creates documentation for every product, and before we launch or ship, we need to make sure all compliance requirements are fulfilled. We take one of two approaches. If our timelines are very tight, we send the product documentation to the compliance team; they check it against their criteria, tell us what is lacking, we change it accordingly, and they give us sign-off that it is compliant. Otherwise, I sit in a meeting with the compliance manager and we go through each point in their document against ours, ensuring that everything meets both the compliance criteria and the product requirements. Once there are no further requirements from the compliance side, we can say that our product is compliant and follows the compliance standard.
In what way have you used test automation to improve efficiency in your previous job? In both my previous and current jobs we do two types of automation, UI automation and API automation, with one framework for each. For UI automation we use Python with Selenium, and pytest as the test framework to run and execute the tests. For the API side we do end-to-end testing, but because our upstream APIs are not always responsive, we use mocking: we stub the response we expect, send the request to the mock server, fetch the response, and match it against the expectation. If the response matches, the test passes. We added both automation suites to our Jenkins server, so whenever new code is deployed to any environment, the test step runs automatically. If every test passes, the step passes and the code is deployed to the next environment. If any test fails, that step in the Jenkins pipeline fails, we cannot move further, and we check the logs to see which test is failing, fix it, and redeploy. This way, regression testing for UI and backend runs without human intervention, which really increases efficiency: no human effort is required to re-test the scenarios we have already covered.
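A hedged sketch of that mocking approach, using pytest with the `responses` library to stub an unresponsive upstream API; the endpoint and payload below are hypothetical:

```python
import requests
import responses

UPSTREAM = "https://upstream.example.com/api/v1/balance"  # hypothetical endpoint

@responses.activate
def test_balance_with_mocked_upstream():
    # Stub the response we expect instead of calling the real upstream service.
    responses.add(
        responses.GET,
        UPSTREAM,
        json={"user": "u123", "balance": 42.5},
        status=200,
    )

    # The code under test sends its request; it is served by the mock.
    resp = requests.get(UPSTREAM)

    # Match the actual response against the expectation; if it matches,
    # the test passes.
    assert resp.status_code == 200
    assert resp.json()["balance"] == 42.5
```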
How would you approach the task of building an automated test framework from scratch? First, I establish what the framework is for: UI testing or API testing. Take the example of an API automation framework. I start with the documents: which APIs do we need to automate? Are they REST APIs or SOAP web services? What request types do they use? Do they need authorization, authentication, or special headers? Once I understand the requirements of all the APIs, I discuss with my team which language is most suitable for these APIs and which tools are the better choice. Once everything is finalized, we make a design: where our interfaces will live, where we get the test data from, and how we send requests to the web server. Sometimes our upstream APIs do not respond reliably, so I check whether we need to mock responses and fold that into the design. API testing also requires database testing: we need to match the actual response with the database. For a POST, we check that new entries were added to the database tables; for a GET, we check that the entries coming from the database tables match the entries in the API response. Once the design is finalized, we start implementing: I divide the framework work into smaller tasks for my team and myself, we build the automation framework in parallel, integrate it, and then start testing.
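A small design sketch of that layout, assuming hypothetical class, endpoint, and table names: a thin client that centralizes the base URL and auth headers, a fixture supplying test configuration, and a database check backing a POST test.

```python
import sqlite3
import requests
import pytest

class ApiClient:
    """Thin wrapper centralizing base URL, auth headers, and HTTP calls."""
    def __init__(self, base_url, token):
        self.base_url = base_url
        self.headers = {"Authorization": f"Bearer {token}"}

    def post(self, path, payload):
        return requests.post(self.base_url + path, json=payload,
                             headers=self.headers)

    def get(self, path):
        return requests.get(self.base_url + path, headers=self.headers)

@pytest.fixture
def client():
    # Test data and config would normally come from a file or environment.
    return ApiClient("https://api.example.com", token="test-token")

def test_post_creates_db_entry(client):
    resp = client.post("/users", {"name": "alice"})
    assert resp.status_code == 201
    # POST check: verify a new row actually landed in the database table.
    with sqlite3.connect("app.db") as db:               # hypothetical DB
        row = db.execute("SELECT name FROM users WHERE name = ?",
                         ("alice",)).fetchone()
    assert row is not None
```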
Yes, that has happened to me. When the product plan was finalized and the SRS document was given to us, the client and the product manager agreed on the functionalities to deliver: of the seven they had asked for, five would be delivered in the first phase and the remaining two as add-ons in the second phase. At the last moment, the client changed their mind and wanted all seven, telling us to stretch the timeline and bandwidth and take holidays later, but test all seven. We had written the test cases and started testing while our developers were building the system in parallel, so the timeline became very tight when the client changed the requirement at the end. We adjusted and still delivered without bugs; the client was very happy and we received a lot of appreciation emails. But we did convey to our VP that although we had delivered, we needed ample time going forward, because we never want a product delivered with defects or misbehaving in production: we want enough time to make sure all functionality is well tested, well developed, and delivered to the client.
Have you ever introduced a tool or process that significantly improved the QA process at your company? Yes. I actively manage my team, checking with all the QAs and developers that they understand their tasks, what they are doing, what they need to do, and any problems they face in communication and collaboration. On the process side, I established practices around work, task completion, and deadlines; they worked effectively in my team, everyone's performance increased by 70 to 80 percent after we adopted them, and we hold timely meetings to keep everybody on the same page. On the tooling side, we built frameworks that run our automated tests reliably without human intervention, which improved both reliability and speed. The specific tool I introduced at my company is TestRail. TestRail has a rich set of APIs that you can integrate into your framework to push your test cases and update their results, so everyone can see how a build is doing and whether all the regression tests pass, and you can create charts that give a visual view of the testing. Introducing TestRail increased performance and improved the QA process in our team.
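A hedged sketch of pushing automation results into TestRail through its REST API; the instance URL, credentials, and run/case IDs below are hypothetical:

```python
import requests

TESTRAIL_URL = "https://example.testrail.io"        # hypothetical instance
AUTH = ("qa.engineer@example.com", "api-key")       # placeholder credentials

def report_result(run_id, case_id, passed, comment=""):
    # TestRail's add_result_for_case endpoint; status_id 1 = Passed, 5 = Failed.
    endpoint = (f"{TESTRAIL_URL}/index.php?"
                f"/api/v2/add_result_for_case/{run_id}/{case_id}")
    payload = {"status_id": 1 if passed else 5, "comment": comment}
    resp = requests.post(endpoint, json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()

# Example: mark case 1001 in run 42 as passed after an automated test.
report_result(run_id=42, case_id=1001, passed=True,
              comment="Regression build #118: login suite passed")
```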
Continuous integration and delivery pipeline, yes. All of our automation for existing, very stable functionality runs through Jenkins. We have created Jenkins pipelines that move our code across different environments: dev, QA, then pre-prod and prod, each environment with its own pipeline. Each pipeline has two or three steps related to QA: the first is UI automation, the second is API automation. In each step we configure the Git repository address for the suite. For UI we use Python and pytest, so the step clones the master branch from the repository and runs a command line that starts with pytest, followed by the test suite name and options. When execution starts in that pipeline, it runs all the priority-1 UI automation test cases to make sure the UI is working fine. For API automation we again use Python with libraries for testing APIs: the HTTP methods such as post and get come from a Python library used with pytest, and we have wrapped those methods with our custom tokens and headers. That step likewise clones its repository and runs a similar pytest command to execute all the API tests. When both steps pass, the code is deployed to the next environment. This helps with regression and makes sure that the older functionality is not broken.
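For illustration, the priority-1 selection in those Jenkins steps could be driven by a pytest marker convention; the marker name and tests below are hypothetical:

```python
# pytest.ini registers the marker:
#   [pytest]
#   markers =
#       p1: priority-1 regression tests run in the pipeline
#
# The Jenkins step then invokes:  pytest -m p1 --junitxml=report.xml
import pytest

@pytest.mark.p1
def test_login_api_returns_token():
    ...  # priority-1 API check executed on every deployment

def test_profile_page_layout():
    ...  # lower-priority test, excluded by `pytest -m p1`
```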