
Rashmi Verma

Vetted Talent
To work in a dynamic environment that offers a wide spectrum of experience and exposure, make full use of my skills at the workplace, and serve the organization with a positive attitude and efficiency. I am seeking an opportunity where I can contribute effectively as a QA Engineer.
  • Role

    QA Engineer

  • Years of Experience

    3 years

Skillsets

  • Agile
  • Automation Testing
  • Scrum
  • API Testing
  • Regression Testing
  • Functional Testing
  • UAT
  • Performance Testing
  • Defect Tracking
  • Test Execution
  • System Testing
  • Test Design
  • Data Scraping
  • Integration Testing
  • Black Box Testing
  • STLC

Vetted For

8 Skills
  • Role: Senior Quality Assurance Engineer (Hybrid - Gurugram) - AI Screening
  • Result: 52%
  • Skills assessed: Excellent Communication Skills, executing test cases, Manual Testing, Mobile Apps Testing, Python, QA Automation, test scenarios, writing test scripts
  • Score: 47/90

Professional Summary

3 Years
  • Oct, 2022 - Present (3 yr)

    QA Engineer

    Cvent India
  • Oct, 2020 - Jun, 2021 (8 months)

    QA Engineer

    Growth Natives Pvt. Ltd.

Applications & Tools Known

  • Java
  • Selenium
  • Eclipse
  • Asana
  • Jira
  • Postman
  • JMeter
  • RTM
  • TestNG
  • Email on Acid
  • Drupal

Work History

3 Years

QA Engineer

Cvent India
Oct, 2022 - Present (3 yr)
    Hired by Cvent as a third-party vendor; have been working there for more than a year, managing QA for their website.

QA Engineer

Growth Natives Pvt. Ltd.
Oct, 2020 - Jun, 2021 (8 months)
    Worked as a QA Engineer Intern and was later promoted to QA Engineer.

Major Projects

3 Projects

Cvent

    Managing end-to-end QA for the Cvent website.

Gander Gifting

    A casino-management gifting platform through which casinos can buy and gift products. Main responsibility was to create and implement the test plan and test cases for the project.

Postclick/Instapage Website

    Tested the entire website using both automated and manual methods, working alongside the development team throughout.

Education

  • B.E - CSE

    Chitkara University of Engineering & Technology (2021)

AI-interview Questions & Answers

Hello, team. I am Rashmi Verma, from Shimla, and I currently live in Chandigarh. I have around 2.8 years of full-time experience and 1 year of internship, which adds up to about 3.8 years of total experience on my resume. I have worked for more than a year, around 1.3 years, with Cvent, where I work as a QA Engineer on their website; we follow the Agile methodology and work in sprints. Before that, I worked at Growth Natives, where I handled multiple clients and tested dashboards, apps, and websites. I have also worked on the automation side and on data scraping. I have experience in functional testing, integration testing, regression testing, and sanity testing, and I also perform UAT, performance testing, and load testing for e-commerce websites. I have tested Shopify websites as well, and I have prepared checklists and written test cases according to the requirements of each website. I am responsible for maintaining the RTM (requirement traceability matrix) document. I have worked on major functionality and UI testing across different platforms, and I have strong experience in cross-browser, parallel, and cross-platform testing, as well as responsive testing across mobile devices, iPads, and multiple screen sizes. While working with Cvent I have also worked on the back end: I have created pages using the components we use there on the Drupal back end. I also have knowledge of automation testing using Java and Selenium. That's all about me.

While designing a high-level system for QA automation, what design patterns would you consider and why? Basically, whenever we are designing a high-level QA automation system, the first patterns that come to mind are the Page Object Model, the Factory pattern, and the Singleton pattern. Most of the time we follow the Page Object Model, in which we split the code across different files: in one file we define all our objects (locators), in another we define our methods, and in another we call or save our data. Second are structural design patterns, which define the structure you have to follow when writing the scripts. Third are behavioral design patterns: how the script is going to behave, which test cases are going to be executed first, and which are going to be executed last. We can also use data-driven testing and the fluent interface pattern here. These are the design patterns we would follow for a high-level QA automation system.
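A minimal sketch of the patterns named above, assuming Java with Selenium and TestNG (the tools listed in this profile); the class names, locators, and credentials are illustrative only, not taken from any actual project.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.testng.annotations.Test;

    // Page object: every locator and action for the login screen lives in one class,
    // so a UI change only needs edits here rather than in every test.
    class LoginPage {
        private final WebDriver driver;
        private final By usernameField = By.id("username");
        private final By passwordField = By.id("password");
        private final By loginButton = By.id("login");

        LoginPage(WebDriver driver) { this.driver = driver; }

        void loginAs(String user, String password) {
            driver.findElement(usernameField).sendKeys(user);
            driver.findElement(passwordField).sendKeys(password);
            driver.findElement(loginButton).click();
        }
    }

    // Factory + singleton: one place decides which WebDriver to create, and the same
    // instance is reused across tests.
    class DriverFactory {
        private static WebDriver driver;

        static synchronized WebDriver getDriver() {
            if (driver == null) {
                driver = new ChromeDriver();
            }
            return driver;
        }
    }

    // Test layer stays short and readable because page details are hidden behind the page object.
    public class LoginTest {
        @Test
        public void loginWithValidCredentials() {
            new LoginPage(DriverFactory.getDriver()).loginAs("demo-user", "demo-pass");
        }
    }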

Next is: how would you structure your Python code to ensure reusability and maintainability of your automated test scripts? In simple terms, when you are writing test scripts, you have to make them reliable, reusable, and easy to reuse across different testing scenarios. Whatever coding standards you use should follow best practices and agreed rules, because that helps produce accurate test results; it reduces the time spent debugging test scripts and keeps the focus on testing the actual software product. Then you have to enhance readability and maintainability, which comes from coding standards: when followed, they make the test scripts readable and understandable for anyone on the team. For example, use clear naming conventions for variables and functions; a function name like login_with_valid_credentials is more intuitive than test_1. You should not name things test_1, test_2, test_3, test_4 in your script, because it is very difficult to remember what is written under each one; always name things so that, just by reading the name, you can tell what the function does. The code should also be efficient, and anyone should be able to read your code and test scripts. Create reusable functions for common tasks, like a function to log in to the application; that function can then be used in multiple test scripts instead of repeating the login code everywhere. If some code covers two fields, write it so the same script can be reused for other fields as well; write universal, reusable code so it is easier for everyone. If I leave the organization and a new person joins, whatever scripts I have written should be reusable and properly maintained so anyone can pick them up. One more thing is reliability and accuracy of the code: your code should be reliable and accurate, because the higher the accuracy, the higher the chance that your test cases will pass and the fewer defects, bugs, or errors will appear.
So I think that's all I can say on this.
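The question concerns Python, but the same reusability and naming ideas can be sketched in Java with Selenium and TestNG, the stack listed elsewhere in this profile; the URL, locators, and test names below are placeholders.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.testng.Assert;
    import org.testng.annotations.AfterClass;
    import org.testng.annotations.BeforeClass;
    import org.testng.annotations.BeforeMethod;
    import org.testng.annotations.Test;

    public class AccountTests {
        private WebDriver driver;

        @BeforeClass
        public void startBrowser() { driver = new ChromeDriver(); }

        @BeforeMethod
        public void openLoginPage() { driver.get("https://example.com/login"); } // placeholder URL

        // Reusable helper: the login steps are written once and shared by every test,
        // instead of repeating the same sendKeys/click code in each script.
        private void loginAs(String user, String password) {
            driver.findElement(By.id("username")).sendKeys(user);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("login")).click();
        }

        // Descriptive names state what each test checks; avoid test1/test2/test3.
        @Test
        public void loginWithValidCredentialsShowsDashboard() {
            loginAs("demo-user", "demo-pass");
            Assert.assertTrue(driver.getCurrentUrl().contains("dashboard"));
        }

        @Test
        public void signUpLinkIsVisibleOnLoginPage() {
            Assert.assertTrue(driver.findElement(By.linkText("Sign up")).isDisplayed());
        }

        @AfterClass
        public void stopBrowser() { driver.quit(); }
    }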

Give an overview of the test automation architecture you would use for a hybrid mobile application. Okay. There are open-source test automation frameworks designed for Android applications; they are powerful tools that help automate tests for native and hybrid Android apps, support black-box and white-box testing, and make it easy to write the tests in Java. There are many such frameworks you can use in an architecture for hybrid mobile apps. Whatever architecture you use, the key criteria for a mobile test automation framework are, first, platform compatibility: it should be compatible with all the platforms and devices you are using. Second is integration compatibility: whenever we build a product, we develop it in units and then integrate those units, and the integrated whole should still be compatible; the code should not break and the server should not stop working. For example, I write the test cases for the login functionality and another developer writes the sign-up functionality; when we combine both, they should work properly together. Then ease of use: whatever architecture we follow, anyone should be able to work with it, because a very complex structure is very difficult to maintain. Then device compatibility: whatever test cases or scripts you write should run on other people's devices, because if they do not, other users cannot execute those test scripts on their systems. If I am working on a Windows system, whatever script I write should also be compatible with other OS versions and should work properly on a MacBook user's device as well. Then there is reporting and analytics: whenever we write test cases or test scripts, proper reports are very important, for example how many test cases passed and how many failed, so we have to produce proper reports for each function. Overall, the automation architecture should support everything: the devices, the systems, and the operating systems. That's all I can say about this.
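A minimal sketch of how the test layer of such an architecture might open a session against a hybrid Android app, assuming Appium's Java client (one common open-source choice, not confirmed by the answer above); the device name, app path, and server URL are placeholders, and an Appium server is assumed to be running locally.

    import io.appium.java_client.android.AndroidDriver;
    import io.appium.java_client.android.options.UiAutomator2Options;
    import java.net.URL;

    public class HybridAppSmokeTest {
        public static void main(String[] args) throws Exception {
            // Capabilities describing the device and the app under test (placeholder values)
            UiAutomator2Options options = new UiAutomator2Options()
                    .setDeviceName("Android Emulator")
                    .setApp("/path/to/app-under-test.apk");

            AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723"), options);
            try {
                // A hybrid app exposes both NATIVE_APP and WEBVIEW contexts;
                // switching into the web view lets the same Selenium-style API drive the web content.
                for (String context : driver.getContextHandles()) {
                    if (context.startsWith("WEBVIEW")) {
                        driver.context(context);
                        break;
                    }
                }
                System.out.println("Current context: " + driver.getContext());
            } finally {
                driver.quit();
            }
        }
    }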

Next question: what process do you follow to ensure comprehensive test coverage in both manual and automation testing? Basically, whatever testing we are doing, by automation or manually, we first have to make a checklist for all the testing activities. If you are working on the UI part, you have to make a checklist covering even the font size, font colour, and font family; for every little thing you need a checklist in which all the scenarios are covered. If you are working on a piece of functionality, you need to write the test cases, both positive and negative, to get comprehensive coverage for both manual and automation testing. You always need to know which test cases and which checklist you have to complete to hand the user a properly QA-verified website; you cannot release anything to the user if it is not properly verified at your end, so to ensure the quality of any product you always need test cases and checklists. Second, you always need to prioritise your test cases and checklists: execute the high-priority test scenarios first, and know which test cases are high, medium, and low priority, because priority is very important. Critical bugs will usually surface through the high-priority test cases, medium issues through the medium category, and the low-priority ones come last; if we know the priorities, we know what impacts the product most and what we need to fix first (see the sketch after this answer). The third thing is to create a list of all the requirements for the application. Always write down the requirements behind each piece of functionality; this is mandatory, because if the requirements are not listed at your end you will miss parts of the coverage area, some issues will be left behind, and they will come back from the client. When everything is written down, you can ensure the quality of that particular project. The next thing is to write down the risks inherent in the application, and to leverage test automation, designing the test cases and scenarios based on the requirements and the risks.
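A small sketch of the prioritisation idea above, assuming TestNG (listed among the tools); the priority values and test names are illustrative, and lower priority numbers run first.

    import org.testng.annotations.Test;

    public class CheckoutSuite {
        // Priority 0 runs first: the critical path that blocks everything else if broken.
        @Test(priority = 0)
        public void loginWithValidCredentials() { /* high-priority scenario */ }

        @Test(priority = 1)
        public void addItemToCart() { /* medium-priority scenario */ }

        // Cosmetic checks (fonts, colours) run last.
        @Test(priority = 2)
        public void footerFontAndColourMatchStyleGuide() { /* low-priority scenario */ }
    }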

Share a complex testing scenario you automated and the approach you used to validate the accuracy of the automated test results. Okay. In my current organization, the first thing I automated was a portal, a product whose name I cannot share. For that portal I first automated the login: I wrote the test cases for everything related to logging in, and using a Java-Selenium framework in Eclipse I logged into the system through automation only. Once logged in, there are many tabs; when users post there, data comes into every tab and users can retrieve it. I needed whatever data was related to digital marketing: numbers, pie charts, how much user engagement a particular website is getting. For that I retrieved the data, for example from tab 1: I clicked through the functionality, wrote the code for the drop-down values, and wrote the code for handling the pop-ups there. The accuracy of my automation script held up well, because I delivered it to the client and the client was very happy with the work. I have also written many data-scraping scripts where, for example, the user gives us thousands of URLs and we have to pull data from those websites into a specific number of columns and rows in an Excel sheet; that was also done from my end. And one last thing: I automated all the forms on that particular product. That is all I have done from my end.
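A minimal sketch of the drop-down and pop-up handling mentioned above, assuming Java with Selenium; the URL and locators are placeholders, not taken from the actual portal, and the selection is assumed to trigger a JavaScript alert.

    import org.openqa.selenium.Alert;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.support.ui.Select;

    public class PortalTabScrape {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com/portal"); // placeholder URL

                // Pick a value from a <select> drop-down by its visible text
                Select reportType = new Select(driver.findElement(By.id("report-type")));
                reportType.selectByVisibleText("Engagement");

                // Accept the JavaScript pop-up assumed to appear after the selection
                Alert alert = driver.switchTo().alert();
                alert.accept();

                // Read the value that would later be written to an Excel sheet (e.g. via Apache POI)
                String engagement = driver.findElement(By.id("engagement-count")).getText();
                System.out.println("Engagement: " + engagement);
            } finally {
                driver.quit();
            }
        }
    }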

Script snippet: determine why the function does not correctly update the DOM to display the user profile, and identify and explain the issue with the provided code. Okay. First, you have to build a function-based console message generator with the colour, background, font size, and text. For example, here the percent signs and the plus signs are not being used; we need to add those and build another console-log message generator. Right now I am not able to see the rest, so this is not going to be

Can you spot the error it has caused in the output? No, I think the script is correct and it is working correctly.

What approach do you use to evaluate the quality of your code when writing test automation and ensure it adheres to best practices? Whenever we are writing automation scripts, we first need to ensure coding conventions: the set of rules or guidelines that define the style, format, and structure of the test automation code across different developers and teams. Coding conventions cover indentation, comments, spacing, brackets, variable naming, and so on, so they are very important whenever we write our scripts. Then we need to follow design principles: the practices that guide the development of the test automation code, making it more modular, reusable, scalable, and robust. Abstraction is one of those principles and involves hiding the implementation details. After the design part comes encapsulation: another principle that involves grouping related data and behaviour into cohesive units such as classes and modules and limiting access from outside, which helps protect the integrity and consistency of the code. Inheritance allows reusing existing code or functionality from a parent class or module in a child class or module, avoiding duplication. Polymorphism builds on encapsulation, abstraction, and inheritance. Lastly, separation of concerns divides the test automation code into distinct layers or components with specific responsibilities and dependencies, improving modularity and testability. The third thing is code analysis tools: we should know the tools that analyse our code and check whether it is written to the required standard and structure, for example Pylint and Flake8 for Python, Checkstyle and PMD for Java, and Jest for JavaScript. The fourth thing is code reviews, which are very important: the process of examining and evaluating the test automation code by one or more peers or experts before merging it in GitLab alongside the other scripts. This process helps ensure the quality and standards of the code, such as finding and fixing bugs, errors, or defects; it also improves the code through optimisation of performance, efficiency, and readability, shares code knowledge, skills, and techniques among developers and testers, and aligns and standardises the code with the project requirements. Code reviews can be done manually or automatically using tools such as Git, GitHub, or Gerrit, and different approaches can be taken, such as pair programming, pull requests, or code walkthroughs. The fifth thing is continuous integration: the practice of integrating and testing the test automation code frequently and automatically using tools such as Jenkins, Travis CI, or CircleCI, which helps ensure the quality and standards of the code by detecting and resolving issues early. That's all.
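A small sketch of the inheritance and encapsulation points above, assuming Java with Selenium and TestNG; the class names and URL are illustrative only.

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.testng.Assert;
    import org.testng.annotations.AfterClass;
    import org.testng.annotations.BeforeClass;
    import org.testng.annotations.Test;

    // Parent class: driver setup and teardown are written once and inherited by every test class.
    abstract class BaseTest {
        // Encapsulation: the driver field is private; subclasses go through the accessor.
        private WebDriver driver;

        protected WebDriver driver() { return driver; }

        @BeforeClass
        public void setUp() { driver = new ChromeDriver(); }

        @AfterClass
        public void tearDown() { if (driver != null) driver.quit(); }
    }

    // Child class: inherits the setup/teardown and keeps only its own test logic.
    public class SearchTest extends BaseTest {
        @Test
        public void homePageTitleIsNotEmpty() {
            driver().get("https://example.com"); // placeholder URL
            Assert.assertFalse(driver().getTitle().isEmpty());
        }
    }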

Given the requirement to automate performance testing, what tools would you choose, and how would you validate the reliability of performance results? Okay. For automated performance testing, there are first some factors to consider. When selecting a performance testing tool, several factors will impact your testing process and results: the type and scope of your software, the budget and resources available, and the skills and experience of your team. For the type of software, you should choose a tool that can test the specific features and functionality of your software, such as web applications, desktop applications, mobile applications, APIs, and databases; additionally, you should take into consideration the size and complexity of your software, as well as how much load and stress it has to handle. For the budget and resources available, you should select a tool that fits within those constraints: licensing, installation, maintenance, training, and support; the hardware, software, and network infrastructure used for the testing should also be taken into account. Lastly, you should choose a tool that matches the skills and experience of the team members, including the programming languages, frameworks, tools, and methodologies they are familiar with; the learning curve and usability of the tool matter as well, that is, how easy or difficult it is to create, execute, and analyse tests. Second are the features to look for: scalability and flexibility, accuracy and reliability, and automation and integration; a tool should be able to scale the number of virtual users up or down. The third thing is examples of tools. There are plenty of performance testing tools, each with its own strengths and weaknesses. The tool I would use is JMeter, which is open source and cross-platform, because it is very helpful; tools like Apache JMeter and LoadRunner are commonly used for load testing to simulate user traffic and measure system responses. For analysing web application performance, there are tools like Google PageSpeed Insights and Pingdom, which check the performance of a website: how long it takes to load at the user's end, its SEO percentage, the loading time in seconds, and which heavy files make it slow for the user. They also give optimisation recommendations to you and your developers about what needs to be improved.
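A minimal sketch of one way to capture a page-load time for the kind of check described above, assuming Java with Selenium and the browser's Navigation Timing API; this is only a spot check, not a substitute for a JMeter load test, and the URL is a placeholder.

    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class PageLoadTimeCheck {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com"); // placeholder URL

                // Navigation Timing API: loadEventEnd - navigationStart = full page load time in ms
                Object result = ((JavascriptExecutor) driver).executeScript(
                        "return window.performance.timing.loadEventEnd - window.performance.timing.navigationStart;");
                long loadMs = ((Number) result).longValue();

                System.out.println("Page load time: " + loadMs + " ms");
                // A simple reliability check: repeat the measurement several times and compare the spread.
            } finally {
                driver.quit();
            }
        }
    }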

Describe your experience with software development life cycle models and how they impact your process. Okay. The software development life cycle is a very important part that affects each and every QA process. There are six phases in the SDLC: plan, design, implement, test, deploy, and maintain. First is the planning phase, which is mostly done by the managers and team leads, but planning is very important at the QA end as well. The planning phase typically includes task and cost-benefit analysis, scheduling, resource estimation, and allocation, and the development team collects requirements from the stakeholders. The QA team's part is that, while the developers plan the project, the testers are also planning, following the STLC (software testing life cycle): what requirements the client has given, what test cases are needed, and how they are going to test that particular project. The second phase is design: software engineers analyse the requirements and identify the best solutions to create the software, for example by considering integrations, pre-existing modules, technology choices, and development tools, and a tester should review the design from a testability point of view as well. The third phase is implementation, in which the developers build the features. The fourth, and the important one for QA, is the test phase. The team combines automation and manual testing to check the software for bugs; the quality analysis team tests the software for errors and checks whether it meets the customer requirements. The testing phase often runs parallel to development if you are using the Agile methodology; with the Waterfall methodology it comes to QA at the end. With Agile, as soon as one unit is developed it goes into testing, and if there is an issue in that unit, the team reports it immediately. Once testing is done and everything is good to go from the testers' end, we deploy the code to the client server, which we call UAT, and give it to the client for review; once the client's review is approved at their end, we deploy the code to the production server. The last phase is maintenance: among other tasks, the team fixes bugs, resolves customer issues, and acts on feedback for future releases. Maintenance of any website is very important, especially if it is an e-commerce site.