ROHIT PATIL

Vetted Talent
Seasoned Sr. Test Automation Engineer with 9+ years of experience delivering high-quality, comprehensive testing across projects, with expertise in diverse tools and frameworks, strong leadership, and a focus on automation efficiency and CI/CD.
  • Role: Sr. Test Automation Engineer
  • Years of Experience: 9 years

Skillsets

  • Python
  • Selenium
  • Test Automation
  • Load Testing
  • Performance Benchmarking

Vetted For

4 Skills
  • Senior Test Automation Engineer (AI Screening)
  • Result: 60%
  • Skills assessed: Manual Testing, Selenium, Automation Testing, Jenkins
  • Score: 54/90

Professional Summary

9 Years
  • Jun, 2020 - Present (5 yr 3 months)

    Sr. Test Automation Engineer

    Oracle Corporation
  • Apr, 2019 - May, 2020 (1 yr 1 month)

    Sr. Test Consultant

    Qualitest

Applications & Tools Known

  • Selenium
  • JMeter
  • LoadRunner
  • Cloudwatch
  • Grafana
  • Git
  • Jenkins
  • Wireshark

Work History

9 Years

Sr. Test Automation Engineer

Oracle Corporation
Jun, 2020 - Present (5 yr 3 months)
    • Projects: Inpatient Pharmacy, Outpatient Pharmacy, Supply Chain
    • Built manual and automation testing teams from scratch; member of the organization's Automation Review Board, ensuring best practices
    • Optimized GUI automation tests to a 90% pass rate and achieved 95% package quality
    • Implemented shift-left testing, reducing defects by 50% in 2023
    • Increased automation adoption over manual testing by 20% across teams
    • Automated Hazard Analysis (Selenium and Python), streamlining workflow and saving Product Owners 2 days/quarter
    • Developed and executed over 120 automated Selenium/Pytest tests for a medical research application, improving test coverage by 30%
    • Led development, maintenance, and execution of complex cross-product automation tests, evaluating best-in-class tools
    • Guided cross-functional teams towards KPI achievement, ensuring adherence to project timelines and goals through effective direction and support

Sr. Test Consultant

Qualitest
Apr, 2019 - May, 2020 (1 yr 1 month)

    Project: The Associated Press

    • Executed load testing with 1 million concurrent users to pinpoint the exact bottlenecks in the infrastructure
    • Performed API load testing using JMeter to verify that news stories are published at intervals that mimic the real-world scenario


    Project: TeamViewer

    • Built a framework to compare the performance of TeamViewer against its competitors using Wireshark, Omnipeek, and Grafana
    • Mentored a team of 5 to drive a first-of-its-kind performance benchmarking project measuring Application Response Time, Network Latency, Frame Rate, Error Rate, Throughput, and Image Quality


    Project: Guidewire Software

    • Audited the performance testing results of Guidewire insurance applications to verify that the results were legitimate and generated by the built-in performance tools


    PoC:

    • Scripting with Web (HTTP/HTML), API and TruClient Protocols in LoadRunner
    • Performance Testing using JMeter and LoadRunner for a Salesforce App

Achievements

  • Optimized and ensured 90% pass rate for GUI automation tests
  • Achieved 95% package quality
  • Implemented shift-left testing, reducing defects by 50% in 2023
  • Increased automation adoption over manual testing by 20%
  • Automated Hazard Analysis
  • Developed and executed over 120 automated Selenium/Pytest tests for medical research application
  • Guided cross-functional teams towards KPI achievement
  • Earned Quality Star awards in 2022 and 2023
  • Earned client praise for problem-solving skills

Major Projects

5 Projects

Inpatient Pharmacy, Outpatient Pharmacy, Supply Chain

    Developed and maintained comprehensive test suites and tools for pharmacy and supply chain applications.

Medical Research Application

    Created extensive automated test coverage using Selenium and Pytest for effective medical research application testing.

The Associated Press

    Executed load testing with 1 million concurrent users and performed API load testing using JMeter for timely news story publication.

TeamViewer

    Built a performance comparison framework for TeamViewer using Wireshark, Omnipeek, and Grafana.

Guidewire Software

    Audited performance testing results of Guidewire Insurance applications for accuracy and legitimacy.

Education

  • M.Tech in CNE

    VTU University
  • B.E in E & C

    VTU University

Certifications

  • Certified Eggplant Functional

AI-interview Questions & Answers

I did my graduation from a college in Bangalore, which is affiliated with VTU. From there, I was picked as an intern as a tester. Through the internship I did manual testing as well as automation testing, after which I was converted to a full-time tester. There, I started doing automation testing, especially at the regression and system testing level. Initially we do black-box system testing manually, and once the manual, black-box testing is done, we do the automation testing. I have been using Selenium along with Python for that, and spent a couple of years on it. After that, I moved to a company called Qualitest, where I did performance testing using tools like LoadRunner and JMeter. Then I came back to my earlier company, which is now part of Oracle. There I have been doing automation testing as well as some management: looking after three teams, each of 4 to 5 associates, to make sure the quality is up to the mark, especially with respect to the number of defects leaked to the client, to find the pattern and report it to the respective stakeholders so they can make the right decisions on test strategy, so that we approve the right test cases and execute them at the right time. My current roles and responsibilities are automation test scripting, looking after the reviews that come up for automation scripts, and making sure we do the right testing at the right time for all three products I look after. Thank you.

I haven't really worked on API testing, so I don't have experience in this area.

Okay. I would definitely look at the log file of the job to see where it has failed, specifically at which step. We'll have logs showing which step failed, so I'd look into that and make sure there's enough dynamic wait in place; usually a lot of test cases fail because of that. So I'd make sure there's enough wait, that elements are located properly, and that the page is ready to interact with those elements. The log file is the first thing I would look at. Also, if time permits, I'd look at the previous jobs that passed or failed; if a previous run passed, I'd definitely dig into the difference: whether it's in the script or in the application itself. I'd also check the screen prints. If there is an exception, which we usually log in our automation scripts, I'll check which exceptions have been logged for this failed job, make sure I understand the exception well, and then fix it in the script. Thank you.

Okay. As rightly asked, there are many scripts that fail because of synchronization issues. I would make sure explicit waits are in place, especially before we interact with the elements, to make sure the elements are fully present or located in the DOM. For example, to use an explicit wait, I'd use it with the expected conditions: import the expected conditions as EC, and define wait = WebDriverWait(driver, 20), say, if you have to wait for an explicit time of 10 or 20 seconds. Then wait.until(...), with whatever condition the requirement calls for inside the brackets, for example presence of all elements located, providing the XPath or whatever locator is there, to make sure the element is fully ready. This keeps the application we are testing and the script we have in sync; it should not be that the script executes faster than the application itself, which is where we see most scripts fail. Also, if there's a requirement to use implicit waits, to make the driver wait until all the elements in the DOM are located, we'll use that as well. So it depends on the situation, but by using explicit and implicit waits we can handle these synchronization issues.
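
A minimal sketch of the explicit-wait pattern described in this answer, assuming Python and Selenium; the URL and locator are illustrative placeholders:

    # Minimal sketch of the explicit-wait pattern; the URL and locator are
    # illustrative placeholders, not from an actual project.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.get("https://example.com/login")

    # Explicit wait: block up to 20 seconds until the elements exist in the DOM
    wait = WebDriverWait(driver, 20)
    fields = wait.until(
        EC.presence_of_all_elements_located((By.XPATH, "//form//input"))
    )

    # Implicit wait: a driver-wide fallback, as mentioned above
    driver.implicitly_wait(10)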

Okay, so as the Selenium organization and most teams suggest the Page Object Model, we also use the Page Object Model right now in our products. Let's say there is an e-commerce website we are trying to automate. I'd have a package, a directory called pages, and inside it different Python files for the different pages in the e-commerce site: for example a login page, then the home page, then maybe a search page if required, then an add-to-cart page, then a payment page, and finally logout, which we can combine with the login page. We define the respective methods or functions in the respective pages, and the XPaths or the elements, basically the selectors required for those pages, are defined there as well. For example, if there is a login function I have to write, I would write it in the login page, define whatever elements are required there (selectors, XPaths, CSS, or whatever it is), and call it in my test file. In the test function we call that login method and assert, only in the test function, whether the login has happened successfully or not. In the home page I would again have some elements, such as the title bar and the search elements, and use those selectors to make sure the login is correct and the user is now on the home page. Similarly for search, add to cart, then payment, and finally logoff: the respective elements are defined in the respective pages, we write the respective methods and functions required in the respective pages, and use all of them in the test function. Thank you.
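
A minimal sketch of the page-object layout described in this answer, assuming a pytest driver fixture; the class names, locators, and test are illustrative, with the assertion kept in the test function as the answer suggests:

    # pages/login_page.py -- hypothetical page object with illustrative locators
    from selenium.webdriver.common.by import By

    class LoginPage:
        USERNAME = (By.ID, "username")
        PASSWORD = (By.ID, "password")
        SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

        def __init__(self, driver):
            self.driver = driver

        def login(self, user, pwd):
            # The page object owns locators and actions, not assertions
            self.driver.find_element(*self.USERNAME).send_keys(user)
            self.driver.find_element(*self.PASSWORD).send_keys(pwd)
            self.driver.find_element(*self.SUBMIT).click()

    # tests/test_login.py -- the assertion lives only in the test function
    def test_login_lands_on_home_page(driver):
        LoginPage(driver).login("demo_user", "demo_pass")
        assert "Home" in driver.title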

This is something we haven't done; we keep the functional testing to just functional testing, and we haven't integrated it to test the performance of the application. But I guess we can do it by measuring the wait time a page is taking, as well as having the execution times recorded, especially for certain functions or certain tests. Let's say I have to log in: see how much execution time that function has taken to log in and compare that with the expected time. For example, if you have the requirement that login has to happen in just 5 seconds, use the time package that comes with Python: import time, record the current time at the start, record the time again at the end of the test function, and see how much time has elapsed. If that time falls within the 5 seconds in which, in our case, the login has to happen, go ahead and pass it; the performance of the application is good enough to proceed. Otherwise, the functional testing can still continue, but the performance check has failed. This, I think, is what we can implement to make sure the applications are performing well and are tested well through functional Selenium testing too. Thank you.
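
A minimal sketch of the timing check described in this answer, assuming a pytest driver fixture; the 5-second budget and the LoginPage helper are illustrative:

    import time

    LOGIN_BUDGET_SECONDS = 5  # illustrative requirement from the answer above

    def test_login_within_budget(driver):
        start = time.monotonic()
        LoginPage(driver).login("demo_user", "demo_pass")  # hypothetical page object
        elapsed = time.monotonic() - start

        # The functional result and the performance budget are checked separately
        assert "Home" in driver.title
        assert elapsed <= LOGIN_BUDGET_SECONDS, f"login took {elapsed:.2f}s"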

As mentioned previously, I haven't worked much on API testing, so I'm not really sure about this one. I'm done.

Okay. As previously mentioned in the answer about waits, I see that there are no waits implemented in any of this. For example, on the second line, where you open the page, the snippet doesn't wait for the username field to appear, so the script might fail before it appears. Again, I would definitely use explicit waits here: import WebDriverWait as well as the expected conditions as EC. Then I'd define a generic function somewhere in my utils or helper functions where I would wait for each element. For example, wait.until with an expected condition such as presence of element located (or presence of all elements located), passing the locator, and call that function here to make sure the username field has appeared first. Then wait for the password field, and then the login button; if the password field is there, the login button will usually have appeared too, so we don't really need to wait for it, but as a safety step we can still wait and then click. That is how I would implement waits here, to make sure all the page elements are loaded before they are interacted with. Thank you.
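
A minimal sketch of the generic wait helper and refactored login flow described in this answer; the locators are illustrative placeholders:

    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    def wait_for(driver, locator, timeout=20):
        # Generic helper: block until the element is present in the DOM
        return WebDriverWait(driver, timeout).until(
            EC.presence_of_element_located(locator)
        )

    def login(driver, user, pwd):
        wait_for(driver, (By.ID, "username")).send_keys(user)
        wait_for(driver, (By.ID, "password")).send_keys(pwd)
        # Waiting for the button is the safety step mentioned above
        wait_for(driver, (By.ID, "login")).click()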

A concise plan for integrating security testing: I haven't done much security testing, so I'm not really sure. I would definitely try logging in using the most commonly used usernames and passwords, to make sure our application is not easily hackable that way; I would log in with generic usernames and passwords to make sure the application is secure at login. Apart from that, I don't have much security testing experience, but I would use some of the libraries that are out there for this kind of testing, like penetration testing, and integrate that with my test script during each phase: for example login, search, add to cart, and especially at the payment step, making sure that even if you provide a wrong PIN or wrong values to make the payment, it is not accepted. I would definitely do that kind of security testing, as far as I know. Thank you.
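
A minimal sketch of the weak-credentials check described in this answer, assuming pytest and a driver fixture; the credential list and LoginPage helper are illustrative:

    import pytest

    # A few commonly tried credential pairs -- an illustrative list only
    WEAK_CREDENTIALS = [("admin", "admin"), ("admin", "password"), ("root", "123456")]

    @pytest.mark.parametrize("user,pwd", WEAK_CREDENTIALS)
    def test_weak_credentials_are_rejected(driver, user, pwd):
        LoginPage(driver).login(user, pwd)  # hypothetical page object
        # The application should reject the login and stay on the login page
        assert "Login" in driver.title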

As I have mentioned in the introduction, we still do manual testing; there is no automation testing without manual testing, that is what I believe in. Effective manual testing will catch a lot of the defects that would otherwise leak to clients, compared to automation testing, because automation testing mostly covers the primary workflows of the application to make sure they have not broken, so that we have high confidence in releasing the product to the clients and can test as fast as possible. But the true testing can happen only through the exploratory or manual testing that we do, to make sure we find the defects that have been recently introduced by the developers. Process-wise: after the unit testing, when the developers give us the code or the application, I'll come up with a scenario matrix to understand what the impact is. I'll update the automation tests for the primary workflows that are in the scenario matrix, and take out some scenarios that can be done as part of exploratory testing; at this level I will also look at the exploratory testing scenarios left out of the scenario matrix and test all those scenarios as well. Then, once integration testing is done, we'll move to system testing or regression testing, where we'll run all our tests for this product. So the strategy would be: unit testing first, then automation testing for the particular workflows, along with exploratory manual testing to find the defects that are wide in nature and to cover scenarios we cannot automate (there could be date-and-time scenarios, or things like color identification), plus the impact scenarios; and then finally system or regression testing, where we again run the full automation test suite and then release to the clients. Thank you.

Okay. I currently do this. I would suggest that the junior tester start automating as early as possible, as soon as the developer picks up the story; I ask the tester to pick up the story too. It usually happens that once the UI is ready, the automation tester starts scripting, but actually that shouldn't be the case: both should start at the same time. There may be no code or no UI to automate yet, but there will be something that can be tested, or at least documented, about the end of the story from the developer, and we'll try to understand that, so it won't become one complex test scenario at the end of the day. Bit by bit, if they understand and start documenting, at least a blank script, I would say, with just the step names or even just comments is fine to start with. As the developer keeps on developing, the tester should also keep on automating, however much it is; even if it's just a couple of lines to start with, keep it going, sprint by sprint. I believe there's no development of an entire application or project workflow that happens in just one iteration or one sprint; it obviously takes time, and during this time the tester can also use the time to understand the functionality. Once the functionality is understood, I believe it's not that complex. Sometimes there are dynamic XPaths and such, but we can find the patterns those attributes follow and use them, so the junior tester can start with the basics and keep on automating and rolling it forward; that way it won't be complex. And then the reviews, especially: once the tester develops something and uploads it to Git for review, as the senior tester or lead we have to make sure it is correct and they are on the right path, provide the feedback that's actually required, and then ask them to rework it. So that's how I think I would do it: break it into smaller pieces, have the tester progress as the developer progresses, and then frequent reviews to give regular feedback so that they stay on track. Thank you.