Razor Insights

Automation Testing at Razor

Written by Razor
Senior Automation Engineer Simon shares the ins and outs of automation testing in this comprehensive guide. Understand the benefits of automation testing over manual testing, how to decide what to automate, which framework to use, and the importance of reporting in the process.

What is automation testing?

Automation testing uses tools such as computer programs or applications to execute a set of test instructions, called scripts, to check that the software functions as expected. Rather than a human executing the scripts, the tools run them programmatically. This allows the tests to run much faster and far more consistently, as it eliminates human error, which in turn gives a greater level of confidence in the quality of the software being tested.
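To make that concrete, here is a minimal sketch of what such a script can look like, written in the Mocha-style syntax of WebdriverIO (the framework we come back to later in this article). The URL, selectors and expected heading are placeholders for illustration only.

```typescript
// A minimal automated check: open a page, interact with it, assert the result.
// Everything here (URL, selectors, expected heading) is hypothetical.
describe('Login page', () => {
    it('lets a registered user sign in', async () => {
        await browser.url('https://example.com/login');

        await $('#username').setValue('test-user');
        await $('#password').setValue('super-secret');
        await $('button[type="submit"]').click();

        // The same assertion runs identically every time, with no human error
        await expect($('h1')).toHaveText('Welcome back');
    });
});
```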

Why automation testing?

Firstly, let’s make clear that automation testing isn’t something that will make manual testing redundant. Far from it. Automation tests are executed alongside manual tests to build more confidence and robustness into the software created. One of the major advantages of incorporating automation testing is that it frees up more time for manual testing and reduces the number of monotonous tasks that need to be carried out.

For example, imagine you have a website that sells insurance. For a customer to get a quote, they may have to fill in 30 different fields on the page, such as name, address, previous claims, how much excess they would like to pay, what they want covered and so on. If you were to test this manually, every time a change is made to that page, to one of its components or to anything linked to it, you should run a regression test to make sure that the page can still be completed and processed as before. This could take a manual tester 10 minutes to fill in, and if multiple combinations need to be tested, that quickly adds up to hours taken away from other work they could be doing.

If you used an automation test for this, you could run all those combinations in one go at the click of a button and complete the test within seconds. By automating this, testers can focus on designing, writing and executing test cases for any new or existing features whilst having the confidence that the core of the software is still working as it should.
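As a sketch (not the real insurance form, and with only two of the 30 fields shown), a data-driven WebdriverIO test for this might look something like the following, where each combination of inputs is simply another entry in an array:

```typescript
// Hypothetical quote combinations; in a real suite these might come from a
// fixture file or a test-data generator rather than being hard-coded.
const quoteScenarios = [
    { excess: 100, previousClaims: 0, expected: 'quote-shown' },
    { excess: 500, previousClaims: 2, expected: 'quote-shown' },
    { excess: 0,   previousClaims: 5, expected: 'referred' },
];

describe('Insurance quote form', () => {
    quoteScenarios.forEach((scenario) => {
        it(`handles excess ${scenario.excess} with ${scenario.previousClaims} previous claims`, async () => {
            await browser.url('https://example.com/quote'); // placeholder URL

            // A real test would fill in all ~30 fields; two are shown here
            await $('#excess').setValue(String(scenario.excess));
            await $('#previous-claims').setValue(String(scenario.previousClaims));
            await $('button[type="submit"]').click();

            await expect($(`[data-result="${scenario.expected}"]`)).toBeDisplayed();
        });
    });
});
```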

How do we decide on what should and shouldn’t be automated?

In an ideal world, we would automate as much as we possibly could for each and every project. In reality, we have to consider time and budget constraints for each project and focus on where the automated tests will be of the most value. This is because the major drawback of automation testing is the time it takes to set the tests up in the first instance. Once they are set up, you can leverage the reduced execution time and consistency they provide. For large projects, setting up automation tests brings major benefits, whereas for small, simple projects with limited complexity, automation testing may not justify the investment given the limited feedback it would provide within the timeframe.

To decide this, there are a few things we can look at:

  • Regression testing - Re-running tests to ensure that previously developed and tested software is still working. For larger, more complex projects this is a massive time saver, allowing us to run tests that check that other untouched areas of the software are not affected by the latest developer work or infrastructure changes.
  • End-to-end core process flows - For projects with a common set of start and end goals that are core to the business functioning. For example, with eCommerce, a user starts by entering the store and ends with the user buying an item. If any step in this process fails it could mean that the user cannot make a purchase, and the business could potentially lose income. By having automation tests in place we can make sure that these areas are always covered.
  • Fragile areas - If a project has a specific area that is complex and potentially brittle, focusing automation testing on this area means we can gain more confidence that those areas aren’t affected by code changes.
  • Time savings - For repetitive tasks, such as filling in multiple pages of forms, automation testing can run multiple sets of data in a fraction of the time that a human would take to execute this test.
  • Multiple browsers or devices - If a project requires tests to be carried out on numerous browsers or mobile devices, this can very quickly increase the time it takes to test even a small piece of work manually. By using automation we can run the tests across multiple environments in parallel, saving a lot of time and effort (see the configuration sketch after this list).
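To give a flavour of that last point, here is a sketch of a WebdriverIO configuration excerpt that runs the same suite against several browsers in parallel. The browser names are standard capability values, but the exact setup (drivers, cloud grid, device farm) depends on the project.

```typescript
// wdio.conf.ts (excerpt) - illustrative only
export const config: WebdriverIO.Config = {
    specs: ['./test/specs/**/*.ts'],
    maxInstances: 5, // how many sessions may run at the same time
    capabilities: [
        { browserName: 'chrome' },
        { browserName: 'firefox' },
        { browserName: 'MicrosoftEdge' },
    ],
    framework: 'mocha',
};
```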

What automation framework do we use and why?

This is always open for debate, depending on the project requirements and whether there is something specific we need to look at. For instance, if mobile software testing is essential, a framework compatible with tools like Appium becomes crucial. Nonetheless, I advocate for consistency across projects, as it streamlines onboarding for testers unfamiliar with automation. That's why, at Razor, we've opted for WebdriverIO for most of our automation testing, ensuring efficiency and cohesion across our work.

Some other benefits of WebdriverIO we have found are:

  • Great support and documentation - The website is set out clearly with easy-to-follow guides, frequent issue fixes and a dedicated Discord channel where you can ask questions or raise issues.
  • The list of add-on services is huge - Do you want to run a visual regression test? There’s a service for that and you can have it set up in minutes. Maybe you want to integrate it with Jenkins? You can do that too. Maybe you need to test on mobile? Install the Appium service and you’re away.
  • It’s free - What more is there to say?
  • A fairly gentle learning curve - Getting set up and writing your first tests with WebdriverIO is pretty easy in comparison to a lot of other testing frameworks.
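As an example of how lightweight those add-ons are, enabling a service is typically a one-line change in the configuration file. The excerpt below is a sketch rather than a full working config, and the choice of services and their options would need tailoring to the project.

```typescript
// wdio.conf.ts (excerpt) - enabling add-on services, illustrative only
export const config: WebdriverIO.Config = {
    // ...rest of the configuration...
    services: [
        'appium', // @wdio/appium-service: spins up Appium for mobile testing
        // visual regression and CI-oriented services can be added in the same
        // way, usually as ['service-name', { ...options }]
    ],
};
```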

What about reporting?

One of the benefits of using something like WebdriverIO is the abundance of reporters that are ready to be plugged in and used. From something as simple as showing a green dot for each passing test and a red dot for each failure, to a report that can show you every interaction and take a screenshot of any failures, there is something for everyone.
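Reporters are plugged in through the same configuration file as services. As a rough sketch, the built-in 'dot' and 'spec' reporters cover the simple end of that spectrum, and richer reporters (Allure, JUnit and so on) are added in the same way.

```typescript
// wdio.conf.ts (excerpt) - reporters, illustrative only
export const config: WebdriverIO.Config = {
    // ...rest of the configuration...
    reporters: [
        'dot',  // a green dot per pass, a red dot per failure
        'spec', // a readable per-test breakdown in the console
    ],
};
```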

Screenshots of failures are an often overlooked resource. When a test fails, identifying the precise cause can be challenging. While you might have information on where the test failed and a helpful error message, a screenshot offers a rapid understanding of the issue, facilitating prompt resolution by the appropriate personnel.
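One common way to capture those screenshots in WebdriverIO is the afterTest hook, which runs after every test and knows whether it passed. Here is a sketch of that approach; the output folder is a placeholder and needs to exist before the run.

```typescript
// wdio.conf.ts (excerpt) - screenshot on failure, illustrative only
export const config: WebdriverIO.Config = {
    // ...rest of the configuration...
    afterTest: async function (test, context, { passed }) {
        if (!passed) {
            // Name the file after the test so failures are easy to find
            const fileName = `./errorShots/${test.title.replace(/\s+/g, '_')}.png`;
            await browser.saveScreenshot(fileName);
        }
    },
};
```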

All of this can be linked up with a CI/CD (continuous integration/continuous deployment) tool so that we don’t have to worry about running the tests manually after each deployment. Once again, by using the services provided within WebdriverIO, we can easily set up a process with something like Jenkins, TeamCity or BrowserStack (or any other automation server that facilitates the CI/CD pipeline). This allows us to run the tests regularly and lets everyone see any issues before the software goes live.