Test Automation "Terminate!"

How We Crush Bugs and Save Time with Automation

Where, why, and when to use automated QA testing
Carl Swanson

Quality Assurance Engineer

June 12, 2018

Test automation is the most controversial, frustrating, challenging, and rewarding aspect of my work as a software quality assurance (QA) professional, and it seems I’m not alone. Opinions on automation in the QA world range from it being an integral part of the development cycle, to a colossal waste of time and energy.

I can understand both sides. On the one hand, a poorly planned automation suite can do more harm than good, causing QA staff to spend as much time maintaining the tests as they would have spent on manual execution. On the other, a development team with a solid automated test suite will deliver stories faster and with greater confidence.

In my experience, the key to high-quality test automation is devising a measured, far-sighted test plan that strikes a balance between time saved and time spent. To this end, it’s important to understand why automation can help, when it’s most useful, and which tools are best for the job.

Why automate at all?

Automation is one way to deal with the issues of volume and repetition faced by many QA departments. Many of us are big proponents of robust regression testing in the software development process. Regression testing verifies that previously completed aspects of a project have not been adversely affected by recent developments.

However, when testing newly developed features, it can be difficult to predict every part of the project that could be affected. It’s often fruitful (or bugful) to execute tests throughout the entire application. The problem is that there will never be time to execute tests on every part of an application every time an update is made.

QA teams take different approaches to addressing this issue. Some do a full run of all tests at certain intervals (most expensive, least risky). Others run a predetermined subset of tests, based on priority or relation to the newly developed feature. I tend to lean toward the “priority” camp, which looks at each test and its associated functionality, and then prioritizes based on the volatility of the feature, or likelihood of bugginess.

Generally speaking, the more tests you perform, the more likely you are to identify bugs, ultimately leading to a higher quality product. However, following that rule blindly will lead to a lot of unnecessary testing. This can risk a high turnover rate for the QA professionals performing the tests, as executing the same regression tests for weeks or months on end is a recipe for low morale.

Automation aims to solve this dilemma by focusing on the areas of testing that are run most frequently. It can also target drawn-out or overly complex data entry and manipulation that may suffer from human error.

A well planned automation suite will grant a development team the confidence that the highest priority features of the project remain intact on as many platforms as necessary with every release. It can also increase overall productivity, since bugs can be found faster, thus providing a quicker overall turnaround time.

Can automation fail you?

If the above sounds fairly rosy, the natural reaction to what I just outlined could easily be “Let’s just automate everything!” This would be a mistake, and would inevitably result in lower overall quality and slower feature delivery. A set of automated tests is a reflection of the functionality it acts upon, which means that if the feature changes, the automated tests must be altered accordingly.

This maintenance of tests is often the downfall of an automation suite, eventually making the time required to keep the suite up to date each sprint greater than the time required to just run the tests manually.

When is automation most useful?

Rather than determining a blanket rule for automation, I like to take a feature-by-feature approach. With each feature, I determine if it’s appropriate to test manually or through automation. To do so, I ask a few questions:

1. Can this feature be automated in a reasonable manner in the first place?

  • For example, a feature which is accessed from outside the tested application may be possible to automate, but you might have to interact with a third-party site or client. Is it worth spending the time interfacing with a third party rather than testing that particular aspect manually?
  • You might consider partial automation, or “happy path” automation.

2. Do I have time to create this automation in the current sprint?

  • The priority to keep in mind with automation should always be quality assurance, not automation simply for the sake of automation.
  • Take the full scope of the sprint’s timeline into account when determining if automation generation is the right move. You never want automation to cause the project’s delivery speed to suffer.

3. Do I expect this feature to be relatively stable in the future?

  • While you can’t expect that every feature you are developing will be its final incarnation, don’t spend time automating a feature if it’s a phase one implementation, especially if it has a planned update in a future sprint or release!
  • Also keep an eye out for volatile content. For example, when a news carousel is updated with different content, a manual tester would not raise any flags, but an automated test could report a false failure. The sketch after this list shows one way to guard against that.
  • If you are unsure of the stability of a feature, hold the automation until the sprint after its implementation. This can give the team enough time to see the feature as part of the overall system and ensure the automation isn’t rendered obsolete.
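
To make the volatile-content point concrete, here is a hedged sketch (written against Protractor, with invented selectors and an assumed slide count) of asserting on a carousel’s structure rather than its rotating content:

```typescript
// Invented selectors and slide count; the idea is to assert on structure,
// not on headline text that editors rotate freely.
import { element, by } from 'protractor';

describe('news carousel', () => {
  it('renders the expected number of slides with a visible headline', async () => {
    const slides = element.all(by.css('.news-carousel .slide'));

    // Asserting on the slide count and headline visibility, instead of
    // exact headline text, means fresh stories don't register as failures.
    expect(await slides.count()).toBe(5);
    expect(await slides.first().element(by.css('.headline')).isDisplayed()).toBe(true);
  });
});
```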

How often to test

After determining what to automate, it’s important to decide when the automated suite should be run. There are many options available, and the entire development team should determine what is right for the team and the project. Tests can be run:

  1. Once per sprint – best if your team does a large regression suite before each sprint release.
  2. On a specific time-based schedule – usually once daily during downtime (e.g. 1 a.m.).
  3. Whenever a developer checks in a new change – based on sprint deadlines.

The schedule should be discussed in depth with the development team. While frequent runs provide faster bug-report turnaround, they may mean it no longer falls to just the QA professional to interpret test results. In the case of a false failure (i.e. the test reports a failure, but the real problem is a bug in the test that needs updating), updates to the test must be made quickly. Otherwise the usefulness of, reliance on, and confidence in the test suite will quickly decline.

Tools of the trade

1. Selenium

Selenium, an open-source browser automation tool, is the most commonly employed option and my personal favorite. It lets you write tests in your favorite language to bring up a site in one or more browsers, navigate and interact with the web application, and assert your expectations about the site’s behavior.
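
To give a concrete picture, here is a minimal sketch of such a check in TypeScript using the selenium-webdriver package; the URL, field name, and search term are placeholders, not taken from a real project.

```typescript
// Minimal sketch of a Selenium check with the selenium-webdriver package.
// The URL, field name, and search term are placeholders.
import { Builder, By, until } from 'selenium-webdriver';
import * as assert from 'assert';

async function searchSmokeTest(): Promise<void> {
  // Start a browser session; any installed driver (Chrome, Firefox, ...) works.
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    // Bring up the site under test.
    await driver.get('https://example.com');

    // Interact with the page: type a query and submit it.
    await driver.findElement(By.name('q')).sendKeys('release notes\n');

    // Assert an expectation on the resulting behavior.
    await driver.wait(until.titleContains('release notes'), 5000);
    assert.ok((await driver.getTitle()).includes('release notes'));
  } finally {
    await driver.quit();
  }
}

searchSmokeTest().catch((err) => {
  console.error(err);
  process.exit(1);
});
```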

We use Protractor for our Angular applications; it ties into Selenium and helps avoid some of the more brittle sleep or wait commands. Protractor also makes it easy to set up page objects, which let you define element locators and helper functions in a single place and include them in any test. When the site changes, rather than hunting down every location that references an element, we only need to update that one definition.
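
As a rough illustration of the page object idea, the sketch below defines a hypothetical search page and a spec that reuses it; the class name, route, and selectors are assumptions made up for the example.

```typescript
// search.page.ts – a hypothetical page object; selectors and route are invented.
import { browser, element, by, ElementFinder } from 'protractor';

export class SearchPage {
  // Locators live in one place, so a markup change means one update here.
  searchInput: ElementFinder = element(by.css('input.search'));
  submitButton: ElementFinder = element(by.css('button.search-submit'));
  firstResult: ElementFinder = element(by.css('.result-list .result-item'));

  async navigateTo(): Promise<void> {
    await browser.get('/search'); // resolves against baseUrl from the Protractor config
  }

  async searchFor(term: string): Promise<void> {
    await this.searchInput.sendKeys(term);
    await this.submitButton.click();
  }
}

// search.spec.ts – any test that touches search imports the same class
// instead of repeating selectors.
import { SearchPage } from './search.page';

describe('search', () => {
  it('shows at least one result for a known term', async () => {
    const page = new SearchPage();
    await page.navigateTo();
    await page.searchFor('automation');
    expect(await page.firstResult.isDisplayed()).toBe(true);
  });
});
```

If the markup of the search form changes, only the locators at the top of SearchPage need updating; every spec that imports it keeps working unchanged.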

We have also used PHPUnit, which has a Selenium extension and allows test automation to be written in PHP – a good choice if your developers are already writing unit tests with PHPUnit.

2. Gemini

Gemini is a visual regression tool which is very easy to use and allows for quick visual comparisons. When setting up Gemini tests, you specify which elements you want to capture and any actions required to get those elements on screen (such as performing a quick search or applying a filter). Gemini takes a screen capture of those elements, and at any time the test administrator can crawl through the site again, recapturing the same images and performing a pixel-by-pixel comparison. A full regression suite is not recommended here, as visual comparisons can be unreliable, but it is a great way to quickly check whether, for example, a style guide rule has been broken.
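
To give a feel for the setup, here is a rough sketch of a Gemini suite; the suite name, URL, and selectors are invented for illustration.

```typescript
// A Gemini suite file; `gemini` is supplied by the runner in test files,
// so it is only declared here to keep the sketch self-contained.
declare const gemini: any;

gemini.suite('header search', (suite: any) => {
  suite
    // Page to open before capturing.
    .setUrl('/search')
    // Elements whose screenshots are captured and compared pixel by pixel.
    .setCaptureElements('.search-box', '.results-panel')
    // Baseline state, captured as-is.
    .capture('plain')
    // A second state that requires an action first: run a quick search.
    .capture('with results', (actions: any, find: any) => {
      actions.sendKeys(find('.search-box input'), 'automation');
      actions.click(find('.search-box button'));
    });
});
```

In Gemini’s usual workflow, reference screenshots are collected once, and later runs recapture the same elements and report any pixel differences against those references.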

3. Eggplant Functional

Eggplant Functional is a powerful commercial GUI automation tool. It provides a number of methods to accomplish the level of testing that suits your needs: you can create simple linear scripts (record-and-run style) or build suites with functions and procedures that let you combine scripts and adhere to the Don’t Repeat Yourself (DRY) principle of programming. It runs at the device level and is therefore not restricted to browser testing. As they say: “Any platform. Any browser. Any software.”

In summary

Every software development project is different, with varying timelines, requirements, team sizes, member roles, and so much more. While it’s impossible to prescribe a catch-all automation approach or a catch-all automation tool, the first step is understanding what options are available.

I hope this article provides an overview of when automation can shine, along with possible pitfalls. Armed with this information, you can make the right decisions to create an automation suite that is perfect for your team and project.

Learn more

  • SeleniumHQ – browser automation
  • Protractor – end-to-end testing for Angular
  • PHPUnit – programmer-oriented testing framework for PHP
  • Gemini Testing – screenshot-based regression testing of web pages, on GitHub
  • Eggplant Functional – user-centric test automation for any platform or device

Author bio

Carl Swanson has been a QA professional for nine years, and has worked on desktop and web applications as a QA tester, QA manager, and Product Owner. While he has not abandoned his manual testing roots, he derives joy from the time saved running an automation suite. When not running tests, he’s endlessly updating his new-to-him 100+ year old house in Southern Maine.
