While the definition and value of continuous testing are clear to most practitioners in today’s digital era, the aging of a test suite, its maintenance, and its ability to keep delivering value are much harder to understand and manage.

When teams define their test automation scenarios and then plan their integration into the CI, they typically follow the well-known DevOps and continuous testing (CT) manifesto below:

  • Continuous testing over testing at the end.
  • Embracing all testing activities over only automated functional testing.
  • Testing what gives value over testing everything.
  • Testing across the team over testing in siloed testing departments.
  • Product coverage over code coverage.

Looking at the above bullets, they make total sense. However, how can you guarantee that teams consisting of various practitioners (business testers, test automation engineers, and developers) continuously follow them and keep gaining value from their suite?

To start addressing this question, here is a short recap of one of the common practices for deciding which test scenarios (functional and non-functional) are worth automating; a short scoring sketch follows the list.

  1. What’s the test engineer’s gut feeling 😊?
  2. Risk – calculated as probability of occurrence multiplied by impact on customers?
  3. Value – does the test provide new information and, if failed, how much time to fix?
  4. Cost efficiency to develop – how long does it take to develop and how easy is it to script?
  5. History of test – volume of historical failures in related areas and frequency of breaks?
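
Teams that want to move beyond gut feeling can fold criteria 2–5 into a simple weighted score. The Python sketch below is illustrative only: the weights, input scales (0..1), field names, and the 0.6 threshold are assumptions each team should tune, not a prescribed formula.

  # Illustrative scoring sketch; weights, inputs, and threshold are assumptions.
  def automation_score(risk_probability, customer_impact, info_value,
                       dev_cost_days, historical_failures):
      risk = risk_probability * customer_impact       # criterion 2: probability x impact
      cost = 1.0 / (1.0 + dev_cost_days)              # criterion 4: cheaper to develop is better
      history = min(historical_failures / 10.0, 1.0)  # criterion 5: failure-prone areas score higher
      # Criterion 1 (gut feeling) is deliberately left out of the math.
      return 0.4 * risk + 0.3 * info_value + 0.2 * cost + 0.1 * history

  # Hypothetical candidate: high-risk, informative, two days of scripting effort.
  if automation_score(0.7, 0.9, 0.8, dev_cost_days=2, historical_failures=5) > 0.6:
      print("Worth automating and adding to the CI suite")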

When asking the above questions properly, teams can come up with a good subset of test scenarios that are worth automating and executing over time. They also “sign” an ongoing “maintenance fee” for these scenarios, since they will be integrated into the CI and executed multiple times a day or week across multiple platforms that tend to change.

Main RCAs for Escaped Defects and Flakiness

When looking over time and across platforms at what causes, or can cause, tests to be flaky, deliver inconsistent results, and stop delivering value, the following root causes typically come up:

  • Coverage –> Not testing against the right platforms (mobile devices, desktop browsers, IoT devices, etc.)
  • Lacking or late test automation –> Delays in developing tests, or not automating them inside the iteration cycle
  • Poor creation practices –> Not following coding and test creation standards (use of wrong object locators, wrong timing, not handling popups and other interrupts, etc.) – see the example right after this list
  • Not designed for testability –> The system isn’t well designed for testability, creating too many hidden spots that can be prone to defects or cause flakiness in test code
  • Insufficient and outdated unit testing –> Unit tests should be owned and managed by developers and R&D leads
  • Outdated environments/platforms –> Teams running their tests against the wrong or outdated test data and test environments
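
To make the “poor creation practices” bullet concrete, here is a hedged Selenium (Python) sketch contrasting a brittle locator-and-sleep pattern with a stable locator plus an explicit wait. The page URL, element ID, and timeout are illustrative assumptions.

  # Illustrative Selenium sketch; locators, URL, and timeout are assumptions.
  from selenium import webdriver
  from selenium.webdriver.common.by import By
  from selenium.webdriver.support.ui import WebDriverWait
  from selenium.webdriver.support import expected_conditions as EC

  driver = webdriver.Chrome()
  driver.get("https://example.com/login")  # hypothetical page

  # Brittle: positional XPath breaks on layout changes; a fixed sleep causes flakiness.
  # driver.find_element(By.XPATH, "/html/body/div[3]/div/button").click()
  # time.sleep(5)

  # More robust: a stable locator plus an explicit wait for clickability.
  wait = WebDriverWait(driver, timeout=10)
  wait.until(EC.element_to_be_clickable((By.ID, "login-button"))).click()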

The above points are quite important and should stay top of mind continuously, together with the test automation creation method described above.

If the tests newly created inside sprints and iterations, and in between releases, do not go through a well-defined process that ensures they run against the right set of platforms (coverage), follow creation and maintenance practices as the platforms and features change, and execute against up-to-date environments, they will quickly stop giving any value to the teams.

Teams should ask themselves the following questions and gather similar metrics to assess value over time:

If { (test scenario) == candidate for a value-adding test case }

  { and – it detects defects }  // track their history

  { and – it passes across multiple platforms with consistent results }  // various platforms/OSs

  { and – its execution time is < 5 minutes }  // longer than 5 minutes is a hint to break the test into two

Then

  { Include in test suite && CI }

Else

  { Review the test: fix, split, or retire it }  // it no longer earns its maintenance fee
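
Translated into running code, the rule might look like the minimal Python sketch below. The TestScenario fields are hypothetical stand-ins for metrics a team would pull from its reporting tool; only the three conditions and the 5-minute hint come from the rule above.

  # Minimal sketch of the decision rule above; the dataclass fields are assumptions.
  from dataclasses import dataclass

  @dataclass
  class TestScenario:
      name: str
      detects_defects: bool               # tracked from defect history
      consistent_across_platforms: bool   # same verdict across platforms/OSs
      duration_minutes: float

  def triage(test: TestScenario) -> str:
      if (test.detects_defects
              and test.consistent_across_platforms
              and test.duration_minutes < 5):
          return "include in test suite & CI"
      if test.duration_minutes >= 5:
          return "split into two shorter tests"   # the 5-minute hint above
      return "review: fix or retire"

  print(triage(TestScenario("checkout_flow", True, True, 3.5)))
  # -> include in test suite & CI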

How to Clean Up Your CT Suite?

We’ve covered above some common practices for deciding which tests are candidates for automation, and we’ve mentioned some root cause analyses (RCAs) for tests that are flaky and not adding value. But the keyword that needs to be at the top of mind of DevOps managers is continuous value. Just as user stories are prioritized by their value to customers, test automation cases need to be judged on whether they should continue to be part of the suite.

Test code has an aging tag like any other software product. The tag can determine if the test value has “expired” or is still “valid”.

To understand whether such test cases are in the “valid” bucket or not, teams need visibility and metrics that are agreed upon by the three above-mentioned personas (business testers, test automation engineers, and developers).

Quality visibility can be achieved through advanced test reporting and analytics (ML/AI-based), and through annotations developers put in the test code that can later be sliced and diced to gather insights.
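
As one way to implement such an “aging tag” and developer annotations, here is a hedged pytest sketch. The marker names, the example date, and the 90-day expiry window are illustrative assumptions, not a standard; in a real project the markers would also be registered in pytest.ini.

  # Illustrative pytest sketch; marker names and the 90-day window are assumptions.
  import datetime
  import pytest

  VALUE_EXPIRY_DAYS = 90  # assumed window after which a test must re-prove its value

  @pytest.mark.owner("checkout-team")              # annotation to slice results by team
  @pytest.mark.last_proved_value("2024-01-15")     # last date this test caught a defect
  def test_checkout_flow():
      assert True  # placeholder for the real scenario

  # conftest.py fragment: surface tests whose "value tag" has expired.
  def pytest_collection_modifyitems(items):
      today = datetime.date.today()
      for item in items:
          mark = item.get_closest_marker("last_proved_value")
          if mark and (today - datetime.date.fromisoformat(mark.args[0])).days > VALUE_EXPIRY_DAYS:
              item.add_marker(pytest.mark.xfail(reason="value tag expired; review this test"))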

Such dashboards can surface, for example, both RCAs for test failures that are pure noise (the opposite of test value) and CI pipeline tracking with history and trending across branches and jobs.

Cleaning up your test suite should start with looking into your test data periodically and gathering insights and statistics across test cycles.

Such insights determine the noise-vs.-value ratio each test scenario brings to the product team, and also show the overall productivity and the time practitioners spend keeping their pipeline green and of high quality.
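
As a rough illustration of such a metric, the sketch below computes a per-test noise ratio from execution history, counting failures that were not tied to a real product defect as noise. The record schema is an assumption for illustration, not a reporting-tool API.

  # Sketch: per-test noise ratio from execution history; the record schema is assumed.
  from collections import defaultdict

  # Each record: (test_name, passed, defect_found) per execution.
  history = [
      ("login_test", False, False),    # failed, no product defect -> noise
      ("login_test", True,  False),
      ("checkout_test", False, True),  # failed and found a real defect -> value
      ("checkout_test", True, False),
  ]

  runs, noise = defaultdict(int), defaultdict(int)
  for name, passed, defect_found in history:
      runs[name] += 1
      if not passed and not defect_found:
          noise[name] += 1  # a failure that produced no new information

  for name in runs:
      print(f"{name}: noise ratio {noise[name] / runs[name]:.0%} over {runs[name]} runs")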

Bottom Line

Continuous testing by definition generates a lot of noise, test data, and artifacts, and that’s reasonable. To keep up with the product roadmap, teams must define their value criteria and help themselves stay ahead of the curve by eliminating noise and flakiness as much as possible. Do not get too attached to a test automation scenario just because it was written and runs OK across releases. The criterion for continuing to run a test starts and ends with that test’s continuous value, not just its result.


Happy CT!