Optimizing Your End-to-End Browser Test Coverage


“[Q]uality is never an accident; it is always the result of intelligent effort.”

-John Ruskin, leading art critic and Victorian-era writer

The necessity for intelligent effort is especially clear in software development, where thoughtful effort must be applied at every step of the process, and particularly during testing.

It’s imperative to continuously test a product—during development, during and after deployments when it goes live, and periodically as it’s used in production—as a form of preventative care, ensuring your customers get the most value possible from your web application and walk away content with their experience.

But when it comes to browser testing, it can be difficult to decide how much testing you need. Unit testing is conceptually simple: to determine test coverage, you count the lines of code and check whether each one is exercised by a test. Even this metric quickly breaks down, because it says nothing about whether a particular line really matters or whether all of a line’s logic has actually been tested. At the higher levels of integration, API, or browser-level (end-to-end) testing, most organizations don’t even attempt a coverage metric, because the number of possible uses is combinatorially vast.
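
To see why even this simple metric misleads, consider a minimal, hypothetical sketch (the function and tests below are invented for illustration): a single-line return can hide several logical paths, so one test can execute every line and report 100% line coverage while most of the logic goes unchecked.

```python
# Hypothetical example: 100% line coverage does not mean the logic is tested.

def shipping_fee(subtotal, is_member):
    # One line of code, but the compound condition hides multiple logical paths.
    return 0 if (is_member or subtotal >= 50) else 5

# This single test executes every line, so line coverage reports 100%...
assert shipping_fee(60, False) == 0

# ...yet the member path and the paid-shipping path were never exercised.
# Only these additional cases would catch regressions in that logic:
assert shipping_fee(10, True) == 0
assert shipping_fee(10, False) == 5
```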

The Old Standard

Typically, teams (often jointly between product and engineering) determine a set of core test cases based on feature sets or a specified set of requirements. Coverage is then calculated against this list: if an automated test exists for a case, it counts toward the coverage metric. Once every case on the list has an automated test, teams will typically say they have full coverage, if they’re not audacious enough to claim 100%.
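
In practice, that metric is just the share of agreed-upon cases that have an automated test behind them. A minimal, hypothetical sketch (the case names are invented for illustration):

```python
# "Old standard" coverage: fraction of pre-agreed test cases with automation.
agreed_cases = {"login", "search", "checkout", "password_reset", "profile_edit"}
automated_cases = {"login", "search", "checkout"}

coverage = len(agreed_cases & automated_cases) / len(agreed_cases)
print(f"Coverage: {coverage:.0%}")  # 60%, and silent about cases never listed at all
```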

This metric, however, can be misleading. Has the team correctly identified all of the important use cases? Have they accounted for the many permutations of possible user flows through those use cases?

This raises a further question: how many user stories is it actually realistic to cover? Which potential user stories should be covered at all? Certainly, no single engineer can test every potential user story. Not only would doing so take an unrealistic amount of time and resources; much of that time would likely be spent testing scenarios that rarely happen in the real world. Trying to test more and more potential use cases leads to test bloat, in which engineer-hours and test runtime trend ever upward while test stability and usefulness trend downward.

Using this method, any subset of user stories chosen for testing is likely to suffer from an issue as old as computer science itself: garbage in, garbage out. If the user stories being tested aren’t realistic, then the test results you receive will ultimately be of little use to your team. You’ll miss testing important use cases, and critical bugs will make it into production.

A Better Way

Ideally, a team should know exactly which user stories are most representative of real user behavior, and test primarily those. When tests are written to reflect real user pathways, the “garbage” disappears: the tests closely simulate what real users will actually do. With no garbage in, testers spare themselves the garbage out that flawed testing practices produce, which ultimately harms both users’ experiences and revenue. If every test is founded on reality, every result is genuinely valuable to the software team.

Fortunately, product analytics makes discovering real user stories possible. With the right toolset, developers can track how users actually navigate a site, develop data-backed user stories from that behavior, and identify the most likely ways a real user will use the web application. Once the most likely user stories are identified, effective tests can be built around them, providing both testing efficiency and meaningful coverage.
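
As a rough illustration of the idea, the sketch below counts how often each complete navigation flow appears in recorded sessions and surfaces the most common flows as candidates for end-to-end tests. The event names and session data are invented; a real pipeline would pull them from your product-analytics provider.

```python
from collections import Counter

# Each session is the ordered list of page/event names one real user produced.
sessions = [
    ["home", "search", "product", "add_to_cart", "checkout"],
    ["home", "search", "product", "add_to_cart", "checkout"],
    ["home", "product", "add_to_cart", "checkout"],
    ["home", "search", "search", "product"],
    ["home", "account", "order_history"],
]

# Rank complete flows by frequency; the top entries become the data-backed
# user stories most worth automating as browser-level tests.
flow_counts = Counter(tuple(s) for s in sessions)
for flow, count in flow_counts.most_common(3):
    print(f"{count}x  " + " -> ".join(flow))
```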

Ultimately this change—from determining test cases with a team in a room to detecting test cases from what your users are actually doing in your application—will become an industry standard. The technology enabling this pivot is new compared to traditional testing customs, and many teams will need to adjust their expectations and KPIs so their testing teams can succeed while making the shift.

We built ProdPerfect to plug product analytics directly into testing, making that transition easier and more immediately impactful. Whether or not you work with us to shepherd this transition toward a more data-driven testing suite, the seismic shift toward test-case detection will mean higher-quality software for your business and for the industry as a whole.
