How to Build E2E Test Cases

Developing end-to-end (E2E) test cases poses a substantial challenge compared to writing unit and API test cases. Blocks of code and APIs can be tested against a well-defined, limited, and predetermined set of business rules. Test-driven development (TDD) techniques can empower developers to write relevant tests alongside their code.

Developing E2E tests requires an entirely different approach. This level of testing is meant to replicate the behavior of a user interacting with many blocks of code and multiple APIs simultaneously. Below we recommend a process that will help you build accurate, effective test cases for your E2E testing regime. Note that we will not cover test scripting here, only test case development.

There are four considerations, each of which will be explored in turn:

  1. How to Scope End-to-End Testing
  2. What Bugs to Target
  3. Which User Flows to Follow
  4. How to Design Test Cases

How to Scope E2E Testing

The goal of E2E testing is to make sure that users can use your application without running into trouble. Usually, this is done by running automated E2E regression tests against said application. One approach to choosing your scope could be to test every possible way users could use an application. This would certainly represent true 100% coverage. Unfortunately, it would also yield a testing codebase even larger than the product codebase, and a test runtime likely longer than the time it took to write the build being tested.

Senior QA engineers are often relied on to determine scope as well. Combining experience, knowledge of the codebase, and knowledge of the web app's business metrics, a QA engineer can propose tests that should stop your users from encountering bugs when performing high-value actions. Unfortunately, "should" is the weakness of this approach: a biased understanding of the web app, cost, and reliance on one individual inevitably lead to bugs making their way into production.

The team should therefore test only how users are actually using the application. Doing so yields the optimal balance: thorough test coverage without excessive resources or runtime, and without relying on an expert to predict how customers use the website. This approach is driven by user data rather than an expansive exploration of the application's feature options. To mine that data, you'll need some form of product analytics to understand how your users currently use your application.

What Bugs to Target

E2E testing should not replace or substantially repeat the efforts of unit and API testing. Unit and API testing should test business logic. Generally, a unit test ensures that a block of code always results in the correct output variable(s) for given input variable(s). An API test ensures that for a given call, the correct response occurs. 
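To make the distinction concrete, here is a minimal sketch of the unit-testing level described above: a pure function checked for correct outputs given known inputs. The discount function and coupon codes are hypothetical, purely for illustration.

```python
# Unit tests pin a block of code to its business rules: for a given
# input, the output must be correct. No browser or UI is involved.

def apply_discount(subtotal: float, coupon: str) -> float:
    """Return the subtotal after applying a (hypothetical) coupon code."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(subtotal * (1 - rates.get(coupon, 0.0)), 2)

assert apply_discount(100.0, "SAVE10") == 90.0
assert apply_discount(100.0, "SAVE25") == 75.0
assert apply_discount(100.0, "BOGUS") == 100.0  # unknown coupons are ignored
```

An API test works the same way one level up: for a given request, the correct response must come back.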

E2E testing is meant to ensure that user interactions always work and that a user can complete a workflow successfully. E2E test validations should therefore make certain that an interaction point (button, form, page, etc.) exists and can be used. Then, they should verify that a user can move through all of these interactions and, at the end, the application returns what is expected in both the individual elements and also the result of user-initiated data transformations. Well-built tests will also look for JavaScript or browser errors. If tests are written in this way, the relevant blocks of code and APIs will all be tested for functionality during the test execution.
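The validation pattern above can be sketched in code. To keep the example self-contained, a stub "page" object stands in for the browser; in a real suite the same checks would run through a driver such as Selenium, and the element names here are invented.

```python
# E2E validation pattern: (1) the interaction point exists and is
# usable, (2) the user can move through the interaction, (3) the
# application ends up in the expected state with no JS errors.

class FakePage:
    """Stands in for a browser page; element names are hypothetical."""
    def __init__(self):
        self.elements = {"add-to-cart": {"enabled": True}}
        self.js_errors = []
        self.cart = []

    def find(self, element_id):
        return self.elements.get(element_id)

    def click(self, element_id):
        el = self.find(element_id)
        assert el and el["enabled"], f"{element_id} missing or disabled"
        self.cart.append("llama")  # the user-visible result of the interaction

page = FakePage()

# 1. The interaction point exists and can be used.
button = page.find("add-to-cart")
assert button is not None and button["enabled"]

# 2. The user can move through the interaction...
page.click("add-to-cart")

# 3. ...and the application returns what is expected, with no JS errors.
assert page.cart == ["llama"]
assert page.js_errors == []
```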

Which User Flows to Follow

The risk of a bloated test suite, beyond high maintenance cost, is runtime that grows too long for tests to be run in the deployment process for each build. If you keep runtimes to only a few minutes, you can test every build and provide developers immediate feedback about what they may have broken, so they can rapidly fix the bug. 

To prevent test suite bloat, we suggest splitting your test cases into two groups: core and edge. Core test cases are meant to reflect your core features—what people are doing repeatedly. These are usually associated with revenue or bulk usability; a significant number of users are doing them, so if they fail you're in trouble. Edge cases are the ways that people use the application that are unexpected, unintended, or rare, but might still break the application in an important way. The testing team will need to pick and choose which of these cases to include based on business value. Be careful about writing an edge case test for every edge bug that occurs. Endlessly playing "whack-a-mole" can again cause the edge test suite to become bloated and excessively resource-intensive to maintain.
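One hedged way to make the core/edge split is mechanical: rank observed flows by how often your analytics saw them, and treat everything above some traffic-share threshold as core. The flow names, counts, and 5% threshold below are invented for illustration.

```python
# Partition observed user flows into core and edge suites by the share
# of total traffic each flow represents. The threshold is a business
# decision, not a magic number.

def split_core_edge(flow_counts: dict, core_share: float = 0.05):
    total = sum(flow_counts.values())
    core = {f for f, n in flow_counts.items() if n / total >= core_share}
    edge = set(flow_counts) - core
    return core, edge

flows = {"search>product>checkout": 9000, "login>dashboard": 700,
         "export-csv": 200, "edit-profile-photo": 100}
core, edge = split_core_edge(flows)
assert "search>product>checkout" in core
assert "edit-profile-photo" in edge
```

Flows that fall below the threshold still get reviewed for business value (a rare flow can still be revenue-critical) rather than being dropped automatically.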

If runtime allows, we recommend running your core and edge tests with every build. Failing that, we recommend running core feature tests with every build, and running the longer-runtime edge case tests occasionally, in order to provide feedback on edge case bugs at a reasonable frequency.

How to Design Test Cases

Every test case, whether it is core or edge, should focus on a full and substantial user experience. At the end of a passing test, you should be certain that the user will have completed a given task to their satisfaction.

Each test will be a series of interactions, with an element on the page: a link, a button, a form, a drawing element, etc. For each element, the test should validate that it exists and that it can be interacted with. Between interactions, the test writer should look for meaningful changes in the application on the DOM that indicate whether or not the application has responded in the accepted way. Finally, the data in the test (an address, a product selected, some other string or variable entered into the test) should be used to ensure that the test transforms or returns that data in the way that it’s expected to.
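The design above lends itself to a declarative shape: a test case as an ordered list of interactions, each validated before and after. The step names, page state, and tiny runner below are illustrative, not any specific framework's API.

```python
# A test case as a series of interactions. For each step the runner
# checks that the element exists, applies the action, verifies the DOM
# responded, and confirms that test data round-trips as expected.

test_case = [
    {"element": "ship-to-form", "action": "fill", "data": "123 Llama Ln"},
    {"element": "continue-btn", "action": "click", "expect_dom": "payment-page"},
    {"element": "order-summary", "action": "read", "expect_data": "123 Llama Ln"},
]

def run(test_case, page_state):
    for step in test_case:
        assert step["element"] in page_state["elements"]      # it exists
        if step["action"] == "fill":
            page_state["entered"] = step["data"]
        if "expect_dom" in step:                              # DOM responded
            assert step["expect_dom"] in page_state["elements"]
        if "expect_data" in step:                             # data round-trips
            assert page_state["entered"] == step["expect_data"]
    return True

page_state = {"elements": {"ship-to-form", "continue-btn",
                           "payment-page", "order-summary"}}
assert run(test_case, page_state)
```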

If you build E2E test cases with this process in mind, you will achieve high-fidelity continuous testing of your application without pouring unnecessary or marginally valuable hours into maintaining the suite. You will be able to affordably ensure that users can use your application the way they intend to.

The seismic shift in how we test software

As it’s been since ARPANET, functional web software today is mostly shipped by luck, duct tape, or sheer will. Ask any engineer with tenure at an ecommerce company and they can tell you about the last time they broke checkout, the defining feature of doing commerce online.

Every year we have groundbreaking technologies that change the entire game of what kinds of software we develop: virtualization, the cloud, mobile, SOA, REST, NOSQL, ML/AI, microservices, SPAs, serverless, the list goes on. But except for the very bleeding edge of talent and investment, software has been tested pretty much the same way over the last 20 years: a mix of human manual testing, and a shallow layer of test automation.

As late as 2018, even sophisticated businesses struggle with testing: 42% of software companies are still testing entirely manually, and only 32% have become mostly automated, according to recent research sponsored by the testing companies Sauce Labs and SmartBear. 75% of testing teams aren't keeping up with software development: code is being deployed only partially tested. The majority of testing teams don't even have a firm practice for test management—they aren't even certain what they're testing. The bugs that are not caught by testing cost the globe literally trillions per year. This is true despite the fact that 25% of software budgets are allocated to QA testing.

We continually hope to be building better software year over year, but the bugs have been inescapable. Humans are imperfect and inconsistent; while that is beautiful in its own way, you'd never build a human pyramid more than a couple of layers high, lest you invite catastrophe. We're slower to the task than machines, we behave inconsistently, and communication breaks down at sufficient scale or across multiple layers of the organization. Since any manually tested system will have bugs, project failures and high development costs are the expectation and the norm.

However, there’s a light at the end of this tunnel. The last two years have seen a new breed of tools appear that has a chance to change the game. We ourselves roll our eyes when products throw the words “machine learning” or “artificial intelligence” around. But suffice it to say the tools are getting smarter in a non-trivial way.

The Rise and Fall and Rise Again of Browser Automation
As an industry, we’ve tried and failed to get away from using a browser to test web applications. We tried unit testing, we’ve tried API testing, we’ve tried behavior-driven testing, we’ve tried test-driven development, and we’ve tried just monitoring. None of these are necessarily bad, but it’s pretty rare that they provide a truly conclusive test that the product you are shipping will work for its users. Every one of those approaches tests a limited understanding of the application, and there will always be gaps in the testing that let significant bugs through. Good teams have as many nets as possible so that each testing system’s blind spots are hopefully covered by something else, but it’s impossible to have certainty. And the worst blind spots are usually where the integrations and units come together. At the end of the day, a web application is exercised by users through a browser: the only way to really know if it will work is by testing as much as possible through a browser, as close as possible to how real users exercise the system.

For all the advancements in other tooling and successor frameworks over the last 15 years, the standard in software quality assurance engineering is still Selenium. Originally developed in 2004, the idea was simple: rather than ask a human to manually click through a browser to test features of a web application, you could write code that would do the work. Originally a framework to remote-control a browser, it has evolved into an abstraction for controlling, or deeply integrating into, a wide variety of browser-like systems, as well as large-scale collections of remote machines that can run the browser.

While the initial principle is simple—automate a browser—the results can be complex. Selenium’s evolution has spawned incredible layers of abstraction to handle all the different varieties of browser behavior and usage one could dream of, as well as the different places the browser can be run and abstractions to fit how developers want to write their tests. The dirty secret is that a browser test really isn’t testing one unified component. Automated browser tests are almost always exercising an independently standing environment: a network setup, server setup, database, and external services for each test system involved. Since each of those components has both a constantly evolving state and its own update process, there are many independent variables that need to be controlled. The most frustrating impact is on time-based behavior. Loading a web page kicks off a variety of synchronous and asynchronous processes, which makes deciding when to call a page fully loaded and ready to test tricky, particularly with the rise of single-page applications. Mash all these frustrations together and you get modern test engineering.
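The timing problem above is usually attacked with an explicit wait: poll a readiness condition until it holds or a timeout expires. This is a framework-agnostic sketch of the idea behind tools like Selenium's WebDriverWait; the "SPA finished rendering" condition is a stub standing in for a real DOM check.

```python
# Explicit-wait sketch: poll a condition instead of guessing when an
# asynchronously loading page is "ready."

import time

def wait_until(condition, timeout=5.0, interval=0.1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("page never became ready")

# Stub: an async render that "completes" after a few polls.
state = {"polls": 0}
def spa_finished_rendering():
    state["polls"] += 1
    return state["polls"] >= 3

assert wait_until(spa_finished_rendering)
```

If the condition never holds, the test fails with a timeout rather than flakily interacting with a half-rendered page.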

As such, maintenance burden and stability issues have continued to plague browser testing. Test suites have continued to be maintained manually, by humans updating code, across changes to the underlying application. Stability issues and complexity have resulted in tests that are flaky: they sometimes pass and sometimes fail for the same version of the application. These problems have meant that browser testing has remained expensive, frustrating, slow, and ultimately of limited effectiveness. In 2015, Google was officially advocating severely limiting the number of browser tests a software team should maintain. The whole process left much to be desired.

Various other tools emerged to improve the efficacy of browser testing. Record-and-play tools and other “scriptless testing” products were designed to let less technical, less expensive resources drive automated testing, though these tend to be particularly unstable. Products such as BrowserStack and Sauce Labs made cross-browser testing much more accessible: instead of maintaining a fleet of remote machines hosting various versions of various browsers, you could pay a service to do that for you.

Browser testing has had a less-acknowledged but still serious problem throughout its history: ultimately, software teams have to guess what’s important to test. No data drives the decision of what to test. Software teams get together, decide what they believe are the most important use cases on their application, and go test those. A gap always exists between what developers predict their users will do and what users actually do. This means that critical bugs which affect core user flows can be missed, even when a test suite is well-maintained. It also means that many tests cover irrelevant or rare use cases, so browser suites become even bigger, and thus slower, and harder and more expensive to maintain.

Recent Innovations in the Space
Browser testing will achieve its full potential when it runs faster, costs less to build and maintain, and (most importantly) runs consistently, throwing alerts only when it is catching bugs. A number of recent innovations in the space have brought browser testing a long way.

Crowdtesting is a process by which the software team provides test cases to the vendor, and the vendor provides a large bank of people to manually perform the scenarios. It has a few advantages: it’s easier to set up than your own automation suite, it requires less ongoing maintenance than a home-built suite, and manual testers can sometimes catch bugs that automated tests would miss. However, this approach has several drawbacks. There are a few major players in this space.

Because customers pay for each test run, more software shipped correlates directly to more money spent. While manual testers can sometimes catch bugs that automated tests would miss, they will also frequently report false positives or miss other critical bugs, due to the inexactness of a manual review by an untrained/unfamiliar resource. In addition, while the only real maintenance is updating test instructions, it still means that a resource has to be assigned to the task, continually updating and changing the test cases to prevent the test from becoming stale and outdated.

Crowdtesting is much like the American military: it was an innovation to win the last war, not the next one. The machines will provide far more resource-efficient, consistent, and easy-to-use testing products that will leave Crowdtesting as a footnote that ultimately served as a stopgap between the old way and the new way of testing.

With Machine Learning (ML)-Enabled Record-and-Play, a third-party application adds an additional layer to your own application, allowing you to build tests recording you using your software. These tests are intended to be functional through many small changes of your software, by building “models” of the test event, rather than using conventional testing hooks. This reduces test maintenance costs and significantly reduces instability. Because the tests are truly automated (rather than crowdsourced), you don’t have to pay for the cost of running each test. There are a few big players in this space and perhaps a few dozen others.

However, since it is your team developing the tests with the external application, the gap between your team’s understanding of the application and actual user behavior remains. Additionally, the tests need to be rebuilt every time there’s an appreciable change to the product, requiring attention and input from a software team. Lastly, since tests all run through the interface, if you decide to leave the service, you take no assets with you–you’re back at square one.

Ultimately, we believe the core problems of browser testing won’t get solved until machines can help with deciding what to test, in addition to helping with how to test an application. Ultimately, good browser testing exists to make sure users can do what they want to on an application, without encountering bugs. If you’re able to test what users are doing, you’ll make sure they don’t see bugs.

Autodetection/Autogeneration is where machines begin to help to decide what to test. Autodetection tooling analyzes user traffic to determine test cases that represent common flows of user behavior, and then Autogeneration automatically produces repeatable test scripts based on those test cases. This process can be repeated continuously to update a testing suite. The players in this space have emerged more recently and are fewer in number.
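A toy version of the autodetection idea: mine recorded user sessions for the most common page-to-page flows, which become candidate test cases for generation. Real products do far more (sessionization, de-duplication, parameterization); the session data here is invented.

```python
# Count identical navigation sequences across sessions; the most
# frequent sequences are the strongest candidates for core test cases.

from collections import Counter

sessions = [
    ["home", "search", "product", "checkout"],
    ["home", "search", "product", "checkout"],
    ["home", "account", "settings"],
]

flow_counts = Counter(tuple(s) for s in sessions)
most_common_flow, count = flow_counts.most_common(1)[0]
assert most_common_flow == ("home", "search", "product", "checkout")
assert count == 2
```

Autogeneration then turns each selected flow into a repeatable script, and re-running the analysis keeps the suite aligned with how usage evolves.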

The last challenge for Autodetection-driven technologies is anticipating changes to User Interfaces (UIs). Ideally, browser tests would only break when a true bug has emerged. When UIs change dramatically, even machine-driven tests will fail when the change is deployed to a testing environment. In a short time, it’s likely that these technologies will be capable of detecting and prioritizing developer behavior in pre-production environments to automatically update tests to anticipate these UI changes.

What it Means for Software Testing Today
Improvements in data science and data engineering (behind both true Machine Learning and the many “machine learning” tools masquerading as it) have unlocked a great deal of potential in reducing the cost and instability of browser tests. Machines are able to do more of the work. If we think of the history of browser testing over the past 25 years, the path has been something like this:

  1. Humans decided what to test, and humans tested it manually
  2. Humans decided what to test, and humans wrote and maintained code to have machines do the testing
  3. Humans decided what to test, and machines helped low-cost humans more quickly do the testing
  4. Humans decided what to test, and humans told smarter machines how to test with less babysitting and maintenance
  5. Machines decide what to test, and machines do the testing

Steps 4 and 5 represent the recent seismic shift in testing. We’re now seeing an emergence of a few technologies that will cut the human out of the loop completely, both in deciding what to test and translating those tests to machines. The result will be test suites that catch more bugs, update and run faster, and require no human effort to build or maintain. We suspect that in 5 years, machines will own the entire browser testing process.

The Next 5-10 Years
The processes that we’re seeing today are just the beginning. What’s happening is that we’re unleashing data to improve the way we test the web. First, we were collecting data between executions of each of our test runs to improve our expectations. Now we’re updating our expectations of what should be tested based upon the behavior of the user. These expectations and test behaviors will only get more intelligent over time as we’re ultimately simulating users. These user simulations will eventually get sufficiently accurate and intelligent that we can expect to rely on them as the primary form of web software testing. The goal is for the tools to recommend potential solutions to the problems they have seen in the past.


Who Should Determine End-to-End Test Cases?

“A knife has the purpose of cutting things, so to perform its function well it must have a sharp cutting edge. Man, too, has a function…”


In the distant (in software-years, which are much like dog years) past, a company’s development team would focus on new product code, and then a dedicated quality assurance (QA) team would write corresponding test code (including any unit tests). One of the pitfalls of this practice was that developers might get “lazy” about code quality, and might throw quality concerns “over the wall” to QA. This slowed down development and led to an ultimately antagonistic relationship between developers and QA teams, so it fell out of favor.

The “QA does QA” practice has mostly given way to moving testing into the hands of the developers themselves. Most of the time, developers now write their own unit tests and API tests. This makes sure developers take ownership of quality and thereby incentivizes them to put more focus on writing high quality code in the first place. How this is implemented varies: some teams use test-driven development (TDD) to write tests first and then build code to pass those tests. Some teams add peer code review. Some teams embed QA within dev teams to help them plan for quality at the onset. These practices are similarly meant to keep developers from building tests that are easy to pass.

The swing from QA-driven test-writing to developer-driven test-writing has, for some teams, crept into browser or end-to-end (E2E) testing. Contemporary dev teams assign E2E test-writing either to developers or to QA automation engineers, and different leaders can have strong opinions on who should really be taking point, us included.

At ProdPerfect, we believe that developers are the right choice to take point on writing unit and API tests, but making the right tradeoffs in what should be a core E2E test is nearly impossible for them. Developers have a strong sense (through the context of developing them) of the intent of unit-level and API-level code, so they know best how to reliably test their own code. But it’s a stretch to expect developers to bear the burden of comprehensive end-to-end testing themselves. Adequately testing the full application for the myriad probable user journeys involves monitoring, analyzing, and accounting for complex interactions between many code modules. Then, from that set of possibilities, they must accurately choose the right set that deploys developer time, server resources, server time, and stated outcomes to balance business objectives. And they must re-evaluate those choices on a regular basis. Developers typically focus on small slices of an application at a time. To expect developers to fully bear the burden of comprehensive E2E testing is asking them to understand the entire universe of the application’s development and usage forwards and backwards in time. Truly no one is positioned to do so.

Developers are good at doing what they’re hired to do: developing code to innovate product—and even testing that code—and should remain primarily focused on doing so. It’s a waste of resources to task developers with end-to-end testing, and they’re not positioned to do it best.

Instead, due to the complexity of effective end-to-end testing, the ideal person to determine and execute end-to-end user tests is someone whose core expertise and focus is in understanding the entire user journey and the outcomes thereof, not someone who is asked to tack on end-to-end testing as an afterthought. E2E testing should be driven by an independent group with a mandate to focus on it and the time invested to maintain it: this can be the product team or it can be QA as a whole (a QA analyst, QA automation engineering team, etc.). These groups can, with the help of tools and data, wrap their arms around the different user journeys, develop test cases for them, write tests designed to catch bugs at the user journey level, and maintain them over time. This level of testing doesn’t require intimate understanding of the underlying modules of code behind the application; it’s instead meant to ensure that users can always use the application as they want to. Software teams should leave testing of lower levels of the application to those lower levels of testing—unit and API/integration testing.

Ideally, QA teams should not simply be tasked with guessing at how users are using their applications. They can and should employ advanced product analytics to understand these user journeys and how they are evolving with time. In this way, focused testers are then able to fully understand which test cases are most relevant and write the best corresponding tests to ensure quality without bloating the testing suite.

In any successful business, different roles are designed to allow talented individuals to specialize and focus. Whether it’s specializing in sales operations vs. selling and closing, marketing content vs. advertising strategy, or development and testing, specialization allows teams to operate with focus and excellence. With E2E, it follows that a specialized and complex need should be filled by a designated individual with a specialized focus and toolset in order to get the highest quality result without wasting resources.

End-to-End or Unit Testing: Which Tests for Which Bugs?

“Not even Ares battles against necessity.”


When designing a holistic testing strategy for any application, the QA strategist has to first answer, “Which testing methods should I utilize for which types of bugs?” Some bugs are rendering errors, some involve the application returning the wrong data, others are functionality issues (the user simply can’t do what they intend to), and others are application-level/browser errors, each requiring some specificity in approach.

Often, businesses rely too exclusively on either end-to-end (browser-level) testing, or too exclusively on unit testing, without properly accounting for different kinds of bugs. Some businesses try to bake various data validation checks into their end-to-end testing. At ProdPerfect, we’ve heard business leaders suggest, “if all of your unit tests are well-written, there’s no need for end-to-end testing.” Both of these approaches, though, are flawed. QA teams need both end-to-end and unit testing, and they should be applied differently.

Unit Testing vs. End-to-End Testing

Unit testing checks code blocks (typically with a black-box mindset): variable X is the input; variable Y should be the output. Unit testing efficiently checks for the functions or calculations that provide resulting data—a numerical value, a text string, etc. End-to-end testing tests all layers of the application at once; it’s best-suited to make sure buttons, forms, changes, links, and generally entire workflows function without problems.

Here’s an example to illustrate the proper approach to testing decision-making:

“Llamas R Us” is an E-Commerce company, selling llamas online by subscription. Their software includes a sales tax calculator. When a customer selects their ship-to location during the checkout process, the sales tax calculator automatically calculates the tax and applies it to the total cost of the customer’s monthly llama purchase. What needs to be tested here is that the right sales tax is applied to the llamas being purchased, depending on which state the would-be llama farmer is living in.

Llamas R Us may be inclined to assign testers to manually complete the checkout process, select different locations, and calculate whether the correct sales tax is being applied. To perform these tests, they may consider using automated end-to-end testing, writing unique code for each different state’s sales tax to ensure full test coverage of the calculator feature.

However, though it seems comprehensive, this system is not actually optimal. For one, it’s inefficient, as it’s difficult for end-to-end testers, even with automated tests, to read the resulting data from such a web page and verify it—humans are imprecise and machines aren’t great at scraping raw data off of a web page. It’s also simply inefficient to test a checkout process 50 times for 50 different states. Doing so would cause testing time to balloon and testing efficiency to collapse.

Data Validation vs. Application Functionality

At its heart, testing the sales tax calculator is a data validation test. The hypothetical calculator is reliant on a particular set of inputs (states, countries, etc.) to generate a particular set of outputs (the sales tax multipliers). Thus, it is an ideal candidate for unit testing instead of end-to-end testing. In this case, Llamas R Us should create individual unit tests to verify the functionality of their sales tax calculator. These tests run much faster, require less work to set up, and don’t need to be changed each time the user interface is tweaked—they should always work until the sales tax calculator code is itself changed.
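The sales-tax check above reads naturally as unit tests: the calculator is pinned to its input/output contract directly, with no browser involved. The rates and function below are made up for illustration, not real tax figures.

```python
# One fast assertion per state replaces a full checkout walk-through.
# These tests keep passing through UI redesigns, until the calculator
# code itself changes.

SALES_TAX = {"TX": 0.0625, "CA": 0.0725, "OR": 0.0}

def total_with_tax(subtotal: float, state: str) -> float:
    return round(subtotal * (1 + SALES_TAX[state]), 2)

assert total_with_tax(100.0, "TX") == 106.25
assert total_with_tax(100.0, "CA") == 107.25
assert total_with_tax(100.0, "OR") == 100.00
```

A single E2E test can then confirm that the checkout page displays whatever total the calculator returns, leaving the 50-state matrix to the unit suite.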

End-to-end testing, automated or otherwise, is ideal for testing the functionality of an application, rather than the data being sent to the user. For Llamas R Us and their E-Commerce application, end-to-end testing is ideal for ensuring that a buyer can add products to their cart, can navigate shopping categories, and can access product details, images, and reviews. In this case, Llamas R Us isn’t testing to make sure the right data emerges, but that the workflow can be completed, consistently, as the application changes. Because of the various application layers and the many interacting blocks of code being tested simultaneously, end-to-end testing is efficient and invaluable as a tool for testing features like these designed for the usability of a web application. Simply testing at the unit level will never provide the full picture of whether or not the whole application works together. The reality is, you never really know it works until you see it working.

Data validation questions and raw functionality questions are very different issues which must be approached with different kinds of tests. The simplest answer to the question, “Which testing methods should I utilize for which types of bugs?” is that if you’re testing something which produces a given (and by its nature predictable) set of output data based on a given set of input data, unit testing is likely the most efficient. For testing the stability and functionality of a feature or workflow, end-to-end testing is likely best.

What is Regression Testing and Why is It Important?

“An ounce of prevention is worth a pound of cure.”  -Benjamin Franklin

This article is for everyone that asks, “why would you want to keep testing the features that are already live? Shouldn’t we focus on testing new features?” Those of you who already know the importance of testing both can probably focus on other content instead.

Testing new features that go live is certainly critical to making sure your application works. Brand new features should be tested extensively, not only for functionality, but also for user experience, conversion (as appropriate), performance, etc.

Your current feature set needs to be tested for regressions. A regression is what it sounds like if you took statistics in college: your application regresses from its current state to something worse. It is a deviation from the expected state. This happens in one of two ways:

  1. You’re maintaining, altering, fixing, or improving a feature (rather than introducing a new one) and break it.
  2. You’re changing just about anything and end up breaking something completely different in the application.

The first form of regression is fairly obvious; the second can be a head-scratcher. The short version of why this can happen: almost any application is deeply interconnected. There’s a concept called DRY – “Don’t Repeat Yourself.” Good developers don’t copy code; rather, they make that code accessible to all features that touch it. Any area of an application depends on many others to function properly. If you break something while working on inventory management, you might wreck checkout. Updating infrastructure might break login. It’s not that every change can truly affect every part of the application, but any change might impact multiple parts of the application if a bug is introduced.

Regression testing, therefore, tests to make sure any of these regressions are caught before they make it to production. Generally, you run a suite of regression tests by testing every unit of code, every API, and core user pathways across the application at the browser level, with every build. Some teams can’t run their regression suite fast enough (or are using manual or crowdsourced testing, which has an incremental cost per test run because you’re throwing extra bodies at the problem), so they run their browser regression testing suite on a less frequent schedule. The obvious downside is that you’re more likely to let regressions into production before they are caught.

Automation vs. Manual Testing

If you’re considering automating your browser regression suite, start with features that are least likely to be changing rapidly: building automation takes time to pay off, so you want to make sure that these tests are going to be run more than a few times before they need to be re-written. For brand new or rapidly-evolving features, manual testing may be your most efficient approach.

When you do decide to automate: automated browser regression testing suites are built in a number of ways. The most basic is scripting them in frameworks such as Selenium, Cypress, or Capybara, or with JavaScript in TestCafe. They can also be built using record-and-play tools such as the Selenium and TestCafe IDEs. Machine Learning is making record-and-play stronger and less time-intensive, and will eventually allow record-and-play to drive itself using web traffic data.