Developing end-to-end (E2E) test cases poses a substantial challenge compared to writing unit and API test cases. Blocks of code and APIs can be tested against a well-defined, limited, and predetermined set of business rules. Test-driven-development (TDD) techniques can empower developers to write relevant tests alongside their code.
Developing E2E tests requires a fully different approach. This level of testing is meant to replicate the behavior of a user interacting with many blocks of code and multiple APIs simultaneously. Below we recommend a process that will help you build accurate, effective test cases for your E2E testing regimen. Note that we will not cover test scripting here, only test case development.
There are four considerations, each of which will be explored in turn:
- How to Scope End-to-End Testing
- What Bugs to Target
- Which User Flows to Follow
- How to Write Test Cases
How to Scope E2E Testing
The goal of E2E testing is to make sure that users can use your application without running into trouble. Usually, this is done by running automated E2E regression tests against said application. One approach to choosing your scope could be to test every possible way users could use an application. This would certainly represent true 100% coverage. Unfortunately, it would also yield a testing codebase even larger than the product codebase, and a test runtime rivaling the time it took to write the build being tested.
Senior QA engineers are often relied upon to determine scope as well. Combining experience, knowledge of the codebase, and knowledge of the web app's business metrics, a QA engineer can propose tests that should stop your users from encountering bugs when performing high-value actions. Unfortunately, "should" is the weakness of this approach: a biased understanding of the web app, cost, and reliance on a single individual inevitably lead to bugs making their way into production.
The team should therefore test only how users are actually using the application. Doing so yields the optimal balance: thorough test coverage without excessive resources or runtime, and no reliance on an expert to predict how customers use the website. This approach is driven by user data rather than an expansive exploration of every feature option in the application. To mine that data, you'll need some form of product analytics to understand how your users currently use your application.
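As a sketch of the idea, clickstream data can be aggregated to surface the flows users actually follow most often, which are the flows worth testing first. The session data, event names, and function below are hypothetical, invented for illustration:

```python
from collections import Counter

def top_user_flows(sessions, limit=3):
    """Count how often each distinct sequence of events occurs across
    recorded sessions and return the most common flows."""
    counts = Counter(tuple(session) for session in sessions)
    return counts.most_common(limit)

# Hypothetical clickstream: each session is an ordered list of events.
sessions = [
    ["home", "product", "add_to_cart", "checkout"],
    ["home", "product", "add_to_cart", "checkout"],
    ["home", "search", "product"],
    ["home", "product", "add_to_cart", "checkout"],
]
print(top_user_flows(sessions, limit=2))
```

A real product analytics pipeline would group raw events into sessions and normalize them first, but the ranking step is the same: the flows at the top of the list become your core test cases.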
What Bugs to Target
E2E testing should not replace or substantially repeat the efforts of unit and API testing. Unit and API testing should test business logic. Generally, a unit test ensures that a block of code always results in the correct output variable(s) for given input variable(s). An API test ensures that for a given call, the correct response occurs. E2E tests should instead target the bugs that only surface when these pieces work together in front of a user: broken flows, integration failures, and regressions in the interface.
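For contrast, here is the kind of check that belongs at the unit level, not in an E2E suite. The function and values are invented for illustration; the point is that a given input must always produce the expected output:

```python
def apply_discount(price, percent):
    """Hypothetical business logic: discount a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests pin the input/output contract of one block of code.
assert apply_discount(100.0, 20) == 80.0
assert apply_discount(19.99, 0) == 19.99
```

An E2E test would never assert on this arithmetic directly; it would instead verify that a user who applies a coupon sees the discounted total at checkout.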
Which User Flows to Follow
The risk of a bloated test suite, beyond high maintenance cost, is runtime that grows too long for tests to be run in the deployment process for each build. If you keep runtimes to only a few minutes, you can test every build and provide developers immediate feedback about what they may have broken, so they can rapidly fix the bug.
To prevent test suite bloat, we suggest splitting your test cases into two groups: core and edge.
Core test cases are meant to reflect your core features—what people are doing repeatedly. These are usually associated with revenue or bulk usability; a significant number of users are doing them, so if they fail you’re in trouble.
Edge cases are the ways people use the application that are unexpected, unintended, or rare, but that might still break the application in an important way. The testing team will need to pick and choose which of these cases to include based on business value. Be wary of writing an edge-case test for every edge bug that occurs: endlessly playing "whack-a-mole" can bloat the edge test suite and make it excessively resource-intensive to maintain.
If runtime allows, we recommend running your core and edge tests with every build. Failing that, we recommend running core feature tests with every build, and running the longer-runtime edge case tests occasionally, in order to provide feedback on edge case bugs at a reasonable frequency.
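The core/edge scheduling policy above can be sketched as a small function. The test names and the every-tenth-build cadence are hypothetical placeholders; the real cadence depends on your runtime budget:

```python
CORE_TESTS = ["test_login", "test_search", "test_checkout"]
EDGE_TESTS = ["test_expired_coupon", "test_unicode_address"]

def suites_for_build(build_number, edge_every=10):
    """Core tests run on every build; the slower edge tests are folded
    in periodically so edge bugs still get feedback at a usable cadence."""
    tests = list(CORE_TESTS)
    if build_number % edge_every == 0:
        tests += EDGE_TESTS
    return tests

assert suites_for_build(7) == CORE_TESTS
assert set(EDGE_TESTS) <= set(suites_for_build(20))
```

In practice the same split is often expressed with test tags or markers in your framework rather than hand-rolled lists, but the selection logic is identical.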
Who Is Responsible for Determining End-to-End Test Cases?
The “QA does QA” practice has mostly given way to moving testing into the hands of the developers themselves. Most of the time, developers now write their own unit tests and API tests. This makes sure developers take ownership of quality and thereby incentivizes them to put more focus on writing high quality code in the first place. How this is implemented varies:
- Some teams use test-driven development (TDD) to write tests first and then build code to pass those tests.
- Some teams add peer code review.
- Some teams embed QA within dev teams to help them plan for quality at the onset.
These practices are similarly meant to keep developers from building tests that are easy to pass.
At ProdPerfect, we believe that developers are the right choice to take point on writing unit and API tests: having written the code, they have a strong sense of the intent of unit-level and API-level logic, so they know best how to test it reliably. Making the right tradeoffs about what belongs in a core E2E test suite, however, is nearly impossible for them.
It's a stretch to expect developers to bear the burden of comprehensive end-to-end testing themselves. Developers typically focus on small slices of an application at a time; asking them to fully own comprehensive E2E testing is asking them to understand the entire universe of the application's development and usage, forwards and backwards in time. No one is positioned to do so.
Due to the complexity of effective end-to-end testing, the ideal person to determine and execute end-to-end user tests is someone whose core expertise and focus is understanding the entire user journey and its outcomes, not someone asked to tack on end-to-end testing as an afterthought. E2E testing should be driven by an independent group with a mandate to focus on it and the time invested to maintain it: this can be the product team, or it can be QA as a whole (a QA analyst, a QA automation engineering team, etc.).
These groups can, with the help of tools and data, wrap their arms around the different user journeys, develop end-to-end test cases for them, write tests designed to catch bugs at the user-journey level, and maintain those tests over time. This level of testing doesn't require intimate understanding of the underlying modules of code behind the application; it's instead meant to ensure that users can always use the application as they want to. Software teams should leave testing of lower levels of the application to the lower levels of testing: unit and API/integration testing.
Ideally, QA teams should not simply be tasked with guessing at how users are using their applications. They can and should employ advanced product analytics to understand these user journeys and how they evolve over time. In this way, focused testers are able to fully understand which E2E test cases are most relevant and write the best corresponding tests to ensure quality without bloating the testing suite.
How to Write Test Cases
Every test case, whether it is core or edge, should focus on a full and substantial user experience. At the end of a passing test, you should be certain that the user will have completed a given task to their satisfaction.
Each test will be a series of interactions with elements on the page: a link, a button, a form, a drawing element, and so on. For each element, the test should validate that it exists and that it can be interacted with. Between interactions, the test writer should look for meaningful changes in the DOM that indicate whether the application has responded in the expected way. Finally, the data in the test (an address, a product selected, some other string or variable entered into the test) should be used to ensure that the application transforms or returns that data in the way it's expected to.
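The exists/interact/verify rhythm described above can be shown in a minimal sketch. The `FakePage` class below is a stand-in for a browser page so the pattern runs without a real driver; the selector and address are invented:

```python
class FakePage:
    """Stand-in for a browser page so the pattern runs without a driver."""
    def __init__(self, elements):
        self.elements = dict(elements)   # selector -> current value

    def exists(self, selector):
        return selector in self.elements

    def fill(self, selector, value):
        if not self.exists(selector):
            raise AssertionError(f"missing element: {selector}")
        self.elements[selector] = value  # the DOM reflects the interaction

def run_form_step(page, selector, value):
    # 1. Validate the element exists before interacting with it.
    assert page.exists(selector), f"element not found: {selector}"
    # 2. Interact with it.
    page.fill(selector, value)
    # 3. Verify the page now holds the data exactly as entered.
    assert page.elements[selector] == value

page = FakePage({"#address": ""})
run_form_step(page, "#address", "221B Baker St")
```

With a real framework, `exists` and `fill` map onto element lookups and send-keys calls, and the step-3 assertion reads the value back from the DOM.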
If you build E2E test cases with this process in mind, you will achieve high-fidelity continuous testing of your application without pouring unnecessary or marginally valuable hours into maintaining the suite. You will be able to affordably ensure that users can use your application in the way they intend.
Examples of E2E Test Cases
Developing E2E test cases involves creating user journeys through an app that are broken down into several discrete steps. These tests vary depending on the use case and what is being tested, but in general E2E testing is used to verify that the workflow from start to finish performs as expected.
An example E2E test case that verifies email functionality might look like this:
- Type the URL of the email homepage into the address bar and submit
- Find and click the ‘Login’ link on the homepage
- Enter a valid username and password into the form and submit
- Access the inbox and verify a list of emails is present
- Open the spam folder and verify a list of emails is present
- Click ‘compose’ to create and send a new email
- Open the sent folder and verify the new email is present
- Click ‘Logout’ and check the user is logged out and sent to the logout page
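The steps above can be sketched as executable checks against a stand-in for the email service. `FakeMailApp`, its credentials, and its folder contents are all invented so the flow can run end to end without a browser:

```python
class FakeMailApp:
    """Minimal stand-in for the email service; a real E2E test would
    drive a browser against the live application instead."""
    def __init__(self):
        self.logged_in = False
        self.inbox = ["welcome message"]
        self.spam = ["dubious offer"]
        self.sent = []

    def login(self, user, password):
        self.logged_in = (user, password) == ("alice", "secret")
        return self.logged_in

    def compose(self, to, subject):
        assert self.logged_in, "must be logged in to send"
        self.sent.append(subject)

    def logout(self):
        self.logged_in = False

app = FakeMailApp()
assert app.login("alice", "secret")   # log in with valid credentials
assert len(app.inbox) > 0             # inbox shows a list of emails
assert len(app.spam) > 0              # spam folder shows a list of emails
app.compose("bob@example.com", "hello")
assert "hello" in app.sent            # sent folder contains the new email
app.logout()
assert not app.logged_in              # user ends the session logged out
```

Note that each assertion corresponds to one verification step in the list above; a browser-based version would make the same checks against page elements rather than object attributes.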
Another example of an E2E test case that tests whether new users can add a product to their cart and buy it through an eCommerce app might look like this:
- Type the URL of the eCommerce site into the address bar and submit
- Find and click a product link on the homepage
- Click on the ‘add to cart’ button
- Verify the ‘view cart’ link appears and click on it
- Navigate through the checkout process and input name, credit card, address and other details
- Click ‘submit’
- Verify purchase succeeded and the user receives an email confirmation
How to Automate End to End Testing
Automating E2E testing is important for saving time without sacrificing test quality or coverage. A regression suite of basic sanity tests (verifying users can log in, sign up, view content, and so on) helps verify that new commits do not introduce breaking changes to previously working code. However, constantly running these tests manually is tedious and time-consuming, so automating them saves time and money and prevents tester burnout.
Automating testing involves picking a test automation framework to write your tests in. There are a number of popular tools available for this, depending on your software and use case. Selenium is probably one of the oldest and best-known tools of its kind, but there are plenty of others available, such as Cucumber, Taiko, Puppeteer and more. Each of these tools offers a programmatic approach to writing E2E test cases, which essentially follow the same principles as the examples listed above. However, because they are programmatic, they can easily be integrated into part of an automated build/deploy process.
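Once tests are programmatic, the build/deploy integration reduces to a gate on their results. A minimal sketch, with hypothetical test names; in practice your CI system would get this signal from the test runner's exit code:

```python
def ci_gate(results):
    """Given {test_name: passed} results from an automated E2E run,
    decide whether the build may proceed to deploy and report failures."""
    failures = sorted(name for name, passed in results.items() if not passed)
    return {"deploy": not failures, "failures": failures}

print(ci_gate({"login": True, "checkout": False, "signup": True}))
```

The short-runtime core suite makes this gate viable on every commit; a slower edge suite would report failures asynchronously rather than block the deploy.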
Of course, these tools still rely on testers to identify what needs testing and then write test cases for it. Knowing what to test is a challenge, and it is very easy to get lost writing endless test cases. Test cases take time to write and time to run, and without data-driven decision making it's very difficult to know how much value any particular test case brings to a test suite.
More advanced tools and services, such as ProdPerfect, help to prioritize your test cases by using product analytics data from an application clickstream to analyze and understand real user behavior. Problem areas can then be identified and E2E tests can be developed based on this information, to provide data-driven E2E test cases that match real-world use cases.