“A knife has the purpose of cutting things, so to perform its function well it must have a sharp cutting edge. Man, too, has a function…”
In the distant (in software-years, which are much like dog years) past, a company’s development team would focus on new product code, and then a dedicated quality assurance (QA) team would write corresponding test code (including any unit tests). One of the pitfalls of this practice was that developers might get “lazy” about code quality, and might throw quality concerns “over the wall” to QA. This slowed down development and led to an ultimately antagonistic relationship between developers and QA teams, so it fell out of favor.
The “QA does QA” practice has mostly given way to moving testing into the hands of developers themselves. Most of the time, developers now write their own unit tests and API tests. This ensures developers take ownership of quality and thereby incentivizes them to focus on writing high-quality code in the first place. How this is implemented varies: some teams use test-driven development (TDD), writing tests first and then building code to pass them. Some add peer code review. Some embed QA within dev teams to help them plan for quality at the outset. These practices are all meant to keep developers from building tests that are easy to pass.
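To illustrate the test-first rhythm TDD describes, here is a minimal sketch in Python. The function and its behavior are invented for illustration; the point is only the order of operations: the test exists (and fails) before the implementation does.

```python
import re

# Step 1 (red): write the test first, before any implementation exists.
# Running it at this point would fail with a NameError.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"

# Step 2 (green): write just enough code to make the test pass.
def slugify(title: str) -> str:
    """Lowercase the title, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Step 3: the test now passes, and refactoring can proceed safely.
test_slugify()
```

Because the test encodes the intent before the code exists, the developer cannot (even unconsciously) write a test shaped around whatever the code happens to do.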
The swing from QA-driven to developer-driven test-writing has, for some teams, crept into browser or end-to-end (E2E) testing. Contemporary dev teams assign E2E test-writing either to developers or to QA automation engineers, and different leaders hold strong opinions on who should really be taking point, ourselves included.
At ProdPerfect, we believe developers are the right choice to take point on writing unit and API tests, but making the right tradeoffs about what belongs in a core E2E test suite is nearly impossible for them. Developers have a strong sense (from having written it) of the intent of unit-level and API-level code, so they know best how to reliably test their own code. But it’s a stretch to expect developers to bear the burden of comprehensive end-to-end testing themselves. Adequately testing the full application across the myriad probable user journeys means monitoring, analyzing, and accounting for complex interactions between many code modules. From that set of possibilities, testers must then choose the set of journeys that best balances developer time, server resources, server time, and stated outcomes against business objectives. And they must re-evaluate those choices regularly. Developers typically focus on small slices of an application at a time; expecting them to fully bear the burden of comprehensive E2E testing asks them to understand the entire universe of the application’s development and usage, forwards and backwards in time. Truly, no one is positioned to do so.
Developers are good at what they’re hired to do: developing code to innovate product—and even testing that code—and they should remain primarily focused on doing so. Tasking developers with end-to-end testing wastes resources, and they’re not positioned to do it best.
Instead, given the complexity of effective end-to-end testing, the ideal person to determine and execute end-to-end user tests is someone whose core expertise and focus is understanding the entire user journey and its outcomes, not someone asked to tack on E2E testing as an afterthought. E2E testing should be driven by an independent group with a mandate to focus on it and the time to maintain it: this can be the product team, or it can be QA as a whole (a QA analyst, a QA automation engineering team, etc.). With the help of tools and data, these groups can wrap their arms around the different user journeys, develop E2E test cases for them, write tests designed to catch bugs at the user-journey level, and maintain those tests over time. This level of testing doesn’t require intimate understanding of the code modules underlying the application; it is instead meant to ensure that users can always use the application as they want to. Software teams should leave testing of the lower levels of the application to the lower levels of testing—unit and API/integration tests.
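As a sketch of what “testing at the user-journey level” means, consider a checkout journey expressed as ordered steps against the application’s observable behavior rather than its internal modules. The `Storefront` class here is an invented in-memory stand-in, not a real app; in practice the same test shape would drive a live browser through a tool such as Selenium or Playwright.

```python
# Hypothetical in-memory storefront standing in for a real browser session.
class Storefront:
    def __init__(self):
        self.cart, self.orders = [], []

    def search(self, term):
        """User finds a product; returns matching items and prices."""
        catalog = {"mug": 12.50, "tee": 20.00}
        return {term: catalog[term]} if term in catalog else {}

    def add_to_cart(self, item, price):
        self.cart.append((item, price))

    def checkout(self):
        """User completes the purchase; cart becomes an order."""
        total = sum(price for _, price in self.cart)
        self.orders.append(total)
        self.cart = []
        return total

# The E2E test exercises the whole journey, not any one module:
# search -> add to cart -> checkout -> order confirmed.
def test_purchase_journey():
    app = Storefront()
    results = app.search("mug")
    assert results, "search should surface the product"
    app.add_to_cart("mug", results["mug"])
    total = app.checkout()
    assert total == 12.50, "order total should match the cart"
    assert app.cart == [], "cart should be emptied after checkout"

test_purchase_journey()
```

Note that the test never inspects the catalog’s data structure or the cart’s internals beyond what a user would observe; if a refactor changes those modules but the journey still works, the test still passes.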
Ideally, QA teams should not be left to guess at how users are using their applications. They can and should employ product analytics to understand user journeys and how those journeys evolve over time. Focused testers can then identify which E2E test cases are most relevant and write the best corresponding tests, ensuring quality without bloating the test suite.
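One lightweight way product analytics can rank candidate E2E test cases is to count which recorded navigation paths occur most often, so the most common journeys become the highest-priority tests. The clickstream sessions below are invented for illustration; a real pipeline would pull them from an analytics tool.

```python
from collections import Counter

# Invented clickstream data: each session is an ordered list of page visits.
sessions = [
    ["home", "search", "product", "checkout"],
    ["home", "search", "product", "checkout"],
    ["home", "account", "settings"],
    ["home", "search", "product"],
]

# Collapse each session into a journey signature and rank by frequency.
journey_counts = Counter(" > ".join(s) for s in sessions)

# The top entries are the journeys most worth covering with E2E tests.
for journey, count in journey_counts.most_common(2):
    print(f"{count}x  {journey}")
```

Real-world versions would also weight journeys by revenue or risk, and re-run the analysis periodically so the test suite tracks how usage actually shifts.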
In any successful business, different roles are designed to allow talented individuals to specialize and focus. Whether it’s sales operations vs. selling and closing, marketing content vs. advertising strategy, or development vs. testing, specialization allows teams to operate with focus and excellence. With E2E testing, it follows that a specialized and complex need should be filled by a designated specialist with the focus and toolset to deliver the highest-quality result without wasting resources.