The seismic shift in how we test software

As it’s been since ARPANET, functional web software today is mostly shipped by luck, duct tape, or sheer will. Ask any engineer with tenure at an ecommerce company and they can tell you about the last time they broke checkout, the defining feature of doing commerce online.

Every year we have groundbreaking technologies that change the entire game of what kinds of software we develop: virtualization, the cloud, mobile, SOA, REST, NoSQL, ML/AI, microservices, SPAs, serverless, the list goes on. But except for the very bleeding edge of talent and investment, software has been tested pretty much the same way over the last 20 years: a mix of human manual testing and a shallow layer of test automation.

As late as 2018, even sophisticated businesses struggle with testing: 42% of software companies are still testing entirely manually, and only 32% have become mostly automated, according to recent research sponsored by the testing companies Sauce Labs and SmartBear. 75% of testing teams aren’t keeping up with software development: code is being deployed only partially tested. The majority of testing teams don’t even have a firm practice for test management; they aren’t even certain what they’re testing. The bugs that are not caught by testing cost the globe literally trillions of dollars per year. This is true despite the fact that 25% of software budgets are allocated to QA and testing.

We continually hope to be building better software year over year, but the bugs have been inescapable. Humans are imperfect and inconsistent; that may be beautiful in its way, but it’s also why you’d never build a human pyramid more than a couple of layers high, lest you invite catastrophe. We’re slower to the task than machines, we behave inconsistently, and communication breaks down at sufficient scale or across multiple layers of an organization. With any manually tested system, bugs, project failures, and high development costs are the expectation and the norm.

However, there’s a light at the end of this tunnel. The last two years have seen a new breed of tools appear that have the chance to change the game. We ourselves roll our eyes when vendors throw the words machine learning or artificial intelligence at their own products. But suffice it to say, the tools are getting smarter in a non-trivial way.

The Rise and Fall and Rise Again of Browser Automation
As an industry, we’ve tried and failed to get away from using a browser to test web applications. We’ve tried unit testing, we’ve tried API testing, we’ve tried behavior-driven testing, we’ve tried test-driven development, and we’ve tried just monitoring. None of these are necessarily bad, but it’s pretty rare that they provide a truly conclusive test that the product you are shipping will work for its users. Every one of those approaches tests a limited understanding of the application, and there will always be gaps in the testing that let significant bugs through. Good teams have as many nets as possible so that each testing system’s blind spots are hopefully covered by something else, but it’s impossible to have certainty. And the worst blind spots are usually where the integrations and units come together. At the end of the day, a web application is exercised by users through a browser: the only way to really know if it will work is by testing as much as possible through a browser, preferably as close as possible to how real users exercise the system.

For all the advancements in other tooling and successor frameworks over the last 15 years, the standard in Software Quality Assurance Engineering is Selenium. Originally developed in 2004, it was built on a simple idea: rather than ask a human to manually click through a browser to test features of a web application, you could write code that would do this work. Originally a framework to remote-control a browser, it has evolved into an abstraction for controlling, or deeply integrating into, a wide variety of browser-like systems, as well as large-scale collections of remote machines that can run the browser.
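
To make that concrete, here’s a minimal sketch of what “code that clicks through the browser” can look like, using Selenium’s JavaScript/TypeScript bindings. The URL, element locators, and credentials are hypothetical placeholders rather than a reference implementation.

```typescript
import { Builder, By, until } from "selenium-webdriver";

// Minimal sketch: drive a real browser through a login flow.
// The URL and element locators are hypothetical placeholders.
async function testLogin(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://example.com/login");
    await driver.findElement(By.id("email")).sendKeys("user@example.com");
    await driver.findElement(By.id("password")).sendKeys("correct horse battery staple");
    await driver.findElement(By.css("button[type='submit']")).click();

    // Wait for the post-login page and assert on something a user would actually see.
    const greeting = await driver.wait(
      until.elementLocated(By.css(".account-greeting")),
      10_000
    );
    const text = await greeting.getText();
    if (!text.includes("Welcome")) {
      throw new Error(`Login appears broken: got "${text}"`);
    }
  } finally {
    await driver.quit();
  }
}

testLogin().catch((err) => {
  console.error(err);
  process.exit(1);
});
```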

While the initial principle is simple—automate a browser—the results can be complex. Selenium’s evolution has spawned incredible layers of abstraction to handle all the different varieties of browser behavior and usage one could dream of, as well as the environments where the browser can run and abstractions to fit how developers want to write their tests. The dirty secret is that even a “simple” browser test isn’t just testing one unified component. Automated browser tests are almost always exercising a free-standing environment: a network setup, a server setup, a database, and external services for each test system involved. Since each of those components has both a constantly evolving state and its own update process, there are many independent variables that need to be controlled. The most frustrating impact is on time-based behavior. Loading a web page kicks off a variety of synchronous and asynchronous processes, which makes deciding when to call a page fully loaded and ready to test tricky, particularly with the rise of single-page applications. Mash all these frustrations together and you get modern Test Engineering.
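
The timing problem in particular is worth illustrating. Here is a hedged sketch of the two common approaches, again using Selenium’s JavaScript/TypeScript bindings; the selector and timeouts are made-up examples.

```typescript
import { By, until, WebDriver } from "selenium-webdriver";

// Two ways to decide a page is "ready." Selectors and timings are
// illustrative assumptions, not values from any real application.

// Brittle: a fixed sleep guesses how long asynchronous work will take.
// Too short and the test flakes; too long and every run wastes time.
async function waitTheFragileWay(driver: WebDriver): Promise<void> {
  await driver.sleep(5000);
}

// Better: wait for an application-specific signal that the page (or SPA
// view) has finished rendering before interacting with it.
async function waitForResults(driver: WebDriver): Promise<void> {
  const results = await driver.wait(
    until.elementLocated(By.css("[data-testid='results']")),
    15_000
  );
  await driver.wait(until.elementIsVisible(results), 15_000);
}
```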

As such, maintenance burden and stability issues have continued to plague browser testing. Test suites have continued to be maintained manually, by humans updating code, across changes to the underlying application. Stability issues and complexity have resulted in tests that are flaky: they sometimes pass and sometimes fail for the same version of the application. These problems have meant that browser testing has remained expensive, frustrating, slow, and ultimately of limited effectiveness. In 2015, Google was officially advocating severely limiting the number of browser tests a software team should maintain. The whole process left much to be desired.

Various other tools emerged to improve the efficacy of browser testing: record-and-play tools and other “scriptless testing” products were designed to allow less technical and less expensive resources to drive automated testing, though these tend to be particularly unstable. Products such as BrowserStack and Sauce Labs made cross-browser testing much more accessible: instead of maintaining a fleet of remote machines hosting various versions of various browsers, you could pay a service to do that for you.

Browser testing has also had a less-acknowledged but serious problem throughout its history: ultimately, software teams have to guess what’s important to test. No data drives the decision of what to test. Software teams get together, decide what they believe are the most important use cases in their application that require testing, and go test those. A gap always exists between what developers predict their users will do and what users actually do. This means that critical bugs affecting core user flows can be missed, even when a test suite is well-maintained. It also means that many tests cover irrelevant or rare use cases, so browser suites become even bigger, and thus slower, harder, and more expensive to maintain.

Recent Innovations in the Space
Browser testing will achieve its full potential when it runs faster, costs less to build and maintain, and (most importantly) runs consistently, throwing alerts only when it is catching bugs. A number of recent innovations in the space have brought browser testing a long way.

Crowdtesting is a process by which the software team provides test cases to the vendor, and the vendor provides a large bank of people to manually perform the scenarios. It has a few advantages: it’s easier to set up than your own automation suite, it requires less ongoing maintenance than a home-built suite, and manual testers can sometimes catch bugs that automated tests would miss. There are a few major players in this space. However, the approach has several drawbacks.

Because customers pay for each test run, more software shipped correlates directly to more money spent. While manual testers can sometimes catch bugs that automated tests would miss, they will also frequently report false positives or miss other critical bugs, due to the inexactness of a manual review by an untrained/unfamiliar resource. In addition, while the only real maintenance is updating test instructions, it still means that a resource has to be assigned to the task, continually updating and changing the test cases to prevent the test from becoming stale and outdated.

Crowdtesting is much like the American military: it was an innovation to win the last war, not the next one. The machines will provide far more resource-efficient, consistent, and easy-to-use testing products that will leave Crowdtesting as a footnote that ultimately served as a stopgap between the old way and the new way of testing.

With Machine Learning (ML)-Enabled Record-and-Play, a third-party application adds a layer on top of your own application, allowing you to build tests by recording yourself using your software. These tests are intended to keep working through many small changes to your software, because they build “models” of each test event rather than relying on conventional testing hooks. This reduces test maintenance costs and significantly reduces instability. Because the tests are truly automated (rather than crowdsourced), you don’t have to pay for the cost of running each test. There are a few big players in this space and perhaps a few dozen others.
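
Vendors don’t publish their internals and we won’t speculate about any specific product, but the general idea of a test-step “model” can be sketched: instead of pinning an action to one brittle selector, record several attributes of the target element and score candidates against them at replay time. Every attribute, weight, and threshold below is an illustrative assumption.

```typescript
// Illustrative sketch only: one way a recorded element "model" might be
// matched at replay time. Attributes, weights, and the threshold are
// assumptions, not a description of any vendor's implementation.
interface ElementModel {
  tag: string;          // e.g. "button"
  text?: string;        // visible label at record time
  idFragment?: string;  // part of the id, which may be auto-generated
  formHint?: string;    // text expected somewhere in the enclosing form
}

function scoreCandidate(model: ElementModel, el: Element): number {
  let score = 0;
  if (el.tagName.toLowerCase() === model.tag) score += 2;
  if (model.text && el.textContent?.trim() === model.text) score += 3;
  if (model.idFragment && el.id.includes(model.idFragment)) score += 2;
  if (model.formHint && el.closest("form")?.textContent?.includes(model.formHint)) score += 1;
  return score;
}

// Pick the best-scoring element: a small UI change (a renamed id, a moved
// button) weakens one signal but usually leaves enough others to match.
function findByModel(model: ElementModel, doc: Document): Element | null {
  let best: Element | null = null;
  let bestScore = 0;
  for (const el of Array.from(doc.querySelectorAll(model.tag))) {
    const s = scoreCandidate(model, el);
    if (s > bestScore) {
      best = el;
      bestScore = s;
    }
  }
  return bestScore >= 4 ? best : null; // threshold is an arbitrary example
}
```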

However, since it is your team developing the tests with the external application, the gap between your team’s understanding of the application and actual user behavior remains. Additionally, the tests need to be rebuilt every time there’s an appreciable change to the product, requiring attention and input from a software team. Lastly, since tests all run through the interface, if you decide to leave the service, you take no assets with you–you’re back at square one.

We believe the core problems of browser testing won’t get solved until machines can help with deciding what to test, in addition to helping with how to test an application. Ultimately, good browser testing exists to make sure users can do what they want to on an application without encountering bugs. If you’re able to test what users are actually doing, you’ll make sure they don’t see bugs.

Autodetection/Autogeneration is where machines begin to help to decide what to test. Autodetection tooling analyzes user traffic to determine test cases that represent common flows of user behavior, and then Autogeneration automatically produces repeatable test scripts based on those test cases. This process can be repeated continuously to update a testing suite. The players in this space have emerged more recently and are fewer in number.
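
The source doesn’t describe how any particular vendor implements this, but the core of autodetection can be sketched in a few lines: group recorded traffic by session, reduce each session to its sequence of page visits, and surface the sequences that recur often enough to be worth testing. The event shape and the threshold are assumptions for illustration.

```typescript
// Illustrative sketch of autodetection: find the most common user flows in
// traffic data so they can become candidate test cases. The event shape and
// threshold are assumptions, not any product's real data model.
interface PageEvent {
  sessionId: string;
  path: string; // e.g. "/cart", "/checkout" (assumed to be in visit order)
}

function topUserFlows(events: PageEvent[], minSessions: number): string[][] {
  // 1. Rebuild each session's ordered sequence of page visits.
  const sessions = new Map<string, string[]>();
  for (const e of events) {
    const seq = sessions.get(e.sessionId) ?? [];
    seq.push(e.path);
    sessions.set(e.sessionId, seq);
  }

  // 2. Count how many sessions share an identical sequence.
  const counts = new Map<string, number>();
  for (const seq of sessions.values()) {
    const key = seq.join(" > ");
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }

  // 3. Keep flows seen in enough sessions; these become candidate E2E tests.
  return Array.from(counts.entries())
    .filter(([, n]) => n >= minSessions)
    .sort((a, b) => b[1] - a[1])
    .map(([key]) => key.split(" > "));
}
```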

The last challenge for Autodetection-driven technologies is anticipating changes to User Interfaces (UIs). Ideally, browser tests would only break when a true bug has emerged. When UIs change dramatically, even machine-driven tests will fail when the change is deployed to a testing environment. In a short time, it’s likely that these technologies will be capable of detecting and prioritizing developer behavior in pre-production environments to automatically update tests to anticipate these UI changes.

What it Means for Software Testing Today
Improvements in data science and data engineering (behind both true Machine Learning and the many “machine learning” tools masquerading as it) have unlocked a great deal of potential for reducing the cost and instability of browser tests. Machines are able to do more of the work. If we think of the history of browser testing over the past 25 years, the path has been something like this:

  1. Humans decided what to test, and humans tested it manually
  2. Humans decided what to test, and humans wrote and maintained code to have machines do the testing
  3. Humans decided what to test, and machines helped low-cost humans more quickly do the testing
  4. Humans decided what to test, and humans told smarter machines how to test with less babysitting and maintenance
  5. Machines decide what to test, and machines do the testing

Steps 4 and 5 represent the recent seismic shift in testing. We’re now seeing an emergence of a few technologies that will cut the human out of the loop completely, both in deciding what to test and translating those tests to machines. The result will be test suites that catch more bugs, update and run faster, and require no human effort to build or maintain. We suspect that in 5 years, machines will own the entire browser testing process.

The Next 5-10 Years
The processes that we’re seeing today are just the beginning. What’s happening is that we’re unleashing data to improve the way we test the web. First, we collected data across executions of each test run to improve our expectations. Now we’re updating our expectations of what should be tested based on the behavior of the user. These expectations and test behaviors will only get more intelligent over time, because we’re ultimately simulating users. These user simulations will eventually get accurate and intelligent enough that we can rely on them as the primary form of web software testing. The goal is for the tools to recommend potential solutions to the problems they have seen in the past.

This article was first published on SDTimes.com.

Who Should Determine End-to-End Test Cases?

“A knife has the purpose of cutting things, so to perform its function well it must have a sharp cutting edge. Man, too, has a function…”

-Aristotle

In the distant (in software-years, which are much like dog years) past, a company’s development team would focus on new product code, and then a dedicated quality assurance (QA) team would write corresponding test code (including any unit tests). One of the pitfalls of this practice was that developers might get “lazy” about code quality, and might throw quality concerns “over the wall” to QA. This slowed down development and led to an ultimately antagonistic relationship between developers and QA teams, so it fell out of favor.

The “QA does QA” practice has mostly given way to moving testing into the hands of the developers themselves. Most of the time, developers now write their own unit tests and API tests. This makes sure developers take ownership of quality and thereby incentivizes them to put more focus on writing high quality code in the first place. How this is implemented varies: some teams use test-driven development (TDD) to write tests first and then build code to pass those tests. Some teams add peer code review. Some teams embed QA within dev teams to help them plan for quality at the onset. These practices are similarly meant to keep developers from building tests that are easy to pass.

The swing from QA-driven test-writing to developer-driven test-writing has, for some teams, crept into browser or end-to-end (E2E) testing. Contemporary dev teams assign E2E test-writing either to developers or to QA automation engineers, and different leaders can have strong opinions on who should really be taking point, ourselves included.

At ProdPerfect, we believe that developers are the right choice to take point on writing unit and API tests, but making the right tradeoffs about what should be a core E2E test is nearly impossible for them. Developers have a strong sense (from the context of developing it) of the intent of unit-level and API-level code, so they know best how to reliably test their own code. But it’s a stretch to expect developers to bear the burden of comprehensive end-to-end testing themselves. Adequately testing the full application for the myriad probable user journeys involves monitoring, analyzing, and accounting for complex interactions between many code modules. Then, from that set of possibilities, they must choose the right subset, balancing developer time, server resources, server time, and stated outcomes against business objectives. And they must re-evaluate those choices on a regular basis. Developers typically focus on small slices of an application at a time. To expect developers to fully bear the burden of comprehensive E2E testing is to ask them to understand the entire universe of the application’s development and usage, forwards and backwards in time. Truly, no one is positioned to do so.

Developers are good at doing what they’re hired to do: developing code to innovate product—and even testing that code—and should remain primarily focused on doing so. It’s a waste of resources to task developers with end-to-end testing, and they’re not positioned to do it best.

Instead, due to the complexity of effective end-to-end testing, the ideal person to determine and execute end-to-end user tests is someone whose core expertise and focus is understanding the entire user journey and its outcomes, not someone asked to tack on end-to-end testing as an afterthought. E2E testing should be driven by an independent group with a mandate to focus on it and the time invested to maintain it: this can be the product team, or it can be QA as a whole (a QA analyst, a QA automation engineering team, etc.). These groups can, with the help of tools and data, wrap their arms around the different user journeys, develop test cases for them, write tests designed to catch bugs at the user-journey level, and maintain those tests over time. This level of testing doesn’t require intimate understanding of the underlying modules of code behind the application; it’s instead meant to ensure that users can always use the application as they want to. Software teams should leave testing of lower levels of the application to those lower levels of testing—unit and API/integration testing.

Ideally, QA teams should not simply be tasked with guessing at how users are using their applications. They can and should employ advanced product analytics to understand these user journeys and how they are evolving with time. In this way, focused testers are then able to fully understand which test cases are most relevant and write the best corresponding tests to ensure quality without bloating the testing suite.

In any successful business, different roles are designed to allow talented individuals to specialize and focus. Whether it’s sales operations vs. selling and closing, marketing content vs. advertising strategy, or development vs. testing, specialization allows teams to operate with focus and excellence. With E2E, it follows that a specialized and complex need should be filled by a designated individual with a specialized focus and toolset in order to get the highest-quality result without wasting resources.

End-to-End or Unit Testing: Which Tests for Which Bugs?

“Not even Ares battles against necessity.”

-Sophocles

When designing a holistic testing strategy for any application, the QA strategist has to first answer, “Which testing methods should I utilize for which types of bugs?” Some bugs are rendering errors, some involve the application returning the wrong data, others are functionality issues (the user simply can’t do what they intend to), and others are application-level/browser errors, each requiring some specificity in approach.

Often, businesses rely too exclusively on either end-to-end (browser-level) testing, or too exclusively on unit testing, without properly accounting for different kinds of bugs. Some businesses try to bake various data validation checks into their end-to-end testing. At ProdPerfect, we’ve heard business leaders suggest, “if all of your unit tests are well-written, there’s no need for end-to-end testing.” Both of these approaches, though, are flawed. QA teams need both end-to-end and unit testing, and they should be applied differently.

Unit Testing vs. End-to-End Testing

Unit testing checks code blocks (typically with a black-box mindset): variable X is the input; variable Y should be the output. Unit testing efficiently checks the functions or calculations that produce resulting data—a numerical value, a text string, etc. End-to-end testing tests all layers of the application at once; it’s best-suited to making sure buttons, forms, changes, links, and entire workflows function without problems.

Here’s an example to illustrate the proper approach to testing decision-making:

“Llamas R Us” is an E-Commerce company, selling llamas online by subscription. Their software includes a sales tax calculator. When a customer selects their ship-to location during the checkout process, the sales tax calculator automatically calculates the tax and applies it to the total cost of the customer’s monthly llama purchase. What needs to be tested here is that the right sales tax is applied to the llamas being purchased, depending on which state the would-be llama farmer is living in.

Llamas R Us may be inclined to assign testers to manually complete the checkout process, select different locations, and calculate whether the correct sales tax is being applied. To perform these tests, they may consider using automated end-to-end testing, writing unique code for each different state’s sales tax to ensure full test coverage of the calculator feature.

However, though it seems comprehensive, this approach is not actually optimal. For one, it’s inefficient: it’s difficult for end-to-end testers, even with automated tests, to read the resulting data from such a web page and verify it—humans are imprecise and machines aren’t great at scraping raw data off of a web page. It’s also simply inefficient to run a checkout process 50 times for 50 different states. Doing so would cause testing time to balloon and human testing efficiency to implode.

Data Validation vs. Application Functionality

At its heart, testing the sales tax calculator is a data validation test. The hypothetical calculator is reliant on a particular set of inputs (states, countries, etc.) to generate a particular set of outputs (the sales tax multipliers). Thus, it is an ideal candidate for unit testing instead of end-to-end testing. In this case, Llamas R Us should create individual unit tests to verify the functionality of their sales tax calculator. These tests run much faster, require less work to set up, and don’t need to be changed each time the user interface is tweaked—they should always work until the sales tax calculator code is itself changed.
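
Here is a minimal sketch of what such a unit test could look like, assuming a hypothetical salesTax function keyed by state code (the rates below are placeholders, not real tax figures):

```typescript
import assert from "node:assert/strict";

// Hypothetical unit under test: maps a ship-to state to a sales tax amount.
// The rates are placeholders, not real tax figures.
const RATES: Record<string, number> = { CA: 0.0725, TX: 0.0625, OR: 0.0 };

function salesTax(subtotal: number, state: string): number {
  const rate = RATES[state];
  if (rate === undefined) throw new Error(`Unknown state: ${state}`);
  return Math.round(subtotal * rate * 100) / 100;
}

// Unit tests: pure input/output checks. No browser, server, or database
// needs to be running, so they execute in milliseconds per state.
assert.equal(salesTax(100, "CA"), 7.25);
assert.equal(salesTax(100, "OR"), 0);
assert.throws(() => salesTax(100, "ZZ"));
console.log("sales tax unit tests passed");
```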

End-to-end testing, automated or otherwise, is ideal for testing the functionality of an application, rather than the data being sent to the user. For Llamas R Us and their E-Commerce application, end-to-end testing is ideal for ensuring that a buyer can add products to their cart, can navigate shopping categories, and can access product details, images, and reviews. In this case, Llamas R Us isn’t testing to make sure the right data emerges, but that the workflow can be completed, consistently, as the application changes. Because of the various application layers and the many interacting blocks of code being tested simultaneously, end-to-end testing is efficient and invaluable as a tool for testing features like these designed for the usability of a web application. Simply testing at the unit level will never provide the full picture of whether or not the whole application works together. The reality is, you never really know it works until you see it working.
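
By contrast, a browser-level test for the same application checks that a whole workflow holds together rather than asserting on computed data. Here is a hedged sketch using Selenium’s JavaScript/TypeScript bindings; the URL and selectors are invented for the example.

```typescript
import { Builder, By, until } from "selenium-webdriver";

// Sketch of an E2E workflow test: a shopper can open a product and add it
// to the cart. The URL and selectors are hypothetical placeholders.
async function testAddToCart(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://llamas-r-us.example/catalog");
    await driver.findElement(By.css("a.product-card")).click();

    const addButton = await driver.wait(
      until.elementLocated(By.css("button.add-to-cart")),
      10_000
    );
    await addButton.click();

    // Assert that the workflow succeeded, not that any particular price,
    // tax, or total is correct (that belongs in unit tests).
    const badge = await driver.wait(
      until.elementLocated(By.css(".cart-count")),
      10_000
    );
    const count = (await badge.getText()).trim();
    if (count === "" || count === "0") {
      throw new Error("Add-to-cart flow appears broken");
    }
  } finally {
    await driver.quit();
  }
}

testAddToCart();
```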

Data validation questions and raw functionality questions are very different issues which must be approached with different kinds of tests. The simplest answer to the question, “Which testing methods should I utilize for which types of bugs?” is that if you’re testing something which produces a given (and by its nature predictable) set of output data based on a given set of input data, unit testing is likely the most efficient. For testing the stability and functionality of a feature or workflow, end-to-end testing is likely best.

What is Regression Testing and Why is It Important?

“An ounce of prevention is worth a pound of cure.”

-Benjamin Franklin

This article is for everyone who asks, “Why would you want to keep testing the features that are already live? Shouldn’t we focus on testing new features?” Those of you who already know the importance of testing both can probably focus on other content instead.

Testing new features that go live is certainly critical to making sure your application works. Brand new features should be tested extensively, not only for functionality, but also for user experience, conversion (as appropriate), performance, etc.

Your current feature set needs to be tested for regressions. A regression is what it sounds like if you took statistics in college: your application regresses from its current state to something worse. It is a deviation from the expected state. This happens in one of two ways:

  1. You’re maintaining, altering, fixing, or improving a feature (rather than introducing a new one) and break it.
  2. You’re changing just about anything and end up breaking something completely different in the application.

The first form of regression is fairly obvious; the second can be a head-scratcher. The short version of why this can happen: almost any application is deeply interconnected. There’s a concept called DRY – “Don’t Repeat Yourself.” Good developers don’t copy code; rather, they make that code accessible to all features that touch it. Any area of an application depends on many others to function properly. If you break something while working on inventory management, you might wreck checkout. Updating infrastructure might break login. It’s not that every change can truly affect every part of the application, but any change might impact multiple parts of the application if a bug is introduced.
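
A tiny, hypothetical illustration of why “change anything, break anything” happens: two distant features share one helper, so editing it on behalf of one caller can silently break the other.

```typescript
// Hypothetical illustration: one shared helper, two distant features.
function normalizeSku(raw: string): string {
  return raw.trim().toUpperCase();
}

// Feature A: inventory management looks items up by normalized SKU.
function findStockLevel(stock: Map<string, number>, sku: string): number {
  return stock.get(normalizeSku(sku)) ?? 0;
}

// Feature B: checkout also relies on normalizeSku to price line items.
function priceLineItem(prices: Map<string, number>, sku: string): number {
  const price = prices.get(normalizeSku(sku));
  if (price === undefined) throw new Error(`Unknown SKU: ${sku}`);
  return price;
}

// If someone "improves" normalizeSku for inventory (say, stripping dashes),
// price lookups keyed on the old format stop resolving, and checkout breaks
// even though no checkout code was touched. Regression tests exist to catch
// exactly this kind of collateral damage.
```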

Regression testing, therefore, exists to make sure any of these regressions are caught before they make it to production. Generally, you run a suite of regression tests by testing every unit of code, every API, and every core user pathway across the application at the browser level, with every build. Some teams can’t run their regression suite fast enough (or are using manual or crowdsourced testing, which carries an incremental cost per test run because you’re throwing extra bodies at the problem), so they run their browser regression testing suite on a less frequent schedule. The obvious downside is that you’re more likely to let regressions into production before they are caught.

Automation vs. Manual Testing

If you’re considering automating your browser regression suite, start with features that are least likely to be changing rapidly: building automation takes time to pay off, so you want to make sure that these tests are going to be run more than a few times before they need to be re-written. For brand new or rapidly-evolving features, manual testing may be your most efficient approach.

When you do decide to automate: automated browser regression testing suites are built in a number of ways. The most basic is scripting them in Selenium, Cypress, or Capybara, with JavaScript in TestCafe, or using other such frameworks. They can also be built using record-and-play tools such as the Selenium and TestCafe IDEs. Machine Learning is making record-and-play stronger and less time-intensive, and will eventually allow record-and-play to drive itself using web traffic data.

What is End-to-End Testing?

“You can test every servo, every wire, every component; the hardware, the firmware, the software. But you have no idea whether your robot is going to work, and whether it’s going to work in the wild, until you let it out into the wild and let it fail. And it will fail, even when it seems like every component works individually. Even well-built things get hairy when you connect them all together.”

We didn’t have a truly applicable analogy for end-to-end testing until we heard this one from a customer who had built robots before moving over to web software. Sure, there must be some sort of theoretical, Platonic ideal of testing in which exhaustive testing of components and subsystems will guarantee that the robot—or your web application—will work without needing to be run “in the wild.” But, we’ll wager, nobody’s found it yet.

The Advantage of End-to-End Testing

This is the essence of end-to-end (E2E) testing, and why it’s so important. Your web application is probably less complex than a Mars rover, but you similarly won’t know whether it’s going to work once you put all the bits together until you’ve done just that. Your unit tests will test individual blocks of code for their core functionality. API/integration tests will make sure that your “subsystems” are working as intended. But E2E tests are intended to exercise the entire application as real users would use it, under conditions as close as possible to real use.

Therefore, an E2E test will actually launch the application in a browser and interact with it in a way that tests every layer of the application: the user interface itself, the browser (and compatibility with it), the network, the server, the APIs, the codebase, any third-party integrations, and any hardware—the whole kit. As with the robot, you don’t really know how all of these components and layers will work together until they’re doing just that—working together. You therefore don’t want to be shipping changes to your application without testing it end-to-end (unless you don’t mind bugs sneaking through).

E2E tests can assume many names, in part depending on their level of rigor. They can be called browser tests, smoke tests, user acceptance tests (UATs), or (less accurately) UI tests. Typically these all mean the same thing: you launch a browser to interact with the application, and check that specific behaviors still work as intended.

There are two ways to launch this browser and interact with the whole application: the first is with a human who checks for failures by clicking around, referred to as manual testing. The second is by having a machine virtually simulate a human, using predetermined validations to check for failures, referred to as automated testing.

Data-Driven Testing

And as with our Mars rover, it’s ideal to test the application by simulating real-world usage as precisely as possible: testing the application in the same way that your users are using it, or are going to use it. This requires having data which tells you how your users are in fact using your application. Utilizing real user data is always possible when testing for regressions. But, user behavior needs to be estimated (or, frankly, guessed) when testing brand new features because you don’t have data about real usage quite yet.

Some teams might be tempted to do “kitchen sink” testing and try to test the application in every possible way, rather than in a way that reflects user behavior. We discourage this in more detail elsewhere, but the primary consideration is that E2E tests are the most expensive, least stable, and slowest tests you’ll run. Having too many will incur dramatically increased costs for steeply diminishing returns.

Limitations of E2E Testing

Finally, a word of caution: E2E testing has limitations. It’s great at testing that the application will generally function: that users can always move through workflows without errors or breakages. It’s great at ensuring that all of the gnarly bits of code are working together when a user checks out or logs in or executes an analysis. But E2E testing isn’t great (or efficient) at testing that the right data is going to be displayed for a user or stored in the application—this is a place for unit tests to thrive. E2E testing also isn’t great at showing you where in your codebase your bug is hiding—just where in the user’s journey they’re going to find that the application is broken. Finally, E2E testing isn’t great at telling you whether your page is formatted incorrectly or “looks wrong.” It can do a bit of this, but it’s a heck of an expensive way to do so. We recommend using a tool such as Percy.io for testing visual regressions instead.

In short: ignore E2E testing at your own peril, but over-relying on it won’t do you any favors, either.