What is Regression Testing and Why is It Important?

“An ounce of prevention is worth a pound of cure.” – Benjamin Franklin

This article is for everyone who asks, “Why would you want to keep testing features that are already live? Shouldn’t we focus on testing new features?” Those of you who already know the importance of testing both can probably skip ahead to other content.

Testing new features that go live is certainly critical to making sure your application works. Brand new features should be tested extensively, not only for functionality, but also for user experience, conversion (as appropriate), performance, etc.

Your current feature set, meanwhile, needs to be tested for regressions. A regression is just what it sounds like if you took statistics in college: your application regresses from its current state to something worse. It is a deviation from the expected state. This happens in one of two ways:

  1. You’re maintaining, altering, fixing, or improving a feature (rather than introducing a new one) and break it.
  2. You’re changing just about anything and end up breaking something completely different in the application.

The first form of regression is fairly obvious; the second can be a head-scratcher. The short version of why it happens: almost any application is deeply interconnected. There’s a concept called DRY – “Don’t Repeat Yourself.” Good developers don’t copy code; rather, they make that code accessible to every feature that touches it. As a result, any area of an application depends on many others to function properly. Break something while working on inventory management and you might wreck checkout; update infrastructure and you might break login. It’s not that every change can truly affect every part of the application, but any change might impact multiple parts of the application if a bug is introduced, as the sketch below illustrates.
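
Here’s a contrived sketch of that failure mode, assuming a hypothetical shared pricing helper used by both inventory and checkout (all names here are made up for illustration):

```javascript
// priceUtils.js -- one shared helper, two callers (DRY in action).
function formatPrice(cents) {
  // An inventory-screen "improvement": return a bare number for easier sorting...
  return (cents / 100).toFixed(2); // previously: '$' + (cents / 100).toFixed(2)
}

// checkout.js -- untouched by the change, yet now broken.
function renderReceiptLine(item) {
  // Checkout still assumes the old '$'-prefixed format.
  return `${item.name}: ${formatPrice(item.cents)}`;
}

console.log(renderReceiptLine({ name: 'Coffee', cents: 450 }));
// Prints "Coffee: 4.50" -- the currency symbol silently vanished from checkout.
```

Nobody edited checkout, but checkout regressed all the same.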

Regression testing, therefore, exists to make sure these regressions are caught before they reach production. Generally, you run a suite of regression tests with every build: testing every unit of code, every API, and every core user pathway across the application at the browser level. Some teams can’t run their regression suite fast enough (or are using manual or crowdsourced testing, which carries an incremental cost per test run because you’re throwing extra bodies at the problem), so they run their browser regression suite on a less frequent schedule. The obvious downside is that regressions are more likely to slip into production before they’re caught.
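
To make those layers concrete, here’s a minimal sketch of the unit and API levels of such a suite, using Jest and supertest (the Express route and the applyDiscount function are hypothetical stand-ins):

```javascript
const express = require('express');
const request = require('supertest');

// A hypothetical app with one route whose contract we want to protect.
const app = express();
app.get('/api/products', (req, res) => res.json({ products: [] }));

// A hypothetical pure function whose behavior we want to pin down.
function applyDiscount(total, rate) {
  return Math.max(0, total - total * rate);
}

// Unit-level regression check: the function's contract must not drift.
test('applyDiscount never produces a negative total', () => {
  expect(applyDiscount(100, 0.25)).toBe(75);
  expect(applyDiscount(10, 1.5)).toBe(0);
});

// API-level regression check: the route must keep returning a product list.
test('GET /api/products still returns a product list', async () => {
  const res = await request(app).get('/api/products');
  expect(res.status).toBe(200);
  expect(Array.isArray(res.body.products)).toBe(true);
});
```

Browser-level tests sit on top of these; an example follows in the next section.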

Automation vs. Manual Testing

If you’re considering automating your browser regression suite, start with the features least likely to change rapidly: building automation takes time to pay off, so you want those tests to run more than a few times before they need to be rewritten. For brand new or rapidly evolving features, manual testing may be your most efficient approach.

When you do decide to automate, automated browser regression suites are built in a number of ways. The most basic is scripting them in frameworks such as Selenium, Cypress, or Capybara, or in JavaScript with TestCafe. They can also be built using record-and-play tools such as Selenium IDE or TestCafe Studio. Machine learning is making record-and-play stronger and less time-intensive, and will eventually allow record-and-play to drive itself using web traffic data.
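
For a sense of what the scripted approach looks like, here’s a minimal TestCafe sketch of a browser-level regression test over a core user pathway (the URL, selectors, and credentials are hypothetical):

```javascript
import { Selector } from 'testcafe';

fixture('Core pathway regression')
  .page('https://staging.example.com/login');

test('existing user can log in and reach checkout', async (t) => {
  await t
    // Exercise the login flow end to end.
    .typeText('#email', 'test-user@example.com')
    .typeText('#password', 'correct-horse-battery')
    .click('#login-button')
    .expect(Selector('#account-menu').exists).ok('login flow regressed')
    // Then the path from cart to checkout.
    .navigateTo('/cart')
    .click('#checkout-button')
    .expect(Selector('h1').withText('Checkout').exists).ok('checkout flow regressed');
});
```

Every selector in that script is a coupling point to the UI, which is exactly why these tests are powerful but brittle.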

The Cost of a Bug in Prod

How much did your last bug cost you?

Everyone knows bugs in production cost revenue and are a nightmare for the engineering team. Yet automated browser testing is so costly and inconsistent that many teams abandon their efforts to maintain a testing suite. At what cost?

How much QA and developer time was allocated to fixing the bug? How far was your next release pushed out? How much downtime did you experience, and how much revenue was lost while finding and fixing the bug?

IBM estimates that a bug making it to production increases your costs by 7x versus finding the bug in testing. Why hasn’t conventional browser automation testing solved the problem?

Conventional End to End Testing Isn’t Worth It

No matter what tool you use to maintain end to end testing, you’re stuck with three fundamental problems:

  1. A talented engineer needs to write every test specification by hand
  2. Browser-level tests are brittle, flaky, and costly to maintain
  3. You simply don’t know your test coverage level

But what if it just worked? What if your users’ behavior was building and updating your testing suite—automatically?

Sound too good to be true? Learn how you can break the cycle — schedule a demo today.

Why Browser-Automated Testing Consistently Fails

We’ve seen a lot of attempts at building browser-automated testing suites in our time. Most of them fail. The story typically goes something like this:

  • A growing team has been manually testing their application for some time
  • As they grow, they realize that manual testing is missing key user flows, and more importantly is significantly slowing down deployments
  • The team hires QA automation engineers or assigns developers to build an automated test suite
  • The effort takes a few months
  • Over time, release schedules and firefighting get in the way of updating, maintaining, and expanding the testing suite — and the suite starts to degrade

At which point, one of two things happens:

  1. The test suite grows increasingly decrepit and obsolete until it is abandoned, or
  2. The number of tests and the effort per build balloon until running the suite takes hours and maintaining it delays your build cycle even more than the manual testing it replaced

Why does this always come to pass? Much of the problem is the natural brittleness of browser-level tests; the maintenance burden is why Google suggests they get the smallest share of your attention. But the main reason is that when building end to end tests, most teams focus on the wrong questions entirely.

A quick Google search for “Improve QA testing” will bring you to a host of articles with titles like “___ Ways to Improve QA Testing.” I’m going to save you some time and provide you with the highlights here:

1. Incorporate testing early in the process

2. Outsource

3. Don’t forget to do it

4. Use this tool or that tool for test case management or execution

There’s a whole lot of advice out there on how to test your application, especially at the browser level. But bugs don’t get through because you don’t know how to test. Bugs get through because you don’t know what to test. Testing suites get bloated and broken because you don’t know what to test, so you end up building hundreds or thousands of tests in the hope of covering what you need. Tools can’t fix that for you, and neither can outsourced firms, no matter how much money you throw at them.

Figure Out What to Test

This may seem obvious, but that’s precisely the problem: most teams think figuring out what to test is fairly self-evident, or that it simply requires an engineer’s intuition. They’ll test what they think is important, flaky, or bug-prone.

But what should you test to make sure your application won’t break when your users use it? You should test what your users are actually trying to do.

In reality, your users tell you everything you need to know about what to test; you just need to start listening. No team succeeds with 10,000 end to end tests (and yes, we’ve seen 10,000), spitballing in the hope of catching everything. The teams that win, the teams that keep their automated testing suites maintained, relevant, and effective, are those that test functionality along real user behavior.

Your product team is already using analytics to understand what your users are doing and to refine workflows that improve the user experience. They’ve been doing it for years. Why isn’t engineering using the same analytics to build better tests?
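
As a sketch of what that could look like, here’s a toy example that mines an analytics event stream for the most common user flows, which become the first candidates for automated coverage (the event format and names are hypothetical):

```javascript
// Tally how often each complete event path occurs across users.
function topFlows(events, n) {
  // Group events into per-user sequences (assumes events arrive in order).
  const sessions = new Map();
  for (const { userId, event } of events) {
    if (!sessions.has(userId)) sessions.set(userId, []);
    sessions.get(userId).push(event);
  }
  // Count identical paths, then rank them by frequency.
  const flows = new Map();
  for (const path of sessions.values()) {
    const key = path.join(' > ');
    flows.set(key, (flows.get(key) || 0) + 1);
  }
  return [...flows.entries()].sort((a, b) => b[1] - a[1]).slice(0, n);
}

console.log(topFlows([
  { userId: 'u1', event: 'search' }, { userId: 'u1', event: 'add-to-cart' },
  { userId: 'u1', event: 'checkout' },
  { userId: 'u2', event: 'search' }, { userId: 'u2', event: 'add-to-cart' },
  { userId: 'u2', event: 'checkout' },
  { userId: 'u3', event: 'search' },
], 5));
// [ [ 'search > add-to-cart > checkout', 2 ], [ 'search', 1 ] ]
```

A real pipeline would segment sessions by time and normalize paths, but even this crude ranking points your test-writing effort at the flows users actually exercise.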

The Path Forward

The first step to making this transition is to evaluate your current testing suite against your users’ common flows.

Which of your current automated end to end tests are covering actual user behavior, and which are irrelevant? Where are your gaps? ProdPerfect can help you quickly find the answers. We’ll tell you what your users are doing and what your tests are doing, so you can close the gaps and cut the chaff.
