Testing Applications Built on Serverless Architecture: Don’t Fear the Transition


Fully migrating an application to serverless architecture is daunting even for experienced teams, but to the uninitiated, the journey can seem terrifying.

However, with the right plan, the migration brings greater development speed, control, and cost management. This, of course, is why people choose to go through the transition.

To a large extent, the fear is overblown. Myths abound that teams must radically change their development practices when migrating to serverless. While some practices change as you harness greater speed and control, many won’t, and there’s no need to throw the baby out with the bathwater. As you plan your transition to serverless, you’ll benefit greatly by knowing which practices don’t need to change as part of your transition plan.

Testing in Serverless

Starting with the good news: your testing practices don’t need to change to handle the transition to serverless. The really good news is they can even be improved.

Serverless architecture allows you to compartmentalize different functions easily, making the intent and purpose of each function clearer. This makes unit testing much simpler, and it’s easier to know when your code is properly covered by unit tests. Internal APIs also make up a greater portion of the codebase. Well-written APIs are probably the easiest part of the codebase to test because the contracts are so clear in the API code itself. Making the jump to serverless, therefore, doesn’t require changing your testing processes at lower levels.
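To make this concrete, here’s a minimal sketch of unit-testing one compartmentalized function in isolation. The handler, its event shape, and the password rule are hypothetical, invented for illustration rather than taken from any particular framework:

```python
import unittest

# Hypothetical serverless function: its single purpose is clear,
# so a unit test can cover it directly with no infrastructure.
def update_password_handler(event):
    """Validate a password-change request and return a response dict."""
    new_password = event.get("new_password", "")
    if len(new_password) < 12:
        return {"statusCode": 400, "body": "password too short"}
    return {"statusCode": 200, "body": "password updated"}

class TestUpdatePasswordHandler(unittest.TestCase):
    def test_rejects_short_password(self):
        resp = update_password_handler({"new_password": "short"})
        self.assertEqual(resp["statusCode"], 400)

    def test_accepts_valid_password(self):
        resp = update_password_handler({"new_password": "a-long-enough-secret"})
        self.assertEqual(resp["statusCode"], 200)
```

Because the function owns one clear contract, knowing when it is fully covered is straightforward.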

In fact, it makes the process much easier and clearer.

These components all bubble up into a unified whole application. If you’ve taken advantage of your serverless architecture to compartmentalize different parts, then modifying an individual module is less likely to break everything else. You’re less likely to create a regression based on sloppy integration, which is great. But you’re also going to be shipping new changes continuously and likely not always through a central unified pipeline. This will slightly change how you test the application end to end.

The best way to make sure you haven’t broken your application is still to test the whole thing at once, end to end (this is sometimes called browser testing, UI testing, or user acceptance testing). To support serverless development, you will need to automate your end-to-end testing, and keep the testing suite lean so it runs quickly and can support frequent deployments.

Best Practices for Automated End-to-End Testing

Most principles of good end-to-end testing practice are consistent from bare-metal to serverless.

E2E tests are intended to test the entire application as real users would use it, in conditions similar to how real users will use it. Therefore, an E2E test will actually launch the application in a browser and interact with it in a way that will test every layer of the application: the user interface itself, the browser (and compatibility with it), the network, the server, the APIs, the codebase, any third-party integrations, and any hardware — the whole kit.

  1. Set up a testing environment that closely reflects the live application being tested. This QA, Testing, or Staging (whatever you want to call it) environment will be updated with the most recent build before that build goes to production, and the tests will be run there. This environment will be integrated into your manual or automated deployment pipeline.
  2. Implement a process to react to the feedback your tests give you. If the tests fail, your build should not deploy to production. That feedback should immediately alert the developer who committed the build, and the developer should be responsible for diagnosing the failure, determining if there is a bug, and if so, fixing it. If not, the developer provides feedback to the QA automation engineering team about why the test failed and how to modify tests to pass on the next bug-free build.
  3. Decide what to test. Test case management is quietly the most difficult part of E2E testing, and is critical to get right. In short, your E2E tests should reflect how users are actually using your application. Focus on actual use cases that are intended to end in a satisfying conclusion for the user; that is, they have accomplished something they set out to do.
  4. Design validations. Validations in your automated test code should confirm that each interaction point successfully vaults the user through their workflow; each interaction should get the user to the next one. Tests should also validate the data transformed directly by the interactions throughout the workflow. This ensures the user has succeeded in their intent: the password is updated, or the product is ordered, the credit card is charged, and the order will ship to the right place.
  5. Finally, you’re going to write the test code, which will direct a driver to launch the browser, initiate each interaction, and validate every step along the way. Selenium is the most common toolset, but recent innovations have brought about other competitive frameworks such as TestCafe and Cypress; new ML-driven tooling also shows promise as a highly stable execution framework.
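The gating logic in step 2 above can be sketched as a small pure function. All of the names here (`TestResult`, `gate_deploy`, the `committer` field) are illustrative, not part of any real CI tool’s API:

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool
    committer: str  # who committed the build under test

def gate_deploy(results):
    """Block the deploy if any E2E test failed, and say who to alert."""
    failures = [r for r in results if not r.passed]
    if failures:
        return {
            "deploy": False,
            "alert": sorted({r.committer for r in failures}),
            "failed_tests": [r.name for r in failures],
        }
    return {"deploy": True, "alert": [], "failed_tests": []}
```

In a real pipeline, this decision would wire into your CI tool’s deploy stage and alerting, so the responsible developer gets the feedback immediately.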

Your end-to-end tests should be run every time you deploy code, whether it is a front-end change or a back-end change, big or incremental, to catch regressions before they go live. You’ll need unified tooling (including Continuous Integration) to ensure the testing suite is kicked off automatically regardless of where the deployment occurs. If the tests are kicked off immediately and run quickly, they provide the same level of instantaneous feedback that unit and API tests provide, dramatically increasing their value over tests that are run much later in the process.
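One way to guarantee the suite is kicked off regardless of where a deployment originates is to funnel every deploy event through a single dispatch point. The event shape and the `run_e2e_suite` hook below are hypothetical, a sketch of the idea rather than a specific CI product’s interface:

```python
# Hypothetical deployment-event dispatcher: every deploy, front-end or
# back-end, big or incremental, passes through one place that kicks
# off the E2E suite against the staging environment.
def on_deploy(event, run_e2e_suite):
    """Trigger the E2E suite for any staging deploy, whatever its source."""
    if event.get("environment") != "staging":
        return {"triggered": False, "reason": "not a staging deploy"}
    suite_passed = run_e2e_suite(build=event["build_id"])
    return {"triggered": True, "passed": suite_passed}
```

Because every repository’s deploy hook calls the same dispatcher, no change can reach production without the suite running first.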

In addition to end-to-end testing, developers building on serverless need to set up continuous monitoring to look for unexpected performance or functional changes on production. Besides an end-to-end testing suite as a synthetic functional monitor, you’ll want a serverless-specific application performance monitoring tool like IOpipe to trace and profile which serverless functions are potentially impacting users.
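As a complement to a dedicated tool like IOpipe, the core idea of watching for unexpected per-function changes can be sketched as a simple threshold check over invocation timings. The data shape and the 500 ms threshold are invented for illustration; a real APM tool traces this automatically:

```python
# Illustrative monitor: flag serverless functions whose average
# invocation duration exceeds a threshold, as a crude stand-in
# for real per-function performance tracing.
def slow_functions(timings_ms, threshold_ms=500):
    """timings_ms maps function name -> list of invocation durations (ms)."""
    flagged = {}
    for name, samples in timings_ms.items():
        if not samples:
            continue
        avg = sum(samples) / len(samples)
        if avg > threshold_ms:
            flagged[name] = round(avg, 1)
    return flagged
```

Run against production timings on a schedule, a check like this surfaces which functions are potentially impacting users before anyone files a ticket.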

This article was first published on IOpipe’s blog.
