Frequently Asked Questions

End-to-End Testing with ProdPerfect

ProdPerfect’s tools are JavaScript for the front end and either Chromeless- or Selenium-based for the full end-to-end testing. We’re most comfortable in Ruby, Python, and JavaScript, but ultimately the underlying tooling for Selenium and browser automation is the same abstraction across every language, and we can easily adapt to fit any setup you are using. Similarly, we are experienced in writing code to fit any CI system you currently have in place.

We can measure it! It’s one of the best parts of using ProdPerfect: instead of hoping your end-to-end testing suite covers what’s important, we measure what portion of total user behavior we’re covering, against minimum-frequency thresholds. You can see this through our dashboard and confirm our performance while you’re using us.

If users are doing something on your site with any consistency, we’re testing it. There is always a chance that a user will find a novel way of moving through your application. In the end, no testing service or product will get “100% coverage.”

We use TestCafe to run tests asynchronously, allowing a full end-to-end test suite to complete in a few minutes. Tests are written in JavaScript.
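For illustration, a TestCafe test looks roughly like the sketch below. The URL, selectors, and credentials are hypothetical placeholders, not actual ProdPerfect output, and running it requires the `testcafe` runner and a browser:

```javascript
import { Selector } from 'testcafe';

// Hypothetical login flow against a staging environment.
fixture('Login flow')
    .page('https://staging.example.com/login');

test('user can log in and reach the dashboard', async t => {
    await t
        .typeText(Selector('[data-test="email"]'), 'qa-user@example.com')
        .typeText(Selector('[data-test="password"]'), 'not-a-real-password')
        .click(Selector('[data-test="submit"]'))
        // TestCafe waits automatically for the element to appear.
        .expect(Selector('[data-test="dashboard"]').exists).ok();
});
```

Because each test is an async function, the runner can execute tests concurrently, which is what keeps total suite time down to minutes.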

Yes. We integrate with every major CI system out there. We can probably figure out how to integrate with your goofy niche CI system if you have one.

During onboarding and integration, your team will need to install our data recorder and CI integration, and provide ProdPerfect appropriate access to your testing environments. We may have a few questions during initial implementation. Beyond that, no effort is required other than fixing the bugs we report.

Yes; we’re designed to. Our test suites run quickly because they are lean and can be parallelized. We can respond rapidly to broken tests with our Account Engineering staff, who receive telemetry about every test suite run. We have auto-generation tooling that allows us to rewrite and redeploy tests faster than a QA Automation Engineer could.

When a test breaks, we immediately investigate it. If the test is broken due to a feature change, we comment it out (communicating with your team via a shared Slack channel), and then typically update the test within 30 minutes.

You automatically receive standard error output from the testing suite through your CI. In addition, we automatically send screenshots of the broken step and preceding step through our integrated communication channel (typically Slack). As our Account Engineers investigate the bug, we will add additional detail by hand through the integrated communication channel.

Our tests traverse your application in the same way that your users do, but in staging or another pre-production environment. Our tests expect to be able to fully traverse the same flows, and receive the same application responses, that users do in production. If these conditions are not met, or the application throws an error during the traversal, our tests will fail. This kind of testing is often called User Acceptance Testing, Functional Testing, or (loosely) Selenium Testing.

If a new feature affects a flow users are currently traversing, our tests are likely to break, and we will update them after checking to make sure there’s not a bug. This means we test most new features. Features that are unrelated to old flows and introduce brand-new behavior (for example, a configuration workflow for a new Slack integration) aren’t detected by default and won’t be tested. Ultimately, ProdPerfect is a regression testing suite.

There is an exception: if new features spend over 48 hours in staging or pre-prod, and are being used by internal users during that time, we can configure ProdPerfect to detect that behavior and bootstrap tests for the new features.

(Commonly asked about: signup flows, password changes, etc.)

If an action or flow is performed by a statistically significant number of users, we’ll cover it: we have tight thresholds for what’s “in” and what’s “out” of the testing suite. ProdPerfect will include password change flows, account management flows, and signup flows, for example. However, there may be very rare flows that can cause bugs. We won’t catch these automatically. Instead, we provide all of our customers an allocation of “on-demand” tests: you simply tell us what to test, and we’ll build those tests to cover these edge cases.

We don’t test for purely visual regressions, and we aren’t a substitute for Unit Testing, as we don’t test for the correct data being returned by the application.

Any environment with a prod-like version of the application, with some prod-like data, and access to the internet, can be used. This could be localhost, QA, staging, pre-prod, or production.

Three ways. First, TestCafe has great features for handling wait times and network/database instability without becoming flaky. Second, our tracking library uses auto-tracking to prioritize stable event identifiers in our tests. Finally, we run the testing suite every hour through a shadow CI to catch any instability or flakiness before the tests run in your CI against the build.
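The selector-stability idea can be sketched as follows. This is our illustrative logic, not ProdPerfect’s actual tracking library, and the attribute names are assumptions:

```javascript
// Sketch: prefer stable identifiers over brittle positional CSS paths.
// `el` is a plain object describing a DOM node's attributes.
function chooseSelector(el) {
  if (el.dataTestId) return `[data-testid="${el.dataTestId}"]`; // survives restyles and moves
  if (el.id) return `#${el.id}`;                                // stable unless renamed
  return el.cssPath;                                            // last resort: breaks on layout changes
}

// A layout refactor changes the CSS path but not the data attribute,
// so the generated selector keeps working.
console.log(chooseSelector({ dataTestId: 'login-submit', cssPath: 'div > div > button' }));
// → [data-testid="login-submit"]
```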

We break up different user groups and run analyses separately for them. For example: an education technology application may have students, teachers, school administrators, and super-admins. We’ll analyze each of these separately and deploy tests for all four groups, typically with different logins for each type of user.
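In TestCafe terms, per-group logins map naturally onto the Role API. A hedged sketch, with hypothetical URLs, selectors, and credentials:

```javascript
import { Role, Selector } from 'testcafe';

// One Role per user group; TestCafe caches the authenticated session
// so each test can switch users cheaply.
const teacher = Role('https://staging.example.com/login', async t => {
    await t
        .typeText(Selector('#email'), 'teacher@example.com')
        .typeText(Selector('#password'), 'not-a-real-password')
        .click(Selector('#submit'));
});

fixture('Teacher flows').page('https://staging.example.com');

test('teacher can open the gradebook', async t => {
    await t
        .useRole(teacher)
        .click(Selector('[data-test="gradebook"]'))
        .expect(Selector('[data-test="gradebook-table"]').exists).ok();
});
```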

If you’re A/B testing on production (such that different cohorts of customers see different features), we’ll detect these multiple pathways and test each of them. We will infer or force the conditions that trigger the A or B variant on staging.

If you are flagging features on production, we’ll test those features as we do A/B tests. If flags are being introduced in staging, they may break our tests, at which point we will respond, use exploratory testing to understand what changed, and update the test. If they cause features to disappear completely, you’re likely to get a bug alert that you’ll know to disregard. If they introduce brand-new behavior that does not break the tests, this looks more like a brand-new feature: we’ll test it when it reaches production.

In these situations, we add tests that create the data we need (users, events, customers, etc) at the beginning of the testing suite wherever possible, so we can create our own data.
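One way to sketch that setup step (our illustration, not ProdPerfect’s actual tooling): generate unique, disposable records at the start of each run, so no test depends on pre-existing data in the environment:

```javascript
// Build a unique test user for this run; the unique suffix avoids
// collisions with records left over from earlier runs.
function makeTestUser(runId = Date.now()) {
  return {
    email: `e2e-user-${runId}@example.com`,
    name: `E2E User ${runId}`,
    password: 'a-throwaway-password', // hypothetical; real credentials come from config
  };
}

// A setup test at the top of the suite would create this user through the
// application's own signup flow; later tests then log in as that user.
const user = makeTestUser(42);
console.log(user.email); // → e2e-user-42@example.com
```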

MABL and Testim (as well as others) are scripting or record-and-playback tools that use machine learning to stabilize tests. They do this by building models of clickstream events that are robust to small changes; large feature changes will still break them. Your team needs to develop, update, and expand the testing suite. You test through their application and do not own any code.

ProdPerfect auto-detects and auto-generates testing suites. When they break, we fix them. You receive, keep, and own the test code.

Rainforest is a crowdsourced manual testing service. You provide Rainforest your test cases, and they use an API to spread those tests among many testers, allowing them to manually test your application quickly. You pay per minute of manual testing, so if you test more often, you’re paying more. You develop, maintain, and expand the test cases. You test through their web application and do not own any code.

ProdPerfect auto-detects and auto-generates testing suites. When they break, we fix them. You receive, keep, and own the test code.

Customer 1:

• Customer deployed code to staging that broke login, landing users on an error page after clicking the login button.

• Customer deployed code to staging that broke a page for adding a new linked account, leading to an about:error page when navigating to it.

Customer 2:

• Customer deployed code to staging that redirected non-admin users to the wrong page on login rather than to their site.

• Customer deployed code to staging that broke the ability to delete comments from matters.

• Customer deployed code to staging that prevented the creation of new Companies by an Admin user.

• Helped diagnose issues with certain forms, such as their [redacted] form, that existed before we showed up.

• Customer deployed code to staging that broke the ability to open the accordion menus that house all of their managed machines.

Customer 3:

• Customer deployed code to a review environment that broke the ability to make distributions (500 error returned).

• Customer deployed code to a review environment that broke the ability to create new Projects by an Admin.

• Customer deployed code to a review environment that broke the ability to save new changes to a deal after editing.

• Customer deployed code to a review environment that led to an HTTP error being returned when attempting to open a deal in the editor.

Privacy and GDPR

ProdPerfect’s privacy policy is here.

Are you GDPR compliant?

Yes. We do not collect PII by default; however, there may be circumstances in which we need to do so. We are Privacy Shield certified and abide by the Privacy Shield standards for both the EU-US and Swiss-US policy sets.

ProdPerfect is committed to keeping all data safe and secure, and to remaining in compliance with GDPR in cases where we are classified as a Data Processor. Our Technical and Organizational Measures are available upon request, as is a Data Processing Agreement.