Frequently Asked Questions
What ProdPerfect Tests
If users are doing something on your site with any consistency, we’re testing it. There is always a chance that a user will find a novel way of moving through your application, and if you have edge cases you care about, we can ensure they’re covered. In the end, no testing service or product will get “100% coverage.” We cover at least 65% of all observed behavior on your application.
This is a minimum we guarantee. In reality, our analysis engine produces test cases until the next test case we'd produce starts to look a lot like noise: it is either very rare behavior or very difficult to establish as a real pattern. Most of our customers have a coverage level of somewhere between 70% and 80%. We believe that, depending on the nature of your application, 65% to 80% coverage of observed user behavior is the right range. Beyond this, tests become highly marginal in coverage and begin to throw more false positives, and the test suite runtime begins to explode, which would actually decrease the value of your test suite.
You automatically receive standard error output from the testing suite through your CI. In addition, for every test run the test runner sends screenshots of the broken step, along with video of every successful run, every failure, and the previous successful run of a broken test, to a web portal that is linked through CI for rapid triage. As our Delivery Engineers investigate the bug, we will add additional detail by hand through Slack or another integrated communication channel.
If a new feature affects a flow users are currently traversing, our tests are likely to break, and we will update them after checking to make sure there’s not a bug. This means we test most new features. Features that are unrelated to old flows (for example: you add a configuration workflow for a Slack integration) and introduce brand new behavior aren’t detected by default and won’t be tested. Ultimately ProdPerfect is a regression testing suite.
There is an exception: if new features spend over 48 hours in staging or preprod, and are being used by internal users during that time, we can configure ProdPerfect to detect that behavior to bootstrap tests for new features.
We have deployed to a Beta customer group our Chrome extension autogenerator, which allows them to tell us what brand new feature sets they want tested with a few clicks. This should become live to all customers in a few months.
When managing an end-to-end test suite, any engineering leader has two options. You can either:
- Pause development to go update your end-to-end tests in order to (hopefully) prevent them from ever breaking due to UI changes, or
- Develop code at full speed and override tests that break due to UI changes
We use the second option in order to minimize the need for coordination and to maximize your deployment speed. If a ProdPerfect test fails after you make a UI change, you should still investigate the failed test, as there may be a bug. If there’s not, simply override and deploy through the test suite failure. ProdPerfect will comment out the test and fix it asynchronously.
(Commonly asked about: signup flow, password change, etc.)
If an action or flow is performed by a statistically significant number of users, we’ll cover it–we have tight thresholds for what’s “in” and what’s “out” of the testing suite. ProdPerfect will include password change flows, account management flows, and signup flows (for example). However, there may be very rare flows that can cause bugs. We won’t catch these automatically. Instead, we provide all of our customers an allocation of “on demand” tests. You simply tell us what to test, and we’ll build those tests to cover these edge cases.
We don’t test for purely visual regressions, and we aren’t a substitute for Unit Testing: we don’t verify that the application returns correct data. If you have test cases designed to check that certain inputs consistently yield certain outputs, we recommend pushing these to Unit or Integration testing where possible; we won’t cover them in our end-to-end tests.
We also can’t yet handle Canvas elements or other drawing. We’re working on it, promise.
There is no set number of test cases, so this will vary from customer to customer. We don’t charge per test case, so we’re not incentivized to cap the total number. The ultimate number of test cases depends on how many the analysis engine discovers: if a discovered workflow shows a sufficiently strong pattern at high enough frequency, it gets a test case. The guaranteed minimum coverage of our test suite is 65% of all observed user behavior; your number may be higher.
Organizations that work with us are those willing to challenge the idea that the test cases they have conceived are the right ones. Our approach is to cover what your users are actually doing repeatedly–this is what’s most important for them, so it’s what’s most important to test. We believe in keeping test suites lean and mean, so they can run quickly and stably, with every build, and so that every broken test is actionable and important to developers. We do have customers that maintain their own edge case test suite which they run less frequently.
Here is a short video by Erik about the risks and challenges of having a very large E2E testing suite.
This is in fact our specialty. Our analysis engine excels at finding patterns amidst very complex workflows and identifying which pathways through the application matter to the most users. ProdPerfect’s use of actual behavioral data to find patterns gives us a distinct advantage over any human attempting to infer or make an educated guess at what to test.
Yes, though it’s likely to be a more expensive deployment. If you simply have feature flags or A/B testing for different customers, we’ll need to agree on the “vanilla” version of the application that ProdPerfect will test on staging, and rationalize to that. If you’re closer to a custom development shop, where every customer has custom code, we will treat these as different applications with different test suites and price accordingly.
Yes. Because we’re modeling elements using Autotracking, we’re not reliant on consistent selectors the same way Selenium is.
Our default tests run in Headless Chrome to be fast. Our testrunner can also use any locally-installed browser. A better solution: the ProdPerfect testrunner is a Selenium Grid driver so it can plug directly into BrowserStack or SauceLabs. Our Delivery Engineer will help you get this set up.
We can handle mobile web generally; we will detect the behavior, categorize it, and replay it. Here is a more extensive guide.
If you can update the application with our data tracker, and the application can send that clickstream metadata to the internet, then yes, we can test it. If you are maintaining many different versions of your application for different customers, this can become complex, but we’ve handled it before. Talk to us about it.
Working With ProdPerfect
Yes! Most of the time, actually. See our guide to ProdPerfect with your team.
We can measure it! It’s one of the best parts of using ProdPerfect: instead of hoping your end-to-end testing suite covers what’s important, we measure what portion of total user behavior we’re covering, against minimum-frequency thresholds. You can see this through our dashboard and confirm our performance while you’re using us. We have strict SLAs that hold us accountable to doing a great job:
- Minimum 65% coverage of all observed user behavior
- 95% stability
- 4-hour turnaround on broken tests
- 48-hour creation of new tests when detected on production
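As a rough illustration of how such a coverage metric can be computed (a simplified sketch with assumed data shapes, not ProdPerfect’s actual implementation), behavioral coverage is the share of observed events that fall inside tested flows:

```javascript
// Illustrative sketch only: "coverage of observed user behavior" as the
// fraction of recorded events that belong to flows under test.
// The flow names and the 65% threshold are examples; everything else is assumed.
function behavioralCoverage(eventCountsByFlow, testedFlows) {
  const total = Object.values(eventCountsByFlow).reduce((a, b) => a + b, 0);
  const covered = testedFlows
    .map((flow) => eventCountsByFlow[flow] || 0)
    .reduce((a, b) => a + b, 0);
  return total === 0 ? 0 : covered / total;
}

// Example: three observed flows, two of them under test.
const observed = { login: 5200, checkout: 3100, adminExport: 700 };
const coverage = behavioralCoverage(observed, ['login', 'checkout']);
console.log(coverage >= 0.65); // prints true: ~92% of observed events are covered
```

Note how low-frequency flows (like the admin export here) cost the most tests per point of coverage, which is why coverage beyond roughly 80% stops paying off.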
Your test suite will be stored in a private Git repo for which you’ll have read access. The latest version of the testing suite will always be installed to your CI server using Node Package Manager with each deploy, so by default it will live there.
Nope. If you have great test hooks, our test suite will likely be even more stable, but we model events using Autotracking (and execute events on those models) to make us less reliant than Selenium on “good” test hooks in code.
You don’t need to do anything to build, maintain, or expand the testing suite. We’ve got it. You need to respond to bug reports, of course, and keep a stable testing environment up and running for us, but that’s all. Very frequently people call this “magic” or “too good to be true.” If you feel that way, come get a demo.
Nope. It would royally mess up our workflow. We’ll be faster and more effective anyway.
You will be connected to a dedicated Delivery Engineer on Slack. Ask for their help. Their job is to make sure your test suite is stable and meeting its SLAs, and also to make sure you get the best experience possible working with ProdPerfect.
You own the test code—it’s your property, you have it locally, and you have access to that test code on your private-linked Git repo that we maintain. You can therefore run ProdPerfect locally as much as you like, and if you need to leave ProdPerfect, you can maintain this code yourself going forward.
Nope. It’s not our model and it’s very far from how we can provide you the most value. Talk to us about what KPIs we can use in a pilot to show you that we can maintain and expand your testing suite with greater accuracy, speed, and resource-efficiency than an internal or outsourced team.
How ProdPerfect Tests
Our humans currently drive internal toolsets, centered around a Chrome-extension record-and-play tool we call the Autogenerator (because it automatically produces test code), to add mock test data and make sure the tests can successfully traverse the pattern we’ve detected on the production environment. They also use the same Autogenerator to fix tests that break due to a UI change. This saves somewhere between 80% and 90% of the human effort involved.
Because we already know the path we’re testing, we have a definition of success for each test case. We trial-and-error mock data until we get down the path we need to go. This includes drop-downs, form fields, and any other data that could send us down a different or branching path. Because mock test data can be trialed until the result matches a known success state, this is a machine learning problem that can eventually be fully automated. We’re working on it.
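The trial-and-error loop described above can be sketched as follows (a toy illustration with assumed names and shapes, not ProdPerfect’s engine): candidate mock inputs are tried until the workflow reaches its known success state.

```javascript
// Toy sketch: try candidate mock data until the workflow completes.
// "success" stands in for the pre-determined end state of a detected pathway.
function findWorkingMockData(candidates, runWorkflow) {
  for (const candidate of candidates) {
    if (runWorkflow(candidate) === 'success') return candidate;
  }
  return null; // no candidate reached the known end state
}

// Example: a hypothetical phone-number field that only accepts 10 digits.
const reachesEndOfFlow = (data) =>
  /^\d{10}$/.test(data.phone) ? 'success' : 'blocked';

const working = findWorkingMockData(
  [{ phone: '555-1234' }, { phone: '15551234' }, { phone: '5551234567' }],
  reachesEndOfFlow
);
console.log(working); // { phone: '5551234567' }
```

Because success is defined up front by the detected pathway, the checker needs no human judgment, which is what makes the guess-and-check step automatable.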
A few elaborations on how we manage test data:
- Much of the mock data doesn’t impact the workflow: logins, account details such as name or address, or which product is selected to add to cart. We use whatever is convenient.
- Syntactically-restricted data such as a phone number will be created using context or trial-and-error.
- If we observe negative test cases in production, we will test them–however we will not exhaustively test permutations of mock data to trigger the negative test case.
- We create as much test data as possible, especially during account creation flows. We try to manipulate test data that we create in the test suite to be less dependent on our customers’ test data management processes.
- If the test data is not triggered by a human action (such as in a monitoring tool), we will need to ask for an API or data stream to allow us to trigger or receive that data.
- If data is being passed between one user role and another (such as a customer ordering a product and an employee handling the order), we will sequence the tests to make sure this data is created and passed during the test flow.
In short, it’s a process of trial and error split between humans and machines. Keep in mind we are collecting metadata on each event, and we’re discovering the sequences that people go through. From there, a trial-and-error process lets us figure out which mock data to input to get through each pathway. Because the process already knows what success looks like, based on a pre-determined workflow, we know the mock data is correct when we can successfully complete that workflow. We then validate that the data returned by the application is returned consistently.
This process currently has some humans helping. In the future, however, machines will be able to do this guess-and-check independently, given the limited number of options to check and the pre-defined pathway that defines success. It’s an exercise in machine learning! We’re currently teaching it.
Here is a video of our co-founder Erik explaining with some visualizations.
Once we’ve decided what mock data we’re using, our engineers use the Autogenerator to validate the data returned in the DOM, whether in the middle or at the end of the workflow. If the data is transformed in some way by business rules, we will make sure it is returned consistently from run to run.
- We use MailSac to receive emails sent to us. Because we provision an email for a login to your testing environment (typically “YOURCOMPANY@prodperfect.com”) we can always receive that mail. This allows us to validate emails and also click buttons for account signup.
- For 3rd-party integrations we will typically (as necessary) create accounts with the third-party application and handle any data confirmation or manipulation with a test.
- For outgoing phone calls, texts, and other non-web data that you want to validate, there are lots of open-source tools on the market, and we’re likely using one already.
Sadly not. Our process requires us to use a testing framework that can be deeply integrated with our analytics so that we can serve up tests that meet our SLAs. Because you don’t need to maintain the test code, you don’t need to worry about it.
We can separate our analyses by user role to make sure less-common user roles (such as an admin) don’t get swamped by more-common ones. In our integration with you, we’ll ask what user roles you have. We’ll need separate logins to your testing environment for those user roles.
Three ways. First, TestCafe has some great features for handling wait times and network/database instability without becoming flaky. Second, we use auto-tracking in our tracking library to prioritize using stable event identifiers in our tests. Finally, we run the testing suite every hour through a shadow CI to catch any testing instability or flakiness before tests are run in your CI to test the build.
We break up different user groups and run analyses separately for them. For example: an education technology application may have students, teachers, school administrators, and super-admins. We’ll analyze each of these separately and deploy tests for all four groups, typically with different logins for each type of user.
If you’re A/B testing on production (such that different cohorts/customers see different features), we’ll detect these multiple pathways and test both. We will infer or force the conditions that trigger the A or B on staging.
If you are flagging features on production, we’ll test these features as we do A/B tests. If they are being introduced in staging, they may break our tests, at which point we will respond and use exploratory testing to understand what changed, and update the test. If they cause features to disappear completely, you’re likely to get a bug alert that you’ll know to disregard. If they introduce brand new behavior that does not break the tests, this looks more like a brand new feature—we’ll test it when it reaches production.
In these situations, we add tests that create the data we need (users, events, customers, etc) at the beginning of the testing suite wherever possible, so we can create our own data.
We need about 10,000 total events to start building a test suite. Each event is a click, form submission, pageview, or similar. A good rule of thumb: if you have at least a few dozen human-hours of activity on your application each month, we’ll be fine.
Integrating ProdPerfect Into My System
Yes. We integrate with every major CI system out there. We can probably figure out how to integrate with your goofy niche CI system if you have one. We’ll provide you a custom script for your CI system to install the latest version of the test suite using Node Package Manager.
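The custom script we provide typically boils down to an install-and-run step in your CI config. A hypothetical sketch (the package name and paths are placeholders; your Delivery Engineer supplies the real script):

```shell
# Hypothetical CI step: fetch the latest ProdPerfect suite from your
# private repo via npm, then run it in headless Chrome with TestCafe.
npm install --no-save @yourcompany/prodperfect-tests
npx testcafe "chrome:headless" node_modules/@yourcompany/prodperfect-tests/tests/
```

Because the suite is installed fresh on every build, your CI always runs the latest version without any manual update step.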
You can run the test suite manually. You can also ask us to run the test suite at a set frequency from our own servers at no extra charge.
If you’re deploying to ephemeral or isolated environments, that’s fine. You’ll work with our Delivery Engineers to have your CI point our test suite at the ephemeral environments when they spin up, using the right script. Just as you would with an automated test suite you deployed yourself.
If you’re deploying to production directly, you may want to work with us to make sure you can launch your application in a stable dev environment, or work with us to test on production.
In short, it depends on how your pipeline is set up, but generally yes. ProdPerfect deploys end-to-end test suites, so you’ll need some form of stable environment to spin up for testing. You can do this during a pull request before the merge to a higher-level branch; often, an ephemeral testing environment such as Heroku Review Apps is the way to do this as part of an automated deployment. If you have static pre-production environments, you can run ProdPerfect on developer environments as a pre-merge test.
During onboarding and integration, your team will need to install our data recorder and CI integration, and provide ProdPerfect appropriate access to the testing environments. We may have a few questions during initial implementation. Beyond that, no effort is required other than fixing broken code with bug reports that we provide.
Yes; we’re designed to. Our test suites run quickly because they are lean and can be parallelized. We can rapidly respond to broken tests with our Delivery Engineering staff, who receive telemetry about every test suite run. We have auto-generation tooling that allows us to rewrite and redeploy tests faster than a QA Automation Engineer could.
When a test breaks, we immediately investigate it. If the test is broken due to a feature change, we comment it out (communicating with your team via a shared Slack channel), and then typically update the test within 30-60 minutes (4 hour SLA). If you’re using ProdPerfect as a build-blocker this will keep ProdPerfect from blocking the next build.
Very soon, we’ll have a push-button feature in our web portal to comment out broken tests, so you’ll no longer be dependent on a ProdPerfect Delivery Engineer to do it. This will let you move even faster and avoid getting blocked outside our 16-hour support window.
Any environment with a prod-like version of the application, with some prod-like data, and access to the internet, can be used. This could be localhost, QA, staging, pre-prod, or production.
Work with your Delivery Engineer to help them understand your deployment process, and we’ll accommodate. If you develop temporary environments during deploys, you’ll pass us the variable that sets the URL or IP address for that temporary environment, and we’ll target that on deploy.
Not yet. Product analytics vendors tend not to expose their raw metadata, which we need. Don’t worry; the data tracker is safe and lightweight.
Nope. We work exclusively on the browser level.
Security and Privacy
Nope. We are working with some very big eCommerce clients and made sure that there would be no impact. We avoid impact on load by loading after page complete, and on data transfer by using the Beacon API (or “Navigator Beacon”) to send the data asynchronously.
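The non-blocking pattern described above can be sketched roughly like this (assumed names, not our actual tracker code): events are buffered in memory and shipped in one background call. In a real browser, the injected `send` function would be `navigator.sendBeacon`, and recording would begin only after the page’s `load` event.

```javascript
// Minimal sketch of asynchronous, non-blocking event delivery.
// The sender is injected so the queue logic stays independent of the browser.
function createTracker(send) {
  const queue = [];
  return {
    record(event) { queue.push(event); },   // buffer events in memory
    flush(endpoint) {                       // ship the whole batch at once
      if (queue.length === 0) return false;
      return send(endpoint, JSON.stringify(queue.splice(0)));
    },
  };
}

// In a browser, the sender hands the payload off to the Beacon API, which
// delivers it in the background without blocking rendering or unload:
//   const tracker = createTracker((url, body) => navigator.sendBeacon(url, body));
//   window.addEventListener('load', () => { /* start recording after load */ });
```

The key property is that the page never waits on the network: recording is a memory write, and delivery is delegated to the browser.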
Our practice is to not collect personal information of any sort (PII, PHI, PCI, etc). You can see our Security FAQ here.
We typically work with your Security team to actively test the data tracker on a non-sensitive environment. We give them full access to the tracking library and the data we collect from your application to demonstrate that we aren’t collecting PII. Your team can also host the data tracker to ensure it controls all updates.
What Differentiates You From Competitors?
MABL & Testim (as well as others) are scripting or record-and-play tools that use Machine Learning to stabilize tests. They do this by making models of clickstream events that are robust through small changes. Large feature changes will still break them. Your team needs to develop, update, and expand the testing suite. You test through their application and do not own any code.
ProdPerfect auto-detects and auto-generates testing suites. When they break, we fix them. You receive, keep, and own the test code.
Rainforest (et al) is a crowdsourced manual testing service. You provide Rainforest your test cases, and they use an API to spread those tests among many testers, allowing them to manually test your application quickly. You pay per minute of manual testing, so if you test more often, you’re paying more. You develop, maintain, and expand the test cases. You test through their web application and do not own any code.
ProdPerfect auto-detects and auto-generates testing suites. When they break, we fix them. You receive, keep, and own the test code.