What ProdPerfect Tests
- How do I know your tests represent full coverage?
If users are doing something on your site with any consistency, we’re testing it. There is always a chance that a user will find a novel way of moving through your application, and if you have edge cases you care about, we can ensure they’re covered. In the end, no testing service or product will get “100% coverage.” We cover at least 65% of all observed behavior on your application.
- Why is 65% coverage a good minimum number?
This is a minimum we guarantee. In reality, our analysis engine produces test cases until the next test case we’d produce starts to look a lot like noise–it is either very rare behavior or very difficult to establish as a real pattern. For test cases beyond the first 65%, customers can ask for them to be covered using “wildcards” that are part of every engagement. We always aim to be a lean test suite that truly caps the testing pyramid, rather than attempt to cover “everything” on an application, so we can support Continuous Testing and always test what’s most relevant.
- What do your bug reports look like?
You automatically receive standard error output from the testing suite through your CI. In addition, the testrunner automatically sends screenshots of the broken step and video of every successful run, every failure, and the previous successful run of a broken test to a web portal that gets linked through CI on every test run for rapid triage. As our Delivery Engineers investigate the bug, we will add additional detail by hand through Slack or another integrated communication channel.
- What do your end-to-end tests actually test for?
- Do you test new features?
If a new feature affects a flow users are currently traversing, our tests are likely to break, and we will update them after checking to make sure there’s not a bug. This means we test most new features. Features that are unrelated to old flows (for example: you add a configuration workflow for a Slack integration) and introduce brand new behavior aren’t detected by default and won’t be tested. Ultimately ProdPerfect is a regression testing suite.
There is an exception: if new features spend over 48 hours in staging or preprod, and are being used by internal users during that time, we can configure ProdPerfect to detect that behavior to bootstrap tests for new features.
We have deployed our Chrome extension Autogenerator to a Beta customer group, allowing them to tell us, with a few clicks, which brand-new feature sets they want tested. This should go live for all customers in a few months.
- I want to make sure that tests don’t break when I push a new UI change. Can you support that?
When managing an end-to-end test suite, any engineering leader has two options. You can either:
- Pause development to update your end-to-end tests in order to (hopefully) prevent them from ever breaking due to UI changes, or
- Develop code at full speed and override tests that break due to UI changes
We use the second option in order to minimize the need for coordination and to maximize your deployment speed. If a ProdPerfect test fails after you make a UI change, you should still investigate the failed test, as there may be a bug. If there’s not, simply override and deploy through the test suite failure. ProdPerfect will comment out the test and fix it asynchronously.
- How do you handle edge cases?
(Commonly asked about: signup flow, password change, etc)
If an action or flow is performed by a statistically significant number of users, we’ll cover it–we have tight thresholds for what’s “in” and what’s “out” of the testing suite. ProdPerfect will include password change flows, account management flows, and signup flows (for example). However, there may be very rare flows that can cause bugs. We won’t catch these automatically. Instead, we provide all of our customers an allocation of “on demand” tests. You simply tell us what to test, and we’ll build those tests to cover these edge cases.
- What don't you test for?
We don’t test for purely visual regressions, and we aren’t a substitute for Unit Testing, as we don’t test for the correct data being returned by the application. If you have test cases designed to verify that certain input variables consistently yield certain outputs, we recommend pushing these to Unit or Integration testing where possible; we won’t cover them in our end-to-end tests.
We also can’t yet handle Canvas elements or other drawing. We’re working on it, promise.
- How many test cases will I get?
There is no set amount of test cases, so this will vary from customer to customer. We don’t charge per test case, so we’re not incentivized to cap you on the total number. The ultimate number of test cases will depend upon how many the analysis engine discovers: if the discovered workflow has a sufficiently strong pattern and high frequency, it will get a test case. The guaranteed minimum coverage of our test suite is 65% of all observed user behavior; your number may be higher.
- I’ve already come up with 500 test cases; how can ProdPerfect help me?
Organizations that work with us are those willing to challenge the idea that the test cases they have conceived are the right ones. Our approach is to cover what your users are actually doing repeatedly–this is what’s most important for them, so it’s what’s most important to test. We believe in keeping test suites lean and mean, so they can run quickly and stably, with every build, and so that every broken test is actionable and important to developers. We do have customers that maintain their own edge case test suite which they run less frequently.
- My application has very complex user flows. Can you help?
This is in fact our specialty. Our analysis engine excels at finding patterns amidst very complex workflows and identifying which pathways through the application matter to the most users. ProdPerfect’s use of actual data to find patterns in a behavioral sandbox gives us a distinct advantage over any human attempting to infer or make an educated guess at what to test.
- My application is highly customizable for different customers. Can you help?
Yes, though it’s likely to be a more expensive deployment. If you simply have feature flags or A/B testing for different customers, we’ll need to agree on the “vanilla” version of the application that ProdPerfect will test on staging, and standardize on that. If you’re closer to a custom development shop, where every customer has custom code, we will treat these as different applications with different test suites and price accordingly.
- Do you test single-page apps built with frameworks such as React and Angular?
Yes. Because we’re modeling elements using Autotrack, we’re not reliant on consistent selectors the same way Selenium is.
We accommodate single page transitions by observing change events and inferring the next event in the sequence based on how users interacted with the elements that appeared with the change.
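To make that inference concrete, here is a toy sketch in plain JavaScript (not ProdPerfect’s actual engine; event names are illustrative) of picking the most common next event after a given event, based on observed user sequences:

```javascript
// Toy sketch: infer the most likely successor of `event` from observed
// user event sequences. Illustrative only, not ProdPerfect's real engine.
function mostLikelyNext(sequences, event) {
  const counts = {};
  for (const seq of sequences) {
    for (let i = 0; i < seq.length - 1; i++) {
      if (seq[i] === event) {
        counts[seq[i + 1]] = (counts[seq[i + 1]] || 0) + 1;
      }
    }
  }
  // Pick the successor with the highest observed count.
  let best = null;
  for (const next of Object.keys(counts)) {
    if (best === null || counts[next] > counts[best]) best = next;
  }
  return best;
}
```

For example, if most users who run a search then click a result, the replayed test treats that click as the next step after the search completes.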
- Can you handle cross-browser testing?
Our default tests run in Headless Chrome to be fast. ProdPerfect’s testrunner is a Selenium Grid driver and can integrate with various cross-browser testing tools. We prefer BrowserStack and SauceLabs. It’s worth noting that cross-browser testing solutions are frequently unstable, so be prepared for that.
- How about testing mobile web?
We can handle mobile web generally; we will detect the behavior, categorize it, and replay it. Here is a more extensive guide.
- Can you test native mobile, React Native, Electron, or other non-web applications?
- Can you test on-premise applications?
If you can update the application with our data tracker, and the application can send that clickstream metadata to the internet, then yes, we can test it. If you are maintaining many different versions of your application for different customers, this can become complex, but we’ve handled it before. Talk to us about it.
Working With ProdPerfect
- Do you work with teams that already have automated test suites or automation teams?
Yes! Most of the time, actually. See our guide to ProdPerfect with your team.
- How do I know you’re doing a good job?
We can measure it! It’s one of the best parts of using ProdPerfect: instead of hoping your end-to-end testing suite covers what’s important, we measure what portion of total user behavior we’re covering, against minimum-frequency thresholds. You can see this through our dashboard and confirm our performance while you’re using us. We have strong SLAs that hold us accountable to doing a great job:
- Minimum 65% coverage of all observed user behavior
- 95% stability
- 4-hour turnaround on broken tests
- 48-hour creation of new tests when detected on production
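As an illustration of how the coverage number can be computed, here is a minimal sketch (our own arithmetic, not ProdPerfect’s actual analytics; flow names are hypothetical) of the share of observed user events that fall inside tested flows:

```javascript
// Minimal sketch of a behavior-coverage metric: the fraction of all observed
// user events that belong to tested flows. Flow names are hypothetical.
function behaviorCoverage(flowEventCounts, testedFlows) {
  const total = Object.values(flowEventCounts).reduce((a, b) => a + b, 0);
  const covered = testedFlows.reduce(
    (sum, flow) => sum + (flowEventCounts[flow] || 0), 0);
  return total === 0 ? 0 : covered / total;
}

// e.g. checkout and search account for 90% of observed events:
// behaviorCoverage({ checkout: 700, search: 200, admin: 100 },
//                  ['checkout', 'search'])  -> 0.9
```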
- How do I get my test code?
Your test suite will be stored in a private Git repo for which you’ll have read access. The latest version of the testing suite will always be installed to your CI server using Node Package Manager with each deploy, so by default it will live there.
- Do I need to change my code to work with you?
Nope. If you have great test hooks, our test suite will likely be even more stable, but we model events using Autotracking (and execute events on those models) to make us less reliant than Selenium on “good” test hooks in code.
- Do I need to do… anything?
You don’t need to do anything to build, maintain, or expand the testing suite. We got it. You need to respond to bug reports, of course, and keep a stable testing environment up and running for us, but that’s all. Very frequently people call this “magic” or “too good to be true.” If you feel that way, come get a demo.
- Can I edit your test code if I really want to?
Nope. It would royally mess up our workflow. We’ll be faster and more effective anyway.
- What about a Pull Request (PR) workflow?
We currently don’t support opening PRs against your test suite. Our process requires us to build and update your test suite in a highly consistent way. If you have questions or changes you’d like to make, speak with your Delivery Engineer and they will help make sure the right updates are made.
- What does support look like?
You will be connected to a dedicated Delivery Engineer on Slack. Ask for their help. Their job is to make sure your test suite is stable and meeting its SLAs, and also to make sure you get the best experience possible working with ProdPerfect.
- What if I want to use the test code on my own or leave?
You own the test code—it’s your property, you have it locally, and you have access to that test code on your private-linked Git repo that we maintain. You can therefore run ProdPerfect locally as much as you like, and if you need to leave ProdPerfect, you can maintain this code yourself going forward.
- Can you write a test suite for us once and then let us maintain it?
Nope. It’s not our model and it’s very far from how we can provide you the most value. Talk to us about what KPIs we can use in a pilot to show you that we can maintain and expand your testing suite with greater accuracy, speed, and resource-efficiency than an internal or outsourced team.
How ProdPerfect Tests
- How much human effort is actually involved in test development?
Our humans currently drive internal toolsets, centered around a Chrome extension record-and-playback tool we call the Autogenerator (because it automatically produces test code), to add the mock test data needed for tests to successfully traverse the pattern we’ve detected on the production environment. They also use the Autogenerator to fix tests that break due to a UI change. This tooling saves somewhere between 80% and 90% of the human effort.
- How do you know what mock data to use in your tests?
Because we already know the path we’re testing, we have a definition of success for each test case. We trial-and-error mock data until we get down the path we need to go. This includes drop-downs, form fields, and any other data that would send us down a different or branching path. Because mock test data can be trialed until the result matches a known success state, this is a machine learning problem that can eventually be fully automated. We’re working on it.
A few elaborations of how we manage test data:
- Much of the mock data doesn’t impact the workflow–logins, account details such as name or address, which product is selected to add to cart. We use what is convenient.
- Syntactically-restricted data such as a phone number will be created using context or trial-and-error.
- If we observe negative test cases in production, we will test them–however we will not exhaustively test permutations of mock data to trigger the negative test case.
- We create as much test data as possible, especially during account creation flows. We try to manipulate test data that we create in the test suite to be less dependent on our customers’ test data management processes.
- If the test data is not triggered by a human action (such as in a monitoring tool), we will need to ask for an API or data stream to allow us to trigger or receive that data.
- If data is being passed between one user role and another (such as a customer ordering a product and an employee handling the order), we will sequence the tests to make sure this data is created and passed during the test flow.
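The trial-and-error loop described above can be sketched roughly as follows. This is a toy illustration; `attemptFlow` is a hypothetical stand-in for replaying the flow with a candidate input:

```javascript
// Toy sketch of guess-and-check mock data selection: try candidates until
// the flow reaches its known success state. `attemptFlow` is hypothetical.
function findWorkingMockData(candidates, attemptFlow) {
  for (const candidate of candidates) {
    if (attemptFlow(candidate)) {
      return candidate; // this input traverses the detected pattern
    }
  }
  return null; // no candidate completed the workflow
}

// Example: only a syntactically valid phone number gets past the form.
const candidates = ['abc', '555-0100'];
const working = findWorkingMockData(candidates, (c) => /^\d{3}-\d{4}$/.test(c));
// working === '555-0100'
```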
- I have a very data-dependent application. How can you test it?
In short, it’s a process of trial and error split between humans and machines. Keep in mind we collect metadata on each event, and we discover the sequences that people go through. From there, a trial-and-error process lets us figure out which mock data to input to get through each pathway. Because the process already knows what success looks like based on a predetermined workflow, we know the mock data is correct when we can successfully complete that workflow. We then validate that the data returned by the application is returned consistently.
This process currently has some humans helping. However, in the future machines will be able to do this guess and check independently given the limited amount of options they can check, and because they know which pathway they need to get to next, given the pre-defined pathway which defines success. It’s an exercise in machine learning! We’re currently teaching it.
- How do you validate data?
Once we’ve decided what mock data we’re using, our engineers use the Autogenerator to validate the data returned in the DOM, in the middle or at the end of the flow. If the data is transformed in some way by business rules, we make sure the data is returned consistently from run to run.
- How do you handle 3rd-party integrations, emails, phone calls, etc?
- We use MailSac to receive emails sent to us. Because we provision an email for a login to your testing environment (typically “email@example.com”) we can always receive that mail. This allows us to validate emails and also click buttons for account signup.
- For 3rd-party integrations we will typically (as necessary) create accounts with the third-party application and handle any data confirmation or manipulation with a test.
- For outgoing phone calls, texts, and other non-web data that you want to validate, there are lots of open-source tools on the market, and we’re likely using one already.
- How are your tests deployed?
- Can you deliver tests in a framework other than your own proprietary one?
Sadly not. Our process requires us to use a testing framework that can be deeply integrated with our analytics so that we can serve up tests that meet our SLAs. Because you don’t need to maintain the test code, you don’t need to worry about it.
- How do you handle multiple user roles who have different experiences?
We can separate our analyses by user role to make sure less-common user roles (such as an admin) don’t get swamped by more-common ones. During integration, we’ll ask what user roles you have, and we’ll need separate logins to your testing environment for each of those roles.
- How do you keep your tests stable?
Three ways. First, TestCafe has some great features for handling wait times and network/database instability without becoming flaky. Second, we use auto-tracking in our tracking library to prioritize using stable event identifiers in our tests. Finally, we run the testing suite every hour through a shadow CI to catch any testing instability or flakiness before tests are run in your CI to test the build.
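The waiting behavior that keeps assertions stable can be illustrated with a generic poll-until-true helper. This is a simplified sketch of the idea, not TestCafe’s internals:

```javascript
// Simplified sketch of wait-and-retry: poll a condition until it holds or a
// timeout elapses, so a slow network or database doesn't fail an assertion
// that would have passed a moment later.
async function waitFor(condition, { timeoutMs = 10000, intervalMs = 100 } = {}) {
  const deadline = Date.now() + timeoutMs;
  do {
    if (await condition()) return true; // condition eventually held
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  } while (Date.now() < deadline);
  return false; // timed out: genuinely broken, not just slow
}
```

A test that asserts on an element that appears after an async fetch would poll with `waitFor` instead of failing on the first check.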
- I have certain users whose functionality is critical but they are a small cohort. How do you handle that?
We break up different user groups and run analyses separately for them. For example: an education technology application may have students, teachers, school administrators, and super-admins. We’ll analyze each of these separately and deploy tests for all four groups, typically with different logins for each type of user.
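As a concrete illustration, the per-role setup might look like the following sketch. All names and credentials are placeholders, not a real ProdPerfect configuration format:

```javascript
// Placeholder sketch of separate test logins per user role, so each role's
// flows can be analyzed and replayed independently. Values are illustrative.
const roleLogins = {
  student: { username: 'student@example.com', password: 'from-secret-store' },
  teacher: { username: 'teacher@example.com', password: 'from-secret-store' },
  schoolAdmin: { username: 'admin@example.com', password: 'from-secret-store' },
  superAdmin: { username: 'super@example.com', password: 'from-secret-store' },
};

// Look up the credentials a test run should use for a given role.
function loginFor(role) {
  const creds = roleLogins[role];
  if (!creds) throw new Error(`No test login configured for role: ${role}`);
  return creds;
}
```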
- How does ProdPerfect handle A/B testing?
If you’re A/B testing on production (such that different cohorts/customers see different features), we’ll detect these multiple pathways and test both. We will infer or force the conditions that trigger the A or B on staging.
- How does ProdPerfect handle feature flagging?
If you are flagging features on production, we’ll test these features as we do A/B tests. If they are being introduced in staging, they may break our tests, at which point we will respond and use exploratory testing to understand what changed, and update the test. If they cause features to disappear completely, you’re likely to get a bug alert that you’ll know to disregard. If they introduce brand new behavior that does not break the tests, this looks more like a brand new feature—we’ll test it when it reaches production.
- How does ProdPerfect handle environments whose data changes frequently?
In these situations, we add tests that create the data we need (users, events, customers, etc) at the beginning of the testing suite wherever possible, so we can create our own data.
- What if I don’t have a lot of users?
We need about 10,000 total events to start building a test suite. Note each event is a click, form submission, pageview, etc. A good rule of thumb: if you have at least a few dozen human-hours of activity on your application each month, we’ll be fine.
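As a back-of-envelope check of that rule of thumb (the per-hour event rate here is our assumption for illustration, not a ProdPerfect figure):

```javascript
// Back-of-envelope arithmetic for the 10,000-event threshold.
// ASSUMPTION: an active user generates roughly 250 events per hour
// (clicks, pageviews, form submissions); this rate is illustrative.
const EVENTS_NEEDED = 10000;
const EVENTS_PER_ACTIVE_HOUR = 250;

const activeHoursNeeded = EVENTS_NEEDED / EVENTS_PER_ACTIVE_HOUR;
// 40 active hours per month -- i.e. "a few dozen human-hours" of activity.
```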
Integrating ProdPerfect Into My System
- Can ProdPerfect work with my language/framework/system?
- Can you deploy tests to my CI Process?
Yes. We integrate with every major CI system out there. We can probably figure out how to integrate with your goofy niche CI system if you have one. We’ll provide you a custom script for your CI system to install the latest version of the test suite using Node Package Manager.
- What if I don’t have a CI?
You can run the test suite manually. You can also ask us to run the test suite at a set frequency from our own servers at no extra charge.
- What if we don't have a staging environment, or we use ephemeral environments spun up during deploys?
If you’re deploying to ephemeral or isolated environments, that’s fine. You’ll work with our Delivery Engineers to have your CI point our test suite at the ephemeral environments when they spin up, using the right script, just as you would with an automated test suite you deployed yourself.
If you’re deploying to production directly, you may want to work with us to make sure you can launch your application in a stable dev environment, or work with us to test on production. Here is our disclaimer regarding testing on production.
- Can developers use ProdPerfect to test on feature branches before merging to staging/QA?
In short, it depends how your pipeline is set up, but generally yes. ProdPerfect deploys end-to-end test suites, so you’ll need some form of stable environment to spin up for testing. You can do this during a pull request, before the merge to a higher-level branch; often, an ephemeral testing environment such as Heroku Review Apps is the way to do this as part of an automated deployment. If you have static pre-production environments, you can run ProdPerfect on developer environments as a pre-merge test.
- How much effort is required by my team?
During onboarding and integration, your team will need to install our data recorder and CI integration and give ProdPerfect appropriate access to your testing environments. We may have a few questions during initial implementation. Beyond that, no effort is required other than fixing broken code based on the bug reports we provide.
- How do I install the data tracker?
- I run a Continuous Delivery team. Can you support our rapid deployments?
Yes; we’re designed to. Our tests suites run quickly because they are lean and can be parallelized. We can rapidly respond to broken tests with our Delivery Engineering staff, who are receiving telemetry about every test suite run. We have auto-generation tooling that allows us to rewrite and redeploy tests faster than a QA Automation Engineer could.
When a test breaks, we immediately investigate it. If the test is broken due to a feature change, we comment it out (communicating with your team via a shared Slack channel), and then typically update the test within 30-60 minutes (4 hour SLA). If you’re using ProdPerfect as a build-blocker this will keep ProdPerfect from blocking the next build.
- What if I’m waiting on your Delivery Engineer?
Very soon, we’ll have a push-button feature in our web portal to comment out broken tests, so you’ll no longer be dependent on a ProdPerfect Delivery Engineer to do it. This will let you move even faster and avoid being blocked outside our 16-hour support window.
- What environments can I test in?
Any environment with a prod-like version of the application, with some prod-like data, and access to the internet, can be used. This could be localhost, QA, staging, pre-prod, or production.
- What if I don’t have a staging server? E.g., I spin up temporary environments during deploy?
Work with your Delivery Engineer to help them understand your deployment process, and we’ll accommodate. If you develop temporary environments during deploys, you’ll pass us the variable that sets the URL or IP address for that temporary environment, and we’ll target that on deploy.
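Passing that variable might look like the following sketch. `TEST_TARGET_URL` and the fallback URL are hypothetical names, not a real ProdPerfect convention:

```javascript
// Sketch of handing a temporary environment's URL to the test suite from CI.
// `TEST_TARGET_URL` and the fallback URL are hypothetical names.
const DEFAULT_TARGET = 'https://staging.example.com';

// Prefer the URL the CI job injects for the ephemeral environment.
function resolveTargetUrl(env) {
  return env.TEST_TARGET_URL || DEFAULT_TARGET;
}

// In CI: TEST_TARGET_URL=https://pr-123.review.example.com npm test
const baseUrl = resolveTargetUrl(process.env);
```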
- Can we use Google Analytics or some other product analytics tool rather than the ProdPerfect data tracker to send you data?
Not yet. Product analytics vendors tend not to expose their raw metadata, which we need. Don’t worry; the data tracker is safe and lightweight.
- Do I need to give you access to my codebase or database?
Nope. We work exclusively on the browser level.
Security and Privacy
- What data does ProdPerfect collect, and how do I know it’s secure?
Here is an overview of our data collection and storage process.
- Will your data tracking snippet impact performance?
No. We work with some very large eCommerce clients and made sure there would be no impact. We avoid affecting page load by initializing only after the page completes loading, and we avoid affecting data transfer by using the Beacon API (or “Navigator Beacon”) to send the data asynchronously. For more information, check out this analysis by TestDouble.
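A minimal sketch of that pattern (not ProdPerfect’s actual tracker; the collector URL and payload shape are hypothetical): initialize after the `load` event, and ship events with `navigator.sendBeacon` so sends never block the page.

```javascript
// Sketch of a non-blocking tracking snippet. The collector URL and payload
// shape are hypothetical; this is not ProdPerfect's actual tracker.
const COLLECTOR_URL = 'https://collector.example.com/events';

// Serialize only lightweight metadata about an interaction.
function buildEventPayload(type, targetTag, ts) {
  return JSON.stringify({ type, target: targetTag, ts });
}

function sendEvent(payload) {
  // sendBeacon queues the request with the browser and returns immediately,
  // so it cannot block rendering, navigation, or unload.
  if (typeof navigator !== 'undefined' && navigator.sendBeacon) {
    navigator.sendBeacon(COLLECTOR_URL, payload);
  }
}

// Only attach listeners in a browser, and only after the page is complete.
if (typeof window !== 'undefined') {
  window.addEventListener('load', () => {
    document.addEventListener('click', (e) =>
      sendEvent(buildEventPayload('click', e.target.tagName, Date.now()))
    );
  });
}
```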
- How do you keep the data you collect safe / how will you pass our security review?
Our practice is to not collect personal information of any sort (PII, PHI, PCI, etc). You can see our Security FAQ here.
- I really really don’t want you to accidentally collect PII.
We typically work with your security team to actively test the data tracker on a non-sensitive environment. We give them full access to the tracking library and the data we are collecting from their application to demonstrate that we aren’t collecting PII. Your team can also host the data tracker to ensure it is in control of all updates. Here’s how to confirm ProdPerfect avoids PII.
What Differentiates You From Competitors?
- How do you compare to MABL/Testim/Functionize?
MABL & Testim (as well as others) are scripting or record-and-play tools that use Machine Learning to stabilize tests. They do this by making models of clickstream events that are robust through small changes. Large feature changes will still break them. Your team needs to develop, update, and expand the testing suite. You test through their application and do not own any code.
ProdPerfect auto-detects and auto-generates testing suites. When they break, we fix them. You receive, keep, and own the test code.
- How do you compare to Rainforest/Test.io/Testlio?
Rainforest (et al) is a crowdsourced manual testing service. You provide Rainforest your test cases, and they use an API to spread those tests among many testers, allowing them to manually test your application quickly. You pay per minute of manual testing, so if you test more often, you’re paying more. You develop, maintain, and expand the test cases. You test through their web application and do not own any code.
ProdPerfect auto-detects and auto-generates testing suites. When they break, we fix them. You receive, keep, and own the test code.