I hate “recommendations.”
In my time as a consultant at Stroud, “recommendation” was a four-letter word. And for good reason! If your solution wasn’t implemented while you were on-site with the client, it was a garbage solution (otherwise known as a recommendation).
These days I talk to a lot of engineering and QA leaders who are frustrated with their testing technology and results, and they ask my opinion on some of the recommendations they read.
So far, I’ve been quite disappointed with the information being peddled to them, and I’m happy to share my opinion on how it’s missing the mark.
Take this example from The Cost of Poor Software Quality in the US: A 2020 Report:
It discusses the $2.1T opportunity in the US alone (that’s the GDP of freakin’ Italy) to improve testing and quality across software development. Big opportunity; how do I achieve said improvement? According to the report, the key enablers for achieving the highest levels of cost, schedule, and quality performance are:
- A well-defined yet adaptable development process
- Excellent estimation methods
- Project management discipline
- Excellent staff skill levels
- Quality vision
- Customer satisfaction focus
- TQM management culture
- Defect prevention
If I had provided a client with ^^ that ^^ list when I was a consultant, I’d have been promptly fired, and for good reason. Heck, I would have fired myself. The general nature of the report’s action items (even when it does elaborate) is vague, mind-numbing, and ultimately useless.
As you dig deeper into these recommendations, it becomes clear the proposed “solution” is building a unicorn organization around getting testing right. It would require you to upend your culture and processes, all while building a world-class team.
It’s hard enough to make your core offering successful. It’s patent insanity to ask every software team on earth to build a world-class organization around QA independently.
Worse, these recommendations are not substantively different from a report you may have read from 2010. For over a decade, the “best practices” in QA have not changed in any meaningful way. There have been various tweaks and new dogmas, but all center around the idea that getting QA right requires a ton of money, work, talent, frustration, grind, patience, and luck.
Imagine if you read this report and these recommendations regarding any other aspect of your business – your infrastructure, your CI pipeline, or your CRM data management. You’d never accept them! You would recognize these are ancillary processes meant to support your team’s core focus. The right move would be to make sure you implemented the right product, got the job done, and moved on to more meaningful work.
Why can’t we do this with QA?
The crux of the crazy
The reason all of this feels so insane is that it’s just a shotgun blast of fairly hand-wavy activities, none of which can be measured or prioritized. There’s no data telling you whether what you’re doing works.
Let’s review some of the more specific recommendations:
“We recommend that software shops:
- Avoid low-quality development practices and adopt secure coding practices.
- Recognize the inherent difficulties of developing software and use effective tools to help deal with those difficulties.
- Ensure early and regular analysis of source code to detect violations, weaknesses, and vulnerabilities.
- Measure structural quality characteristics.
- Focus on the evaluation of included components (e.g., open-source) and platforms that may have unknown weaknesses or vulnerabilities.
- Learn more about the typical vulnerabilities and exploitable weaknesses attributable to certain programming languages.
- Use best-known practices for managing a legacy system – especially when it comes to overcoming the loss of understanding and knowledge of how the system works internally. Benchmarking health status is a good place to start.
- Avoid unsuccessful projects by not creating arbitrary schedules. Pay attention to defined quality objectives and measure against those objectives throughout the project lifecycle.
- Invest smartly in software quality improvements based on CPSQ numbers in hand.
- Focus on the different results of good vs. poor software quality in your shop and relevant benchmark organizations.”
Again, I ask – how are these solutions?
You can’t pick up and just do any of this tomorrow. And even if you did, by some chance, run through the list and implement the ideas to the best of your team’s ability, there’s no way to measure the effectiveness of these general activities. The report makes QA feel like a black box: you drop in a ton of effort and maybe, in some hard-to-measure way, results improve.
There are no numbers.
There are no hard objectives.
There is no measurable data.
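For contrast, here’s what a measurable quality objective could look like. This is a hypothetical sketch, not anything from the report: the metric name (“defect escape rate”) and the numbers are illustrative, but the point stands — a real objective is a number you can compute and track per release.

```python
# Hypothetical example of a measurable QA objective: defect escape rate,
# i.e., the fraction of defects that slipped past QA into production.
# The function name and thresholds are illustrative, not from the report.

def defect_escape_rate(found_in_qa: int, found_in_prod: int) -> float:
    """Fraction of all known defects that escaped QA and reached production."""
    total = found_in_qa + found_in_prod
    if total == 0:
        return 0.0  # no defects recorded; nothing escaped
    return found_in_prod / total

# Example: 40 bugs caught in QA this release, 10 escaped to production.
rate = defect_escape_rate(found_in_qa=40, found_in_prod=10)
print(f"escape rate: {rate:.0%}")  # a concrete number to trend release over release
```

A team could set a target (say, under 10%) and see, quarter by quarter, whether any QA investment actually moved the number. None of the report’s recommendations come with anything like this.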
Breaking out of the madness: TaaS
The reason this whole situation has been garbage forever is that the technology hasn’t been up to snuff. QA technology’s stagnation is largely due to laziness on the part of the venture-backed devtools ecosystem – to radically improve testing, you need to make it something different from today’s “throw expensive humans at it and let them grind away”-type tools. Taking testing off people’s hands ultimately requires machine intelligence, and lucky for us, that intelligence is here.
QA and testing are ready for their giant leap, and the way forward is to break out of the madness – stop layering new tweaks, processes, and procedures on top of old ideas. The solution is to turn testing into a software-powered service, period. That’s not a recommendation; it’s a fact.
Fundamentally, there is no reason QA testing shouldn’t go the way of cloud storage, infrastructure, DevOps, application performance monitoring (APM), analytics, and all sorts of tools for developers.
Over the past ten years, most non-bespoke functions supporting application development have become services: IaaS, CI, and APM are a few. So there is no reason Testing as a Service (TaaS) shouldn’t be the norm. At first, many technology leaders will balk at the idea because they feel testing is something only humans can do due to how customized testing needs to be for each individual application. But these same attitudes existed for infra, deployment tech, and monitoring tech when they were new (remember when you had to instrument your product analytics toolsets manually?).
Early adopters drove the popularization of all of the services now universal in the software industry. Testing as a Service will be the same – adopted first by tech leaders with more excitement about building the future than fear of letting go of what never worked.
This post is, of course, selfishly motivated. ProdPerfect is building this future of TaaS. It’s real, it works, and we’re on the hunt for those early adopters. We work with radical technology leaders who want to move fast and focus on building product. If you read this and got that itch, let my team know you’d like to speak to me personally when you reach out.
I look forward to chatting.