What Can and Should be Automated in Software Development?

Software development is a highly creative human endeavor that requires a concentrated mix of talents: efficient design, organization, architectural strategy, coordination with key business needs, and a deep attention to detail. It’s hard. Almost anyone can sling code if they take a few online classes. But few people can develop stable, extensible software.

Many of the human skills in software development are difficult to acquire and to quantify. Some come with experience, but still only when complemented by skill. There seems to be a natural talent that cannot be taught: many recruiters still look for the mythical “10x” developer who is ten times as productive or effective as others. From these sorts of mysteries and unquantifiable factors come a great deal of “philosophy” and legend about what goes into making great software.

It’s no surprise, therefore, that there is skepticism and even apprehension around the idea of automating any part of a developer’s job. Can machines really replicate any of what great developers do? And should we even try?

What should we automate?

What’s the difference between sculpting marble and breaking rocks?

The difference is in the engagement of the mind. The tools are the same. The medium is the same. How the mind engages with the task is what matters. While both use steel to change the shape of stone, one is drudgery and the other is creative and delightful. Breaking rocks all day burns people out. Sculpting sets the soul on fire.

David Liedle and I were discussing automation the other day and I realized that this is a great analogy for what we should automate. Robots can break rocks. We don’t need humans to break rocks. We do need humans to sculpt.

Are there parts of the software development process that are more like breaking rocks than sculpting? Of course. Would we ask a sculptor to chisel their own rock out of the earth and carry it to their workshop? No: it is a terrible use of their time and does not take advantage of their unique talents.

For software development, we should automate the parts of the process that do not engage the creativity, the strategy, the cleverness, and the organizational strength of a great developer. We should automate the drudging parts that burn people out.

What can we automate?

Perhaps not surprisingly, the tasks we should automate and the tasks we can automate have significant overlap by their very nature. The kinds of tasks that lack the special, human parts that are so hard to quantify are the very ones that are easiest to break into parts and automate in turn.

Right now, and for the foreseeable future, we automate tasks that can be defined and repeated, either deterministically or probabilistically (the latter being what we think of as “AI”). Throughout human history, the tasks that have been automated are those in which the human mind is no longer creatively engaged. We have automated picking crops, forming boxes, and stacking shelves. We are beginning to automate repetitive tasks in applications using Robotic Process Automation. QA engineers automate the task of manually clicking through an application repeatedly. All of these free the human mind from drudgery so it can turn its focus towards more beautiful work.

We have seen it in other parts of the software development process: performance analysts used to probe applications for performance issues by hand; now Application Performance Management runs on its own once set up. Software deployments used to be heavily managed events; now they can be done with the click of a button. None of these tasks is what makes software engineering interesting or valuable to the human mind.

This holds true for the current wave of automation: the jobs being automated are those which have been so proceduralized by management process already that they no longer set the human soul alight. And there’s much more of the software development process that can yet be automated away from human burden.

At ProdPerfect, we seek to combat the drudgery of sitting in a room guessing what’s important to test, and repeatedly re-writing and re-tooling the same end-to-end automation tests. We’re here to fight burnout, to help software teams deal with less BS from broken code and from having to test it, so they can go build the things that help other people avoid burnout, and thrive.

As with every wave of automation, there’s some discomfort and incredulity that anything but an experienced, well-trained human can do the trick. In ten years, we won’t be able to imagine doing it any other way.

The Cost of a Bug in Prod

How much did your last bug cost you?

Everyone knows bugs in production cost revenue and are a nightmare for the engineering team. But browser-automated testing is so costly and inconsistent that many teams abandon their efforts to maintain a testing suite. At what cost?

How much QA and developer time was allocated to fixing the bug? How far was your next release pushed out? How much downtime did you experience, and how much revenue was lost while finding and fixing the bug?

IBM estimates that a bug making it to production increases your costs by 7x versus finding the bug in testing. Why hasn’t conventional browser-automated testing solved the problem?

Conventional End-to-End Testing Isn't Worth It

No matter what tool you use to maintain end-to-end testing, you’re stuck with three fundamental problems:

  1. A talented engineer needs to write every test specification by hand
  2. Browser-level tests are brittle and flaky, and highly costly to maintain (see the sketch after this list)
  3. You simply don’t know your test coverage level
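
To make the second problem concrete, here’s a minimal sketch of the kind of hand-written, browser-level test that tends to break. It uses Selenium in Python against a hypothetical login flow; the URL, selectors, and credentials are invented for illustration, not drawn from any real suite.

```python
# A sketch of a typical hand-written browser-level test (hypothetical app).
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://app.example.com/login")

    # Brittleness, part 1: the test is pinned to exact selectors. A harmless
    # markup refactor ("btn-login" becoming "login-button") breaks it.
    driver.find_element(By.ID, "email").send_keys("qa@example.com")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.CSS_SELECTOR, "button.btn-login").click()

    # Brittleness, part 2: timing. If the dashboard renders slower than this
    # timeout on a busy CI runner, the test flakes with no real bug present.
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, ".dashboard-header"))
    )
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```

Every test like this encodes assumptions about markup and timing that have nothing to do with what the user is trying to accomplish, which is why suites built from hundreds of them demand constant maintenance.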

But what if it just worked? What if your users’ behavior was building and updating your testing suite—automatically?

Sound too good to be true? Learn how you can break the cycle — schedule a demo today.

Why Browser-Automated Testing Consistently Fails

We’ve seen a lot of attempts at building browser-automated testing suites in our time. Most of them fail. The story typically goes something like this:

  • A growing team has been manually testing their application for some time
  • As they grow, they realize that manual testing is missing key user flows and, more importantly, is significantly slowing down deployments
  • The team hires QA automation engineers or assigns developers to build an automated test suite
  • The effort takes a few months
  • Over time, release schedules and firefighting get in the way of updating, maintaining, and expanding the testing suite — and the suite starts to degrade

At which point, one of two things happens:

  1. The test suite grows increasingly decrepit and obsolete until it is abandoned, or
  2. The number of tests and the effort per build balloon until running the suite takes hours and maintaining it delays your build cycle even more than manual testing did

Why does this always come to pass? Much of the problem is the natural brittleness of browser-level tests; the maintenance effort is why Google suggests they take the least of your attention. But the main reason is that when building end-to-end tests, most teams are focusing on the wrong questions entirely.

A quick Google search for “Improve QA testing” will bring you to a host of articles with titles like “___ Ways to Improve QA Testing.” I’m going to save you some time and provide the highlights here:

1. Incorporate testing early in the process

2. Outsource

3. Don’t forget to do it

4. Use this tool or that tool for test case management or execution

There’s a whole lot of advice on how to test your application, especially at the browser level. But bugs don’t get through because you don’t know how to test. Bugs get through because you don’t know what to test. Testing suites get bloated and broken because you don’t know what to test, so you end up building hundreds or thousands of tests in the hope you’ll cover what you need. Tools can’t fix that for you, and outsourced firms can’t fix that for you, no matter how much money you throw at them.

Figure Out What to Test

This may seem obvious, but that’s precisely the problem: most teams think figuring out what to test is fairly self-evident, or that it simply requires an engineer’s intuition; they’ll test what they think is important, flaky, or bug-prone.

But what should you test to make sure your application won’t break when your users use it? You should test what your users are actually trying to do.

In reality, your users tell you everything you need to know about what to test; you just need to start listening. No team succeeds with 10,000 end-to-end tests (and yes, we’ve seen 10,000), spitballing to try to catch everything. The teams that win, the ones who keep their automated testing suites maintained, relevant, and effective, are those who test functionality along real user behavior.

Your product team is already using analytics to understand what your users are doing and to refine workflows that improve the user experience. They’ve been doing it for years. Why isn’t engineering using the same analytics to build better tests?
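
As a sketch of what that could look like, suppose you have a CSV export of page-view events from your analytics tool with user_id, timestamp, and path columns (the file name and field names here are hypothetical, and timestamps are assumed to sort correctly as strings, e.g. ISO 8601). Grouping events into per-user journeys and counting the most common ones surfaces the flows most worth testing:

```python
# Sketch: mine an analytics export for end-to-end test candidates.
# "pageview_events.csv" and its column names are hypothetical.
import csv
from collections import Counter, defaultdict

sessions = defaultdict(list)
with open("pageview_events.csv") as f:
    rows = sorted(csv.DictReader(f), key=lambda r: r["timestamp"])
for row in rows:
    sessions[row["user_id"]].append(row["path"])

# Treat each user's ordered journey as one flow,
# e.g. "/login > /cart > /checkout".
flow_counts = Counter(" > ".join(paths) for paths in sessions.values())

# The most frequent flows are the first candidates for end-to-end tests:
# a bug there hurts the most users.
for flow, count in flow_counts.most_common(10):
    print(f"{count:6d}  {flow}")
```

A real pipeline would split a user’s events into sessions by time gaps rather than lumping everything per user, but the principle is the same: let observed behavior, not guesswork, rank what gets tested.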

The Path Forward

The first step to making this transition is to evaluate your current testing suite against your users’ common flows.

Which of your current automated end-to-end tests are covering actual user behavior, and which are irrelevant? Where are your gaps? ProdPerfect can help you quickly find the answers. We’ll tell you what your users are doing and what your tests are doing, so you can close the gaps and cut the chaff.
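
As a rough sketch of that evaluation, reusing the hypothetical flow_counts from the earlier snippet: keep an inventory of the flows your current end-to-end suite actually exercises, then compare the two sets to find both the gaps and the chaff.

```python
# Sketch: compare tested flows against observed user flows.
# tested_flows is a hand-maintained, hypothetical inventory of what the
# current end-to-end suite covers; flow_counts comes from the earlier sketch.
tested_flows = {
    "/login > /dashboard",
    "/login > /settings > /billing",
}

observed_flows = set(flow_counts)

# Gaps: high-traffic flows the suite never touches.
gaps = [flow for flow, _ in flow_counts.most_common(20)
        if flow not in tested_flows]

# Chaff: tests covering flows real users never take.
chaff = tested_flows - observed_flows

print("Untested common flows:", gaps)
print("Tests with no matching real traffic:", sorted(chaff))
```

Exact string matching on flows is a simplification; in practice you’d normalize routes and tolerate partial overlaps. But the principle holds: test where your users actually go.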
