Bias for action and agility

What has linked all of the high-performing teams and organisations I've worked for or with? That title is a bit of a clue: it's a bias for action.

What this means is they believe the way to solve a problem or to improve things is to do something. A team with a bias for action assumes that on balance, doing things is better than not doing things. This may sound a bit reductive, so let's illustrate it by considering the alternative: a bias for inaction.

Inaction

A bias for inaction is the belief that doing things is usually harmful, and therefore the organisation should seek to do or change as little as possible.

Organisations with a bias for inaction tend to have a lot of people who describe their jobs with the phrase, "I stop...". I stop high-risk changes being made to production. I stop money being spent on unnecessary projects. I stop effort being wasted on problems we already know how to work around. In these organisations, there is always a good reason to not do something. This new market trend is a passing fad. Our customers have tolerated this error rate for years. It's a nice idea but we don't want to spend money on it.

Often this attitude comes from what feels like sound evidence. Doing things has been bad for the organisation. Changes have failed and taken down production systems. New products have been expensive and underwhelming. New processes have only made things worse. In this environment it's natural that the organisation starts to fear action, to employ the army of people who say, "I stop the bad things happening by stopping things from happening."

An organisation won't undergo immediate and catastrophic failure if it takes this approach. By definition: you can't make catastrophic decisions if you don't make any decisions! But... I've never found a team or organisation in this state that was high-performing. Instead, I've found teams and organisations whose best possible outcome was no better than merely muddling along. Inaction-biased organisations are the ones where all the products are dated, all the internal tools are riddled with irritating problems that never get fixed, waste is rampant, and change only happens under duress when regulators or market conditions force it. The failure isn't instant: it's long, drawn-out, and slow - the consequence of muddling along in the same direction while the market around you changes course.

The worst thing is they're often aware of this state. They want things to be better. They want to feel competitive, to drive their market rather than be a bystander in it. But at the same time, they fall back on that standing excuse of the inaction-biased: if we do something, it might fail.

Action = Failure

The idea that in doing something you may do the wrong thing isn't unfounded. It's common to hear stories of investors or economic organisations whose predictions performed worse than random chance. To put a number on it, slightly under 47 percent is typical if the penalties for false negatives and false positives are identical. (If the penalties are not identical then some very interesting things happen - I'll get to that in a little bit.)

If we want to be an action-biased organisation, our default state is to always be doing things. We might think they're good things to do, but the chance that any one of them works out the way we expect is worse than the chance of a coin coming up heads. Assuming we're average at predicting outcomes, there's slightly more than a 53 percent chance that any one thing we do will fail.

However, as an action-biased team, we're not going to do one thing and walk away from it shaking our heads when it doesn't work out. We're going to keep doing things. The probability of us making the wrong call once is 53-and-a-bit percent, but the probability of that happening twice is a shade over 28 percent. If we do three things, even our rather tragic worse-than-a-coin ability to predict outcomes still leaves us with around an 85% chance that at least one of those things does work out.

The challenge here is that you need to survive long enough to devise and carry out those three distinct actions, and there's still a 15% chance you won't have got it right at that point. If you're at the low end of forecasting accuracy (in a study of investment managers, this low end was around 22%), you'd need to take ten actions before your chance of having at least one success topped 90 percent.
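The arithmetic behind those figures is nothing fancier than "one minus the chance of failing every single time". Here's a quick sketch in Python, assuming each action is independent and the per-action success rate stays constant (both simplifications, but they're the ones the numbers above rely on):

    def chance_of_at_least_one_success(success_rate, actions):
        # Probability that at least one of `actions` independent attempts succeeds.
        failure_rate = 1 - success_rate
        return 1 - failure_rate ** actions

    # Average forecaster: roughly 47% per-action success rate.
    print(chance_of_at_least_one_success(0.47, 2))   # ~0.72 (failing both: a shade over 28%)
    print(chance_of_at_least_one_success(0.47, 3))   # ~0.85

    # Low-end forecaster from the investment-manager study: roughly 22%.
    print(chance_of_at_least_one_success(0.22, 9))   # ~0.89 - not quite there
    print(chance_of_at_least_one_success(0.22, 10))  # ~0.92 - finally tops 90 percent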

Does this suggest we should avoid doing anything? The inaction-biased would certainly say so. But that's not the whole picture. The choice to take action or not isn't simple and binary. What this result suggests is we should avoid doing anything big.

If the scope of your actions is multi-million pound projects, year-long plans or "bet the company" decisions, then you're doomed. At some point your ability to predict things will fail you, and if the things you're doing are big and expensive it will fail you in a big and expensive way. It's one of the ways organisations become inaction-biased: they started out filled with entrepreneurial drive, got lucky on a few of those big "bet the company" moments, then the optimism crashed down hard as the big, expensive failures started to rack up.

If the scope of your actions is small, and the impact of failure low, then you can do a lot of actions. This means you have a much better chance of getting to the point where one of them pays off.

Agility: Smaller Actions, More Often

It's easy to say that actions should be small, but hard to do.

There are two problems to solve:

  • Finding the sweet spot of low effort, high impact.
  • Managing risk.

Consider a typical example of a problem a team might have: a lengthy manual build, test and deployment process with a high risk of change failure. The kind of thing an inaction-biased team would commonly live with, and maybe even use as justification for inaction in other areas.

There are a lot of different approaches you could take here to improve things:

  • A project to automate every aspect of the process, from build to deploy.
  • A project to rewrite the entire codebase in a different stack which claims to offer easier manual deployment.
  • Writing a shell script for a simple, well-understood part of the process.
  • Writing a shell script for a complex part of the process that is always done inconsistently.

Automating everything is a really tempting option. It has a huge impact, and would potentially solve all of our problems. But it's also a lot of work. We have to stop the world to spend several months writing build and test and deployment code. We might switch it all on only to find the curse of prediction accuracy has struck: a 53% chance we made things worse because our assumptions about how our environment works and what needed testing were wrong.

Rewriting the codebase? That's an even bigger project... and yet it has almost zero impact. We'd spend months on it to end up with the same situation of a manual build, test and deploy process. With a 53% chance that our prediction was wrong and the new stack didn't improve deployment times in the slightest.

Writing shell scripts, on the other hand - those are both small tasks. There's still only a 47% chance either one helps with the problem, but at least we've only lost hours rather than months. However, it's not enough merely that a task is small. We need to consider the impact. Doing lots of small things might give us a lot of opportunities to make things better, but if they're all low-impact, things aren't going to get better by any particularly large amount.

If our problem is a long and error-prone deployment process, automating one of the quick, easy and reliable parts isn't doing much to solve that problem, no matter how temptingly easy it is. Automating the bit that's always done inconsistently, though... we might be wrong (hell, 53% odds we are wrong) but that feels like it's going to have a huge impact on reliability.

So we've picked a thing to do. We also need to manage the risk of doing that thing.

We know that full automation and rewrites are big risks, but we've already discounted those because they're big effort and one of them didn't even give us big impact for all that work. But even with something as simple and quick as writing a shell script, there are a few different options with very different risk levels:

  • Stick it in production without testing it.
  • Run it in staging once, then stick it in production.
  • Make sure it's reliable in a production-like staging environment, then promote it.
  • Only use it on development workstations for a month, then staging for a month, then production, restarting the process on any change.
  • Never use it in production. It's too dangerous.

Yes, I've seen all five of those options in the wild! Thing is, there's a U-shaped curve of risk here. At the cowboy end, your risk of breaking your production system is too high. But at the inaction-biased end, the risk of never getting to see whether your idea worked is too high. If it takes you 3-4 months to see an action through to the point of knowing whether it worked, you're not going to be doing many actions. If you don't do many actions, even if they're small, you're still back at the point of, "everything we tried last year failed, maybe we shouldn't try anything at all."

The correct option is at the bottom of the curve: the one where you get stuff delivered quickly without resorting to cowboying it up any old way. You want to realise the benefit, but you need to assume failure is a possible outcome and plan for it.

Action-biased teams

This all leads us to the mantra of a successful, high-performing team with a bias for action:

  • Do small, high-impact things.
  • Do lots of them.
  • Don't expect them all to work.

It's closely aligned with a Build, Measure, Learn approach, especially in the sense that if you expect some things to fail, you can also expect to learn from those failures.

What happens if you don't?

Inaction bias: setting yourself up for failure

I mentioned earlier that humans tend to predict things with a slightly worse success rate than a coin flip, providing the consequences for a false positive and a false negative are identical. What happens if that isn't the case?

The prediction of recessions gives us an example of this. A false negative - failing to predict a recession - has limited consequences. Extenuating circumstances can always be found, and it's accepted that recessions are unexpected and hard-to-forecast events. But the opposite case - predicting a recession that does not occur - is viewed less leniently. Economists gain reputations as "doomsayers" or "stopped clocks", especially when going against consensus.

This is not a big difference; at most, you could say it's a minor preference for positive economic forecasts. But it does something shocking to prediction accuracy. In this area, it falls from around 47 percent to barely over 1 percent.
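You can see why such a small asymmetry does so much damage by looking at the incentive it creates. Here's a toy sketch of that incentive (hypothetical numbers, not the model behind the figures above): a forecaster should only call a recession when their confidence clears a threshold, and that threshold is set by the relative cost of the two kinds of mistake.

    # Toy illustration with made-up costs. Predicting an event is only worth it when
    #   (1 - q) * cost_false_positive < q * cost_false_negative
    # where q is the forecaster's confidence the event will happen. Rearranging gives
    # the threshold below.

    def call_threshold(cost_false_positive, cost_false_negative):
        # Confidence above which predicting the event has the lower expected cost.
        return cost_false_positive / (cost_false_positive + cost_false_negative)

    print(call_threshold(1.0, 1.0))  # 0.5 - symmetric penalties: call it whenever you're more than 50% sure
    print(call_threshold(1.5, 1.0))  # 0.6 - a modest "doomsayer" penalty raises the bar

Even a modest extra penalty for crying wolf means only the most glaringly obvious recessions ever get called, and the rest go unpredicted - which is how a minor preference turns into a near-total failure to forecast the events that actually happen.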

This is important. In an inaction-biased organisation, there is a big difference between false positives and false negatives. If you do nothing and something goes wrong as a result, the damage to your reputation is small. Doing nothing was the right thing, the safe thing. It couldn't be helped. But if you do something and it goes wrong: that was risky and dangerous. You shouldn't have been trying to change things. You're in trouble now.

Is it any wonder that such organisations are hopeless at predicting the outcomes of anything which requires action? Is it a surprise that history is littered with previously incumbent businesses who fumbled every attempt at reacting to change in their marketplace, even when the correct approach was blindingly obvious?

Bias for action and agility

Effective agile, action-biased organisations fail frequently, but despite this frequent failure they have better overall outcomes in my experience. This is due to a combination of positive approaches:

  • Doing smaller things more often: increasing the number of opportunities for success.
  • Choosing impactful things: increasing the effect of success.
  • Accepting negative outcomes and planning for them: decreasing the effect of failure.
  • Supporting action, even failed action: avoiding bias toward bad predictions.

You need both the agility and the bias for action. Without agility and the desire to do smaller, more frequent things you either go out of business after one too many "bet the company" gambles, or survive long enough on an early run of luck to become another inaction-biased organisation discouraged by a string of large and expensive failures.

(I maintain combining agility and inaction is not really a possible state, as doing nothing is inherently a large, long-duration activity which will not deliver anything you can measure or learn from.)

And if nothing else, go ahead and make that small change you've been putting off. Even if it doesn't succeed, it's worth trying.