Specificity

I've become that annoying person who asks, "you say it takes a long time - but how long exactly? Which part consumes the most time? How did you measure?"

There's a reason. For the past year or so I've been working at a seed-stage startup, the earliest stage at which I've ever joined a company. This means a lot of learning. Being Head of Engineering at an older company is about keeping a steady hand on the wheel, steering toward better destinations while not exceeding the handling capabilities of the vessel. At a startup, it's more like sitting in a forest of maps and charts and thinking, "hang on, do we even have a vehicle yet? Are these even the right maps?"

One thing I've learnt is that, in this context, specificity is not only valuable but necessary.

You probably don't realise how much normal corporate discussion at a large company lacks specificity. Think about it. How many times do you say, "it's a lot of work", "it'll be quick", or "oh yes, there were some bugs in it"? We assume a common frame of reference for "a lot" or "quick". It doesn't matter what the bugs are or how they came to be there.

Specificity is the art of saying, "this is similar to Project A, which took us 4 weeks. We expect it to take the same amount of time, if that assessment of similarity is correct". Note that this is not the same thing as the typical corporate estimate, where only the duration would be given, without the context. In this example, we have:

  • The evidence used to make the assumption (duration of Project A)
  • A specific, unambiguous datum (4 weeks)
  • How the datum is transformed (this is similar, we expect the same time)
  • The invalidating criteria (what could invalidate the transformation)

This invites discussion. Did we forget Project A had 3 weeks of post-launch fixes? Was half of the 4 weeks contract negotiation? If it is similar, can we reuse parts or did we learn things that will speed us up? How do we test that the new project is indeed similar?
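To make that shape concrete, here is a minimal sketch of what such an estimate could look like if you wrote it down as a structure. The field names and example values are mine, invented for illustration; this isn't a real tool we use.

    from dataclasses import dataclass

    @dataclass
    class Estimate:
        """A specific estimate: not just a number, but where the number came from."""
        evidence: str        # what the estimate is based on
        datum: str           # the specific, unambiguous figure
        transformation: str  # how the evidence becomes a prediction
        invalidated_by: str  # what would tell us the prediction no longer holds

    project_b = Estimate(
        evidence="Project A, delivered by the same team",
        datum="4 weeks elapsed, start to launch",
        transformation="Project B looks similar in scope, so we expect the same duration",
        invalidated_by="Project B needs an integration that Project A did not",
    )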

These questions are important to me. When you are small, opportunity costs are large. I cannot afford to spend 14% of the company's entire existence building something that will never be used. Imagine a bigger company assigning a division of 80 programmers to work on something for two years, and never using it. (OK, I'm aware this happens a lot.) The what and the why are critical. When you handwave the details, you end up spending £10k to chase £1k of revenue, or 5 days writing a tool to save 5 minutes of time.

Another practical example: evaluating recruitment platforms. I could say, "looks good and the salesperson was nice". Or I could say, "I saw a filter with 100 candidates, 25% of the ones I saw on page 1 were people I'd invite to interview, and their claimed response rate is 70%, so I estimate we could get 17 interviews from this platform." Now I have the success criteria for an evaluation, and a notional cost per lead for comparison. If I'm specific with everything I do, including how much time I spend, I can then sort my platforms by cost per hire and concentrate on the most cost-effective.
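As a rough sketch of that arithmetic in code, using only the figures from the quote above plus a made-up platform cost for the cost-per-lead idea:

    # Back-of-envelope funnel for one recruitment platform.
    # The 100 / 25% / 70% figures come from the example above; the cost is a placeholder.
    candidates_in_filter = 100
    invite_rate = 0.25            # share of profiles I'd actually invite to interview
    claimed_response_rate = 0.70  # the platform's own claim

    expected_interviews = int(candidates_in_filter * invite_rate * claimed_response_rate)
    print(f"Expected interviews: {expected_interviews}")  # 100 * 0.25 * 0.70 -> 17

    platform_cost = 5_000  # hypothetical subscription cost, purely for illustration
    print(f"Notional cost per interview: £{platform_cost / expected_interviews:.0f}")

Trivial arithmetic, but writing it down means the assumptions (the 25%, the 70%) become the things we argue about, rather than whether the platform "looks good".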

I was reminded of this need for specificity again while doing a post-mortem after losing a day and a half to orders not showing up in an internal tool. At first we claimed a "corner case" which triggered "a rare bug". I've written that same write-up many a time. Many a time I've also seen that corner case continue to rear its head in other places, making that rare bug rather common. Especially when concurrency is involved.

Specificity made us go back and change that assessment. A "missing step which is expected to happen when a parcel is collected" causes "a join to a data source which is expected to always contain the parcel collection step" to exclude the affected items. (The real version is more specific; I've anonymised it for public consumption.) Doesn't this feel a lot more likely to occur than a corner case combined with a rare bug? Going further, we've highlighted a lesson that prevents a whole class of errors: don't assume that anything requiring manual action will happen reliably or in the correct order.
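To illustrate the shape of that failure with a hypothetical sketch (pandas here, not our actual code or schema): an inner join encodes the assumption that every order has a collection step, and silently drops the ones where the manual step never happened.

    import pandas as pd

    # Hypothetical data: order 2 never had its collection step recorded,
    # because that step depends on someone scanning the parcel by hand.
    orders = pd.DataFrame({"order_id": [1, 2, 3]})
    collection_steps = pd.DataFrame({"order_id": [1, 3], "collected_at": ["09:02", "09:15"]})

    # The inner join assumes every order has a collection step.
    # Order 2 simply vanishes from the result - no error, no warning.
    visible = orders.merge(collection_steps, on="order_id", how="inner")
    print(visible)  # only orders 1 and 3

    # Making the assumption explicit: a left join keeps the row and exposes the gap.
    with_gaps = orders.merge(collection_steps, on="order_id", how="left")
    print(with_gaps[with_gaps["collected_at"].isna()])  # the orders that "disappeared"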

What I realise retrospectively is how much time I lost to codebases which encoded these concepts as a series of unconnected hotfixes, because investigations were written up as "rare bugs" or "unexpected corner cases" without the detail. How many teams suffered gruelling on-call rotas tending to a system that "just does that sometimes", without anyone asking "does what?" or "why?" afterwards? Or rather, I would realise it, had I been specific at the time instead of recording it all with no greater precision than, "I lost a lot of time this week to code structure and unexpected bugs".