Updating the Joel Test for the modern world
Hands up who remembers the Joel Test? It might take some remembering, because it's a shade over 15 years old now - pre-dating even the Agile Manifesto. For those of you who don't, Joel's writing on software was among the earliest to bring the "actually, this software development lark doesn't have to be a pitched battle between business managers and programmers taking place in a soulless cube farm" sentiment into wider consciousness. The test itself was a simple series of twelve yes/no questions you could ask to gauge how effective your company was at software development:
- Do you use source control?
- Can you make a build in one step?
- Do you make daily builds?
- Do you have a bug database?
- Do you fix bugs before writing new code?
- Do you have an up-to-date schedule?
- Do you have a specification?
- Do programmers have quiet working conditions?
- Do you use the best tools money can buy?
- Do you have testers?
- Do new candidates write code during their interview?
- Do you do hallway usability testing?
Fifteen years is a long time in software development, but the test still stands up pretty well today. If your company ticks all of these boxes, it's going to be a decent place to work, with a better than average chance of producing good software. Even so, some items on that list show just how different processes and tooling were a decade and a half ago.
So what would a Joel Test for the modern era look like? Which twelve questions capture what effective looks like now? Well, here goes:
- Do you use source control?
- Can you deploy to production in one step?
- Do you make a build on every check-in?
- Do you have a bug database?
- Is your software ready to deploy at all times?
- Do you have an up-to-date project burndown?
- Do you have a well-maintained product backlog?
- Do programmers work collaboratively?
- Do you use the best tool for the job?
- Do you have test automation engineers?
- Do new candidates pair-program in their interview?
- Do you do hallway usability testing?
Firstly, look at what hasn't changed. Source control still matters. Tracking bugs still matters. Grabbing J. Random Passer-by and seeing what they make of your software is still the best way to discover usability issues.
Even the changes are mostly subtle. Some of this is down to improvements in performance and tooling: in 2000 a daily build was aspirational, now it's a bare minimum. Why wait an entire day to find out whether you have an integration problem? Similarly, the wealth of CI/CD and desired-state configuration tools means you can automate far beyond just building artifacts. Tedious, error-prone manual deployments should be just as much a thing of the past as tedious, error-prone manual builds.
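As a rough illustration, "deploy to production in one step" really just means there's a single command anyone on the team can run, rather than a twelve-page runbook followed by hand. Here's a minimal sketch in Python; the individual commands (`make build`, `make test`, `./ship.sh`) are placeholders for whatever your own pipeline actually uses.

```python
#!/usr/bin/env python3
"""A minimal sketch of a one-step deploy: build, test, and ship with one command.
The steps below are illustrative placeholders, not a real pipeline."""
import subprocess
import sys

STEPS = [
    ["make", "build"],           # compile / package the application
    ["make", "test"],            # run the automated test suite
    ["./ship.sh", "production"], # push the built artifact to the target environment
]

def deploy() -> int:
    for step in STEPS:
        print(f"Running: {' '.join(step)}")
        result = subprocess.run(step)
        if result.returncode != 0:
            print("Step failed; aborting deploy.", file=sys.stderr)
            return result.returncode
    print("Deployed successfully.")
    return 0

if __name__ == "__main__":
    sys.exit(deploy())
```

The details will differ wildly between shops; the point is that the whole sequence is scripted, repeatable, and gives the same result no matter who runs it.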
One of the more interesting changes is the idea of fixing bugs before writing new features. It hasn't gone away, but it has become part of a more holistic view of software being ready to release at all times. That still means no show-stopping bugs, but it also means being able to switch off partially-implemented features with a feature toggle, and versioning your API so consumers aren't forced to update their code the moment you do. (Forcing everyone to update at once is the classic "deploy in lockstep" problem, where a large number of applications have to be updated together to stop the whole lot crashing down in a mess of incompatibility.)
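To make the feature-toggle idea concrete, here's a minimal Python sketch. It assumes a simple in-process dictionary as the toggle source and a made-up "new_checkout_flow" feature; real systems usually pull toggles from configuration or a dedicated service, but the principle is the same: half-finished work ships switched off, so the codebase stays releasable.

```python
# A minimal feature-toggle sketch. The toggle store and feature name are
# illustrative assumptions, not a specific library's API.
FEATURE_TOGGLES = {
    "new_checkout_flow": False,  # work in progress; flip to True once it's finished
}

def is_enabled(feature: str) -> bool:
    """Return True if the named feature should be visible to users."""
    return FEATURE_TOGGLES.get(feature, False)

def legacy_checkout(basket: list) -> str:
    return f"Checked out {len(basket)} items (existing flow)"

def new_checkout(basket: list) -> str:
    return f"Checked out {len(basket)} items (new flow, still incomplete)"

def checkout(basket: list) -> str:
    # The half-finished feature can be merged and deployed, but stays hidden
    # in production until the toggle is flipped.
    if is_enabled("new_checkout_flow"):
        return new_checkout(basket)
    return legacy_checkout(basket)

if __name__ == "__main__":
    print(checkout(["book", "coffee"]))
```

API versioning works on the same principle: keep serving the old contract (say, under a /v1/ path) alongside the new one, so consumers migrate on their own schedule rather than in lockstep with your deploy.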
The other big change since 2000 is process. Back then XP and Scrum were niche methodologies, and as mentioned the Agile Manifesto had yet to be written. Big Design Up Front might seem antiquated now, but it was a lot better than the No Design Up Front prevalent at many organisations at the time. Still, we can update those questions: the specification and the schedule become the backlog and the burndown. Using these tools effectively means adjusting either scope or deadline to suit your desired outcome: a well-capitalised business may let delivery dates extend to get more features in, while a cash-strapped startup might throw out nice-to-have functionality to get to market early.
Another big change is in the way programmers work. The aspiration at the turn of the century was for someone to be able to quietly get on with their job, because the norm was for a developer to disappear off alone for months to work on a feature, only talking to the rest of the team when it was time to integrate. Now we know that an effective team works together on one or two features at a time and integrates its code constantly. The trick is to make sure it's collaboration, rather than interruption and disruption.
Skipping ahead a bit, that's also reflected in the change to the interview process. I've seen arrogant, difficult-to-work-with developers blaze through a solo programming test, but a pair-programming exercise soon shows up negative personality traits. It's a continuation of the same thinking as having candidates write code in the interview: you're aiming to give them a test that's as close as possible to the job they'll actually be doing.
So back to question 9. Tools still matter; companies shouldn't hand their developers a second-hand i3 with a paltry amount of RAM and a tiny monitor. But in the modern world there's something more damaging than cheaping out on hardware: firms that insist every piece of software must be written in Java or .NET, use the one blessed relational database, and be built internally from scratch unless you have approval signed in triplicate. Bending SQL to solve graph problems, or shoehorning C# into a job better suited to a lightweight Node.js API, is not a recipe for quality or short development times, and yet companies have policies actively mandating it. That's far more toxic than some out-of-date hardware.
Finally, I changed testers to test automation engineers. We could go even further and say "developers who know how to write automated tests", but at the current state of the art testing is still a distinct skill, and there are enough developers who don't yet do it effectively that you need specialists to guide them. We're not there yet, but we are at the stage where big rounds of manual testing for each release are a bit turn-of-the-century. Instead of a big testing department, each team has a member whose goal is software quality, and whose remit includes building the automated tooling to ensure it. Again, it's about changing aspirations: running a full automated test suite on every check-in (or at least every merge to trunk) is feasible now, so you should be aiming to do it. That shortens the feedback loop between a problem being created and a problem being identified.
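For a sense of scale, the kind of check that runs on every commit looks something like this: small, fast, and entirely automated. The function under test (normalise_email) is invented for illustration; the point is that hundreds of tests like it can run in seconds using nothing more exotic than Python's standard unittest module.

```python
# A minimal sketch of an automated check suitable for running on every check-in.
# The function under test is a made-up example.
import unittest

def normalise_email(address: str) -> str:
    """Lower-case and trim an e-mail address before storing it."""
    return address.strip().lower()

class TestNormaliseEmail(unittest.TestCase):
    def test_strips_whitespace_and_lowercases(self):
        self.assertEqual(normalise_email("  Alice@Example.COM "), "alice@example.com")

    def test_leaves_clean_address_untouched(self):
        self.assertEqual(normalise_email("bob@example.com"), "bob@example.com")

if __name__ == "__main__":
    unittest.main()
```

A suite built from checks like this is what makes "ready to deploy at all times" more than a slogan: every commit either passes the whole lot or gets flagged within minutes.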
Anyway, that's my take on how the Joel Test should look now. It's not dissimilar to the original, but it has definitely moved on from where the industry was fifteen years ago. Like the original it's not an exhaustive list of things effective companies do - there are dozens more identifiers I could add, but doing so would defeat the spirit of the original test. It's short, quick, and gives you a rough indication of how your organisation stacks up against the top firms who do all of this without even thinking about it.