Data/object anti-symmetry broke me.
The reason for this breakage was established a year or two ago. I had the fortune to work with some very smart people, who were very good at software engineering. As someone who'd had to fight for every slight increment toward clean coding, testability or product focus and then fight again to keep it from sliding backward, it was inspirational to work with a team who Got It in every way. They were pairing, they were debating good OO design, everything was repeatable and well-factored and whatever else, and yet... something was up.
A team who were consistently performing somewhere around the 80-90% level were failing to achieve useful results, to the point that their project got canned. At the time I put this down to failures endemic to an organisation that is on the transformation journey rather than at the end of it, and lamented losing the opportunity to work with that team.
I thought this a plausible excuse, but the experience must have cracked the foundation beneath my previously unshakeable belief that if you read all of the books and did things the right way (without bastardising them into some bizarre cargo-cult practice) then good things would inevitably follow. Last week this flawed foundation got shaken by a conversation about data/object anti-symmetry.
I realised we were in the same situation. We were discussing whether a Well Engineered idea about DTOs we'd come up with fit into our Well Engineered codebase, or if we needed a different Well Engineered idea that respected encapsulation, or... it didn't really matter what Well Engineered idea we came up with, because the actual problem was we weren't delivering anything which made us feel good about ourselves as a business rather than merely feeling good about the quality of our engineering.
It took me a while to get to this realisation. First I went and read a bunch of stuff about the DTO pattern. Martin Fowler says, "this is great, except when it isn't". Yegor Bugayenko says it's not great, but then his solution involves a non-obvious side effect so I'm not keen on that either. I looked in Clean Code but that gets about as far as Chapter 6 going, "well this is a problem, but ¯\_(ツ)_/¯." You can swap between REST and gRPC, but that's really just swapping between making the DTOs yourself and having some external tool build them for you off the back of a .proto file. I guess it's okay to have impure OO semantics if it was a tool's fault?
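For the uninitiated, the anti-symmetry at the centre of all this is easy to sketch. Here's a minimal illustration (the names and the banking example are mine, not from any real codebase): a DTO exposes its data and has no behaviour, while a proper object hides its data behind behaviour, and the awkwardness appears the moment the object has to cross a service boundary.

```python
from dataclasses import dataclass


# The "data" side: a DTO exposes its fields and has no behaviour.
# This is what ends up on the wire between services.
@dataclass
class AccountDto:
    owner: str
    balance_cents: int


# The "object" side: an Account hides its state and exposes behaviour.
class Account:
    def __init__(self, owner: str, balance_cents: int):
        self._owner = owner
        self._balance_cents = balance_cents

    def withdraw(self, amount_cents: int) -> None:
        if amount_cents > self._balance_cents:
            raise ValueError("insufficient funds")
        self._balance_cents -= amount_cents

    # The awkward part: the moment this object needs to cross a
    # service boundary, you flatten it back into the DTO above,
    # breaking the encapsulation you just carefully built.
    def to_dto(self) -> AccountDto:
        return AccountDto(self._owner, self._balance_cents)
```

Neither half is wrong on its own; the friction is that a distributed system forces you to keep both and shuttle between them.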
So maybe OOP is not the right paradigm for a distributed programming model in which what you're doing is passing bags of data around. Functional programming? No, because I run into a similar problem: distributed computing gives me an anaemic domain model at the interface between services, but it also gives me an anaemic functional model: higher-order functions are not first-class citizens in this situation either.
About the closest model I have is good old procedural programming, but that feels like a cop-out as well. I'm saying I have no good abstractions over the problem, so I may as well instruct the computer exactly what to do using good old-fashioned C and be happy with it.
Do I care?
This is where we take the leap into unknown territory. Does it matter? If I solve the data/object anti-symmetry problem in some beautiful and elegant way, does it help my business at all? (Assuming my business is not selling esoteric and somewhat theoretical books on how programming is to be done). Do I have any competitive advantage over people using QBASIC on Rails or implementing their services as a monolithic Bash script?
This is getting to the root of the problem. Code does not exist in a perfect theoretical vacuum; it exists to solve problems. There's no point in me writing beautiful, elegant code if the problem has been solved and the market dominated by awful code while I'm still figuring out my ideal solution structure.
I feel I've bashed on Well Engineered Code for long enough that you've either given up or become interested, so let's wind back a bit and remind ourselves where all this navel-gazing about the Platonic Object came from.
See, a couple of decades ago programmers mostly Did Stuff. You were left to your own devices, given a problem and a completely arbitrary deadline, and told to connect the two. You opened up Visual Basic 4.0 32-bit Edition and hacked together something that solved the problem immediately in front of you, and your project manager was mightily pleased.
Then you had a second problem, and you opened up the same tool and Did Some More Stuff, a bit slower this time because you were working around the shortcuts and assumptions you'd previously made, and you'd solve the new problem. Your project manager would be pleased, but maybe a little less mightily because things had gone slower and there were bugs.
This would continue until you'd built The Monolith, and development would slow to a crawl because the entire thing revolved around an extremely finicky god-object singleton called GregsSessionThing which everything accessed by reflection, and there seemed to be a data type called srting which was actually a densely-packed struct of 48-bit integers, and...
The reason for this is that once you built something it was done: it was capitalised on a balance sheet, it was now time to sell it, and all incremental revenue would be pure profit. Occasionally you'd go back and add new features, but this was all new work (to be capitalised and sold and so on until sales==profit) and going back over what you'd already built was verboten, as it'd cut into the margins and what had already been capitalised as software build.
If you're under 30 this may sound like an insane strawman so let me kill those assumptions: I worked for companies where once something was built it was inviolate, the source control was locked and the only thing you had access to was the compiled DLLs. This shit happened. It was A Thing.
The rise of software engineering
Domain Driven Design and Clean Code and all of those things came out of this environment, as a welcome reaction to it. They were successful. Incredibly successful, to the point where if you can talk convincingly about sound engineering principles and applying manufacturing techniques to software you'll end up part of someone's transformation gig somewhere.
How many of these ideas stand on their own, and how many are merely a reaction to what went before?
This is my problem with the accepted wisdom that Well Engineered Software is the most important goal. It feels like it doesn't stand alone. It stands as a reaction to the world where you were left alone to build things, but once you'd built them they were inviolate.
Given that world, of course you'd spend days theorising on whether your object model was the best it could be. There would be genuine value in creating the one solution structure to rule them all, because you're going to be living in that thing for the next decade. You need to worry a lot about whether the interface you're building is composable in all the ways you might need to compose it, because you might be in a situation where all you've got is the compiled binary.
It's a good idea based on the assumptions which underpin it. But do those assumptions hold true?
Consider the following statement:
"There's some stuff we'd like to fix before the next release."
Now here's a rough aggregation of the responses I've had to that over time:
- 2005: "It's already built, you're going to have to work around it."
- 2010: (sigh) "Why can't developers ever get it right first time?"
- 2015: "It's inevitable, make sure there's still some productive work this sprint."
- 2019: "Why are you even telling me this?"
There is a trend, in significant part thanks to the Well Engineered Software movement, to trust that the people who build software know what they're doing and therefore know when it needs servicing. What that means is these days, up-to-date organisations are mostly okay with the idea you might need to go back and rework something you built a while ago. Nobody is locking away your source code and forcing you to interact with only your compiled libraries these days.
(If your organisation is that far behind, I can't help you there. May I suggest a rousing bin fire as a positive action?)
So I feel that Well Engineered Software was a step, a good and essential step, but still a step. Not the end state.
Rapidly Refactored Iterations
A thousand or so words ago I went deep into one of these philosophical arguments about getting object oriented programming exactly right and asked, "Does it matter?"
If we can go back and refactor our code? If, even more than that, the people we work with are used to us reworking the things we've written, expect us to rework the things we've written?
No! No, it does not matter!
I put a cowboy hat on the article where I got closest to this, but now that I've had nearly three years to refine the viewpoint, I don't think it's typical cowboy bullshit. I'm not saying don't test. I'm not saying abandon your principles entirely and replace your fancy mechanical keyboard with a hand-cranked spaghetti roller.
I'm saying that when you down your tools to get into one of these theoretical arguments about whether what you're about to do Breaks OOP (or any of the other Good Engineering concerns you can worry endlessly about), pick those tools back up again and do whatever works for you, in that team, at that point in time.
Within a week it will annoy you. It will cause you pain. So do what solves that pain for you, in that team, at that point in time.
A week later it will annoy you again. So you do what solves that pain... only this time you're starting to understand the problem: you know whether it's "non-OOP problem in OOP space" friction or something else entirely that you'd mistaken for a theoretical issue. You can come up with an okay solution that will probably last a month or two, and it probably won't take you much more time than the first "whatever works for us right now" pass did.
I need a name for this, so I'm calling it Rapidly Refactored Iterations. It's the idea that building software is a hard problem. It's the idea that you won't get things 100% right the first time. Most importantly, it's the idea that arguing over whether something is theoretically germane to your paradigm or represents Good Engineering is at best going to boost your chances a couple of percent, whereas coming back to it later when you know what problems it's causing you is going to be a good 25-50% kick in the right direction. Those 25-50% kicks add up fast, and put you in a far better place than all those 2-3% arguments about theoretical perfection.
What does this entail?
I feel now I'm on the road to unbreaking myself. Data/object anti-symmetry is not a big deal because data/object anti-symmetry largely doesn't matter, and if it does matter then it will matter to me in a way where the solution is obvious.
More importantly, I don't feel like one discussion has suddenly invalidated the way I've been working. Instead, I realise I've stumbled across an answer that works for me while striving for something else, not even knowing I was already at the answer.
So let's dive into this idea of rapidly refactored iterations. What have I been doing that worked?
Fix Pain First
I wanted to start by saying how Well Engineered Software was still a massive improvement on what went before, and we don't want to throw all of the good principles out at the same time as we're throwing out over-engineering and too much beard-stroking about what The Right Approach is.
Then I realised I don't need to. If you're always fixing your pain points, you'll naturally end up following good engineering practice where it actually helps. You will end up writing useful tests, and creating small but meaningful objects, not because some textbook tells you but because it removes pain from your development.
This also helps you solve the blank screen problem, because your ethos at this point is,
- Get started
- Accept you might make a mess
- Get annoyed by the mess
- Fix the mess
You'll have clean code before the people who obsess about clean code have even figured out their project structure, and more importantly you'll know what "clean" means in your problem space rather than some abstract textbook definition.
Consistency over Correctness
In my experience, this is the most controversial part of how I develop. It sounds counter-intuitive, but it comes back to the way we approach Well Engineered Software. What invariably happens is that Programmer X comes along, decides what is there is not Well Engineered, and therefore decides to make the solution Well Engineered by writing all new code in a Well Engineered way.
Of course, Programmer Y has a different idea of Well Engineered. And then Programmer X goes to a conference and comes back with a new idea of Well Engineered, and now you have three competing models in the same solution, until Architect Z comes along and, despite not really understanding what's going on, decides to unify all three notions by adding a Well Engineered Mapping Layer. (Let's not think about how many meetings we spend discussing how to make the mapping layer theoretically ideal.)
My direction to the teams I got working in the rapidly iterative fashion was to say, "fine, you can do that. Change it. But you've got to change it consistently." That means if you're going to introduce a new way of doing things to the solution, you need to do the following:
- Remove the old approach from the whole solution.
- Apply the new approach across the whole solution.
- Don't impact the sprint goal while you're doing it.
What this means in practice is more than keeping things consistent: you need the whole team signed up for the change, everyone merging and pulling changes to and from the big refactor as it's happening. And because this hits everyone's workflow, you'll need to communicate properly as a team on top of all that, to the point that any large change will end up being done via mob programming by default.
Delivery over Deliberation
I touched on this in the last point, but the most important thing is to deliver working software. I have achieved a lot of things and made companies a lot of money with code I wouldn't touch without donning a pair of hefty rubber gloves first. Conversely, I have seen a lot of money being wasted chasing "perfect" software that never had a single customer. Revenue-wise, solving problems is more important than satisfying some arbitrary notion of Good Engineering Practice.
You need to deliver before you go chasing perfection, or you won't have any money to fund the latter. How this surfaces is a team pact that you're only going to bugger about with refactoring in one of two situations:
- You have a direct cause of pain in need of resolution.
- You're so far ahead in your sprint you've got time to burn.
Corollary: suddenly teams have a really powerful incentive to get shit done in a sprint and not leave things right to the last day.
Second corollary: because getting sufficiently ahead in a sprint is rare, the non-pain-driven refactors which make their way into the codebase tend to be the important ones that people really care about. When you have a limited budget you're loath to spend it chrome-plating things that don't need chrome-plating.
You Should Still Care
I want to pick up a point I made a few paragraphs ago. "You'll end up following good engineering practice where it actually helps." I've worked with a whole bunch of teams who felt this same existential angst about clean OO design and 100% coverage and whatever else, but decided the solution was to turn everything into the Wild West because it feels amazing when you're hitting thousands of lines of code per day without stopping to realise they're all awful.
That's not the right approach. It's abdicating responsibility as a programmer; avoiding the question of whether what you're doing is the right thing by deliberately doing the wrong thing. You should still try to get things right. When it gets down to it, the only real difference I'm advocating is that sometimes you can get bogged down in trying to figure out exactly what "right" is, and that's the point you do something which works and gives you some output with the view you can revisit it later when you've got a better idea about what "right" actually was.
(Later is often not much later at all - most of the time I figure out the right way of doing something before I've got far enough into the initial solution to commit it.)
Easy for you to say...
This all feels nice to me, but when I trialled this article with a small group I did get asked, "what about my most junior developer who maybe doesn't know what the right call to make is?"
This is a sound point. In particular, if I'm new to a language I probably don't know whether my pain is being caused by something genuinely wrong in my approach or if it's just the general pain of learning what the idioms are and how my language works. This is particularly true if I'm a newbie and I don't have other languages to compare and contrast with.
This is in part what the whole Well Engineered approach tries to fix. You tell people in this situation, "go here, and read these books, and do it like this, and what you're doing will be right." Which is a noble and well-meaning goal, that unfortunately gets broadsided by the incredible ability of people to cargo-cult their way through sensible advice. You don't lose the misunderstandings of the language and framework, instead you layer them and compound them with misunderstandings of the design pattern in play and the clean coding principles and the testing approach.
There's a whole other article about team dynamics here, but let's take the headline: programming is a team sport. This is what you should be getting from pairing and mob programming and your pull requests: someone who has the experience to make the judgement call working with the junior members of the team, asking them why they think this particular thing is causing pain, coaching them to identify actual problems and fixes. Yes, this has a lot of consequences for who you have in your team; the lone wolf is going to add limited value, and the prima donna rockstar who throws a tantrum every time they're asked a simple question is going to be actively harmful. You need good and patient people who are willing to help others. Team sport, right?
I kinda feel like I'm not even being original here, as most of this was covered far more succinctly in Rob Ashton's relaxed attitude towards the pragmatic delivery of okay software essay eight years ago. Maybe I've put down a bit more of my thought process and how I got here, maybe I have some sort of insight into what this looks like in your day-to-day coding, but then possibly I just need three and a half thousand words to say a simple concept.
Where we're going is this idea that code is mutable. There's a cost to that, but there's also a cost to vacillating over whether what you're doing is correct in every theoretical sense imaginable, and that second cost can be and often is way higher. Plus the endless discussion meetings are much more exhausting than reworking a bit of code because the initial idea was a bit goofy. So if you feel yourself starting to stall on the question of whether something is right... stop worrying, do something that works for now, and come back to it when you know more.
You're not going to find this idea of right defined perfectly for every situation in a textbook; you're going to have to figure it out for your situation. It's okay for something to not feel Well Engineered in the first cut if it works well and isn't causing you any real pain, because chances are you're going to refactor it 2 or 3 times and at the end of that process it'll be way better than copying something out of Design Patterns or some master code repository containing the One True Solution.
Data/object anti-symmetry? We'll figure something out.