Warning: Rant Ahead. I just need to get this out of my system. You don't have to read it. It might look like I'm beating a dead horse, but recently I found this particular horse alive and kicking.
"We have to do it this way, because otherwise it will be a maintenance nightmare."
So, here are some ways to make your code maintainable:
How to Write Maintainable Code
- Write helpful, detailed comments
- Always code to interfaces: you never know when we will need to substitute a different implementation
- Your Controllers delegate to Façades, your Façades to Services, and your Services to DAOs: this allows us to create modular and re-usable components for greater flexibility
- Extract a maximum of configuration information to properties or XML configuration files. That way, future maintainers will be able to modify the program's behaviour without even needing to recompile!
- Actually, if you can put some of this information in the database, even better: you can tweak the application while it is running!
Oops! Sorry, actually, I meant to publish that in a different universe. Here's the corrected version.
How to Write Maintainable Code
- The code is the documentation. Comments get in the way and are often wrong
- A smaller codebase requires less effort to understand, and hence will be more easily assimilated by future maintainers. This means: don't create interfaces until you actually have varying implementations
- The simplest possible code is also the hardest to break accidentally, and also the easiest to introduce new features to. Don't add layers until the need for them is manifest
- Extract only the minimum possible configuration information. If later you find something needs to be configurable, extract it then. Each configurable item is another piece of indirection, adding size and complexity to the system.
- If something needs to be configurable, put it in the database only as a last resort, only if there is a proven requirement from real users that this needs to be configured, and then only if they are willing to pay not just the initial development cost of this configurability but also the future maintenance costs arising from the much-increased size and complexity of the system, as already described.
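To make the interfaces point from the list above concrete, here is a minimal sketch (the class and method names are invented for illustration, not taken from any real project): a concrete class used directly, with no interface, no Façade and no DAO layer in front of it.

```java
import java.util.HashMap;
import java.util.Map;

// A concrete class used directly: no IClassificationStore interface,
// no Facade, no Service, no DAO. Hypothetical example for illustration.
class ClassificationStore {
    private final Map<String, String> byId = new HashMap<>();

    void save(String id, String classification) {
        byId.put(id, classification);
    }

    String find(String id) {
        return byId.get(id);
    }
}

public class Demo {
    public static void main(String[] args) {
        ClassificationStore store = new ClassificationStore();
        store.save("42", "approved");
        // The day a second storage strategy actually appears, the IDE can
        // extract an interface from this class in one refactoring step.
        System.out.println(store.find("42")); // prints "approved"
    }
}
```

The point is that nothing is lost by deferring the interface: modern refactoring tools can introduce it the moment a second implementation shows up, and until then the code is one type smaller.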
Two worlds, opposite extremes, both seeking the same outcome: to protect future maintainability. The other universe of the first approach isn't some weird quantum thing; it was simply twenty years ago.
There was, indeed, a time when languages imposed a (small) upper limit on the length of names. So you had this:
sndClsAppMl(char* addr, char** tmpl, int cntyCd, int clsCd)
And now you have:
sendClassificationApprovalEmail(EmailAddress address, MessageTemplate template, Locale locale, Classification newClassification)
What clarity could be achieved by commenting this that could not be gained by altering the name of the method or its arguments?
Around that time, twenty years ago, development environments didn't have the kind of instant code navigation we have today. Nowadays, if I stumble by some misfortune upon a cryptically named, badly commented method call, I hit Ctrl-B (yes, IntelliJ), and I'm instantly in the target method, which I can read, understand, and (unless it's lunchtime) rename.
The comments, if there are any beyond those auto-generated by the IDE, usually reflect what the author had in mind, sort of, when he was wondering (pondering, maybe?) what it might be useful (or appropriate) for a method to do - or not do - under a particular set of circumstances (not excluding the most recent version of the spec, while taking into account what one of the users said at the last demo, although the analysts didn't agree) that may or may not bear some relation to the current situation of the project.
Steve Yegge discusses commenting more brilliantly than I could ever hope to, going so far as to declare that static typing is commenting taken to an extreme.
But I digress. The point was that without the instant code navigation we have nowadays, comments may in fact have been useful.
Continuing our peek at pre-history, a feature of large systems in the Bad Old Days was that they took ages to compile. Ask anyone who was around. So of course you would want to pluck stuff out of the code and pop it into a config file of some sort, just to cut down on compilations.
Once you had your thing compiled, and especially if it was a native application for a popular desktop OS from a Seattle-based company, it was probably a curse to deploy. The kind of thing everybody wants to delay for as long as possible, whatever the deadline says.
So, naturally, whatever you can configure, you'll want in the database, 'cos you're not doing a redeploy just because your sponsor realised a moment too late that in fact the colour for the "to be chased" status ought to be pink, not red. Just update a row in the database, and it's done for everyone, yay!
The fact that it might cost a team-month to develop the configuration management subsystem for options that are changed, perhaps, only once a year doesn't seem to trouble the team's goal donors, however. The need for user-accessible system configuration control sometimes reflects the unhappy relationship between development teams and user teams. It says: "we don't want to have to call you to make this change; you're too slow".
Nowadays the distinction between config and code is blurring. A Spring configuration file is really executable code, just written in XML. In Ruby, the distinction disappears altogether. Ultimately, there is no fundamental difference anyway: Java code is just a configuration of the JVM; the OS is just configuration for the hardware. A dynamic language is what you end up with when you make your software totally configurable.
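A minimal sketch of that config-as-code claim (the Mailer/Notifier bean names are invented for illustration): the Spring-style XML wiring in the comment and the Java beneath it describe exactly the same object graph.

```java
// This Spring-style XML wiring
//   <bean id="mailer" class="Mailer"/>
//   <bean id="notifier" class="Notifier">
//     <constructor-arg ref="mailer"/>
//   </bean>
// is just the program below, written in angle brackets.
class Mailer {
    String send(String to) {
        return "sent:" + to;
    }
}

class Notifier {
    private final Mailer mailer;

    Notifier(Mailer mailer) {
        this.mailer = mailer;
    }

    String notifyUser(String user) {
        return mailer.send(user);
    }
}

public class ConfigAsCode {
    public static void main(String[] args) {
        Mailer mailer = new Mailer();             // <bean id="mailer" .../>
        Notifier notifier = new Notifier(mailer); // <bean id="notifier" .../>
        System.out.println(notifier.notifyUser("alice")); // prints "sent:alice"
    }
}
```

Both versions instantiate two objects and inject one into the other; the XML is simply a second, interpreted syntax for the same constructor calls.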
Design Patterns Considered Harmful
As for coding to interfaces and the whole Façade/Service/DAO thing, I can't fit it into the Dinosaur Theory of software development. "Teach Yourself Design Patterns in 12 Days" might be responsible, or perhaps a corporately-misunderstood two-day crash course on the topic. One important use of the Façade pattern is to hide a horrible, complex and difficult subsystem behind a single, simple interface. The thing is, if you're writing the subsystem yourself, it's not going to be horrible, complex and difficult, right? Façade is just another way of telling your legacy colleagues, "your API sucks but I can work around that". You would just never use it on your own code!
While the goal of writing reusable code is noble and all that, the sad truth is that un-reusable code, i.e. code that solves the specific problem you have right now in this specific project, is much simpler to write and maintain. In fact, it might even be simpler to write your own thing from scratch than to use someone else's "re-usable" component. As your code grows, and as you tend it and listen to where it's hurting, it will split naturally along implicit fault lines, and when you break a class into smaller pieces, each of those pieces will be naturally re-usable, because you are already re-using them in real-world cases inside your real-world project. Designing explicitly for re-usability rarely pays off.
When I was at university, Interfaces were the thing. I got points in exams for Interfaces. But an interface covering every method of each of your business objects? Meaningless! Pointless! Clutter! Writing to an interface doesn't make your code re-usable if the interface itself isn't re-usable.
If you have read this far, my commiserations. I'll try not to do this again. Please be assured that any resemblance to any real persons or situations, living or dead, past, present, or future, is entirely accidental and coincidental. It's not you. Leave me alone. End of rant.