I’ve come to believe that every system implicitly defines a spectrum of changes, ordered by their likelihood. As designers and developers, we make decisions about what to embody as architecture, code, and data based on known requirements and our experience and intuition.
We pick some kinds of changes and say they are so likely that we should represent the current choice as data in the system. For instance, who are the users? You can imagine a system where the user base is so fixed that there’s no data representing the user or users. Consider a single-user application like a word processor.
Another system might implicitly indicate there is just one community of users. So there’s no data that represents an organization of users… it’s just implicit. On the other hand, if you’re building a SaaS system, you expect the communities of users to come and go. (Hopefully, more come than go!) So you make whole communities into data because you expect that population to change very rapidly.
If you are building a SaaS system for a small, fixed market, you might decide that the population won’t change very often. In that case, you might represent a population of users in the architecture via instancing.
So data is at the high-energy end of the spectrum, where we expect constant change. Next would be decisions that are contemplated in code but only made concrete in configuration. These aren’t quite as easy to change as data. Furthermore, we expect that only one answer to any given configuration choice is operative at a time. That’s in contrast to data, where multiple choices can be active simultaneously.
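The distinction can be sketched in a few lines. This is a minimal illustration, not any particular system’s design, and every name in it is hypothetical: a configuration value binds one answer for the whole process, while data lets many answers coexist, one per record.

```python
# Configuration: read once, a single value governs the whole process.
# Exactly one answer to "which payment gateway?" is operative at a time.
config = {"payment_gateway": "stripe"}

# Data: many rows coexist, and the "answer" varies per record.
users = [
    {"id": 1, "locale": "en-US"},
    {"id": 2, "locale": "de-DE"},  # multiple choices active simultaneously
]

def gateway() -> str:
    return config["payment_gateway"]

def locale_for(user_id: int) -> str:
    return next(u["locale"] for u in users if u["id"] == user_id)

print(gateway())      # one operative answer: stripe
print(locale_for(2))  # per-record answer: de-DE
```

Changing the gateway means redeploying or reloading configuration; changing a user’s locale is just an ordinary write to data.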
Below configuration are decisions represented explicitly in code. Constructs like policy objects, strategy patterns, and plugins all indicate our belief that the answer to a particular decision will change rapidly. We know it is likely to change, so we localize the current answer to a single class or function. This is the origin of the “Single Responsibility Principle.”
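The strategy pattern mentioned above makes the point concrete. Here is a minimal sketch with hypothetical names (pricing is just a stand-in for any decision we expect to revisit): the current answer lives in one small class, so changing it means swapping one object rather than editing scattered conditionals.

```python
from abc import ABC, abstractmethod

class PricingStrategy(ABC):
    """The decision we expect to change: how an order is priced."""
    @abstractmethod
    def price(self, base: float) -> float: ...

class RegularPricing(PricingStrategy):
    def price(self, base: float) -> float:
        return base

class HolidayPricing(PricingStrategy):
    def price(self, base: float) -> float:
        return base * 0.8  # 20% discount

def checkout(base: float, strategy: PricingStrategy) -> float:
    # The caller never embeds the pricing rule; the current answer
    # is localized to a single class, per the strategy pattern.
    return strategy.price(base)

print(checkout(100.0, RegularPricing()))  # 100.0
print(checkout(100.0, HolidayPricing()))  # 80.0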
Farther down the spectrum, we have cross-cutting behavior in a single system. Logging, authentication, and persistence are the usual examples here. Would it be meaningful to push these up to a higher level, like configuration? What about data?
Then we have those things which are so implicit to the service or application that they aren’t even represented. Everybody has a story about when they had to make one of these explicit for the first time. It may be adding a native app to a Web architecture, or going from single-currency, single-language to multinational.
Next we run into things that we expect to change very rarely: cross-cutting behaviors that span multiple systems. Authentication services and schemas often land at this level.
So the spectrum goes like this, from the high-energy, rapidly changing “blue” end to the cool, sedate “red” end:
- Data
- Configuration
- Encapsulated code
- Cross-cutting code
- Implicit in application
- Cross-cutting architecture
The farther toward the “red” end of the spectrum we relegate a concern, the more tectonic it will be to change it.
No particular decision “naturally” falls at one level or another. We just have experience and intuition about which kinds of changes happen with greatest frequency. That intuition isn’t always right.
Efforts to make everything into data in the system lead to rules engines and logic programming. That doesn’t usually deliver the end-user control we expect. It turns out you still need programmers to think through changes to rules in a rules engine. Instead of democratizing the changes, you’ve made them more esoteric.
It’s also not feasible to hoist everything up to be data. The more decisions you energy-boost to that level, the more it costs. And at some point you generalize enough that all you’ve done is create a new programming language. If everything about your application is data, you’ve written an interpreter and recursed one level higher. Now you still have to decide how to encode everything in that new language.
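A toy example shows where this road ends. The rule format below is entirely made up, but the shape is what matters: once rules are data, something has to interpret them, and that something is a little programming language with all the encoding decisions that implies.

```python
# Hypothetical rules-as-data encoding. Someone still has to design
# this format, and programmers still have to reason about it.
rules = [
    {"if": {"field": "total", "op": ">", "value": 100}, "then": "free_shipping"},
    {"if": {"field": "country", "op": "==", "value": "US"}, "then": "tax_us"},
]

OPS = {">": lambda a, b: a > b, "==": lambda a, b: a == b}

def evaluate(order: dict, rules: list) -> list:
    # This function *is* the interpreter for our accidental language:
    # the decisions moved up a level, they didn't disappear.
    return [r["then"] for r in rules
            if OPS[r["if"]["op"]](order[r["if"]["field"]], r["if"]["value"])]

print(evaluate({"total": 150, "country": "US"}, rules))
```

Every new kind of condition means extending `OPS` and the rule schema, which is just language design by another name.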