Wide Awake Developers

Inverted Ownership


One of the sources of semantic coupling has to do with identifiers, and especially with synthetic identifiers. Most identifiers are just alphanumeric strings. Systems share those identifiers to ensure that both sides of an interface agree on the entity or entities they are manipulating.

In the move to services, there is an unfortunate tendency to build in a dependency on an ambient space of identifiers. This limits your organization’s maneuverability.

Contextualized IDs

The trouble is that a naked identifier doesn’t tell you what space of identifiers it comes from. There is just this ambient knowledge that a field called Policy ID is issued from the System of Record for policies. That means there can only be one “space” of policy numbers, and they must all be issued by the same SoR.

I don’t believe in the idea of a single system of record. One of my rules for architecture without an end state is “Embrace Plurality”. Whether through business changes or system migrations, you will always end up with multiple systems of record for any concept or entity.

In that world, it’s important that IDs carry along their context. It isn’t enough to have an alphanumeric Policy ID field. You need a URN or URI to identify which policy system issued that policy number.
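As a minimal sketch of what carrying context might look like (the `urn:example:policy` namespace and both helper functions are invented for illustration):

```python
def contextualize(issuer: str, policy_number: str) -> str:
    """Wrap a naked identifier in a URN that names its issuing system."""
    return f"urn:example:policy:{issuer}:{policy_number}"

def parse_policy_urn(urn: str) -> tuple[str, str]:
    """Recover (issuing system, local policy number) from a policy URN."""
    parts = urn.split(":")
    if len(parts) != 5 or parts[:3] != ["urn", "example", "policy"]:
        raise ValueError(f"not a policy URN: {urn}")
    return parts[3], parts[4]
```

Because the issuer travels with the identifier, two systems of record can issue overlapping policy numbers without ambiguity.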

Liberal Issuance

Imagine a Calendar service that tracks events by date and time. It would seem weird for that service to keep all events for every user in the same calendar, right? We should really think of it as a Calendars service. I’d expect to see an API to create a calendar, which returns the URL to “my” calendar. Every other API call then includes that URL, either as a prefix or a parameter.

In the same way, your services should serve all callers and allow them to create their own containers. If you’re building a Catalog service, think of it as a Catalogs service. Anybody can create a catalog, for any purpose. Likewise, a Ledger service should really be Ledgers. Any client can create a ledger for any reason.
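Here’s a toy, in-memory sketch of that shape (the names and URLs are made up; a real service would persist its state and authenticate callers):

```python
import uuid

class Calendars:
    """A 'Calendars' service: any caller can create its own calendar
    (container). The creation call returns the calendar's URL, and
    every other call is scoped by that URL."""

    def __init__(self, base_url: str = "https://calendars.example.com"):
        self.base_url = base_url
        self._events: dict[str, list] = {}   # calendar URL -> events

    def create_calendar(self) -> str:
        url = f"{self.base_url}/calendars/{uuid.uuid4()}"
        self._events[url] = []
        return url

    def add_event(self, calendar_url: str, event: dict) -> None:
        self._events[calendar_url].append(event)

    def events(self, calendar_url: str) -> list:
        return list(self._events[calendar_url])
```

Note that the service itself holds no notion of “the” calendar; containers are cheap, and callers mint as many as they need.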

This is the way to create services that can be recombined in novel ways to create maneuverability.

The Perils of Semantic Coupling


On the subject of maneuverability, many organizations run into trouble when they try to enter new lines of business, create a partnership, or merge with another company. Updating enterprise systems becomes a large cost factor in these business initiatives, sometimes large enough to outweigh the benefits case. This is a terrible irony: our automation provides efficiency, but removes flexibility.

If you break down the cost of such changes, you’ll find it comes in equal parts from changes to individual systems and changes to integrations across systems. Integrations are always costly and full of risk, and never more so than when changing cardinalities. Partnerships and mergers pretty much always change cardinalities, too.

The cost factor arises from “semantic coupling.” That is the coupling between services introduced because the services need to share concepts. It usually appears as data types or entity names that pop up in many services.

As an example, let’s think about a tiny retailing system with a small set of what I’ll call “macroservices”. One of the most important entity types here is the Stock Keeping Unit, or SKU. It represents “a thing which can be sold”. In a typical retail system, it has a large number of attributes that describe how the item is priced, delivered, displayed on the web, upsold and cross-sold, reviewed, categorized, and taxed.

SKUs are created in a master data management system. There may be a variety of feeds that get massaged into MDM, but we’ll consider that to be outside the boundary of our interest for now. From MDM, the SKU must be distributed to a number of other services:

  • Content management
  • Pricing
  • Shipping
  • Order management

Each of these macroservices uses aspects of the SKU for its own purpose. Content management attaches “telling and selling” content to the SKU so it can be presented nicely on the web. Pricing adds it to the pricing rules. Shipping identifies the carriers, options, and costs to deliver it. Order management–probably a great big silver beast of a system–tracks inventory, orders, delivery rules, returns, and a lot more.

Now what happens if we have to make a major change to the SKU? Let’s imagine that we want to change how we manage prices. In the past, merchants set prices on each item individually. Now, we’ve got too much in the catalog for that to scale so we introduce the idea of price points for digital items. A price point is a price that applies to a large number of SKUs. When we change the price point, all SKUs that refer to it should be changed at the same time. So, if we decide to reduce the price of a low-bitrate MP3 track from $0.99 to $0.89, we can just change a single price point record.

How many systems do we have to change for this new concept?

If we consider “price point” to be part of our core domain, then we have to add that concept everywhere. The surface area of that change is really large, and it will be a costly change to make. It might even be too costly to be worth doing. We could hire a small army of temp workers to update price records by hand twice a year and still come out ahead. That’s not a very satisfying answer though. All this automation is supposed to make us more efficient! What good is it if we are stuck with outdated processes because our systems are too hard to change?

The key problem is semantic coupling. There are a lot of systems here that shouldn’t need to care about the “price point” concept. It has no bearing on the digital locker, shipping, or ratings & reviews.

In this example, we can reduce the semantic coupling. Simply decide that “price point” is not a core concept. It is a detail of data management for the MDM system. Everything downstream receives SKUs with a list price. No downstream system should care how that list price was determined.
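A sketch of that flattening, with entity names and prices invented to match the MP3 example above:

```python
# "Price point" lives only inside MDM. Everything published downstream
# carries a plain list price.

price_points = {"digital-track-low-bitrate": 0.89}   # was 0.99

skus = {
    "MP3-0001": {"title": "Example Track",
                 "price_point": "digital-track-low-bitrate"},
}

def publish_sku(sku_id: str) -> dict:
    """Flatten the SKU -> price point reference before the SKU leaves MDM."""
    sku = skus[sku_id]
    return {
        "sku": sku_id,
        "title": sku["title"],
        "list_price": price_points[sku["price_point"]],
    }
```

Changing the one price point record reprices every SKU that refers to it, and no downstream system ever sees the concept.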

This decision flattens a many-to-one relationship from SKU to price point. In so doing, we get a huge benefit. We eliminate an entire entity and all references to it from all the downstream systems.

I would even make a case for shattering the concept of SKU into multiple separate concepts. MDM may keep that concept. Downstream, though, each system has its own set of internal concepts. We should treat identifiers from other systems as opaque tokens that we map onto our own system’s space.

For example, the pricing service doesn’t need to know that it is pricing SKUs. It just needs to price “things that can be priced.” I know, it sounds tautological, but I think we get misled as humans… we think of SKU as a unitary concept so we build it as such in our systems. But look what happens if we say a pricing service can price “stuff and things” as long as they have some mapping in the pricing service itself. We can add an entirely new universe of things to price, without forcing everything on Earth to be a SKU!
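A minimal sketch of such a service (the token formats here are invented; the point is that the service treats them as opaque):

```python
class Pricing:
    """A pricing service that prices 'things that can be priced'.
    It never learns what a SKU is: callers hand it opaque tokens,
    which it maps into its own identifier space."""

    def __init__(self):
        self._prices: dict[str, float] = {}

    def register(self, token: str, price: float) -> None:
        self._prices[token] = price

    def price_of(self, token: str) -> float:
        return self._prices[token]
```

Because the tokens are opaque, a caller can register a SKU, a shipping upgrade, or anything else, without forcing everything on Earth to be a SKU.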

We should scrutinize each of the other services, asking ourselves, “Does this really care about a SKU? Or does it care about something that a SKU happens to possess?” I would argue that in each case, the service really cares about “Thing that can be Xed”: priced, taxed, shipped, reviewed, and so on. Are SKUs the only things that can be taxed? Are they the only things that can be reviewed?

Iterate this process and four things will happen:

  1. Your services will shrink.
  2. Your services will become much more general.
  3. Each service will own its own space of identifiers.
  4. Your organization will become more maneuverable.

The key point I want to make here is that a concept may appear to be atomic just because we have a single word to cover it. Look hard enough and you will find seams where you can fracture that concept. Don’t share the whole thing. Don’t couple all your downstream systems to the whole concept, and definitely don’t couple your downstream to a complex of related concepts! It’s a cardinal sin.


Maneuverability


Agile development works best at team scale. When a team can self-organize, refine their methods, build the toolchain, and adapt it to their needs, they will execute effectively. We should be happy to achieve that! I worry when we try to force-fit the same techniques at larger scales.

At the scale of a whole organization, we need to look at the qualities we want to have. (We can’t necessarily produce those qualities directly, but we can create the conditions that allow them to emerge.) When we look at attempts to scale agile development up, the quality the org wants is maneuverability.

Maneuverability is the ability to change your vector rapidly. It’s about gaining, shedding, or redirecting momentum. Keeping with the analogy of momentum, we can call that which resists change in the momentum vector “inertial mass.” Personnel are mass, because it’s relatively hard to add or shed personnel. Technical debt is a component of mass, too. It makes changes to your technical strategy harder. Actually, I’d even go so far as to say that code itself is mass. KLOCs kill.

Maneuverability has been explored most fully by the military. Superior maneuverability allows a fighter aircraft to get inside the enemy’s turn radius, then shoot for the kill. An army with high maneuverability can engage, disengage, and reorient to exploit an enemy’s weakness. In the words of John Boyd, it allows you to separate your opponent into multiple, non-cooperating centers of gravity.

Maneuverability is an emergent property. It requires a number of prerequisites in the organization’s structure, leadership style, operations, and ability to execute. I firmly believe that maneuverability requires a great ability to execute at the micro scale.

Agile development provides that ability to execute in software development. It is a necessary, but not sufficient, part of maneuverability. There are other necessary capabilities in the technical arena. I think that infrastructure and architecture have important roles to play for maneuverability as well.

I have previously given talks on the subject of maneuverability. I’ll also be posting some further thoughts about pertinent architecture decisions.

Bad Layering


If I had to guess, I would say that “Layers” is probably the most commonly applied architecture pattern. And why not? Parfaits have layers, and who doesn’t like a parfait? So layers must be good.

Like everything else, though, there’s a good way and a bad way.

The usual Neapolitan stack looks like this:

  • GUI
  • Domain
  • Persistence

On one of my favorite projects of all time, we used more layers because we wanted to further isolate different behaviors. In that project, we added a “UI Model” distinct from the “Domain.”

We impose this style because we want to separate concerns. This should provide us with two big benefits. First, we can change the contents of each layer independently. So changes to the GUI should not affect the domain, and changes to the domain should not affect persistence. The second benefit we want is the ability to substitute a layer. We may swap out a layer for the sake of testing (often in the case of persistence layers) or for different product configurations.

People sometimes make an argument for swapping out a layer in case of technology change. That argument is used for ORMs in the persistence layer, but I don’t find it convincing. Changing persistence on an existing application is far from the most common kind of change. You’d be buying an expensive option that is seldom exercised.

When Good Layers Go Bad

The trouble arises when the layers are built such that we have to drill through several of them to do something common. Have you ever checked in a commit that had a bunch of new files like “Foo”, “FooController”, “FooForm”, “FooFragment”, “FooMapper”, “FooDTO”, and so on? That, dear reader, is a breakdown in layering.

It comes from each layer being decomposed along the same dimension. In this case, aligned by domain concept. That means the domain layer is dominating the other layers.

I would much rather see each layer have objects and functions that express the fundamental concepts of that layer. “Foo” is not a persistence concept, but “Table” and “Row” are. “Form” is a GUI concept, as is “Table” (but a different kind of table than the persistence one!). The boundary between each layer should be a matter of translating concepts.

In the UI, a domain object should be atomized into its constituent attributes and constraints. In persistence, it should be atomized into rows in one or more tables (in SQL-land) or one or more linked documents.

What appears as a class in one layer should be mere data to every other layer.

How Does It Happen?

This breakdown in layering can arise from more than one dynamic process.

  1. The application framework may impose this structure.
  2. The language may not have abstractions powerful enough to make it pleasant to work with data.
  3. TDD without enough refactoring. Each thin slice through the application adds one more strand of “Foo and Friends”. Truly merciless refactoring would pull out the common behavior sideways into the layer-specific concepts I described above. Lacking merciless refactoring, the project will accrete sticky strands like cotton candy on a toddler.
  4. The team may not have seen it done any other way.

What If It Happens To You?

Maybe you already have degenerate layers. Assuming they aren’t required by your framework, start looking for opportunities to refactor. Don’t just build a class hierarchy so you can inherit implementations. Rather, look for common patterns of interaction. Figure out how to turn the code you’ve got in classes into data acted on by classes relevant to the layer.

Use maps. Convert objects into maps from field identifier to an object that represents the salient aspect of the field for that layer:

  • For a GUI, those aspects will be something like “lexical type”, “editable”, “constraint” / “validation”, “semantic class”, and so on.
  • For persistence, they will deal with “length”, “representation format”, “referent,” etc.

Seek and destroy DTOs. They should be maps.

A DTO clearly indicates that your class is crossing a boundary. And yet it requires that code on both sides of the boundary be written against the method signatures of the DTO. That means there is precisely zero translation at the boundary.
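Here’s a rough sketch of the map-based alternative (field names and aspects are invented for illustration): the boundary emits a map from field identifier to the aspects the receiving layer cares about.

```python
# A domain entity crosses into the GUI layer as data, not as a DTO
# with fixed method signatures.

domain_person = {"name": "Ada", "age": 36}

# Aspects of each field that matter only to the GUI layer.
GUI_ASPECTS = {
    "name": {"lexical_type": "text",    "editable": True, "constraint": "non-empty"},
    "age":  {"lexical_type": "integer", "editable": True, "constraint": ">= 0"},
}

def to_gui(entity: dict) -> dict:
    """Atomize an entity into per-field aspect maps for the GUI layer."""
    return {field: {**GUI_ASPECTS[field], "value": value}
            for field, value in entity.items()}
```

The persistence layer would hold a parallel table of its own aspects (length, representation format, referent), with no shared DTO class between them.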

Where To Go From Here

Let me be clear, I like parfaits. (Yogurt and fruit! Ice cream, nuts, caramel!) I have nothing against layers. Most of my applications are built from layers. It’s just that getting the benefits we seek requires more effort than smearing a single domain concept across multiple subdirectories.

If “Layers” is the only architecture pattern you’ve used, then you’re in for a treat. There are plenty of other fundamental structures to explore. Pipes and filters. Blackboard. Components. Set GoF aside and go read Pattern-Oriented Software Architecture. The whole series is a treasure trove and an encyclopedia.

People Don’t Belong to Organizations


One company that gets this right is GitHub. I exist as my own person there. I’m affiliated with my employer as well as other organizations.

We are long past the days of “the company man,” when a person’s identity was solely bound to their employer. That relationship is much more fluid now.

A company that gets it wrong is Atlassian. I’ve left behind a trail of accounts in various Jirae and Confluences. Right now, the biggest offender in their product lineup is HipChat. My account is identified by my email address, but it’s bound up with an organization. If I want to be part of my employer’s HipChat as well as a client’s, I have to resort to multiple accounts signed up with plus addresses. It’s great that Gmail supports that, but I still can’t log in to more than one account at a time.

More generally, this is a failure in modeling. Somewhere along the line, somebody drew a line between `Organization` and `Person` on their model, with a one-to-many relationship. One `Organization` can have many `Person` entities, but each `Person` belongs to exactly one `Organization`.

I’ll go even further. The proper way to approach this today is to relate `Organization` and `Person` by way of another entity. Reify the association! Is it employment? Put the start and end dates on the employment. Oh, and don’t delete the association once it ends… that’s erasing it from history.
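A sketch of that reified association (the names and dates are invented):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Person:
    name: str

@dataclass
class Organization:
    name: str

@dataclass
class Employment:
    """The reified association: the relationship itself carries data,
    and it is ended (given an end date), never deleted."""
    person: Person
    organization: Organization
    start: date
    end: Optional[date] = None
```

Nothing stops a person from holding several Employment records at once, or from accumulating ended ones as history.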

I think the default for pretty much any relationship these days should be many-to-many. Particularly any data relationship that models a real relationship in the external world. We shouldn’t let the bad old days of SQL join tables deter us from doing the right thing now.

Glue Fleet and Compojure Together Using Protocols


Inspired by Glenn Vanderburg’s article on Clojure templating frameworks, I decided to try using Fleet for my latest pet project. Fleet has a very nice interface. I can call a single function to create new Clojure functions for every template in a directory. That really makes the templates feel like part of the language. Unfortunately, Glenn’s otherwise excellent article didn’t talk about how to connect Fleet into Compojure or Ring. I chose to interpret that as a compliment, springing from his high esteem of our abilities.

My first attempt, just calling the template function directly as a route handler, resulted in the following:

java.lang.IllegalArgumentException: No implementation of method: :render of protocol: #'compojure.response/Renderable found for class: fleet.util.CljString

Ah, you’ve just got to love Clojure errors. After you understand the problem, you can always see that the error precisely described what was wrong. As an aid to helping you understand the problem… well, best not to dwell on that.

The clue is the protocol. Compojure knows how to turn many different things into valid response maps. It can handle nil, strings, maps, functions, references, files, seqs, and input streams. Not bad for 22 lines of code!

There’s probably a simpler way that I can’t see right now, but I decided to have CljString support the same protocol.

Take a close look at the call to extend-protocol on lines 12 through 15. I’m adding a protocol–which I didn’t create–onto a Java class–which I also didn’t create. My extension calls a function that was created at runtime, based on the template files in a directory. There’s deep magic happening beneath those 3 lines of code.

Because I extended Renderable to cover CljString, I can use any template function directly as a route function, as in line 17. (The function views/index was created by the call to fleet-ns on line 10.)

So, I glued together two libraries without changing the code to either one, and without resorting to Factories, Strategies, or XML-configured injection.

Metaphoric Problems in REST Systems


I used to think that metaphor was just a literary technique, something you could use to dress up a piece of creative writing. Reading George Lakoff’s Metaphors We Live By, though, has changed my mind about that.

I now see that metaphor is not just something we use in writing; it’s actually a powerful technique for structuring thought. We use metaphor when we are creating designs. We say that a class is like a factory, that an object is a kind of a thing. The thing may be an animal, it may be a part of a whole, or it may be representative of some real world thing.

All those are uses of metaphor, but there is a deeper structure of metaphors that we use every day, without even realizing it. We don’t think of them as metaphors because in a sense these are actually the ways that we think. Lakoff uses the example of “The tree is in front of the mountain.” Perfectly ordinary sentence. We wouldn’t think twice about saying it.

But the mountain doesn’t actually have a front, neither does the tree. Or if the mountain has a front, how do we know it’s facing us? What we actually mean, if we unpack that metaphor is something like, “The distance from me to the tree is less than the distance from me to the mountain.” Or, “The tree is closer to me than the mountain is.” That we assign that to being in front is actually a metaphoric construct.

When we say, “I am filled with joy,” we are actually using a double metaphor: two different metaphors related structurally. One is “A Person Is A Container”; the other is “An Emotion Is A Physical Quantity.” Together, they make sense: if a person is a container and an emotion is a physical thing, then the person can be full of that emotion. In reality, of course, the person is no such thing. The person is full of all the usual things a person is full of: tissues, blood, bones, other fluids that are best kept on the inside.

But we are embodied beings, we have an inside and an outside and so we think of ourselves as a container with something on the inside.

This notion of containers is actually really important.

Because we are embodied beings, we tend to view other things as containers as well. It would make perfect sense to you if I said, “I am in the room.” The room is a container, the building is a container. The building contains the room. The room contains me. No problem.

It would also make perfect sense to you, if I said, “That program is in my computer.” Or we might even say, “that video is on the Internet.” As though the Internet itself were a container rather than a vast collection of wires and specialized computers.

None of these things are containers, but it’s useful for us to think of them as such. Metaphorically, we can treat them as containers. This isn’t just an accident of word choice; rather, I think the use of those prepositions reflects the way that we think about these things.

We also tend to think about our applications as containers. The contents that they hold are the features they provide. This has provided a powerful way of thinking about and structuring our programs for a long time. In reality, no such thing is happening. The program source text doesn’t contain features. It contains instructions to the computer. The features are actually sort of emergent properties of the source text.

Increasingly the features aren’t even fully specified within the source text. We went through a period for a while where we could pretend that everything was inside of an application. Take web systems for example. We would pretend that the source text specified the program completely. We even talked about application containers. There was always a little bit of fuzziness around the edges. Sure, most of the behavior was inside the container. But there were always those extra bits. There was the web server, which would have some variety of rules in it about access control, rewrite rules, ways to present friendly URLs. There were load balancers and firewalls. These active components meant that it was really necessary to understand more than the program text, in order to fully understand what the program was doing.

The more the network devices edged into Layer 7, previously the domain of the application, the more false the metaphor of program as container became. Look at something like a web application firewall. Or the miniature programs you can write inside of an F5 load balancer. These are functional behavior. They are part of the program. However, you will never find them in the source text. And most of the time, you don’t find them inside the source control systems either.

Consequently, systems today are enormously complex. It’s very hard to tell what a system is going to do once you put it into production, especially in those edge cases in hard-to-reach sections of the state space. We are just bad at thinking about emergent properties. It’s hard to design properties to emerge from simple rules.

I think we’ll find this most true in RESTful architectures. In a fully mature REST architecture, the state of the system doesn’t really exist in either the client or the server, but rather in the communication between the two of them. We say HATEOAS, “Hypertext As The Engine Of Application State” (a sort of shibboleth used to distinguish true RESTafarians from the rest of the world), but the truth is: what the client is allowed to do is told to it by the server at any point in time, and the next state transition is whatever the client chooses to invoke. Once we have that, the true behavior of the system can’t actually be known by the service provider alone.

In a REST architecture we follow an open world assumption. When we’re designing the service provider, we don’t actually know who all the consumers are going to be or what their individual and particular workflows may be. Therefore we have to design for a visible system, an open system that communicates what it can do and what it has done at any point in time. Once we do that, the behavior is no longer just in the server. And in a sense it’s not really in the client either. It’s in the interaction between the two of them, in the collaborations.

That means the features of our system are emergent properties of the communication among these several parts. They’re externalized. They’re no longer in anything. There is no container. One could almost say there’s no application. The features exist somewhere in the white space between the boxes on the architecture diagram.

I think we lack some of the conceptual tools for that as well. We certainly don’t have a good metaphorical structure for thinking about behavior as a hive-like property emerging from the collaboration of these relatively, independent and self-directed pieces of software.

I don’t know where the next set of metaphors will come from. I do know that the attempt to force web-shaped systems into the “application as container” metaphor simply won’t work anymore. In truth, it never worked all that well. But now it has broken down completely.

Time Motivates Architecture


Let’s engage in a thought experiment for a moment. Suppose that software was trivial to create and only ever needed to be used once. Completely disposable. So, somebody comes to you and says, “I have a problem and I need you to solve it. I need a tool that will do blah-de-blah for a little while.” You could think of the software the way that a carpenter thinks of a jig for cutting a piece of wood on a table saw, or a metalworker thinks of creating a jig to drill a hole at the right angle and depth.

If software were like this, you would never care about its architecture. You would spend a few minutes to create the thing that was needed, it would be used for the job at hand, and then it would be thrown away. It really wouldn’t matter how good the software was on the inside–how easy it was to change–because you’d never change it! It wouldn’t matter how it adapted to changing business requirements, because you’d just create a new one when the new requirement came up. In this thought experiment we wouldn’t worry about architecture.

The key difference between this thought experiment and actual software? Of course, actual software is not disposable. It has a lifespan over some amount of time. Really, it’s the time dimension that makes architecture important.

Over time, we need for many different people to work effectively in the software. Over time, we need the throughput of features to stay constant, or hopefully not decrease too much. Maybe it even increases in particularly nice cases. Over time, the business needs change so we need to adapt the software.

It’s really time that makes us care about architecture.

Isn’t it interesting then, that we never include time as a dimension in our architecture descriptions?