Wide Awake Developers

The Fear Cycle


Once you begin to fear your technology, you will shortly have cause to fear it even more.

The Fear Cycle goes like this:

  1. Small changes have unpredictable, scary, or costly results.
  2. We begin to fear making changes.
  3. We try to make every change as small and local as possible.
  4. The code base accumulates warts, knobs, and special cases.
  5. Fear intensifies.

Fear starts when an innocuous change goes badly. Maybe a production outage results, or maybe just an embarrassing bug. It may be a bug that gets upper management attention. Nothing instills fear like an executive committee meeting about your code defect!

This sphincter-shrinker originated because a developer couldn’t predict all the ramifications of a change. Maybe the test suite was inadequate. Or there are special cases that are only observed in production. (E.g., that one particular customer whose data setup is different from everyone else’s.) Whatever the specific cause, the general result is, “I didn’t know that would happen.”

Add a few of these events into the company lore and you’ll find that developers and project managers become loath to touch anything outside their narrow scope. They seek local safety.

The trouble with local safety is that it requires kludges. The code base will inevitably deteriorate as pressure for larger changes and broader refactoring builds without release.

The vicious cycle is completed when one of those local kludges is responsible for someone else’s “What? I didn’t know that!” moment. At this point, the fear cycle is self-sustaining. The cost of even small changes will continue to increase without limit. The time needed to get changes released will increase as well.

Breaking Point

One of several things will happen:

  1. A big bang rewrite (usually with a different team). The focus will be “this time, we do it right!” See also: second-system syndrome, Things You Should Never Do, Part I.
  2. Large scale outsourcing.
  3. Sell off the damaged assets to another company.

Avoiding the Cycle

The fear cycle starts when people treat a technical problem as a personal one. The first time a seemingly simple change causes a large and unpredictable effect, you need to convene a technical SWAT team to determine why the system allowed it to happen and what technical changes can avoid it in the future.

The worst response to a negative event is a tribunal.

Sadly, the difference between a technical SWAT team and a tribunal is mostly in how the individuals in that group approach the issue. Wise leadership is required to avoid the fear cycle. Look to people with experience in operations or technical management.

Breaking the Cycle

Like many reinforcing loops in an organization, the fear cycle is wickedly hard to break. So far, I have not observed any instance of a company successfully breaking out of it. If you have, I would be very interested to hear your experiences!

Components and Glue


There’s a well-known architectural style in desktop applications called “Components and Glue”. The central idea is that independent components are composed together by a scripting layer. The glue is often implemented in a different or more dynamic language than the components.

The C2 wiki’s page on ComponentGlue has been stable since 2004, so obviously this is not a new idea.

Emacs is one example of this approach. The components are written in C, the glue is ELisp. (To be fair, though, the ELisp outnumbers the C by a pretty large factor.)

Perl was originally conceived as a glue language.

Visual Basic applications also followed this pattern. Components written in C or C++, glue in VB itself.

I think Components and Glue is a relevant architecture style today, especially if we want to compose and recompose our services in novel ways.

My last several posts have been about decomposing services into smaller, more independent units. Each one could be its own micro-SaaS business. Some application needs to stitch these back together. I often see this done in a separate layer that presents a simplified interface to the applications.

This glue layer may be written in a different language than the services themselves. For that matter, the individual services may be written in a variety of languages, but that’s a subject for a different time.

The glue layer changes more rapidly than the back end services, because it needs to keep serving the applications as they change. Even when the back end services are provided by an enterprise IT group, the integration layer will be more affiliated with the front end web & app teams.

We embrace plurality, so if there’s one glue layer, there may be more. We should allow multiple glue layers, where each one is adapted to the needs of its consumers: a family of glue layers, each composing the same back-end services for a different set of applications.

The smaller and lighter we make the glue, the faster we can adapt it. The endpoint of that progression looks like AWS Lambda where every piece of script gets its own URL. Hit the URL to invoke the script and it can hit services, reshape the results, and reply in a client-specific format.
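As a minimal sketch of what one of those glue scripts might look like, here is a Ring-style handler that fans out to two back-end services, joins the results, and replies in a client-specific shape. The service URLs are invented for illustration, and the clj-http and cheshire libraries are assumed:

```clojure
(ns glue.promo-feed
  (:require [clj-http.client :as http]
            [cheshire.core :as json]))

;; Hypothetical back-end endpoints; real ones would come from
;; configuration or service discovery.
(def catalog-url "https://catalog.internal/items")
(def pricing-url "https://pricing.internal/prices")

(defn handler
  "One small piece of glue: fetch items and prices, join them,
  and answer in exactly the shape this client wants."
  [request]
  (let [items  (:body (http/get catalog-url {:as :json}))
        prices (:body (http/get pricing-url {:as :json}))
        by-id  (into {} (map (juxt :item-id :price) prices))]
    {:status  200
     :headers {"Content-Type" "application/json"}
     :body    (json/generate-string
               (for [item items]
                 {:name  (:name item)
                  :price (by-id (:id item))}))}))
```

Because the glue holds no state of its own, throwing one of these away and writing another is cheap. That is exactly what keeps the layer adaptable.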

Once we reach that terminus, we can even think of individual functions as having URLs. Like one-off scripts in ELisp or Perl, we can write glue for incidental needs: one-time marketing events, promotions, trial integrations, and so on.

“Scripts as glue” also lets us deal with a tension that often arises with valuable customers. Sometimes the biggest whales also demand a lot of customization. How should we balance the need to customize our service for large customers (the whales) and the need to generalize to serve the entire market? We can create suites of scripts that present one or more customer-specific interfaces, while the interior of our services remain generalized.

This also allows us to handle one of the hardest cases: when a customer wants us to “plug in” their own service in lieu of one of ours. As I’ve said before, all our services use full URLs for identifiers, so we should be able to point those URLs at our outbound customer glue. That glue calls the customer service according to its API and returns results according to our formats.

The components and glue pattern remains viable. As we decompose monoliths, it is a great way to achieve separation between services without undue burden on the front end applications and their developers.

Faceted Identities


I have a rich and multidimensional relationship with Amazon. It started back in 1996 or 1997, when it became the main supplier for my book addiction. As the years went by, I became an “Amazon Affiliate” in a futile attempt to balance out my cash flow with the company. Later, I started using AWS for cloud computing. I also claimed my author page.

Let’s contemplate the data architecture needed to maintain such a set of relationships. Let’s assume for the moment that Amazon were using a SQL RDBMS to hold it all. The obvious approach is something I could call the “Big Fat User Table”. One table, keyed by my secret, internal user ID, with columns for all the different possible things a user can be to Amazon. There would be a dozen columns for my affiliate status, a couple for my author page, a boolean to show I’ve signed up for AWS, and a bunch of booleans for each of the individual services.

Such a table would be an obvious bottleneck. Any DBA worth her salt would split that sucker into many tables, joined by a common key (the user ID). New services would then just add a table in their own database with the common user ID. Let’s call this approach the “Universal Identifier” design.

That would also allow one-to-many relations for some aspects. For example, when I lived in Minnesota, the state demanded that Amazon keep track of tax for each affiliate. Amazon responded by shutting down all the affiliate accounts in Minnesota. I recently moved to Florida and was able to open a new account with my new address. So I have two affiliate accounts attached to my user account.
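For illustration, here is a sketch of the two designs as data. The field names are invented, but the shapes show the difference:

```clojure
;; "Big Fat User Table": one row with columns for every role a user
;; might play. Every new service widens this row.
(def fat-user-row
  {:user-id        "u-12345"
   :email          "reader@example.com"
   :affiliate-tag  "wideawake-20"
   :author-page-id "ap-991"
   :aws-signed-up  true})

;; "Universal Identifier": each service keeps its own records,
;; joined only by the shared user ID. One-to-many comes for free,
;; like the two affiliate accounts just described.
(def affiliate-records
  [{:user-id "u-12345" :tag "wideawake-20" :state "MN" :status :closed}
   {:user-id "u-12345" :tag "wideawake-21" :state "FL" :status :active}])

(def author-records
  [{:user-id "u-12345" :author-page "ap-991"}])
```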

For what it’s worth, column family databases would kind of blur the lines between the Big Fat User Table and the Universal Identifier design.

We can get more flexible than the Universal Identifier, though.

You see, if we push the User ID into all the various services, that implies that the “things” that service manages can only be consumed by a User. Maneuverable architecture says we should be able to recompose services in novel configurations to solve business problems.

Instead of pushing the User ID into each service, we should just let each service create IDs for its “things” and return them to us.

For example, a Calendar Service should be willing to create a new calendar for anyone who asks. It doesn’t need to know the ID of the owner. Later, the owner can present the calendar ID as part of a request (usually somewhere in the URL) to add events, check dates, or delete the calendar. Likewise, a Ledger service should be willing to create a new ledger for any consumer, to be used for any purpose. It could be a user, a business, or a one-time special partnership. The calls could be coming from a long-lived application, a bit of script hooked to a URL, or curl in a bash script. Doesn’t matter.
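Here is a minimal in-memory sketch of such a service, using Compojure-style routes. The names and the atom-backed storage are purely illustrative:

```clojure
(ns calendars.service
  (:require [compojure.core :refer [defroutes GET POST]]))

(def calendars (atom {}))

(defn create-calendar!
  "Mint a new calendar for whoever asks; no owner ID required."
  []
  (let [id (str (java.util.UUID/randomUUID))]
    (swap! calendars assoc id {:events []})
    id))

(defroutes routes
  ;; Anyone may create a calendar; the response carries its URL.
  (POST "/calendars" []
    (let [id (create-calendar!)]
      {:status  201
       :headers {"Location" (str "/calendars/" id)}
       :body    id}))
  ;; Later requests present the calendar ID as part of the URL.
  (GET "/calendars/:id" [id]
    (if-let [cal (get @calendars id)]
      {:status 200 :body (pr-str cal)}
      {:status 404 :body "no such calendar"})))
```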

If we’ve got all these services issuing identifiers, we need some way to stitch them back together. That’s where the faceted identities come in. If we start from a user and follow all the related “stuff” connected to that user, it looks a lot like a graph.

When a user logs in to the customer-facing application, that app is responsible for traversing the graph of identities, making requests to services, and assembling the response.
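A sketch of what one application’s graph might hold, with invented URLs. Note that the services issued these identifiers; the application merely collects them:

```clojure
(def identity-graph
  {:root   "user/u-12345"
   :facets {:calendar  "https://calendars.example.com/calendars/c-88"
            :ledger    "https://ledgers.example.com/ledgers/l-301"
            :affiliate ["https://affiliates.example.com/accounts/a-1"
                        "https://affiliates.example.com/accounts/a-2"]}})

;; On login, the app walks the graph, fans out requests to each
;; facet's URL, and assembles the response.
(defn facet-urls [graph]
  (flatten (vals (:facets graph))))
```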

I hope you aren’t surprised when I say that different applications may hold different graphs, with different principals as their roots. That goes along with the idea that there’s no privileged vantage point. Every application gets to act like the center of its own universe.

Going Meta

If you’ve been schooled in database design, this probably looks a little weird. I’m removing the join keys from the relational databases. (Some day soon I need to write a post addressing a common misconception: that “relational” databases got their name because they let you relate tables together.)

The key issue I’m aiming at is really about logical dependencies in the data. Foreign key relationships are a policy statement, not a law of nature. Policies change on short notice, so they should be among the most malleable constructs we have. By putting that policy in the bottommost layer of every application, we make it as hard as possible to change!

We can think of a hierarchy of “looseness” in relationships:

  • Two ideas, stored in one entity: As coupled as it gets. Neither idea can be used without the other. (An “entity” here can be a table or a linked data resource with a URL. It’s not about the storage, but about the required relationship.)
  • Two ideas, two entities, one-to-one: Still, both ideas must be used together.
  • Two ideas, two entities, one-to-one optional: Now we can at least decide whether the second item is needed with the first.
  • Two ideas, two entities, one-to-many: This admits that the second idea may come in different quantities than the first.
  • Two ideas, two entities, many-to-many: Much more flexible! Both ideas can be combined in differing quantities as needed. However, this still requires that these ideas are only used together with each other. In other words, if ideas X and Y have a many-to-many relationship, I don’t get to reuse idea X together with idea A.
  • Two ideas, externalized relationship: This is the heart of faceted identities. Ideas X and Y can be completely independent. Each can be combined with other ideas by other applications, as sketched below.
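A minimal sketch of that last rung, assuming a hypothetical relationship store that holds nothing but pairs of URLs and a label:

```clojure
(def relationships
  [{:left  "https://catalogs.example.com/catalogs/c-7"
    :right "https://ledgers.example.com/ledgers/l-301"
    :kind  :billed-to}
   {:left  "https://catalogs.example.com/catalogs/c-7"
    :right "https://partners.example.com/partners/p-2"
    :kind  :curated-by}])

(defn related-to
  "Everything related to one URL, no matter which service issued it.
  Neither service stores the other's key."
  [url]
  (filter #(or (= url (:left %)) (= url (:right %))) relationships))
```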

Interface Segregation Principle

The “I” in SOLID stands for Interface Segregation Principle. It says that a client should only depend on an interface with the minimum set of methods it needs. An object may support a wide set of behavior, but if my object only needs three of those behaviors, then I should depend on an interface with precisely those three behaviors. (One hopes those three make sense together!)

This has an application when we use faceted identities as well. Sometimes we have a very nice separation where the facets don’t need to interact with each other; only the application interacts with all of them. More often, though, we do need to pass an identifier from one kind of thing into another. That’s when the contract becomes important. If service Y requires a foreign identifier “X” to perform an action, then it needs to be clear about what it will do with “X”. It’s up to the calling application to ensure that the “X” it passes can perform those actions.
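Clojure protocols make a handy illustration of the principle. A record may satisfy several narrow protocols, while each client depends on only the one it needs:

```clojure
;; One narrow protocol per behavior a client might need.
(defprotocol Priceable (price-of [this]))
(defprotocol Shippable (ship-weight [this]))
(defprotocol Taxable   (tax-class [this]))

;; A SKU happens to support all three...
(defrecord Sku [id list-price weight]
  Priceable (price-of [_] list-price)
  Shippable (ship-weight [_] weight)
  Taxable   (tax-class [_] :general))

;; ...but shipping code depends only on the slice it needs, so it
;; works for SKUs, crates, or anything else Shippable.
(defn shipping-quote [thing rate-per-kg]
  (* rate-per-kg (ship-weight thing)))
```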

Summary

Maneuverability is all about composing, recomposing, and combining services in novel configurations. One of the biggest impediments to that is relationships among entities. We want to make those as loose as possible by externalizing the relationships to another service. This allows entities to be used in new ways without coordinated change across services. Furthermore, it allows different applications to use different relationship graphs for their own purposes.

Inverted Ownership, Part 2


My last post on the subject of inverted ownership felt a bit abstract, so I thought I might illustrate it with a typical scenario.

Picture a newly-extracted Catalog service, freshly factored out of the old monolithic application. It’s part of the company’s effort to become more maneuverable. We don’t know, or particularly care, what storage model it uses internally. From the outside, it presents an interface that looks like “SKUs have attributes”.

All seems well. It looks and smells like a microservice: independently deployable, released on its own schedule by a small autonomous team.

The problem is what you don’t see in the picture: context. This service has one “universe” of SKUs. It doesn’t serve catalogs. It serves one catalog. The problem becomes evident when we start asking what consumers of this service would want. If we think of the online storefront as the only consumer then it looks fine. Ask around a bit, though, and you’ll find other interested parties.

While IT toils to get down to a single source of record for product information, the wheelers and dealers in the business are out there signing up partners, inventing marketing campaigns, and looking into new lines of business. Pretty much all of those are going to screw around with the very idea of “the catalog”.

Maneuverability demands that we can combine and recombine our services in novel ways. What can we do with this catalog service that would let it be reused in ways that the dev team didn’t foresee?

Instancing might be one approach… multiple deployments from the same code base. High operational overhead, but it’s better than being stuck.

I prefer to make the context explicit instead.

Zero, One, Many

There’s an old saying that the only sensible numbers are zero, one, and infinity. One catalog isn’t enough, so the right number to support is “infinity.” (Or some resource-constrained approximation.)

What does it take? All we have to do is make the catalog service create catalogs for anyone who asks. Any consumer that needs a catalog can create one. That might be a big, sophisticated online storefront. But it could be someone using cURL to manually construct a small catalog for a one-off marketing effort. The catalog service shouldn’t care who wants the catalog or what purpose they are going to put it to.

Of course, this means that subsequent requests need to identify which catalog the item comes from. Good thing we’re already using URLs as our identifiers.
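From the consumer’s side, the interaction could look like this sketch (invented URLs, clj-http assumed, and I’m supposing the service answers creation requests with the new catalog’s URL in the response body):

```clojure
(require '[clj-http.client :as http])

;; Anyone can mint a catalog; the returned URL is its identity.
(def catalog-url
  (:body (http/post "https://catalogs.example.com/catalogs")))

;; Subsequent requests carry the full catalog URL, so there is no
;; ambient notion of "the catalog" anywhere.
(http/post (str catalog-url "/items")
           {:form-params
            {:item "https://mdm.example.com/skus/sku-42"}})
```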

Considerations

There are some practical issues (and maybe objections) to address.

First, does this mean that the SKUs are duplicated across all those catalogs? Not necessarily. We’re talking about the interface the service presents to consumers. It can do all kinds of deduplication internally. See my post about the immutable shopping cart for some ideas about deduplication and “natural” identifiers.

Second, and trickier, how do the SKUs get associated to the catalog? Does each microsite and service need to populate its own catalog? Can it just cherry-pick items from a “master” catalog?

You can probably guess that I don’t much like the idea of a “master” catalog. Instead, we would populate a newly-minted catalog by feeding it either item representations (serialized data in a well-known format) or better yet, hyperlinks that resolve to item representations.

How about this: make the service support HTML, RDFa, and a standardized microformat as a representation. Then you just feed your catalog service with URLs that point to HTML. Those can come from a catalog of your own, an internal app for cleansing data feeds, or even a partner or vendor’s web site. Now you’ve unified channel feeds, data import, and catalog creation.

Third, is it really true that just anyone can create a catalog? Doesn’t this open us up to denial-of-service attacks wherein someone could create billions of catalogs and goop up our database? My response is that we don’t ignore questions of authorization and permission, but we do separate those concerns. We can use proxies at trust boundaries to enforce permission and usage limits.

Conclusion

When you make the context explicit, you allow a service to support an arbitrary number of consumers. That includes consumers that don’t exist today and even ones you can’t predict. Each service then becomes a part that you can recombine in novel ways to meet future needs.

Inverted Ownership


One of the sources of semantic coupling has to do with identifiers, and especially with synthetic identifiers. Most identifiers are just alphanumeric strings. Systems share those identifiers to ensure that both sides of an interface agree on the entity or entities they are manipulating.

In the move to services, there is an unfortunate tendency to build in a dependency on an ambient space of identifiers. This limits your organization’s maneuverability.

Contextualized IDs

The trouble is that a naked identifier doesn’t tell you what space of identifiers it comes from. There is just this ambient knowledge that a field called Policy ID is issued from the System of Record for policies. That means there can only be one “space” of policy numbers, and they must all be issued by the same SoR.

I don’t believe in the idea of a single system of record. One of my rules for architecture without an end state is “Embrace Plurality”. Whether through business changes or system migrations, you will always end up with multiple systems of record for any concept or entity.

In that world, it’s important that IDs carry along their context. It isn’t enough to have an alphanumeric Policy ID field. You need a URN or URI to identify which policy system issued that policy number.
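The difference is easy to see side by side (URLs invented for illustration):

```clojure
;; A naked identifier: nothing says which system issued it.
(def policy-id "PLCY-0042")

;; A contextualized identifier carries its issuing system along,
;; so two systems of record can coexist without collision.
(def policy-uri-1
  "https://policies-legacy.example.com/policies/PLCY-0042")
(def policy-uri-2
  "https://policies-acquired.example.com/policies/PLCY-0042")
```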

Liberal Issuance

Imagine a Calendar service that tracks events by date and time. It would seem weird for that service to keep all events for every user in the same calendar, right? We should really think of it as a Calendars service. I’d expect to see an API to create a calendar, which returns the URL to “my” calendar. Every other API call then includes that URL, either as a prefix or a parameter.

In the same way, your services should serve all callers and allow them to create their own containers. If you’re building a Catalog service, think of it as a Catalogs service. Anybody can create a catalog, for any purpose. Likewise, a Ledger service should really be Ledgers. Any client can create a ledger for any reason.

This is the way to create services that can be recombined in novel ways to create maneuverability.

The Perils of Semantic Coupling


On the subject of maneuverability, many organizations run into trouble when they try to enter new lines of business, create a partnership, or merge with another company. Updating enterprise systems becomes a large cost factor in these business initiatives, sometimes large enough to outweigh the benefits case. This is a terrible irony: our automation provides efficiency, but removes flexibility.

If you break down the cost of such changes, you’ll find it comes in equal parts from changes to individual systems and changes to integrations across systems. Integrations are always costly and full of risk, and never more so than when changing cardinalities. Partnerships and mergers pretty much always change cardinalities, too.

The cost factor arises from “semantic coupling.” That is the coupling between services introduced because the services need to share concepts. It usually appears as data types or entity names that pop up in many services.

As an example, let’s think about a tiny retailing system with a small set of what I’ll call “macroservices”. One of the most important entity types here is the Stock Keeping Unit, or SKU. It represents “a thing which can be sold”. In a typical retail system, it has a large number of attributes that describe how the item is priced, delivered, displayed on the web, upsold and cross-sold, reviewed, categorized, and taxed.

SKUs are created in a master data management system. There may be a variety of feeds that get massaged into MDM, but we’ll consider that to be outside the boundary of our interest for now. From MDM, the SKU must be distributed to a number of other services: content management, pricing, shipping, order management, and more.

Each of these macroservices uses aspects of the SKU for its own purpose. Content management attaches “telling and selling” content to the SKU so it can be presented nicely on the web. Pricing adds it to the pricing rules. Shipping identifies the carriers, options, and costs to deliver it. Order management–probably a great big silver beast of a system–tracks inventory, orders, delivery rules, returns, and a lot more.

Now what happens if we have to make a major change to the SKU? Let’s imagine that we want to change how we manage prices. In the past, merchants set prices on each item individually. Now, we’ve got too much in the catalog for that to scale so we introduce the idea of price points for digital items. A price point is a price that applies to a large number of SKUs. When we change the price point, all SKUs that refer to it should be changed at the same time. So, if we decide to reduce the price of a low-bitrate MP3 track from $0.99 to $0.89, we can just change a single price point record.

How many systems do we have to change for this new concept?

If we consider “price point” to be part of our core domain, then we have to add that concept everywhere. The surface area of that change is really large, and it will be a costly change to make. It might even be too costly to be worth doing. We could hire a small army of temp workers to update price records by hand twice a year and still come out ahead. That’s not a very satisfying answer though. All this automation is supposed to make us more efficient! What good is it if we are stuck with outdated processes because our systems are too hard to change?

The key problem is semantic coupling. There are a lot of systems here that shouldn’t need to care about the “price point” concept. It has no bearing on the digital locker, shipping, or ratings & reviews.

In this example, we can reduce the semantic coupling. Simply decide that “price point” is not a core concept. It is a detail of data management for the MDM system. Everything downstream receives SKUs with a list price. No downstream system should care how that list price was determined.

This decision flattens a many-to-one relationship from SKU to price point. In so doing, we get a huge benefit. We eliminate an entire entity and all references to it from all the downstream systems.
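A sketch of that flattening, with invented record shapes. The price point lives only inside MDM; the published SKU carries a plain list price:

```clojure
;; Inside MDM only: one price point shared by many SKUs.
(def price-points {:pp-mp3-low {:list-price 0.89M}})

(def skus
  [{:sku "mp3-0001" :price-point :pp-mp3-low}
   {:sku "mp3-0002" :price-point :pp-mp3-low}])

;; At the boundary, resolve the reference and drop it. Downstream
;; systems never learn that price points exist.
(defn publish [sku]
  (-> sku
      (assoc :list-price
             (get-in price-points [(:price-point sku) :list-price]))
      (dissoc :price-point)))

;; (map publish skus)
;; => ({:sku "mp3-0001", :list-price 0.89M}
;;     {:sku "mp3-0002", :list-price 0.89M})
```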

I would even make a case for shattering the concept of SKU into multiple separate concepts. MDM may keep that concept. Downstream, though, each system has its own set of internal concepts. We should treat identifiers from other systems as opaque tokens that we map onto our own system’s space.

For example, the pricing service doesn’t need to know that it is pricing SKUs. It just needs to price “things that can be priced.” I know, it sounds tautological, but I think we get misled as humans… we think of SKU as a unitary concept so we build it as such in our systems. But look what happens if we say a pricing service can price “stuff and things” as long as they have some mapping in the pricing service itself. We can add an entirely new universe of things to price, without forcing everything on Earth to be a SKU!
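Here is a sketch of a pricing service built that way. It never hears the word SKU; it prices whatever opaque identifiers have been registered with it:

```clojure
(def price-mappings (atom {}))

(defn register-priceable!
  "Map any external identifier onto a pricing rule. The service
  treats the identifier as an opaque token."
  [external-id rule]
  (swap! price-mappings assoc external-id rule))

(defn price [external-id]
  (:amount (get @price-mappings external-id)))

;; SKUs, bundles, and one-off partnership deals all price the same:
(register-priceable! "https://mdm.example.com/skus/sku-42"
                     {:amount 19.99M})
(register-priceable! "https://partners.example.com/bundles/b-7"
                     {:amount 499.00M})
```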

We should scrutinize each of the other services, asking ourselves, “Does this really care about a SKU? Or does it care about something that a SKU happens to possess?” I would argue that in each case, the service really cares about “Thing that can be Xed”. Priced, taxed, shipped, reviewed, etc. Are SKUs the only things that can be taxed? Are they the only things that can be reviewed? Etc.

Iterate this process and four things will happen:

  1. Your services will shrink.
  2. Your services will become much more general.
  3. Each service will own its own space of identifiers.
  4. Your organization will become more maneuverable.

The key point I want to make here is that a concept may appear to be atomic just because we have a single word to cover it. Look hard enough and you will find seams where you can fracture that concept. Don’t share the whole thing. Don’t couple all your downstream systems to the whole concept, and definitely don’t couple your downstream to a complex of related concepts! It’s a cardinal sin.

Maneuverability


Agile development works best at team scale. When a team can self-organize, refine their methods, build the toolchain, and adapt it to their needs, they will execute effectively. We should be happy to achieve that! I worry when we try to force-fit the same techniques at larger scales.

At the scale of a whole organization, we need to look at the qualities we want to have. (We can’t necessarily produce those qualities directly, but we can create the conditions that allow them to emerge.) When we look at attempts to scale agile development up, the quality the org wants is maneuverability.

Maneuverability is the ability to change your vector rapidly. It’s about gaining, shedding, or redirecting momentum. Keeping with the analogy of momentum, we can call that which resists change in the momentum vector “inertial mass.” Personnel are mass, because it’s relatively hard to add or shed personnel. Technical debt is a component of mass, too. It makes changes to your technical strategy harder. Actually, I’d even go so far as to say that code itself is mass. KLOCs kill.

Maneuverability has been explored most fully by the military. Superior maneuverability allows a fighter aircraft to get inside the enemy’s turn radius, then shoot for the kill. An army with high maneuverability can engage, disengage, and reorient to exploit an enemy’s weakness. In the words of John Boyd, it allows you to separate your opponent into multiple, non-cooperating centers of gravity.

Maneuverability is an emergent property. It requires a number of prerequisites in the organization’s structure, leadership style, operations, and ability to execute. I firmly believe that maneuverability requires a great ability to execute at the micro scale.

Agile development provides that ability to execute in software development. It is a necessary, but not sufficient, part of maneuverability. There are other necessary capabilities in the technical arena. I think that infrastructure and architecture have important roles to play for maneuverability as well.

I have previously given talks on the subject of maneuverability. I’ll also be posting some further thoughts about pertinent architecture decisions.

Bad Layering


If I had to guess, I would say that “Layers” is probably the most commonly applied architecture pattern. And why not? Parfaits have layers, and who doesn’t like a parfait? So layers must be good.

Like everything else, though, there’s a good way and a bad way.

The usual Neapolitan stack has three layers: GUI on top, domain in the middle, persistence on the bottom.

On one of my favorite projects of all, we used more layers because we wanted to further isolate different behaviors. In that project, we added a “UI Model” distinct from the “Domain.”

We impose this style because we want to separate concerns. This should provide us with two big benefits. First, we can change the contents of each layer independently. So changes to the GUI should not affect the domain, and changes to the domain should not affect persistence. The second benefit we want is the ability to substitute a layer. We may swap out a layer for the sake of testing (often in the case of persistence layers) or for different product configurations.

People sometimes make an argument for swapping out a layer in case of technology change. That argument is used for ORMs in the persistence layer, but I don’t find it convincing. Changing persistence on an existing application is by far not the most common kind of change. You’d be buying an expensive option that is seldom exercised.

When Good Layers Go Bad

The trouble arises when the layers are built such that we have to drill through several of them to do something common. Have you ever checked in a commit that had a bunch of new files like “Foo”, “FooController”, “FooForm”, “FooFragment”, “FooMapper”, “FooDTO”, and so on? That, dear reader, is a breakdown in layering.

It comes from each layer being decomposed along the same dimension. In this case, aligned by domain concept. That means the domain layer is dominating the other layers.

I would much rather see each layer have objects and functions that express the fundamental concepts of that layer. “Foo” is not a persistence concept, but “Table” and “Row” are. “Form” is a GUI concept, as is “Table” (but a different kind of table than the persistence one!) The boundary between each layer should be a matter of translating concepts.

In the UI, a domain object should be atomized into its constituent attributes and constraints. In persistence, it should be atomized into rows in one or more tables (in SQL-land) or one or more linked documents.

What appears as a class in one layer should be mere data to every other layer.

How Does It Happen?

This breakdown in layering can arise from more than one dynamic process.

  1. The application framework may impose this structure.
  2. The language may not have abstractions powerful enough to make it pleasant to work with data.
  3. TDD without enough refactoring. Each thin slice through the application adds one more strand of “Foo and Friends”. Truly merciless refactoring would pull out the common behavior sideways into the layer-specific concepts I described above. Lacking merciless refactoring, the project will accrete sticky strands like cotton candy on a toddler.
  4. The team may not have seen it done any other way.

What If It Happens To You?

Maybe you already have degenerate layers. Assuming they aren’t required by your framework, start looking for opportunities to refactor. Don’t just build a class hierarchy so you can inherit implementations. Rather, look for common patterns of interaction. Figure out how to turn the code you’ve got in classes into data acted on by classes relevant to the layer.

Use maps. Convert objects into maps from field identifier to an object that represents the salient aspect of the field for that layer, as sketched after this list:

  • For a GUI, those aspects will be something like “lexical type”, “editable”, “constraint” / “validation”, “semantic class”, and so on.
  • For persistence, they will deal with “length”, “representation format”, “referent,” etc.
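For instance, a hypothetical “Foo” might be described to each layer as plain data, rather than smeared across FooForm, FooDTO, and friends:

```clojure
;; The GUI layer sees only GUI concepts for Foo's fields...
(def foo-ui-fields
  {:name  {:lexical-type   :string
           :editable       true
           :constraint     {:max-length 120 :required true}
           :semantic-class :display-name}
   :price {:lexical-type   :decimal
           :editable       false
           :semantic-class :currency}})

;; ...while persistence describes the same fields in its own terms.
(def foo-table-fields
  {:name  {:column "name"  :length 120 :format :varchar}
   :price {:column "price" :format [:decimal 10 2]}})

;; Generic layer code acts on the descriptions; Foo itself is mere
;; data to this layer.
(defn editable-fields [fields]
  (for [[k v] fields :when (:editable v)] k))
```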

Seek and destroy DTOs. They should be maps.

A DTO clearly indicates that your class is crossing a boundary. And yet, it requires that code on both sides of the boundary be written to the method signatures of the DTO. That means there is precisely zero translation at the boundary.

Where To Go From Here

Let me be clear, I like parfaits. (Yogurt and fruit! Ice cream, nuts, caramel!) I have nothing against layers. Most of my applications are built from layers. It’s just that getting the benefits we seek requires more effort than smearing a single domain concept across multiple subdirectories.

If “Layers” is the only architecture pattern you’ve used, then you’re in for a treat. There are plenty of other fundamental structures to explore. Pipes and filters. Blackboard. Components. Set GoF aside and go read Pattern-Oriented Software Architecture. The whole series is a treasure trove and an encyclopedia.

People Don’t Belong to Organizations


One company that gets this right is GitHub. I exist as my own person there. I’m affiliated with my employer as well as other organizations.

We are long past the days of “the company man,” when a person’s identity was solely bound to their employer. That relationship is much more fluid now.

A company that gets it wrong is Atlassian. I’ve left behind a trail of accounts in various Jirae and Confluences. Right now, the biggest offender in their product lineup is HipChat. My account is identified by my email address, but it’s bound up with an organization. If I want to be part of my employer’s HipChat as well as a client’s, I have to resort to multiple accounts signed up with plus addresses. It’s great that GMail supports that, but I still can’t log in to more than one account at a time.

More generally, this is a failure in modeling. Somewhere along the line, somebody drew a line between `Organization` and `Person` on their model, with a one-to-many relationship. One `Organization` can have many `Person` entities, but each `Person` belongs to exactly one `Organization`.

I’ll go even further. The proper way to approach this today is to relate `Organization` and `Person` by way of another entity. Reify the association! Is it employment? Put the start and end dates on the employment. Oh, and don’t delete the association once it ends… that’s erasing it from history.
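As data, the reified association might look like this sketch (names and dates invented):

```clojure
(def affiliations
  [{:person       "person/alice"
    :organization "org/acme"
    :kind         :employment
    :start        #inst "2009-03-01"
    :end          #inst "2012-06-30"}   ; ended, but never deleted
   {:person       "person/alice"
    :organization "org/bravo-client"
    :kind         :contract
    :start        #inst "2013-01-15"
    :end          nil}])                ; still active
```

Many-to-many falls out naturally: nothing stops a second person from appearing with org/acme, or alice from holding three affiliations at once.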

I think the default for pretty much any relationship these days should be many-to-many. Particularly any data relationship that models a real relationship in the external world. We shouldn’t let the bad old days of SQL join tables deter us from doing the right thing now.

Glue Fleet and Compojure Together Using Protocols


Inspired by Glenn Vanderburg’s article on Clojure templating frameworks, I decided to try using Fleet for my latest pet project. Fleet has a very nice interface. I can call a single function to create new Clojure functions for every template in a directory. That really makes the templates feel like part of the language. Unfortunately, Glenn’s otherwise excellent article didn’t talk about how to connect Fleet into Compojure or Ring. I chose to interpret that as a compliment, springing from his high esteem of our abilities.

My first attempt, just calling the template function directly as a route handler, resulted in the following:

    java.lang.IllegalArgumentException: No implementation of method: :render of protocol: #'compojure.response/Renderable found for class: fleet.util.CljString

Ah, you’ve just got to love Clojure errors. After you understand the problem, you can always see that the error precisely described what was wrong. As an aid to helping you understand the problem… well, best not to dwell on that.

The clue is the protocol. Compojure knows how to turn many different things into valid response maps. It can handle nil, strings, maps, functions, references, files, seqs, and input streams. Not bad for 22 lines of code!

There’s probably a simpler way that I can’t see right now, but I decided to have CljString support the same protocol.
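The snippet I originally embedded here came from a gist; a reconstruction along the same lines looks like this (the views namespace and template names are illustrative, not the exact original):

```clojure
(ns example.web
  (:require [compojure.core :refer [defroutes GET]]
            [fleet :refer [fleet-ns]]))

;; Creates a Clojure function for every template file in the
;; "views" directory, e.g. views/index.
(fleet-ns views "views")

;; Teach Compojure to render Fleet's output: extend its
;; Renderable protocol onto fleet.util.CljString.
(extend-protocol compojure.response/Renderable
  fleet.util.CljString
  (render [this _request]
    {:status  200
     :headers {"Content-Type" "text/html"}
     :body    (.toString this)}))

;; Now a template function can be a route handler directly.
(defroutes main-routes
  (GET "/" [] (views/index)))
```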

Take a close look at the call to extend-protocol. I’m adding a protocol–which I didn’t create–onto a Java class–which I also didn’t create. My extension calls a function that was created at runtime, based on the template files in a directory. There’s deep magic happening beneath those few lines of code.

Because I extended Renderable to cover CljString, I can use any template function directly as a route function, as in the route definition above. (The function views/index was created by the call to fleet-ns.)

So, I glued together two libraries without changing the code to either one, and without resorting to Factories, Strategies, or XML-configured injection.