Wide Awake Developers


Time motivates architecture

Let's engage in a thought experiment for a moment. Suppose that software were trivial to create and only ever needed to be used once. Completely disposable. So, somebody comes to you and says, "I have a problem and I need you to solve it. I need a tool that will do blah-de-blah for a little while." You could think of the software the way a carpenter thinks of a jig for cutting a piece of wood on a table saw, or the way a metalworker thinks of a jig for drilling a hole at the right angle and depth.

If software were like this, you would never care about its architecture. You would spend a few minutes to create the thing that was needed, it would be used for the job at hand, and then it would be thrown away. It really wouldn't matter how good the software was on the inside--how easy it was to change--because you'd never change it! It wouldn't matter how it adapted to changing business requirements, because you'd just create a new one when the new requirement came up. In this thought experiment we wouldn't worry about architecture.

The key difference between this thought experiment and actual software? Of course, actual software is not disposable. It has a lifespan over some amount of time. Really, it's the time dimension that makes architecture important.

Over time, we need many different people to work effectively in the software. Over time, we need the throughput of features to stay constant, or at least not decrease too much--maybe it even increases in particularly nice cases. Over time, the business's needs change, so we need to adapt the software.

It's really time that makes us care about architecture.

Isn't it interesting then, that we never include time as a dimension in our architecture descriptions?

Circuit Breaker in Scala

FaKod (I think that translates as "The Fatalistic Coder"?) has written a nice Scala implementation of the Circuit Breaker pattern, and even better, has made it available on GitHub.

Check out http://github.com/FaKod/Circuit-Breaker-for-Scala for the code.

The Circuit Breaker can be mixed in to any type. See http://wiki.github.com/FaKod/Circuit-Breaker-for-Scala/ for an example of usage.
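
I haven't copied FaKod's code here, but a mixin-style breaker generally has the shape sketched below. All of the names, thresholds, and timeouts are mine, not the library's, so treat this as an illustration of the idea rather than the actual API.

    import java.util.concurrent.atomic.{AtomicInteger, AtomicLong}

    // A minimal mixin-style circuit breaker. Names, thresholds, and timeouts
    // are illustrative; this is not FaKod's actual API.
    trait CircuitBreaker {
      protected val failureThreshold: Int = 5        // trip after this many consecutive failures
      protected val retryTimeoutMillis: Long = 30000 // stay open this long before letting a call through

      private val failures = new AtomicInteger(0)
      private val openedAt = new AtomicLong(0)

      /** Wrap a risky call; fail fast while the breaker is open. */
      protected def withCircuitBreaker[T](body: => T): T = {
        if (isOpen && !retryTimeoutElapsed)
          throw new IllegalStateException("circuit breaker is open; failing fast")
        try {
          val result = body
          failures.set(0)     // success closes the breaker again
          openedAt.set(0)
          result
        } catch {
          case e: Throwable =>
            if (failures.incrementAndGet() >= failureThreshold)
              openedAt.set(System.currentTimeMillis())  // trip (or re-trip) the breaker
            throw e
        }
      }

      private def isOpen: Boolean = openedAt.get() > 0
      private def retryTimeoutElapsed: Boolean =
        System.currentTimeMillis() - openedAt.get() > retryTimeoutMillis
    }

    // Mixed in to an ordinary service class:
    class PaymentGateway extends CircuitBreaker {
      def charge(amountCents: Long): String = withCircuitBreaker {
        // ...remote call that may fail or time out...
        s"charged $amountCents cents"
      }
    }

The interesting decisions are what counts as a failure and how the half-open period lets a trial call through; the real library will have its own answers, so see FaKod's wiki for the actual usage.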

The Future of Software Development

I've been asked to sit on a panel regarding the future of software development. This is always risky and makes me nervous, for two reasons. First, prediction is a notoriously low success-rate activity. Second, the people you always see making predictions like this are usually well past their "use by" date. Nevertheless, here are a collection of barely-related thoughts I have on that subject.

  • Two obvious trends are cloud computing and mobile access. They are complementary. As the number of people and devices on the net increases, our ability to shape traffic on the demand side gets worse. Spikes in demand will happen faster and reach higher levels over time. Mobile devices exacerbate the demand side problems by greatly increasing both the number of people on the net and the fraction of their time they are able to access it.

  • Large traffic volumes both create and demand large data. Our tools for processing tera- and petabyte datasets will improve dramatically. Map/Reduce computing (a la Hadoop) has generated attention and excitement in this space, but it is ultimately just one tool among many. We need better languages to help us think about and express large data problems. In particular, we need a language that makes big data processing accessible to people with little background in statistics or algorithms.

  • Speaking of languages, many of the problems we face today cannot be solved inside a single language or application. The behavior of a web site today cannot be adequately explained or reasoned about just by examining the application code. Instead, a site picks up attributes of behavior from a multitude of sources: application code, web server configuration, edge caching servers, data grid servers, offline or asynchronous processing, machine learning elements, active network devices (such as application firewalls), and data stores. "Programming" as we would describe it today--coding application behavior in a request handler--defines a diminishing portion of the behavior. We lack tools or languages to express and reason about these distributed, extended, fragmented systems. Consequently, it is difficult to predict the functionality, performance, capacity, scalability, and availability of these systems.

  • Some of this will be mitigated naturally as application-specific functions disappear into tools and frameworks. Companies innovating at the leading edge of scalability today are writing application-specific code to compensate for deficiencies in tools and platforms. For example, caching servers could arguably disappear into storage engines and no-one would complain. In other words, don't count the database vendors out yet. You'll see key-value stores and in-memory data grid features popping up in relational databases any day now.

  • In general, it appears that Objects will diminish as a programming paradigm. Object-oriented programming will still exist... I'm not claiming "the death of objects" or something silly like that. However, OO will become just one more paradigm among several, rather than the dominant paradigm it has been for the last 15 years. "Object oriented" will no longer be synonymous with "good".

  • Some people have talked about "polyglot programming". I think this is a red herring. Polyglot programming is a reality, but it should not be a goal. That is, programmers should know many languages and paradigms, but deliberately mixing languages in a single application should be avoided. What I think we will find instead is mixing of paradigms, supported by a single primary language, with adjunct languages used only as needed for specialized functions. For example, an application written in Scala may mix OO, functional, and actor-based concepts, and it may have portions of behavior expressed in SQL and JavaScript; nevertheless, it will still primarily be a Scala application. (There's a small sketch of that kind of mixing after this list.) The fact that Groovy, Scala, Clojure, and Java all run on the Java Virtual Machine shouldn't mislead us into thinking that they are interchangeable... or even interoperable!

  • Regarding Java: I fear that Java will have to be abandoned to the "Enterprise Development" world. It will be relegated to the hands of cut-rate business coders bashing out their gray business applications for $30 / hour. We've passed the tipping point on this one. We used to joke that Java would be the next COBOL, but that doesn't seem as funny now that it's true. Java will continue to exist. Millions of lines of it will be written each year. It won't be the driver of innovation, though. If you're an individual programmer, I'd recommend that you learn another language immediately and differentiate yourself from the hordes of low-skill, low-rent outsource coders that will service the mainstream Java consumer.

  • Where will innovation come from? Although some of the blush seems to be coming off Ruby, the reduction in hype has mainly allowed Ruby and Ruby on Rails developers to knuckle down and produce. That community continues to drive tremendous innovation. Many of the interesting developments here relate to process. Ruby developers have given us fantastic tools like Gems and Capistrano that let small teams outperform and outproduce groups four times their size.

  • To my great surprise, data storage has become a hotbed of innovation in the last few years. Some of this is driven by the high-scalability fetishists, which is probably the wrong reason for 98% of companies and teams. However, innovations around column stores, graph databases, and key-value stores offer developers new tools to reduce the impedance mismatch between their data storage and their programming language. We spent twenty years trying to squeeze objects into relational databases. Aside from the object databases, which were an early casualty of Oracle's ascension, we mostly focused on changing the application code through framework after framework and ORM after ORM. It's refreshing to see storage models that are easier to use and easier to modify.

  • This will also cause another flurry of "reactive innovation" from the database vendors, just as we saw with "Universal Databases" in the mid-90s. The big players here--Microsoft and Oracle--won't let some schemaless little upstarts erode their market share. More significantly, they aren't about to let their flagship products--and the ones which give them beachheads inside every major corporation--get intermediated by some open-source frameworks banged out by the social network giants. Look for big moves by these vendors into high scalability, agile storage, and eventual consistency storage.
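
To illustrate the paradigm-mixing point from the list above, here's a toy Scala sketch that leans on objects for the domain model, functions for the transformations, and message-passing style for the stateful part. I've faked the actor flavor with plain pattern matching rather than pulling in an actor library, and every name here is invented for the example.

    // One application, several paradigms.

    // Object-oriented: a small domain model.
    case class Order(id: Long, amountCents: Long, express: Boolean)

    // Functional: pure transformations over immutable collections.
    object Pricing {
      def totalCents(orders: Seq[Order]): Long =
        orders.map(_.amountCents).sum

      def expressOnly(orders: Seq[Order]): Seq[Order] =
        orders.filter(_.express)
    }

    // Actor-flavored: behavior expressed as reactions to messages.
    sealed trait Command
    case class Place(order: Order) extends Command
    case object Report extends Command

    class OrderHandler {
      private var placed: Vector[Order] = Vector.empty

      def receive(msg: Command): Unit = msg match {
        case Place(order) => placed = placed :+ order
        case Report       => println(s"total = ${Pricing.totalCents(placed)}")
      }
    }

    object Demo extends App {
      val handler = new OrderHandler
      handler.receive(Place(Order(1, 1999, express = true)))
      handler.receive(Place(Order(2, 4200, express = false)))
      handler.receive(Report)   // prints: total = 6199
    }

The point is that all three styles live comfortably in one primary language; the SQL and JavaScript fragments stay at the edges.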

Failover: Messy Realities

People who don't live in operations can carry some funny misconceptions in their heads. Some of my personal faves:

  • Just add some servers!
  • I want a report of every configuration setting that's different between production and QA!
  • We're going to make sure this (outage) never happens again!

I've recently been reminded of this during some discussions about disaster recovery. This topic seems to breed misconceptions. Somewhere, I think most people carry around a mental model of failover that looks like this:

[Diagram: normal operations transitions directly and cleanly to "failed over."]

That is, failover is essentially automatic and magical.

Sadly, there are many intermediate states that aren't found in this mental model. For example, there can be quite some time between a failure and its detection. Depending on the detection and notification mechanisms, there can be quite a delay before failover is initiated at all. (I once spoke with a retailer whose primary notification mechanism seemed to be the Marketing VP's wife.)

Once you account for delays, you also have to account for faulty mechanisms. Failover itself often fails, usually due to configuration drift. Regular drills and failover exercises are the only way to ensure that failover works when you need it. When the failover mechanisms themselves fail, your system gets thrown into one of these terminal states that require manual recovery.

Just off the cuff, I think the full model looks a lot more like this:

[Diagram: many more states exist in the real world, including failure of the failover mechanism itself.]

It's worth considering each of these states and asking yourself the following questions:

  • Is the state transition triggered automatically or manually?
  • Is the transition step executed by hand or through automation?
  • How long will the state transition take?
  • How can I tell whether it worked or not?
  • How can I recover if it didn't work?
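
To make those questions concrete, here's a rough sketch of the fuller model as a state machine. The state and event names are my own reading of the diagram, not a definitive taxonomy, and real systems will have more arrows than this.

    // States a system can actually be in around a failover, including the
    // ugly ones where the failover mechanism itself breaks. Names are mine.
    sealed trait FailoverState
    case object Normal             extends FailoverState
    case object FailedUndetected   extends FailoverState // broken, but nobody knows yet
    case object FailureDetected    extends FailoverState // alert fired, nothing started
    case object FailoverInProgress extends FailoverState // scripts (or humans) at work
    case object FailedOver         extends FailoverState // running on the secondary
    case object FailoverFailed     extends FailoverState // drift bit us; now it's manual
    case object ManualRecovery     extends FailoverState

    sealed trait Event
    case object PrimaryFails      extends Event
    case object FailureNoticed    extends Event
    case object FailoverInitiated extends Event
    case object FailoverSucceeded extends Event
    case object FailoverBroke     extends Event
    case object OperatorStepsIn   extends Event

    object FailoverModel {
      // Every transition here is a place to ask the questions above:
      // automatic or manual? how long? how do we know? how do we recover?
      def next(state: FailoverState, event: Event): FailoverState =
        (state, event) match {
          case (Normal,             PrimaryFails)      => FailedUndetected
          case (FailedUndetected,   FailureNoticed)    => FailureDetected
          case (FailureDetected,    FailoverInitiated) => FailoverInProgress
          case (FailoverInProgress, FailoverSucceeded) => FailedOver
          case (FailoverInProgress, FailoverBroke)     => FailoverFailed
          case (FailoverFailed,     OperatorStepsIn)   => ManualRecovery
          case (s, _)                                  => s // anything else: stay put
        }
    }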

Life's Little Frustrations

A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable. -Leslie Lamport

On my way to QCon Tokyo and QCon China, I had some time to kill so I headed over to Delta's Skyclub lounge. I've been a member for a few years now. And why not? I mean, who could pass up tepid coffee, stale party snacks, and a TV permanently locked to CNN? Wait... that actually doesn't sound like such a hot deal.

Oh! I remember, it's for the wifi access. (Well, that plus reliably clean bathrooms, but we need not discuss that.) Being able to count on wifi access without paying for yet another data plan has been pretty helpful for me. (As an aside, I might change my tune once I try a mifi box. Carrying my own hotspot sounds even better.)

Like most wifi providers, the Skyclub has a captive portal. Before you can get a TCP/IP connection to anything, you have to submit a form with a checkbox to agree to 89 pages of terms and conditions. I'm well aware that Delta's lawyers are trying to make sure the company isn't liable if I go downloading bootlegs of every Ally McBeal episode. But I really don't know if these agreements are enforceable. For all I know, page 83 has me agreeing to 7 years indentured servitude cleaning Delta's toilets.

Anyway, Delta has outsourced operations of their wifi network to Concourse Communications. And apparently, they've had an outage all morning that has blocked anyone from using wifi in the Minneapolis Skyclubs. When I submit the form with the checkbox, I get the following error page:

Including this bit of stacktrace:

There's a lot to dislike here.

  1. Why is this yelling at me, the user? To anyone who isn't a web site developer, this makes it sound like the user did something wrong. There's a ton of scary language here: "instance-specific error", "allow remote connections", "Named Pipes Provider"... heck, this sounds like it's accusing the user of hacking servers. "Stack trace" sure sounds like the Feds are hot on somebody's trail, doesn't it?
  2. Isn't it fabulous to know that Ken keeps his projects on his D: drive? If I had to lay bets, I'd say that Ken screwed up his connection string. In fact, the whole problem smells like a failed deployment or poorly executed change. Ken probably pushed some code out late on a Friday afternoon, then boogied out of town. My prediction (totally unverifiable, of course) is that this problem will take less than 5 minutes to resolve, once Ken gets his ass back from the beach.
  3. We mere users get to see quite a bit of internal information here. Nothing really damaging, unless of course Wilson ORMapper has some security defects or something like that.
  4. Stepping back from this specific error message, we have the larger question: is it sensible to couple availability of the network to the availability of this check-the-box application? Accessing the network is the primary purpose of this whole system. It is the most critical feature. Is collecting a compulsory boolean "true" from every user really as important as the reason the whole damn thing was built in the first place? Of course not! (As an aside, this is an example of Le Chatelier's Principle: "Complex systems tend to oppose their own proper function.")

We see this kind of operational coupling all the time. Non-critical features are allowed to damage or destroy critical features. Maybe there's a single thread pool that services all kinds of requests, rather than reserving a separate pool for the important things. Maybe a process is overly linearized and doesn't allow for secondary, after-the-fact processing. Or, maybe a critical and a non-critical system both share an enterprise service--producing a common-mode dependency.

Whatever the proximate cause, the underlying problem is lack of diligence in operational decoupling.
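
To pick on just one of those examples, here's a minimal Scala sketch of the thread-pool flavor of decoupling, using the captive portal as the scenario. The pool sizes, names, and method bodies are all invented for illustration.

    import java.util.concurrent.Executors
    import scala.concurrent.{ExecutionContext, Future}

    // Bulkheads: the critical feature gets its own pool, so a pile-up of
    // non-critical work can never starve it. Sizes are illustrative.
    object Pools {
      val critical: ExecutionContext =
        ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(16))
      val nonCritical: ExecutionContext =
        ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))
    }

    object PortalService {
      // The reason the system exists: let the user onto the network.
      def grantNetworkAccess(macAddress: String): Future[Unit] =
        Future { /* open the firewall rule for this client */ } (Pools.critical)

      // Nice to have, but it must never block the critical path. If the
      // database is down, queue the record and move on.
      def recordTermsAcceptance(macAddress: String): Future[Unit] =
        Future { /* write the compulsory boolean, retry later on failure */ } (Pools.nonCritical)
    }

With a split like this, the worst case when the terms-acceptance database disappears (as it apparently did in Minneapolis) is a missing audit record, not a lounge full of travelers without wifi.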