Wide Awake Developers


Coupling and Coevolution

The mighty Mississippi River starts in Minnesota, at Lake Itasca. Every kid in Minnesota has to make the ritual pilgrimage to Itasca State Park at some point, where wading across one of North America's longest rivers is a rite of passage.

Mississippi River Starts Here

One of the very interesting things in Itasca State Park is a section of forest that is fenced off so that deer cannot enter it. It's part of a decades-long experiment to see how forests are affected by browsing herbivores. What's really interesting is that not only is the quantity of plants different inside the protected area, but the types of plants and trees are different, too. Because deer prefer to nibble on younger trees, fewer saplings survive in the main body of the forest than in the fenced-off portion. Outside the fence, the distribution of tree size and age is biased toward older trees. The population of trees is weighted more toward resinous species like pines, which deer prefer not to eat. Inside the fence, more saplings survive into young maturity, so you see a more even distribution of tree ages and a wider diversity of species represented in the mature trees. The changes in the canopy affect the ground cover which, in turn, changes how deer could (if allowed) reach the trees and browse them.

So, here's a feedback loop that involves deer, trees, leaves and brush. The net result is a different ecosystem (albeit a slightly artificial one.)

Most physical and biological systems are like this in several ways, particularly relating to feedback. In our artificial systems (electrical, mechanical, symbolic, or semantic) we build in feedback mechanisms as a deliberate control. These are often one dimensional, proportional, and negative.

In natural systems, feedback arises everywhere. Sometimes, it proves to be helpful for the long-term stability of the system. In which case, the feedback itself gets reinforced by the existence and perpetuation of the system it exists within. In a sense, the system adapts to reinforce beneficial feedback. Conversely, feedback webs that cause too much instability will, like an overly aggressive virus, lead to destruction of their host system and disappear. So, we can see the constituents of a system co-evolving with each other and the system itself.

The old "microphone-amplifier-speaker-squealing" example of feedback really fails here. We lack both language and metaphor to really grasp this kind of interaction over time. In part, I think that's because we like to separate the world into isolated components and only talk about components at a single level of abstraction. The trouble is that abstractions like "level of abstraction" only exist in our minds.

Here's another example of coevolution, courtesy of Jared Diamond in "Guns, Germs, and Steel". I'll apologize in advance for oversimplifying; I'm devoting a paragraph to an argument he develops across entire chapters.

At some point, a group of nomads decided that the seeds of these particular grasses were tasty. In collecting the grasses, they spread the seeds around. Some kinds of seeds survived the winter better and responded well to being sown by humans. Now, nobody sat down and systematically picked out which seeds grew better or worse. They didn't have to, because the seeds that grew better produced more seeds for the next generation. Over time, a tiny difference (fractions of a percent) in productivity would lead some strains to supplant the others. Meanwhile, inextricably linked, some humans figured out how to plant, harvest, and eat these early grains. These humans had an advantage over their neighbors, so they were able to feed more babies. That turns out to be a benefit, because farming is hard work and requires more offspring to help produce food. (Another feedback loop.) Oh, and this kind of labor makes it advantageous to keep livestock, too. Over time, these farmers would breed and feed more children than the nomads, so farmers would come to be a larger and larger percentage of the population. Just as an added wrinkle, keeping livestock and fertilizing fields both lead to diseases that simultaneously harm the individuals and occasionally decimate the population, but also provide some long-term benefits such as better disease resistance and inadvertent biological warfare when encountering other civilizations.

Try to diagram the feedback loops here: nomads, farmers, livestock, grains, birthrates, and so on. Everything is connected to everything else. It's really hard to avoid slipping into teleological language here. We've got feedback and feedforward at several different levels and timescales, from the scale of microbes to livestock to civilizations, and across centuries. This dynamic altered the course of many species' evolution: cattle, wheat, maize, and yes, good old H. sapiens.

This complexity of interaction extends to planetary and stellar levels as well. At some sufficiently long time scale, the intergalactic medium is coupled to our planetary ecosystem.

The human intellectual penchant for decomposition, isolation, and leveled abstraction is purely an artifact of the size of our bodies and the duration of our lives.

2009 Calendar as OmniGraffle Stencil

I had need of a stencil that would let me drop monthly calendars on a number of pages. I found it useful, and someone else might, too.

Download the stencil.


I made a LibraryThing list of books relevant to the stuff that's banging around in my head now. These are in no particular order or organization. In fact, this is a live widget, so it might change as I think of other things that should be on the list.

The key themes here are time, complexity, uncertainty, and constraints. If you've got recommendations along these lines, please send them my way.

Cold Turkey

Last night, I did something pretty drastic.  It wasn't on impulse... I had been thinking about this for quite a while. Finally, I decided to take the band-aid approach and just do it all at once.

I deleted all my games.

New and old alike, they all went.  Bioshock, System Shock, System Shock II.  GTA IV. GTA: Vice City. (I skipped San Andreas.) Venerable Diablo I and II, not to mention their leering cousin Overlord. Age of Empires. Several versions of Peggle and Bejeweled. Warcraft III. Every incarnation of Half-Life and Half-Life 2. Uplink, Darwinia, Wingnuts, Weird Worlds and SPORE.

Well, OK, it  wasn't that hard to give up SPORE, but seriously, deleting Darwinia hurt.

Why chuck hundreds of dollars of software into the bin? It's all about time. My own time and time with a capital 'T'. I need time to understand Time. Too much recombinant thought has taken up residence. It's time to marshal these unruly ideas and get them out. So, the games served during gestation, but now it's time and Time and past time for me to put them aside and get scholastic. Put pen to paper, or fingers to keyboard. Time to run some numbers, see the scenarios, and try to synthesize a cohesive whole. Time to abstract and distill and methodologize.

I know I'm being obscure. How can I not? Take the number of people exposed to a given process theory (OODA). Multiply it by the fraction who also know the second through the seventh (ToC, Lean, Six Sigma, TQM, Agile, Strategic Navigation). Mix in dynamical systems thinking (Senge, Liker, Hock.) Intersect that group with people who know something about uncertainty, complexity, and time. Now intersect it with people who view all the world as material and economic flux.  (If you are a member of the resulting set, I want to talk to you!)  I know all these things are deeply connected, but if I could articulate how, and why, then I'd already be done.

One thing I am already sure about, though, is this: It is all about Time. Time is far more fundamental and far less understood than you'd think. I'm not just talking about inappropriately scaled-up quantum mechanics metaphors. I mean that people fundamentally trip up on Time all the time. "The Black Swan" is just the tip of the iceberg.

If it works, I'll sound like an utter crackpot, raving and waving my very own personal ToE.

If it doesn't work, well, Steam knows which games I bought. I can always reinstall them.

Combining here docs and blocks in Ruby

Like a geocache, this is another post meant to help somebody who stumbles across it in a future Google search. (Or as an external reminder for me, when I forget how I did this six months from now.)

I've liked here-documents since the days of shell programming. Ruby has good support for here docs with variable interpolation. For example, if I want to construct a SQL query, I can do this:

def build_query(customer_id)
  <<-QUERY
    select *
      from customer
     where id = #{customer_id}
  QUERY
end

Disclaimer: Don't do this if customer_id comes from user input!
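To make the danger concrete, here's a tiny standalone sketch. The malicious input is invented for illustration, and the query builder is repeated so the example runs on its own:

```ruby
# A here-doc query builder that interpolates its argument directly.
def build_query(customer_id)
  <<-QUERY
    select *
      from customer
     where id = #{customer_id}
  QUERY
end

# A hypothetical malicious "customer id":
evil = "1; drop table customer"
puts build_query(evil)
# The attacker's SQL rides straight into the query text.
```

Bind variables are the usual way out; most Ruby database libraries accept them in place of string interpolation.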

Recently, I wanted a way to build inserts using a matching number of column names and placeholders.

def build_query
  <<-QUERY
    insert into #{table} ( #{columns()} ) values ( #{column_placeholders()} )
  QUERY
end

In this case, columns and column_placeholders were both functions.
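The original functions aren't shown in the post, but a minimal sketch might look like this. The COLUMNS list and the build_insert wrapper are assumptions for illustration, not the actual code:

```ruby
# Hypothetical column list; in practice this might come from schema metadata.
COLUMNS = %w[id name email].freeze

def columns
  COLUMNS.join(", ")
end

# One "?" placeholder per column, so names and placeholders always match up.
def column_placeholders
  (["?"] * COLUMNS.size).join(", ")
end

def build_insert(table)
  <<-QUERY
    insert into #{table} ( #{columns} ) values ( #{column_placeholders} )
  QUERY
end

puts build_insert("customer")
```

Deriving both strings from the same list is the point: add a column in one place and the placeholders stay in sync automatically.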

One oddity I ran into is the combination of here documents and block syntax. RubyDBI lets you pass a block when executing a query, the same way you would pass a block to File::open(). The block gets a "statement handle", which gets cleaned up when the block completes.

  dbh.execute(query) { |sth|
    sth.fetch() { |row|
      # do something with the row
    }
  }

Combining these two lets you write something that looks like SQL invading Ruby:

  dbh.execute(<<-STMT) { |sth|
      select distinct customer, business_unit_id, business_unit_key_name
       from problem_ticket_lz
       order by customer
    STMT
    sth.fetch { |row|
      print "#{row[1]}\t#{row[0]}\t#{row[2]}\n"
    }
  }

This looks pretty good overall, but take a look at how the block opening interacts with the here doc. The here doc appears to be line-oriented, so it always begins on the line after the <<-STMT token. On the other hand, the block open follows the function, so the here doc gets lexically interpolated in the middle of the block, even though it has no syntactic relation to the block. No real gripe, just an oddity.
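A stripped-down example (no database required) shows the line-oriented behavior: each here doc body begins on the line after the one containing its token, in order, no matter what else sits on that first line:

```ruby
pair = [<<-ONE, <<-TWO]   # the expression continues past both tokens...
  first
ONE
  second
TWO
# ...yet each body was collected from the following lines, in order.
p pair
# → ["  first\n", "  second\n"]
```

The `<<-` form only lets you indent the terminator; the body keeps its leading whitespace.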

Another Cause of TNS-12541

There are about a jillion forum posts and official pages on the web that talk about ORA-12541, the infamous "TNS:No Listener" error. Somewhere around 70% of them appear to be link-farmers who just scrape all the Oracle forums and mailing lists.  Virtually all of the pages just refer back to the official definition from Oracle, which says "there's no listener running on the server" and tells you to log in to the server as admin and start up the listener.

Not all that useful, especially if you're not the DBA.

I found a different way that you can get the same error code, even when the listener is running. Special thanks to blogger John Jacob, whose post didn't quite solve my problem, but did set me on the right track.

Here's my situation. My client is a laptop connecting to the destination network through a VPN client. I'm connecting to an Oracle 10g service with 2 nodes. Tnsping reported success, the connection assistant could connect successfully, but sqlplus always reported TNS-12541 TNS:No listener.  The listener was fine.

Turning on client side tracing, I saw that the initial connection attempt to the service VIP was successful, but that the server then sends back a packet with the hostname of a specific node to use. Here's where the problem begins.

Thanks to some quirk in the VPN configuration, I can only resolve DNS names on the VPN if they're fully qualified. The default search domain just flat doesn't work.  So, I can resolve proddb02.example.com but not proddb02. That's the catch, because the database sends back just the host portion of the node, not the FQDN. DNS resolution fails, but sqlplus reports it as "No listener", rather than saying "Host not found" or something useful like that.

Again, there are a jillion posts and articles telling network admins how to fix the default domain search on a VPN concentrator. And, again, I'm not the network admin, either.

The best I can do as a user is work around this issue by adding the IPs of the physical DB nodes to the hosts file on my own client machine.  Sure, some day it'll break when we re-address the DB nodes, and I will have long forgotten that I even put those addresses in C:\Windows\System32\Drivers\etc\hosts. Still, at least it works for now.
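For reference, the workaround amounts to a couple of entries like these in the hosts file. The IP addresses here are invented, and the first node name just follows the pattern of the proddb02 name from above; substitute your actual node addresses:

```
# In C:\Windows\System32\Drivers\etc\hosts
10.1.2.21    proddb01    proddb01.example.com
10.1.2.22    proddb02    proddb02.example.com
```

Listing both the short name and the FQDN covers whichever form the database hands back.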

(Human | Pattern) Languages, part 2

At the conclusion of the modulating bridge, we expect to be in the contrasting key of C minor. Instead, the bridge concludes in the distantly related key of F sharp major... Instead of resolving to the tonic, the cadence concludes with two isolated E pitches. They are completely ambiguous. They could belong to E minor, the tonic for this movement. They could be part of E major, which we've just heard peeking out from behind the minor mode curtains. [He] doesn't resolve them into a definite key until the beginning of the third movement, characteristically labeled a "Scherzo".

In my last post, I lamented the missed opportunity we had to create a true pattern language about software. Perhaps calling it a missed opportunity is too pessimistic. Bear with me on a bit of a tangent. I promise it comes back around in the end.

The example text above is an amalgam of a lecture series I've been listening to. I'm a big fan of The Teaching Company and their courses. In particular, I've been learning about the meaning and structure of classical, baroque, romantic, and modern music from Professor Robert Greenberg.1 The sample I used here is from a series on Beethoven's piano sonatas. This isn't an actual quote, but a condensation of statements from one of the lectures. I'm not going to go into all the music theory behind this, but it is interesting.2

There are two things I want you to observe about the sample text. First, it's loaded with jargon. It has to be! You'd exhaust the conversational possibilities about the best use of a D-sharp pretty quickly. Instead, you'll talk about structures, tonalities, relationships between that D-sharp and other pitches. (D-sharp played together with a C? Very different from a quick sequence of D-sharp, E, D-sharp, C.) You can be sure that composers don't think in terms of individual notes. A D-sharp by itself doesn't mean anything. It only acquires meaning by its relation to other pitches. Hence all that stuff about keys---tonic, distantly related, contrasting. "Key" is a construct for discussing whole collections of pitches in a kind of shorthand. To a musician, there's a world of difference between G major and A flat minor, even though the basic pitch (the tonic) is only one half-step apart.

Also notice that the text addresses some structural features. The purpose and structure of a modulating bridge is pretty well understood, at least in certain circles. The notion that you can have an "expected" key certainly implies that there are rules for a sonata. In fact, the term "sonata" itself means some fairly specific things3... although to know whether we're talking about "a sonata" or "a movement in sonata form" requires some additional context.

In fact, this paragraph is all about context. It exists in the context of late Classical, early Romantic era music, specifically the music of Beethoven. In the Classical era, musical forms---such as sonata form---pretty much dictated the structure of the music. The number of movements, their relationships to each other, their keys, and even their tempos were well understood. A contemporary listener had every reason to expect that a first movement would be fast and bright, and if the first movement was in C major, then the second, slower movement would be a minuet and trio in G major.

Music and music theory have evolved over the last thousand-odd years. We have a vocabulary---the potentially off-putting jargon of the field. We have nesting, interrelating contexts. Large scale patterns (a piano sonata) create context for medium scale patterns (the first movement "allegretto") which in turn, create context for the medium and small scale patterns (the first theme in the allegretto consists of an ABA'BA phrasing, in which the opening theme sequences a motive upward over octaves.)  We even have the ability to talk about non sequiturs---like the modulating bridge above---where deliberate violation of the pattern language is done for effect.4

What is all this stuff if it isn't a pattern language?

We can take a few lessons, then, from the language of music.

The first lesson is this: give it time. Musical language has evolved over a long time. It has grown and been pruned back over centuries. New terms are invented as needed to describe new answers to a context. In turn, these new terms create fresh contexts to be exploited with yet other inventions.

Second, any such language must be able to assimilate change. Nothing is lost, even amidst the most radical revolutions. When the Twentieth Century modernists rejected the tonal system, they could only reject the structures and strictures of that language. They couldn't destroy the language itself. Phish plays fugues in concert... they just play them with electric guitars instead of harpsichords. There are Baroque orchestras today. They play in the same concert halls as the Pops and Philharmonics. The homophonic texture of plain chant still exists, and so do the once-heretical polyphony and church-sanctioned monophony. Nothing is lost, but new things can be encompassed and incorporated.

And, mainframes still exist with their COBOL programs, together with distributed object systems, message passing, and web services. The Singleton and Visitor patterns will never truly go away, any more than batch programming will disappear.

Third, we must continue to look at the relationships between different parts of our nascent pattern language. Just as individual objects aren't very interesting, isolated patterns are less interesting than the ways they can interact with each other.

I believe that the true language of software has as much to do with programming languages as the language of music has to do with notes. So, instead of missed opportunity, let us say instead that we are just beginning to discover our true language.

1. Professor Greenberg is a delightful traveling companion. He's witty, knowledgeable and has a way of teaching complex subjects without ever being condescending. He also sounds remarkably like Penn Jillette.

2. The main reason is that I would surely get it wrong in some details and risk losing the main point of my post here.

3. And here we see yet another of the complexities of language. The word "sonata" refers, at different times, to a three movement concert work, a single movement in a characteristic structure, a four movement concert work, and in Beethoven's case, to a couple of great fantasias that he declares to be sonatas simply because he says so.

4. For examples ad nauseam, see Richard Wagner and the "abortive gesture".

(Human | Pattern) Languages

We missed the point when we adopted "patterns" in the software world. Instead of an organic whole, we got a bag of tricks.

The commonly accepted definition of a pattern is "a solution to a problem in a context." This is true, but limiting. This definition loses an essential characteristic of patterns: Patterns relate to other patterns.

We talk about the context of a problem. "Context" is a mental shorthand. If we unpack the context it means many things: constraints, capabilities, style, requirements, and so on. We sometimes mislead ourselves by using the fairly fuzzy, abstract term "context" as a mental handle on a whole variety of very concrete issues. Context includes stated constraints like the functional requirements, along with unstated constraints like, "The computation should complete before the heat death of the universe." It includes other forces like, "This program is written in C#, so the solution to this problem should be in the same language or a closely related one." It should not require a supercooled quantum computer, for example.

Where does the context for a small-scale pattern originate?1 Context does not arise ex nihilo. No, the context for a small-scale pattern is created by larger patterns. Large grained patterns create the fabric of forces that we call the context for smaller patterns. In turn, smaller patterns fit into this fabric and, by their existence, they change it. Thus, the small scale patterns create feedback that can either resolve or exacerbate tensions inherent in the larger patterns.

Solutions that respect their context fit better with the rest of the organic whole. It would be strange to be reading some Java code, built into layered architecture with a relational database for storage, then suddenly find one component that has its own LISP interpreter and some functional code. With all respect to "polyglot programming", there'd better be a strong motivation for such an odd inclusion. It would be a discontinuity... in other words, it doesn't fit the context I described. That context---the layered architecture, the OO language, relational database---was created by other parts of the system.

If, on the other hand, the system was built as a blackboard architecture, using LISP as glue code over intelligent agents acting asynchronously, then it wouldn't be at all odd to find some recursive lambda expressions. In that context, they fit naturally and the Java code would be an oddity.

This interrelation across scale knits patterns together into a pattern language. By and large, what we have today is a growing group of proper nouns. Please don't get me wrong, the nouns themselves have use. It's very helpful to say "you want a Null Object there," and be understood. That vocabulary and the compression it provides is really important.

But we shouldn't mistake a group of nouns for a real pattern language. A language is more than just its nouns. A language also implies ways of connecting statements sensibly. It has idioms and semantics and semiotics.2 In a language, you can have dialog and argumentation.  Imagine a dialog in patterns as they exist today:

"Pipes and filters."


"Chain of Responsibility!"

You might be able to make a comedy sketch out of that, but not much more. We cannot construct meaningful dialogs about patterns at all scales.

What we have are fragments of what might become a pattern language. GoF, the PLoPD books, the PoSA books... these are like a few charted territories on an unmapped continent. We don't yet have the language that would even let us relate these works together, let alone relating them to everything else.

Everything else?  Well, yes. By and large, patterns today are an outgrowth of the object-oriented programming community. I contend, however, that "object-oriented" is a pattern! It's a large-scale pattern that creates really significant context for all the other patterns that can work within it. Solutions that work within the "object-oriented" context make no sense in an actor-oriented context, or a functional context, or a procedural context, and so on. Each of these other large-scale patterns admits different solutions to similar problems: persistence, user interaction, and system integration, to name a few. I can imagine a pattern called "Event Driven" that would work very well with "Object oriented", "Functional", and "Actor Oriented", but somewhat less well with "Procedural programming", and clash utterly with "Batch Processing". (Though there might be a link between them called "Buffer file" or something like that.)

That's the piece that we missed. We don't have a pattern language yet. We're not even close.

1. By "large" and "small", I don't mean to imply that patterns simply nest hierarchically. It's more complex and subtle than that. When we do have a real pattern language, we'll find that there are medium-grained patterns that work together with several, but not all, of the large ones. Likewise, we'll find small-scale patterns that make medium sized ones more or less practical. It's not a decision tree or a heuristic.

2. That's what keeps, "Fill the idea with blue" from being a meaningful sentence. All the words work, and they're even the right part of speech, yet the sentence as a whole doesn't fit together.

Beyond the Village

As an organization scales up, it must navigate several transitions. If it fails to make these transitions well, it will stall out or disappear.

One of them happens when the company grows larger than "village-sized". In a village of about 150 people or fewer, it's possible for you to know everyone else. Larger than that, and you need some kind of secondary structures, because personal relationships don't reach from every person to every other person. Not coincidentally, this is also the size where you see startups introducing mid-level management.

There are other factors that can bring this on sooner. If the company is split into several locations, people at one location will lose track of those in other locations. Likewise, if the company is split into different practice areas or functional groups, those groups will tend to become separate villages on their own. In either case, the village transition will happen sooner than 150.

It's a tough transition, because it takes the company from a flat, familial structure to a hierarchical one. That implicitly moves the axis of status from pure merit to positional. Low-numbered employees may find themselves suddenly reporting to a newcomer with no historical context. It shouldn't come as a surprise when long-time employees start leaving, but somehow the founders never expect it.

This is also when the founders start to lose touch with day-to-day execution. They need to recognize that they will never again know every employee by name, family, skills, and goals. Beyond village size, the founders have to be professional managers. Of course, this may also be when the board (if there is one) brings in some professional managers. It shouldn't come as a surprise when founders start getting replaced, but somehow they never expect it.


Creeping Fees

A couple of years ago, the Minneapolis-St. Paul airport introduced self-pay parking gates. Scan a credit card on the way in and on the way out, and it just debits the card. This obviously saves money on parking attendants, and it's pretty convenient for parkers.

At first, to encourage adoption, they offered a discount of $2 per day. Every time you'd approach the entry, a friendly voice from a Douglas Adams novel would ask, "Would you like to save $2 per day on parking?" For general parking, that meant $14 instead of $16 per day.

Some time later, this switched from being an incentive for adopting the system to a penalty for avoiding it. How? They raised the rates by $2 per day. So now, the top rate if you use self-pay is back to $16. If you don't use it, then your top rate bumped up to $18. Clearly they put somebody from the banking industry in charge of this parking system.

Now, it's changed again, from $2 per day to $2 per transaction. So it's just $2 off the top of whatever your overall parking fees are.

This gradual creep is really interesting. I wonder what the next step will be. A $2 per year discount would be one way to approach it. Maybe a "frequent parker" program. More likely the discount will drop to $1 per transaction, or it will just be discarded altogether.

That's OK with me, because swiping the credit card is still more convenient than exchanging cash money with a human anyway.

Besides, back when it was cash based, I always got tagged with the ATM fee anyway. 


A friend invited me to Plurk. So far, I've resisted Twitter for no good reason (other than a vague sense of social insecurity.) I figure I'll dip my toe into Plurk, though.

This link is an open invite to Plurk. It'll let anyone join. Fair warning, it's also a "friend" link. 

Six Word Methods

In his great collection of essays Why Does Software Cost So Much?, Tom DeMarco makes the interesting point that the software industry had grown from zero to $300 billion (as of 1993). This indicates that the market had at least $300B worth of demand for software, even while complaining continuously about the cost and quality of the very same software. It seems to me that the demand for software production, together with the time and cost pressures, has only increased dramatically since then.

(DeMarco enlightens us that the perennial question, "Why does software cost so much?" is not really a question at all, but rather a goad or a negotiation. Also very true.)

Fundamentally, the demand for software production far outstrips our industry's ability to supply it. In fact, I believe that we can classify most software methods and techniques by their relation and response to the problem of surplus demand. Some try to optimize for least-cost production, others for highest quality, still others for shortest cycle time.

In the spirit of six-word memoirs, here are the sometimes dubious responses that various technology and development methods offer to the overwhelming demand for software production. 

Waterfall: Nevermind backlog, requirements were signed off.

RAD: Build prototypes faster than discarding them.

Offshore outsourcing: Army of cheap developers producing junk.

Onshore outsourcing: Same junk, but with expensive developers.

Agile: Avoid featuritis; outrun pesky business users.

Domain-specific languages: Compress every problem into one-liners.

CMMi: Enough Process means nothing's ever wasted.

Relational Databases: Code? Who cares? Data lives forever.

Model-driven architecture: Jackson Pollock's models into inscrutable code.

Web Services: Terrorize XML until maximum reuse achieved.

FORTH: backward writing IF punctuation time SAVE.

SOA: Iron-fisted governance ensures total calcification.

Intentional programming: Parallelize programming... make programmers of everyone.

Google as IDE: It's been done, probably in Befunge.

Open-source: Bury the world in abandoned code.

Mashups: Parasitize others' apps, then APIs change.

LISP: With enough macros, one uberprogrammer suffices.

perl: Too busy coding to maintain anyway.

Ruby: Meta-programming: same problems, mysterious solutions.

Ocaml: No, try meta-meta-meta-programming.

Groovy: Faster Java coding, runs like C-64.

Software-as-a-Service: Don't write your own, rent ours.

Cloud Computing: Programmers would go faster without administrators.

Wii Wescue

So, I got a Wii for Father's Day last year. It's been a lot of fun to play together with my kids, my wife, and even my parents and in-laws. It's fantastic to have a game system that we can all play together and be reasonably competitive.  My six-year-old can hold her own in Wii bowling, but she cries a lot when we play Halo. (I'm just kidding...)

Unfortunately, my three-year-old put a shiny disc of her own into it: a plastic toy coin. Well, it does say "Play Money" right on the front. Right in the drive slot. I figured my Wii was a goner for sure.

"Play Money" 

I set about opening the thing up to remove the coin, but got stumped by these custom screws, kind of like a Phillips head, but with three prongs. Turns out these are called "Triwing" screws and they're specifically designed to keep end users out of the machine, on the theory that these are not widely used screws, so most people won't have the means to unscrew them. True, it slowed me down a bit. I had to order a kit from ThinkGeek that has driver bits for every console on the market.

Opened it up, got the coin out, and the Wii still works!

But, surely these belong somewhere, don't they?


Opening Up SpringSource AP

Just now getting my hands on the SpringSource Application Platform. It's deceptive, because there's very little functionality exposed when you run it. It starts up with less ceremony than Apache or Tomcat. (Which is kind of funny, when you consider that it includes Tomcat.)

When you look at the bundle repository, though, it's clear that a lot of stuff is packaged in here. In a way, that's like the Spring framework itself. On the surface, it looks like just a bean configurator. All the really powerful stuff is in the libraries built out of that small core.

Here's a quick listing of the bundles in version 1.0.0.beta: 


There's clearly a lot of functionality built in, but how do you get at it? The SAP, erm, SpringSource AP documentation screams for improvement. Maybe they think that, because all the parts are documented elsewhere, there's no need for any integrated docset. If so, they would be wrong. Despite that, I'm interested enough to keep poking away at it.

Oh, and one other thing: the default administrator account is admin/springsource. (It's actually defined in servlet/conf/tomcat-users.xml.) For some reason, that's buried in chapter 5 of the user guide. It would be handy to make that more prominent.

Grab Bag of Demos

Sun opened the final day of JavaOne with a general session called "Extreme Innovation". This was a showcase for novel, interesting, and out-of-this-world uses of Java-based technology.


VisualVM works with local or remote applications, using JMX over RMI to connect to remote apps. While you have to run VisualVM itself under JDK 1.6, it can connect to any JVM from 1.4.2 through 1.7. Local apps are automatically detected and offered in the UI for debugging.  VisualVM uses the Java Platform Debugger Architecture to show thread activities, memory usage, object counts, and call timing. It can also take snapshots of the application's state for post-mortem or remote analysis.

Memory problems can be a bear to diagnose. VisualVM includes a heap analyzer that can show reference chains. From the UI, it looks like it can also detect and indicate reference loops.

One interesting feature of VisualVM is the ability to add plug-ins for application-specific behavior. Sun demonstrated a Glassfish plugin that adds custom metrics for request latency and volume, and the ability to examine each application context independently.

The application does not require any special instrumentation, so you can run VisualVM directly against a production application. According to Sun, it adds "almost no overhead" to the application being examined. I'd still be very cautious about that. VisualVM allows you to enable CPU and memory profiling in real-time, so that will certainly have an effect on the application. Not to mention, it also lets you trigger a heap dump, which is always going to be costly.
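To get a feel for the kind of numbers VisualVM surfaces, here's a minimal sketch using the standard java.lang.management platform beans, which expose roughly the same instrumentation VisualVM reads over JMX (the class name is my own invention):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

// Reads the same platform MBeans that JMX clients like VisualVM consume:
// current heap usage and live thread count, straight from the running JVM.
public class JvmStats {
    static long heapUsedBytes() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        return memory.getHeapMemoryUsage().getUsed();
    }

    static int liveThreadCount() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        return threads.getThreadCount();
    }

    public static void main(String[] args) {
        System.out.println("Heap used: " + heapUsedBytes() + " bytes");
        System.out.println("Live threads: " + liveThreadCount());
    }
}
```

Attaching to a remote process adds a JMX connector URL on top of this, but the data on the other end is the same.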

VisualVM is available for download now.

JavaScript Support in NetBeans

Sun continues to push NetBeans at every turn. In this case, it was a demo of the JavaScript plugin for NetBeans. This is really a nice plugin. It uses type inferencing to provide autocompletion and semantic warnings. For example, it would warn you if a function had inconsistent return statements. (Such as returning an object from one code path mixed with a void return from another.)

It also has a handy developer aid: it warns developers about browser compatibility.

I don't do a whole lot of JavaScript, but I couldn't help thinking about other dynamic languages. If the plugin can do that kind of type inferencing---without executing the code---for one dynamic language, then it should be possible for other dynamic languages. That could remove a lot of objections about Groovy, Scala, JRuby, etc.

Fluffy Stuff at the Edge

We got a couple of demos of Java in front of the end-user. One was a cell phone running an OpenGL scene at about 15 frames per second on an NVidia chipset. All the rendering was done in Java and displayed via OpenGL ES, with 3D positional audio. Not bad at all.

Project Darkstar got a few moments in the spotlight, too. They showed off a game called Call of the Kings, a multiplayer RTS that looked like it came from 1999.  Call of the Kings uses the jMonkey Engine (built on top of JOAL, JOGL, and jInput) on the client and Project Darkstar's game server on the backend. It's OK, but as game engines go, I'm not sure how it will be relevant.

There was also a JavaCard demo, running Robocode players on JavaCards.  That's not just storing the program on the card; the code was actually executing on the card. Two finalists were brought up on stage (but not given microphones of their own, I noticed) for a final battle between their tanks. Yellow won and received a PS3. Red lost, but got a PSP for making it to the finals.

Sentilla tried to get out from the "creepy" moniker by bouncing mesh-networked, location-tracking beachballs around the audience. Each one had a Sentilla "mote" in it, with a 3D accelerometer inside. Receivers at the perimeter of the hall could triangulate the beachballs' locations by signal strength. For me, the most interesting thing here was James Gosling's talk about powering the motes. They draw so little power that it's possible to power them from ambient sources: vibration and heat. Interesting. Still creepy, but interesting.

The next demo was mind-blowing. The Livescribe Pulse is a Java computer built into a pen. It's hard to describe how wild this thing is; you almost have to see it for any of this to make sense.

At one point, the presenter wrote down a list, narrating as he went. For item one, he wrote the numeral "1" and the word "pulse", describing the pen as he went. For item two, he wrote the numeral "2" and drew a little doodle of a desktop. Item three was the numeral and a vague cloudy thing. All this time, the pen was recording his audio and associating bits of the audio stream with the page locations. So when he tapped the numeral "1" that he had written, the pen played back his audio. Not bad.

Then he put an "application card" on the table and tapped "Spanish" on it. He wrote down the word "one"... and the pen spoke the word "uno".  He wrote "coffee please" and it said "cafe por favor". Then he had it do the same phrase in Mandarin and Arabic. Handwriting recognition, machine translation, and speech synthesis all in the pen. Wow.

Next, he selected a program from the pen's menu. The special notebook has a menu crosshair on it, but you can draw your own crosshair and it works the same way: use the pen to tap the up-arrow on paper, and the menu changes on the display. He picked a piano program, and the pen started to give him directions on how to draw a piano. Once he was done drawing it, he could tap the "keys" on paper to play notes.

The pen captures x, y, and t information as you write, so it's digitizing the trajectory rather than the image. This is great for data compression when you're sharing pages across the Livescribe web site. It's probably also great for forgers, so there might be a concern there.
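To see why trajectory capture compresses so well, note that successive pen samples differ only slightly, so delta encoding turns a stream of large coordinates into a stream of tiny numbers that any general-purpose compressor handles well. This is purely an illustrative sketch, not Livescribe's actual format:

```java
// Illustrative delta codec for one axis of pen samples (x, y, or t).
// Smooth pen motion means consecutive values are close together, so the
// deltas are small integers, which compress far better than page images.
public class StrokeCodec {
    static int[] deltaEncode(int[] samples) {
        int[] deltas = new int[samples.length];
        int prev = 0;
        for (int i = 0; i < samples.length; i++) {
            deltas[i] = samples[i] - prev;  // small when motion is smooth
            prev = samples[i];
        }
        return deltas;
    }

    static int[] deltaDecode(int[] deltas) {
        int[] samples = new int[deltas.length];
        int acc = 0;
        for (int i = 0; i < deltas.length; i++) {
            acc += deltas[i];
            samples[i] = acc;  // running sum restores the original values
        }
        return samples;
    }
}
```

The round trip is lossless: decode(encode(s)) gives back the original samples, but the encoded form is dominated by values near zero.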

Industrial Strength

Emphasizing real-time Java for a bit, Sun showed off "Blue Wonder", an industrial controller built out of an x86 computer running Solaris 10 and Java RTS 2.0.  This is suitable for factory control applications and is, apparently, very exciting to factory control people.

From the DARPA Urban Challenge event, we saw "Tommy Jr.", an autonomous vehicle. It followed Paul Perrone into the room, narrating each move it was making. Fortunately, nobody tried to demonstrate its crowd control or law enforcement features. Instead, they showed off an array of high resolution sensors and actuators. It's all controlled, under very tight real-time constraints, by a single x86 board running Solaris and Java RTS.

Into New Realms

Next, we saw a demo of JMars. This impressive application helps scientists make sense of the 150 terabytes of data we've collected from various Mars probes. It combines data and imaging layers from many different probes. One example overlaid hematite concentrations on top of an infrared image layer. It also knows enough about the various satellites' orbits to help plan imaging requests.

Ultimately, JMars was built to help target landing sites for both scientific interest and technical viability. We'll soon see how well they did: the Phoenix lander arrives in about two weeks, targeting a site that was selected using JMars.

JMars is both free to use and open source. Dr. Phil Christensen from Arizona State University invited the Java community to explore Mars for themselves, and perhaps join the project team.

Thousands of people, physicists and otherwise, are eagerly awaiting the LHC's activation. We got to see a little bit behind the scenes about how Java is being used within CERN.

On the one hand, some very un-sexy business process work is being done. LHC is a vast project, so it's got people, budget, and materials to manage. Ho hum. It's not easy to manage all those business processes, but it sure doesn't demo well.

On the other hand, showing off the grid computing infrastructure does.

Once it's operating, the ATLAS detectors alone will produce a gigabyte an hour of image data. All of it needs to be processed. "Processing" here means running through some amazing pattern recognition programs to analyze events, looking for anomalies. There will be far too many collisions generated every day for a physicist to look at all of them, so automated techniques have to weed out "uninteresting" collisions and call attention to ones that don't fit the profile.

CERN estimates that 100,000 CPUs will be needed to process the data. They've built a coalition of facilities into a multi-tier grid. Even today, they're running 16,000 jobs on the grid across hundreds of data centers. With that many nodes involved, they need some good management and visualization tools, and we got to see one. It's a 3D world model with iconified data centers showing their status and capacity. Jobs fly from one to another along geodesic links. Very cool stuff.


Java is a mature technology that's being used in many spheres other than application server programming. For me, and many other JavaOne attendees, this session really underscored the fact that none of our own projects are anywhere near as cool as these demos. I'm left with the desire to go build something cool, which was probably the point.

OmniFocus Coming to the iPhone

Over the last six months, I've grown thoroughly dependent on OmniFocus. It's a Getting Things Done application that lets me juggle more projects, personal and professional, than I ever thought I could.

Now, Omni says they're going to bring OmniFocus to the iPhone. So far, the iPhone hasn't compelled me, but I think that will be the trigger.

Steve Jobs Made Me Miss My Flight

Or: On my way to San Jose.

On waking, I reach for my blackberry. It tells me what city I'm in; the hotel rooms offer no clues. Every Courtyard by Marriott is interchangeable.  Many doors into the same house. From the size of my suitcase, I can recall the length of my stay: one or two days, the small bag.  Three or four, the large. Two bags means more than a week.

CNBC, shower, coffee, email. Quick breakfast, $10.95 (except in California, where it's $12.95. Another clue.)

Getting there is the worst part. Flying is an endless accumulation of indignities. Airlines learned their human factors from hospitals. I've adapted my routine to minimize hassles.

Park in the same level of the same ramp. Check in at the less-used kiosks in the transit level. Check my bag so I don't have to fuck around with the overhead bins. I'd rather dawdle at the carousel than drag the thing around the terminal anyway.

Always the frequent flyer line at the security checkpoint. Sometimes there's an airline person at the entrance of that line to check my boarding pass, sometimes not. An irritation. I'd rather it was always, or never. Sometimes means I don't know if I need my boarding pass out or not.

Same words to the TSA agent.  Standard responses. "Doing fine," whether I am or not.  Same belt.  It's gone through the metal detector every time. I don't need to take it off.

Only... today, something is different. Instead of my bags trundling through the x-ray machine, she stops the belt.  Calls over another agent, a palaver. Another agent flocks to the screen. A gabble, a conference, some consternation.

They pull my laptop, my new laptop making its first trip with me, out of the flow of bags. One takes me aside to a partitioned cubicle. Another of the endless supply of TSA agents takes the rest of my bags to a different cubicle. No yellow brick road here, just a pair of yellow painted feet on the floor, and my flight is boarding. I am made to understand that I should stand and wait.  My laptop is on the table in front of me, just beyond reach, like I am waiting to collect my personal effects after being paroled.

I'm standing, watching my laptop on the table, listening to security clucking just behind me. "There's no drive," one says. "And no ports on the back. It has a couple of lines where the drive should be," she continues.

A younger agent joins the crew. I must now be occupying ten, perhaps twenty, percent of the security force. At this checkpoint anyway. There are three score more at the other five checkpoints. The new arrival looks at the printouts from x-ray, looks at my laptop sitting small and alone. He tells the others that it is a real laptop, not a "device". That it has a solid-state drive instead of a hard disc. They don't know what he means. He tries again, "Instead of a spinning disc, it keeps everything in flash memory." Still no good. "Like the memory card in a digital camera." He points to the x-ray, "Here. That's what it uses instead of a hard drive."

The senior agent hasn't been trained for technological change. New products on the market? They haven't been TSA approved. Probably shouldn't be permitted. He requires me to open the "device" and run a program. I do, and despite his inclination, the lead agent decides to release me and my troublesome laptop.  My flight is long gone now, so I head for the service center to get rebooked.

Behind me, I hear the younger agent, perhaps not realizing that even the TSA must obey TSA rules, repeating himself.

"It's a MacBook Air."

Well Begun Is Half Done

How long is your checklist for setting up a new development environment? It might seem like a trivial thing, but setup costs are part of the overall friction in your project. I've seen three page checklists that required multiple downloads, logging in as several users (root and non-root), and hand-typing SQL strings to set up the local database server.

I think the paragon of environment setup is the ubiquitous GNU autoconf system. Anyone familiar with Linux, BSD, or other flavors of UNIX will surely recognize this three-line incantation:

./configure
make
make install

The beauty of autoconf is that it adapts to you. In the open-source world, you can't stipulate one particular set of packages or versions, at least, not if you actually want people to use your software and contribute to your project. In the corporate world, though, it's pretty common to see a project that requires a specific point-point rev of some Jakarta Commons library, but without actually documenting the version.

Then there are different places to put things: inside the project, in source control, or in the system. I recently went back to a project's code base after being away for more than two years. I thought we had done a good job of addressing the environment setup. We included all the deliverable jars in the codebase, so they were all version controlled. But, we decided to keep the development-only jars (like EasyMock, DBUnit, and JUnit) outside the code base. We did use Eclipse variables to abstract out the exact filesystem location, but when I returned to that code base, finding and restoring exactly the right versions of those build-time jars wasn't easy. In retrospect, we should have put the build-time jars under version control and kept them inside the code base.

Yes, I know that version control systems aren't good at versioning binaries like jar files. Who cares? We don't rev the jar files so often that the lack of deltas matters. Putting a new binary in source control when you upgrade from Spring 2.5 to Spring 2.5.1 really won't kill your repository. The cost of the extra disk space is nothing compared to the benefit of keeping your code base self-contained.

Maven users will be familiar with another approach. On a Maven project, you express external dependencies in a project model file. On the first build, Maven will download those dependencies from their "official" archives, then cache them locally. After that, Maven will just use the locally cached jar file, at least until you move your declared dependency to a newer revision. I have nothing against Maven. I know some people who swear by it, and others who swear at it. Personally, I just never got into it.
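For reference, a Maven dependency declaration of that kind is just a few lines in the project model file (the artifact and version here are only an example):

```xml
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring</artifactId>
  <version>2.5.1</version>
</dependency>
```

Maven resolves this against its repositories on the first build and works from the local cache thereafter.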

Then there are JRE extensions. This project uses JAI, which wants to be installed inside the JRE itself. We went along with that, but I was stumped for a while today when I saw hundreds of compile errors even though my Eclipse project's build path didn't show any unresolved dependencies. Of course, when you install JAI inside the JRE, it just becomes part of the Java runtime. That makes it an implicit dependency. I eventually remembered that trick, but it took a while. In retrospect, I wish we had tried harder to bring JAI's jars and native libraries into the code base as an explicit dependency.

Does developer environment setup time matter? I believe it does. It might be tempting to say, "That's a one-time cost, there's no point in optimizing it." It's not really a one-time cost, though. It's one time per developer, every time that developer has to reinstall. My rough observation says that, between migrating to a new workstation, Windows reinstalls, corporate re-imaging, and developer churn, you should expect three to five developer setups per year on an internal project.

For an open-source project, the sky is the limit. Keep in mind that you'll lose potential contributors at every barrier they encounter. Environment setup is the first one.

So, what's my checklist for a good environment setup checklist?

  • Keep the project self contained. Bring all dependencies into the code base. Same goes for RPMs or third-party installers.
  • Make sure all JAR files have version numbers in their file names. If the upstream project doesn't build their JAR files with version numbers, go ahead and rename the jars.
  • Make bootstrap scripts for database actions such as user creation or schema builds.
  • If you absolutely must embed a dependency on something that lives outside the code base, make your build script detect its location. Don't rely on specific path names.
  • Don't assume your code base is in any particular filesystem on the build machine.

I'd love to hear your own rules for easy development setup.

Budgetecture and its ugly cousins

It's the time of year for family gatherings, so here's a repulsive group portrait of some nearly universal pathologies. Try not to read this while you're eating.


We've all been hit with budgetecture.  That's when sound technology choices go out the window in favor of cost-cutting. The conversation goes something like this.

"Do we really need X?" asks the project sponsor. (A.k.a. the gold owner.)

For "X", you can substitute nearly anything that's vitally necessary to make the system run: software licenses, redundant servers, offsite backups, or power supplies.  It's always asked with a sort of paternalistic tone, as though the grown-up has caught us blowing all our pocket money on comic books and bubble gum, whilst the serious adults are trying to get on with buying more buckets to carry their profits around in.

The correct way to answer this is "Yes.  We do."  That's almost never the response.

After all, we're trained as engineers, and engineering is all about making trade-offs. We know good and well that you don't really need extravagances like power supplies, so long as there's a sufficient supply of hamster wheels and cheap interns in the data center.  So instead of simply saying, "Yes. We do," we go on with something like, "Well, you could do without a second server, provided you're willing to accept downtime for routine maintenance and whenever a RAM chip gets hit by a cosmic ray and flips a bit, causing a crash, but if we get error-checking parity memory then we get around that, so we just have to worry about the operating system crashing, which it does about every three-point-nine days, so we'll have to institute a regime of nightly restarts that the interns can do whenever they're taking a break from the power-generating hamster wheels."

All of which might be completely true, but is utterly the wrong thing to say. The sponsor has surely stopped listening after the word, "Well..."

The problem is that you see your part as an engineering role, while your sponsor clearly understands he's engaged in a negotiation. And in a negotiation, the last thing you want to do is make concessions on the first demand. In fact, the right response to the "do we really need" question is something like this:

"Without a second server, the whole system will come crashing down at least three times daily, particularly when it's under heaviest load or when you are doing a demo for the Board of Directors. In fact, we really need four servers so we can take an HA pair down independently at any time while still maintaining 100% of our capacity, even in case one of the remaining pair crashes unexpectedly."

Of course, we both know you don't really need the third and fourth servers. This is just a gambit to get the sponsor to change the subject to something else. You're upping the ante and showing that you're already running at the bare, dangerous, nearly-irresponsible minimum tolerable configuration. And besides, if you do actually get the extra servers, you can certainly use one to make your QA environment match production, and the other will make a great build box.

Schedule Quid Pro Quo

Another situation in which we harm ourselves by bringing engineering trade-offs to a negotiation comes when the schedule slips. Statistically speaking, we're more likely to pick up the bass line from "La Bamba" from a pair of counter-rotating neutron stars than we are to complete a project on time. Sooner or later, you'll realize that the only way to deliver your project on time and under budget is to reduce it to roughly the scope of "Hello, world!"

When that happens, being a responsible developer, you'll tell your sponsor that the schedule needs to slip. You may not realize it, but by uttering those words, you've given the international sign of negotiating weakness.

Your sponsor, who has his or her own reputation---not to mention budget---tied to the delivery of this project, will reflexively respond with, "We can move the date, but if I give you that, then you have to give me these extra features."

The project is already going to be late. Adding features will surely make it more late, particularly since you've already established that the team isn't moving as fast as expected. So why would someone invested in the success of the project want to further damage it by increasing the scope? It's about as productive as soaking a grocery store bag (the paper kind) in water, then dropping a coconut into it.

I suspect that it's sort of like dragging a piece of yarn in front of a kitten. It can't help but pounce on it. It's just what kittens do.

My only advice in this situation is to counter with data. Produce the burndown chart showing when you will actually be ready to release with the current scope. Then show how the fractally iterative cycle of slippage followed by scope creep produces a delivery date that will be moot, as the sun will have exploded before you reach beta.

The Fallacy of Capital

When something costs a lot, we want to use it all the time, regardless of how well suited it is or is not.

This is sort of the inverse of budgetecture.  For example, relational databases used to cost roughly the same as a battleship. So, managers got it in their heads that everything needed to be in the relational database.  Singular. As in, one.

Well, if one database server is the source of all truth, you'd better be pretty careful with it. And the best way to be careful with it is to make sure that nobody, but nobody, ever touches it. Then you collect a group of people with malleable young minds and a bent toward obsessive-compulsive abbreviation forming, and you make them the Curators of Truth.

But, because the damn thing cost so much, you need to get your money's worth out of it. So, you mandate that every application must store its data in The Database, despite the fact that nobody knows where it is, what it looks like, or even if it really exists.  Like Schrödinger's cat, it might already be gone, it's just that nobody has observed it yet. Still, even that genetic algorithm with simulated annealing, running ten million Monte Carlo fitness tests, is required to keep its data in The Database.

(In the above argument, feel free to substitute IBM Mainframe, WebSphere, AquaLogic, ESB, or whatever your capital fallacy du jour may be.)

Of course, if databases didn't cost so much, nobody would care how many of them there are. Which is why MySQL, Postgres, SQLite, and the others are really so useful. It's not an issue to create twenty or thirty instances of a free database. There's no need to collect them up into a grand "enterprise data architecture". In fact, exactly the opposite is true. You can finally let independent business units evolve independently. Independent services can own their own data stores and never let other applications stick their fingers into their guts.


So there you have it, a small sample of the rogue's gallery. These bad relations don't get much photo op time with the CEO, but if you look, you'll find them lurking in some cubicle just around the corner.


Putting My Mind Online

Along with the longer analysis pieces, I've decided to post the entirety of my notes from QCon San Francisco. A few of my friends and colleagues are fellow mind-mappers, so this is for them.

Nygard's Mind Map from QCon

This file works with FreeMind, a fast, fluid, and free mind mapping tool.

Catching up through the day

One of the great things about virtual infrastructure is that you can treat it as a service. I use Yahoo's shared hosting service for this blog. That gives me benefits: low cost and very quick setup. On the down side, I can't log in as root. So when Yahoo has a problem, I have a problem.

Yesterday, there was something wrong with Yahoo's install of Movable Type. As a result, I couldn't post my "five things". I'll be catching up today, as time permits.

My butt is planted in one track all day today, "Architectures You've Always Wondered About." We'll be hearing about the architecture that runs Second Life, Yahoo, eBay, LinkedIn, and Orbitz. I may need a catheter and an IV.

Make Time a Weapon

Here's a list of books about putting time to work as your own weapon, instead of being victimized by it:

On the Widespread Abuse of SLAs

Technical terminology sneaks into common use. Terms such as "bandwidth" and "offline" get used and abused, slowly losing touch with their original meaning. ("Bandwidth" has suffered multiple drifts. It started out in radio, not computer networking, let alone the idea of "personal attention space".) It is the nature of language to evolve, so I would have no problem with this linguistic drift, if it were not for the way that the mediocre and the clueless clutch to these seemingly meaningful phrases.

The latest victim of this linguistic vampirism is the "Service Level Agreement". This term, birthed in IT governance, sounds wonderful. It sounds formal and official.

An example of the vulgar usage: "I have a five-day SLA."

It sounds so very proactive and synergistic and leveraged, doesn't it? Theoretically, it means that we've got an agreement between our two groups; I am your customer and you commit to delivering service within five days.

A real SLA has important dimensions that I never see addressed with internal "organizational" SLAs.

First, boundaries.

When does that five day clock begin ticking? Is it when I submit my request to the queue? Or, is it when someone from your group picks the request up from the queue? If the latter, then how long do requests sit in queue before they get picked up? What's the best case? Worst case? Average?

When does the clock stop ticking? If you just say, "not approved" or "needs additional detail", does that meet your SLA? Do I have to resubmit for the next iteration, with a whole new five day clock? Or, does the original five day SLA run through resolution rather than just response?

An internal SLA must begin with submission into the request queue and end when the request is fully resolved.

Second, measurement and tracking.

How often do you meet your internal SLA? 100% of the time? 95% of the time? 50% of the time? Unless you can tell me your "on-time performance", there's no way for me to have confidence in your SLA.

How many requests have to be escalated or prioritized in order to meet SLA? Do any non-escalated requests actually get resolved within the allotted time?

How well does your on-time performance correlate with the incoming workload? If the request volume goes up by 25%, but your on-time performance does not change, then your SLA is too loose.

An SLA must be tracked and trended. It must be correlated with demand metrics.
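As a toy illustration of what "tracked and trended" means (the names and numbers here are made up, not from any real tracking system), on-time performance is just a matter of recording submission and resolution times and counting:

```java
import java.util.List;

// Hypothetical sketch: the SLA clock runs from submission into the queue
// until the request is fully resolved, and on-time performance is the
// fraction of requests resolved within the allotted number of days.
public class SlaTracker {
    static class Request {
        final long submittedDay;
        final long resolvedDay;
        Request(long submittedDay, long resolvedDay) {
            this.submittedDay = submittedDay;
            this.resolvedDay = resolvedDay;
        }
    }

    static double onTimePercentage(List<Request> requests, long slaDays) {
        if (requests.isEmpty()) return 100.0;  // nothing asked, nothing late
        long met = requests.stream()
                .filter(r -> (r.resolvedDay - r.submittedDay) <= slaDays)
                .count();
        return 100.0 * met / requests.size();
    }
}
```

Trending this number against incoming request volume, as the paragraph above suggests, is what tells you whether the SLA is meaningful or merely loose.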

Third, consequences.

If there is no penalty, then there is no SLA. In fact, the IT Infrastructure Library considers penalties to be the defining characteristic of SLAs. (Of course, ITIL also says that SLAs are only possible with external suppliers, because it is only with external suppliers that you can have a contract.)

When was the last time that an internal group had its budget dinged for breaking an SLA? What would that even mean? How would the health and performance of the whole company be aided by taking resources away from a unit that already cannot perform?  The Theory of Constraints says that you devote more resources to the bottleneck, not less. Penalizing you for breaking SLA probably makes your performance worse, not better.

(External suppliers are different because a) you're paying them, and b) they have a profit margin. I doubt the same is true for your own internal groups.)

If there's no penalty, then it's not an SLA.

Fourth, consent.

SLAs are defined by joint consent of both the supplier and consumer of the service. As a subscriber to your service, I can make economic judgments about how much to pay for what level of service. You can make economic judgments about how well you can deliver service at the required level for the offered payment.

When are internal "service level agreements" actually an "agreement"? Never. I always see SLAs being imposed by one group upon all of their subscribers.

An SLA must be an agreement, not a dictum.


If any of these conditions are not met, then it's not really an SLA. It's just a "best effort response time". As a consumer, and sometimes victim, of the service, I cannot plan to the SLA time. Rather, I must manage around it. Calling a "best effort response time" an "SLA" is just an attempt to deceive both of us.


Y B Slow?

I've long been a fan of the Firebug extension for Firefox.  It gives you great visibility into the ebb and flow of browser traffic.  It sure beats rolling your own SOCKS proxy to stick between your browser and the destination site.

Now, I have to also endorse YSlow from Yahoo.  YSlow adds interpretation and recommendations to Firebug's raw data.

For example, when I point YSlow at www.google.com, here's how it "grades" Google's performance:

Google gets an A for performance

Not bad.  On the other hand, www.target.com doesn't fare as well.

Target gets an F for performance

Along with the high-level recommendations, YSlow will also tally up the page weight, including a nice breakdown of cached versus non-cached requests and download size.

Cache stats for Target.com

There are so many good reasons to use this tool. In Release It, I spend a lot of time talking about the money companies waste on bloated HTML and unnecessary page requests.  Fat pages hurt users and they hurt companies.  Users don't want to wait for all your extra whitespace, table-formatting, and shims to download.  Companies shouldn't have to pay for all the added, useless bandwidth.  YSlow is a great tool to help eliminate the bloat, speed up page delivery, and make happy users.

ITIL and Extreme Programming

Esther Schindler asked if I'd be willing to post my earlier article on staying agile in the face of ITIL at CIO.com.  How could I say no?  The piece is here.


Quantum Manipulations

I work in information technology, but my first love is science.  Particularly the hard sciences of physics and cosmology.

There've been a series of experiments over the last few years that have demonstrated quantum manipulations of light and matter that approach the macroscopic realm.

A recent result from Harvard (HT to Dion Stewart for the link) has gotten a lot of (incorrect) play.  It involves absorbing photons with a Bose-Einstein condensate, then reproducing identical photons at some distance in time and space.  I've been reading about these experiments with a lot of interest, along with the experiments going the "other" direction: superluminal group velocities.

I wish the science writers would find a new metaphor, though.  They all talk in terms of "stopping light" or "speeding up light".  None of these experiments changes the speed of light, either up or down.  This is about photons, not the speed of light.

In fact, this latest one is even more interesting when you view it in terms of the "computational universe" theory of Seth Lloyd.  What they've done is captured the complete quantum state of the photons, somehow 'imprinted' on the atoms in the condensate, then recreated the photons from that quantum state.

This isn't mere matter-energy conversion as the headlines have said.  It's something much more.

The Bose-Einstein condensate can be described as a phase of matter colder than a solid.  It's much weirder than that, though.  In the condensate, all the particles in all the atoms achieve a single wavefunction.  You can describe the entire collection of protons, neutrons and electrons as if it were one big particle with its own wavefunction.

This experiment with the photons shows that the photons' wavefunctions can be superposed with the wavefunction of the condensate, then later extracted to separate the photons from the condensate.

The articles somewhat misrepresent this as being about converting light (energy) to matter, but it's really about converting the photon particles to pure information, then using that information to recreate identical particles elsewhere.  Yikes!

Education as mental immune system

Education and intelligence act like a memetic immune system. For instance, anyone with knowledge of chemistry understands that "binary liquid explosives" are a movie plot, not a security threat. On the other hand, lacking education, TSA officials told a woman in front of me to throw away her Dairy Queen ice cream cones before she could board the plane. Ice cream.

How in the hell is anyone supposed to blow up a plane with ice cream? It defies imagination.

She was firmly and seriously told, "Once it melts, it will be a liquid and all liquids and gels are banned from the aircraft."

I wanted to ask him what the TSA's official position was on colloidal solids. They aren't gels or liquids, but amorphous liquids trapped in a suspension of solid crystals. Like a creamy mixture of dairy fats, egg yolks, and flavoring trapped in a suspension of water ice crystals.

I didn't, of course. I've heard the chilling warnings, "Jokes or inappropriate remarks to security officials will result in your detention and arrest." (Real announcement. I heard it in Houston.) In other words, mouth off about the idiocy of the system and you'll be grooving to Britney Spears in Gitmo.

On the other hand, there are other ideas that only make sense if you're overly educated. Dennis Prager is fond of saying that you have to go to graduate school to believe things like, "The Republican party is more dangerous than Hizbollah."

Of course, I don't think he's really talking about post-docs in Chemical Engineering.

Inviting Disaster

I'm reading a fabulous book called "Inviting Disaster", by James R. Chiles. He discusses hundreds of engineering and mechanical disasters. Most of them caused serious loss of life.

There are several common themes:

1. Enormously complex systems that react in sometimes unpredictable ways

2. Inadequate testing, training, or preparedness for failures -- particularly for multiple concurrent failures

3. A chain of events leading to the "system fracture", usually exacerbated by human error

4. Politics or budget pressure causing otherwise responsible people to rush things out. This often involves whitewashing or pooh-poohing legitimate criticism and concern from experts involved.

The parallels to some projects I've worked on are kind of eerie. Particularly when he's talking about things like the DC-10 and the Hubble Space Telescope. In both of those cases, warning signs were visible during the construction and early testing, but because each of the people involved had tunnel vision limited to that person's silo, the clues got missed.

The scary part is that there is no solution here. Sometimes, you can't even place the blame very squarely. When half-a-dozen people were involved with unloading and handling of oxygen-generating cylinders on a ValuJet flight, no single individual really did something wrong (or contrary to procedure, anyway). Still, the net effect of their actions cost the lives of every single person on that flight.

It's grim stuff, but it ought to be required reading. If you ever leave your house again, you'll be much better prepared for building and operating complex systems.

Technorati Tags: operations, systems


One of the most fun features of my current project

One of the most fun features of my current project is our "extreme feedback monitor". We're using CruiseControl to build our entire codebase, including unit tests, acceptance tests, and quality metrics, every five minutes. To make a broken build painfully obvious, we've got a stoplight hanging on one wall of the room. (I may post some pictures later, if there's interest.)

Kyle Larson found the stoplight itself in a gift shop (Spencer's, maybe, I can't remember... Kyle, help me out here). It had just one plug, but you could push each light as a separate switch.

Well, it looks pretty dumb to walk over and push on the red light to show a broken build. It's not pragmatic and it's not automated. So, Kyle rewired it with two additional cords, so each lamp has its own plug.

I plugged each lamp into an X10 lamp module so each color could be turned on and off individually. I hooked a "FireCracker" wireless transmitter up to the serial port on the build box. With one switched receiver and two lamp modules, we were ready to go.

CruiseControl supports a publisher that is supposed to integrate directly with X10 devices over the serial port. Unfortunately, the installation and setup for Java programs to work with X10 devices on Linux is... problematic. First off, the JavaComm API appears to be totally stagnant. It does not support Linux at all, so you have to install the Solaris SPARC version, then swap in an open-source Linux implementation of the API (www.rxtx.org) by replacing a .properties file. Then you have to make sure that the user running your build loop is a member of the "tty" group. Then just cross your fingers.

I got all of the above to work from my Java test apps, but the X10 publisher built into CC still couldn't open the serial port.

I finally gave up on the built-in publisher. I used wget, BottleRocket, and a shell script to check the build status web page every 30 seconds and change the lights accordingly.
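A minimal sketch of that polling loop might look like the following. The status URL, the text matched in the page, and the X10 house/unit codes passed to BottleRocket's `br` command are all my assumptions for illustration, not the original script:

```shell
#!/bin/sh
# Poll the build-status page and drive the stoplight via BottleRocket.
# STATUS_URL, the "failed" match, and the br unit codes are assumed.

STATUS_URL="http://buildbox:8080/cruisecontrol/buildresults/project"

# Map the text of the build-status page to a lamp color.
light_for() {
  case "$1" in
    *failed*|*FAILED*) echo red ;;
    *) echo green ;;
  esac
}

# Poll every 30 seconds when invoked with --run; without it, the script
# only defines light_for, so the mapping can be reused or tested.
if [ "${1:-}" = "--run" ]; then
  while true; do
    page=$(wget -q -O - "$STATUS_URL")
    if [ "$(light_for "$page")" = red ]; then
      br --house a --on 1    # red lamp module (X10 unit A1, assumed)
      br --house a --off 2   # green lamp module (A2, assumed)
    else
      br --house a --off 1
      br --house a --on 2
    fi
    sleep 30
  done
fi
```

Keeping the page-to-color decision in its own function keeps the X10 side trivial: the loop only ever asks "what color should be lit right now?" and flips the lamps to match.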

Now, within a minute of a broken build, we can all see it. When the light is green, the build is clean.

If the red light means "broken build", and the green light means "good build", you might wonder what we use yellow for.

Yellow means that someone is in the process of synchronizing and committing code. Along with the FireCracker module, we also got a remote control. That normally sits in the middle of the tables in the lab. Whenever a pair needs to check in code, they grab the remote (i.e., take the semaphore) and turn on the yellow light. As an added "feature", the wireless switched receiver is the only module that makes an audible "click" when it switches. We use that one to control the yellow lamp, so we also have an auditory cue when a pair starts their commit dance.

After committing, the pair turns off the yellow light and replaces the remote, thus releasing the semaphore and allowing the next pair to commit. In the event of multiple blocked pairs, FIFO behavior is not guaranteed. Semaphore holders have been known to be susceptible to flattery and bribery.

Technorati Tags: agile, automation, CruiseControl, pragmatic

The Veteran and the Master

The aged veteran said to the master, "See how many programs I have written in my labors. All of these works I have created needed no more than a text editor and a compiler." The master said, "I do have an editor; indeed, I have also a compiler."

Said the aged one, "Yet you shackle them within an 'environment'. Why must your environment be integrated? My environment has never been integrated, yet I am a mighty programmer."

The master said, "You are truly a mighty programmer. I perceive that you, in your keen intellect, can hold entire class hierarchies in mind at once. Such abilities of apprehension are to be respected."

The veteran was well pleased and said, "It is true. Hence I am lead programmer."

The master nodded. "Sadly, I have not your powers of visualization. I cannot hold entire hierarchies in my minds eye at once. In my limited faculties, I must focus entirely on one class at a time. The tool remembers the rest, as I cannot."

Emboldened, the aged veteran boasted, "See the commands fly from my fingertips! I type faster than other programmers think!"

Again, the master nodded his agreement, "I am not so blessed with speed as you. It is a burden and a trial to move so slowly. Behold, this measure of the marvel of your fingers. Such is the flight of your keystrokes that in the time it takes you to execute a regex replace across thirty files; compile the project; note the errors; and edit the twelve files with failed replacements; I will have barely completed the 'rename refactor' which I started by typing shift-alt-r."

Brazen in his opponent's weakness, the veteran cried, "While you sit meditating at the green bar, I pound out another four thousand lines of code!"

Again, the master nodded, "Yes. And worse, while you write the next thousand, I will surely erase a thousand more, leaving us barely past where we began. It is clear that I cannot long contend in this field against such as yourself."

The battle-scarred veteran, his opponent beaten, laughed aloud. Barely bothering to express his contempt, he sneered, "And what fine code it is, too! You write a fraction of the code a real programmer could produce. As a coward in the grain, you shrink from any real challenge. Fearing to tread where real programmers dwell, you trade in coin like a merchant, purchasing the work of others, or worse, living on the charity of those motley-clad coders who give away the fruits of their work."

"Again, your perspicacity has unmasked me," said the master. "Knowing myself to produce bugs in my code, I prefer to write little of it. I do rely upon the work of others who, if not being smarter than myself, are at least more numerous than I. Had I your fleet fingers, I might not need to download these gifts offered by others. Indeed, I am certain that your mighty editor would surely outpace my mere web browser, and you could then code a new SVG renderer long before I will finish downloading Batik to do the same work. Alas, lacking your skills, I must fend for myself as best I can by reusing that which I can. Since each line of code costs me so greatly, it behooves me to write little, and I must needs make use of what aids I can."

Shaking his head, the aged veteran stalked away, safely assured that he had gauged the so-called master truly. He returned to his labors, building a parser for the scripting language of his workflow engine. This would be placed inside of an application that would someday have users.

Shaking his head, the master returned his eye to the red bar of his users' new acceptance tests. Reaching deliberately for the keyboard, he changed two methods and added one test case. In the serene green light of the test bar, he reflected a moment on the code he had added. Unruffled by the staccato typing in the direction of the veteran, he renamed four fields, extracted a method, and pulled it up into a new base class. Comforted by the tranquil green light, the master rested his hands a moment, then lifted them from the keyboard and walked away.

From the corner of his eye, the veteran observed the master leaving. "Charlatan," he snarled, as the regexes flew from his hands, long, long into the night.

Technorati Tags: agile, lean, pragmatic

An IKEA Weekend

I've been building a new office in my downstairs space for quite a while now. It's a "weekends" project for someone who doesn't have very many weekends. In early December, I broke down and hired a contractor to install the laminate ("cardboard") flooring, which was the penultimate step in the master plan.

Last comes furniture, then moving in. (Which starts the chain of dominoes, as my eldest gets the bedroom which used to be my office, then my youngest takes her spot, which makes room for the new baby. The challenge is to finish with the hole migration before the new electron gets injected. No, that wasn't a spelling error.)

So this weekend, I had thirty-six boxes of IKEA modular furniture from "Work IKEA" to assemble.

You have time to meditate on many lessons when you are assembling thirty-six boxes of IKEA modular furniture.

For example, I've never seen a company that makes it so difficult to purchase from them. I don't really want to know that the six-shelf bookshelf I picked out from the design software actually comes as three separate SKUs. Just sell me the damn shelf.

I shouldn't have to learn what a "CDO" is in order to pick out a bunch of stuff and have them deliver it on a specific day. I shouldn't have to make three trips into the store because they cannot take my credit card number over the phone.

And can someone please explain why I have to remove items from my delivery order because the local store doesn't have them in stock? In some fields of endeavor, timing is everything, but why should I have to call them every day to find out when the left-handed tabletop comes in, then rush to the store and place my order so the piece can be pulled from inventory?

It makes no sense to me. The whole process was implemented for the convenience of IKEA, not IKEA's customers. They've made a business decision to optimize for cost control rather than customer satisfaction. IKEA is certainly free to make that choice, and they do seem to be making profits, but I'm not likely to choose them for future furniture purchases.

Exposing that much of your internal process to the customer--or end user--is never a good way to win hearts and minds.

Most of the assembly went without incident, though I was often perplexed by trying to map the low-level components into the high-level items I designed with. IKEA offers zero-cost software for download to design a floorplan with their lines, but it works at a higher level of abstraction. I was often left wondering which item a particular component was supposed to construct.

The components were very well designed. Each piece can either fit together in only one way, or it is rotationally symmetric so either orientation works. In either case, I, the assembler, am not left with an ambiguous situation, where something might fit but does not work.

The toughest pieces were the desks. Desks can be configured in about eighty-nine different ways. The components are all modular and generally have the same interfaces. I have a lot of flexibility at my disposal, but at the expense of complexity. A significant number of sample configurations helped me understand the complexity of options and pick a reasonable structure, but I can't help but wonder how the experience could be simplified.

The furniture is all assembled now, and the office sits expectantly waiting for its occupant, full of unrealized potential.

Wiki Proliferation

Wikis have been thoroughly mainstreamed now. You know how I can tell? Spammers are targeting them.

Any wiki without access control is going to get steamrolled by a bunch of Russian computers that are editing wiki pages. They replace all the legitimate content with links to porn sites, warez, viagra, get rich now, and the usual panoply of digital plaque.

The purpose does not appear to be driving traffic directly to those sites from the wikis. Instead, they are trying to pollute Google's page rankings by creating thousands upon thousands of additional inbound links.

If you run a wiki, be sure to enable access control and versioning (so you can recover after an attack). It is a shame that the open, freewheeling environment of the wiki has to end. It seems that the only way to preserve the value of the community is to weaken the core value of open participation that made the community worthwhile.

Plugging the Marbles Newsletter

Not too much going on here lately. Most of my waking hours have been billable for the past few months. That's good and bad, in so many different ways.

Most of my recent writing has been for the Marbles, Inc. monthly newsletter.

Dec 2006 Edit: Marbles IT has not been a going concern for some time.  My articles for the Marbles Monthly newsletter are now available under the Marbles category of this blog.


This kind of thing makes me wish I were back at Caltech.