Wide Awake Developers

Quantum Manipulations


I work in information technology, but my first love is science.  Particularly the hard sciences of physics and cosmology.

Over the last few years, a series of experiments has demonstrated quantum manipulations of light and matter that approach the macroscopic realm.

A recent result from Harvard (HT to Dion Stewart for the link) has gotten a lot of (incorrect) play.  It involves absorbing photons with a Bose-Einstein condensate, then reproducing identical photons at some distance in time and space.  I’ve been reading about these experiments with a lot of interest, along with the experiments going the "other" direction: superluminal group velocities.

I wish the science writers would find a new metaphor, though.  They all talk in terms of "stopping light" or "speeding up light".  None of these have to do with changing the speed of light, either up or down.  This is about photons, not the speed of light.

In fact, this latest one is even more interesting when you view it in terms of the "computational universe" theory of Seth Lloyd.  What they’ve done is captured the complete quantum state of the photons, somehow ‘imprinted’ on the atoms in the condensate, then recreated the photons from that quantum state.

This isn’t mere matter-energy conversion as the headlines have said.  It’s something much more.

The Bose-Einstein condensate can be described as a phase of matter colder than a solid.  It’s much weirder than that, though.  In the condensate, all the particles in all the atoms achieve a single wavefunction.  You can describe the entire collection of protons, neutrons and electrons as if it were one big particle with its own wavefunction.

This experiment with the photons shows that the photons’ wavefunctions can be superposed with the wavefunction of the condensate, then later extracted to separate the photons from the condensate.

The articles somewhat misrepresent this as being about converting light (energy) to matter, but it’s really about converting the photon particles to pure information, then using that information to recreate identical particles elsewhere.  Yikes!

A Path to a Product


Here’s a "can’t lose" way to identify a new product: Enable people to plan ahead less. 

Take cell phones.  In the old days, you had to know where you were going before you left.  You had to make reservations from home.  You had to arrange a time and place to meet your kids at Disney World.

Now, you can call "information" to get the number of a restaurant, so you don’t have to decide where you’re going until the last possible minute.  You can call the restaurant for reservations from your car while you’re already on your way.

With cell phones, your family can split up at a theme park without pre-arranging a meeting place or time.

Cell phones let you improvise with success.  Huge hit.

GPS navigation in cars is another great example.  No more calling AAA weeks before your trip to get "TripTix" maps.  No more planning your route on a road atlas.  Just get in your car, pick a destination and start driving.  You don’t even have to know where to get gas or food along the way.

Credit and debit cards let you go places without planning ahead and carrying enough cash, gold, or jewels to pay your way.

The Web is the ultimate "preparation avoidance" tool.  No matter what you’re doing, if you have an always-on ‘Net connection, you can improvise your way through meetings, debates, social engagements, and work situations.

Find another product that lets procrastinators succeed, and you’ve got a sure winner.  There’s nothing that people love more than the personal liberation of not planning ahead.

 

How to Become an “Architect”


Over at The Server Side, there’s a discussion about how to become an "architect".  Though TSS comments often turn into a cesspool, I couldn’t resist adding my own two cents.

I should also add that the title "architect" is vastly overused.  It’s tossed around like a job grade on the technical ladder: associate developer, developer, senior developer, architect.  If you talk to a consulting firm, it goes more like: senior consultant (1 - 2 years experience), architect (3 - 5 years experience), senior technical architect (5+ years experience).  Then again, I may just be too cynical.

There are several qualities that the architecture of a system should have.  It should be:

  1. Shared.  All developers on the team should have more or less the same vision of the structure and shape of the overall system.
  2. Incremental.  Grand architecture projects lead only to grand failures.
  3. Adaptable. Successful architectures can be used for purposes beyond their designers’ original intentions.  (Examples: Unix pipes, HTTP, Smalltalk)
  4. Visible.  The "sacred, invisible architecture" will fall into disuse and disrepair.  It will not outlive its creator’s tenure or interest.

Is the designated "architect" the only one who can produce these qualities?  Certainly not.  He/she should be the steward of the system, however, leading the team toward these qualities, along with the other -ilities, of course.

Finally, I think the most important qualification of an architect should be: someone who has created more than one system and lived with it in production.  Note that this automatically implies that the architect must have at least delivered systems into production.  I’ve run into "architects" who’ve never had a project actually make it into production, or if they have, they’ve rolled off the project—again with the consultants—just as Release 1.0 went out the door.

In other words, architects should have scars. 

Planning to Support Operations


In 2005, I was on a team doing application development for a system that would be deployed to 600 locations. About half of those locations would not have network connections. We knew right away that deploying our application would be key, particularly since it is a "rich-client" application. (What we used to call a "fat client", before they became cool again.) Deployment had to be done by store associates, not IT. It had to be safe, so that a failed deployment could be rolled back before the store opened for business the next day. We spent nearly half of an iteration setting up the installation scripts and configuration. We set our continuous build server up to create the "setup.exe" files on every build. We did hundreds of test installations in our test environment.

Operations said that our software was "the easiest installation we’ve ever had." Still, that wasn’t the end of it. After the first update went out, we asked operations what could be done to improve the upgrade process. Over the next three releases, we made numerous improvements to the installers:

  • Make one "setup.exe" that can install either a server or a client, and have the installer itself figure out which one to do.
  • Abort the install if the application is still running. This turned out to be particularly important on the server.
  • Don’t allow the user to launch the application twice. Very hard to implement in Java. We were fortunate to find an installer package that made this a check-box feature in the build configuration file! (One common technique is sketched below.)
  • Don’t show a blank Windows command prompt window. (An artifact of our original .cmd scripts that were launching the application.)
  • Create separate installation discs for the two different store brands.
  • When spawning a secondary application, force its window to the front, avoiding the appearance of a hang if the user accidentally gives focus to the original window.

These changes reduced support call volume by nearly 50%.
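
For the curious, here’s a rough sketch of one common way to enforce a single instance in plain Java: hold an exclusive lock on a well-known file for the life of the process. We ultimately let the installer package handle this for us, so treat the sketch as illustrative only; the lock-file location and class name are made up.

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class SingleInstanceGuard {

  // Illustrative location only; a real install would use a per-user or
  // per-install directory rather than the system temp directory.
  private static final File LOCK_FILE =
      new File(System.getProperty("java.io.tmpdir"), "storeapp.lock");

  private static RandomAccessFile lockFile;
  private static FileLock lock;

  // Returns true if this process is the only running instance.
  public static boolean acquire() {
    try {
      lockFile = new RandomAccessFile(LOCK_FILE, "rw");
      lock = lockFile.getChannel().tryLock();
      return lock != null;  // null means another instance already holds the lock
    } catch (Exception e) {
      return false;         // when in doubt, refuse to start a second copy
    }
  }

  public static void main(String[] args) {
    if (!acquire()) {
      System.err.println("The application is already running.");
      System.exit(1);
    }
    // ... launch the real application here ...
  }
}

The nice property of a file lock is that the operating system releases it when the process dies, so a crash doesn’t leave the application looking permanently "running."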

My point is not to brag about what a great job we did. (Though we did a great job.) The point is that we deliberately set aside a portion of our team capacity each iteration to keep improving our support for operations. Operations had an open invitation to our iteration planning meetings, where they could prioritize and select story cards the same as our other stakeholders. In this manner, we explicitly included Operations as a stakeholder in application construction. They consistently brought us ideas and requests that we, as developers, would not have come up with.

Furthermore, we forged a strong bond with Operations. When issues arose—as they always will—we avoided all of the usual finger-pointing. We reacted as one team, instead of two disparate teams trying to avoid responsibility for the problems. I attribute that partly to the high level of professionalism in both development and operations, and partly to the strong relationship we created through the entire development cycle.

"Us" and "Them"


As a consultant, I’ve joined a lot of projects, usually not right when the team is forming. Over the years, I’ve developed a few heuristics that tell me a lot about the psychological health of the team. Who lunches together? When someone says "whole team meeting," who is invited? Listen for the "us and them" language. How inclusive is the "us," and who is relegated to "them"? These simple observations speak volumes about how the development team perceives the people around it: you can see who they consider their stakeholders, their allies, and their opponents.

Ten years ago, for example, the users were always "them." Testing and QA were always "them." Today, particularly on agile teams, testers and users often get "us" status. (As an aside, this may be why startups show such great productivity in the early days: the company isn’t big enough to allow "us" and "them" thinking to set in. Of course, the converse is true as well: "us" and "them" thinking in a startup might be a failure indicator to watch out for!) Watch out if an "us" suddenly becomes "them." Trouble is brewing!

Any conversation can create a "happy accident": some understanding that obviates a requirement, avoids a potential bug, reduces cost, or improves the outcome in some other way. Conversations prevented by an armed-camp mentality are opportunities lost.

One of the most persistent and perplexing "us" and "them" divisions I see is between development and operations. Maybe it’s due to the high org-chart distance (OCD) between development groups and operations groups. Maybe it’s because development doesn’t tend to plan as far ahead as operations does. Maybe it’s just due to a long-term dynamic of requests and refusals that sets each new conversation up for conflict. Whatever the cause, two groups that should absolutely be working as partners often end up in conflict, or worse, barely speaking at all.

This has serious consequences. People in the "us" tent get their requests built very quickly and accurately. People in the "them" tent get told to write specifications. Specifications have their place. Specifications are great for the fourth or fifth iteration of a well-defined process. During development, though, ideas need to be explored, not specified. If a developer has a vague idea about using the storage area network to rapidly move large volumes of data from the content management system into production, but he doesn’t know how to write the request, the idea will wither on the vine.

The development-operations divide virtually ensures that applications will not be transitioned to operations as effectively as possible. Some vital bits of knowledge just don’t fit into a document template. For example, developers have knowledge about the internals of the application that can help diagnose and recover from system failures. (Developer: "Oh, when you see all the request handling threads blocked inside the K2 client library, just bounce the search servers. The app will come right back." Operations: "Roger that. What’s a thread?") These gaps in knowledge degrade uptime, either by extending outages or preventing operations from intervening. If the company culture is at all political, one or two incidents of downtime will be enough to start the finger-pointing between development and operations. Once that corrosive dynamic gets started, nothing short of changing the personnel or the leadership will stop it.

Inviting Domestic Disaster


We had a minor domestic disaster this morning. It’s not unusual. With four children, there’s always some kind of crisis. Today, I followed a trail of water along the floor to my youngest daughter. She was shaking her "sippy cup" upside down, depositing a full cup of water on the carpet… and on my new digital grand piano. 

Since the entire purpose of the "sippy cup" is to contain the water, not to spread it around this house, this was perplexing.

On investigation, I found that this failure in function actually mimicked common dynamics of major disasters. In Inviting Disaster, James R. Chiles describes numerous mechanical and industrial disasters, each with a terrible cost in lives. In Release It!, I discuss software failures that cost millions of dollars—though, thankfully, no lives. None of these failures comes as a bolt from the blue. Rather, each one has precursor incidents: small issues whose significance is only obvious in retrospect. Most of these chains of events also involve humans and human interaction with the technological environment.

The proximate cause of this morning’s problem was inside the sippy cup itself. The removable valve was inserted into the lid backwards, completely negating its purpose. A few weeks earlier, I had pulled a sippy cup from the cupboard with a similarly backward valve. I knew it had been assembled by my oldest, who has the job of emptying the dishwasher, so I made a mental note to provide some additional instruction. Of course, mental notes are only worth the paper they’re written on. I never did get around to speaking with her about it.

Today, my wonderful mother-in-law, who is visiting for the holidays, filled the cup and gave it to my youngest child. My mother-in-law, not having dealt with thousands of sippy cup fillings, as I have, did not notice the reversed valve, or did not catch its significance.

My small-scale mess was much easier to clean up than the disasters in "Release It!" or "Inviting Disaster". It shared some similar features, though. The individual with experience and knowledge to avert the problem–me–was not present at the crucial moment. The preconditions were created by someone who did not recognize the potential significance of her actions. The last person who could have stopped the chain of events did not have the experience to catch and stop the problem. Change any one of those factors and the crisis would not have occurred.

Book Completed


I’m thrilled to report that my book is now out of my hands and into the hands of copy editors and layout artists.

It’s been a long trip. At the beginning, I had no idea just how much work was needed to write an entire book. I started this project 18 months ago, with a sample chapter, a table of contents, and a proposal. That was a few hundred pages, three titles, and a thousand hours ago.

Now "Release It! Design and Deploy Production-Ready Software" is close to print. Even in these days of the permanent ephemerance of electronic speech, there’s still something incomparably electric about seeing your name in print.

Along with publication of the book, I will be making some changes to this blog. First, it’s time to find a real home. That means a new host, but it should be transparent to everyone but me. Second, I will be adding non-blog content: excerpts from the book, articles, and related content. (I have some thoughts about capacity management that need a home.) Third, if there is interest, I will start a discussion group or mailing list for conversation about survivable software.

 

Reflexivity and Introspection


A fascinating niche of programming languages consists of those languages which are constructed in themselves. For instance, Squeak is a Smalltalk whose interpreter is written in Squeak. Likewise, the best language for writing a LISP interpreter turns out to be LISP itself. (That one is more like nesting than bootstrapping, but it’s closely related.)

I think Ruby has enough introspection to be built the same way. Recently, a friend clued me in to PyPy, a Python interpreter written in Python.

I’m sure there are many others. In fact, the venerable GCC is written in its own flavor of C. Compiling GCC from scratch requires a bootstrapping phase: a small version of GCC, written in a more portable form of C, is first compiled with some other C compiler. Then that stage-one micro-GCC compiles the full GCC for the target platform.

Reflexivity arises when the language has sufficient introspective capabilities to describe itself. I cannot help but be reminded of Gödel, Escher, Bach and the difficulties that reflexivity causes. Gödel’s Theorem doesn’t really kick in until a formal system is complex enough to describe itself. At that point, Gödel’s Theorem proves that there will be true statements, expressed in the language of the formal system, that cannot be proven true within that system. These are inevitably statements about themselves—the symbolic logic form of "This sentence is false."

Long-time LISP programmers create works with such economy of expression that we can only use artistic metaphors to describe them. Minimalist. Elegant. Spare. Rococo.

Forth was my first introduction to self-creating languages. FORTH starts with a tiny kernel (small enough that it fit into a 3KB cartridge for my VIC-20) that gets extended one "word" at a time. Each word adds to the vocabulary, essentially customizing the language to solve a particular problem. It’s really true that in FORTH, you don’t write programs to solve problems. Instead, you invent a language in which solving the problem is trivial, then you spend your time implementing that language.

Another common aspect of these self-describing languages seems to be that they never become widely popular. I’ve heard several theories that attempted to explain this. One says that individual LISP programmers are so productive that they never need large teams. Hence, cross-pollination is limited and it is hard to demonstrate enough commercial demand to seem convincing. Put another way: if you started with equal populations of Java and LISP programmers, demand for Java programmers would quickly outstrip demand for LISP programmers… not because Java is a superior language, but just because you need more Java programmers for any given task. This demand becomes self-reinforcing, as commercial programmers go where the demand is, and companies demand what they see is available.

I also think there’s a particular mindset that admires and relates to the dynamic of the self-creating language. I suspect that programmers possessing that mindset are also the ones who get excited by metaprogramming.

Education as Mental Immune System


Education and intelligence act like a memetic immune system. For instance, anyone with knowledge of chemistry understands that "binary liquid explosives" are a movie plot, not a security threat. On the other hand, lacking education, TSA officials told a woman in front of me to throw away her Dairy Queen ice cream cones before she could board the plane. Ice cream.

How in the hell is anyone supposed to blow up a plane with ice cream? It defies imagination.

She was firmly and seriously told, "Once it melts, it will be a liquid and all liquids and gels are banned from the aircraft."

I wanted to ask him what the TSA’s official position was on colloidal solids. They aren’t gels or liquids, but amorphous liquids trapped in a suspension of solid crystals. Like a creamy mixture of dairy fats, egg yolks, and flavoring trapped in a suspension of water ice crystals.

I didn’t, of course. I’ve heard the chilling warnings, "Jokes or inappropriate remarks to security officials will result in your detention and arrest." (Real announcement. I heard it in Houston.) In other words, mouth off about the idiocy of the system and you’ll be grooving to Britney Spears in Gitmo.

On the other hand, there are other ideas that only make sense if you’re overly educated. Dennis Prager is fond of saying that you have to go to graduate school to believe things like, "The Republican party is more dangerous than Hizbollah."

Of course, I don’t think he’s really talking about post-docs in Chemical Engineering.

Expressiveness, Revisited


I previously mused about the expressiveness of Ruby compared to Java. Dion Stewart pointed me toward F-Script, an interpreted, Smalltalk-like scripting language for Mac OS X and Cocoa. In F-Script, invoking a method on every object in an array is built-in syntax. Assume that updates is an array containing objects that understand the preProcess and postProcess messages:

updates preProcess
updates postProcess

That’s it. Iterating over the elements of the collection is automatic.
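
For comparison, here’s roughly what the same thing looks like in Java. This is just a sketch; the Update interface is my invention, standing in for "objects that understand the preProcess and postProcess messages."

import java.util.List;

interface Update {
  void preProcess();
  void postProcess();
}

class UpdateBatch {
  static void process(List<Update> updates) {
    // The iteration is explicit: we spell out "for each element" twice.
    for (Update update : updates) {
      update.preProcess();
    }
    for (Update update : updates) {
      update.postProcess();
    }
  }
}

Not unreasonable, but the looping is something you write, not something the language does for you.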

F-Script admits much more sophisticated array processing: multilevel iteration, row-major processing, column-major processing, inner products, outer products, "compression" and "reduction" operations. The most amazing thing is how natural the idioms look, thanks to the clean syntax and the dynamic nature of the language.

It reminds me of a remark about General Relativity: its economy of expression allows vast truths to be stated in one simple, compact equation. It would, however, require fourteen years of study to understand the notation used to write that equation, and one could spend a lifetime understanding its implications.

Technorati Tags: java, beyondjava, ruby, fscript