Wide Awake Developers

Two Books That Belong In Your Library

I seldom plug books---other than my own, that is. I've just read two important books, however, that really deserve your attention.

Concurrency, Everybody's Doing It

The first is Java Concurrency in Practice by Brian Goetz, Tim Peierls, Joshua Bloch, Joseph Bowbeer, David Holmes, and Doug Lea. I've been doing Java development for close to thirteen years now, and I learned an enormous amount from this fantastic book. For example, I knew what the textbook definition of a volatile variable was, but I never knew why I would actually want to use one. Now I know when to use them and when they won't solve the problem.

Of course, JCP talks about the Java 5 concurrency library at great length. But this is no paraphrasing of the javadoc. (It was Doug Lea's original concurrency utility library that eventually got incorporated into Java, and we're all better off for it.) The authors start with illustrations of real issues in concurrent programming. Before they introduce the concurrency utilities, they explain a problem and illustrate potential solutions. (Usually involving at least one naive "solution" that has serious flaws.) Once they show us some avenues to explore, they introduce some neatly-packaged, well-tested utility class that either solves the problem or makes a solution possible. This removes the utility classes from the realm of "inscrutable magic" and presents them as "something difficult that you don't have to write."

The best part about JCP, though, is the combination of thoroughness and clarity with which it presents a very difficult subject. For example, I always understood the need to avoid concurrent modification of mutable state. But, thanks to this book, I also see why you have to synchronize getters, not just setters. (Even though assignment to an integer is guaranteed to happen atomically, that isn't enough to guarantee that the change is visible to other threads. The only way to guarantee ordering is by crossing a synchronization barrier on the same lock.)
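
To make that concrete, here's a minimal sketch of my own (not an example from the book). Both accessors synchronize on the same lock, so a write by one thread is guaranteed to be visible to a later read from another:

    public class Counter {
        private int value;

        // Assignment to an int is atomic, but atomicity alone doesn't
        // guarantee visibility. Synchronizing both methods on the same
        // lock ("this") does: a reader that enters getValue() after a
        // writer leaves setValue() is guaranteed to see the new value.
        public synchronized void setValue(int newValue) {
            value = newValue;
        }

        public synchronized int getValue() {
            return value;
        }
    }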

Blocked Threads are one of my stability antipatterns. I've seen hundreds of web site crashes. Every single one of them eventually boils down to blocked threads somewhere. Java Concurrency in Practice has the theory, practice, and tools that you can apply to avoid deadlocks, livelocks, corrupted state, and a host of other problems that lurk in the most innocuous-looking code.

Capacity Planning is Science, Not Art

The second book that I want to recommend today is Capacity Planning for Web Services. I've had this book for a while. When I first started reading it, I put it down right away thinking, "This is way too basic to solve any real problems." That was a big error.

Capacity Planning may get off to a slow start, but that's only because the authors are both thorough and deliberate. Later in the book, that deliberate pace is very helpful, because it lets us follow the math.

This is the only book on capacity planning I've seen that actually deals with transmission time for HTTP requests and responses. In fact, some of the examples even compute the number of packets that a request or reply will need.

I have objected to some capacity planning books because they assume that every process can be represented by an average. Not this one. In the section on standalone web servers, for example, the authors break files into several classes, then use a weighted distribution of file sizes to compute the expected response time and bandwidth requirements. This is a very real-world approach, since web requests tend toward a bimodal distribution: small HTML, Javascript, and CSS intermixed with large media files and images. (In fact, I plan on using the models in this book to quantify the effect of segregating media files from dynamic pages.)
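
To show the shape of that calculation (with made-up numbers, not the book's), the expected response time is just each class's mean service time weighted by its share of requests:

    public class WeightedResponseTime {
        public static void main(String[] args) {
            // Hypothetical traffic mix: the fraction of requests in each
            // file-size class and that class's mean service time.
            double[] share       = { 0.65, 0.25, 0.10 };  // small, medium, large
            double[] serviceTime = { 12.0, 45.0, 380.0 }; // milliseconds

            double expected = 0.0;
            for (int i = 0; i < share.length; i++) {
                expected += share[i] * serviceTime[i];
            }
            // 0.65*12 + 0.25*45 + 0.10*380 = 57.05 ms
            System.out.printf("Expected response time: %.2f ms%n", expected);
        }
    }

Notice how the large files contribute two-thirds of the expected time despite being a tenth of the requests. That's exactly why a single overall average misleads.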

This is also the only book I've seen that recognizes that capacity limits can propagate both downward and upward through tiers. There's a great example of how doubling the CPU performance in an app tier ends up increasing the demand on the database server, which almost totally nullifies the effect of the CPU upgrade. It also recognizes that all requests are not created equal, and recommends clustering request types by their CPU and I/O demands, instead of averaging them all together.

Nearly every result or abstract law has an example, written in concrete terms, which helps bridge theory and practice.

Both of these books deal with material that easily leads off into clouds of theory and abstraction. (JCP actually quips, "What's a memory model, and why would I want one?") These excellent works avoid the Ivory Tower trap and present highly pragmatic, immediately useful wisdom.

Well Begun Is Half Done

How long is your checklist for setting up a new development environment? It might seem like a trivial thing, but setup costs are part of the overall friction in your project. I've seen three page checklists that required multiple downloads, logging in as several users (root and non-root), and hand-typing SQL strings to set up the local database server.

I think the paragon of environment setup is the ubiquitous GNU autoconf system. Anyone familiar with Linux, BSD, or other flavors of UNIX will surely recognize this three-line incantation:

./configure
make
make install

The beauty of autoconf is that it adapts to you. In the open-source world, you can't stipulate one particular set of packages or versions, at least not if you actually want people to use your software and contribute to your project. In the corporate world, though, it's pretty common to see a project that requires a specific point-point rev of some Jakarta Commons library without actually documenting the version.

Then there are different places to put things: inside the project, in source control, or in the system. I recently went back to a project's code base after being away for more than two years. I thought we had done a good job of addressing the environment setup. We included all the deliverable jars in the code base, so they were all version controlled. But we decided to keep the development-only jars (like EasyMock, DBUnit, and JUnit) outside the code base. We did use Eclipse variables to abstract out the exact filesystem location, but when I returned to that code base, finding and restoring exactly the right versions of those build-time jars wasn't easy. In retrospect, we should have put the build-time jars under version control and kept them inside the code base.

Yes, I know that version control systems aren't good at versioning binaries like jar files. Who cares? We don't rev the jar files so often that the lack of deltas matters. Putting a new binary in source control when you upgrade from Spring 2.5 to Spring 2.5.1 really won't kill your repository. The cost of the extra disk space is nothing compared to the benefit of keeping your code base self-contained.

Maven users will be familiar with another approach. On a Maven project, you express external dependencies in a project model file. On the first build, Maven will download those dependencies from their "official" archives, then cache them locally. After that, Maven will just use the locally cached jar file, at least until you move your declared dependency to a newer revision. I have nothing against Maven. I know some people who swear by it, and others who swear at it. Personally, I just never got into it.
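
For anyone who hasn't seen it, a dependency declaration in Maven's project model file (pom.xml) looks something like this, using the Spring upgrade from earlier as the example:

    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring</artifactId>
      <version>2.5.1</version>
    </dependency>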

Then there are JRE extensions. The same project uses JAI, the Java Advanced Imaging API, which wants to be installed inside the JRE itself. We went along with that, but I was stumped for a while today when I saw hundreds of compile errors even though my Eclipse project's build path didn't show any unresolved dependencies. Of course, when you install JAI inside the JRE, it just becomes part of the Java runtime. That makes it an implicit dependency. I eventually remembered that trick, but it took a while. In retrospect, I wish we had tried harder to bring JAI's jars and native libraries into the code base as an explicit dependency.

Does developer environment setup time matter? I believe it does. It might be tempting to say, "That's a one-time cost, there's no point in optimizing it." It's not really a one-time cost, though. It's one time per developer, every time that developer has to reinstall. My rough observation says that, between migrating to a new workstation, Windows reinstalls, corporate re-imaging, and developer churn, you should expect three to five developer setups per year on an internal project.

For an open-source project, the sky is the limit. Keep in mind that you'll lose potential contributors at every barrier they encounter. Environment setup is the first one.

So, what's my checklist for a good environment setup checklist?

  • Keep the project self-contained. Bring all dependencies into the code base. Same goes for RPMs or third-party installers.
  • Make sure all JAR files have version numbers in their file names. If the upstream project doesn't build its JAR files with version numbers, go ahead and rename the jars.
  • Make bootstrap scripts for database actions such as user creation or schema builds.
  • If you absolutely must embed a dependency on something that lives outside the code base, make your build script detect its location, as in the sketch after this list. Don't rely on specific path names.
  • Don't assume your code base is in any particular filesystem on the build machine.
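
For that detection rule, an Ant fragment might look something like this sketch (the property names are mine; the point is to locate the install from the environment and fail loudly, rather than hard-code a path):

    <!-- Resolve an external JAI install from the environment instead of
         hard-coding a filesystem path. -->
    <property environment="env"/>
    <condition property="jai.home" value="${env.JAI_HOME}">
      <isset property="env.JAI_HOME"/>
    </condition>
    <fail unless="jai.home"
          message="Set JAI_HOME to the directory where JAI is installed."/>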

I'd love to hear your own rules for easy development setup.

"Release It" is a Jolt Award Finalist

The Jolt Awards have been described as "the Oscar's of our industry". (Really. It's on the front page of the site.)  The list of past book winners reads like an essential library for the software practitioner. Even the finalists and runners-up are essential reading.

Release It! has now joined the company of finalists. The competition is very tough... I've read "Beautiful Code" and "Manage It!", and both are excellent. I'll be on pins and needles until the awards ceremony on March 5th. Honestly, though, I'm just thrilled to be in such good company.

Should Email Errors Keep Customers From Buying?

Somewhere inside every commerce site, there's a bit of code sending emails out to customers. Email campaigning might have been in the requirements, in which case that email code stands tall at the brightly-lit service counter. On the other hand, it might have been added as an afterthought, left languishing in some dark corner next to the "lost and found" department. Either way, there's a good chance it's putting your site at risk.

The simplest way to code an email sending routine looks something like this:

  1. Get a javax.mail.Session instance
  2. Get a javax.mail.Transport instance from the Session
  3. Construct a javax.mail.internet.MimeMessage instance
  4. Set some fields on the message: from, subject, body.  (Setting the body may involve reading a template from a file and interpolating values.)
  5. Set the recipients' Addresses on the message
  6. Ask the Transport to send the message
  7. Close the Transport
  8. Discard the Session
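
Translated into javax.mail calls, the naive routine looks something like this sketch (the host and addresses are placeholders):

    import java.util.Properties;
    import javax.mail.Message;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class NaiveMailer {
        public void sendConfirmation(String recipient, String body) throws Exception {
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.example.com"); // placeholder host

            Session session = Session.getInstance(props);
            MimeMessage message = new MimeMessage(session);
            message.setFrom(new InternetAddress("orders@example.com"));
            message.setSubject("Your order confirmation");
            message.setText(body);
            message.setRecipient(Message.RecipientType.TO,
                                 new InternetAddress(recipient));

            // Steps 6 through 8 happen right here, on the request-handling
            // thread. If the MTA is slow or down, this call blocks, and the
            // exception tells us nothing about how to resend the message.
            Transport transport = session.getTransport("smtp");
            try {
                transport.connect();
                transport.sendMessage(message, message.getAllRecipients());
            } finally {
                transport.close();
            }
        }
    }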

This goes into a servlet, a controller, or a stateless session bean, depending on which MVC framework or JEE architecture blueprint you're using.

There are two big problems here. (Actually, there are three, but I'm not going to deal with the "one connection per message" issue.)

Request-Handling Threads at Risk

As written, all the work of sending the email happens on the request-handling thread that's also responsible for generating the response page. Even on a sunny day, that means you're spending some precious request-response cycles on work that doesn't help build the page.

You should always look at a call out to an external server with suspicion. Many of them can execute asynchronously to page generation. Anything you can offload to a background thread, you should, so the request handler can get back in the pool sooner. The user's experience will be better, and your site's capacity will be higher, if you do.

Also, keep in mind that SMTP servers aren't always 100% reliable. Neither are the DNS servers that point you to them. That goes double if you're connecting to some external service. (And please, please don't even tell me you're looking up the recipient's MX record and contacting the receiving MTA, the mail transfer agent, directly!)

If the MTA is slow to accept your connection, or to process the email, then the request-handling thread could be blocked for a long time: seconds or even minutes. Will the user wait around for the response? Not likely. He'll probably just hit "reload" and double-post the form that triggered the email in the first place.

Poor Error Recovery

The second problem is the complete lack of error recovery.  Yes, you can log an exception when your connection to the MTA fails. But that only lets the administrator know that some amount of mail failed. It doesn't say what the mail was! There's no way to contact the users who didn't get their messages. Depending on what the messages said, that could be a very big deal.

At a minimum, you'd like to be able to detect and recover from interruptions at the MTA: scheduled maintenance, Windows patching, unscheduled index rebuilds, and the like. Even if "recovery" means someone takes the users' info from the log file and types in a new message on their desktops, that's better than nothing.

A Better Way

The good news is that there's a handy way to address both of these problems at once. Better still, it works whether you're dealing with internal SMTP-based servers or external XML-over-HTTP bulk mailers.

Whenever a controller decides it's time to reach out and touch a user through email, it should drop a message on a JMS queue. This lets the request-handling thread continue with page generation immediately, while leaving the email for asynchronous processing.
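
The controller's side of that hand-off can be as small as this sketch (JMS 1.1 point-to-point; the queue and field names are mine):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MapMessage;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class EmailRequestPublisher {
        private final ConnectionFactory factory;
        private final Queue emailQueue;

        public EmailRequestPublisher(ConnectionFactory factory, Queue emailQueue) {
            this.factory = factory;
            this.emailQueue = emailQueue;
        }

        // Called from the controller: enqueue the request and return
        // immediately, so page generation never waits on an MTA.
        public void requestEmail(String recipient, String templateId) throws Exception {
            Connection connection = factory.createConnection();
            try {
                Session session =
                    connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MapMessage request = session.createMapMessage();
                request.setString("recipient", recipient);
                request.setString("template", templateId);
                session.createProducer(emailQueue).send(request);
            } finally {
                connection.close();
            }
        }
    }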

You can either go down the road of message-driven beans (MDB) or you can just set up a pool of background threads to consume messages from the queue. On receipt of a message, the subscriber just executes the same email generation and transmission as before, with one exception. If the message fails due to a system error, such as a broken socket connection, the message can just go right back onto the message queue for later retry. (You'll probably want to update the "next retry time" to avoid livelock.)
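
The MDB flavor might look roughly like this, reusing the transmission code from the naive sketch above (the queue name is illustrative, and containers differ in how they schedule redelivery):

    import javax.ejb.MessageDriven;
    import javax.jms.MapMessage;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    @MessageDriven(mappedName = "jms/EmailQueue") // illustrative queue name
    public class EmailSenderBean implements MessageListener {
        public void onMessage(Message message) {
            try {
                MapMessage request = (MapMessage) message;
                String recipient = request.getString("recipient");
                String body = renderTemplate(request.getString("template"));
                new NaiveMailer().sendConfirmation(recipient, body);
            } catch (Exception systemError) {
                // Rethrowing rolls back the delivery, so the message goes
                // back on the queue for a later retry instead of vanishing
                // into a log file.
                throw new RuntimeException(systemError);
            }
        }

        private String renderTemplate(String templateId) {
            // Placeholder: read the template and interpolate values here.
            return "...";
        }
    }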

Better Still

If you have a cluster of application servers that can all generate outbound email, why not take the next step? Move the MDBs out into their own app server, and have the message queues from all the app servers terminate there. (If you're using pub-sub instead of point-to-point, this will be pretty much transparent.) This application will resemble a message broker... for good reason. It's essentially just pulling messages in from one protocol, transforming them, then sending them out over another protocol.

The best part? You don't even have to write the message broker yourself. There are plenty of open-source and commercial alternatives.

Summary

Sending email directly from the request-handling thread performs poorly, creates unpredictable page latency for users, and risks dropping their emails right on the floor. It's better to drop a message on a queue for asynchronous transformation by a message broker: it's faster, more reliable, and there's less code for you to write.