Wide Awake Developers

The Pragmatic Architect on Security


Catching up on some reading, I finally got a chance to read Ted Neward’s article "Pragmatic Architecture: Security".  It’s very good.  (Actually, the whole series is pretty good, and I recommend them all.  At least as of February 2008… I make no assertions about future quality!)

Ted nails it.  I agree with all of the principles he identifies, and I particularly like his advice to "fail securely". 

I would add one more, though: Be visible.

After any breach, the three worst questions are always:

  1. How long has this been happening?
  2. How much have we lost?
  3. Why didn’t we know about it sooner?

The answers are always, respectively, "Far too long", "We have no idea", and "We didn’t expect that exploit". To which the only possible response is, "Well, duh, if you’d expected it, you would have closed the vulnerability."

Successful exploits are always successful because they stay hidden. Are you sure that nobody’s in your systems right now, leeching data, stealing credit card numbers, or stealing products? Of course not. For a vivid case in point, google "Kerviel Societe Generale".

While you cannot prove a negative, you can improve your odds of detecting nefarious activity by making sure that everything interesting is logged. (And by "interesting", I mean "potentially valuable".) 

There are some pretty spiffy event correlation tools out there these days. They can monitor logs across hundreds of servers and network devices, extracting patterns of anomalous behavior. But, they only work if your application exposes data that could indicate a breach.

For example, you might not be able to log every login attempt, but you probably should log every admin login attempt.

Or, you might consider logging every price change. (I shudder to think about collusion between a merchant with pricing control and an outside buyer.  Imagine a quick sale on laptops: 90% off, for 10 minutes only.)

If your internal web service listens on a port, then it should only accept connections from known sources. Whether you enforce that through IPTables, a hardware firewall, or inside the application itself, make sure you’re logging refused connections.
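To make that concrete, here’s a minimal sketch in Java of enforcing the allowlist inside the application itself. The class name, logger name, and configuration source are my own invention, not from any particular product:

import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Set;
import java.util.logging.Logger;

public class AllowlistedListener {
    private static final Logger SECURITY = Logger.getLogger("security.connections");

    private final ServerSocket serverSocket;
    private final Set<InetAddress> allowedSources; // known callers, loaded from configuration

    public AllowlistedListener(ServerSocket serverSocket, Set<InetAddress> allowedSources) {
        this.serverSocket = serverSocket;
        this.allowedSources = allowedSources;
    }

    public void acceptLoop() throws IOException {
        while (true) {
            Socket socket = serverSocket.accept();
            if (!allowedSources.contains(socket.getInetAddress())) {
                // The refusal is exactly the kind of "interesting" event worth logging:
                // a correlation tool can spot a pattern of probes across many servers.
                SECURITY.warning("Refused connection from " + socket.getInetAddress());
                socket.close();
                continue;
            }
            handleRequest(socket);
        }
    }

    private void handleRequest(Socket socket) {
        // hand off to a worker thread
    }
}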

Then, of course, once you’re logging the data, make sure someone’s monitoring it and keeping pattern and signature definitions up to date!

Two Books That Belong in Your Library


I seldom plug books—other than my own, that is. I’ve just read two important books, however, that really deserve your attention.

Concurrency, Everybody’s Doing It

The first is Java Concurrency in Practice by Brian Goetz, Tim Peierls, Joshua Bloch, Joseph Bowbeer, David Holmes, and Doug Lea. I’ve been doing Java development for close to thirteen years now, and I learned an enormous amount from this fantastic book. For example, I knew what the textbook definition of a volatile variable was, but I never knew why I would actually want to use one. Now I know when to use them and when they won’t solve the problem.
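The textbook case where volatile alone suffices is a simple shutdown flag: one thread writes it, another reads it, and there’s no compound check-then-act. A minimal sketch of my own (not an excerpt from the book):

public class PollingWorker implements Runnable {
    // volatile guarantees visibility: the write in requestShutdown() will be
    // seen promptly by the thread looping in run(), without any locking.
    private volatile boolean shutdownRequested = false;

    public void requestShutdown() {
        shutdownRequested = true;
    }

    public void run() {
        while (!shutdownRequested) {
            doUnitOfWork();
        }
    }

    private void doUnitOfWork() {
        // real work goes here
    }
}

Note that volatile would not be enough for something like counter++, which is a read-modify-write needing atomicity, not just visibility.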

Of course, JCP talks about the Java 5 concurrency library at great length. But this is no paraphrasing of the javadoc. (It was Doug Lea’s original concurrency utility library that eventually got incorporated into Java, and we’re all better off for it.) The authors start with illustrations of real issues in concurrent programming. Before they introduce the concurrency utilities, they explain a problem and illustrate potential solutions. (Usually involving at least one naive "solution" that has serious flaws.) Once they show us some avenues to explore, they introduce some neatly-packaged, well-tested utility class that either solves the problem or makes a solution possible. This removes the utility classes from the realm of "inscrutable magic" and presents them as "something difficult that you don’t have to write."

The best part about JCP, though, is the combination of thoroughness and clarity with which it presents a very difficult subject. For example, I always understood about the need to avoid concurrent modification of mutable state. But, thanks to this book, I also see why you have to synchronize getters, not just setters. (Even though assignment to an integer is guaranteed to happen atomically, that isn’t enough to guarantee that the change is visible to other threads. The only way to guarantee ordering is by crossing a synchronization barrier on the same lock.)
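A tiny sketch (mine, not the book’s) shows the shape of that rule:

public class SynchronizedCounter {
    private int value; // guarded by the object's intrinsic lock

    public synchronized void increment() {
        value++;
    }

    // The getter must synchronize too. The write in increment() is only
    // guaranteed to be visible to a reader that crosses the same lock.
    // Drop "synchronized" here and a reader may see a stale value indefinitely.
    public synchronized int get() {
        return value;
    }
}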

Blocked Threads are one of my stability antipatterns. I’ve seen hundreds of web site crashes. Every single one of them eventually boils down to blocked threads somewhere. Java Concurrency in Practice has the theory, practice, and tools that you can apply to avoid deadlocks, livelocks, corrupted state, and a host of other problems that lurk in the most innocuous-looking code.

Capacity Planning is Science, Not Art

The second book that I want to recommend today is Capacity Planning for Web Services. I’ve had this book for a while. When I first started reading it, I put it down right away thinking, "This is way too basic to solve any real problems." That was a big error.

Capacity Planning may get off to a slow start, but that’s only because the authors are both thorough and deliberate. Later in the book, that deliberate pace is very helpful, because it lets us follow the math.

This is the only book on capacity planning I’ve seen that actually deals with transmission time for HTTP requests and responses. In fact, some of the examples even compute the number of packets that a request or reply will need.

I have objected to some capacity planning books because they assume that every process can be represented by an average. Not this one. In the section on standalone web servers, for example, the authors break files into several classes, then use a weighted distribution of file sizes to compute the expected response time and bandwidth requirements. This is a very real-world approach, since web requests tend toward a bimodal distribution: small HTML, Javascript, and CSS intermixed with large media files and images. (In fact, I plan on using the models in this book to quantify the effect of segregating media files from dynamic pages.)
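As a toy version of that weighted model (the class probabilities, sizes, and bandwidth below are invented for illustration; the book derives its numbers from real logs), the expected transmission time is just a probability-weighted sum over the file classes:

public class ResponseTimeModel {
    // Three hypothetical file classes: HTML/CSS/JS, images, large media.
    private static final double[] PROBABILITY  = { 0.70, 0.25, 0.05 };
    private static final double[] MEAN_SIZE_KB = { 15.0, 120.0, 2048.0 };

    public static void main(String[] args) {
        double bandwidthKBps = 1250.0; // ~10 Mbps effective throughput, assumed
        double expectedSeconds = 0.0;
        for (int i = 0; i < PROBABILITY.length; i++) {
            expectedSeconds += PROBABILITY[i] * (MEAN_SIZE_KB[i] / bandwidthKBps);
        }
        System.out.printf("Expected transmission time per request: %.3f seconds%n",
                expectedSeconds);
    }
}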

This is also the only book I’ve seen that recognizes that capacity limits can propagate both downward and upward through tiers. There’s a great example of how doubling the CPU performance in an app tier ends up increasing the demand on the database server, which almost totally nullifies the effect of the CPU upgrade. It also recognizes that all requests are not created equal, and recommends clustering request types by their CPU and I/O demands, instead of averaging them all together.

Nearly every result or abstract law has an example, written in concrete terms, which helps bridge theory and practice.

Both of these books deal with material that easily leads off into clouds of theory and abstraction. (JCP actually quips, "What’s a memory model, and why would I want one?") These excellent works avoid the Ivory Tower trap and present highly pragmatic, immediately useful wisdom.

Well Begun Is Half Done


How long is your checklist for setting up a new development environment? It might seem like a trivial thing, but setup costs are part of the overall friction in your project. I’ve seen three page checklists that required multiple downloads, logging in as several users (root and non-root), and hand-typing SQL strings to set up the local database server.

I think the paragon of environment setup is the ubiquitous GNU autoconf system. Anyone familiar with Linux, BSD, or other flavors of UNIX will surely recognize this three-line incantation:

./configure
make
make install

The beauty of autoconf is that it adapts to you. In the open-source world, you can’t stipulate one particular set of packages or versions, at least, not if you actually want people to use your software and contribute to your project. In the corporate world, though, it’s pretty common to see a project that requires a specific point-point rev of some Jakarta Commons library without ever documenting that version.

Then there are different places to put things: inside the project, in source control, or in the system. I recently went back to a project’s code base after being away for more than two years. I thought we had done a good job of addressing the environment setup. We included all the deliverable jars in the codebase, so they were all version controlled. But, we decided to keep the development-only jars (like EasyMock, DBUnit, and JUnit) outside the code base. We did use Eclipse variables to abstract out the exact filesystem location, but when I returned to that code base, finding and restoring exactly the right versions of those build-time jars wasn’t easy. In retrospect, we should have put the build-time jars under version control and kept them inside the code base.

Yes, I know that version control systems aren’t good at versioning binaries like jar files. Who cares? We don’t rev the jar files so often that the lack of deltas matters. Putting a new binary in source control when you upgrade from Spring 2.5 to Spring 2.5.1 really won’t kill your repository. The cost of the extra disk space is nothing compared to the benefit of keeping your code base self-contained.

Maven users will be familiar with another approach. On a Maven project, you express external dependencies in a project model file. On the first build, Maven will download those dependencies from their "official" archives, then cache them locally. After that, Maven will just use the locally cached jar file, at least until you move your declared dependency to a newer revision. I have nothing against Maven. I know some people who swear by it, and others who swear at it. Personally, I just never got into it.

Then there are JRE extensions. This project uses JAI, which wants to be installed inside the JRE itself. We went along with that, but I was stumped for a while today when I saw hundreds of compile errors even though my Eclipse project’s build path didn’t show any unresolved dependencies. Of course, when you install JAI inside the JRE, it just becomes part of the Java runtime. That makes it an implicit dependency. I eventually remembered that trick, but it took a while. In retrospect, I wish we had tried harder to bring JAI’s jars and native libraries into the code base as an explicit dependency.

Does developer environment setup time matter? I believe it does. It might be tempting to say, "That’s a one-time cost, there’s no point in optimizing it." It’s not really a one-time cost, though. It’s one time per developer, every time that developer has to reinstall. My rough observation says that, between migrating to a new workstation, Windows reinstalls, corporate re-imaging, and developer churn, you should expect three to five developer setups per year on an internal project.

For an open-source project, the sky is the limit. Keep in mind that you’ll lose potential contributors at every barrier they encounter. Environment setup is the first one.

So, what’s my checklist for a good environment setup checklist?

  • Keep the project self-contained. Bring all dependencies into the code base. Same goes for RPMs or third-party installers.
  • Make sure all JAR files have version numbers in their file names. If the upstream project doesn’t build its JAR files with version numbers, go ahead and rename the jars.
  • Make bootstrap scripts for database actions such as user creation or schema builds.
  • If you absolutely must embed a dependency on something that lives outside the code base, make your build script detect its location. Don’t rely on specific path names.
  • Don’t assume your code base is in any particular filesystem on the build machine.

I’d love to see your own rules for easy development setup.

“Release It” Is a Jolt Award Finalist


The Jolt Awards have been described as "the Oscar’s of our industry". (Really. It’s on the front page of the site.)  The list of past book winners reads like an essential library for the software practitioner. Even the finalists and runners-up are essential reading.

Release It has now joined the company of finalists. The competition is very tough… I’ve read "Beautiful Code" and "Manage It!", and both are excellent. I’ll be on pins and needles until the awards ceremony on March 5th.  Honestly, though, I’m just thrilled to be in such good company.

Should Email Errors Keep Customers From Buying?


Somewhere inside every commerce site, there’s a bit of code sending emails out to customers.  Email campaigning might have been in the requirements, in which case that email code stands tall at the brightly-lit service counter.  On the other hand, it might have been added as an afterthought, languishing in some dark corner with the "lost and found" department.  Either way, there’s a good chance it’s putting your site at risk.

The simplest way to code an email sending routine looks something like this:

  1. Get a javax.mail.Session instance
  2. Get a javax.mail.Transport instance from the Session
  3. Construct a javax.mail.internet.MimeMessage instance
  4. Set some fields on the message: from, subject, body.  (Setting the body may involve reading a template from a file and interpolating values.)
  5. Set the recipients’ Addresses on the message
  6. Ask the Transport to send the message
  7. Close the Transport
  8. Discard the Session

This goes into a servlet, a controller, or a stateless session bean, depending on which MVC framework or JEE architecture blueprint you’re using.
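In code, the naive version looks roughly like this. (A sketch only: the host and addresses are placeholders, and real code would externalize them.)

import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class OrderConfirmationSender {
    public void sendConfirmation(String recipient, String body) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com"); // assumed internal MTA
        Session session = Session.getInstance(props);

        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress("orders@example.com"));
        message.setSubject("Your order confirmation");
        message.setText(body); // in real life: a template plus interpolated values
        message.setRecipient(Message.RecipientType.TO, new InternetAddress(recipient));

        Transport transport = session.getTransport("smtp");
        try {
            transport.connect();
            // Blocks the request-handling thread until the MTA accepts the message.
            transport.sendMessage(message, message.getAllRecipients());
        } finally {
            transport.close();
        }
    }
}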

There are two big problems here. (Actually, there are three, but I’m not going to deal with the "one connection per message" issue.)

Request-Handling Threads at Risk

As written, all the work of sending the email happens on the request-handling thread that’s also responsible for generating the response page. Even on a sunny day, that means you’re spending some precious request-response cycles on work that doesn’t help build the page.

You should always look at a call out to an external server with suspicion. Many of them can execute asynchronously to page generation. Anything that you can offload to a background thread, you should offload so the request-handler can get back in the pool sooner. The user’s experience will be better, and your site’s capacity will be better, if you do.

Also, keep in mind that SMTP servers aren’t always 100% reliable. Neither are the DNS servers that point you to them. That goes double if you’re connecting to some external service. (And please, please don’t even tell me you’re looking up the recipient’s MX record and contacting the receiving MTA directly!)

If the MTA is slow to accept your connection, or to process the email, then the request-handling thread could be blocked for a long time: seconds or even minutes. Will the user wait around for the response? Not likely. He’ll probably just hit "reload" and double-post the form that triggered the email in the first place.

Poor Error Recovery

The second problem is the complete lack of error recovery.  Yes, you can log an exception when your connection to the MTA fails. But that only lets the administrator know that some amount of mail failed. It doesn’t say what the mail was! There’s no way to contact the users who didn’t get their messages. Depending on what the messages said, that could be a very big deal.

At a minimum, you’d like to be able to detect and recover from interruptions at the MTA—scheduled maintenance, Windows patching, unscheduled index rebuilds, and the like. Even if "recovery" means someone takes the users’ info from the log file and types in a new message on their desktops, that’s better than nothing.

A Better Way

The good news is that there’s a handy way to address both of these problems at once. Better still, it works whether you’re dealing with internal SMTP-based servers or external XML-over-HTTP bulk mailers.

Whenever a controller decides it’s time to reach out and touch a user through email, it should drop a message on a JMS queue. This lets the request-handling thread continue with page generation immediately, while leaving the email for asynchronous processing.

You can either go down the road of message-driven beans (MDB) or you can just set up a pool of background threads to consume messages from the queue. On receipt of a message, the subscriber just executes the same email generation and transmission as before, with one exception. If the message fails due to a system error, such as a broken socket connection, the message can just go right back onto the message queue for later retry. (You’ll probably want to update the "next retry time" to avoid livelock.)
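The producer side can be as small as this sketch. (The queue name and message fields are my own invention; any JMS provider will do.)

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MapMessage;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class EmailQueuer {
    private final ConnectionFactory factory; // typically looked up via JNDI
    private final Queue emailQueue;          // e.g. "queue/outboundEmail", assumed

    public EmailQueuer(ConnectionFactory factory, Queue emailQueue) {
        this.factory = factory;
        this.emailQueue = emailQueue;
    }

    public void enqueue(String recipient, String templateName) throws Exception {
        Connection conn = factory.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(emailQueue);
            MapMessage msg = session.createMapMessage();
            msg.setString("recipient", recipient);
            msg.setString("template", templateName);
            producer.send(msg); // returns immediately; page generation continues
        } finally {
            conn.close();
        }
    }
}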

Better Still

If you have a cluster of application servers that can all generate outbound email, why not take the next step? Move the MDBs out into their own app server and have the message queues from all the app servers terminate there. (If you’re using pub-sub instead of point-to-point, this will be pretty much transparent.) This application will resemble a message broker… for good reason. It’s essentially just pulling messages in from one protocol, transforming them, then sending them out over another protocol.

The best part? You don’t even have to write the message broker yourself. There are plenty of open-source and commercial alternatives.

Summary

Sending email directly from the request-handling thread performs poorly, creates unpredictable page latency for users, and risks dropping their emails right on the floor. It’s better to drop a message in a queue for asynchronous transformation by a message broker: it’s faster, more reliable, and there’s less code for you to write.

Two Sites, One Antipattern


This week, I had Groundhog Day in December.  I was visiting two different clients, but they each told the same tale of woe.

At my first stop, the director of IT told me about a problem they had recently found and eliminated.

They’re a retailer. Like many retailers, they try to increase sales through "upselling" and "cross-selling". So, when you go to check out, they show you some other products that you might want to buy.  It’s good to show customers relevant products that are also advantageous to sell. For example, if a customer buys a big HDTV, offer them cables (80% margin) instead of DVDs (3% margin).

All but one of the slots on that page are filled through deliberate merchandising. People decide what to display there, the same way they decide what to put in the endcaps or next to the register in a physical store. The final slot, though, gets populated automatically according to the products in the customer’s cart. Based on the original requirements for the site, the code to populate that slot looked for products in the catalog with similar attributes, then sorted through them to find the "best" product.  (Based on some balance of closely-matched attributes and high margin, I suspect.)

The problem was that there were too many products that would match.  The attributes clustered too much for the algorithm, so the code for this slot would pull back thousands of products from the catalog.  It would turn each row in the result set into an object, then weed through them in memory.

Without that slot, the page would render in under a second.  With it, two minutes, or worse.

It had been present for more than two years. You might ask, "How could that go unnoticed for two years?" Well, it didn’t, of course. But, because it had always been that way, most everyone was just used to it. When the wait times would get too bad, this one guy would just restart app servers until it got better.

Removing that slot from the page not only improved their stability, it vastly increased their capacity. Imagine how much more they could have added to the bottom line if they hadn’t overspent for the last two years to compensate. 

At my second stop, the site suffered from serious stability problems. At any given time, it was even odds that at least one app server would be vapor locked. Three to five times a day, that would ripple through and take down all the app servers. One key symptom was a sudden spike in database connections.

Some nice work by the DBAs revealed a query from the app servers that was taking way too long. No query from a web app should ever take more than half a second, but this one would run for 90 seconds or more. Usually that means the query logic is bad.  In this case, though, the logic was OK, but the query returned 1.2 million rows. The app server would doggedly convert those rows into objects in a Vector, right up until it started thrashing the garbage collector. Eventually, it would run out of memory, but in the meantime, it held a lot of row locks.  All the other app servers would block on those row locks.  The team applied a band-aid to the query logic, and those crashes stopped.

What’s the common factor here? It’s what I call an "Unbounded Result Set".  Neither of these applications limited the amount of data they requested, even though there certainly were limits to how much they could process.  In essence, both of these applications trusted their databases.  The apps weren’t prepared for the data to be funky, weird, or oversized. They assumed too much.

You should make your apps paranoid about their data.   If your app processes one record at a time, then looping through an entire result set might be OK—as long as you’re not making a user wait while you do.  But if your app turns rows into objects, then it had better be very selective about its SELECTs.  The relationships might not be what you expect.  The data producer might have changed in a surprising way, particularly if it’s not under your control.  Purging routines might not be in place, or might have gotten broken.  Definitely don’t trust some other application or batch job to load your data in a safe way.
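Being selective can be as simple as capping the result set before you ever turn rows into objects. A sketch, with invented table and column names:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class RelatedProductsDao {
    private static final int MAX_RELATED = 20; // hard cap on rows we will process

    public List<String> findRelatedSkus(Connection conn, String category) throws Exception {
        PreparedStatement stmt = conn.prepareStatement(
            "SELECT sku FROM products WHERE category = ? ORDER BY margin DESC");
        try {
            stmt.setString(1, category);
            // Enforce the cap in the driver; better still, also push a LIMIT or
            // ROWNUM clause into the SQL where your dialect allows it.
            stmt.setMaxRows(MAX_RELATED);
            ResultSet rs = stmt.executeQuery();
            List<String> skus = new ArrayList<String>();
            while (rs.next()) {
                skus.add(rs.getString("sku"));
            }
            rs.close();
            return skus;
        } finally {
            stmt.close();
        }
    }
}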

No matter what odd condition your app stumbles across in the database, it should not be vulnerable.

Read-write Splitting With Oracle


Speaking of databases and read/write splitting, Oracle had a session at OpenWorld about it.

Building a read pool of database replicas isn’t something I usually think of doing with Oracle, mainly due to their non-zero license fees.  It changes the scaling equation.

Still, if you are on Oracle and the fees work for you, consider Active Data Guard.   Some key facts from the slides:

  • Average latency for replication was 1 second.
  • The maximum latency spike they observed was 10 seconds.
  • A node can take itself offline if it detects excessive latency.
  • You can use DBLinks to allow applications to think they’re writing to a read node.  The node will transparently pass the writes through to the master.
  • This can be done without any tricky JDBC proxies or load-balancing drivers, just the normal Oracle JDBC driver with the bugs we all know and love.
  • Active Data Guard requires Oracle 11g.

Budgetecture and Its Ugly Cousins


It’s the time of year for family gatherings, so here’s a repulsive group portrait of some nearly universal pathologies. Try not to read this while you’re eating.

Budgetecture

We’ve all been hit with budgetecture.  That’s when sound technology choices go out the window in favor of cost-cutting. The conversation goes something like this.

"Do we really need X?" asks the project sponsor. (A.k.a. the gold owner.)

For "X", you can substitute nearly anything that’s vitally necessary to make the system run: software licenses, redundant servers, offsite backups, or power supplies.  It’s always asked with a sort of paternalistic tone, as though the grown-up has caught us blowing all our pocket money on comic books and bubble gum, whilst the serious adults are trying to get on with buying more buckets to carry their profits around in.

The correct way to answer this is "Yes.  We do."  That’s almost never the response.

After all, we’re trained as engineers, and engineering is all about making trade-offs. We know good and well that you don’t really need extravagances like power supplies, so long as there’s a sufficient supply of hamster wheels and cheap interns in the data center.  So instead of simply saying, "Yes. We do," we go on with something like, "Well, you could do without a second server, provided you’re willing to accept downtime for routine maintenance and whenever a RAM chip gets hit by a cosmic ray and flips a bit, causing a crash, but if we get error-checking parity memory then we get around that, so we just have to worry about the operating system crashing, which it does about every three-point-nine days, so we’ll have to institute a regime of nightly restarts that the interns can do whenever they’re taking a break from the power-generating hamster wheels."

All of which might be completely true, but is utterly the wrong thing to say. The sponsor has surely stopped listening after the word, "Well…"

The problem is that you see your part as an engineering role, while your sponsor clearly understands he’s engaged in a negotiation. And in a negotiation, the last thing you want to do is make concessions on the first demand. In fact, the right response to the "do we really need" question is something like this:

"Without a second server, the whole system will come crashing down at least three times daily, particularly when it’s under heaviest load or when you are doing a demo for the Board of Directors. In fact, we really need four servers so we can take an HA pair down independently at any time while still maintaining 100% of our capacity, even in case one of the remaining pair crashes unexpectedly."

Of course, we both know you don’t really need the third and fourth servers. This is just a gambit to get the sponsor to change the subject to something else. You’re upping the ante and showing that you’re already running at the bare, dangerous, nearly-irresponsible minimum tolerable configuration. And besides, if you do actually get the extra servers, you can certainly use one to make your QA environment match production, and the other will make a great build box.

Schedule Quid Pro Quo

Another situation in which we harm ourselves by bringing engineering trade-offs to a negotiation comes when the schedule slips. Statistically speaking, we’re more likely to pick up the bass line from "La Bamba" from a pair of counter-rotating neutron stars than we are to complete a project on time. Sooner or later, you’ll realize that the only way to deliver your project on time and under budget is to reduce it to roughly the scope of "Hello, world!"

When that happens, being a responsible developer, you’ll tell your sponsor that the schedule needs to slip. You may not realize it, but by uttering those words, you’ve given the international sign of negotiating weakness.

Your sponsor, who has his or her own reputation—not to mention budget—tied to the delivery of this project, will reflexively respond with, "We can move the date, but if I give you that, then you have to give me these extra features."

The project is already going to be late. Adding features will surely make it later still, particularly since you’ve already established that the team isn’t moving as fast as expected. So why would someone invested in the success of the project want to further damage it by increasing the scope? It’s about as productive as soaking a grocery store bag (the paper kind) in water, then dropping a coconut into it.

I suspect that it’s sort of like dragging a piece of yarn in front of a kitten. It can’t help but pounce on it. It’s just what kittens do.

My only advice in this situation is to counter with data. Produce the burndown chart showing when you will actually be ready to release with the current scope. Then show how the fractally iterative cycle of slippage followed by scope creep produces a delivery date that will be moot, as the sun will have exploded before you reach beta.

The Fallacy of Capital

When something costs a lot, we want to use it all the time, regardless of how well suited it is or is not.

This is sort of the inverse of budgetecture.  For example, relational databases used to cost roughly the same as a battleship. So, managers got it in their heads that everything needed to be in the relational database.  Singular. As in, one.

Well, if one database server is the source of all truth, you’d better be pretty careful with it. And the best way to be careful with it is to make sure that nobody, but nobody, ever touches it. Then you collect a group of people with malleable young minds and a bent toward obsessive-compulsive abbreviation forming, and you make them the Curators of Truth.

But, because the damn thing cost so much, you need to get your money’s worth out of it. So, you mandate that every application must store its data in The Database, despite the fact that nobody knows where it is, what it looks like, or even if it really exists.  Like Schrödinger’s cat, it might already be gone; it’s just that nobody has observed it yet. Still, even that genetic algorithm with simulated annealing, running ten million Monte Carlo fitness tests, is required to keep its data in The Database.

(In the above argument, feel free to substitute IBM Mainframe, WebSphere, AquaLogic, ESB, or whatever your capital fallacy du jour may be.)

Of course, if databases didn’t cost so much, nobody would care how many of them there are. Which is why MySQL, Postgres, SQLite, and the others are really so useful. It’s not an issue to create twenty or thirty instances of a free database. There’s no need to collect them up into a grand "enterprise data architecture". In fact, exactly the opposite is true. You can finally let independent business units evolve independently. Independent services can own their own data stores, and never let other applications stick their fingers into their guts.


So there you have it, a small sample of the rogue’s gallery. These bad relations don’t get much photo op time with the CEO, but if you look, you’ll find them lurking in some cubicle just around the corner.


Releasing a Free SingleLineFormatter


A number of readers have asked me for reference implementations of the stability and capacity patterns.

I’ve begun to create some free implementations to go along with Release It. As of today, it just includes a drop-in formatter that you can use in place of the java.util.logging default (which is horrible).

This formatter keeps all the fields lined up in columns, including truncating the logger name and method name if necessary. A columnar format is much easier for the human eye to scan. We all have great pattern-matching machinery in our heads. I can’t for the life of me understand why so many vendors work so hard to defeat it. The one thing that doesn’t get stuffed into a column is a stack trace. It’s good for a stack trace to interrupt the flow of the log file… that’s something that you really want to pop out when scanning the file.

It only takes a minute to plug in the SingleLineFormatter. Your admins will thank you for it.
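Wiring any custom Formatter into java.util.logging takes only a few lines. For instance (the package name here is a stand-in; use the one from the actual download):

import java.util.logging.Handler;
import java.util.logging.Logger;

public class LoggingSetup {
    public static void configure() {
        Logger root = Logger.getLogger(""); // the root logger owns the default handlers
        for (Handler handler : root.getHandlers()) {
            // Replace the default formatter on every installed handler.
            handler.setFormatter(new com.example.logging.SingleLineFormatter());
        }
    }
}

You can accomplish the same thing declaratively by setting the handler’s formatter property in logging.properties.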

Read about the library.

Download it as .zip or .tgz.

A Dozen Levels of Done


What does "done" mean to you?  I find that my definition of "done" continues to expand. When I was still pretty green, I would say "It’s done" when I had finished coding.  (Later, a wiser and more cynical colleague taught me that "done" meant that you had not only finished the work, but made sure to tell your manager you had finished the work.)

The next meaning of "done" that I learned had to do with version control. It’s not done until it’s checked in.

Several years ago, I got test infected and my definition of "done" expanded to include unit testing.

Now that I’ve lived in operations for a few years and gotten to know and love Lean Software Development, I have a new definition of "done".

Here goes:

A feature is not "done" until all of the following can be said about it:

  1. All unit tests are green.
  2. The code is as simple as it can be.
  3. It communicates clearly.
  4. It compiles in the automated build from a clean checkout.
  5. It has passed unit, functional, integration, stress, longevity, load, and resilience testing.
  6. The customer has accepted the feature.
  7. It is included in a release that has been branched in version control.
  8. The feature’s impact on capacity is well-understood.
  9. Deployment instructions for the release are defined and do not include a "point of no return".
  10. Rollback instructions for the release are defined and tested.
  11. It has been deployed and verified.
  12. It is generating revenue.

Until all of these are true, the feature is just unfinished inventory.