Wide Awake Developers

The Paradox of Honor

You can use a person’s honor against him only if he values honor. Only the honest man is threatened by the pointed finger. The liar is unaffected by that kind of accusation. I think it is because there is no such thing as "dishonesty". There is only honesty or its lack. Not a thing and its opposite, but a thing and its absence. One or zero, not one or minus-one. One who lacks a thing cannot be threatened with the prospect of its loss.


I Think I’d Like To

I think I’d like to do some Smalltalk (or Squeak) development sometime. Just for myself. It would be good for me – like an artist going to a retreat and setting aside all notions of practicality. I know I’ll never work in Squeak professionally. That’s why it would be like saying to yourself, "In this now, purity of expression is all that matters. Tomorrow, I will worry about making something I can sell. Tomorrow I will design so the mediocre masses that follow me cannot corrupt it. Today, I will work for the joy I find in the work."

The burdens of responsibility leave no room for such indulgence. So I turn back to Java and C#. I’ll write another Address class and deal with another session manager, and more cookies. Always with the cookies.


Bill Joy Knocks the Open Source Business Model

Bill Joy had some doubts to voice about Linux. Of course, like so many others, he immediately jumps to the wrong conclusion. "The open-source business model hasn’t worked very well," he says.

Tough nuts. Here’s the point that seems to get missed over and over again. There is no "open source business model". There never was, and I doubt there ever will be. It doesn’t exist. It’s a contradiction in terms.

Open source needs no business model.

Look, GNU existed before anyone ever talked about "open source". Linux was built before companies like RedHat and IBM (let alone Sun) took an interest. The thing that the corps and the pundits cannot seem to grasp is their own absolute irrelevance.

It’s like Bruce Sterling’s speech. Harangue. Whatever you want to call it. I see it as yet another person getting up and trying to tell the "open-source community" what they need to do. Getting on their case about not being organized enough… or something.

Or it’s like those posters on Slashdot who wish either GNOME or KDE would shut down so everyone can focus on one "standard" desktop.

Or Scott McNealy, lamenting the fact that open source Java application servers inhibit the expenditure of dollars that could be used to market J2EE against .Net.

Or the UI designers who froth at the mouth about how terrible an open source application’s user interface may be. They say moronic things like "when will coders learn that they shouldn’t design user interfaces?" (Or the more extreme form, "Programmers should never design UIs.")

Or it’s like anyone who looks at an application and says, "That’s pretty good. You know what you really need to do?"

None of these people get the true point. I’ll say it here as baldly as I can.

There is nobody in charge. Not IBM, not Linus Torvalds, not Richard Stallman. Nobody.

All you will find is an anarchic collection of self-interested individuals. Some of them work together, some work apart, some work against each other. To the extent that clusters of individuals share a vision, they collaborate to tackle bigger, cooler projects.

There is no one in control. Nobody gets to decree what open source projects live or die, or what direction they go in. These projects are an expression of free will, created by those capable of expressing themselves in that medium. Decisions happen in code, because coders make them happen.

Free will, baby. It’s my project, and I’ll do what I want with it. If I want to create the most god-awful user interface ever seen by Man, that’s my prerogative. (If I want lots of users, I probably won’t do that, but who says I have to want lots of users? It’s my choice!)

As long as one GNOME hacker wants to keep working on GNOME, it will continue to evolve. As long as one Linux kernel hacker keeps coding, Linux will continue. None of these things require corporations, IPOs, or investment dollars to continue. The only true investments in open source are time and brainpower. Money is useful in that it can be used to purchase time, the greatest gift you can give a coder. Corporations are useful in that they are effective at aggregating and channeling money. "Useful", not "required".

As long as coders have free will and the tools to express it, open source software will continue. In fact, even if you take away their tools, they’ll build new ones! To truly kill open source software, you must kill free will itself.

(And, by the way, there are those who want to do exactly that.)

Needles, Haystacks

So, this may seem a little off-topic, but it comes round in the end. Really, it does.

I’ve been aggravated with the way members of the fourth estate have been treating the supposed "information" that various TLAs had before the September 11 attacks. (That used to be my birthday, by the way. I’ve since decided to change it.) We hear that four or five good bits of information scattered across the hundreds of FBI, CIA, NSA, NRO, IRS, DEA, INS, or IMF offices "clearly indicate" that terrorists were planning to fly planes into buildings. Maybe so. Still, it doesn’t take a doctorate in complexity theory to figure out that you could probably find just as much data to support any conclusion you want. I’m willing to bet that if the same amount of collective effort were invested, we could prove that the U.S. Government has evidence that Saddam Hussein and aliens from Saturn are going to land in Red Square to re-establish the Soviet Union and launch missiles at Guam.

You see, if you already have the conclusion in hand, you can sift through mountain ranges of data to find those bits that best support your conclusion. That’s just hindsight. It’s only good for gossipy hens clucking over the backyard fence, network news anchors, and not-so-subtle innuendos by Congresscritters.

The trouble is, it doesn’t work in reverse. How many documents does just the FBI produce every day? 10,000? 50,000? How would anyone find exactly those five or six documents that really matter and ignore all of the chaff? That’s the job of analysis, and it’s damn hard. A priori, you could only put these documents together and form a conclusion through sheer dumb luck. No matter how many analysts the agencies hire, they will always be crushed by the tsunami of data.

Now, I’m not trying to make excuses for the alphabet soup gang. I think they need to reconsider some of their basic operations. I’ll leave questions about separating counter-intelligence from law enforcement to others. I want to think about harnessing randomness. You see, government agencies are, by their very nature, bureaucratic entities. Bureaucracies thrive on command-and-control structures. I think it comes from protecting their budgets. Orders flow down the hierarchy, information flows up. Somewhere, at the top, an omniscient being directs the whole shebang. A command-and-control structure hates nothing more than randomness. Randomness is noise in the system, evidence of inadequate procedures. A properly structured bureaucracy has a big, fat binder that defines who talks to whom, and when, and under what circumstances.

Such a structure is perfectly optimized to ignore things. Why? Because each level in the chain of command has to summarize, categorize, and condense information for its immediate superior. Information is lost at every exchange. Worse yet, the chance for somebody to see a pattern is minimized. The problem is this whole idea that information flows toward a converging point. Whether that point is the head of the agency, the POTUS, or an army of analysts in Foggy Bottom, they cannot assimilate everything. There isn’t even any way to build information systems to support the mass of data produced every day, let alone correlate reports over time.

So, how do Dan Rather and his cohorts find these things and put them together? Decentralization. There are hordes of pit-bull journalists just waiting for the scandal that will catapult them onto CNN. ("Eat your heart out, Wolf, I found the smoking gun first!")

Just imagine if every document produced by the Minneapolis field office of the FBI were sent to every other FBI agent and office in the country. A vast torrent of data flowing constantly around the nation. Suppose that an agent filing a report about suspicious flight school activity could correlate that with other reports about students at other flight schools. He might dig a little deeper and find some additional reports about increased training activity, or a cluster of expired visas that overlap with the students in the schools. In short, it would be a lot easier to correlate those random bits of data to make the connections. Humans are amazing at detecting patterns, but they have to see the data first!

This is what we should focus on. Not on rebuilding the $6 Billion Bureaucracy, but on finding ways to make available all of the data collected today. (Notice that I haven’t said anything that requires weakening our 4th or 5th Amendment rights. This can all be done under laws that existed before 9/11.) Well, we certainly have a model for a global, decentralized document repository that will let you search, index, and correlate all of its contents. We even have technologies that can induce membership in a set. I’d love to see what Google Sets would do with the 19 hijackers’ names, after you have it index the entire contents of the FBI, CIA, and INS databases. Who would it nominate for membership in that set?
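
To make "induce membership in a set" concrete, here is a toy sketch of the general idea: score candidate names by how often they co-occur in documents with a handful of seed names. This is not how Google Sets actually works; the algorithm, class names, and data below are all invented for illustration.

    import java.util.*;

    // Toy set expansion: rank names by co-occurrence with seed names.
    public class SetExpander {
        public static Map<String, Integer> expand(Set<String> seeds,
                                                  List<Set<String>> documents) {
            Map<String, Integer> scores = new HashMap<>();
            for (Set<String> doc : documents) {
                // Count how many seed names appear in this document.
                int seedHits = 0;
                for (String seed : seeds) {
                    if (doc.contains(seed)) seedHits++;
                }
                if (seedHits == 0) continue;
                // Every other name in the document earns credit
                // proportional to the seed presence.
                for (String name : doc) {
                    if (!seeds.contains(name)) {
                        scores.merge(name, seedHits, Integer::sum);
                    }
                }
            }
            return scores; // higher score = stronger nominee for the set
        }
    }

The highest-scoring names are the nominees. Feed it nineteen seeds and a few million documents, and you have a crude version of the question I just asked.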

Basically, the recipe is this: move away from ill-conceived ideas about creating a "global clearinghouse" for intelligence reports. Decentralize it. Follow the model of the Internet, Gnutella, and Google. Maximize the chances for field agents and analysts to be exposed to that last, vital bit of data that makes a pattern come clear. Then, when an agent perceives a pattern, make damn sure the command-and-control structure is ready to respond.

MLP

Here’s a good roundup of recent traffic regarding REST.


Here’s My Number One Frustration

Here’s my number one frustration with the state of the industry today. I am a professional. I regard my work as a craft to be studied and learned. Yet, in most domains, there is no benefit to developing a high level of skill. You end up surrounded by people who don’t understand a word you say, can’t work at that level, and don’t really give a damn. They’ll get the same rewards and go home happy at 5:00 every day. It’s like, once you achieve a base level of mediocrity, there’s no benefit for further personal development. In fact, there’s a distinct disadvantage, in that you end up pulling ridiculous hours to clean up their garbage.

Bah, there I go being bitter again. Maybe I just need to work in some other domain–one where skills count for something, and being good at your job is a benefit, not a hindrance. I’m sick of writing Address classes, anyway.


Multiplier Effects

Here’s another way to think about the ethics of software, in terms of multipliers. Think back to the last major virus scare, or when Star Wars Episode II was released. Some "analyst"–who probably found his certificate in a box of Cracker Jack–published some ridiculous estimate of damages.

BTW, I have to take a minute to disassemble this kind of analysis. Stick with me, it won’t take long.

If you take 1.5 seconds to delete the virus, it costs nothing. It’s an absolutely immeasurable impact on your day. It won’t even affect your productivity. You will probably spend more time than that discussing sports scores, going to the bathroom, chatting with a client, or any of the hundreds of other things human beings do during a day. It’s literally lost in the noise. Nevertheless, some peabrain analyst who likes big numbers will take that 1.5 seconds and multiply it by the millions of other users and their 1.5 seconds, then multiply that by the "national average salary" or some such number.

So, even though it takes you longer to blow your nose than to delete the virus email, somehow it still ends up "costing the economy" 5x10^6 USD in "lost productivity". The underlying assumptions here are so thoroughly rotten that the result cannot be anything but a joke. Sure as hell though, you’ll see this analysis dragged out every time there’s a news story–or better yet, a trial–about an email worm.
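
Here is the trick laid bare, with every input invented (but typical of the genre):

    // The "lost productivity" multiplier, with made-up inputs.
    public class LostProductivity {
        public static void main(String[] args) {
            double secondsPerUser = 1.5;        // time to delete one email
            long affectedUsers = 200_000_000L;  // hypothetical victim count
            double hourlyWage = 60.0;           // hypothetical "average salary"

            double totalHours = secondsPerUser * affectedUsers / 3600.0;
            System.out.println(totalHours * hourlyWage); // ~5,000,000 of "damage"
        }
    }

One unmeasurable moment, one imaginary user count, and one fictitious wage multiply into five million dollars. The output is exactly as real as the inputs, which is to say: not at all.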

The real moral of this story isn’t about innumeracy in the press, or spotlight seekers exploiting innumeracy. It’s about multipliers.

Suppose you have a decision to make about a particular feature. You can do it the easy way in about a week, or the hard way in about a month. (Hypothetical.) Which way should you do it? Suppose that the easy way makes the user click an extra button, whereas doing it the hard way makes the program a bit smarter and saves the user one click. Just one click. Which way should you do it?

Let’s consider an analogy. Suppose I’m putting a sign up on my building. Is it OK to mount the sign six feet up on the wall, so that pedestrians have to duck or go around it? It’s much easier for me to hang the sign if I don’t have to set up a ladder and scaffold. It’s only a minor annoyance to the pedestrians. It’s not like it would block the sidewalk or anything. All they have to do is duck. (We’ll just ignore the fact that pissing off all your potential customers is not a good business strategy.)

It’s not ethical to worsen the lives of others, even a small bit, just to make things easy for yourself. These days, successful software is measured in millions of users, of people. Always be mindful of the impact your decisions–even small ones–have on those people. Accept large burdens yourself to lighten the load on those people, even if your impact on any given individual is minuscule. The cumulative good you do that way will always outweigh the individual costs you pay.

REST and Change in APIs

In case it didn’t come through, I’m intrigued by REST, because it seems more fluid than the WS-* specifications. I can do an HTTP request in about 5 lines of socket code in any modern language, from any client device.
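
To back that claim up, here is what it looks like in Java (the host and path are placeholders; the substance really is about five lines):

    import java.io.*;
    import java.net.*;

    // Minimal HTTP GET over a raw socket. Host and path are placeholders.
    public class FiveLineGet {
        public static void main(String[] args) throws IOException {
            Socket socket = new Socket("company.com", 80);
            Writer out = new OutputStreamWriter(socket.getOutputStream());
            out.write("GET /purchaseOrders/12345 HTTP/1.0\r\n"
                    + "Host: company.com\r\n\r\n");
            out.flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            for (String line; (line = in.readLine()) != null; )
                System.out.println(line);
            socket.close();
        }
    }

No stubs, no generated bindings, no toolkit. Anything that can open a socket can be a client.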

The WS-splat crowd seem to be building YABS (yet another brittle standard). Riddle me this: what use is a service description in a standardized form if there is only one implementor of that service? WSDL only attains full value when there are standards built on top of WSDL. Just like XML, WSDL is a meta-standard. It is a standard for specifying other standards. Collected and diverse industry behemoths and leviathans make the rules for that playground.

I see two equally likely outcomes for any given service definition:

  • A defining body will standardize the interface for a particular web service. This will take far too long.
  • A dominant company in a star-like topology with its customers and suppliers (think Wal-mart) will impose an interface that its business partners must use.

Once such interfaces are defined, how easily might they be changed? I mean the WSDL (or other) definition of the service itself. Can anyone say CORBAservices? You’d better define your services right the first time, because there appears to be substantial friction opposing change.

How does REST avoid this issue? By eliminating layers. If I support a URI naming scheme like http://company.com/groupName/divisionName/departmentName/purchaseOrders/poNumber as a RESTful way to access purchase orders, and I find that we need to change it to /purchaseOrders/departmentNumber/poNumber, then both forms can co-exist. The alternative change in SOAP/WSDL-land would either modify the original endpoint (an incompatible change!) or would define a new service to support the new mode of lookup. (I suppose other hacks are available, too. Service.getPurchaseOrder2() or Service.getPurchaseOrderNew() for example.)
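
A sketch of that co-existence on the server side (the paths and the findByDepartment lookup are hypothetical; this shows the shape of the idea, not anyone’s production code):

    // Both URI forms resolve to the same lookup, so old clients keep
    // working while new clients use the shorter form. All names here
    // are invented for illustration.
    public class PurchaseOrderDispatcher {
        String lookup(String path) {
            String[] parts = path.split("/"); // parts[0] is the empty string before the first "/"
            if (path.startsWith("/purchaseOrders/")) {
                // New form: /purchaseOrders/{departmentNumber}/{poNumber}
                return findByDepartment(parts[2], parts[3]);
            }
            // Old form: /{group}/{division}/{department}/purchaseOrders/{poNumber}
            return findByDepartment(parts[3], parts[5]);
        }

        String findByDepartment(String department, String poNumber) {
            return "PO " + poNumber + " for department " + department; // stub
        }
    }

The old URIs never have to break. Contrast that with regenerating WSDL and redeploying stubs to every client.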

Of course, neither of these service architectures is implemented widely enough to really evaluate which one will be more accepting of change. I can tell you, though, that one of the huge CORBA-killers was the slow pace and resistance to change in the CORBAservices.