Wide Awake Developers

When Should You Jump? JSR 308. That’s When.


One of the frequently asked questions at the No Fluff, Just Stuff expert panels boils down to, "When should I get off the Java train?" There may be good money out there for the last living COBOL programmer, but most of the Java developers we see still have a lot of years left in their careers, too many to plan on riding Java off into its sunset.

Most of the panelists talk about the long future ahead of Java the Platform, no matter what happens with Java the Language. Reasonable. I also think that a young developer’s best bet is to stick with the Boy Scout motto: Be Prepared. Keep learning new languages and new programming paradigms. Work in many different domains, styles, and architectures. That way, no matter what the future brings, the prepared developer can jump from one train to the next.

After today, I think I need to revise my usual answer.

When should a Java developer jump to a new language? Right after JSR 308 becomes part of the language.

Beware: this stuff is like Cthulhu rising from the vasty deep. There’s an internal logic here, but if you’re not mentally prepared, it could strip away your sanity like a chill wind across a foggy moor. I promise that’s the last hypergolic metaphor. Besides, that was another post.

JSR 308 aims to bring Java a more precise type system, and to make the type system arbitrarily extensible. I’ll admit that I had no idea what that meant, either. Fortunately, presenter and MIT Associate Professor Michael Ernst gave us several examples to consider.

The expert group sees two problems that need to be addressed.

The first problem is a syntactic limitation with annotations today: they can only be applied to declarations, not to arbitrary uses of a type. So, for example, we can say:

@NonNull List<String> strings;

If the right annotation processor is loaded, this tells the compiler that strings will never be null. The compiler can then help us enforce that by warning on any assignment that could result in strings taking on a null value.
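
With that annotation in place, a nullness checker would reject suspect assignments at compile time. A minimal sketch (the diagnostics and the helper method mightReturnNull are illustrative, not the exact output of any particular processor):

@NonNull List<String> strings = new ArrayList<String>();
strings = null;                // rejected: null assigned to a @NonNull variable
strings = mightReturnNull();   // rejected unless mightReturnNull() is itself declared @NonNull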

Today, however, we cannot say:

@NonNull List<@NonNull String> strings;

This would mean that the variable strings will never take a null value, and that no list element it contains will be null.

Consider another example:

@NonEmpty List<@NonNull String> strings = ...;

This is a list whose elements may not be null. The list itself will not be empty. The compiler—more specifically, an annotation processor used by the compiler—will help enforce this.
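
Concretely, a processor that understands both qualifiers could reject the bad cases and bless the good ones, roughly like this (an illustrative sketch; load() is a made-up method):

@NonEmpty List<@NonNull String> strings = load();
strings.add(null);                  // rejected: the element type is @NonNull String
int len = strings.get(0).length();  // fine: the list is non-empty and its elements are never null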

They would also add the ability to annotate method receivers:

void marshal(@Readonly Object jaxbElement, @Mutable Writer writer) @Readonly { ... }

This tells the type system that jaxbElement will not be changed inside the method, that writer will be changed, and that executing marshal will not change the receiving object itself.

Presumably, to enforce that final constraint, marshal would only be permitted to call other methods that the compiler could verify as consistent with @Readonly. In other words, applying @Readonly to one method will start to percolate through into other methods it calls.
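
Here is a sketch of that percolation, using the proposed receiver-annotation syntax from the example above (the class and method names are made up):

class Account {
    void audit() @Readonly {
        log();             // allowed, because log() is itself @Readonly
        applyInterest();   // rejected: applyInterest() may mutate the receiver
    }
    void log() @Readonly { ... }
    void applyInterest() { ... }
}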

The second problem the expert group addresses is more about semantics than syntax. The compiler keeps you from making obvious errors like:

int i = "JSR 308";

But it doesn’t prevent you from calling getValue().toString() when getValue() could return null. More generally, there’s no way to tell the compiler that a variable is not null, immutable, interned, or tainted.
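
A pluggable nullness checker closes exactly that gap, and it can follow the flow of a null test. A sketch of what it would and wouldn’t accept (getValue() and the @Nullable qualifier are illustrative):

@Nullable Object getValue() { ... }

void report() {
    String s = getValue().toString();   // rejected: getValue() may return null
    Object v = getValue();
    if (v != null) {
        String ok = v.toString();       // accepted: the checker sees the null test
    }
}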

Their solution is to add a pluggable type system to the Java compiler. You would be able to annotate types (both at declaration and at usage) with arbitrary type qualifiers. These would be statically carried through compilation and made available to pluggable processors. Ernst showed us an example of a processor that can check and enforce not-null semantics. (Available for download from the link above.) In a sample source code base (of approximately 5,000 LOC) the team added 35 not-null annotations and suppressed 4 warnings to uncover 8 latent NullPointerException bugs.

Significantly, FindBugs, Jlint, and PMD all missed those errors, because none of them includes an inferencer that could trace all usages of the annotated types.

That all sounds good, right? Put the compiler to work. Let it do the tedious work tracing the extended semantics and checking them against the source code.

Why the Lovecraftian gibbering, then?

Every language has a complexity budget. Java blew through it with generics in Java 5. Now, seriously, take another look at this:

@NonEmpty List<@NonNull String> strings = new ArrayList<@NonNull String>();

Does that even look like Java? That complexity budget is just a dim smudge in the rear-view mirror at this point. We’re so busy keeping the compiler happy that we’ll completely forget what our actual project is.

All this is coming at exactly the worst possible time for Java the Language. The community is really, really excited about dynamic languages now. Instead of those contortions, we could just say:

var strings = ["one", "two"];

Now seriously, which one would you rather write? True, the dynamic version doesn’t let me enlist the compiler’s aid for enforcement. True, I do need many more unit tests with the dynamic code. Still, I’d prefer that "low ceremony" approach to the mouthful of formalism above.

So, getting back to that mainstream Java developer… it looks like there are only two choices: more dynamic or more static. More formal and strict, or more loosey-goosey and terse. JSR 308 will absolutely accelerate this polarization.

And, by the way, in case you were thinking that Java the Language might start to follow the community move toward dynamic languages, Alex Buckley, Sun’s spec lead for the Java language, gave us the answer today.

He said, "Don’t look for any ‘var’ keywords in Java."

SOA at 3.5 Million Transactions Per Hour


Matthias Schorer talked about FIDUCIA IT AG and their service-oriented architecture. This financial services provider works with 780 banks in Europe, processing 35,000,000 transactions during the banking day. That works out to a little over 3.5 million transactions per hour.

Matthias described this as a service-oriented architecture, and it is. Be warned, however, that SOA does not imply or require web services. The services here exist in the middle tier. Instead of speaking XML, they mainly use serialized Java objects. As Matthias said, "if you control both ends of the communication, using XML is just crazy!"

They do use SOAP when communicating out to other companies.

They’ve done a couple of interesting things. They favor asynchronous communication, which makes sense when you architect for latency. Where many systems push data into the async messages, FIDUCIA does not. Instead, they put the bulk data into storage (usually files, sometimes structured data) and send control messages instructing the middle tier to process the records. This way, large files can be split up and processed in parallel by a number of the processing nodes. Obviously, this works when records are highly independent of each other.
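
The shape of the pattern is easy to sketch in plain Java (using java.util.concurrent). This is not FIDUCIA’s code, just an illustration of "store the bulk data, send a small control message, let workers chew through independent record ranges in parallel"; splitIntoRanges and processRecords are made-up helpers, and a local thread pool stands in for their middle-tier work queues:

// A control message names the stored file and one worker's slice of it.
class ControlMessage {
    final String dataFile;
    final int firstRecord;
    final int lastRecord;
    ControlMessage(String dataFile, int firstRecord, int lastRecord) {
        this.dataFile = dataFile;
        this.firstRecord = firstRecord;
        this.lastRecord = lastRecord;
    }
}

ExecutorService workers = Executors.newFixedThreadPool(8);
for (final ControlMessage msg : splitIntoRanges("payments.dat", 10000)) {
    workers.submit(new Runnable() {
        public void run() {
            processRecords(msg);   // reads only records firstRecord..lastRecord
        }
    });
}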

Second, they have defined explicit rules and regulations about when to expect transactional integrity. There are enough restrictions that these are a minority of transactions. In all other cases, developers are required to design for the fact that ACID properties do not hold.

Third, they’ve built a multi-layered middle tier. Incoming requests first hit a pair of "Central Process Servers" which inspect the request. Requests are dispatched to individual "portals" based on their customer ID. Different portals will run different versions of the software, so FIDUCIA can support customers on different production versions. Instead of attempting to combine versions on a single cluster, they just partition the portals (clusters).
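
The dispatch itself is conceptually just a lookup and a hand-off, something along these lines (an illustrative sketch, not FIDUCIA’s implementation; both helpers are made up):

String portal = portalForCustomer(request.getCustomerId());   // routing table: customer range -> portal
workQueueFor(portal).put(request);                             // each portal runs its own software version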

Each portal has its own load distribution mechanism, using work queues that the worker nodes listen to.

This multilevel structure lets them scale to over 1,000 nodes while keeping each cluster small and manageable.

The net result is that they can process up to 2,500 transactions per second, with no scaling limit in sight.

Project Hydrazine


Part of Sun’s push behind JavaFX will be called "Project Hydrazine".  (Hydrazine is a toxic and volatile rocket fuel.)  This is still a bit fuzzy, and they only left the boxes-and-arrows slide up for a few seconds, but here’s what I was able to glean.

Hydrazine includes common federated services for discovery, personalization, deployment, location, and development. There’s a "cloud" component to it, which wasn’t entirely clear from their presentation. Overall, the goal appears to be an easier model for creating end-user applications based on a service component architecture. All tied together and presented with JavaFX, of course.

One very interesting extension—called "Project Insight"—that Rich Green and Jonathan Schwartz both discussed is the ability to instrument your applications to monitor end-user activity.

(This immediately reminded me of Valve’s instrumentation of Half-Life 2, episode 2. The game itself reports back to Valve on player stats: time to complete levels, map locations where they died, play time and duration, and so on. Valve has previously talked about using these stats to improve their level design by finding out where players get frustrated, or quit, and redesigning those levels.)

I can see this being used well: making apps more usable, proactively analyzing what features users appreciate or don’t understand, and targeting development effort at improving the overall experience.

Of course, it can also be used to target advertising and monitor impressions and clicks. Rich promoted this as the way to monetize apps built using Project Hydrazine. I can see the value in it, but I’m also ambivalent about creating even more channels for advertising.

In any event, users will be justifiably anxious about their TV watching them back. It’s just a little too Max Headroom for a lot of people. Sun says that the data will only appear in the aggregate. This leads me to believe that the apps will report to a scalable, cloud-based aggregation service from which developers can get the aggregated data. Presumably, this will be run by Sun.

Unlike Apple’s iron-fisted control over iPhone application delivery, Sun says they will not be exercising editorial control. According to Schwartz, Hydrazine will all be free: free in price, freely available, and free in philosophy.

JavaOne: After the Revolution


What happens to the revolutionaries, once they’ve won?

It’s been about ten years since I last made the pilgrimage to JavaOne, back when Java was still being called an "emerging technology".

Many things have changed since then. Java is now so mainstream that the early adopters are getting itchy feet and looking hard for the next big thing. (The current favorite is some flavor of dynamic language running on the JVM: Groovy, Scala, JRuby, Jython, etc.) Java, the language, has found a home inside large enterprises and their attendant consultancies and commoditized outsourcers.

We just heard Sun say that Java SE is on 91% of all PCs and laptops, 85% of mobile phones, and 100% of all Blu-ray players. It’s safe to say that the revolution is over. We won.

A couple of things haven’t changed about JavaOne in the last ten years.

The crowds in Moscone are still completely absurd. There aren’t lines so much as tides; people ebb and flow like a non-Newtonian fluid.

Sun still keeps a tight rein on the Message. (This control is one of the major tensions between Sun and the broader Java community.) This year, Sun’s focus is clearly on JavaFX. The leading keynote talked repeatedly about "all the screens of your life" and said that the JavaFX runtime will be the access layer to reach your content from any device, anywhere. We also heard about JavaFX’s animation, 3D, audio, and video capabilities.

GlassFish got a brief mention. Version 3 is supposed to have a new kernel that slims down to 98KB in its minimal deployment. Add-on modules provide HTTP service, SIP service, and so on. Rich Green said that GlassFish will scale up to the data center and down to set-top boxes.

Perhaps it’s just my perspective, since I’m mostly a server-side developer, but I had the oddest sense of déjà vu. Instead of Rich Green in 2008, I could have sworn I was listening to Scott McNealy in 1998. Same message: Java from the handset to the data center. Set-top boxes. Headspace for audio. (Anyone else remember Thomas Dolby at the keynote? This year we got Neil Young.)

So, here we are, at the 13th JavaOne, and Sun is still trying to get developers to see Java as more than a server-side platform. 

Well, the more things change, the more they stay the same, I suppose.

Who Ordered That?


Yesterday, I let myself get optimistic about what Jonathan Schwartz coyly hinted about over the weekend.

The actual announcement came today.  OpenSolaris will be available on EC2. Honestly, I’m not sure how relevant that is. Are people actually demanding Solaris before they’ll support EC2?

There is a message here for Microsoft, though. The only sensible license cost for a cloud-based platform is $0.00 per instance. 

Addendum

I said that OpenSolaris would be available on EC2. Looks like I should have used the present tense, instead.

$ ec2-describe-images -a | grep -i solaris
IMAGE	ami-8946a3e0	opensolaris.thoughtworks.com/opensolaris-mingle-2_0_8540-64.manifest.xml	089603041495	available	public		x86_64	machine	aki-ab3cd9c2	ari-2838dd41

Yep, ThoughtWorks already has an OpenSolaris image configured as a Mingle server.

(I’ve said it before, but there’s just no need to pay money for development infrastructure any more.  Conversely, there’s no excuse for any development team to run without version control, automated builds, and continuous integration.)

Sun to Emerge From Behind in the Clouds?


Nobody can miss the dramatic proliferation of cloud computing platforms and initiatives over the last couple of years. All through the last year, Sun has remained oddly silent on the whole thing. There is a clear, natural synergy between Linux, commodity x86 hardware, and cloud computing. Sun is conspicuously absent from all of those markets.  Sun clearly needs to regain relevance in this space.

On the one hand, Project Caroline now has its own website. Anybody can create an account that allows forum reading, but don’t count on getting your hands on hardware unless you’ve got an idea that Sun approves of.

Apart from that, Om Malik reports that we may see a joint announcement Monday morning from Sun and Amazon.

I suspect that the announcement will look something like this:

  • Based on AWS for accounts, billing, storage, and infrastructure
  • Java-based application deployment into a Sun grid container
  • AWS to handle load balancing, networking, etc.

In other words: it will look a lot like Project Caroline and the Google App Engine, running Java applications using Sun containers on top of AWS.

Agile IT! Experience


On June 26-28, 2008, I’ll be speaking at the inaugural Agile IT! Experience symposium in Reston, VA. Agile ITX is about consistently delivering better software. It’s for development teams and management, working and learning together.

It’s a production of the No Fluff, Just Stuff symposium series.  Like all NFJS events, attendance is capped, so be sure to register early.

From the announcement email:

The central theme of the Agile ITX conference (www.agileitx.com) is to help your development team/management consistently deliver better software. We’ll focus on the entire software development life cycle, from requirements management to test automation to software process. You’ll learn how to Develop in Iterations, Collaborate with Customers, and Respond to Change. Software is a difficult field with high rates of failure. Our world-class speakers will help you implement best practices, deal with persistent problems, and recognize opportunities to improve your existing practices.

Dates: June 26-28, 2008

Location: Sheraton Reston

Attendance: Developers/ Technical Management

Sessions at Agile ITX will cover topics such as:

  • Continuous Integration (CI)
  • Test Driven Development (TDD)
  • Testing Strategies, Team Building
  • Agile Architecture
  • Dependency Management
  • Code Metrics & Analysis
  • Acceleration & Automation
  • Code Quality

Agile ITX speakers are successful leaders, authors, mentors, and trainers who have helped thousands of developers create better software. You will have the opportunity to hear and interact with:

Jared Richardson - co-author of Ship It!
Michael Nygard - author of Release It!
Johanna Rothman - author of Manage It!
Esther Derby - co-author of Behind Closed Doors: Secrets of Great Management
Venkat Subramaniam - co-author of Practices of an Agile Developer
David Hussman - Agility Instructor/Mentor
Andrew Glover - co-author of Continuous Integration
J.B. Rainsberger - author of JUnit Recipes
Neal Ford - Application Architect at ThoughtWorks
Kirk Knoernshild - contributor to The Agile Journal
Chris D’Agostino - CEO of Near Infinity
David Bock - Principal Consultant with CodeSherpas
Mark Johnson - Director of Consulting at CGI
Ryan Shriver - Managing Consultant with Dominion Digital
John Carnell - IT Architect at Thrivent Financial
Scott Davis - Testing Expert

Amazon Blows Away Objections


Amazon must have been burning more midnight oil than usual lately.

Within the last two weeks, they’ve announced three new features that basically eliminate any remaining objections to their AWS computing platform.

Elastic IP Addresses 

Elastic IP addresses solve a major problem on the front end.  When an EC2 instance boots up, the "cloud" assigns it a random IP address. (Technically, it assigns two: one external and one internal.  For now, I’m only talking about the external IP.) With a random IP address, you’re forced to use some kind of dynamic DNS service such as DynDNS. That lets you update your DNS entry to connect your long-lived domain name with the random IP address.

Dynamic DNS services work pretty well, but not universally well. For one thing, there is a small amount of delay. Dynamic DNS works by setting a very short time-to-live (TTL) on the DNS entries, which instructs intermediate DNS servers to cache the entry for only a few minutes. Even when that works, you still have a few minutes of downtime whenever you reassign your DNS name to a new IP address. And for some parts of the Net, dynamic DNS doesn’t work well at all, usually because an ISP ignores the TTL and caches the entries for a longer time.
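
For reference, a dynamic DNS entry is just an ordinary A record with a very short TTL, along these lines (illustrative zone-file syntax):

www.example.com.    60    IN    A    75.101.158.25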

Elastic IP addresses solve this problem. You request an elastic IP address through a Web Services call.  The easiest way is with the command-line API:

$ ec2-allocate-address
ADDRESS    75.101.158.25   

Once the address is allocated, you own it until you release it. At this point, it’s attached to your account, not to any running virtual machine. Still, this is good enough to go update your domain registrar with the new address. After you start up an instance, then you can attach the address to the machine. If the machine goes down, then the address is detached from that instance, but you still "own" it.

So, for a failover scenario, you can reassign the elastic IP address to another machine, leave your DNS settings alone, and all traffic will now come to the new machine.
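
Assuming the command-line tools follow the same pattern as ec2-allocate-address above, the failover itself is a single call (the instance ID here is made up):

$ ec2-associate-address -i i-5f4e3d2c 75.101.158.25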

Now that we’ve got elastic IPs, there’s just one piece missing from a true HA architecture: load distribution. With just one IP address attached to one instance, you’ve got a single point of failure (SPOF). Right now, there are two viable options to solve that. First, you can allocate multiple elastic IPs and use round-robin DNS for load distribution. Second, you can attach a single elastic IP address to an instance that runs a software load balancer: pound, nginx, or Apache+mod_proxy_balancer. (It wouldn’t surprise me to see Amazon announce an option for load-balancing-in-the-cloud soon.) You’d run two of these, with the elastic IP attached to one at any given time. Then, you need a third instance monitoring the other two, ready to flip the IP address over to the standby instance if the active one fails. (There are already some open-source and commercial products to make this easy, but that’s the subject for another post.)

Availability Zones 

The second big gap that Amazon closed recently deals with geography.

In the first rev of EC2, there was absolutely no way to control where your instances were running. In fact, there wasn’t any way inside the service to even tell where they were running. (You had to resort to traceroutes or geo-mapping of the IPs.) This presents a problem if you need high availability, because you really want more than one location.

Availability Zones let you specify where your EC2 instances should run. You can get a list of them through the command-line (which, let’s recall, is just a wrapper around the web services):

$ ec2-describe-availability-zones
AVAILABILITYZONE    us-east-1a    available
AVAILABILITYZONE    us-east-1b    available
AVAILABILITYZONE    us-east-1c    available

Amazon tells us that each availability zone is built independently of the others. That is, they might be in the same building or separate buildings, but they have their own network egress, power systems, cooling systems, and security. Beyond that, Amazon is pretty opaque about the availability zones. In fact, not every AWS user will see the same availability zones. They’re mapped per account, so "us-east-1a" for me might map to a different hardware environment than it does for you.

How do they come into play? Pretty simply, as it turns out. When you start an instance, you can specify which availability zone you want to run it in.
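
Starting an instance in a particular zone is then just one more option on the launch call, something like this (the AMI ID is made up):

$ ec2-run-instances ami-12345678 -z us-east-1b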

Combine these two features, and you get a bunch of interesting deployment and management options.

Persistent Storage

Storage has been one of the most perplexing issues with EC2. Simply put, anything you store to disk while your instance is running is lost when the instance restarts. Instances always go back to the bundled disk image stored on S3.

Amazon has just announced that they will be supporting persistent storage in the near future. A few lucky users get to try it out now, in its pre-beta incarnation.

With persistent storage, you can allocate space in chunks from 1 GB to 1 TB. That’s right, you can make one web service call to allocate a freaking terabyte! Like IP addresses, storage is owned by your account, not by an individual instance. Once you’ve started up an instance—say, a MySQL server—you attach the storage volume to it. To the virtual machine, the storage looks just like a device, so you can use it raw or format it with whatever filesystem you want.
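
Inside the instance, an attached volume really is just a block device, so setting it up looks like ordinary Linux admin work. Something like the following (the device name and mount point are assumptions; the allocation and attach calls aren’t public yet):

$ mkfs.ext3 /dev/sdh
$ mkdir /data && mount /dev/sdh /data

Point MySQL’s data directory at /data, and the database survives instance restarts.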

Best of all, because this is basically a virtual SAN, you can do all kinds of SAN tricks, like snapshot copies for backups to S3.

Persistent storage done this way obviates some of the other dodgy efforts that have been going on, like FUSE-over-S3 or the S3 storage engine for MySQL.

SimpleDB is still there, and it’s still much more scalable than plain old MySQL data storage, but we’ve got scores of libraries for programming with relational databases, and very few that work with key-value stores. For most companies, and for the foreseeable future, programming to a relational model will be the easiest thing to do. This announcement really lowers the barrier to entry even further.

 

With these announcements, Amazon has cemented AWS as a viable computing platform for real businesses.

Geography Imposes Itself on the Clouds


In a comment to my last post, gvwilson asks, "Are you aware that the PATRIOT Act means it’s illegal for companies based in Ontario, BC, most European jurisdictions, and many other countries to use S3 and similar services?"

This is another interesting case of the non-local networked world intersecting with real geography. Not surprisingly, it quickly becomes complex. 

I have heard some of the discussion about S3 and the interaction between the U.S. PATRIOT Act and the EU and Canadian privacy laws. I’m not a lawyer, but I’ll relate the discussion for other readers who haven’t been tracking it.

Canada and the European Union have privacy laws that lean toward their citizens, and are quite protective of them. In the U.S., where privacy laws exist at all, they are heavily biased in favor of large data-collecting corporations, such as credit rating agencies. A key provision of the privacy laws in Canada and the EU is that companies cannot transmit private data to any jurisdiction that lacks substantially similar protections. It’s kind of like the "incorporation" clause in the GPL that way.

In the U.S., particularly with respect to the USA PATRIOT Act, companies are required to turn over private customer data to a variety of government agencies. In some cases, they are required to do so even without a search warrant or court order. These are pretty much fishing expeditions: casting a broad net to see if anything turns up. Therefore, the EU and Canadian privacy laws judge that the U.S. does not have substantially similar privacy protections, and companies in those covered nations are barred from exporting, transmitting, or storing customer data in any U.S. location where it might be subject to PATRIOT Act search.

(Strictly speaking, this is not just a PATRIOT Act problem. It also relates to RICO and a wide variety of other U.S. laws, mostly aimed at tracking down drug dealers by their banking transactions.)

Enter S3. S3 is built to be a geographically replicated, distributed storage mechanism! There is no way even to figure out where the individual bits of your data are physically located. Nor is there any way to tell Amazon what legal jurisdictions your data can, or must, reside in. This is a big problem for personal customer data. It’s also a problem that Amazon is aware they must solve. For EC2, they recently introduced Availability Zones that let you define what geographic location your virtual servers will exist in. I would expect to see something similar for S3.

This would also appear to be a problem for EU and Canadian companies using Google’s AppEngine. It does not offer any way to confine data to specific geographies, either.

Does this mean it’s illegal for Canadian companies to use S3? Not in general. Web pages, software downloads, media files… these would all be allowed.  Just stay away from the personal data.

Suggestions for a 90-minute App


Some of you know my obsession with Lean, Agile, and ToC. Ideas are everywhere. An idea is nothing; execution is everything.

In that vein, one of my No Fluff, Just Stuff talks is called "The 90 Minute Startup".  In it, I build a real, live dotcom site during the session. You can’t get a much shorter time-to-market than 90 minutes, and I really like that.

In case you’re curious, I do it through the use of Amazon’s EC2 and S3 services. 

The app I’ve used for the past couple of sessions is a quick and dirty GWT app that implements a Net Promoter Score survey about the show itself. It has a little bit of AJAX-y stuff to it, since GWT makes that really, really simple. On the other hand, it’s not all that exciting as an application. It certainly doesn’t make anyone sit up and go "Wow!"

So, anyone want to offer up a suggestion for a "Wow!" app they’d like to see built and deployed in 90 minutes or less?  Since this is for a talk, it should be about the size of one user story. I doubt I’ll be taking live requests from the audience during the show, but I’m happy to take suggestions here in the comments.

(Please note: thanks to the pervasive evil of blog comment spam, I moderate all comments here. If you want to make a suggestion, but don’t want it published, just make a note of that in the comment.)