Wide Awake Developers

JavaOne Is a Hot Zone

| Comments

Apparently, there’s a virus attack. Not a computer virus. A real virus. Hot zone instead of a hot spot.

From my inbox this morning:

The JavaOne conference team has been notified by the San Francisco Department of Public Health about an identified outbreak of a virus in the San Francisco area. Testing is still underway to identify the specific virus in question, but they believe it to be the Norovirus, a common cause of the "stomach flu", which can cause temporary flu-like symptoms for up to 48 hours. Part of the San Francisco area impacted includes the Moscone Center, the site of the JavaOne conference which is being held this week. We are working with the appropriate San Francisco Department of Public Health and Moscone representatives to mitigate the impact this will have on the conference and steps are being taken overnight to disinfect the facility. We have not received any indication that the show should end early, so will have the full schedule of events on Friday as planned. We hope to see you then.

Please see the attached notification from the Department of Public Health.

For further information, as well as Frequently Asked Questions related to the Norovirus, please visit the San Francisco Department of Public Health website at http://sfcdcp.org/norovirus.cfm 

The CDC description includes the phrase "acute gastroenteritis." 


Grab Bag of Demos

| Comments

Sun opened the final day of JavaOne with a general session called "Extreme Innovation". This was a showcase for novel, interesting, and out-of-this-world uses of Java based technology.


VisualVM

VisualVM works with local or remote applications, using JMX over RMI to connect to remote apps. While you have to run VisualVM itself under JDK 1.6, it can connect to any version of the JVM from 1.4.2 through 1.7. Local apps are automatically detected and offered in the UI for debugging. VisualVM uses the Java Platform Debugger Architecture to show thread activities, memory usage, object counts, and call timing. It can also take snapshots of the application’s state for post-mortem or remote analysis.
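The numbers VisualVM charts come from the JDK’s standard management beans. As a rough sketch (my code, not VisualVM’s), the same data can be read in-process from the platform MBean server; VisualVM reads the identical beans, just remotely over JMX/RMI:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

// Reads the same JMX data VisualVM displays, from inside the local JVM.
public class JmxPeek {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        System.out.println("heap used (bytes): " + mem.getHeapMemoryUsage().getUsed());
        System.out.println("live threads: " + threads.getThreadCount());
        System.out.println("peak threads: " + threads.getPeakThreadCount());
    }
}
```

To let a remote VisualVM see a JVM, you start that JVM with the com.sun.management.jmxremote system properties; local JVMs are attached to automatically.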

Memory problems can be a bear to diagnose. VisualVM includes a heap analyzer that can show reference chains. From the UI, it looks like it can also detect and indicate reference loops.

One interesting feature of VisualVM is the ability to add plug-ins for application-specific behavior. Sun demonstrated a Glassfish plugin that adds custom metrics for request latency and volume, and the ability to examine each application context independently.

The application does not require any special instrumentation, so you can run VisualVM directly against a production application. According to Sun, it adds "almost no overhead" to the application being examined. I’d still be very cautious about that. VisualVM allows you to enable CPU and memory profiling in real-time, so that will certainly have an effect on the application. Not to mention, it also lets you trigger a heap dump, which is always going to be costly.

VisualVM is available for download now.

JavaScript Support in NetBeans

Sun continues to push NetBeans at every turn. In this case, it was a demo of the JavaScript plugin for NetBeans. This is really a nice plugin. It uses type inferencing to provide autocompletion and semantic warnings. For example, it would warn you if a function had inconsistent return statements. (Such as returning an object from one code path mixed with a void return from another.)

It also has a handy developer aid: it warns developers about browser compatibility.

I don’t do a whole lot of JavaScript, but I couldn’t help thinking about other dynamic languages. If the plugin can do that kind of type inferencing—without executing the code—for one dynamic language, then it should be possible to do for other dynamic languages. That could remove a lot of objections about Groovy, Scala, JRuby, etc.

Fluffy Stuff at the Edge

We got a couple of demos of Java in front of the end-user. One was a cell phone running an OpenGL scene at about 15 frames per second on an NVidia chipset. All the rendering was done in Java and displayed via OpenGL ES, with 3D positional audio. Not bad at all.

Project Darkstar got a few moments in the spotlight, too. They showed off a game called Call of the Kings, a multiplayer RTS that looked like it came from 1999. Call of the Kings uses the jMonkey Engine (built on top of JOAL, JOGL, and jInput) on the client and Project Darkstar’s game server on the backend. It’s OK, but as game engines go, I’m not sure how relevant it will be.

There was also a JavaCard demo, running Robocode players on JavaCards.  That’s not just storing the program on the card, it was actually executing on the card. Two finalists were brought up on stage (but not given microphones of their own, I noticed) for a final battle between their tanks. Yellow won, and received a PS3. Red lost, but got a PSP for making it to the finals.

Sentilla tried to get out from the "creepy" moniker by bouncing mesh-networked, location-tracking beachballs around the audience. Each one had a Sentilla "mote" in it, with a 3D accelerometer inside. Receivers at the perimeter of the hall could triangulate the beachballs’ locations by signal strength. For me, the most interesting thing here was James Gosling’s talk about powering the motes. They draw so little power that it’s possible to power them from ambient sources: vibration and heat. Interesting. Still creepy, but interesting.

The next demo was mind-blowing. The Livescribe Pulse is a Java computer built into a pen. It’s hard to describe how wild this thing is; you almost have to see it for any of this to make sense.

At one point, the presenter wrote down a list, narrating as he went. For item one, he wrote the numeral "1" and the word "pulse", describing the pen. For item two, he wrote the numeral "2" and drew a little doodle of a desktop. Item three was the numeral and a vague cloudy thing. All this time, the pen was recording his audio, and associating bits of the audio stream with the page locations. So when he tapped the numeral "1" that he had written, the pen played back his audio. Not bad.

Then he put an "application card" on the table and tapped "Spanish" on it. He wrote down the word "one"… and the pen spoke the word "uno".  He wrote "coffee please" and it said "cafe por favor". Then he had it do the same phrase in Mandarin and Arabic. Handwriting recognition, machine translation, and speech synthesis all in the pen. Wow.

Next, he selected a program from the pen’s menu. The special notebook has a menu crosshair on it, but you can draw your own crosshair and it works the same way: use the pen to tap the up-arrow on paper, and the menu changes on the display. He picked a piano program, and the pen started to give him directions on how to draw a piano. Once he was done drawing it, he could tap the "keys" on paper to play notes.

The pen captures x, y, and t information as you write, so it’s digitizing the trajectory rather than the image. This is great for data compression when you’re sharing pages across the livescribe web site. It’s probably also great for forgers, so there might be a concern there.
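The compression win is easy to see in a sketch: successive samples along a pen stroke differ only slightly, so delta-encoding the (x, y, t) trajectory yields small numbers that compress well, where a scanned image of the page would not. The sample values below are invented:

```java
import java.util.Arrays;

// Delta-encodes one coordinate of a pen stroke: store the first sample,
// then only the (small) differences between successive samples.
public class StrokeDeltas {
    static int[] deltaEncode(int[] samples) {
        int[] out = new int[samples.length];
        int prev = 0;
        for (int i = 0; i < samples.length; i++) {
            out[i] = samples[i] - prev;
            prev = samples[i];
        }
        return out;
    }

    public static void main(String[] args) {
        int[] xs = {100, 102, 105, 109, 112};  // pen x positions along one stroke
        System.out.println(Arrays.toString(deltaEncode(xs)));  // prints [100, 2, 3, 4, 3]
    }
}
```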

Industrial Strength

Emphasizing real-time Java for a bit, Sun showed off "Blue Wonder", an industrial controller built out of an x86 computer running Solaris 10 and Java RTS 2.0.  This is suitable for factory control applications and is, apparently, very exciting to factory control people.

From the DARPA Urban Challenge event, we saw "Tommy Jr.", an autonomous vehicle. It followed Paul Perrone into the room, narrating each move it was making. Fortunately, nobody tried to demonstrate its crowd control or law enforcement features. Instead, they showed off an array of high resolution sensors and actuators. It’s all controlled, under very tight real-time constraints, by a single x86 board running Solaris and Java RTS.

Into New Realms

Next, we saw a demo of JMars. This impressive application helps scientists make sense out of the 150 terabytes of data we’ve collected from various Mars probes. It combines data and imaging layers from many different probes. One example overlaid hematite concentrations on top of an infrared image layer. It also knows enough about the various satellites’ orbits to help plan imaging requests.

Ultimately, JMars was built to help target landing sites for both scientific interest and technical viability. We’ll soon see how well they did: the Phoenix lander arrives in about two weeks, targeting a site that was selected using JMars.

JMars is both free to use and open source. Dr. Phil Christensen from Arizona State University invited the Java community to explore Mars for themselves, and perhaps join the project team.

Thousands of people, physicists and otherwise, are eagerly awaiting the LHC’s activation. We got to see a little bit behind the scenes about how Java is being used within CERN.

On the one hand, some very un-sexy business process work is being done. LHC is a vast project, so it’s got people, budget, and materials to manage. Ho hum. It’s not easy to manage all those business processes, but it sure doesn’t demo well.

On the other hand, showing off the grid computing infrastructure does.

Once it’s operating, the ATLAS detectors alone will produce a gigabyte an hour of image data. All of it needs to be processed. "Processing" here means running through some amazing pattern recognition programs to analyze events, looking for anomalies. There will be far too many collisions generated every day for a physicist to look at all of them, so automated techniques have to weed out "uninteresting" collisions and call attention to ones that don’t fit the profile.

CERN estimates that 100,000 CPUs will be needed to process the data. They’ve built a coalition of facilities into a multi-tier grid. Even today, they’re running 16,000 jobs on the grid across hundreds of data centers. With that many nodes involved, they need some good management and visualization tools, and we got to see one. It’s a 3D world model with iconified data centers showing their status and capacity. Jobs fly from one to another along geodesic links. Very cool stuff.


Java is a mature technology that’s being used in many spheres other than application server programming. For me, and many other JavaOne attendees, this session really underscored the fact that none of our own projects are anywhere near as cool as these demos. I’m left with the desire to go build something cool, which was probably the point.

SOA: Time for a Rethink

| Comments

The notion of a service-oriented architecture is real, and it can deliver. The term "SOA", however, has been entirely hijacked by a band of dangerous Taj Mahal architects. They seem innocuous, in part because they’ll lull you to sleep with endless protocol diagrams. Behind the soporific technology discussion lies a grave threat to your business.

"SOA" has come to mean top-down, up-front, strong-governance, all-or-nothing process (moving at glacial speed) implemented by an ill-conceived stack of technologies. SOAP is not the problem. WSDL is not the problem. Even BPEL is not the problem. The problem begins with the entire world view.

We need to abandon the term "SOA" and invent a new one. "SOA" is chasing a false goal. The idea that services will be so strongly defined that no integration point will ever break is unachievable. Moreover, it’s optimizing for the wrong thing. Most businesses today are not safety-critical. Instead, they are highly competitive.

We need loosely-coupled services, not orchestration.

We need services that emerge from the business units they serve, not an IT governance panel.

We need services to change as rapidly as the business itself changes, not after a chartering, funding, and governance cycle.

Instead of trying to build an antiseptic, clockwork enterprise, we need to embrace the messy, chaotic, Darwinian nature of business. We should be enabling rapid experimentation, quick rollout of "barely sufficient" systems, and fast feedback. We need to enable emergence, not stifle it.

Anything that slows down that cycle of experimentation and adjustment puts your business on the same evolutionary path as the Great Auk. I never thought I’d find myself quoting Tom Peters in a tech blog, but the key really is to "Test fast, fail fast, adjust fast."

The JVM Is Great, But…

| Comments

Much of the interest in languages like Groovy, JRuby, and Scala comes from running on the JVM. That lets them leverage the tremendous R&D that’s gone into JVM performance and stability. It also opens up the universe of Java libraries and frameworks.

And yet, much of my work deals with the 80% of cost that comes after the development project is done. I deal with the rest of the software’s lifetime. The end of development is the beginning of the software’s life. Throughout that life, many of the biggest, toughest problems exist around and between the JVMs: scalability, stability, interoperability, and adaptability.

As I previously showed in this graphic, the easiest thing for a Java developer to create is a slow, unscalable, and unstable web application. Making high-performance, robust, scalable applications still requires serious expertise. This is a big problem, and I don’t see it getting better. Scala might help here in terms of stability, but I’m not yet convinced it’s suitable for the largest masses of Java developers. Normal attrition means that the largest population of developers will always be the youngest and least experienced. This is not a training problem: in the post-commoditization world, the majority of code will always be written by undertrained, least-cost coders. That means we need platforms where the easiest thing to do is also the right thing to do.

Scaling distributed systems has gotten better over the last few years. Distributed memory caching has reached the mainstream. Terracotta and Coherence are both mature products, and they both let you try them out for free. In the open source crowd, as usual, you lose some manageability and some time-to-implement, but the projects work when you use them right. All of these do the job of connecting individual JVMs to a caching layer. On the other hand, I can’t help but feel that the need for these products points to a gap in the platform itself.
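The pattern those products implement can be sketched without any vendor API. Here is a minimal cache-aside read path, with a plain ConcurrentHashMap standing in for the distributed cache layer (in Terracotta or Coherence the map would be backed by the cluster; all names here are mine):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Cache-aside sketch: check the cache first, fall back to the slow
// authoritative store on a miss, then populate the cache for next time.
public class CacheAside {
    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<String, String>();
    private int backendHits = 0;

    // Stand-in for the database or other authoritative store.
    private String loadFromBackend(String key) {
        backendHits++;
        return "value-for-" + key;
    }

    public String get(String key) {
        String value = cache.get(key);
        if (value == null) {
            value = loadFromBackend(key);
            // putIfAbsent keeps concurrent readers from clobbering each other.
            String raced = cache.putIfAbsent(key, value);
            if (raced != null) value = raced;
        }
        return value;
    }

    public static void main(String[] args) {
        CacheAside c = new CacheAside();
        c.get("42");
        c.get("42");  // second read is served from the cache
        System.out.println("backend hits: " + c.backendHits);
    }
}
```

The hard parts the products solve, of course, are the ones this sketch ignores: distribution, eviction, and coherence across JVMs.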

OSGi is finally reaching the mainstream. It’s been needed for a long time, for a couple of reasons. First, it’s still too common to see gigantic classpaths containing multiple versions of JAR files, leading to the perennial joy of finding obscure, it-works-fine-in-QA bugs. So, keeping individual projects in their own bundles, with no classpath pollution will be a big help. Versioning application bundles is also important for application management and deployment. OSGi is what we should have had since the beginning, instead of having the classpath inflicted on us.

I predict that we’ll see more production operations moving to hot deployment on OSGi containers. For enterprise services that require 100% uptime, it’s just no longer acceptable to bring down the whole cluster in order to do deployments. Even taking an entire server down to deploy a new revision may become a thing of the past. In the Erlang world, it’s common to see containers running continuously for months or years. In Programming Erlang, Joe Armstrong talks about sending an Erlang process a message to "become" a new package. It works without disrupting any current requests and it happens atomically between one service request and the next. (In fact, Joe says that one of the first things he does on a new system is deploy the container processes, at the very beginning of the project. Later, once he knows what the system is supposed to do, he deploys new packages into those containers.) Hot deployment can be safe, if the code being deployed is sufficiently decoupled from the container itself. OSGi does that.

OSGi also enables strong versioning of the bundles and their dependencies. This is an all-around good thing, since it will let developers and operations agree on exactly which versions of which components belong in production at a given time.
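Concretely, that versioning lives in each bundle’s manifest. A sketch (the bundle and package names here are invented, not from any real product):

```
Bundle-SymbolicName: com.example.orders
Bundle-Version: 1.2.0
Export-Package: com.example.orders.api;version="1.2.0"
Import-Package: com.example.billing.api;version="[1.0.0,2.0.0)"
```

The version range on Import-Package lets the container verify, at resolution time, that only compatible bundles wire together. No more silent classpath pollution.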


| Comments

SAP has been talking up their suite of SOA tools. The names all run together a bit, since they each contain some permutation of "enterprise" and "builder", but it’s a very impressive set of tools.

Everything SAP does comes off of an enterprise service repository (ESR). This includes a UDDI registry, and it supports service discovery and lookup. Development tools allow developers to search and discover services through their "ES Workspace". Interestingly, this workspace is open to partners as well as internal developers.

From the ESR, a developer can import enough of a service definition to build a composite application. Composite applications include business process definitions, new services of their own, local UI components, and remote service references.

Once a developer creates a composite application, it can be deployed to a local container or a test server. Presumably, there’s a similar tool available for administrators to deploy services, composite applications, and other enterprise components onto servers.

Through it all, the complete definition of every component goes into the ESR.

In order to make the entire service lifecycle work, SAP has defined a strong meta-model and a very strong governance process.

This is the ultimate expression of the top-down, strong-governance model for enterprise SOA.

If you’re into that sort of thing.


Type Inference Without Gagging

| Comments

I am not a language designer, nor a researcher in type systems. My concerns are purely pragmatic. I want usable languages, ones where doing the easy thing also happens to be doing the right thing.

Even today, I see a lot of code that handles exceptions poorly (whether checked or unchecked!). Even after 13 years and some trillion words of books, most Java developers barely understand when to synchronize code. (And, by the way, I now believe that there’s only one book on concurrency you actually need.)

I still recall the agony of converting a large C++ code base to const-correctness. That’s something that you can’t just do a little bit. You add one little "const" keyword and sooner or later, you end up writing some gibberish that looks like:

const int foo(const char * const * blah) const;

I’m exaggerating a little bit, but I bet somebody more current on C++ can come up with an even worse example.

That’s the path I don’t want to see Java tread.

On the other hand, Robert Fischer pointed out that type inference doesn’t have to hurt so much. His post on OCaml’s type inferencing system is a breath of fresh air.

There’s quite a bit of other interesting stuff in there, too. I particularly like this remark:

What the Rubyists call a "DSL", Ocamlists call "readable code".

I’m still working on wrapping my head around Erlang right now (it’s my "new language" for 2008), but I might just have to give OCaml preferred position for my 2009 new language.

When Should You Jump? JSR 308. That’s When.

| Comments

One of the frequently asked questions at the No Fluff, Just Stuff expert panels boils down to, "When should I get off the Java train?" There may be good money out there for the last living COBOL programmer, but most of the Java developers we see still have a lot of years left in their careers, too many to plan on riding Java off into its sunset.

Most of the panelists talk about the long future ahead of Java the Platform, no matter what happens with Java the Language. Reasonable. I also think that a young developer’s best bet is to stick with the Boy Scout motto: Be Prepared. Keep learning new languages and new programming paradigms. Work in many different domains, styles, and architectures. That way, no matter what the future brings, the prepared developer can jump from one train to the next.

After today, I think I need to revise my usual answer.

When should a Java developer jump to a new language? Right after JSR 308 becomes part of the language.

Beware: this stuff is like Cthulhu rising from the vasty deep. There’s an internal logic here, but if you’re not mentally prepared, it could strip away your sanity like a chill wind across a foggy moor. I promise that’s the last hypergolic metaphor. Besides, that was another post.

JSR 308 aims to bring Java a more precise type system, and to make the type system arbitrarily extensible. I’ll admit that I had no idea what that meant, either. Fortunately, presenter and MIT Associate Professor Michael Ernst gave us several examples to consider.

The expert group sees two problems that need to be addressed.

The first problem is a syntactic limitation with annotations today: they can only be applied to type declarations. So, for example, we can say:

@NonNull List<String> strings;

If the right annotation processor is loaded, this tells the compiler that strings will never be null. The compiler can then help us enforce that by warning on any assignment that could result in strings taking on a null value.

Today, however, we cannot say:

@NonNull List<@NonNull String> strings;

This would mean that the variable strings will never take a null value, and that no list element it contains will be null.

Consider another example:

@NonEmpty List<@NonNull String> strings = ...;

This is a list whose elements may not be null. The list itself will not be empty. The compiler—more specifically, an annotation processor used by the compiler—will help enforce this.
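Declaring such a qualifier is ordinary annotation syntax plus the new target element JSR 308 proposes. A sketch, assuming a compiler that supports the TYPE_USE target (the @NonNull below is my own declaration, not from any shipped library):

```java
import java.lang.annotation.*;
import java.lang.reflect.*;
import java.util.*;

// A home-grown @NonNull type qualifier. TYPE_USE is the proposed target
// that lets it appear on any use of a type, including type arguments
// such as List<@NonNull String>.
@Target(ElementType.TYPE_USE)
@Retention(RetentionPolicy.RUNTIME)
@interface NonNull {}

public class TypeUseDemo {
    static List<@NonNull String> strings = new ArrayList<>();

    public static void main(String[] args) throws Exception {
        // The qualifier on the type argument survives compilation, so an
        // annotation processor (or reflection) can see and enforce it.
        AnnotatedParameterizedType listType = (AnnotatedParameterizedType)
                TypeUseDemo.class.getDeclaredField("strings").getAnnotatedType();
        Annotation[] quals = listType.getAnnotatedActualTypeArguments()[0].getAnnotations();
        System.out.println("qualifier on element type: " + quals[0].annotationType().getSimpleName());
    }
}
```

A checker plugged into the compiler would consume exactly this information at compile time; the reflective read here just shows that the qualifier really is attached to the element type, not to the variable.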

They would also add the ability to annotate method receivers:

void marshal(@Readonly Object jaxbElement, @Mutable Writer writer) @Readonly { ... }

This tells the type system that jaxbElement will not be changed inside the method, that writer will be changed, and that executing marshal will not change the receiving object itself.

Presumably, to enforce that final constraint, marshal would only be permitted to call other methods that the compiler could verify as consistent with @Readonly. In other words, applying @Readonly to one method will start to percolate through into other methods it calls.

The second problem the expert group addresses is more about semantics than syntax. The compiler keeps you from making obvious errors like:

int i = "JSR 308";

But, it doesn’t prevent you from calling getValue().toString() when getValue() could return null. More generally, there’s no way to tell the compiler that a variable is not null, immutable, interned, or tainted.

Their solution is to add a pluggable type system to the Java compiler. You would be able to annotate types (both at declaration and at usage) with arbitrary type qualifiers. These would be statically carried through compilation and made available to pluggable processors. Ernst showed us an example of a processor that can check and enforce not-null semantics. (Available for download from the link above.) In a sample source code base (of approximately 5,000 LOC) the team added 35 not-null annotations and suppressed 4 warnings to uncover 8 latent NullPointerException bugs.

Significantly, FindBugs, Jlint, and PMD all missed those errors, because none of them include an inferencer that could trace all usages of the annotated types.

That all sounds good, right? Put the compiler to work. Let it do the tedious work tracing the extended semantics and checking them against the source code.

Why the Lovecraftian gibbering, then?

Every language has a complexity budget. Java blew through it with generics in Java 5. Now, seriously, take another look at this:

@NonEmpty List<@NonNull String> strings = new ArrayList<@NonNull String>();

Does that even look like Java? That complexity budget is just a dim smudge in our rear-view mirror here. We’re so busy keeping the compiler happy that we’ll completely forget what our actual project is.

All this is coming at exactly the worst possible time for Java the Language. The community is really, really excited about dynamic languages now. Instead of those contortions, we could just say:

var strings = ["one", "two"];

Now seriously, which one would you rather write? True, the dynamic version doesn’t let me enlist the compiler’s aid for enforcement. True, I do need many more unit tests with the dynamic code. Still, I’d prefer that "low ceremony" approach to the mouthful of formalism above.

So, getting back to that mainstream Java developer… it looks like there are only two choices: more dynamic or more static. More formal and strict, or more loosey-goosey and terse. JSR 308 will absolutely accelerate this polarization.

And, by the way, in case you were thinking that Java the Language might start to follow the community move toward dynamic languages, Alex Buckley, Sun’s spec lead for the Java language, gave us the answer today.

He said, "Don’t look for any ‘var’ keywords in Java."

SOA at 3.5 Million Transactions Per Hour

| Comments

Matthias Schorer talked about FIDUCIA IT AG and their service-oriented architecture. This financial services provider works with 780 banks in Europe, processing 35,000,000 transactions during the banking day. That works out to a little over 3.5 million transactions per hour.

Matthias described this as a service-oriented architecture, and it is. Be warned, however, that SOA does not imply or require web services. The services here exist in the middle tier. Instead of speaking XML, they mainly use serialized Java objects. As Matthias said, "if you control both ends of the communication, using XML is just crazy!"

They do use SOAP when communicating out to other companies.

They’ve done a couple of interesting things. They favor asynchronous communication, which makes sense when you architect for latency. Where many systems push data into the async messages, FIDUCIA does not. Instead, they put the bulk data into storage (usually files, sometimes structured data) and send control messages instructing the middle tier to process the records. This way, large files can be split up and processed in parallel by a number of the processing nodes. Obviously, this works when records are highly independent of each other.
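That control-message pattern can be sketched with nothing but an ExecutorService: the bulk data sits in a file, and each lightweight message names only the file and a record range. Everything here (class names, record format) is my invention, not FIDUCIA’s code:

```java
import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;

// Sketch of bulk processing via control messages: workers receive only a
// file reference and a record range, then pull their own slice. Records
// must be independent of each other for this to be safe.
public class BulkSplit {
    static class ControlMessage {
        final Path file; final int from, to;   // record range [from, to)
        ControlMessage(Path file, int from, int to) { this.file = file; this.from = from; this.to = to; }
    }

    public static void main(String[] args) throws Exception {
        // Write 100 records of bulk data to a file instead of a message queue.
        Path bulk = Files.createTempFile("records", ".txt");
        List<String> records = new ArrayList<>();
        for (int i = 0; i < 100; i++) records.add("record-" + i);
        Files.write(bulk, records);

        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> results = new ArrayList<>();
        for (int from = 0; from < 100; from += 25) {
            final ControlMessage msg = new ControlMessage(bulk, from, from + 25);
            results.add(pool.submit(() -> {
                // Each worker reads only its own slice of the bulk file.
                List<String> all = Files.readAllLines(msg.file);
                return all.subList(msg.from, msg.to).size();  // stand-in for real processing
            }));
        }

        int processed = 0;
        for (Future<Integer> f : results) processed += f.get();
        pool.shutdown();
        System.out.println("records processed: " + processed);
    }
}
```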

Second, they have defined explicit rules and regulations about when to expect transactional integrity. There are enough restrictions that these are a minority of transactions. In all other cases, developers are required to design for the fact that ACID properties do not hold.

Third, they’ve built a multi-layered middle tier. Incoming requests first hit a pair of "Central Process Servers" which inspect the request. Requests are dispatched to individual "portals" based on their customer ID. Different portals run different versions of the software, so FIDUCIA can support customers on different production releases. Instead of attempting to combine versions on a single cluster, they just partition the portals (clusters).

Each portal has its own load distribution mechanism, using work queues that the worker nodes listen to.

This multilevel structure lets them scale to over 1,000 nodes while keeping each cluster small and manageable.

The net result is that they can process up to 2,500 transactions per second, with no scaling limit in sight.

Project Hydrazine

| Comments

Part of Sun’s push behind JavaFX will be called "Project Hydrazine".  (Hydrazine is a toxic and volatile rocket fuel.)  This is still a bit fuzzy, and they only left the boxes-and-arrows slide up for a few seconds, but here’s what I was able to glean.

Hydrazine includes common federated services for discovery, personalization, deployment, location, and development. There’s a "cloud" component to it, which wasn’t entirely clear from their presentation. Overall, the goal appears to be an easier model for creating end-user applications based on a service component architecture. All tied together and presented with JavaFX, of course.

One very interesting extension—called "Project Insight"—that Rich Green and Jonathan Schwartz both discussed is the ability to instrument your applications to monitor end-user activity in your apps.

(This immediately reminded me of Valve’s instrumentation of Half-Life 2: Episode 2. The game itself reports back to Valve with player stats: time to complete levels, map locations where players died, play time and duration, and so on. Valve has previously talked about using these stats to improve their level design by finding out where players get frustrated, or quit, and redesigning those levels.)

I can see this being used well: making apps more usable, proactively analyzing what features users appreciate or don’t understand, and targeting development effort at improving the overall experience.

Of course, it can also be used to target advertising and monitor impressions and clicks. Rich promoted this as the way to monetize apps built using Project Hydrazine. I can see the value in it, but I’m also ambivalent about creating even more channels for advertising.

In any event, users will be justifiably anxious about their TV watching them back. It’s just a little too Max Headroom for a lot of people. Sun says that the data will only appear in the aggregate. This leads me to believe that the apps will report to a scalable, cloud-based aggregation service from which developers can get the aggregated data. Presumably, this will be run by Sun.

Unlike Apple’s iron-fisted control over iPhone application delivery, Sun says they will not be exercising editorial control. According to Schwartz, Hydrazine will all be free: free in price, freely available, and free in philosophy.

JavaOne: After the Revolution

| Comments

What happens to the revolutionaries, once they’ve won?

It’s been about ten years since I last made the pilgrimage to JavaOne, back when Java was still being called an "emerging technology".

Many things have changed since then. Java is now so mainstream that the early adopters are getting itchy feet and looking hard for the next big thing. (The current favorite is some flavor of dynamic language running on the JVM: Groovy, Scala, JRuby, Jython, etc.) Java, the language, has found a home inside large enterprises and their attendant consultancies and commoditized outsourcers.

We just heard Sun say that Java SE is on 91% of all PCs and laptops, 85% of mobile phones, and 100% of all Blu-ray players. It’s safe to say that the revolution is over. We won.

A couple of things haven’t changed about JavaOne in the last ten years.

The crowds in Moscone are still completely absurd. There aren’t lines, so much as there are tides. People ebb and flow like a non-Newtonian fluid.

Sun still keeps a tight rein on the Message. (This control is one of the major tensions between Sun and the broader Java community.) This year, Sun’s focus is clearly on JavaFX. The leading keynote talked repeatedly about "all the screens of your life" and said that the JavaFX runtime will be the access layer to reach your content from any device anywhere. We also heard about JavaFX’s animation, 3D, audio, and video capabilities.

Glassfish got a brief mention. Version 3 is supposed to have a new kernel that slims down to 98KB in its minimal deployment. Add-on modules provide HTTP service, SIP service, and so on. Rich Green said that Glassfish will scale up to the data center and down to set top boxes.

Perhaps it’s just my perspective, since I’m mostly a server-side developer, but I had the oddest sense of déjà vu. Instead of Rich Green in 2008, I felt like I was listening to Scott McNealy in 1998. Same message: Java from the handset to the data center. Set top boxes. Headspace for audio. (Anyone else remember Thomas Dolby at the keynote? This year we got Neil Young.)

So, here we are, at the 13th JavaOne, and Sun is still trying to get developers to see Java as more than a server-side platform. 

Well, the more things change, the more they stay the same, I suppose.