Archive for May, 2008

How To Kill Productivity, Part I

Here’s a surefire, one-step way to sap your staff of two hours of productivity: 1) poorly schedule two hours’ worth of meetings!

Peopleware famously dissects productivity among thought workers and persuasively argues that environments conducive to developers getting into the “zone” and feeling the “flow” experience higher productivity than those that aren’t so hospitable. Task switching is considered harmful for those whose jobs require deep concentration, high creativity, and other pure thought stuff.

The meeting scenario in this picture may be mocked up, but it has happened to me in real life as I’m sure it happens in many organizations. It’s the quickest, simplest, easiest way to tack two extra hours onto the cost of those two hours of meetings.



Because, as DeMarco and Lister point out, it can easily take 15-30 minutes just to get into the zone!

When the first alert pops up, I’m already pulled out of the task at hand. I need to find out where the meeting room is, maybe grab a cup of coffee, and I’ll probably need to use the restroom. That’s 15 minutes gone.

In between meetings we’re catching up on email. And if it’s a slow email day, we’ll read Slashdot or CNN, because it’s just not possible to get deeply into the zone in such a short window… only to have another alert pop up in 15 minutes. That’s 30 more minutes down the tubes.

After Meeting #2, we’re again catching up on email or making lunch plans. I don’t care if it’s a slow email day AND you brown-bagged your lunch, you’re still not getting deeply into the zone for meaningful work in this 30 minute window. We’re down a full 75 minutes so far.

And after lunch? Too many other things can distract us from work: more coffee, the restroom, email, chitchat in the hallway, food coma, etc. It’s easily another 30 minutes here to even approach the zone, let alone get deeply into it before the next meeting alert pops up…

Poorly planned meetings, spaced out as illustrated, are a guaranteed productivity killer.

The New Yorker publishes an article proving the patent system is broken

The New Yorker has published an article on Nathan Myhrvold’s Intellectual Ventures, a think tank that brainstorms new ideas, patents them, and licenses the resulting “intellectual property.”

The point of incredulity, for me, came when I read this quote from Bill Gates:

They also came up with this idea to stop hurricanes. Basically, the waves in the ocean have energy, and you use that to lower the temperature differential. I’m not saying it necessarily is going to work. But it’s just an example of something where you go, Wow.

The article talks about Alexander Graham Bell and his genius, and how Myhrvold is inspired by Bell. But Bell didn’t simply think up the telephone and patent it; Bell actually invented the telephone!

Intellectual Ventures files up to 500 patents a year. There are no inventions here, mind you, just ideas. You know the patent system is broken when a company can obtain a government-granted monopoly on an idea like preventing a hurricane and sue the bejeezus out of someone who might actually figure out how to control Mother Nature.

Ideas are cheap. Ideas are easy. It’s the implementation that is hard. The research and successful development of a seemingly impossible idea is worthy of a patent, not the brainstorming for the idea itself. How would you like to solve an impossible problem only to be rewarded with a lawsuit by a troll with a submarine patent who’s put zero work into solving the hard problem? Yeah, that’s what I thought.

More proof that you can’t keep a good idea down?


In this blog article, Michael Nygard describes a talk he attended in which a technical architect presented an SOA framework at FIDUCIA IT AG, a company in the financial services industry. Nygard describes an architecture that echoes many of the features I implicitly touched on in my first blog article about my big integration project / message bus.

You may be asking yourself right now, why does he keep talking about this particular project? Briefly: it’s been a very fun project, it’s ongoing, it consumes most of my daily brain cycles, we’re still growing it (it’s a brand new infrastructure for us), and it encompasses a whole lot of ideas that I thought were good and that are now being validated by other projects I read about online.

So, what other unsung features did we build in that I’ll now sing about?

Asynchronous Messaging

You’ll notice the Spooler component in the original broad sketch of our architecture. The high-level description I gave of the Spooler touched on callbacks. Asynchronous messaging was left unsaid, but it is implied by having a mechanism for callbacks.

The description also labeled my Spooler an endpoint, which may be a web service endpoint. We use web services only because the Enterprise Service Bus (ESB) orchestrating work on our bus is .NET-based while our project is all Java. That said, we post Plain Ol’ XML (POX) over HTTP, which is deserialized quickly to a Java POJO. Our entire messaging system works on POJOs, not XML.

The outside world may use SOAP (or XML-RPC or flat files or whatever) when communicating with my company, but internally our ESB talks POX with the bus. Mediation and transformation (from SOAP to POX) is part of the functionality of an ESB. Consumers, internal to our bus, directly access queues instead of using web services.

Pure POJOs, but distributed

It’s extremely productive and useful to work with a pure POJO model, and it’s even more productive and useful when the state of those POJOs is automagically kept in sync across the cluster regardless of what node is working on it. This is where Terracotta Server shines.

We pass POJOs around through all the queues. Consumers — which can exist anywhere on the network — process the Service/Job/Message (all interchangeable terms, as far as I am concerned — they are all units of work). Our messages are stateful, meaning they enter our bus empty except for parameters in instance variables, get routed around to various and sundry consumers across the network, and get posted back (the callback) full of data to the ESB.

Why do we need distributed POJOs? Well, we found it to be highly useful. For example, we offer a REST API to abort a pending message (such as http://ourendpoint/message/abort/abcdefg-the-guid-wxyz). The easiest way we found to tell the entire bus to disregard this message was to flip the bit on the message itself. The endpoint is running under Terracotta Server, all of the queues live in TC, and our consumers are likewise plugged in. If you stick all your messages in a Map (or series of maps if you’re worried about hashing, locking, and high volumes) where the GUID is the key and the value is the message, then the endpoint or any consumer can quickly obtain the reference to the message itself and alter its state. We can also write programs that hook into TC temporarily to inspect or modify the state of the system. Persistent memory is cool like that. It exists outside the runtime duration of the ephemeral program.
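That GUID-to-message map boils down to very little code. Here’s a minimal sketch in plain Java — class and method names are illustrative, not our actual code — showing how any node holding a reference to the shared map can flip the abort bit on a message. Clustered by Terracotta, the same map instance is visible to the endpoint and every consumer JVM:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the shared message registry. Under Terracotta,
// the "messages" map would be declared a clustered root so all JVMs
// share the same instance.
public class MessageRegistry {

    public static class Message {
        // volatile so a flipped bit is visible to consumer threads
        private volatile boolean aborted = false;
        public void abort() { aborted = true; }
        public boolean isAborted() { return aborted; }
    }

    private final Map<String, Message> messages =
            new ConcurrentHashMap<String, Message>();

    public void register(String guid, Message m) {
        messages.put(guid, m);
    }

    // Called by the REST endpoint (or any consumer): look up the message
    // by GUID and flip the bit on the object itself.
    public boolean abort(String guid) {
        Message m = messages.get(guid);
        if (m == null) {
            return false; // unknown or already-removed message
        }
        m.abort();
        return true;
    }
}
```

Consumers then simply check `isAborted()` before (or during) processing and skip the work if the bit is set.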

The endpoint likewise has REST APIs for returning the state of the bus, queue sizes, current activity, and other metrics. All of this data is collected from the POJOs themselves, because the endpoint has access to the very object instances that are running all over the network. It just so happens this architecture works wonderfully inside a single JVM, too, without TC, for easier development and debugging.

Load balancing and routers

Straight from Michael Nygard’s article:

Third, they’ve built a multi-layered middle tier. Incoming requests first hit a pair of “Central Process Servers” which inspect the request. Requests are dispatched to individual “portals” based on their customer ID.

In other words, they have endpoints behind load balancers (we use Pound), and “dispatched” is another word for “routed.” We have content-based routers (a common and useful Enterprise Integration Pattern for messaging systems) that route messages/services/jobs of specific types to certain queues. Our consumers are not homogeneous. We’ve configured different applications (the integration aspects of our project) to listen on different queues. This saved us from having to port applications off the servers where they were previously deployed. These apps are several years old. Porting would have taken time and money. Allowing messages to flow to them where they already exist was a big win for us.
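A content-based router is conceptually tiny. Here’s a hedged sketch (names are illustrative, and real messages would be richer POJOs than strings): the router inspects each message and forwards it to whichever queue was configured for that message type.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Illustrative content-based router: route each message to the queue
// registered for its type. Class and method names are hypothetical,
// not our project's actual code.
public class ContentBasedRouter {

    private final Map<String, Queue<String>> routes =
            new HashMap<String, Queue<String>>();

    // Configuration step: bind a message type to a destination queue.
    public void addRoute(String messageType, Queue<String> queue) {
        routes.put(messageType, queue);
    }

    // Routing step: inspect the message (here, just its type tag) and
    // enqueue it on the matching destination.
    public void route(String messageType, String payload) {
        Queue<String> q = routes.get(messageType);
        if (q == null) {
            throw new IllegalArgumentException("no route for " + messageType);
        }
        q.add(payload);
    }
}
```

In our setup the destination queues live in Terracotta, so the legacy applications listening on them never know (or care) that a router chose them.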

More to come

I’ve got the outline for my white paper complete, where I bulleted the features above as well as those in my previous blog article. There are other features I haven’t covered yet. Overall, I think it will be an interesting paper to read.

Still, I’m a little jealous that FIDUCIA IT AG has scaled out to 1,000 nodes in their system. I can’t say how many nodes we’re up to, but I can say I’m looking forward to the massive scalability that our new architecture will give us.

You can’t keep a good idea down


Our message bus project was more than just replacing JMS with a POJO messaging system. It’s a whole piece of infrastructure designed to make it easy for different folks to do their jobs.

How did we do this and why do the next couple of paragraphs sound like I’m bragging? Because many of the features we implemented were recently announced in a new open source project (more on that later). Bear with me as I go through some of the features we implemented, knowing that I’ll tie them to the features of a recently announced (and exciting) open source middleware product.

Configuration Management and Network Deployments …

We deploy applications to our bus over the network by way of a simple little bootstrap loader. You’ll note the Java class I used in my blog article uses a URLClassLoader. My example used a file URL (“file://”), but there’s nothing stopping those URLs from beginning with “http://”.

This lets our Config Mgt. team deploy applications to a single place on the network. As nodes on the network come up, they’ll download the code they need to run.

… via Bootstrapping

While we’re on the subject of bootstrapping, there’s nothing stopping a smart developer from bootstrapping different applications into different classloaders. Again using the Java class from my blog article, you’ll notice the code finds the “main” class relative to the classloader. Who says you need just one classloader? Who says you can only run “main” methods? Stick all your classloaders in a Map and use the appropriate classloader to resolve an interface like, say, Service (as in Service Oriented Architecture) with an “execute” method. Suddenly, you can have applications that are invoked by a Service. We used this very technique to integrate legacy, stand-alone applications into an SOA.
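To make that concrete, here’s a sketch of the classloader-map idea, with hypothetical names (our actual container differs): each application is deployed into its own URLClassLoader, and the container resolves the app’s entry class against its own loader and invokes it through a `Service` interface with an `execute` method.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.HashMap;
import java.util.Map;

// Hypothetical container sketch: one classloader per application,
// all keyed in a Map. Names are illustrative, not our actual code.
public class ServiceContainer {

    // The SOA entry point every application implements.
    public interface Service {
        void execute();
    }

    private final Map<String, ClassLoader> loaders =
            new HashMap<String, ClassLoader>();

    // Deploy (or redeploy) an app. The URLs would point at the app's jars
    // on the network ("http://" URLs work, too). Replacing an existing
    // entry with a fresh classloader is exactly the hot swap described
    // below: a new version of the app, no container restart.
    public void deploy(String appName, URL[] classpath) {
        loaders.put(appName, new URLClassLoader(classpath));
    }

    // Resolve the entry class relative to the app's own classloader
    // and invoke it as a Service.
    public void invoke(String appName, String serviceClass) throws Exception {
        ClassLoader cl = loaders.get(appName);
        Class<?> clazz = Class.forName(serviceClass, true, cl);
        Service s = (Service) clazz.getDeclaredConstructor().newInstance();
        s.execute();
    }

    // Example service; in real use this class would live inside a
    // deployed application's jar, not in the container itself.
    public static class HelloService implements Service {
        public static volatile boolean ran = false;
        public void execute() { ran = true; }
    }
}
```

Because each app resolves against its own loader, two versions of the same classes can coexist side by side in one JVM.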

Take bootstrapping and isolated classloading one step further and you’ll soon realize you can load multiple versions of the same application side-by-side in the same container. One could be your release branch version, the other could be your trunk code. Same infrastructure and container. We did that, too.

Lastly, what happens if you dump a classloader containing an application and replace it with a new one containing a new version of the application? Well, you just hot swapped your app. You updated the application without restarting your container.

Focus on Developer Productivity

Developers? We’ve got them covered, too. We went for a simple, pure Java POJO model. No app servers, databases, or anything else required. A developer can run the entire message bus — the entire infrastructure — in a single JVM, which means inside your IDE. Unit tests are a snap because all Services are POJO classes. Did I mention it takes one whole second for someone to start a local instance of our message bus in a single JVM? I’m a developer first, architect second. I like to think about how my decisions affect other developers. If I make it easy, they’ll love me. If I don’t make it easy, well, … I don’t think I’ve done my job well.

Utility Computing w/o Virtualization

It’s a lot easier to get efficient use of hardware once you can load all your applications into a single container/framework. If you bake in queuing and routing (a message bus), then you can implement the Competing Consumers pattern for parallelism in processing. Also, if all message consumers are running all applications (thanks, classloading!), then your consumers can listen on all queues to process all available work. This is utility computing without a VM. Our project lets us use all available CPU cycles as long as there is work in the queues.
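The Competing Consumers pattern is easy to sketch in plain Java (illustrative names; a real bus would use clustered queues rather than an in-JVM `BlockingQueue`): several workers pull from one shared queue, so whichever consumer has free cycles simply takes the next job.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal competing-consumers sketch. In our architecture the "threads"
// would be consumer JVMs across the network sharing a Terracotta queue;
// here they are plain threads sharing a local queue.
public class CompetingConsumers {

    public static int process(Iterable<String> jobs, int workers)
            throws InterruptedException {
        final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
        for (String j : jobs) {
            queue.add(j);
        }

        final AtomicInteger processed = new AtomicInteger();
        Thread[] pool = new Thread[workers];
        for (int i = 0; i < workers; i++) {
            pool[i] = new Thread(new Runnable() {
                public void run() {
                    String job;
                    // poll() returns null once the queue is drained
                    while ((job = queue.poll()) != null) {
                        processed.incrementAndGet(); // stand-in for real work
                    }
                }
            });
            pool[i].start();
        }
        for (Thread t : pool) {
            t.join();
        }
        return processed.get();
    }
}
```

The point of the pattern is that no dispatcher assigns work: contention on the queue itself is the load balancer.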

There’s also the Master/Worker pattern to process batch jobs across a grid of consumers. Grid computing is one aspect of our project, but a minor one. I’m more interested in the gains we achieve through utility computing and the integration of several legacy applications to form our SOA.

The Open Source Version

Here are some features from the open source project, tell me if they sound familiar:

  • You can install, uninstall, start, and stop different modules of your application dynamically without restarting the container.
  • Your application can have more than one version of a particular module running at the same time.
  • OSGi provides very good infrastructure for developing service-oriented applications.

SpringSource recently announced a new “application platform” with some of the following features and benefits:

  • Real time application and server updates
  • Better resource utilization
  • Side by side resource versioning
  • Faster iterative development
  • Small server footprint
  • More manageable applications

Now, if only the SpringSource Application Platform could add queuing and routing to their project, we might consider porting to it. In the meantime, I’m happy to see other projects validating the ideas we pitched here at our company.

I’m excited, too, to announce that I’ve received the blessing of Management and PR to write a white paper about our project. It will cover all the above features as well as a slew of others, such as service orchestration and monitoring, asynchronous callbacks, a few other key Enterprise Integration Patterns, and it will explain how we used Terracotta Server to tie it all together. Stay tuned! I’m going to write blog articles to coincide with the sections of the paper.
