
Why Facebook will never catch up to Google



I don’t think Facebook will ever catch up with Google.  FB knows what everyone “likes” and can sell ads, but Google has equally good data on users and also sells ads.  FB knows what people are talking about at any given moment while Google knows what people are interested in and searching for.   Call this one even.

But for years, Google was buying dark fiber on the cheap after the dot com bust.  They’ve created shipping container data centers conveniently located at many of the internet’s peering points.  They’ve created their own servers for mammoth processing and storage capacity.  They are 10 years ahead of FB in hardware and physical infrastructure.

And now this: Google unveils pricing, rollout strategy for high-speed Internet service

Facebook is trendy and users could decide FB is no longer cool at the drop of a hat (though I admit it’s far stickier than that, considering they have years of my pictures and history in a timeline).  Google, on the other hand, is building physical infrastructure in our country, stuff that people will rely on every day of their lives.  It’s the ultimate toll road.

And that’s why Google’s market cap is 4X Facebook’s (and I didn’t even mention Android).

Red Pen and Comments in the Margin for your UI

Do you remember how your High School English teacher graded your papers?  Mine all used red pen and circled things and wrote comments in the margin.  I still do this today when I write something important.  I print a copy, grab a red pen, and turn into a ruthless editor on the lookout for just a few simple things, like using fewer words.

But red pen and comments in the margin doesn’t only apply to writing.  You can apply the same principles and concepts to a user interface.  Take a screenshot of your application, print it out, and find yourself a red Bic.  There’s something special about writing on and drawing over an image of your application because it’s something you probably never do.  Editing a screenshot in Photoshop is good, too, though sometimes not as satisfying.

Working example

Below is a before and after screenshot of a single sidebar in FoodHub Pro that shows my Red Pen and Comments in the Margin thinking.  My software is constantly evolving and improving, and there’s no need to let ego get in the way of a great UI.  It might be that this sidebar could be refined further if we find that users don’t actually need all of the widgets.

The “After” image has fewer words, fewer lines, no distracting headers, and is shorter overall with no loss of functionality.

Use a Red Pen and Comments in the Margin and let me know how your UI turns out.  I’d love to see more before/after examples.


FoodHub Pro sidebar with "red pen and comments in the margin" editing

#1:  Use fewer words

How many times have you heard someone explaining something and they said “What that means is …”  These words don’t say anything.  Just say what it means, don’t preempt.

“What that means is be thrifty with words.”

The first four words mean nothing and the last four mean everything.  Take your red pen and cross through half of that sentence.

What words can I cut from my sidebar?  “New Purchase Order” and “Choose a grower …” can be concisely written as “New PO for grower …”  That’s a 33% word discount!

#2:  Avoid clutter

Don’t make me think, don’t make me read, and especially don’t hide the few words I must read beneath words I want to ignore.

The headers in my sidebar have no purpose other than to be bigger and bolder than the words that have meaning, like the “From” and “to” labels and “Show only,” which implies filtering capability.

#3:  Remove stuff

Do we really need headers?  Headers are generally meant to delineate blocks of stuff, a job that wells are particularly well suited for.  Headers are big, bold words that aren’t important, and they’re too tall.  Wells, on the other hand, provide excellent delineation of things and imply depth on a page.  With fewer meaningful words remaining in a well, the purpose of the widget becomes obvious and you don’t need a header to explain what it is.

Are there any form fields that aren’t really needed?  Our customers don’t change business models very often (read: never), which means the type of Purchase Order created never changed.  We tucked that option away on a config screen and dropped it from our form.  It simply wasn’t needed.

The Results Have Been Measured

Without any loss of functionality and with an immeasurable increase in clarity and simplicity, we can measure results by counting what remains.

  1. 30% of words removed
  2. 100% of tall and bold headers removed
  3. 100% of bold form field headers removed
  4. 0% change in distinct widgets with wells replacing horizontal lines
  5. 0% loss of functionality
  6. Increased clarity and simplicity:  immeasurable

Red Pen and Comments in the Margin works as well for interface design as it did for my old English teacher.  It cuts through clutter and simplifies by eliminating anything that doesn’t directly and succinctly address the task at hand.  Everything in the design is subject for removal, from redundant or unnecessary words to unused elements to the use of white space.  Each item in a design should fight for its life because a good editor always has a red pen handy.

The Zombie Horde vs. A Posse of Cowboys

A recent blog entry attempts to paint Big M Methodology as a zombie-creating process and quotes Peopleware as the sole evidence for its argument.  You, the poor developer, are turned into a mindless zombie by having a defined process to follow.  You are given no license for creativity, no room for error, and you are discouraged from making mistakes.

This apparently makes you a zombie that must be told what to do and how to do it.  Or, to put it another way, this makes you a grown-up software developer that can write code for the space shuttle.

Fast Company has a fascinating article called “They Write the Right Stuff” that looks into the methodology that produces bug free software.  The software powering the space shuttle has to be bug free or people die.  Quality matters.  It was originally written in the internet stone age (1996), but it is just as relevant today as it was a decade ago.

[The shuttle group] is aggressively intolerant of ego-driven hotshots. In the shuttle group’s culture, there are no superstar programmers. The whole approach to developing software is intentionally designed not to rely on any particular person.

Mindless zombies cannot be superstars! Joel said that only superstars can hit the high notes!  How can bug-free software be written by zombies?!  Don’t CMM Level 5 certified organizations (of which there are only a handful in the world) know they need superstars to send space ships into the wild blue yonder?

The blog entry claims that refusing to let developers make mistakes is teamicide.  The author further claims that by stifling creativity, management and Big M Methodology show distrust of their developers, which dooms a project in the long run.

Again, someone forgot to tell the guys writing bug free code:

And the culture is equally intolerant of creativity, the individual coding flourishes and styles that are the signature of the all-night software world. “People ask, doesn’t this process stifle creativity? You have to do exactly what the manual says, and you’ve got someone looking over your shoulder,” says Keller. “The answer is, yes, the process does stifle creativity.”

And that is precisely the point — you can’t have people freelancing their way through software code that flies a spaceship, and then, with people’s lives depending on it, try to patch it once it’s in orbit. “Houston, we have a problem,” may make for a good movie; it’s no way to write software. “People have to channel their creativity into changing the process,” says Keller, “not changing the software.”

An interesting idea arises from Big M: You don’t fix bugs, you fix the process that allowed the bug in the first place.  The shuttle group “avoids blaming people for errors. The process assumes blame – and it’s the process that is analyzed to discover why and how an error got through.”

Capability Maturity Model

CMM certification is an interesting thing, and I find the wording particularly enlightening:  “Maturity model.”  A CMM certified process is for grown-ups, not start ups.  It’s mature and rational, not for the cowboy coders who stay up all night slinging code from the hip in a heroic effort to ship version 1.0.

Tracking bugs, prioritizing issues, performing QA, and having basic version control and configuration management are the nuts and bolts of Level 2.  Many organizations have these basic project management processes in place and would qualify for Level 2 certification.  Level 3, though, is Big M Methodology and Process.  Without a defined process (Level 3) that emits metrics (Level 4), how can an organization possibly attempt to improve development, increase quality, and reduce costs via process improvement (Level 5)?

When a process improvement demonstrably reduces the defect rate, the end user benefits with higher quality software at a reduced price.  This is absolutely required in the space shuttle, but isn’t it desired in everything else, from our operating system (no blue screens of death!) to our applications?  I don’t like kernel panics or having my computer crash from a bad driver.  I don’t like losing all my data because a bug shut down my program.  A posse of cowboys can hack out a bad version 1 of their product, but it’s the Big M zombies, led by mature management, who engineer the quality software I want to buy and the software that manages our nuclear reactors.

It’s Just a Software Problem

The B-2 bomber wouldn’t fly on its maiden flight — but it was just a software problem. The new Denver airport was months late opening and millions of dollars over budget because its baggage handling system didn’t work right — but it was just a software problem. This spring, the European Space Agency’s new Ariane 5 rocket blew up on its maiden launch because of a little software problem. The federal government’s major agencies – from the IRS to the National Weather Service — are beset with projects that are years late and hundreds of millions of dollars over budget, often because of simple software problems.


Talent does vary by developer — after all, we’re not resources and interchangeable cogs — but we need better processes for developing software.  We need process improvement to increase quality, which leaves more time for more features because we’re not consumed by rework issues.  We need to reduce the cost of software development, which reduces the price and increases demand.

We need developers to stop thinking all their creativity goes into the code because their creativity should be put into improving how we write code in the first place.

Two “orders of magnitude” is one too many

An “order of magnitude” gain in efficiency, whether it’s in a business process or a computer program, is something to strive for, but two orders of magnitude, despite sounding cool, is one too many.


Assume you have a perfectly linear process — say, a computer program processing data — whereby you can add additional processing nodes for parallel processing.  If 1 run of your program takes 1 minute and you’ve got 100 iterations, you can reasonably expect to wait for 100 minutes.

100 units of work x 1 minute per unit = 100 minutes elapsed time

But since your program can scale linearly, you can add an additional program and cut the time in half!

(100 units x 1 minute) / 2 processors = 50 minutes elapsed time

Similarly, you can scale up to 4 processors and reduce elapsed time to 25 minutes.  This is perfect linear scaling and with your big math brain, you figure out that you can get a 10X gain by scaling up to 10 processors!

So far, so good.  10x is an order of magnitude and represents a 90% decrease in elapsed time.

(100 units x 1 minutes) / 10 processors = 10 minutes elapsed time
10 minutes is 10% of the original 100 minute elapsed time. 10x gain!

I think the second order of magnitude is a waste of time.  That’s right, it’s not worth going for another 10x gain.

Why?  It costs too much!

Assume that a server costs $1000 and your process will consume the entire processing capacity of a server.  Scaling up to 10 servers costs $10,000.  You reduced processing time by 90% for $10k.

Math is not on your side for the second order of magnitude.  Taking your elapsed time from 10 minutes to 1 minute is another 10x gain, but it requires 90% of your total investment!

You have a perfectly linearly scalable process, right?  So, reducing your elapsed time to 1 minute requires 100 servers at $1,000 each.  That’s $100,000!  Meanwhile, you already achieved a 90% reduction for just $10,000.

90% of the gain is achieved by 10% of the investment.  The remaining 10% of the gain requires 90% of the investment!

Pareto was right.  The 80/20 rule applies, but in our case it’s 90/10.

The chart below shows two orders of magnitude.  You can’t help but notice the point of diminishing returns.  It doesn’t seem worthwhile to go for that second order of magnitude.
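To make the diminishing returns concrete, here’s a small sketch of the arithmetic above. It assumes exactly the numbers used in this post — a perfectly linear process, 100 one-minute units of work, and $1,000 per server — and nothing else.

```java
// Sketch of the cost-vs-speedup math above, assuming perfect linear
// scaling, 100 one-minute units of work, and $1000 per server.
public class ScalingCost {

    // (100 units x 1 minute) / N processors = elapsed minutes
    public static double elapsedMinutes(int servers) {
        return 100.0 * 1 / servers;
    }

    // Each server costs $1000, so cost grows linearly with the cluster
    public static int costDollars(int servers) {
        return servers * 1000;
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 2, 4, 10, 100}) {
            System.out.printf("%3d servers: %6.1f min elapsed, $%d%n",
                    n, elapsedMinutes(n), costDollars(n));
        }
        // First order of magnitude:  100 min -> 10 min for $10,000.
        // Second order of magnitude:  10 min ->  1 min for $90,000 more.
    }
}
```

Running it shows the first order of magnitude (100 minutes down to 10) costs $10,000, while the second (10 minutes down to 1) costs $90,000 on top of that.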


WSDL first development? Are they crazy?

From the CXF user guide: “For new development the preferred path is to design your services in WSDL and then generate the code to implement them.”

Are they insane?

Which would you rather write by hand: this Java interface, or the WSDL that follows it?



@WebService(
    endpointInterface = "com.southwind.PersonFacade",
    name = "PersonFacade"
)
public interface PersonFacade {

    public Person getPerson(@WebParam(name="ssn") String ssn);

    public Person findPerson(@WebParam(name="id") String id);
}

<?xml version='1.0' encoding='UTF-8'?>
<wsdl:definitions name="PersonFacadeImplService" targetNamespace="http://southwind.com/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tns="http://southwind.com/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <wsdl:types>
    <xs:schema attributeFormDefault="unqualified" elementFormDefault="unqualified" targetNamespace="http://southwind.com/" xmlns:tns="http://southwind.com/" xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="findPerson" type="tns:findPerson" />
      <xs:element name="findPersonResponse" type="tns:findPersonResponse" />
      <xs:element name="getPerson" type="tns:getPerson" />
      <xs:element name="getPersonResponse" type="tns:getPersonResponse" />
      <xs:complexType name="getPerson">
        <xs:sequence>
          <xs:element minOccurs="0" name="ssn" type="xs:string" />
        </xs:sequence>
      </xs:complexType>
      <xs:complexType name="getPersonResponse">
        <xs:sequence>
          <xs:element minOccurs="0" name="return" type="tns:person" />
        </xs:sequence>
      </xs:complexType>
      <xs:complexType name="person">
        <xs:sequence>
          <xs:element minOccurs="0" name="birthday" type="xs:dateTime" />
          <xs:element maxOccurs="unbounded" minOccurs="0" name="enrollments" nillable="true" type="tns:enrollment" />
          <xs:element minOccurs="0" name="firstName" type="xs:string" />
          <xs:element minOccurs="0" name="lastName" type="xs:string" />
          <xs:element minOccurs="0" name="ssn" type="xs:string" />
        </xs:sequence>
      </xs:complexType>
      <xs:complexType name="enrollment">
        <xs:sequence>
          <xs:element minOccurs="0" name="planName" type="xs:string" />
          <xs:element name="planRate" type="xs:double" />
          <xs:element minOccurs="0" name="type" type="tns:type" />
        </xs:sequence>
      </xs:complexType>
      <xs:complexType name="findPerson">
        <xs:sequence>
          <xs:element minOccurs="0" name="id" type="xs:string" />
        </xs:sequence>
      </xs:complexType>
      <xs:complexType name="findPersonResponse">
        <xs:sequence>
          <xs:element minOccurs="0" name="return" type="tns:person" />
        </xs:sequence>
      </xs:complexType>
      <xs:simpleType name="type">
        <xs:restriction base="xs:string">
          <xs:enumeration value="MEDICAL" />
          <xs:enumeration value="DENTAL" />
          <xs:enumeration value="VISION" />
          <xs:enumeration value="PHARM" />
        </xs:restriction>
      </xs:simpleType>
    </xs:schema>
  </wsdl:types>
  <wsdl:message name="findPerson">
    <wsdl:part element="tns:findPerson" name="parameters" />
  </wsdl:message>
  <wsdl:message name="findPersonResponse">
    <wsdl:part element="tns:findPersonResponse" name="parameters" />
  </wsdl:message>
  <wsdl:message name="getPersonResponse">
    <wsdl:part element="tns:getPersonResponse" name="parameters" />
  </wsdl:message>
  <wsdl:message name="getPerson">
    <wsdl:part element="tns:getPerson" name="parameters" />
  </wsdl:message>
  <wsdl:portType name="PersonFacade">
    <wsdl:operation name="getPerson">
      <wsdl:input message="tns:getPerson" name="getPerson" />
      <wsdl:output message="tns:getPersonResponse" name="getPersonResponse" />
    </wsdl:operation>
    <wsdl:operation name="findPerson">
      <wsdl:input message="tns:findPerson" name="findPerson" />
      <wsdl:output message="tns:findPersonResponse" name="findPersonResponse" />
    </wsdl:operation>
  </wsdl:portType>
  <wsdl:binding name="PersonFacadeImplServiceSoapBinding" type="tns:PersonFacade">
    <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http" />
    <wsdl:operation name="getPerson">
      <soap:operation soapAction="" style="document" />
      <wsdl:input name="getPerson">
        <soap:body use="literal" />
      </wsdl:input>
      <wsdl:output name="getPersonResponse">
        <soap:body use="literal" />
      </wsdl:output>
    </wsdl:operation>
    <wsdl:operation name="findPerson">
      <soap:operation soapAction="" style="document" />
      <wsdl:input name="findPerson">
        <soap:body use="literal" />
      </wsdl:input>
      <wsdl:output name="findPersonResponse">
        <soap:body use="literal" />
      </wsdl:output>
    </wsdl:operation>
  </wsdl:binding>
  <wsdl:service name="PersonFacadeImplService">
    <wsdl:port binding="tns:PersonFacadeImplServiceSoapBinding" name="PersonFacadeImplPort">
      <soap:address location="http://mturanskylptp2:9000/personFacade" />
    </wsdl:port>
  </wsdl:service>
</wsdl:definitions>
More proof that you can’t keep a good idea down?

In this blog article, Michael Nygard discusses a talk he attended where a technical architect discussed an SOA framework at FIDUCIA IT AG, a company in the financial services industry. Nygard describes an architecture that echoes many of the features I implicitly spoke of in my first blog article about my big integration project / message bus.

You may be asking yourself right now, why does he keep talking about this particular project? Briefly: it’s been a very fun project, it’s ongoing, it consumes most of my daily brain cycles, we’re still growing it (it’s a brand new infrastructure for us), and it encompasses a whole lot of ideas that I thought were good and that are now being validated by other projects I read about online.

So, what other unsung features did we build in that I’ll now sing about?

Asynchronous Messaging

You’ll notice the Spooler component in the original broad sketch of our architecture. The high-level description I gave the Spooler touched on callbacks. Asynchronous messaging was left unsaid, but it is implied by having a mechanism for callbacks.

The description also labeled my Spooler an endpoint, which may be a web service endpoint. We use web services only because the Enterprise Service Bus (ESB) orchestrating work on our bus is .NET-based while our project is all Java. That said, we post Plain Ol’ XML (POX) over HTTP, which is deserialized quickly to a Java POJO. Our entire messaging system works on POJOs, not XML.

The outside world may use SOAP (or XML-RPC or flat files or whatever) when communicating with my company, but internally our ESB talks POX with the bus. Mediation and transformation (from SOAP to POX) are part of the functionality of an ESB. Consumers internal to our bus access queues directly instead of using web services.

Pure POJOs, but distributed

It’s extremely productive and useful to work with a pure POJO model, and it’s even more productive and useful when the state of those POJOs is automagically kept in sync across the cluster regardless of what node is working on it. This is where Terracotta Server shines.

We pass POJOs around through all the queues. Consumers — which can exist anywhere on the network — process the Service/Job/Message (all interchangeable terms, as far as I am concerned — they are all units of work). Our messages are stateful, meaning they enter our bus empty except for parameters in instance variables, get routed around to various and sundry consumers across the network, and get posted back (the callback) full of data to the ESB.

Why do we need distributed POJOs? Well, we found it to be highly useful. For example, we offer a REST API to abort a pending message (such as http://ourendpoint/message/abort/abcdefg-the-guid-wxyz). The easiest way we found to tell the entire bus to disregard this message was to flip the bit on the message itself. The endpoint is running under Terracotta Server, all of the queues live in TC, and our consumers are likewise plugged in. If you stick all your messages in a Map (or series of maps if you’re worried about hashing, locking, and high volumes) where the GUID is the key and the value is the message, then the endpoint or any consumer can quickly obtain the reference to the message itself and alter its state. We can also write programs that hook into TC temporarily to inspect or modify the state of the system. Persistent memory is cool like that. It exists outside the runtime duration of the ephemeral program.
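A minimal in-JVM sketch of that abort-by-GUID idea follows. The class and method names are mine, not the real bus’s, and a plain ConcurrentHashMap stands in for what would be a Terracotta-clustered root map shared by the endpoint and every consumer:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Every message lives in a shared map keyed by GUID, so the endpoint
// (or any consumer) can grab a reference and flip its abort bit.
public class MessageRegistry {

    public static class Message {
        private volatile boolean aborted;          // the "bit" consumers check
        public void abort() { aborted = true; }
        public boolean isAborted() { return aborted; }
    }

    private final Map<String, Message> byGuid = new ConcurrentHashMap<>();

    public void register(String guid, Message m) {
        byGuid.put(guid, m);
    }

    // What a REST call like /message/abort/{guid} would do internally
    public boolean abort(String guid) {
        Message m = byGuid.get(guid);
        if (m == null) return false;               // unknown or already gone
        m.abort();
        return true;
    }
}
```

Consumers simply check `isAborted()` before (and during) expensive work and skip the message if the bit has been flipped.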

The endpoint likewise has REST APIs for returning the state of the bus, queues sizes, current activity, and other metrics. All of this data is collected from the POJOs themselves, because the endpoint has access to the very object instances that are running all over the network. It just so happens this architecture works wonderfully inside a single JVM, too, without TC, for easier development and debugging.

Load balancing and routers

Straight from Michael Nygard’s article:

Third, they’ve built a multi-layered middle tier. Incoming requests first hit a pair of “Central Process Servers” which inspect the request. Requests are dispatched to individual “portals” based on their customer ID.

In other words, they have endpoints behind load balancers (we use Pound), and “dispatched” is another word for “routed.”  We have content-based routers (a common and useful Enterprise Integration Pattern for messaging systems) that route messages/services/jobs of specific types to certain queues.  Our consumers are not homogeneous.  We’ve configured different applications (the integration aspects of our project) to listen on different queues.  This saved us from having to port applications off the servers where they were previously deployed.  These apps are several years old.  Porting would have taken time and money.  Allowing messages to flow to them where they already exist was a big win for us.
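As a toy illustration of content-based routing, here is an in-process sketch. All of these names are made up for the example; the real routers dispatch across the network to queues that different applications listen on:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// A content-based router inspects each message and drops it on the
// queue bound to that message type; unmatched types go to a default.
public class ContentBasedRouter {

    public static class Message {
        public final String type;                  // what the router inspects
        public Message(String type) { this.type = type; }
    }

    private final Map<String, Queue<Message>> routes = new HashMap<>();
    private final Queue<Message> defaultQueue = new ArrayDeque<>();

    // Bind a message type to the queue one application listens on
    public Queue<Message> addRoute(String type) {
        Queue<Message> q = new ArrayDeque<>();
        routes.put(type, q);
        return q;
    }

    public void route(Message m) {
        routes.getOrDefault(m.type, defaultQueue).add(m);
    }
}
```

Because routing keys off message content rather than the sender, legacy applications keep receiving only the message types they understand, on the boxes where they already run.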

More to come

I’ve got the outline for my white paper complete, where I bulleted the features above as well as those in my previous blog article. There are other features I haven’t covered yet. Overall, I think it will be an interesting paper to read.

Still, I’m a little jealous that FIDUCIA IT AG has scaled out to 1,000 nodes in their system.  I can’t say how many nodes we’re up to, but I can say I’m looking forward to the massive scalability that our new architecture will give us.

You can’t keep a good idea down

Our message bus project was more than just replacing JMS with a POJO messaging system. It’s a whole piece of infrastructure designed to make it easy for different folks to do their jobs.

How did we do this and why do the next couple of paragraphs sound like I’m bragging? Because many of the features we implemented were recently announced in a new open source project (more on that later). Bear with me as I go through some of the features we implemented, knowing that I’ll tie them to the features of a recently announced (and exciting) open source middleware product.

Configuration Management and Network Deployments …

We deploy applications to our bus over the network by way of a simple little bootstrap loader. You’ll note the Java class I used in my blog article uses a URLClassLoader. My example used a file URL (“file://”) but there’s nothing stopping those URLs from beginning with “http://…”

This lets our Config Mgt. team deploy applications to a single place on the network. As nodes on the network come up, they’ll download the code they need to run.

via Bootstrapping

While we’re on the subject of bootstrapping, there’s nothing stopping a smart developer from bootstrapping different applications into different classloaders. Again using the Java class from my blog article, you’ll notice the code finds the “main” class relative to the classloader. Who says you need just one classloader? Who says you can only run “main” methods? Stick all your classloaders in a Map and use the appropriate classloader to resolve an interface like, say, Service (as in Service Oriented Architecture) with an “execute” method. Suddenly, you can have applications that are invoked by a Service. We used this very technique to integrate legacy, stand-alone applications into an SOA.
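Here is a rough sketch of that classloader-per-application idea. The Service interface and all names below are hypothetical stand-ins, not the real project’s code; the point is that each app loads in its own URLClassLoader and is invoked through a shared interface rather than a main method:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.HashMap;
import java.util.Map;

public class AppContainer {

    public interface Service { void execute(); }   // assumed shared contract

    private final Map<String, ClassLoader> loaders = new HashMap<>();

    // URLs may be file:// or http:// -- network deployment falls out for free
    public void deploy(String appName, URL[] codeLocation) {
        loaders.put(appName, new URLClassLoader(codeLocation,
                AppContainer.class.getClassLoader()));
    }

    // Resolve the named class in the app's own loader and run it as a Service
    public void invoke(String appName, String serviceClass) throws Exception {
        Service s = (Service) loaders.get(appName)
                .loadClass(serviceClass)
                .getDeclaredConstructor()
                .newInstance();
        s.execute();
    }

    // Hot swap: drop the old classloader and install a new version
    public void redeploy(String appName, URL[] newCode) {
        loaders.remove(appName);                   // old version eligible for GC
        deploy(appName, newCode);
    }

    // Tiny stand-in "application" so the sketch can be exercised
    public static class HelloService implements Service {
        public static volatile boolean ran = false;
        public void execute() { ran = true; }
    }
}
```

Side-by-side versions come along for free: deploy the release branch under one name and trunk under another, each in its own loader, inside the same container.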

Take bootstrapping and isolated classloading one step further and you’ll soon realize you can load multiple versions of the same application side-by-side in the same container. One could be your release branch version, the other could be your trunk code. Same infrastructure and container. We did that, too.

Lastly, what happens if you dump a classloader containing an application and replace it with a new one containing a new version of the application? Well, you just hot swapped your app. You updated the application without restarting your container.

Focus on Developer Productivity

Developers? We got them covered, too. We went for a simple pure Java POJO model. No app servers, databases, or anything else required. A developer can run the entire message bus — the entire infrastructure — in a single JVM, which means inside your IDE. Unit tests are a snap because all Services are POJO classes. Did I mention it takes one whole second for someone to start a local instance of our message bus in a single JVM? I’m a developer first, architect second. I like to think about how my decisions affect other developers. If I make it easy, they’ll love me. If I don’t make it easy, well, … I don’t think I’ve done my job well.

Utility Computing w/o Virtualization

It’s a lot easier to get efficient use of hardware once you can load all your applications into a single container/framework. If you bake in queuing and routing (a message bus), then you can implement the Competing Consumers pattern for parallelism in processing. Also, if all message consumers are running all applications (thanks, classloading!), then your consumers can listen on all queues to process all available work. This is utility computing without a VM. Our project lets us use all available CPU cycles as long as there is work in the queues.
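The Competing Consumers pattern mentioned above can be sketched in a few lines. Threads in one JVM stand in for what would really be consumers clustered across machines, and the names are mine:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Several identical workers take from one shared queue, so adding a
// worker adds capacity -- the essence of Competing Consumers.
public class CompetingConsumers {

    // Enqueue `jobs` units of work, let `workers` consumer threads drain
    // the shared queue, and return how many units were processed.
    public static int processAll(int jobs, int workers) throws InterruptedException {
        BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
        AtomicInteger processed = new AtomicInteger();

        for (int i = 0; i < jobs; i++) {
            queue.put(processed::incrementAndGet);
        }

        Thread[] consumers = new Thread[workers];
        for (int i = 0; i < consumers.length; i++) {
            consumers[i] = new Thread(() -> {
                Runnable job;
                while ((job = queue.poll()) != null) {
                    job.run();                     // each unit is taken exactly once
                }
            });
            consumers[i].start();
        }
        for (Thread t : consumers) t.join();
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("processed = " + processAll(100, 4));
    }
}
```

Because every consumer competes for the same queue, idle CPU cycles anywhere in the cluster get used whenever work is available, which is exactly the utility-computing effect described above.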

There’s also the Master/Worker pattern to process batch jobs across a grid of consumers. Grid computing is one aspect of our project, but a minor one. I’m more interested in the gains we achieve through utility computing and the integration of several legacy applications to form our SOA.

The Open Source Version

Here are some features from the open source project, tell me if they sound familiar:

  • You can install, uninstall, start, and stop different modules of your application dynamically without restarting the container.
  • Your application can have more than one version of a particular module running at the same time.
  • OSGi provides very good infrastructure for developing service-oriented applications.

SpringSource recently announced a new “application platform” with some of the following features and benefits:

  • Real time application and server updates
  • Better resource utilization
  • Side by side resource versioning
  • Faster iterative development
  • Small server footprint
  • More manageable applications

Now, if only the SpringSource Application Platform could add queuing and routing to their project, we might consider porting to it. In the meantime, I’m happy to see other projects validating the ideas we pitched here at our company.

I’m excited, too, to announce that I’ve received the blessing of Management and PR to write a white paper about our project. It will cover all the above features as well as a slew of others, such as service orchestration and monitoring, asynchronous callbacks, a few other key Enterprise Integration Patterns, and it will explain how we used Terracotta Server to tie it all together. Stay tuned! I’m going to write blog articles to coincide with the sections of the paper.

Why Linux will never be the world’s primary desktop

Every year for the past N years has been proclaimed as “The Year of Linux on the Desktop!” It hasn’t happened. It will never happen.


GNOME vs. KDE? Which distro?

I understand that Linux is the kernel and that GNOME/KDE is the desktop. I am well aware of this distinction. Joe Average User is not. Joe Average User runs Windows because that’s what came installed on his machine from BestBuy. Jane Schmancy User might be using a Mac, but OS X came pre-installed when she bought her machine. In both scenarios, the computers Just Work™ when they are brought home and booted up. It’s a packaged experience where the OEM vendor’s value-add is that everything comes preconfigured and works out of the box.

Enter Linux.

First, you have to download a distribution. Which one? With this single step, you’ve lost 95% of the people.

Second, you have to install the OS. It’s a well-known fact that 98.87823423% of the people don’t know what an operating system is nor do they care. They want to vote for their favorite American Idol, not worry about what it means to walk through Anaconda’s install process.

The Free Open Source Software community (of which I am a fervent supporter) believes that choice is a good thing. They are wrong. Less is more, particularly when it comes to making choices. This is the paradox of choice.

The group of people in the world who likes more choice when it comes to operating systems is vanishingly small.

I’ve got CentOS on a desktop at home. I’ve installed Ubuntu on a work machine. Damn Small Linux is our OS of choice for our message bus. I’m in the minority of users. It takes one to know one.

The real reason people won’t switch desktops

It’s different.

That’s it. In a nutshell, “it’s different” will keep the vast majority of users from switching desktops. Joe and Jane Average User barely know Windows, I don’t expect them to voluntarily want to be a newbie on another system. No one likes being a newbie, especially when they’ve achieved some level of mastery of something.

One of my teammates (we’ll call him “Dan”) just got a MacBook Pro to replace his aging Windows laptop. Dan is among the technical elite. He chose Damn Small Linux for our server OS. One week later, he’s lamenting the fact that he’s not as productive on his new machine because he has to learn all new ways of doing things. He briefly considered remapping all the Mac hot keys to match the Windows hot keys he was used to.

When a tech master is considering remapping hot keys, Joe Average User is lost!

The average user doesn’t use hot keys, doesn’t know what they are, and certainly doesn’t know how to remap them. If they even manage to install a new OS, they’ll be lost when looking to run their programs; they won’t get the dumb joke in KDE where every app has to start with a K (Kommander? Konquerer? Kalculator? Please.)

The rise of Mac OS X?

If there will be another desktop to challenge Windows — and that’s a pretty big IF — it will be Apple’s wares. They’ve got the iPod and the iPhone leading the way. They’ve got a much cooler brand than Microsoft. They are trickling into the enterprise market (our CEO uses a Mac, for example, as does our creative staff, media department, and several developers).

Still, “Think Different” becomes “it’s different” for the average user. The person switching from Windows to Mac will be on the right side of the bell curve. The billion PCs out there in the world (and growing) will be running Windows for a long time.

I’m writing this from a Windows laptop. Of the 12 people I can see in my immediate field of vision, only Dan has a Mac. One runs Ubuntu in a VM on his Windows laptop. The rest are running straight Windows.

This article isn’t meant to be a comparison of desktops, features, security, reliability or anything else. I’m just calling it like I see it in terms of usage. The word “never” in the title makes my position an absolute. Perhaps I should modify it to say “Why Linux won’t be the world’s primary desktop for a looooooooooong time, if ever.”

I’m sure some will disagree.

Scalability & High Availability with Terracotta Server

Our message bus will be deployed to production this month. We’re currently sailing through QA. Whatever bugs we’ve found have been in the business logic of the messages themselves (and assorted processing classes). Our infrastructure — the message bus backed by Terracotta — is strong.


People are asking questions about scalability. Quite frankly, I’m not worried about it.

Scalability is a function of architecture. If you get it right, you can scale easily with new hardware. We got it right. I can say that with confidence because we’ve load tested the hell out of it. We put 1.3 million real world messages through our bus in a weekend. That may or may not be high throughput for you and your business, but I guarantee you it is for ours.

The messages we put through our bus take a fair amount of processing power. That means they take more time to produce their result than they do to route through our bus. How does that affect our server load? Terracotta sat idle most of the time. The box hosting TC is the beefiest one in our cluster. Two dual-core hyperthreaded procs, which look like 8 CPUs in htop. We figured we would need the most powerful server to host the brains of the bus. Turns out we were wrong, so we put some message consumers on the TC box, widening our cluster for greater throughput. Now the box is hard at work, but only because we put four message consumers on it.

When we slam our bus with simple messages (e.g., messages that add 1+1), we see TC hard at work. The CPUs light up and the bus is running as fast as it can. 1+1 doesn’t carry much overhead. It’s the perfect test to stress the interlocking components of our bus. You can’t get any faster than 1+1 messages. But when we switched to real world messages, our consumers took all the time, their CPUs hit the ceiling, and our bus was largely idle. The whole bus, not just TC. We’ve got consumers that perform logging and callbacks and other sundry functions. All of these are mostly idle when our message consumers process real world workloads.
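The contrast between trivial and real-world messages can be sketched with a toy model. This is not our actual bus code: a plain LinkedBlockingQueue stands in for the Terracotta-clustered queue, and the Message interface and both message bodies are hypothetical stand-ins.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ConsumerSketch {
    // Hypothetical message contract: each message knows how to process itself.
    interface Message { int process(); }

    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the clustered queue shared via Terracotta.
        BlockingQueue<Message> queue = new LinkedBlockingQueue<>();

        // A trivial 1+1 message: almost no processing cost, so throughput
        // is dominated by the bus itself -- ideal for stressing the bus.
        queue.put(() -> 1 + 1);

        // A "real world" message: simulated heavy work dominates, so the
        // bus sits mostly idle while the consumer's CPU does the processing.
        queue.put(() -> {
            long acc = 0;
            for (int i = 0; i < 1_000_000; i++) acc += i % 7;
            return (int) (acc % 100);
        });

        // Consumer loop: pull messages and do the work.
        while (!queue.isEmpty()) {
            Message m = queue.take();
            System.out.println("result=" + m.process());
        }
    }
}
```

In the trivial case nearly all the elapsed time is queue traffic; in the heavy case nearly all of it is inside process(), which is exactly the pattern we saw in load testing.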

We’ve got our test farm on 4 physical nodes, each running between 4 and 8 Java processes (our various consumers) for a total of 24 separate JVMs. All of these JVMs are consumers of queues; half of them are consumers of our main request queue that performs all the real work. The other half are web service endpoints, batch processors, loggers, callback consumers, etc., and each is redundant across different physical nodes. Because our message processing carries greater overhead than bussing, I know we can add dozens more consumers for greater throughput without unduly taxing Terracotta. If we hit a ceiling, we can very easily create another cluster and load balance between them. That’s how Google scales. They’ve got thousands of clusters in a data center. This is perfectly acceptable for our requirements. It may or may not be suitable for yours.

You might be thinking that dozens of nodes isn’t a massive cluster, but our database would beg to differ. Once we launch our messaging system and start processing with it, we’ll begin to adversely impact our database. Scaling out that tier (more cheaply than buying new RAC nodes) is coming next. I hope we can scale our database as cheaply and easily as our message bus. That’ll enable us to grow our bus to hundreds of processors.

Like I said, I’m not worried about scaling our bus.


I might not be worried about scalability, but I am worried about high availability. My company is currently migrating to two new data centers. One will be used for our production servers while the other is slated for User Acceptance Test and Disaster Recovery. That’s right, an entire data center for failover. High availability is very important for our business and any business bound by Service Level Agreements.

Terracotta Server has an Active-Passive over Network solution for high availability. There is also a shared disk solution, but the network option fits our needs well. Our two data centers are connected by a big fat pipe, and Terracotta Server can have N number of passive servers. That means we can have a redundant server in our production data center and another one across the wire in our DR data center. We’ve also got a SAN that replicates disks between data centers. We might go with the shared disk solution if we find it performs better.
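For a sense of what Active-Passive over Network looks like in configuration, here’s a hedged sketch of a tc-config with one active in production, one local passive, and one passive across the wire in DR. Host names are hypothetical and element details vary by Terracotta version; check the tc-config reference for your release.

```xml
<!-- Sketch only: hypothetical hosts, version-dependent elements -->
<tc:tc-config xmlns:tc="http://www.terracotta.org/config">
  <servers>
    <!-- Active server in the production data center -->
    <server host="prod-tc-1" name="active"/>
    <!-- Passive in production, plus another passive in the DR data center -->
    <server host="prod-tc-2" name="passive-local"/>
    <server host="dr-tc-1" name="passive-dr"/>
    <ha>
      <mode>networked-active-passive</mode>
    </ha>
  </servers>
</tc:tc-config>
```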

Overall, though, it is more important for our business to get back online quickly than it is to perform at the nth degree of efficiency. Messaging, after all, doesn’t guarantee when your stuff gets run, just that it eventually runs. And if everything is asynchronous, then performance, too, is a secondary consideration to high availability.


If there’s one lesson to be learned from this blog article, it’s that one size does not fit all. Not all requirements are created equal. Our message bus is the right solution for our needs. Your mileage may vary. Some factors may outweigh others. For example, having a tight and tiny message bus that any developer can run in their IDE without a server (even without TC) is a great feature. Having no APIs, thanks to Terracotta, is what lets us do that. You might have very different requirements than we do and find yourself with a very different solution.

HOW TO: Better JavaScript Templates

JavaScript Templates (Jst) is a pure Javascript templating engine that runs in your browser using JSP-like syntax. If that doesn’t sound familiar, check out the live working example on this site and download the code. It’s Free Open Source Software.

