Archive for category Terracotta

More proof that you can’t keep a good idea down?

In this blog article, Michael Nygard recounts a talk he attended in which a technical architect described an SOA framework at FIDUCIA IT AG, a company in the financial services industry. Nygard describes an architecture that echoes many of the features I implicitly spoke of in my first blog article about my big integration project / message bus.

You may be asking yourself right now, why does he keep talking about this particular project? Briefly: it’s been a very fun project, it’s ongoing, it consumes most of my daily brain cycles, we’re still growing it (it’s a brand new infrastructure for us), and it encompasses a whole lot of ideas that I thought were good and that are now being validated by other projects I read about online.

So, what other unsung features did we build in that I’ll now sing about?

Asynchronous Messaging

You’ll notice the Spooler component in the original broad sketch of our architecture. The high-level description I gave of the Spooler touched on callbacks. Asynchronous messaging went unsaid, but it’s implied by having a mechanism for callbacks.

The description also labeled my Spooler an endpoint, which may be a web service endpoint. We use web services only because the Enterprise Service Bus (ESB) orchestrating work on our bus is .NET-based while our project is all Java. That said, we post Plain Ol’ XML (POX) over HTTP, which is deserialized quickly to a Java POJO. Our entire messaging system works on POJOs, not XML.

The outside world may use SOAP (or XML-RPC or flat files or whatever) when communicating with my company, but internally our ESB talks POX with the bus. Mediation and transformation (from SOAP to POX) are part of the functionality of an ESB. Consumers internal to our bus access queues directly instead of using web services.

Pure POJOs, but distributed

It’s extremely productive and useful to work with a pure POJO model, and it’s even more productive and useful when the state of those POJOs is automagically kept in sync across the cluster regardless of what node is working on it. This is where Terracotta Server shines.

We pass POJOs around through all the queues. Consumers — which can exist anywhere on the network — process the Service/Job/Message (all interchangeable terms, as far as I am concerned — they are all units of work). Our messages are stateful, meaning they enter our bus empty except for parameters in instance variables, get routed around to various and sundry consumers across the network, and get posted back (the callback) full of data to the ESB.

Why do we need distributed POJOs? Well, we found it to be highly useful. For example, we offer a REST API to abort a pending message (such as http://ourendpoint/message/abort/abcdefg-the-guid-wxyz). The easiest way we found to tell the entire bus to disregard this message was to flip the bit on the message itself. The endpoint is running under Terracotta Server, all of the queues live in TC, and our consumers are likewise plugged in. If you stick all your messages in a Map (or series of maps if you’re worried about hashing, locking, and high volumes) where the GUID is the key and the value is the message, then the endpoint or any consumer can quickly obtain the reference to the message itself and alter its state. We can also write programs that hook into TC temporarily to inspect or modify the state of the system. Persistent memory is cool like that. It exists outside the runtime duration of the ephemeral program.
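Here’s a minimal sketch of that idea (names like MessageRegistry and markAborted are illustrative, not our actual classes). Under Terracotta, the map is declared a clustered root, so every JVM sees the same Message instances:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // A stateful unit of work: enters with parameters, leaves with data.
    class Message {
        private final String guid;
        private volatile boolean aborted;

        Message(String guid) { this.guid = guid; }
        String getGuid() { return guid; }
        void markAborted() { aborted = true; }
        boolean isAborted() { return aborted; }  // consumers check before working
    }

    public class MessageRegistry {
        // Declared as a Terracotta root in tc-config, so all nodes share it.
        private final ConcurrentMap<String, Message> messagesByGuid =
                new ConcurrentHashMap<String, Message>();

        public void register(Message message) {
            messagesByGuid.put(message.getGuid(), message);
        }

        // Called by the REST abort endpoint; consumers anywhere on the network
        // see the flag flip because the object's state is clustered.
        public boolean abort(String guid) {
            Message message = messagesByGuid.get(guid);
            if (message == null) {
                return false;
            }
            message.markAborted();
            return true;
        }
    }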

The endpoint likewise has REST APIs for returning the state of the bus, queue sizes, current activity, and other metrics. All of this data is collected from the POJOs themselves, because the endpoint has access to the very object instances that are running all over the network. It just so happens this architecture works wonderfully inside a single JVM, too, without TC, for easier development and debugging.

Load balancing and routers

Straight from Michael Nygard’s article:

Third, they’ve built a multi-layered middle tier. Incoming requests first hit a pair of “Central Process Servers” which inspect the request. Requests are dispatched to individual “portals” based on their customer ID.

In other words, they have endpoints behind load balancers (we use Pound), and “dispatched” is another word for “routed.” We have content-based routers (a common and useful Enterprise Integration Pattern for messaging systems) that route messages/services/jobs of specific types to certain queues. Our consumers are not homogeneous. We’ve configured different applications (the integration aspects of our project) to listen on different queues. This saved us from having to port applications off the servers where they were previously deployed. These apps are several years old. Porting would have taken time and money. Allowing messages to flow to them where they already exist was a big win for us.
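A bare-bones sketch of a content-based router (the RoutableMessage interface and class names are mine for illustration): inspect the message type and put it on the queue its application listens on.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.BlockingQueue;

    // The minimal contract a message needs for routing (illustrative).
    interface RoutableMessage {
        String getType();
    }

    public class ContentBasedRouter {
        private final Map<String, BlockingQueue<RoutableMessage>> routes =
                new HashMap<String, BlockingQueue<RoutableMessage>>();
        private final BlockingQueue<RoutableMessage> defaultQueue;

        public ContentBasedRouter(BlockingQueue<RoutableMessage> defaultQueue) {
            this.defaultQueue = defaultQueue;
        }

        public void addRoute(String messageType, BlockingQueue<RoutableMessage> queue) {
            routes.put(messageType, queue);
        }

        // Inspect the message and hand it to the queue its application
        // listens on; put() blocks when a bounded destination is full,
        // which throttles producers upstream.
        public void route(RoutableMessage message) throws InterruptedException {
            BlockingQueue<RoutableMessage> queue = routes.get(message.getType());
            if (queue == null) {
                queue = defaultQueue;
            }
            queue.put(message);
        }
    }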

More to come

I’ve completed the outline for my white paper, which bullets the features above as well as those in my previous blog article. There are other features I haven’t covered yet. Overall, I think it will be an interesting paper to read.

Still, I’m a little jealous that FIDUCIA IT AG has scaled out to 1,000 nodes in their system. I can’t say how many nodes we’re up to, but I can say I’m looking forward to the massive scalability our new architecture will give us.

You can’t keep a good idea down

Our message bus project was more than just replacing JMS with a POJO messaging system. It’s a whole piece of infrastructure designed to make it easy for different folks to do their jobs.

How did we do this and why do the next couple of paragraphs sound like I’m bragging? Because many of the features we implemented were recently announced in a new open source project (more on that later). Bear with me as I go through some of the features we implemented, knowing that I’ll tie them to the features of a recently announced (and exciting) open source middleware product.

Configuration Management and Network Deployments …

We deploy applications to our bus over the network by way of a simple little bootstrap loader. You’ll note the Java class I used in my blog article uses a URLClassLoader. My example used a file URL (“file://”), but there’s nothing stopping those URLs from beginning with “http://…”

This lets our Config Mgt. team deploy applications to a single place on the network. As nodes on the network come up, they’ll download the code they need to run.
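A sketch of such a bootstrap loader (this isn’t our production class, and the host and jar names are made up, but it’s the shape of the trick):

    import java.lang.reflect.Method;
    import java.net.URL;
    import java.net.URLClassLoader;

    public class Bootstrap {
        public static void main(String[] args) throws Exception {
            // In development this URL can just as easily be "file://...".
            URL appJar = new URL("http://deploy.example.com/apps/myapp.jar");
            URLClassLoader loader = new URLClassLoader(new URL[] { appJar });

            // Find the entry point relative to the new classloader.
            Class<?> mainClass = loader.loadClass("com.example.MyApp");
            Method main = mainClass.getMethod("main", String[].class);
            main.invoke(null, (Object) new String[0]);
        }
    }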

via Bootstrapping

While we’re on the subject of bootstrapping, there’s nothing stopping a smart developer from bootstrapping different applications into different classloaders. Again using the Java class from my blog article, you’ll notice the code finds the “main” class relative to the classloader. Who says you need just one classloader? Who says you can only run “main” methods? Stick all your classloaders in a Map and use the appropriate classloader to resolve an interface like, say, Service (as in Service Oriented Architecture) with an “execute” method. Suddenly, you can have applications that are invoked by a Service. We used this very technique to integrate legacy, stand-alone applications into an SOA.

Take bootstrapping and isolated classloading one step further and you’ll soon realize you can load multiple versions of the same application side-by-side in the same container. One could be your release branch version, the other could be your trunk code. Same infrastructure and container. We did that, too.

Lastly, what happens if you dump a classloader containing an application and replace it with a new one containing a new version of the application? Well, you just hot swapped your app. You updated the application without restarting your container.
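Putting the last three paragraphs together in one hypothetical sketch (the ServiceRegistry name and the reflective dispatch are mine, not our actual framework): one isolated classloader per application version, services resolved through the map, and hot swapping by replacing an entry.

    import java.lang.reflect.Method;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ServiceRegistry {
        // One isolated classloader per application (or application version).
        private final Map<String, ClassLoader> loadersByApp =
                new ConcurrentHashMap<String, ClassLoader>();

        // Deploying again under the same name hot-swaps the application:
        // the old classloader and its classes simply become garbage.
        public void deploy(String appName, URL[] appJars) {
            loadersByApp.put(appName, new URLClassLoader(appJars));
        }

        // Resolve and invoke an application's Service through its own
        // classloader. Reflection keeps this sketch self-contained; in
        // practice you'd load a shared Service interface in the parent
        // classloader and cast to it.
        public void execute(String appName, String serviceClassName)
                throws Exception {
            ClassLoader loader = loadersByApp.get(appName);
            Class<?> serviceClass = loader.loadClass(serviceClassName);
            Object service = serviceClass.newInstance();
            Method execute = serviceClass.getMethod("execute");
            execute.invoke(service);
        }
    }

Deploy “billing-release” and “billing-trunk” under different names and you’ve got two versions side by side; deploy under the same name and you’ve hot swapped.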

Focus on Developer Productivity

Developers? We got them covered, too. We went for a simple pure Java POJO model. No app servers, databases, or anything else required. A developer can run the entire message bus — the entire infrastructure — in a single JVM, which means inside your IDE. Unit tests are a snap because all Services are POJO classes. Did I mention it takes one whole second for someone to start a local instance of our message bus in a single JVM? I’m a developer first, architect second. I like to think about how my decisions affect other developers. If I make it easy, they’ll love me. If I don’t make it easy, well, … I don’t think I’ve done my job well.
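Because every Service is a POJO, a unit test needs no container at all. A contrived sketch (made-up classes, echoing the 1+1 messages we use in load tests):

    import junit.framework.TestCase;

    public class AddServiceTest extends TestCase {
        // A trivial stateful message: enters with parameters, leaves with a result.
        static class AddMessage {
            final int a, b;
            int result;
            AddMessage(int a, int b) { this.a = a; this.b = b; }
        }

        // A trivial Service: pure POJO business logic, no plumbing.
        static class AddService {
            void execute(AddMessage m) { m.result = m.a + m.b; }
        }

        public void testExecutePopulatesResult() {
            AddMessage message = new AddMessage(1, 1);
            new AddService().execute(message);
            assertEquals(2, message.result);
        }
    }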

Utility Computing w/o Virtualization

It’s a lot easier to get efficient use of hardware once you can load all your applications into a single container/framework. If you bake in queuing and routing (a message bus), then you can implement the Competing Consumers pattern for parallelism in processing. Also, if all message consumers are running all applications (thanks, classloading!), then your consumers can listen on all queues to process all available work. This is utility computing without a VM. Our project lets us use all available CPU cycles as long as there is work in the queues.
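The Competing Consumers pattern in miniature (illustrative code, not our framework): every consumer takes from the same queue, and under Terracotta those consumer threads can live in separate JVMs on separate boxes.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class CompetingConsumers {
        public static void main(String[] args) {
            // Bounded, so producers are throttled when consumers fall behind.
            final BlockingQueue<Runnable> work =
                    new LinkedBlockingQueue<Runnable>(1000);

            // Whichever consumer is free takes the next unit of work;
            // adding threads (or JVMs) widens throughput automatically.
            for (int i = 0; i < 4; i++) {
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            while (true) {
                                work.take().run();
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                }, "consumer-" + i).start();
            }
        }
    }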

There’s also the Master/Worker pattern to process batch jobs across a grid of consumers. Grid computing is one aspect of our project, but a minor one. I’m more interested in the gains we achieve through utility computing and the integration of several legacy applications to form our SOA.

The Open Source Version

Here are some features from the open source project; tell me if they sound familiar:

  • You can install, uninstall, start, and stop different modules of your application dynamically without restarting the container.
  • Your application can have more than one version of a particular module running at the same time.
  • OSGi provides very good infrastructure for developing service-oriented applications

http://www.javaworld.com/javaworld/jw-03-2008/jw-03-osgi1.html

SpringSource recently announced a new “application platform” with some of the following features and benefits:

  • Real time application and server updates
  • Better resource utilization
  • Side by side resource versioning
  • Faster iterative development
  • Small server footprint
  • More manageable applications

http://www.springsource.com/web/guest/products/suite/applicationplatform

http://www.theserverside.com/news/thread.tss?thread_id=49243

Now, if the SpringSource Application Platform ever adds queuing and routing, we might consider porting to it. In the meantime, I’m happy to see other projects validating the ideas we pitched here at our company.

I’m excited, too, to announce that I’ve received the blessing of Management and PR to write a white paper about our project. It will cover all the above features as well as a slew of others (service orchestration and monitoring, asynchronous callbacks, a few other key Enterprise Integration Patterns), and it will explain how we used Terracotta Server to tie it all together. Stay tuned! I’m going to write blog articles to coincide with the sections of the paper.

Scalability & High Availability with Terracotta Server

Our message bus will be deployed to production this month. We’re currently sailing through QA. Whatever bugs we’ve found have been in the business logic of the messages themselves (and assorted processing classes). Our infrastructure — the message bus backed by Terracotta — is strong.

SCALABILITY

People are asking questions about scalability. Quite frankly, I’m not worried about it.

Scalability is a function of architecture. If you get it right, you can scale easily with new hardware. We got it right. I can say that with confidence because we’ve load tested the hell out of it. We put 1.3 million real-world messages through our bus in a weekend. That may or may not be high throughput for you and your business, but I guarantee you it is for ours.

The messages we put through our bus take a fair amount of processing power. That means they take more time to produce their result than they do to route through our bus. How does that affect our server load? Terracotta sat idle most of the time. The box hosting TC is the beefiest one in our cluster. Two dual-core hyperthreaded procs, which look like 8 CPUs in htop. We figured we would need the most powerful server to host the brains of the bus. Turns out we were wrong, so we put some message consumers on the TC box, widening our cluster for greater throughput. Now the box is hard at work, but only because we put four message consumers on it.

When we slam our bus with simple messages (e.g., messages that add 1+1), we see TC hard at work. The CPUs light up and the bus runs as fast as it can. 1+1 doesn’t carry much overhead; it’s the perfect test to stress the interlocking components of our bus. You can’t get any faster than 1+1 messages. But when we switched to real-world messages, our consumers took all the time, their CPUs hit the ceiling, and our bus was largely idle. The whole bus, not just TC. We’ve got consumers that perform logging and callbacks and other sundry functions. All of these are mostly idle when our message consumers process real-world workloads.

We’ve got our test farm on 4 physical nodes, each running between 4 and 8 Java processes (our various consumers) for a total of 24 separate JVMs. All of these JVMs are queue consumers; half of them consume our main request queue, which performs all the real work. The other half are web service endpoints, batch processors, loggers, callback consumers, etc., each redundant on different physical nodes. Because our message processing carries greater overhead than bussing, I know we can add dozens more consumers for greater throughput without unduly taxing Terracotta. If we hit a ceiling, we can very easily create another cluster and load balance between them. That’s how Google scales. They’ve got thousands of clusters in a data center. This is perfectly acceptable for our requirements. It may or may not be suitable for yours.

You might be thinking that dozens of nodes isn’t a massive cluster, but our database would beg to differ. Once we launch our messaging system and start processing with it, we’ll begin to adversely impact our database. Scaling out that tier (more cheaply than buying new RAC nodes) is coming next. I hope we can scale our database as cheaply and easily as our message bus. That’ll enable us to grow our bus to hundreds of processors.

Like I said, I’m not worried about scaling our bus.

HIGH AVAILABILITY

I might not be worried about scalability, but I am worried about high availability. My company is currently migrating to two new data centers. One will be used for our production servers while the other is slated for User Acceptance Test and Disaster Recovery. That’s right, an entire data center for failover. High availability is very important for our business and any business bound by Service Level Agreements.

Terracotta Server has an Active-Passive over Network solution for high availability. There is also a shared disk solution, but the network option fits our needs well. Our two data centers are connected by a big fat pipe, and Terracotta Server can have any number of passive servers. That means we can have a redundant server in our production data center and another across the wire in our DR data center. We’ve also got a SAN that replicates disks between data centers. We might go with the shared disk solution if we find it performs better.

Overall, though, it is more important for our business to get back online quickly than it is to perform at the nth degree of efficiency. Messaging, after all, doesn’t guarantee when your stuff gets run, just that it eventually runs. And if everything is asynchronous, then performance, too, is a secondary consideration to high availability.

CONCLUSION

If there’s one lesson to be learned from this blog article, it’s that one size does not fit all. Not all requirements are created equal. Our message bus is the right solution for our needs. Your mileage may vary. Some factors may outweigh others. For example, having a tight and tiny message bus that any developer can run in their IDE without a server (even without TC) is a great feature. Terracotta imposes no APIs on our code, which is what lets us do that. You might have very different requirements than we do and find yourself with a very different solution.

InfoQ writes about my use of Terracotta Server as a message bus

Check out this article on InfoQ about using Terracotta Server as a message bus!

Some wheels need reinventing

Reinventing a square wheel is a common anti-pattern. The idea is a) we don’t need to reinvent the wheel because b) we’re likely to recreate it poorly compared to what is already available. But if we never reinvent any wheels, then we never progress beyond what we have. The real question, then, is when does it make sense to recreate a wheel? Some wheels need to be recreated.

I recently reinvented a wheel. A big one. The wheel is “Enterprise Messaging,” which must be complex because it has “enterprise” right in the name! I’d be a fool to reinvent that wheel, right? Maybe. Maybe not. We fit our “enterprise messaging system” into 92kb:

[Screenshot: our entire enterprise messaging framework, a single 92kb jar]

Some won’t consider 92kb to be “enterprisey” enough, but that’s ok with me. I know we were able to put 1.3 million real-world messages through our bus over a weekend. That’s enterprisey.

Jonas Bonér wrote an article about building a POJO datagrid using Terracotta Server, and I replied on his blog saying we did something similar by using Terracotta Server as a message bus. Another reader asked why I did this instead of using JMS.

I think there are several benefits to this reinvented wheel:

TINY!

92kb contains the entire server framework. We have another jar containing common interfaces we share with client applications that weighs in at 18kb.

It works!

A single “consumer” in our framework is bootstrapped into an isolated classloader, which enables our framework to load applications (the various apps we need to integrate) into their own isolated classloaders. One consumer can process a message for any integrated application.

This is utility computing without expensive VMware license fees.

We’re consolidating servers instead of giving each application dedicated hardware. The servers were mostly idle anyway, which is why enterprises are looking to utility computing and virtualization to make more efficient use of those spare CPU cycles. In our framework, hardware becomes generic processing power without the need for virtualizing servers. Scaling out the server farm benefits all applications equally, whereas the prior deployments required separate capital expenditures for each new server.

Pure POJO

Our framework runs inside an IDE without any server infrastructure at all. No ActiveMQ, no MySQL, and no Terracotta Server. Developers can stand up their own message bus in their IDE, post messages to it, and debug their message code right in the framework itself.

We introduce Terracotta Server as a configuration detail in a test environment. Configuration Management handles this part, leaving developers to focus on the business logic.

So, I might not be writing my own servlet container anytime soon (not when Tomcat and Jetty are open source and high quality), but I think it made a lot of sense to reinvent the “enterprise messaging” wheel. Terracotta Server allows me, in effect, to treat my network as one big JVM. My simple POJO model went wide as a configuration detail. That makes my bus (and TC) remarkably transparent.

Code complete doesn’t mean you’re done

Joe Coder runs through his feature in the UI. It works. The doodad renders beautifully on the screen, and when he clicks the button, all the right things happen on the server. He checks his code in, writes a quick comment in Jira, changes the issue status to “Completed & Checked-in”, and goes to his next task. Lo and behold, his To Do list is empty! Joe’s done coding!

Or is he?

Configuration Management cuts a branch off the trunk code. Joe’s code goes through QA.

Some QA departments try to break developers’ code; others just test the “go path” to ensure their test scripts work. In Joe’s case, the QA department has a test script that is a step-by-step use case for how the feature should work. QA signs off on his feature. The doodad worked exactly as specified, which is to say, the select/combo box was correctly filled with test data from the database.

Joe’s code goes to production … and takes down an entire server.

How can this be? It went through QA! Testers verified that his code did what it was supposed to do!

Most software organizations use the term robust wrongly. I generally hear it used in a context that implies the software has more features. Robustness has nothing to do with features or capabilities! Software products are robust because they are healthy, strong, and don’t fall over when the table that populates a select/combo box has a million rows in it.

Joe Coder never tested his feature with a large result set. Michael Nygard — in his excellent book Release It! from the Pragmatic Bookshelf — calls this problem an unbounded result set. It’s a common problem and an easy trap to fall into.

Writing code for a small result set enables rapid feedback for a developer. This is important, because the developer’s first task is to write his program correctly. It seems, though, that this first step is oftentimes the only step in the process! Without testing the program against a large result set, the new feature is a performance problem waiting to happen. The most common consequences are memory errors (have you ever tried filling a combo/select box with a million rows from the database?) or unscalable algorithms (see Schlemiel the Painter’s Algorithm).

In our case, testing with big numbers revealed concurrency issues that we did not and could not find when developing with simple, smaller tests. Our multi-threaded, multi-node messaging system would routinely deadlock whenever we slammed it with lots and lots of messages. It didn’t do this when we posted simple messages to the endpoint during development. Similarly, we noticed that we held on to objects for too long during big batch runs. They all got garbage collected when the batch completed, so it wasn’t exactly a memory leak, but there was no need to hold on to the references. After we fixed that, we noticed that our servers would stay within a tight and predictable memory band, as opposed to GC’ing all at once at the end of a batch. Terracotta server expertly pages large heaps to disk, so we were never at risk of running out of memory. Still, it’s nice to see our JVMs running lean and mean.

We’re still stress testing our system today. This past weekend, we pumped over one million real-world messages through our message bus. Our concurrency issues are gone, memory usage is predictable, and we stopped our test only to let our QA department have at our servers. There are zero technical reasons why our test couldn’t have processed another million messages. Terracotta Server held up strong throughout.

But we’re still not done testing yet. We still have to see what happens if we pull the plug (literally) on our message bus. We still have to throw poorly written messages/jobs at our bus to see how they’re handled. We need to validate that we’ve got enough business visibility into our messaging system so that operations folks have enough info at runtime to do their jobs. We need canaries in the coal mine to inform us when things are going wrong, and for that we need better metrics and reporting built into our system that shows performance over time.

We’re code complete, but we’re not done yet.

Terracotta Server as a Message Bus

Terracotta is excellent software to glue messaging components together. This article is a high-level view of how we used TC to create our own messaging backbone.

Just a few weeks ago I made two predictions for 2008, both centered on Terracotta. Since then, I’ve gone deeper into the server and used it to write a message bus for a non-trivial integration project.

I’m impressed.

Our first implementation used a MySQL database for a single queue. JTA transactions running “select for update” statements against InnoDB worked just fine, actually, but there were other clunky things about that implementation. All roads looked like they led to queuing and routing. In a nutshell: enterprise messaging with multiple queues, not just batch jobs on a single queue.

Our second implementation (I believe strongly in prototyping, à la Fred Brooks’s “plan to throw one away”) used JMS. Early in our design process, we talked about implementing our own messaging system using TC. We managed to talk ourselves out of it because a) no one else that we knew of had done it and b) ActiveMQ is also open source, mature, and Camel looked very cool insofar as it gives you a small domain-specific language for routing rules between queues. The Camel project claims to have implemented all the patterns in EIP.

Well, we managed to deadlock ActiveMQ with multiple clients running with Spring’s JmsTemplate. Our request queue would just stop. We’d get an error saying our message couldn’t be found, and the queue would simply stall. We couldn’t restart it without bouncing ActiveMQ. New clients all blocked on the queue. ActiveMQ did not survive our load test well. When we inquired, we were told about a known problem between Spring and ActiveMQ and that we should use the latest snapshot.

DISCLAIMER: I understand the preceding paragraph is entirely FUD unless I provide tangible evidence otherwise. We’ve since moved on from that implementation and removed all the JmsTemplates from our Spring apps. I won’t be providing screenshots or sample code to deadlock their server. To be fair, we did not choose to try again with another FOSS JMS queue, like JBoss. Our configuration of ActiveMQ and our Spring JmsTemplate clients may have been wrong. Feel free to take my criticism above with the proverbial grain of salt.

Happily, my team understands good design and the value of clean interfaces. All JMS-related code was hidden by handler/listener interfaces. Our consumer logic did not know where the messages (our own domain objects) came from. Implementations of the handlers and listeners were injected by Spring. As a result, it took just 90 minutes to swap in a crude but effective queueing and routing system using Terracotta. We’ve since cleaned it up, made it robust, added functionality for business visibility, and load tested the hell out of it. It all works beautifully.
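To illustrate the seam (interface and class names here are mine, not our actual code): business logic implements a small handler interface, Spring injects the plumbing, and the Terracotta-backed listener is just a thread blocking on a clustered queue.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;

    // Business code implements this; it never sees JMS or Terracotta.
    interface MessageHandler {
        void onMessage(Object message);
    }

    // The queue-backed listener that replaced our JMS listeners.
    public class BlockingQueueListener implements Runnable {
        private final BlockingQueue<Object> queue;
        private final MessageHandler handler;
        private volatile boolean running = true;

        public BlockingQueueListener(BlockingQueue<Object> queue,
                                     MessageHandler handler) {
            this.queue = queue;
            this.handler = handler;
        }

        public void stop() {
            running = false;
        }

        public void run() {
            try {
                while (running) {
                    // Poll with a timeout so stop() takes effect promptly.
                    Object message = queue.poll(1, TimeUnit.SECONDS);
                    if (message != null) {
                        handler.onMessage(message);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }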

Here are the main ingredients you need to roll your own message bus with Terracotta:

  1. Knowledge of Java 5’s Concurrent API for queueing
     Java’s Concurrent API expertly handles nearly all of your threading issues. Bounded LinkedBlockingQueues (also ArrayBlockingQueues) will neatly throttle your entire system for you. Consumers live in their own threads (and through the magic of Terracotta they can live in their own JVMs!) and can safely remove the next item from the queue, optionally waiting for a period of time for something to become available. Producers can add messages to a BlockingQueue in a thread-safe way, also optionally waiting for space to become available.

  2. Knowledge of Java threading for consumers and producers
     You’ll need to be able to start and stop your own threads in order to create producers and consumers.

  3. Daemon Runners
     Daemon Runners (my term for them; a better one may already exist) are long-running POJO Java processes that you can cleanly shut down later. Browsing Tomcat’s source code taught me a neat trick for hooking into a running JVM. Write a main program which spawns a thread that runs your actual application. Have the main thread open a ServerSocket and await a connection. When a token such as “stop” comes through, main stops its child thread and your application can exit gracefully. Anything else over the socket can be ignored, which lets your ServerSocket go right back to listening. We implemented a “gc” command, among others, to provide simple but effective hooks into our running processes anywhere on the network. You just need the IP and port. You can optionally put IP checks into your daemon runner to validate that the IP sending the token is a trusted one. Our runners only accept tokens from 127.0.0.1; SSH lets us run scripts from across the network. (There’s a sketch of this trick just after the list.)

  4. Named classloaders
     Named classloaders are a TC trick needed to run multiple stand-alone Spring applications yet have them share the same clustered data. TC ties applications together using specific names for classloaders. The modules they’ve built already know how to cluster Tomcat Spring applications, for example, because the classloaders are named the same every time. In standalone apps, you’re not guaranteed that the system classloader even has a name, let alone the same name across JVMs. See this post on TC’s forums to make a named classloader. It wasn’t hard. There may be another way to cluster standalone Spring apps, but the named classloader did the trick for us. You will need to bootstrap your application to make this work. You should probably be doing that anyway.

  5. Spooler
     A Spooler lets your messaging system accept messages long after the rest of the queues get throttled by a bounded BlockingQueue. Your Spooler is an endpoint (maybe a web service endpoint) that will put everything it receives into an unbounded queue: your spool. A spool consumer reads from the spool and forwards to the next queue. Because the next queue is bounded, you’ve achieved throttling. You may have other components in your messaging system that require spooling. For example, we’ve got a consumer that performs callbacks and posts the results of the message to the callback URL. What happens if the callback endpoint is down? We don’t want our throttled messaging system to stop processing messages, so we spooled messages going into the callback queue.

  6. Consumer interface
     You’ll need to create a class or two around queue consumption. Our first crude implementation simply injected the queue itself into the listening thread. The listening thread blocks/waits on the blocking queue (hence the name!) until something is available. We’ve refined that a bit so that we now have listener classes that monitor the queues and pass the messages to consumer classes. The business logic is pure POJO Java logic, which is easily unit testable. This is, in essence, an event-driven system where your POJO classes accept events (messages) but don’t know or care where they came from. You want to decouple the business logic from the plumbing.

  7. Terracotta Server — messaging backbone & glue
     Last but not least, you need some queues, you need multi-JVM consumers, you need persistent data (a message store) that won’t get wiped out by a catastrophic failure, you need business visibility to track the health and status of all queues and consumers, and you need to glue it all together. Terracotta Server handles these requirements very well.
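As promised, here’s the Daemon Runner trick in code (an illustrative sketch, not our production class; MyApplication stands in for the real consumer/producer wiring, and the port is arbitrary):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.InetAddress;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class DaemonRunner {
        public static void main(String[] args) throws Exception {
            // The real application runs in a child thread.
            Thread app = new Thread(new MyApplication(), "application");
            app.start();

            // Bind to 127.0.0.1 only; admins reach us over SSH.
            ServerSocket control = new ServerSocket(9099, 1,
                    InetAddress.getByName("127.0.0.1"));
            while (true) {
                Socket socket = control.accept();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));
                String token = in.readLine();
                socket.close();
                if ("stop".equals(token)) {
                    break;                  // fall through to a graceful exit
                } else if ("gc".equals(token)) {
                    System.gc();            // a simple hook into a running process
                }
                // Anything else is ignored; go right back to listening.
            }
            app.interrupt();                // ask the application to shut down
            app.join();
        }
    }

    // Stand-in for the real consumer/producer wiring.
    class MyApplication implements Runnable {
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                // ... take from queues and do work ...
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    return;                 // interrupted: exit gracefully
                }
            }
        }
    }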

TC really came through for us. We were curious about some of its behavior in a clustered environment. We made some assumptions about its behavior based on what would be ideal for minimizing network chatter and limiting heap size. TC nailed every single one of our assumptions.

We made the following assumptions and were happy to find out that all held up under load testing:

  1. L1 clients that were write-only wouldn’t ever need the entire clustered/shared dataset faulted to their heaps. If you’re not going to read it, you don’t need it locally.
  2. Clustered ConcurrentMaps have their keys faulted to all L1 clients, but values are retrieved lazily.
  3. Reading from a BlockingQueue would fault just one object to the L1 client, instead of faulting in the entire queue, because the single object is retrieved in a TC transaction.
  4. TC and our unbounded spools wouldn’t run out of memory because TC pages object graphs to disk. Our unbounded L1 clients would work within an acceptable memory band.
  5. We can add/remove consumers to any point in our messaging system without affecting the entire system.

We’ve got our canaries in the coal mine, so we see what the entire system is doing in real time. We’re happy to see that our memory bands are predictable and that we’re entirely CPU bound. This is excellent for horizontal scalability. We can simply throw more processors at any part of our system to scale out. It doesn’t look like Terracotta server will be a bottleneck because the messages we’re processing take significantly more time to crunch than it takes to route through our queues. We have enough juice on our TC box to handle dozens more consumers across the network, which would give us significant throughput gains. We can revisit this when we have the need for hundreds of consumers. I’ll assume TC server will scale up with us, but if it can’t for any reason, it is perfectly acceptable to have more than one messaging cluster. That’s how Google scales. There are lots and lots of clusters in Google’s datacenters. Bridging between two messaging systems is a solved problem. That’s what messaging is, after all, a connection between disparate systems.

What did we gain?

Initially, we had MySQL. Then we added ActiveMQ, which was backed by MySQL. We saw how TC server would be beneficial if only to cluster POJOs that gather runtime data, so we had TC server in the mix. That’s three different servers in our system, all of which needed high availability and routine backups. All were configured in Spring, making our dependency injection a maze to follow.

When we switched to a TC message bus, we got rid of 2/3 of the infrastructure and most of the Spring configurations. We now have just one piece of infrastructure to maintain in a highly available way.

But I’m a guy that really likes simple. TC lets us make an entirely POJO system that runs beautifully in IntelliJ. A single “container” type main program can run all our components in a single JVM simply by loading all our various Spring configs. Developers can run the entire messaging system on their desktop, in their IDE, and run their code against it. They can post messages to an endpoint listening on 127.0.0.1 and debug their message code inside the messaging system itself.
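A sketch of that container main (the config file names are made up for illustration):

    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class BusContainer {
        public static void main(String[] args) {
            // Load every component's Spring config into one context and the
            // whole bus runs in a single JVM; in test environments, the same
            // configs run in separate JVMs and Terracotta clusters the state.
            new ClassPathXmlApplicationContext(new String[] {
                "endpoint-context.xml",    // web service endpoint and spooler
                "router-context.xml",      // queues and content-based routing
                "consumers-context.xml",   // message consumers and handlers
                "callbacks-context.xml"    // callback and logging consumers
            });
        }
    }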

We replace our container main with Terracotta in our integration and test environments. TC seamlessly and invisibly wires together all the components of the system, irrespective of where they live on the network. The POJO model goes wide with Terracotta server. It’s elegant, simple, and Just Works™.

Tech predictions for 2008

Fancying myself as a wise prognosticator is fun, so I’ll lay down a guess or two about technology in 2008.

OPEN TERRACOTTA

First, I am overwhelmed with ideas and potential uses for Open Terracotta as well as mystified and amazed by its ease of use.

I am a curmudgeon for most technologies. I’ve got a healthy skepticism for anything recommended by vendors. It almost always seems like a lot more than I need. I’m more than a little lazy. I want quick and easy, fast and simple. I’ve learned over the course of my career that simple isn’t easy. It takes good, smart work to make things simple.

For example, years ago, I avoided EJBs (and thus heavyweight app servers) just because they seemed like a really hard way of writing otherwise simple classes. I wanted to write simple classes and deploy to a simple container, so I rolled my own ORM and deployed all my apps on Tomcat. Today? Vindication! Rod Johnson and the folks who developed Spring deserve every bit of credit they get. They were equally frustrated with the vendor-driven solution and created a fantastic framework for POJO-based programming. Likewise, Hibernate — which Spring embraces beautifully — was a “roll your own” ORM that grew into one of the best pieces of Java’s open source community. Add Java 5’s Generics to Spring and Hibernate3 and you’ve got all the tools you need to create a reusable framework for writing ultra-small, easily configured POJO DAOs and transactional Facades.

Back to Terracotta… Open Terracotta is clustering software. It is one of the finest uses of AOP I’ve seen. It invisibly and magically clusters your Java classes via configuration. You write your programs in simple POJO style, then declare what Terracotta should cluster. Developers can run small, simple unit tests for their work and let Configuration Managers handle the clustering.
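For flavor, here’s roughly the shape of such a configuration (a sketch from memory; exact elements vary by Terracotta version, and the package names are made up). You declare which fields are clustered roots, which classes get instrumented, and how methods are locked:

    <tc:tc-config xmlns:tc="http://www.terracotta.org/config">
      <application>
        <dso>
          <roots>
            <!-- this field becomes a clustered root, shared by every JVM -->
            <root>
              <field-name>com.example.bus.MessageRegistry.messagesByGuid</field-name>
            </root>
          </roots>
          <instrumented-classes>
            <include>
              <class-expression>com.example.bus..*</class-expression>
            </include>
          </instrumented-classes>
          <locks>
            <autolock>
              <method-expression>* com.example.bus..*(..)</method-expression>
              <lock-level>write</lock-level>
            </autolock>
          </locks>
        </dso>
      </application>
    </tc:tc-config>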

What really got me was how easy it was to configure. The guys at Terracotta provide several excellent examples with the Terracotta distribution. Simplistic examples only go so far, of course, so they’ve also provided detailed error messages to guide you as you’re learning what goes into that magical config file.


Look! Useful error messages!
[Screenshot: a Terracotta error message]

Terracotta is Free Open Source Software (FOSS), which is the best kind of software to facilitate widespread usage. It’s been open for a little over a year now. I predict good things in ‘08 for this software.

GRID COMPUTING

More cores on each chip and ever cheaper computers means we’ll have yet more computing power tomorrow than we did yesterday. My laptop has a dual core 2.4Ghz CPU, 4GB of RAM, and a 90GB drive. Install your favorite Linux distro and you’ve got a monster server compared to what was available 10 years ago. This type of machine is cheap, too, compared to a server from 10 years ago. That means you can buy a whole lot more of them.

What do we do with all this computer power? How do we harness it?

With enabling software like Terracotta, clustering becomes easy. You’ve still got to design your software to take advantage of parallelism, but the act of running programs in parallel is no longer difficult. This is what Fred Brooks means when he talks about essential versus accidental complexity.

Distributing code and running massively parallel programs used to be difficult. It required complex architectures and expensive application servers. This is accidental complexity. Advances in software development — like Terracotta, GridGain, Spring, and other FOSS programs — dramatically reduce, if not eliminate, the accidental complexity of distributing your programs to a cluster of machines. The essential complexity is writing your program and designing it for parallelism from the start. What your software does will always be the hard part of writing software, which is why there really isn’t a Silver Bullet. Enabling technologies like Terracotta, however, make it easier to move bits around. We’ll see more uses of “the grid” in 2008.

SUMMARY

Grids have been coming for a long time, and a lot of work has been put into them, such that it’s constantly getting easier to write for and deploy to a grid. Watch how Open Terracotta enables architects to design a grid in ways that we haven’t even imagined yet.

I think 2008 is going to be a very good year for architects and developers.
