Archive for March, 2008

“Don’t Make Me Think” applies to your code, too

Don’t make me think. That’s how I feel about your code.

Or as Martin Fowler puts it:

“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.” -Martin Fowler, Refactoring: Improving the Design of Existing Code

You’ve reached a whole new level of mastery when you write for simplicity, elegance, and maintainability. This is done on purpose, and it’s hard to get right. Deadlines, schedules, pressure, and stress all encourage us to cut corners and adopt a “Git ’er done!” mentality. But abandonment of planning under pressure is one of software’s classic mistakes. It’s a cardinal sin.

How do you write simple and maintainable code? I’ve got a 3-step program for you:

Step 1: Admit that simple isn’t easy

Designing simple software is hard. It has to be done on purpose. You can’t accidentally find yourself with well-written code and an elegant solution; it has to be written that way deliberately.

This admission is a bedrock principle required for designing great software and products. If you can’t admit that simple is Hard Work™, you haven’t yet hit rock bottom: maintaining code that would make readers of The Daily WTF blush.

Step 2: Read “Don’t Make Me Think”

Steve Krug’s excellent book “Don’t Make Me Think” is about website usability, yet it changed how I look at my code.

Why? Because Steve applied the principles in his book to the book itself! And if they work in both of those mediums, I thought they just might work in mine (code), too.

“Don’t Make Me Think” is very easily absorbed because Steve feeds you information in a readily accessible way. He wrote it simply on purpose, and I’m certain it took many more hours to edit than it did to write. Simple is hard.

Step 3: Practice simple every day

There are innumerable decisions you make every day that affect your project for better or worse. You need to recognize these for the opportunities they are. Here are a few things you can do every day:

  • Code in plain English. Use an active voice (just like writing). What do you think this method does?

    dao.findCustomerBy(order);

    Or what about this if statement?

       if (admin.hasPermission(Permissions.VIEWFILE)) {
          // allow...
       }

    or better yet…

       if (admin.hasViewFilePermission()) {
          // allow...
       }

    The pretty method on the Admin class looks like this:

    public boolean hasViewFilePermission(){
       return hasPermission(Permissions.VIEWFILE);
    }
  • Make Stuff Obvious. Quick, what does this line of code do?

    Date dt = march(28, 1973);

    When I’m reading through unit tests, I’d much rather see the above statement to create a date than the equivalent Java:

    Calendar cal = Calendar.getInstance();
    cal.set(Calendar.MONTH, Calendar.MARCH);
    cal.set(Calendar.DATE, 28);
    cal.set(Calendar.YEAR, 1973);
    Date dt = cal.getTime();

    You can find those convenient date methods here (it’s Free software). Use Java 5’s static imports to make the short date seen above.

  • Be Merciless. Be your own worst critic when reviewing your code. Always strive to improve what you’ve written. Just as great essays and novels (and books like “Don’t Make Me Think”) require several rounds of editing, so too does your code.
  • Never nest ternary statements. ’nuff said.
  • Write comments, but be brief, and explain why your code does what it does, not how it does it. We already know how it does it; we’re looking at the code.
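A minimal sketch of the date-factory idea, for concreteness. The post’s link to the library is missing, so the class and method names below are my own invention, not the actual library’s API:

```java
import java.util.Calendar;
import java.util.Date;

// Hypothetical date-factory helper in the spirit of march(28, 1973).
// The real library the post links to is unknown; these names are mine.
public final class Dates {
    public static Date march(int day, int year) {
        Calendar cal = Calendar.getInstance();
        cal.clear(); // zero out the time-of-day fields
        cal.set(year, Calendar.MARCH, day);
        return cal.getTime();
    }
}
```

With a Java 5 static import (import static Dates.march, adjusted for whatever package the helper lives in), a unit test can then read Date dt = march(28, 1973);.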
That’s it. Three steps to better code. Putting it into practice won’t be easy, but if you want to be a master of your craft you’ll embrace the challenge and write things simply on purpose. The people who follow you and maintain your code will appreciate it.

    No one should work alone. Ever.

    No one should work alone; not in design or planning or coding or any other aspect of software development. Why?

    Nobody gets it right the first time. Nobody.

    Moreover, different people have different experience. I’d be foolish to think I could write something as good as someone who’s already done it. You cut down on rework by tapping into others’ experience. You get closer to right the first time by having others help think through the issues surrounding a design, and often by watching over your shoulder as you write code. Even then, no one gets it exactly right the first time, but you won’t be nearly as far off as you would be by yourself.

    Extreme Programming advocates pair programming on all production code. Not all code needs to be written by a pair, in my opinion; most is just fine for a single programmer, but that programmer should never be coding in isolation. Everyone on my team is involved with 100% of the project. We all know everything that is going on, and we can jump in at any spot. Any non-trivial code is discussed by the entire team so that we can understand the best path to take. Not only does this help find flaws earlier in the process, it also gives all team members a keen understanding of the entire project.

    Two separate incidents came up today that drove the point home for me.

    First, a co-worker and I were profiling a piece of slow production code. We found a bottleneck and discussed a solution. His original fix would have worked, but it wasn’t as elegant as another piece of code I was able to point to that faced a similar problem and had an elegant solution. We only came to the right solution because we were working together on the same problem.

    Later, another teammate discussed a Spring classloading issue with me (it was a tricky issue surrounding the generation of classes by Hibernate in Spring, and having more than one instance of this in the same classloader). He and another coder came up with a solution that would have worked around their problem, but I was able to talk about an experience I had nearly two years ago, when I was asked to find and fix a disastrous memory leak in our production servers. Both issues involved class generation in Spring and Hibernate, and I was able to set them on a new path. They are going to tease apart the distinct pieces of software from the monolithic whole and deploy them as separate components on our message bus. All our components are deployed in isolated classloaders, which will solve their problem.

    The older issue involved the constant instantiation and initialization of a Spring ApplicationContext, where each instance caused the generation of Spring proxy classes and Hibernate DAOs. Classes, once loaded, are never unloaded from a JVM. This was our memory leak. The issue manifested itself at login, and our QA department did not log in thousands of times a day like our users do. Code complete doesn’t mean you’re done. You’ve got to abuse your system to find bugs like this.

    I can’t begin to count the number of times I’ve asked for help with the design of a tricky piece of code only to find I was going about it entirely the wrong way. Oftentimes, people with a fresh pair of eyes will see things differently than you do. This does, of course, require egoless programming and smart teammates. If you don’t have either, well, your project has bigger problems than we can solve here.

    Doesn’t all this review and pair programming and constant communication decrease productivity? I’d argue no. In fact, as an investment of time, it’s paying rich dividends in decreased maintenance costs, robust production deployments, and higher morale during a time when other projects are struggling to pay off their design debt. We’re swimming when most others are just trying to tread water.

    I’ve done the solo thing. I’ve also led or been part of smart teams with tight communication. Pair programming is always the natural result of open communication and egoless programming as the team works together.

    I know which one I prefer. How about you?

    Running the numbers for Risk

    Months ago, I rummaged through the bargain bin at Best Buy and found Risk (the board game) for the PC. I used to have a lot of fun with this with my friends when I was younger, so I thought “what the heck!” and bought it.

    Being a board game geek, math nerd, and computer programmer, I wrote a program to calculate the odds of winning Risk™.  My program ran through all the scenarios thousands of times, spit out the data, and I made some charts in Excel.
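    To give a flavor of how such a program works, here is a minimal Monte Carlo sketch of the standard Risk dice rules (attacker rolls up to three dice, defender up to two; highest dice are paired off; defender wins ties). The class and method names are mine, not the original program’s:

```java
import java.util.Arrays;
import java.util.Random;

// Monte Carlo sketch in the spirit of the post's odds program.
// Standard Risk rules: highest attacker die vs. highest defender die,
// second-highest vs. second-highest; the defender wins ties.
public class RiskOdds {
    static final Random RNG = new Random();

    // Rolls n six-sided dice, returned sorted highest-first.
    static int[] roll(int n) {
        int[] dice = new int[n];
        for (int i = 0; i < n; i++) dice[i] = RNG.nextInt(6) + 1;
        Arrays.sort(dice); // ascending
        for (int i = 0; i < n / 2; i++) { // reverse to descending
            int tmp = dice[i]; dice[i] = dice[n - 1 - i]; dice[n - 1 - i] = tmp;
        }
        return dice;
    }

    // One round of combat; returns {attacker losses, defender losses}.
    static int[] battleRound(int attackDice, int defendDice) {
        int[] a = roll(attackDice), d = roll(defendDice);
        int[] losses = new int[2];
        for (int i = 0; i < Math.min(attackDice, defendDice); i++) {
            if (a[i] > d[i]) losses[1]++; else losses[0]++;
        }
        return losses;
    }

    // Estimates the attacker's chance of conquering the territory,
    // attacking until victory or until only one army remains.
    static double winOdds(int attackers, int defenders, int trials) {
        int wins = 0;
        for (int t = 0; t < trials; t++) {
            int a = attackers, d = defenders;
            while (a > 1 && d > 0) {
                int[] l = battleRound(Math.min(3, a - 1), Math.min(2, d));
                a -= l[0];
                d -= l[1];
            }
            if (d == 0) wins++;
        }
        return (double) wins / trials;
    }
}
```

    For example, RiskOdds.winOdds(10, 10, 100000) estimates the attacker’s chance in a ten-versus-ten battle; sweeping over army counts produces curves like the ones in the charts below.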

    Without further ado…

    Here is a chart showing your odds of winning a fight in Risk™:


    But what about after you win? How many armies can you expect to have left? This should be an important piece of your strategy.

    Here is the chart showing your remaining armies after the battle:


    You can download the source code for the program, too.  Here’s the link: Java program to calculate odds in Risk

    Creative Commons License

    This work is licensed under a
    Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

    Scalability & High Availability with Terracotta Server

    Our message bus will be deployed to production this month. We’re currently sailing through QA. Whatever bugs we’ve found have been in the business logic of the messages themselves (and assorted processing classes). Our infrastructure — the message bus backed by Terracotta — is strong.


    People are asking questions about scalability. Quite frankly, I’m not worried about it.

    Scalability is a function of architecture. If you get it right, you can scale easily with new hardware. We got it right. I can say that with confidence because we’ve load tested the hell out of it. We put 1.3 million real-world messages through our bus in a weekend. That may or may not be high throughput for you and your business, but I guarantee you it is for ours.

    The messages we put through our bus take a fair amount of processing power. That means they take more time to produce their result than they do to route through our bus. How does that affect our server load? Terracotta sat idle most of the time. The box hosting TC is the beefiest one in our cluster. Two dual-core hyperthreaded procs, which look like 8 CPUs in htop. We figured we would need the most powerful server to host the brains of the bus. Turns out we were wrong, so we put some message consumers on the TC box, widening our cluster for greater throughput. Now the box is hard at work, but only because we put four message consumers on it.

    When we slam our bus with simple messages (e.g., messages that add 1+1), we see TC hard at work. The CPUs light up and the bus is running as fast as it can. 1+1 doesn’t carry much overhead. It’s the perfect test to stress the interlocking components of our bus. You can’t get any faster than 1+1 messages. But when we switched to real-world messages, our consumers took all the time, their CPUs hit the ceiling, and our bus was largely idle. The whole bus, not just TC. We’ve got consumers that perform logging and callbacks and other sundry functions. All of these are mostly idle when our message consumers process real-world workloads.

    We’ve got our test farm on 4 physical nodes, each running between 4 and 8 Java processes (our various consumers) for a total of 24 separate JVMs. All of these JVMs are consumers of queues; half of them are consumers of our main request queue that performs all the real work. The other half are web service endpoints, batch processors, loggers, callback consumers, etc., and each is redundant on different physical nodes. Because our message processing carries greater overhead than bussing, I know we can add dozens more consumers for greater throughput without unduly taxing Terracotta. If we hit a ceiling, we can very easily create another cluster and load balance between them. That’s how Google scales. They’ve got thousands of clusters in a data center. This is perfectly acceptable for our requirements. It may or may not be suitable for yours.

    You might be thinking that dozens of nodes isn’t a massive cluster, but our database would beg to differ. Once we launch our messaging system and start processing with it, we’ll begin to adversely impact our database. Scaling out that tier (more cheaply than buying new RAC nodes) is coming next. I hope we can scale our database as cheaply and easily as our message bus. That’ll enable us to grow our bus to hundreds of processors.

    Like I said, I’m not worried about scaling our bus.


    I might not be worried about scalability, but I am worried about high availability. My company is currently migrating to two new data centers. One will be used for our production servers while the other is slated for User Acceptance Test and Disaster Recovery. That’s right, an entire data center for failover. High availability is very important for our business and any business bound by Service Level Agreements.

    Terracotta Server has an Active-Passive over Network solution for high availability. There is also a shared-disk solution, but the network option fits our needs well. Our two data centers are connected by a big fat pipe, and Terracotta Server can have any number of passive servers. That means we can have a redundant server in our production data center and another one across the wire in our DR data center. We’ve also got a SAN that replicates disks between data centers. We might go with the shared-disk solution if we find it performs better.

    Overall, though, it is more important for our business to get back online quickly than it is to perform at the nth degree of efficiency. Messaging, after all, doesn’t guarantee when your stuff gets run, just that it eventually runs. And if everything is asynchronous, then performance, too, is a secondary consideration to high availability.


    If there’s one lesson to be learned from this blog article, it’s that one size does not fit all. Not all requirements are created equal. Our message bus is the right solution for our needs. Your mileage may vary. Some factors may outweigh others. For example, having a tight and tiny message bus that any developer can run in their IDE without a server (even without TC) is a great feature. Terracotta’s no-API approach is what lets us do that. You might have very different requirements than we do and find yourself with a very different solution.

    InfoQ writes about my use of Terracotta Server as a message bus

    Check out this article on InfoQ about using Terracotta Server as a message bus!

    HOW TO: Better JavaScript Templates

    JavaScript Templates (Jst) is a pure Javascript templating engine that runs in your browser using JSP-like syntax. If that doesn’t sound familiar, check out the live working example on this site and download the code. It’s Free Open Source Software.

    Better JavaScript Templates

    Who Saved Watt?!

    My family uses compact fluorescent light bulbs. Our next cars will be hybrids. We’d love to go solar when the price per watt for the panels comes down, and I support net metering to allow people to reduce their monthly bills by putting energy back onto the grid. We recycle religiously. We’re green, or at least we’re trying to be.

    So I made the Who Saved Watt?! site to a) encourage people to switch to CFs and learn other little conservation habits and b) add their saved watts to the aggregate total. I’d love to make the total interactive somehow, like a Google Map showing all the energy saved by locale. That’s for a future version.

    Here’s my eco-friendly merit badge:


    Horses For Courses 2 (or Tools for Fools)

    My recent post “Linux is killing Solaris” is surprisingly controversial, as you can infer from the comments on Reddit and the split vote on DZone.

    I think the critics and naysayers are missing a big point. One tool (htop on Linux) is ridiculously easy to read at a glance, while the other (prstat on Solaris) requires parsing and mental math.

    You be the judge. Can you choose the right horse to run this course?

    Imagine you’ve got a dozen nodes on the network, each one hosting components of your enterprise message bus. You and your operations folks need visibility into the entire system so that you can tell at a glance whether something is going wrong or not.

    One tool is plain text, with row on row of process data, cpu utilization by process, etc. Want to know total CPU usage of the box? Add your processes together! “Well, process A is using 38%, process B is using 13%, and process C is using 4%. Let’s see, 38 + 13 + 4 is 55%. On to the next box!” Twelve console windows into all twelve nodes looking like this:


    Can you tell how much swap space your system is using by looking at this first tool? Nope!

    Alternatively, you can look at another tool that’s also plain text in a console, but has simple yet effective color-coded bars that display CPU and memory usage. It looks something like this:


    What’s important here is people. I don’t care if prstat runs twice as fast as htop. I don’t care if htop uses more memory or is less efficient. The fact is, they both run fast enough. What I want is for people to be fast when reading the console.

    Our Config Management team is going to have a single monitor with SSH shells open to all the nodes on the network. The monitor will sit in our Ops center and plenty of people will glance at it to see how our system is doing. If even one of them is doing math in their head to get CPU usage on a box, then I’ve failed the usability test. I picked the wrong horse for the course.

    So: htop runs on Linux, and there’s no mention anywhere on the internet of it being unavailable for Solaris (except here on this blog). Get me used to all the niceties of Linux, and I’m not a happy camper when I have to deploy my message bus to Solaris and lose my quick visual status bars.

    This might seem like a trivial thing to get worked up about, but my responsibility as a leading architect at my company is to make things that are simple to run, easy to test, and quick to report status, and to remain cognizant of the Ops and business folks who use our system to make money for our company. There’s still a ton of work to do after you’re code complete. Monitoring is one of those non-functional but critical requirements that has to be built in if you want a robust system.

    Solaris on x86 was too late. I could never afford Sun hardware at home or for development, so it all went Linux. And it’s not just me; it’s a lot of people. That’s why I wrote “Linux is killing Solaris.” I still think it’s an obvious point missed by no one, yet somehow it managed to stir up enough emotion in people on Reddit and DZone.

    “Throw a little hot rod red in there…”

    Oh, man, this looks good… I’ve been waiting for this since I was a kid.

    This might have nothing to do with technology (at least, not as I know how to build it in the real world), but this is a kick ass movie trailer.
