Archive for category Engineering

HOW TO: Enable debug and JMX ports in your java app

Ever have a stuck or deadlocked thread in a production application? Use JMX to inspect what’s going on inside your JVM, including thread views.  It’ll show you which threads are running, waiting, or blocked and where in the stack trace they currently are.  I’ve used this information to find blocked threads in strange places.  JMX also shows you the memory usage of your Java process, including memory consumed by classloaders in PermGen space.

The debug options will open your debug ports, naturally, and let you connect your debugger.

All you have to do is run your java process with these startup options:
DEBUG
-Xdebug -Xrunjdwp:transport=dt_socket,address=$DEBUG_PORT,server=y,suspend=n
JMX
-Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
Look in your $JAVA_HOME/bin and you’ll see a jconsole executable. That GUI will let you connect to the machine running your java process on the port specified.
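You can also do the same thread inspection programmatically from inside (or alongside) your app. Here’s a minimal sketch using the JDK’s ThreadMXBean, which exposes the same thread and deadlock data jconsole shows you:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDump {

    // Returns info for monitor-deadlocked threads, or null if there are none.
    static ThreadInfo[] findDeadlocks() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = mx.findMonitorDeadlockedThreads();
        return ids == null ? null : mx.getThreadInfo(ids);
    }

    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // Dump every live thread's name and state (RUNNABLE, WAITING, BLOCKED...).
        for (ThreadInfo t : mx.dumpAllThreads(false, false)) {
            System.out.println(t.getThreadName() + " is " + t.getThreadState());
        }
        System.out.println("Deadlocks: "
            + (findDeadlocks() == null ? "none" : "found"));
    }
}
```

Handy when you want to log a thread dump from a health-check endpoint instead of attaching jconsole.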

I hope you find these tips useful.  Both have been extremely useful to me (as well as adding optional profiling vars to a JVM!).

HOW TO: Use mini-batching to improve grid performance

We achieved a 3.5X increase in throughput by implementing “mini-batching” in our grid-enabled jobs.

We have a parent BatchService that creates child Services where each individual Service is a unit of work.  A Service implementation might perform some calculation for a single employee of a large employer group.  When the individual Services are very fast and the cost of bussing them around the network is greater than the cost of processing the Service, then adding more consumers makes the BatchService run slower!  It is slower because these fine grained units of work require more queue locks, more network traffic, and more handling calls when the child Service is returned back to the parent BatchService for accumulation.

The secret, then, is to give each consumer enough work to make the overhead of bussing negligible.  That is, give each consumer a “mini-batch” of Services to run instead of sending just one Service to a consumer.
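The partitioning itself is trivial; the win is in where the boundary falls. Here’s a minimal sketch (the Service and consumer types in our real system are not shown; this is just the chunking step):

```java
import java.util.ArrayList;
import java.util.List;

public class MiniBatcher {

    // Splits work items into mini-batches of at most batchSize each,
    // so one trip across the bus carries many units of work instead of one.
    static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<List<T>>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(new ArrayList<T>(
                items.subList(i, Math.min(i + batchSize, items.size()))));
        }
        return batches;
    }
}
```

Each consumer then takes a whole mini-batch off the queue, runs every Service in it, and returns the accumulated results in one callback.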

Here’s a graph of some of our benchmarks:

[Graph: throughput by batch size]

Some of the data surprised us.  For example, we expected 3 big batches to run fairly slowly across 11 consumers because there would be 8 consumers sitting idle, but we were not expecting 11 batches to run more slowly than 43 batches.  We thought dividing the work equally across consumers in the exact number of batches would be the lowest point on the graph.  We were wrong.  We expected the U-shape, but we thought the trough would be at a different batch size.

Our test system can only support up to 11 consumers, so we haven’t yet tested batch sizes with more than 11, but the graph implies that we’ll have a deeper trough when we add consumers and tweak the batch size.  There should be, in theory, a point where we can’t process jobs any faster without killing the database.  I’ve warned our DBAs that we’re looking to hit that point.

If you’re doing any kind of grid computing (by way of Terracotta’s Master-Worker project, GridGain, or rolling your own), check out the effects mini-batching can have on your throughput.  You might be surprised by your benchmarking metrics!

How to incur 3X costs for 1X worth of functionality

A software development lifecycle that does not include design review early in the process is doomed to poor estimates, cost overruns, and a wildly inaccurate schedule.

Why? Let me tell you what just happened to me.

I picked up a task for a project manager because I had some time free and his resources were completely booked. It was a simple feature with a two day estimate and it was already scheduled for release without having gone through design review. Since it was scheduled, it had a code cutoff date. That was last Friday.

The feature was pretty easy to implement. I needed to add a column to a database table, add support for it in our system, create some services (as in SOA) to change this field, and include the field in our web UI. That’s it. One database column with support for it across our system. Not a hard task.

I implemented the feature within the original estimate, I checked my code into our version control system, signed off on the feature, and asked our Database Engineers (DBEs) to include the new column in our test environment. As far as I know, this was the first time a DBE had a chance to review the feature. They put my change on hold while they suggested moving the field to a different table.

The DBE has a good argument for the field being on the other database table. He may be right. The original requirements may have been good but not good enough. But the problem is that this review happened after the entire implementation was done.

Changing where the column exists represents a 3X cost of the original feature. The first 1X was the original implementation. Should we choose to move the column, I have to undo the original work and then do it all over again for a different table. Even if undoing the original work isn’t a full X of cost, it is still work I have to do that was not part of the original estimate. Redoing all the work on the new table is a full X of additional cost. We’re at least 2X above the estimate.

A 30 minute design review with the appropriate people would have kept the cost to 1X and given us the right solution the first time. Instead, we’ve got a potentially sub-optimal 1X solution or a 3X correct solution. And this was a simple feature. Larger features with more complex requirements would incur significantly higher cost overruns if not properly designed up front.

Design reviews must be an early part of the process, not an afterthought. It is the only way to avoid 3X overruns.

HOW TO: Use JDBC Batching for 7-8X throughput gains

Using the batched statement capability of your JDBC driver can give you 7-8X throughput gains. Not only is batching significantly faster, it’ll save database CPU cycles and be easier on the network, too.

The graph below shows elapsed time (in milliseconds) by batch size. For each data point, 1K rows were inserted into a simple table in MySQL. The benchmarking code I used can be found here.

[Graph: JDBC batching gains, elapsed time by batch size]

Why is batching so much faster?

First, depending on how much PreparedStatement caching your driver is doing, your database may be spending a lot of time parsing and compiling statements. After the statement is parsed and compiled, bind variables are applied. In our example, the database will parse and compile the statement once as opposed to 1,000 times. This reduces the work your database performs and saves CPU.

Second, all bind variables are passed to the database in a single network call instead of 1,000 separate out-of-process, across-the-network calls. This helps reduce network traffic.

Third, depending on the internal architecture of your code, single statements may return the connection to a pool after every use. Multiply that by 1,000 and run a profiler and you’ll see yourself calling take/put methods a lot. Many pools also verify the connection on check-in and check-out. “select 1 from dual” is a common check for a pool to use. Your 1,000 uses of a connection may also be incurring the cost of 2,000 “select 1 from dual” statements!
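Here’s the shape of a batched insert (the table and column names are illustrative; the linked benchmark code is the authoritative version). The pure helper flushCount just computes how many database round trips a given batch size costs:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsert {

    // One parse/compile on the server; one network call per executeBatch().
    static void insertAll(Connection conn, int[][] rows, int batchSize)
            throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
            "insert into benchmark (a, b) values (?, ?)");
        try {
            int pending = 0;
            for (int[] row : rows) {
                ps.setInt(1, row[0]);
                ps.setInt(2, row[1]);
                ps.addBatch();
                if (++pending == batchSize) {   // flush a full batch
                    ps.executeBatch();
                    pending = 0;
                }
            }
            if (pending > 0) ps.executeBatch(); // flush the remainder
        } finally {
            ps.close();
        }
    }

    // Round trips to the database for n rows at a given batch size.
    static int flushCount(int n, int batchSize) {
        return (n + batchSize - 1) / batchSize;
    }
}
```

At a batch size of 1, inserting 1,000 rows costs 1,000 round trips; at 50, it costs 20.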

When should you use batching?

Batching is particularly useful in importing scenarios where you need to get lots of data into your application quickly, but it can be used even when executing a few similar statements. Check out the example source code provided to see if batching is right for you. Fiddle with the numbers to see the gains for batching just 10 similar statements. It may not be a full 8X, but trumpeting 25% gains to management is still a win for you and your team.

Use JDBC Batching!

JDBC batching can give you dramatic throughput gains while simultaneously being less abusive to your hardware. Overall, if you have the opportunity to use batch inserts and updates, you should seize that opportunity. Look at your application’s internal architecture to see if batching is right for you.

More proof that you can’t keep a good idea down?

In this blog article, Michael Nygard discusses a talk he attended where a technical architect discussed an SOA framework at FIDUCIA IT AG, a company in the financial services industry. Nygard describes an architecture that echoes many of the features I implicitly spoke of in my first blog article about my big integration project / message bus.

You may be asking yourself right now, why does he keep talking about this particular project? Briefly: it’s been a very fun project, it’s ongoing, it consumes most of my daily brain cycles, we’re still growing it (it’s a brand new infrastructure for us), and it encompasses a whole lot of ideas that I thought were good and that are now being validated by other projects I read about online.

So, what other unsung features did we build in that I’ll now sing about?

Asynchronous Messaging

You’ll notice the Spooler component in the original broad sketch of our architecture. The high-level description I gave the Spooler touched on callbacks. Asynchronous messaging was left unsaid, but it is implied by having a mechanism for callbacks.

The description also labeled my Spooler an endpoint, which may be a web service endpoint. We use web services only because the Enterprise Service Bus (ESB) orchestrating work on our bus is .NET-based while our project is all Java. That said, we post Plain Ol’ XML (POX) over HTTP, which is deserialized quickly to a Java POJO. Our entire messaging system works on POJOs, not XML.

The outside world may use SOAP (or XML-RPC or flat files or whatever) when communicating with my company, but internally our ESB talks POX with the bus. Mediation and transformation (from SOAP –> POX) is part of the functionality of an ESB. Consumers, internally to our bus, would directly access queues instead of using web services.
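As a sketch of that POX-to-POJO step (our actual binding code isn’t shown here, and the element names are invented for illustration), the JDK’s built-in XML parser is all you need:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class PoxReader {

    // The POJO the messaging system works with internally.
    static class OrderMessage {
        String guid;
        String customerId;
    }

    // Deserialize plain XML posted over HTTP into a POJO.
    static OrderMessage fromXml(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        OrderMessage msg = new OrderMessage();
        msg.guid = doc.getElementsByTagName("guid")
            .item(0).getTextContent();
        msg.customerId = doc.getElementsByTagName("customerId")
            .item(0).getTextContent();
        return msg;
    }
}
```

Once the XML is turned into a POJO at the edge, everything downstream (queues, consumers, callbacks) stays pure Java.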

Pure POJOs, but distributed

It’s extremely productive and useful to work with a pure POJO model, and it’s even more productive and useful when the state of those POJOs is automagically kept in sync across the cluster regardless of what node is working on it. This is where Terracotta Server shines.

We pass POJOs around through all the queues. Consumers — which can exist anywhere on the network — process the Service/Job/Message (all interchangeable terms, as far as I am concerned — they are all units of work). Our messages are stateful, meaning they enter our bus empty except for parameters in instance variables, get routed around to various and sundry consumers across the network, and get posted back (the callback) full of data to the ESB.

Why do we need distributed POJOs? Well, we found it to be highly useful. For example, we offer a REST API to abort a pending message (such as http://ourendpoint/message/abort/abcdefg-the-guid-wxyz). The easiest way we found to tell the entire bus to disregard this message was to flip the bit on the message itself. The endpoint is running under Terracotta Server, all of the queues live in TC, and our consumers are likewise plugged in. If you stick all your messages in a Map (or series of maps if you’re worried about hashing, locking, and high volumes) where the GUID is the key and the value is the message, then the endpoint or any consumer can quickly obtain the reference to the message itself and alter its state. We can also write programs that hook into TC temporarily to inspect or modify the state of the system. Persistent memory is cool like that. It exists outside the runtime duration of the ephemeral program.
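The abort trick can be sketched like this (Message is a stand-in for our actual message class; under Terracotta the map is the clustered one, but the logic is identical in a single JVM):

```java
import java.util.concurrent.ConcurrentHashMap;

public class MessageRegistry {

    static class Message {
        final String guid;
        volatile boolean aborted;        // the bit the REST endpoint flips
        Message(String guid) { this.guid = guid; }
    }

    // GUID -> message; under Terracotta this map is shared across the cluster.
    private final ConcurrentHashMap<String, Message> byGuid =
        new ConcurrentHashMap<String, Message>();

    void register(Message m) { byGuid.put(m.guid, m); }

    // Called by the /message/abort/{guid} endpoint; every node holding a
    // reference to this message sees the flag because it's the same object.
    boolean abort(String guid) {
        Message m = byGuid.get(guid);
        if (m == null) return false;
        m.aborted = true;
        return true;
    }

    boolean isAborted(String guid) {
        Message m = byGuid.get(guid);
        return m != null && m.aborted;
    }
}
```

Consumers just check the flag between units of work and skip anything already aborted.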

The endpoint likewise has REST APIs for returning the state of the bus, queues sizes, current activity, and other metrics. All of this data is collected from the POJOs themselves, because the endpoint has access to the very object instances that are running all over the network. It just so happens this architecture works wonderfully inside a single JVM, too, without TC, for easier development and debugging.

Load balancing and routers

Straight from Michael Nygard’s article:

Third, they’ve build a multi-layered middle tier. Incoming requests first hit a pair of “Central Process Servers” which inspect the request. Requests are dispatched to individual “portals” based on their customer ID.

In other words, they have endpoints behind load balancers (we use Pound) and “dispatched” is another word for “routed.” We have content-based routers (a common and useful Enterprise Integration Pattern for messaging systems) that route messages/services/jobs of specific types to certain queues. Our consumers are not homogeneous. We’ve configured different applications (the integration aspects of our project) to listen on different queues. This saved us from having to port applications off the servers where they were previously deployed. These apps are several years old. Porting would have taken time and money. Allowing messages to flow to them where they already exist was a big win for us.
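In its simplest form, a content-based router is just a type-to-queue lookup (the message types and queues below are invented for illustration, not our production configuration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ContentBasedRouter {

    private final Map<String, BlockingQueue<Object>> queuesByType =
        new HashMap<String, BlockingQueue<Object>>();

    // Messages whose type no consumer is bound to land here for inspection.
    private final BlockingQueue<Object> deadLetter =
        new LinkedBlockingQueue<Object>();

    // Each legacy app listens on its own queue for its own message types.
    void bind(String messageType, BlockingQueue<Object> queue) {
        queuesByType.put(messageType, queue);
    }

    // Route by message content (here, its declared type).
    void route(String messageType, Object message)
            throws InterruptedException {
        BlockingQueue<Object> q = queuesByType.get(messageType);
        (q != null ? q : deadLetter).put(message);
    }

    int deadLetterCount() { return deadLetter.size(); }
}
```

Because routing is data, not code, adding a new consumer app is just another bind call, and the old apps never have to move.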

More to come

I’ve got the outline for my white paper complete, where I bulleted the features above as well as those in my previous blog article. There are other features I haven’t covered yet. Overall, I think it will be an interesting paper to read.

Still, I’m a little jealous that FIDUCIA IT AG has scaled out to 1,000 nodes in their system. I can’t say how many nodes we’re up to, but I can say I’m looking forward to the massive scalability that our new architecture will give us.

When you absolutely, positively have to write software that does not fail

I’ve been fascinated about the software they run on the space shuttle ever since I read this article years ago:  They Write the Right Stuff

Today, I ran across this article about Self-Modifying Code written by someone who used to work at Lockheed on the shuttle. He describes using it for fault tolerance down near the hardware.

I imagine the computers running the Federal Reserve have similarly robust features baked in.  Interesting stuff.

Very Old School — Walking down memory lane

I vividly remember when my neighbor two doors down got an Atari 2600 in 1978. I was probably 4 1/2, maybe 5, but I remember the first time I saw Space Invaders. It was the only game they had. My brothers and I pooled our allowances for a while and bought our own. Combat quickly became my favorite game. You could ricochet bullets from the tanks off the walls! My brothers didn’t stand a chance.

Fast forward to 2008 and I decided to go find an Atari emulator. A few minutes later, I was playing Space Invaders again. Combat in the emulator wasn’t fun because I had no one to play with.

[Screenshot: Space Invaders running in the Atari emulator]


I really cut my computer teeth on the C64. I remember how I’d walk into Electronics Boutique in the mall and see a wall full of C64 games. A few years later, there was a small “IBM PC” games section. The C64 was great. I spent a lot of time playing games on my C64, but I’d also try to write programs. I remember I was able to make a few basic sprites move across the screen, but I had no idea what sprites really were. I remember trying to type in programs I’d find in a book or computer magazine. They never worked. All that mattered to an 8 or 10 year old was playing games.

There was one game in particular that I really fell in love with on the C64: Ultima IV, Quest of the Avatar. It exposed me to my first game with any depth and to a persistent world that I’d live in while playing the game. I was mesmerized. I spent countless hours exploring every nook and cranny in Britannia, figuring things out (I copied the game and didn’t have the manual), immersing myself in the story, and I loved every minute of it. I was probably 11 or 12 years old.

A close friend of mine always had a PC. I remember playing games with him on his 286 and 386. Police Quest and Chopper Commando were our favorites. He had to start the games from the command line.

All the time we had a computer, we used it for word processing. I’d type book reports or other papers on the computer and print them on my dot matrix printer. During my early college days, I worked for a small financial planning firm where I maintained their software (by keeping versions up to date with frequent update disks from insurance carriers and other financial services firms) and created spreadsheets for the agents in Lotus 1-2-3. I’d never even heard of Excel, but I’d gotten my first exposure to early Windows. I think it was Windows 3.1.

I left school to enlist in the Navy where I qualified to train as a Navy Intelligence Specialist. After training, I deployed to the USS Independence in Japan, where I pulled intelligence reports from a computer in the SCIF for the ship’s intel officer. Little did I know it, but it was the internet. Well, it was the government’s classified version of the internet, but looking back now, I can clearly remember clicking on links, printing the pages, and preparing a report for the department Commander.

After the Navy, I went back to finish school. A few of my friends had just gotten email and they told me to look into it. I remember using Pine in the school library to read my email. I didn’t have much email then.

I don’t remember how I learned of it, but my life changed when I learned that Ultima Online existed. Here it was, the game I loved as a kid, the world I spent years exploring (literally, through Ultima 4, 5, & 6), in a new online game!

I bought my first computer explicitly for the purpose of playing Ultima Online. It had a 300MHz processor, 64MB of RAM, and I forget the size of the drive. It ran Windows 98. And that was the biggest timesink I had heretofore discovered in my life.

Soon, I ran across Ultima Offline eXperiment (UOX), which was (and still is) an open source version of the UO server. It was created by a group of hackers to run with the UO client. It allowed someone to have their own private game server with a world devoid of people except for those you invite. I remember I organized a tournament with 8 people, with me both playing and hosting the server. I didn’t know anything about performance then, but I can laugh at myself in retrospect for thinking I could host 8 active players in a networked game on a 300MHz machine. It crashed all the time, but it didn’t matter. I was completely amazed that people could do this. I browsed the source code. I knew it was something called “C++”, but I had no idea what I was looking at, yet I thought it looked beautiful. It may have been a kludgefest of cruft for all I know, but I fell in love with code, with how it looked, and with what it could do.

So I decided to learn what this code stuff was all about. I bought Sam’s Teach Yourself Java in 21 Days. I installed Java 1.1 and learned Hello, World. Soon thereafter I was writing a program to feed a Jabbywocky. I still don’t know what a Jabbywocky is. I didn’t finish the book. Some of the concepts were over my head, and I could tell that it was all trite and contrived. I wasn’t going to be able to run a UO Server emulator after reading that book.

Still, something stuck and I kept learning new things. I learned HTML, JavaScript, and then ASP (using JavaScript). My first job out of college required me to make reports, so I learned SQL to pull data. Then I applied my new ASP skills to automate the reports. I’m lazy and grew bored with report-making. A few years later, I learned Java for real.

Here we are, a decade later, and I’m busy integrating legacy applications into our shiny new message bus. It’s highly concurrent, runs all our integrated applications in a single JVM but in isolated classloaders, and my company is porting all our automation and data processing to my message bus for integration. It’s got massive horizontal scalability capabilities. Our Linux servers have multiple processors with multiple cores that are 100x faster than my first PC and have 500x as much memory.

This current project of mine is a long way from Space Invaders. I guess 30 years will do that for you. It’s been fun thinking about how I’ve been involved with computers and software in some way (even as a consumer) for my entire life. I’m looking forward to another 30 in high technology and I’m excited to play a part. I might even learn what a Jabberwocky is.

Why Linux will never be the world’s primary desktop

Every year for the past N years has been proclaimed as “The Year of Linux on the Desktop!” It hasn’t happened. It will never happen.

Why?

GNOME vs. KDE? Which distro?

I understand that Linux is the kernel and that GNOME/KDE is the desktop. I am well aware of this distinction. Joe Average User is not. Joe Average User runs Windows because that’s what came installed with his machine from Best Buy. Jane Schmancy User might be using a Mac, but OS X came pre-installed when she bought her machine. In both scenarios, the computers Just Work™ when they brought them home and booted them up. It’s a packaged experience where the value-add of the OEM vendor is the preconfigured everything-works-out-of-the-box.

Enter Linux.

First, you have to download a distribution. Which one? With this single step, you’ve lost 95% of the people.

Second, you have to install the OS. It’s a well-known fact that 98.87823423% of the people don’t know what an operating system is nor do they care. They want to vote for their favorite American Idol, not worry about what it means to walk through Anaconda’s install process.

The Free Open Source Software community (of which I am a fervent supporter) believes that choice is a good thing. They are wrong. Less is more, particularly when it comes to making choices. This is the paradox of choice.

The group of people in the world who like more choice when it comes to operating systems is vanishingly small.

I’ve got CentOS on a desktop at home. I’ve installed Ubuntu on a work machine. Damn Small Linux is our OS of choice for our message bus. I’m in the minority of users. It takes one to know one.

The real reason people won’t switch desktops

It’s different.

That’s it. In a nutshell, “it’s different” will keep the vast majority of users from switching desktops. Joe and Jane Average User barely know Windows; I don’t expect them to voluntarily become newbies on another system. No one likes being a newbie, especially when they’ve achieved some level of mastery of something.

One of my teammates (we’ll call him “Dan”) just got a MacBook Pro to replace his aging Windows laptop. Dan is among the technical elite. He chose Damn Small Linux for our server OS. One week later, he’s lamenting the fact that he’s not as productive on his new machine because he has to learn all new ways of doing things. He briefly considered remapping all the Mac hot keys to match the Windows hot keys he was used to.

When a tech master is considering remapping hot keys, Joe Average User is lost!

The average user doesn’t use hot keys, doesn’t know what they are, and certainly doesn’t know how to remap them. If they even manage to install a new OS, they’ll be lost when looking to run their programs; they won’t get the dumb joke in KDE where every app has to start with a K (Kommander? Konqueror? Kalculator? Please).

The rise of Mac OS X?

If there will be another desktop to challenge Windows — and that’s a pretty big IF — it will be Apple’s wares. They’ve got the iPod and the iPhone leading the way. They’ve got a much cooler brand than Microsoft. They are trickling into the enterprise market (our CEO uses a Mac, for example, as does our creative staff, media department, and several developers).

Still, “Think Different” becomes “it’s different” for the average user. The person switching from Windows to Mac will be on the right side of the bell curve. The billion PCs out there in the world (and growing) will be running Windows for a long time.

I’m writing this from a Windows laptop. Of the 12 people I can see in my immediate field of vision, only Dan has a Mac. One runs Ubuntu in a VM on his Windows laptop. The rest are running straight Windows.

This article isn’t meant to be a comparison of desktops, features, security, reliability, or anything else. I’m just calling it like I see it in terms of usage. The word “never” in the title makes my position an absolute. Perhaps I should modify it to say “Why Linux won’t be the world’s primary desktop for a looooooooooong time, if ever.”

I’m sure some will disagree.

“Don’t Make Me Think” applies to your code, too

Don’t make me think. That’s how I feel about your code.

Or as Martin Fowler puts it:

“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.” -Martin Fowler, Refactoring: Improving the Design of Existing Code

You’ve reached a whole new level of mastery when you write for simplicity, elegance, and maintainability. This is done on purpose, and it’s hard to get right. Deadlines, schedules, pressure, and stress all encourage us to cut corners and adopt a “Git ‘er done!” mentality. But abandonment of planning under pressure is one of software’s classic mistakes. It’s a cardinal sin.

How do you write simple and maintainable code? I’ve got a 3-step program for you:

Step 1: Admit that simple isn’t easy

Designing simple software is hard. It has to be done on purpose. You can’t accidentally find yourself with well-written code and an elegant solution; it has to be written that way on purpose.

This admission is a bedrock principle required for designing great software and products. If you can’t admit that simple is Hard Work™, you haven’t hit rock bottom yet by having to maintain code that would make readers of The Daily WTF blush.

Step 2: Read “Don’t Make Me Think”

Steve Krug’s excellent book “Don’t Make Me Think” is about website usability, yet it changed how I look at my code.

Why? Because Steve applied the same principles in his book to his book! And if it works in those two mediums, I thought it just might work for me, too, in my medium (code).

“Don’t Make Me Think” is very easily absorbed because he’s feeding you information in a readily accessible way. He wrote it simply on purpose, and I’m certain it took many more hours to edit than it did to write. Simple is hard.

Step 3: Practice simple every day

There are innumerable decisions you make every day that affect your project for better or worse. You need to recognize these as the opportunities they are. Here are a few things you can do every day:

  • Code in plain English. Use an active voice (just like writing). What do you think this method does?

    dao.findCustomerBy(order);

    Or what about this if statement?

    if(admin.hasPermission(Permissions.VIEWFILE)){
       // allow...
    }

    or better yet…

    if(admin.hasViewFilePermission()){
       // allow...
    }

    The pretty method on the Admin class looks like this:

    public boolean hasViewFilePermission(){
       return hasPermission(Permissions.VIEWFILE);
    }

  • Make Stuff Obvious. Quick, what does this line of code do?

    Date dt = march(28, 1973);

    When I’m reading through unit tests, I’d much rather see the above statement to create a date than the equivalent Java:

    Calendar cal = Calendar.getInstance();
    cal.set(Calendar.MONTH, Calendar.MARCH);
    cal.set(Calendar.DATE, 28);
    cal.set(Calendar.YEAR, 1973);
    Date dt = cal.getTime();

    You can find those convenient date methods here: dates.java (it’s Free software). Use Java 5’s static imports to make the short date seen above.

  • Be Merciless. Be your own worst critic when reviewing your code. Always strive to improve what you’ve written. Just as great essays and novels (and books like “Don’t Make Me Think”) require several rounds of editing, so too does your code.
  • Never nest ternary statements. ’nuff said.
  • Write comments, but be brief and explain why your code does what it does, not how it does it. We already know how it does it; we’re looking at the code.
That’s it. Three steps to better code. Putting it into practice won’t be easy, but if you want to be a master of your craft you’ll embrace the challenge and write things simply on purpose. The people who follow you and maintain your code will appreciate it.
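The march(28, 1973) helper mentioned above can be built with a handful of static factory methods. Here’s a sketch in the spirit of the linked dates.java, not its actual source:

```java
import java.util.Calendar;
import java.util.Date;

public class Dates {

    // One factory per month reads like plain English with static imports:
    //   import static Dates.march;  ...  Date dt = march(28, 1973);
    public static Date march(int day, int year) {
        return monthDate(Calendar.MARCH, day, year);
    }

    private static Date monthDate(int month, int day, int year) {
        Calendar cal = Calendar.getInstance();
        cal.clear();                 // zero out the time-of-day fields
        cal.set(year, month, day);
        return cal.getTime();
    }
}
```

Eleven more one-line methods (january through december) complete the set, and every test that builds a date gets shorter and clearer.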

No one should work alone. Ever.

No one should work alone; not in design or planning or coding or any other aspect of software development. Why?

Nobody gets it right the first time. Nobody.

Moreover, different people have different experience. I’d be foolish to think I could write something as good as someone who’s already done it. You cut down on rework by tapping into others’ experience. You get it closer to right the first time by having others help think through the issues surrounding a design, and often by watching over your shoulder as you write code. Even then, no one gets it exactly right the first time, but you won’t be nearly as far off as you would be by yourself.

Extreme Programming advocates pair programming on all production code. Not all code needs to be written by a pair, in my opinion; most is just fine for a single programmer, but that programmer should never be coding in isolation. Everyone on my team is involved with 100% of the project. We all know everything that is going on and we can jump in at any spot. Any non-trivial code is discussed by the entire team so that we can understand the best path to take. Not only does this help find flaws earlier in the process, it also gives all team members a keen understanding of the entire project.

Two separate incidents came up today that drove the point home for me.

First, a co-worker and I were profiling a piece of slow production code. We found a bottleneck and discussed a solution. His original fix would have worked, but it wasn’t as elegant as another piece of code I was able to point to that faced a similar problem and had an elegant solution. We only came to the right solution because we were working together on the same problem.

Later, another teammate discussed a Spring classloading issue with me (it was a tricky issue surrounding the generation of classes by Hibernate in Spring and having more than one instance of this in the same classloader). He and another coder came up with a solution that would have worked around their problem, but I was able to talk about an experience I had nearly two years ago when I was asked to find and fix a disastrous memory leak in our production servers. Both issues involved class generation in Spring and Hibernate, and I was able to set them on a new path. They are going to tease apart the distinct pieces of software from the monolithic whole and deploy them as separate components on our message bus. All our components are deployed in isolated classloaders, which will solve their problem.

The older issue involved the constant instantiation/initialization of a Spring ApplicationContext, where each instance caused the generation of Spring proxy classes and Hibernate DAOs. Classes, once loaded, are never unloaded from a JVM. This was our memory leak. The issue manifested itself at login, and our QA department did not log in thousands of times a day like our users do. Code complete doesn’t mean you’re done. You’ve got to abuse your system to find bugs like this.

I can’t begin to count the number of times I’ve asked for help with the design of a tricky piece of code only to find I was going about it entirely the wrong way. Oftentimes, people with a fresh pair of eyes will see things differently than you do. This does, of course, require egoless programming and smart teammates. If you don’t have either, well, your project has bigger problems than we can solve here.

Doesn’t all this review and pair programming and constant communication decrease productivity? I’d argue no. In fact, as an investment of time, it’s paying rich dividends in decreased maintenance costs, robust production deployments, and higher morale during a time when other projects are struggling to pay off their design debt. We’re swimming when most others are just trying to tread water.

I’ve done the solo thing. I’ve also led or been part of smart teams with tight communication. Pair programming is always the natural result of open communication and egoless programming as the team works together.

I know which one I prefer. How about you?
