Archive for May, 2009

Early Childhood Education

Sesame Street is 40 years old and struggling (ratings-wise) against Dora the Explorer and SpongeBob SquarePants.  The little bit of Spanish Dora teaches my daughter doesn’t compare to the impact Sesame Street has had on the world.

Newsweek has a new retrospective article about the importance of Sesame Street.  I can corroborate the facts stated in the article:

“Before Sesame Street, kindergartens taught very little,” says [Joan Ganz Cooney, Sesame Street co-founder and TV producer], “and suddenly masses of children were coming in knowing letters and numbers.” Independent research found that children who regularly watch Sesame Street gained more than nonviewers on tests of letter and number recognition, vocabulary and early math skills.

My daughter isn’t 4 yet, but she’s reading her bedtime books to us now. She turns 4 next month, and for this past month she’s taken over all nighttime reading. I simply help with the hard words and encourage her to sound out the rest.

We’re doing math now, too.  We incorporate fun little games into daily activities that demonstrate addition and subtraction.  For example, we’ll ask her how many strawberries she’ll have left in her bowl if she eats 3 of them.  She gets it.  She understands addition and subtraction.  It’s time to start with multiplication and division.  Maybe I’ll show her how to separate her blocks into groups of 3 and ask her how many groups she has.  It doesn’t matter how I introduce the concepts, so long as it’s fun.

Maria Montessori was right in her approach to learning and her pedagogical style, but researchers today find there is almost no minimum age for early education.  Montessori originally developed her curriculum for young children aged 3-6, but there are now programs for younger children, too.

My daughter learned sign language as a baby.  The benefits are amazing.  Toddlers can communicate with us long before they can speak.  Knowing their needs are being heard gives them confidence and makes for an easier child.  My daughter once signed “cold” to me in a gas station parking lot during a road trip.  She was only old enough to say a couple of words (“dada”, “mama”, “dog”, and “duck” come to mind), but she knew dozens of signs, and this was the first time she used “cold” on her own.  I was stoked! She very clearly communicated her need to me. She wanted to be back in the car!

I read that 18-month-old toddlers can typically speak only 8-10 words but can know up to 75 signs.  We counted my daughter’s vocabulary and the math was spot on.  She knew 8 words and 65 signs, many of which were genuinely useful (others were just fun):  up/down, hot/cold, hungry, sleepy, more, milk, apple, diaper, dog, cat, and many more.

Kids are natural sponges.  They want to learn.  They just need the right environment and encouragement.

How to grow old and happy

I just read a very interesting article in The Atlantic about a seven-decade study that followed 268 Harvard undergraduates throughout their lives, asking a single question: “What makes us happy?”  (The official study is called the “Harvard Study of Adult Development”.)

You can read the full article here: http://www.theatlantic.com/doc/200906/happiness/1

The study found 7 major criteria for a happy life:

  • Employing mature adaptations *
  • Education
  • Stable marriage
  • Not smoking
  • Not abusing alcohol
  • Moderate exercise
  • Healthy weight

*Psychoanalytic metaphor of “adaptations,” or unconscious responses to pain, conflict, or uncertainty

I found this passage notable:

Of the 106 Harvard men who had five or six of these factors in their favor at age 50, half ended up at 80 as what [the author] called “happy-well” and only 7.5 percent as “sad-sick.” Meanwhile, of the men who had three or fewer of the health factors at age 50, none ended up “happy-well” at 80. Even if they had been in adequate physical shape at 50, the men who had three or fewer protective factors were three times as likely to be dead at 80 as those with four or more factors.

The purpose of the study was to determine who ages well and is happy and well adjusted.  Being unhappy may lead to drinking or drugs.  Drinking may cause a spouse to leave.  Depression can lead to more unhealthy living or unfulfilled aspirations.  On the other hand, having a good education may offer more opportunities in life to perform good works or be actively engaged.  Maintaining a healthy family life may boost self-esteem and cause people to stay healthy or productive.

It is easy to weave these factors together and understand how they interact and compound each other.

After decades of research, the author of the study concludes “that the only thing that really matters in life are your relationships to other people.”

The Truth About Code Generation

Code generation done right can be a very effective and highly useful tool in your toolbox.  Done wrong, it can be a maintenance nightmare.  This article reflects on different types of code generation, when to use each, and explains some pitfalls to avoid.

WHAT CODE GENERATION ISN’T:  A SILVER BULLET

Before we explore what code generation is and how to use it effectively, we must first understand what it isn’t:  A silver bullet.

No amount of code generation will save a doomed project.  If you’ve got inadequate staff, bad requirements (or no requirements), poor project sponsorship, or any number of the classic mistakes, code generation will not help you.  You’ve got bigger problems.

Moreover, you shouldn’t expect miracle productivity gains by using a code generator.  Fred Brooks and Steve McConnell (in The Mythical Man Month and Rapid Development, respectively) argue persuasively that actual coding and construction of software is, or should be, a minority part of the schedule.  Even if coding accounts for 50% of the schedule (which it doesn’t) and you can effectively generate half of the project’s code (which you can’t), the best you can hope to achieve is a 25% reduction in effort.

In reality, boilerplate code (the kind that is best generated) has been on a long, gradual decline thanks to advances in technology and better abstractions.  We’re left more and more to focus on the differences in our software (the essence) and less with the mundane minutiae of simple coding tasks (the accidental).

This is what Fred Brooks argues in No Silver Bullet.  There is no single tool that can produce an order-of-magnitude gain in productivity or quality, because the accidental complexity of software (the act of constructing software itself) gets continuously easier, leaving you to focus on the truly hard problem (the essence):  What should your software do, how should it do it, and how do you test it sufficiently to know that it does?

No silver bullet, indeed.

WHAT CODE GENERATION IS

A code generator is a tool that takes metadata as its input, merges the metadata with a template engine, and produces a series of source code files for its output.  The tool can be simple or elaborate, and you can generate any kind of code that you want.  You simply need to write the control program and templates for whatever you want to generate.

Code generation done well can save you some time in the long run (you have to invest effort in creating your generator) and increase quality because you know all generated code will be identical.  Any bugs you find in the code will be corrected once in the template.
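To make the definition concrete, here is a minimal sketch of such a generator in Python, using the standard library’s string.Template as the “template engine.”  The entity names and fields are hypothetical, and a real generator would write files instead of printing, but the shape is the same: metadata in, template merged, source code out.

```python
from string import Template

# Metadata: the input that drives generation.
entities = [
    {"name": "Customer", "fields": ["id", "email"]},
    {"name": "Order", "fields": ["id", "total"]},
]

# Template: the shape every generated class shares.  Fix a bug here
# and every generated class gets the fix on the next run.
class_template = Template(
    "class ${name}:\n"
    "    def __init__(self, ${args}):\n"
    "${assignments}"
)

def generate(entity):
    """Merge one entity's metadata with the template."""
    args = ", ".join(entity["fields"])
    assignments = "".join(
        "        self.%s = %s\n" % (f, f) for f in entity["fields"]
    )
    return class_template.substitute(
        name=entity["name"], args=args, assignments=assignments
    )

for e in entities:
    print(generate(e))
```

Every generated class is structurally identical, which is exactly the consistency and one-place-to-fix-bugs property described above.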

One argument against code generation is that a data-driven subroutine can produce the same result.  There is truth to this: the generator itself is a data-driven program, and runtime reflection with good abstractions can produce the same results as code generation.  I would argue, though, that such code is more complicated than the code created by the generator.  The generator might be as complex as the data-driven subroutine, but the code the generator produces should be simple by design.  It is trivially easy to attach a debugger and step through the generated code to find a bug.  I like debuggability.

Active vs. Passive

Generators come in two flavors:  Active and Passive.  Both are useful, but you must plan and design your project accordingly.

An active code generator maintains the code for the life of the project. Many active generators are invoked during the build process.  XDoclet is a good example of an active code generator.  I’ve used XDoclet to generate my webapp’s struts-config.xml file, and the generator was invoked by Ant during the build.  Another popular use of XDoclet is generating the boilerplate code and configurations for Enterprise Java Beans (EJBs).

Code generated by an active generator may or may not be checked into source control.  When invoked during a build and as part of the final artifact, generated code probably would not be in source control.  On the other hand, the output from an active code generator can be checked into source control and you could remove that step from the build process.  This isn’t to say the code is then maintained by hand!  On the contrary, the generator can be invoked frequently during a project.  The purpose of the active generator is to maintain the generated code.

A passive code generator creates code that you expect to maintain by hand afterwards.  Consider a wizard that asks you some questions before creating your basic class for you.  Likewise, many IDEs have useful generation snippets, such as generating all your getters/setters from your class’ instance variables.  Both of these examples are simple yet extremely useful.  I would be continually frustrated if I had to write all my getters/setters by hand.

Passive code generators needn’t stop at simple IDE-level functionality.  Maven archetypes, for example, can create an entire project setup for you.  They create all your directories and a starting pom.xml.  Depending on the archetype, this could be quite complex.

Similarly, you can create entire skeletal projects with functionality from a passive code generator.  One good example would be AppFuse, which creates your project structure, layout, build scripts, and can optionally create some basic functionality like user authentication.

IT’S JUST A TOOL

Always remember that code generation is a tool in your toolbox, nothing more.  More accurately, it’s a tool and die.

Every manufacturer has highly skilled workers creating dies, molds, and machine tools to create the parts they need.  Expert furniture makers don’t hand carve each and every table leg they require.  They make a jig and create exact copies of the table leg.  Each leg may be lovingly hand-checked for quality and assembled into the final table, but each leg certainly isn’t carved individually.

In the software world, there will be times when you need expert programmers writing templates so that fewer junior engineers have to crank out grunt code.  The experts make the tools and dies of our software world.

YOUR RESPONSIBILITY

If code generation is just a tool, then responsibility falls to the developer to understand when and how to use it.  It becomes the developer’s responsibility to create a design that does not require hand modification of any actively generated code. The design should be robust enough with plenty of hooks to allow for modification when needed.

One possible solution is to use active generation for base classes while using subclasses throughout the code.  The subclass could contain all the application-specific code needed, override base functionality as required, and leave the developer with a domain that could be easily regenerated while preserving all hand-written code.  Another design consideration is to model your application into a framework somewhat like Spring. Spring makes extensive use of the Template Method pattern and provides plenty of documented hooks for you to override when needed.
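The base-class-plus-subclass idea above (sometimes called the “generation gap” pattern) can be sketched in a few lines.  This is a hypothetical illustration: PersonBase stands in for actively generated code, and Person holds the hand-written logic that survives regeneration.

```python
class PersonBase:
    """GENERATED CODE -- do not edit; regenerate from metadata instead."""

    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

    def full_name(self):
        return "%s %s" % (self.first_name, self.last_name)


class Person(PersonBase):
    """Hand-written subclass: application-specific code lives here."""

    def full_name(self):
        # Override of generated behavior, preserved across regeneration.
        return "%s, %s" % (self.last_name, self.first_name)


print(Person("Ada", "Lovelace").full_name())  # Lovelace, Ada
```

Regenerating PersonBase never touches Person, so the domain can be rebuilt from metadata at any time without losing hand-written code.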

CONCLUSION

Code generation done well can increase quality and decrease costs in a project.  Time savings are compounded, too, when you find yourself implementing similar code across projects.  Each successive new project can benefit from the templates made in the last project.

Consistency across all generated code yields an easier learning curve because developers learn one standard way for basic functionality, leaving them to focus on the custom pieces of an application. Put another way, place as much functionality into the “accidental” realm as you can so that your developers can focus on the “essence.”  Generated code is easily understood and allows for better debuggability than runtime abstractions that produce the same effect.

There are very specific design considerations to be mindful of, particularly the need for a design to be robust enough to ensure hand-modification of actively generated code is not required.

Combine good active code generation with a library of common components and you will find yourself covering a large percentage of an application’s accidental complexity, leaving you more time to focus on the essence.

Code generation is a good tool for your toolbox.  An expert developer will understand when and how to use it effectively.

HOWTO: Sort a Python Dictionary/Map

I use Python all the time for quick little scripting tasks.  There’s nothing better to slice and dice a file, so I use Python for a lot of reporting tasks.  That usually involves building some kind of data structure in my script that I’m slicing and dicing from files.

In my work, I have a LOT of units of work processing in parallel on a grid.  I have GUIDs tagging each unit of work, and that GUID is the perfect key for a Map/Dictionary data structure.  There are times, though, that I want to get the values of the Map and sort by some value in the data itself.  This is important if I want to sort my results by elapsed time or some other interesting metric.

Here’s how you sort a Python Dictionary by some arbitrary value within the data structure:

import time
 
work = {}
 
#
# create some sample data...
#
for i in range(10):
    key = "unit_%s" % i
    unitOfWork = {
        "id" : key,
        "data" : {
            "name" : "Turansky",
            "dob" : "03/28",
            "favoriteNumber" : int(time.time()) + i
        }
    }
    work[key] = unitOfWork
 
print "The 'work' dictionary will print the objects randomly..."
for i in work:
    print work[i]
 
print ""
print "Sprinkle some sorting magic..."
 
# but you want to sort the objects by favoriteNumber
# get your values as a list... you want to use the list.sort() method
units = work.values()
 
# provide a lambda function that references your data structure
units.sort(key = lambda obj:obj["data"]["favoriteNumber"])
 
print ""
print "... and just like that, you have order."
for u in units:
    print u

Here is the output:

The 'work' dictionary will print the objects randomly...
{'data': {'dob': '03/28', 'favoriteNumber': 1242069926, 'name': 'Turansky'}, 'id': 'unit_5'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069925, 'name': 'Turansky'}, 'id': 'unit_4'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069928, 'name': 'Turansky'}, 'id': 'unit_7'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069927, 'name': 'Turansky'}, 'id': 'unit_6'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069922, 'name': 'Turansky'}, 'id': 'unit_1'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069921, 'name': 'Turansky'}, 'id': 'unit_0'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069924, 'name': 'Turansky'}, 'id': 'unit_3'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069923, 'name': 'Turansky'}, 'id': 'unit_2'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069930, 'name': 'Turansky'}, 'id': 'unit_9'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069929, 'name': 'Turansky'}, 'id': 'unit_8'}

Sprinkle some sorting magic...

... and just like that, you have order.
{'data': {'dob': '03/28', 'favoriteNumber': 1242069921, 'name': 'Turansky'}, 'id': 'unit_0'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069922, 'name': 'Turansky'}, 'id': 'unit_1'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069923, 'name': 'Turansky'}, 'id': 'unit_2'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069924, 'name': 'Turansky'}, 'id': 'unit_3'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069925, 'name': 'Turansky'}, 'id': 'unit_4'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069926, 'name': 'Turansky'}, 'id': 'unit_5'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069927, 'name': 'Turansky'}, 'id': 'unit_6'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069928, 'name': 'Turansky'}, 'id': 'unit_7'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069929, 'name': 'Turansky'}, 'id': 'unit_8'}
{'data': {'dob': '03/28', 'favoriteNumber': 1242069930, 'name': 'Turansky'}, 'id': 'unit_9'}

Be mindful of Collection.contains(obj)

Summary

Not all Collection.contains(obj) methods are the same!

This article is a real-world case study of the Big O differences between various implementations of Java’s Collection interface.  I found and fixed a grievous O(n^2) algorithm by switching to the right data structure.

Background

I was asked to investigate why some pages in our web application would save session data very quickly while another, problematic page would take literally tens of minutes. The application had at its core a Stateful Session Bean that held dirty objects to be persisted to the database in a single transaction. Sure, the easy pages didn’t contain very much data to persist, and we knew the problem page contained many times more data, but certainly not enough to cause 20-minute request times!

After I implemented the fix, the page elapsed time dropped from 20+ minutes to ~10 seconds. What did I do? I used the right data structure.

Data Structures and the Big O

The application used a Vector to store dirty objects. A Vector was used for two reasons: 1) the original engineers thought synchronization was important and 2) order was important for referential integrity. A Vector’s internal synchronization was unneeded because only a single user’s request thread ever accessed the application. The ordering, however, was extremely important because you couldn’t add a person’s data without first adding the person!

The problem page in the web app had to add thousands of rows of data to the database, hence there were thousands of dirty objects waiting in the cache for persistence. As the application created or dirtied objects, it checked its cache (the Vector) before adding each one. You wouldn’t want the data to be persisted twice.

How did the app check its cache? vector.contains(obj);

The problem with vector.contains(obj) and list.contains(obj) is that they are O(n), which means they scale linearly. Put another way, it gets slower the more items you put into it. The page that created thousands of objects to persist got progressively slower with each object it created.

The solution was to switch to a LinkedHashSet, which preserves insertion order for referential integrity while providing O(1) performance for set.contains(obj) because the objects are hashed.

The real problem was even worse, of course, because the app checked the cache each time before it added a new object.  This represents a good ol’ fashioned O(n^2) algorithm.

To be fair to the original developers, they wrote the application in Java 1.3 and LinkedHashSet was implemented in 1.4. Also, I don’t think they anticipated having a single page in the application generate thousands of objects.

Sample Code

Below is a simple program to highlight the performance differences between various Collection.contains(obj) methods.

Elapsed times (in ms):

Vector: 3663
List: 3690
Set: 15
LinkedSet: 12

package mgt.perf;
 
import java.util.*;
 
public class ContainsExample {
 
    private int collectionCount = 10000;
    private int testCount = 50000;
 
    public static void main(String[] args) {
        new ContainsExample().start();
    }
 
    private void start() {
 
        Collection<Integer> vector = new Vector<Integer>();
        Collection<Integer> list = new ArrayList<Integer>();
        Collection<Integer> set = new HashSet<Integer>();
        Collection<Integer> linkedSet = new LinkedHashSet<Integer>();
 
        populate(vector);
        populate(list);
        populate(set);
        populate(linkedSet);
 
        System.out.println("Elapsed times\n");
        System.out.println("    Vector:" + test(vector));
        System.out.println("      List:" + test(list));
        System.out.println("       Set:" + test(set));
        System.out.println(" LinkedSet:" + test(linkedSet));
    }
 
    private void populate(Collection<Integer> collection) {
        for (int i = 0; i < collectionCount; i++) {
            collection.add(i);
        }
    }
 
    private long test(Collection<Integer> collection) {
        Random rnd = new Random(System.currentTimeMillis());
        long started = System.currentTimeMillis();
        for (int i = 0; i < testCount; i++) {
            int lookFor = rnd.nextInt(collectionCount);
            if (!collection.contains(lookFor)) {
                throw new IllegalStateException(lookFor + " really should be in the collection");
            }
        }
        long elapsed = System.currentTimeMillis() - started;
        return elapsed;
    }
 
}

Frequently Forgotten Fundamental Facts about Software Engineering

I ran across this interesting article today:  Frequently Forgotten Fundamental Facts about Software Engineering.

I particularly like Requirements & Design bullet 2 (RD2) because we tend to gloss over “non-functional requirements” (e.g., performance, creating frameworks, etc.):

RD2. When a project moves from requirements to design, the solution process’s complexity causes an explosion of “derived requirements.” The list of requirements for the design phase is often 50 times longer than the list of original requirements.

Absent from the list is the Fred Brooks axiom: Adding people to a late project only makes it later.

Augmenting the Frequently Forgotten Fundamental Facts are Steve McConnell’s Classic Mistakes that prevent efficient software engineering. There is some overlap between the two.

I agree that many of these facts are frequently forgotten and that most organizations constantly make the classic mistakes.  How do I know?  A company I know named one of their conference rooms, tongue in cheek, “Schedule Compression.”

As The Bard wrote, “Never a truer word than said in jest.”
