Card Sorting: Getting the right results

We've been performing simple card sorting exercises for some time now to gain insight into how people think about content. If you don't know what card sorting is, read about it here. The results can be fascinating and, more importantly, useful.

Why we like it

Card sorting has found a comfortable home in our information architecture process, somewhere between market strategy and industry conventions. In addition to helping us develop navigation language, card sorting puts our assumptions about information architecture to the test. We learn where people expect to find content, and what other content they expect to find there. Sometimes card sorting provides surprising and exciting feedback, and it's always fun to work those results into our final product. Generally, though, the results coincide with our expectations, which is valuable in itself because it gets our clients on board with our recommendations that much more quickly. We can simply say "it's in the cards".

The old method

At first, we did all of our card sorting manually. We'd solicit a few participants, get together some numbered index cards, sticky notes, and a few pens, and go for it. Our participants would scratch out their categories, which cards went where, and any notes they had on a sheet of paper, then collect their money and leave smiling. Afterward, I'd do the dirty work of compiling the results into a huge spreadsheet and counting how frequently different sets of cards were categorized together.
The spreadsheet was a mess. Participants were represented as columns (A, B, C, etc.), and cards were represented as rows (1, 2, 3, etc.). In each cell, I'd list the numbers of the other cards that the participant in that column had sorted into the same group as the card in that row. Afterward, I'd count up how often each card appeared with another card, or with a specific group of cards. It was a pain.
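
That counting step is easy to automate. Here's a minimal sketch (not our actual spreadsheet - the data structure is my own assumption) that tallies how often each pair of cards lands in the same group, given each participant's sort as a list of groups of card numbers:

```python
from itertools import combinations
from collections import Counter

def cooccurrence_counts(sorts):
    """Count how often each pair of cards is sorted into the same group.

    `sorts` is a list of participants' results; each result is a list
    of groups, and each group is a set of card numbers.
    """
    counts = Counter()
    for groups in sorts:
        for group in groups:
            # Every unordered pair within a group co-occurs once
            # for this participant.
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return counts

# Example: two participants sorting five cards.
sorts = [
    [{1, 2, 3}, {4, 5}],   # participant A
    [{1, 2}, {3, 4, 5}],   # participant B
]
print(cooccurrence_counts(sorts))
# Counter({(1, 2): 2, (4, 5): 2, (1, 3): 1, (2, 3): 1, (3, 4): 1, (3, 5): 1})
```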

The new method

Optimalsort. To be fair, Brad mentioned this to me a while back, and it's taken me some time to really look into it. To be fairer, if I'd Googled "card sorting", Google would have mentioned it to me, too. I take little consolation in it being a paid ad.
Optimalsort is an online card sorting application. It's free for up to three projects, with ten participants each. It's easy to use. No more index cards and messy spreadsheets, right? Kind of.

The problem

Optimalsort saves us tons of time and trouble actually administering the card sorting tests, and I've got no complaints there. They also provide a few result analysis options that are useful in their own right - including results standardized for use in an analysis spreadsheet designed by Joe Lamantia. These work great in a closed sort.
The problem is that the analysis options Optimalsort provides for an open card sort don't offer the metric I'm most interested in - how frequently individual cards are grouped together, or in other words, a cluster analysis. In Information Architecture for the World Wide Web, Peter Morville and Louis Rosenfeld mention just one "obvious" quantitative metric to capture during open sorts: the percentage of time that users place two cards together.

The solution

Optimalsort offers the open card sort data in raw CSV format, which is great, because it means you can figure out how to do just about anything with the results - given a little time and effort.
Or, even better, someone else can figure out how to do just about anything with the results. Aapo Puskala, a Finnish psychologist with a focus on user interfaces, is just the guy. His card sort cluster analysis tool turns raw Optimalsort CSV data into a whole pile of useful metrics, including, of course, the percentage of time that users place two cards together. Sweet.
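If you'd rather roll your own, the core metric is straightforward to compute. Here's a minimal sketch - assuming, purely as an illustration, a CSV layout with columns named participant, category, and card, one card per row (Optimalsort's actual export is likely shaped differently, so the parsing would need adjusting):

```python
import csv
from itertools import combinations
from collections import Counter, defaultdict

def pair_percentages(csv_path):
    """Percentage of participants who placed each pair of cards together.

    Assumes a hypothetical layout: columns named 'participant',
    'category', and 'card', one card per row.
    """
    # Rebuild each participant's groups from the flat rows.
    groups = defaultdict(set)
    participants = set()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            participants.add(row["participant"])
            groups[(row["participant"], row["category"])].add(row["card"])

    # Tally how often each unordered pair shares a group, then
    # convert the counts to percentages of all participants.
    counts = Counter()
    for cards in groups.values():
        for pair in combinations(sorted(cards), 2):
            counts[pair] += 1

    n = len(participants)
    return {pair: 100 * count / n for pair, count in counts.items()}
```

Sort the output by percentage and you have the beginnings of a cluster analysis.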
Thanks, Aapo.

