Test-Driven Sitemap Design

We spend significant time and effort in the information architecture phase of projects, during which our principal goal is to create an effective sitemap. As with other steps in the process, we pitch an initial recommendation, solicit feedback, host discussions, make some changes, rinse and repeat. And as with other parts of the process, it is critical to establish metrics that define a particular final product as successful. With sitemaps, this has proven particularly important. I recall many a meeting spent trying to explain why a link is necessary, or unnecessary, or needs a simpler name. I'm a natural rambler, and things get challenging without clear, concise reasons for what I am suggesting. Defending an idea that sounds arbitrary is impossible.
With respect to sitemaps, we have established a simple, 2-part test to define a baseline level of success. We communicate these criteria upfront, getting buy-in on the metrics before even discussing the actual recommendation. Getting buy-in on metrics is much simpler than getting approval on actual recommendations, and it provides a framework for highly relevant discussions, on-target feedback, and effective iterations.

  1. Given any piece of content in the entire website, would a user know which top-level navigation element to click to find that content?
  2. For each top-level navigation option, if a user is asked, "What do you expect to find when you click this top-level navigation link?", will she respond with an answer that is at least 90% accurate?
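
The second criterion only works if we decide how answers will be tallied before we start asking. Below is a minimal sketch of one approach, written in Python with entirely hypothetical data and function names, not any tool we actually use: a reviewer codes each user's answer as matching or missing the content that actually lives under the link, and a label passes only when its match rate clears the 90% bar.

    # Tally coded usability-test answers against the 90% bar (question 2).
    # Each entry is (nav_label, matched): `matched` records whether a
    # reviewer judged the user's expectation to match the actual content.
    from collections import defaultdict

    THRESHOLD = 0.90

    def score_expectations(responses):
        """Return {nav_label: match_rate} from (label, matched) pairs."""
        hits, totals = defaultdict(int), defaultdict(int)
        for label, matched in responses:
            totals[label] += 1
            hits[label] += int(matched)
        return {label: hits[label] / totals[label] for label in totals}

    responses = [
        ("Services", True), ("Services", True), ("Services", False),
        ("About", True), ("About", True),
    ]

    for label, rate in score_expectations(responses).items():
        verdict = "pass" if rate >= THRESHOLD else "needs work"
        print(f"{label}: {rate:.0%} ({verdict})")

Nothing about this is fancy; the point is simply that "90% accurate" becomes an actual number the whole team can watch move across iterations.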

How the test is done:

Usually, these metrics can be applied to the process without an actual, formal "test". Simply asking the project team (both ours and our client's) those two questions throughout the information architecture process, and measuring every idea against those criteria, works.
In cases where the answers to these questions are not readily apparent to the project team, formalized testing with outside volunteers is extremely useful. Formalized testing further reduces subjectivity and provides clarity for evaluating feedback and other potential changes.
We typically use the following simple exercises when formalized tests seem necessary:

  • Card Sorting
    With card sorting, users are given a stack of cards, each labeled with a title and brief description for a specific piece of content, and are asked to group the cards into stacks that seem most logical. In a closed sort, participants file the cards into the top-level categories we have already proposed, which makes it particularly useful for testing against the first question in our 2-part sitemap test; a scoring sketch follows this list. (There is plenty of information about card sorting on the internet, and more information on the Aten blog about our approach specifically.)
  • Usability Testing
    We often run usability tests in our office at critical points in the design and development process — ideally, early and often. Testing static design comps and clickable wireframes often provides valuable feedback on the second question of our 2-part test.
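
Here is the closed-sort scoring sketch promised above, again in Python with hypothetical cards, categories, and thresholds. For each card, we compute how often participants filed it under the top-level category the sitemap intends, and flag anything that falls below an agreed cutoff.

    # Score a closed card sort against the proposed sitemap (question 1).
    # `sitemap` maps each card to its intended top-level category;
    # `placements` records where each participant actually filed it.
    # All names, and the 0.75 cutoff, are illustrative only.
    sitemap = {
        "Pricing": "Services",
        "Our Team": "About",
        "Case Studies": "Work",
    }

    placements = {
        "Pricing": ["Services", "Services", "About", "Services"],
        "Our Team": ["About", "About", "About", "About"],
        "Case Studies": ["Work", "About", "Services", "Work"],
    }

    FLAG_BELOW = 0.75

    for card, intended in sitemap.items():
        picks = placements[card]
        agreement = picks.count(intended) / len(picks)
        note = "" if agreement >= FLAG_BELOW else "  <- revisit label or location"
        print(f"{card}: {agreement:.0%} filed under {intended}{note}")

A card that scores poorly usually points to one of two problems: the card's title is ambiguous, or the category it belongs to is mislabeled. Either way, the number tells us where to look.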

More about test-driven design...

All of this leads me to a more general point on the subject of test-driven design. While test-driven development has become a widely adopted approach for improving software quality, test-driven design receives far less attention. This is surprising, especially considering that subjectivity is one of the most common complaints designers have about client interaction and their craft. So many designers feel sabotaged by opinion. Clients are predisposed to a particular color, a word, or (gasp) a jingle they insist must play when users visit their websites.
Defining simple tests before designing helps dissolve subjectivity. It improves the final product and significantly improves the process in a few key ways:

  • Agreeing to criteria ahead of time sets up the conversation for useful, targeted feedback.
    Feedback should help us get closer to the mutually agreed-upon goal. Client feedback tends to be significantly more relevant when there is first an agreed-upon metric. By the same token, interpreting client feedback is easier when the goals are clear to all sides. Often this doesn't reduce the number of iterations or the amount of change, but it keeps iterations on a path of quantifiable improvement.
  • Clear, common goals strengthen collaboration.
    When the goal is simple and clearly defined, conversations between consultant and client are collegial and constructive, with both parties pushing an idea, together, toward a well-defined measurement of success.
  • Metrics provide an objective framework for evaluating ideas.
    Recommendations are no longer arbitrary; each one can be defended or discarded based on how it performs against the agreed criteria.

When tests are agreed to upfront, the rationale behind simplifying a link, even if the result seems less exciting, is clear and objective. Opinion is no longer a capable enemy of the design process.
