measuring the quality of IA
I’ve been asked to come up with a quantitative measure of success for the information architecture of the BBC website. I won’t be solving this in one post.
Now I’m wary of this. We all know that what you measure goes up. There’s an anecdote (office myth?) about the days when site funding was tied to page impressions and some (misguided) web producers redesigned their interactive quizzes so that each question sat on its own page, resulting in higher page impressions.
At the same time, Martin’s been expressing doubts about the BBC Trust’s chosen metric for measuring the success of BBC search:
“They state that internal referrals from the search engine are down to 19% of all search referrals from 24% the previous year. Now, of course, there are lies, damned lies, statistics and then web metrics, but I’m unclear that you can argue how ‘good’ or ‘useful’ a site search is from these figures.
People tend to use site search when they are lost or disorientated, not just when they are seeking a specific piece of information. You can use exactly the same figures to argue that nearly a quarter of people used to get lost on the site and had to resort to search, and now only a fifth do – that could equally suggest an improved navigation user experience rather than a deterioration of search quality.”
The attention from the BBC Trust on site search is helpful and correct. But the success metrics need to be chosen carefully, or else we could genuinely improve the quality of the search and still get marked down as having failed (likewise we could fail to improve the quality but the chosen metric might improve, resulting in pats on the back all round but no improvements for the users). So this stuff is very important to get right.
But back to measuring the quality of the IA in general…
Our key metrics are reach, impact and appreciation, i.e. lots of people spend lots of time on the site and like it so much they tell lots of other people that it was great.
Getting the basic IA right should reduce time spent on site, because it would get people where they wanted to go faster. They would then be appreciative and tell other people, but their time on site might well go down.
But we could make it easier to get around and still increase time spent by cross-selling: clearly meeting their expressed needs while also showing them what else we have that they didn’t know about.
That bit is important because some of the research commissioned for the 2004 Graf review showed that members of the public who took part in the usability tests (and were hence shown lots of the BBC site) were angry that all this free content their licence fee had funded was there and they didn’t know about it. Cross-selling is a public service duty, not just commercial good sense.
So good IA would mean short (what does this mean?) journeys to each piece of content AND a high number of pieces of content found (and used? and liked? in a single session?).
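To make those two signals a bit more concrete, here’s a minimal sketch of how they might be computed from session logs. Everything here is invented for illustration: the session structure, the field names, and the idea of counting clicks before each piece of content is reached are my assumptions, not an actual BBC measurement scheme.

```python
# Hypothetical sketch: scoring sessions on "short journeys" and
# "pieces of content found". All data and field names are invented.
from statistics import mean

# Each session records how many clicks preceded each piece of content
# reached, and how many distinct pieces of content were found in total.
sessions = [
    {"clicks_to_content": [2, 3], "content_found": 2},
    {"clicks_to_content": [5], "content_found": 1},
    {"clicks_to_content": [1, 2, 2], "content_found": 3},
]

# Lower is better: average navigation clicks before reaching content.
avg_journey = mean(c for s in sessions for c in s["clicks_to_content"])

# Higher is better: average distinct pieces of content found per session.
avg_found = mean(s["content_found"] for s in sessions)

print(f"average clicks to reach content: {avg_journey:.2f}")
print(f"average pieces of content found per session: {avg_found:.2f}")
```

The tension in the post falls out of the numbers: cross-selling pushes `avg_found` up, but every extra thing found adds clicks, so the two averages have to be read together rather than optimised separately.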
And what if they get BBC content elsewhere, in some syndicated form? Surely we’ve got to include that too? It might differ by platform as well: the potential for cross-selling on mobile might be more limited, given the context of use and the time people want to spend looking at content on their phones.