ia play

the good life in a digital age

Archive for the ‘ucd’ Category

build your own job title

without comments

In the old days job titles were created by grabbing a bit of Latin/Greek and adding ‘er’ or ‘or’ to it. The suffix just means “one who does”.

Some of the bits of Latin/Greek are obvious, some not:

Carpenter=wagons, Cooper=vats, Plumber=lead, Lawyer=law, Miner=digging, Baker=roasting, Butcher=slaughtering goats, Doctor=teaching, Teacher=also teaching, Farmer=collecting tax/rent, Soldier=being paid, Tinker=jingles, Tailor=cuts, Dyer=dark/secrets

Vicar interestingly just means substitute or deputy.

And who slaughtered anything that wasn’t a goat? (I’m putting the etymological dictionary away now).

It seems that for a modern job title a single word is not enough. You need a combination of an object and an activity.

Possible objects in my professional sphere:

    project/programme
    product
    business
    content
    user experience
    customer experience
    usability
    interaction
    systems
    software
    applications
    development
    technical
    information
    accessibility
    search
    web
    digital
    online
    intranet
    e-commerce
    sharepoint

Possible activities:

    manager
    analyst
    architect
    designer
    producer
    engineer

Some people seem to feel hemmed in by the activities bit and choose something vaguer. This usually implies they will only produce opinions, not things, e.g.

    consultant
    expert
    specialist
    professional

In the public and non-profit sector you also get ‘officer’ as in police officer but also projects officer or knowledge officer. This usually just means one who holds an office and seems to be a way of avoiding saying ‘man’. “Head of” is similar but usually at the opposite end of the hierarchy.

All combinations of object and activity are plausible and many are common. Although so far I only know one Usability and experience design oompa-loompa.

Written by Karen

August 26th, 2009 at 8:22 pm

Posted in career,ucd,words

do IA and accessibility always agree?

without comments

Being the IA in an organisation that is fundamentally and very practically committed to accessibility is for the most part an IA dream.

Imagine it. A top-down drive towards machine readable content. An emphasis on the content rather than the style. A management team that understands that whizzy and award-winning is no b****y use if your users can’t use it (unless you count getting management their next job as a use).

But occasionally IA and accessibility, if not conflict, at least exchange a couple of slightly sniffy words.

Let’s take machine readable for a start. Which machine is doing the reading? And what language does it speak? Google and JAWS, at the very least, speak different dialects. I’ve been struggling for a while to get to the bottom of the punctuation-in-URLs issue. SEO suggests a slight preference for hyphens in URLs, while screenreaders (well, JAWS) seem to work better with CamelCase than with either hyphens or underscores (if the screenreader is set to read out the punctuation, imagine listening to all those underscores). It isn’t clear cut with either technology.
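As a toy illustration of the choice (this is just Python string-fiddling, nothing to do with any particular CMS or screenreader), here’s the same page title as the three kinds of slug:

    def slug_variants(title):
        """Return the same page title as hyphenated, underscored and CamelCase slugs."""
        words = title.lower().split()
        return {
            "hyphens": "-".join(words),       # the slight SEO preference
            "underscores": "_".join(words),   # worst to listen to with punctuation switched on
            "camelcase": "".join(w.capitalize() for w in words),  # nothing to read out
        }

    print(slug_variants("testing site search"))
    # {'hyphens': 'testing-site-search', 'underscores': 'testing_site_search', 'camelcase': 'TestingSiteSearch'}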

(as an aside, I was impressed to discover that JAWS seems to get Latin and had no trouble trotting through the Lorem Ipsum in lots of my documents)

In an effort to get a local navigation that shows the user where they are on the site, regardless of whether they are using a screenreader, we’ve ended up with a rather unfamiliar pattern of navigation on our new site. And as a general rule I don’t like novel patterns for common stuff like navigation. No-one wants to think about navigating.

But mostly my IA instincts and the needs of screenreader users are happily in tune, or at the very least don’t interfere with each other (courtesy of the magic of CSS).

Where it really gets interesting is when you consider screen magnification users. Screen magnification users are using the same interface as everyone else, just a whole lot bigger. I actually find screen mag much harder to use than a screenreader. I can mostly touch type and I tend to use the keyboard rather than the mouse so I don’t find a screenreader too much of a leap (when the site is accessible, of course!). But a significantly magnified screen is just baffling. It is the world as you knew it but nothing quite works the same. And moving around the screen just makes me feel a bit sick.

So some design constraints are: You can only see a very small amount of the screen at any one time. You don’t know where the next bit of information is, unless part of it is already on screen. And you don’t want to have to go back and forth on the page.

In many ways this helps the IA. It reinforces the need to follow accepted patterns. If the mag user is expecting the search box to be top left then don’t stick it in the middle of the left column or they’ll never find it.

Magnification creates a slight preference for linear, left-aligned layout. You have to be careful with white space, otherwise the mag user is left with nowhere to go. I’m noticing a tendency for my layouts to end up with empty space towards the right and bottom of the page.

A similar issue that isn’t really about magnification but about designing for low vision comes up when you design for significant font resizing. You can find that you are not making full use of the screen when the font is smaller.

Now there’s nothing here that can’t be sorted out with some clever information design and a CSS whizz. Except maybe the URL punctuation, but I should probably just get over that and worry about something a little more important.


Written by Karen

August 24th, 2009 at 6:52 am

content management resources

without comments

Debora emailed asking for resources about content management from an IA perspective.

I had a rummage around and created a quick list of content management books, presentations, and websites. Plus a short flurry of content strategy links, as quite a few of the interesting structured content debates seem to have moved that way (is that a symptom of all IAs being UX designers these days?)

Written by Karen

August 21st, 2009 at 6:18 am

webinar on SEOMoz tools

without comments

I often refer back to the SEOMoz ranking factors article when I think teams are getting hung up on minor SEO issues.

Will from Distilled just ran a free webinar about the SEOMoz tools, so it seemed a good opportunity to learn more about what else is available from SEOMoz.

Will says that SEO tools (some free) give you three things:

  1. Quick research (basic understanding)
  2. Deep dive research (actionable insights)
  3. Making things pretty for boss/client (ever important)

The Pro tools aren’t particularly cheap, so it was useful to have someone talk you through what the return on that investment would actually be. In places the data looks a lot like the stuff you get from your web analytics tool e.g. Google Analytics. But remember this is data on your competitors as well as your own site.

Using AutoTrader as an example, Will talked about

  1. SEOToolbox: Free tools. Will likes and uses Firefox plugins instead of some of these. Still likes and uses Domain Age tool
  2. Term Target: free, aggregates data on a given page, identifies keyphrases
  3. Term Extractor tool: free, uses for competitor and keyword research. 3 word phrases might give you something new.
  4. Geotarget. Get Listed is an alternative.
  5. Popular searches. Particularly likes the Amazon content.
  6. Trifecta. Useful aggregator. But has the comparison of your site to the rest of the web as a whole (possibly unique data).
  7. CrawlTest: pro-tool. Xenu is an alternative.
  8. JuicyLinkfinder: finds linking opportunities
  9. Keyword Difficulty: how hard a keyword is going to be to rank for, regardless of domain.
  10. Rank Tracker: Will keen to stress that individual keyword ranking isn’t the important thing. Often your boss will demand it. Makes little graphs and will export to CSV. Can combine with analytics data e.g. using Google Analytics API
  11. Firefox toolbar: Will loves this. Uses it more than any other SEO tool. Pro version better. Shows some PageRank-esque data for page and domain. Going up 1 MozRank point is equivalent to 8x stronger, so decimal points are important. MozTrust is similar but restricted to links from trusted sites. Page Analysis also part of the toolbar? Alternative is Bronco tools.
  12. Linkscape: the tool SEOMoz are heavily investing in. Web graph of which pages link to each other on the web. Will doesn’t see an alternative to this. Free version does basic stuff. Pro version produces more data and prettier data. Will recommends the Adv Link Intelligence Report. You can get data on who links with “nofollow” which Will thinks is unique data.
  13. Labs: Online Non-Linear Regression is scary. Visualizing Link Data is more mortal friendly. Link Acquisition Assistant helps you construct queries for search engines to find link opportunities. Other tools include Social Media Monitoring and Blogscape.

(As a side point, Will recommends learning Excel functions MATCH and LOOKUP. And pivot tables.)

Distilled are going to do more conference calls, including one on keyword research tactics. Could be useful. Free webinars are another useful alternative to conferences when budgets are tight but you need to keep learning.

Written by Karen

August 20th, 2009 at 5:49 pm

Posted in analytics,search

keyword tools for seo and navigation design

without comments

There are lots of tools that help you choose terms to purchase in PPC campaigns and to target for SEO.

They can also be useful in helping you design navigation, choose your site name and even your company name.

Google provides all sorts of resources, some of which seem to do very similar things.

There are analytics specifically for your own site:

And some that anyone can use:

Of the ‘public’ tools I mostly use the Adwords Keyword Tool, in spite of not using Adwords.

Try searching for ‘phones’. From the results you can see whether ‘cell phone’, ‘wireless phone’ or ‘mobile phone’ is the dominant language in your area. When there are labels that my team is arguing about, I’ll sometimes see if the Keyword Tool can add evidence to the argument.
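If you export the tool’s numbers to CSV you can make the comparison explicit (the column names below are my own assumption, not the tool’s actual export format):

    import csv
    from collections import defaultdict

    # hypothetical export with columns: term, monthly_searches
    volumes = defaultdict(int)
    with open("keyword_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            volumes[row["term"].lower()] += int(row["monthly_searches"])

    for term in ["cell phone", "wireless phone", "mobile phone"]:
        print(term, volumes[term])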

But beware, they can get addictive.

Written by Karen

August 19th, 2009 at 6:25 am

testing site search: solutions

without comments

So you’ve tested your site search. You’ve submitted some bugs. You’ve probably got lots of responses to those bugs along the lines of “oh, that’s just a config setting”, “you don’t understand – that’s a feature of how this product works” and “the search is fine, you just need to get the authors to do their metadata properly”.

Now the config statement is fine. So long as changing the configurations actually sorts the problem. Don’t sit back at this point. Either make the recommended changes yourself or insist the supplier does. Don’t close the bug until they’ve proved the point.

Changes you can usually make to the configuration

  • change the crawled pages
  • change the indexed fields
  • default query syntax
  • change stop/noise words, stemming and the thesaurus
  • ranking parameters

Be very, very careful if you are changing the ranking parameters. In fact, I’d suggest this is a mini-project in its own right. You’ll need to be able to make one change at a time and compare the new results with the old, across a large set of queries. You probably want to do this with someone who has experience with the specific search engine.
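A minimal sketch of one way to do that comparison, assuming you can capture the top results for each query as an ordered list of URLs (the overlap measure here is deliberately crude):

    def compare_rankings(before, after, top_n=10):
        """Report how much the top-N results moved for each query after a ranking change."""
        for query in before:
            old = before[query][:top_n]
            new = after.get(query, [])[:top_n]
            overlap = len(set(old) & set(new))
            moved = sum(1 for url in old if url in new and old.index(url) != new.index(url))
            print(f"{query}: {overlap}/{len(old)} results in common, {moved} changed position")

    # tiny example: one query, three results per round
    compare_rankings(
        {"braille": ["/a", "/b", "/c"]},
        {"braille": ["/b", "/a", "/d"]},
        top_n=3,
    )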

The other two scenarios/excuses are more problematic. If the search has a feature that you think makes the results bad, you’ll need to see if you can get it switched off/removed. If you can’t, you may have chosen the wrong product.

If your supplier thinks that teaching authors to do metadata properly is a simple goal then you may need a new supplier. This is hardly the attitude that made Google the search masters.

(I’m not contradicting my Best Bets post here: I think there are scenarios where properly motivated and focused editorial staff can do a better job than natural search results. But I’m not thinking of your average author, I mean your central web or search team. I mean people paid to care about search.)

You can change the guidelines/training for authors. You can probably get the current batch of authors to listen to some simple tips and pointers. They might remember. They might pass them on. But be realistic: how much control do you have over the authors? Metadata education is often a thankless and futile task. The best solutions are those that don’t require the authors to think about search, whether that is technology or intervention by search specialists.

Where the natural results just aren’t good enough and the authors can’t help there are things you can do on the search results page to help the user out.

Not really about testing but still coming soonish:  Changing the interface


Written by Karen

August 17th, 2009 at 6:16 am

Posted in search

testing site search: running the tests

without comments

So you’ve prepared for testing site search.  Now you have to run the tests.

Set aside a reasonable block of time where you won’t be interrupted. Schedule later sessions bearing in mind the crawl timescales. If you make changes, you’ll need to wait for the crawl to run before you can test again.

You need content in the system before you can test search.  The ideal scenario is to be testing search once a site or system is fully populated with real content but this often isn’t possible. Don’t wait for the system to be populated if that means you won’t be able to make any technical changes.

So allow time for content creation as part of testing. You’ll probably want a mix of real content and dummy content that has been specifically written to test an aspect of search.

You’ll need to record the results so you need a spreadsheet.

  • Set up columns something like this: the query (linked if you are running the tests from here), whether the results are ok, a description of the issues, hypotheses about causes, changes or adjustments made to validate, bugs reported, screenshots (where necessary). A rough script for generating this sort of log follows the list.
  • Create new versions of the worksheet each time you test, and label accordingly. If you make changes to the content or the configuration then test again after the crawl has run
  • Add queries to the spreadsheet as you go. No matter how good your original lists, you’ll explore other issues as you actually use the system.
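If your test search engine can be queried over HTTP, a rough script can generate the skeleton of that spreadsheet for each round (the URL pattern and columns below are placeholders, not anything a particular product provides):

    import csv
    import urllib.parse
    import urllib.request

    SEARCH_URL = "http://test.example.org/search?q={query}"  # placeholder test instance

    def log_queries(queries, outfile="search-test-round-1.csv"):
        """Run each test query and write a log with blank columns to fill in by hand."""
        with open(outfile, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["query", "link", "http status", "results ok?", "issues",
                             "hypotheses", "changes made", "bug ref"])
            for q in queries:
                link = SEARCH_URL.format(query=urllib.parse.quote(q))
                try:
                    status = urllib.request.urlopen(link).status
                except Exception as e:  # record failures rather than stopping the run
                    status = f"error: {e}"
                writer.writerow([q, link, status, "", "", "", "", ""])

    log_queries(["braille", "talking books", "eye tests"])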

I’m not merely testing. I’m attempting to analyse and resolve the issues. You could argue that I shouldn’t need to do this, I could just log all the issues with the supplier and get them to resolve them. In my experience it is more successful to do as much as possible yourself.

So what does ok mean? Inevitably it is subjective and it is also qualitative. You could compare with benchmarking metrics for the existing site but some part of the testing usually relies on the subjective judgement of the expert tester. Where time for testing is fixed, I raise the bar with different rounds of testing i.e. round one could be focusing on results that are patently unacceptable, with later rounds raising the standard of quality.

(this testing is in no way meant to replace user testing, the intention is more to test that the functionality works as promised and to get the results to the sort of quality that is worth putting in front of test participants!)

Mostly you’ll have no problem spotting bad results. Explaining the bad result is the challenge.

Possible sources of issues

Incomplete crawls. First check the search engine successfully completed a crawl. Testing is easiest if you can check yourself. Otherwise you’ll keep having to nag the suppliers/IT to tell you if the crawl went ok. Ask if there is an interface that shows how the crawl went and ask for access.

What is the default query syntax? This is a simple one to check off. If you thought the search was performing an OR search and it is actually running AND then that might well explain why you aren’t happy with the results. And vice versa.
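A toy example of the difference, with nothing engine-specific about it:

    docs = {
        "/jobs": "volunteer jobs and paid vacancies",
        "/braille": "learning braille at home",
        "/volunteer": "volunteer with us",
    }
    query_terms = ["volunteer", "braille"]

    and_hits = [url for url, text in docs.items() if all(t in text for t in query_terms)]
    or_hits = [url for url, text in docs.items() if any(t in text for t in query_terms)]

    print("AND:", and_hits)  # [] - no single page mentions both terms
    print("OR: ", or_hits)   # ['/jobs', '/braille', '/volunteer']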

Documents/pages that shouldn’t be crawled? Pages I’ve seen in the results that shouldn’t have been there include:

  • admin pages (in one case the blocked profanity list!)
  • permission controlled pages
  • quiz answers
  • form thank-you page
  • user profile information

You may need to get rid of a lot of these pages before you can see the true quality of the results.

Documents/pages that should be crawled?

  • other specified domains in addition to your main site e.g. www.rnibcollege.ac.uk as well as www.rnib.org.uk
  • all sub-domains e.g. not just www.bbc.co.uk but also jobs.bbc.co.uk and news.bbc.co.uk.
  • pages regardless of their position in the site
  • Office and other documents
  • images, video, audio (depending on how you want these assets to appear)

What is being indexed within a document/page? You can check by creating a variety of dummy content and adding your test keyword to a different field on each piece of dummy content. Choose an unusual keyword that won’t be appearing in the rest of the content (I tend to use my mother’s Polish maiden name). Fields to check:

  • titles
  • URLs
  • meta descriptions and keywords
  • main page content
  • authors and other metadata relevant to your content set
  • navigation and page furniture (you’ll see this cause trouble more when the content set is small)
  • full content of Office documents, PDFs etc?
  • metadata attached to multimedia assets
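One rough way to generate that sort of dummy content for a plain-HTML test set (the keyword and filenames below are purely illustrative):

    # one page per field, each carrying the rare test keyword in a different place,
    # so the search results show which fields actually get indexed
    KEYWORD = "zyzzogeton"  # any rare word that appears nowhere else in your content

    pages = {
        "keyword-in-title.html": f"<html><head><title>{KEYWORD}</title></head><body>filler text</body></html>",
        "keyword-in-meta.html": f'<html><head><title>filler</title><meta name="description" content="{KEYWORD}"></head><body>filler text</body></html>',
        "keyword-in-body.html": f"<html><head><title>filler</title></head><body>{KEYWORD} in the main content</body></html>",
    }

    for filename, markup in pages.items():
        with open(filename, "w") as f:
            f.write(markup)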

What filters are being applied? Check for:

  • stop words
  • stemming
  • thesaurus

Ask if there is an interface where you can view/edit these filters. If not, ask for copies of the actual files.
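As a toy illustration of what these filters do to a query before any matching happens (real engines use proper stemmers and much longer word lists; this is deliberately crude):

    STOP_WORDS = {"the", "a", "of", "for", "and"}
    SUFFIXES = ("ing", "ers", "er", "s")
    THESAURUS = {"eyesight": "vision"}  # a single made-up mapping

    def normalise(query):
        """Strip stop words, crudely stem, then map through a tiny thesaurus."""
        terms = []
        for word in query.lower().split():
            if word in STOP_WORDS:
                continue
            for suffix in SUFFIXES:
                if word.endswith(suffix) and len(word) > len(suffix) + 2:
                    word = word[:-len(suffix)]
                    break
            terms.append(THESAURUS.get(word, word))
        return terms

    print(normalise("testing the eyesight of readers"))  # ['test', 'vision', 'read']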

What is affecting the ranking? This is complicated to test with any ease as most systems use a variety of factors and there’s usually a level of mystery in the supplier communications. Consider:

  • where the keyword appears
  • how many times the keyword appears
  • the ratio of keywords/article length
  • type of document
  • links to the document, text of those links, authority/rank of the linking page
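Even a crude, made-up model of those factors can help you reason about whether the ordering you’re seeing is plausible (the weights below are invented, not how any real engine scores):

    def toy_score(doc, keyword):
        """Crude relevance score built from the factors above; all weights are invented."""
        text = doc["body"].lower()
        occurrences = text.count(keyword)
        score = 3.0 if keyword in doc["title"].lower() else 0.0   # where the keyword appears
        score += 1.0 * occurrences                                # how many times it appears
        score += 5.0 * occurrences / max(len(text.split()), 1)    # keyword/length ratio
        score += 0.5 * doc.get("inbound_links", 0)                # links to the document
        return score

    docs = [
        {"title": "Braille courses", "body": "braille courses for beginners", "inbound_links": 4},
        {"title": "News roundup", "body": "a long page about many other things " * 20 + "braille", "inbound_links": 1},
    ]
    for doc in sorted(docs, key=lambda d: toy_score(d, "braille"), reverse=True):
        print(doc["title"], round(toy_score(doc, "braille"), 2))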

If you’ve been told that your search system utilises “previous user behaviour” to adjust ranking then this can make testing a bit tricky. It also gives the suppliers a black box to hide behind if you don’t think the search is working right.

I’ve been told “don’t worry about testing search, this is a learning system”. Which sounds lovely but on day one the search results still need to be good enough to go live and you’re going to have to really work hard to get a grip on how the system is working. And who says it is learning the right lessons? In this particular scenario I doubled the amount of time I had set aside for testing.

Next:  Solutions to try

Written by Karen

August 14th, 2009 at 3:35 pm

Posted in search

testing site search: preparation

without comments

In last week’s post about Best Bets I commented that search software is “certainly not good enough without a lot of work. A lot of expensive work. If your supplier says ‘the search is really good, you don’t need to worry about it’ then you definitely need to worry about it.”

Worrying about and testing search systems has been a common theme in my working life: whether that involves benchmarking the performance of an existing system, testing a new one prior to launch, or comparing vendors when choosing a new system.

I’ve had varying levels of exposure to APR Smartlogik, Google, Inktomi/Yahoo, Fast, Verity, Autonomy, SharePoint. At this moment I’m in the middle of testing and tweaking the search for a SharePoint-powered website. The challenges are surprisingly similar to those I encountered when working with Muscat in 2001.

Having gone through such similar processes so many times, now seemed a good time to write it all down. I’ve divided my process into three stages: preparation, running the tests, and making changes.

Preparation


1. Ask the suppliers lots and lots of questions. You are after actual answers, testing their level of knowledge, and letting them know that the quality of the search matters to you. Don’t rely wholly on the suppliers’ answers. Find other users and do your own reading to validate what the supplier tells you.

Most important to find out:

  • Ranking criteria
  • What is configurable; of those configurations which have a graphical interface; and of those which have a user friendly graphical interface?

Other useful things to find out:

  • What query syntax is supported? What is the default syntax?
  • What are the stemming rules and which words are stop words? Ask for copies
  • Is there a default thesaurus? Ask for a copy
  • What will the crawl timescales be during testing?
  • How to construct queries using the URL query strings

2. Build a list of test queries. You really need hundreds. Good sources are:

  • Names of pages/articles on your current site or items in your catalogue
  • Real queries from your search logs or from a similar site if you can find someone willing to share
  • Obvious variants of these terms – thesaurus, misspellings, abbreviations
  • Known problems – ask for feedback from users
  • Include a range of specific items, broad topics and ambiguous queries

Your list could be a simple list of terms but you’ll find it easier to run many rounds of tests if you set your list up as http links that will run the query in your test search engine.
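A rough sketch of generating that page of links (the query-string pattern is a placeholder; use whatever you learned from the supplier about constructing queries in the URL):

    import urllib.parse

    SEARCH_URL = "http://test.example.org/search?q={query}"  # placeholder pattern

    queries = ["braille", "talking books", "guide dogs", "eye test", "volunteering"]

    with open("test-queries.html", "w") as f:
        f.write("<html><body><h1>Test queries</h1><ul>\n")
        for q in queries:
            link = SEARCH_URL.format(query=urllib.parse.quote(q))
            f.write(f'<li><a href="{link}">{q}</a></li>\n')
        f.write("</ul></body></html>\n")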

If you are testing multiple search engines and you have access to coding skills then you can set up the list to run automatically across the range of search engines and display your result back to you, saving lots of time. Or if you are running multiple rounds of testing on the same search system, an interface that checks to see if the results have changed since last time is invaluable.

But for most of us, we’ll be working from a list of queries and running them one by one.

Next: Running the tests

Written by Karen

August 3rd, 2009 at 6:52 am

Posted in search

dodgy recommendations

without comments

I always like examples of recommendation engines and the like that have got a bit muddled. The WalMart Apes scandal remains the classic. In this case the book is Apocalypses: Prophecies, Cults and Millennial Beliefs Throughout the Ages and the sponsored link reads “Cheap Weber BBQs”.

[screenshot: the book listing with its “Cheap Weber BBQs” sponsored link]

It would be nice to think that the suggestion that customers interested in a book on apocalypses might also like a BBQ had some sort of ‘burn in hell’ connection, but it appears to just be that the author is called “Weber”, which is a BBQ brand.

Which started me thinking about how to improve the recommendation engine with a bit of semantic insight about which fields to match upon. You could just not match on the author field but presumably some of the sponsored links are actually related to the author (I’m thinking the Gillian McKeiths and Deepak Chopras of the world). So you’d need some semantic information about the content of the sponsored link as well. Which could be a bit more challenging…
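A very small sketch of that idea, with entirely made-up data: each sponsored link declares which fields it is allowed to match against, so an incidental surname collision in the author field no longer triggers it.

    book = {"title": "Apocalypses", "subject": "prophecies cults millennial beliefs", "author": "weber"}

    sponsored_links = [
        # each link says which book fields it may legitimately match on
        {"ad": "Cheap Weber BBQs", "keywords": {"weber", "bbq"}, "match_fields": {"title", "subject"}},
        {"ad": "Deepak Chopra audiobooks", "keywords": {"chopra"}, "match_fields": {"title", "subject", "author"}},
    ]

    def relevant_ads(item, links):
        """Return ads whose keywords appear in a field they are allowed to match on."""
        hits = []
        for link in links:
            fields = " ".join(item[f].lower() for f in link["match_fields"] if f in item)
            if any(kw in fields for kw in link["keywords"]):
                hits.append(link["ad"])
        return hits

    print(relevant_ads(book, sponsored_links))  # [] - the BBQ ad no longer fires on the author's surname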

Written by Karen

July 30th, 2009 at 6:18 am

NTEN redesign: bounce rates

without comments

NTEN continue to share lots of useful information about their redesign process, including this insight into their web analytics:

“Our bounce rate is pretty darn high for folks who find our site through search: about 68%. New visitors also bounce at a high rate: about 67%. Our blog, which gives us the most traffic from search, has a bounce rate above 75%.

Friend of NTEN Avinash Kaushik says that organizations should aim for a bounce rate under 50%. We don’t expect our new visitor bounce rate will get THAT low, but there’s some work we can do to make sure people find MORE great content and stick around our site.”

via Wireframe Testing: Failing Informatively | NTEN: The Nonprofit Technology Network.

Written by Karen

July 29th, 2009 at 6:48 am

Posted in analytics,ucd