Joan M. Cherry's September 1998 ITAL article does a careful job of evaluating Web online catalog displays based on checklist judging. This communication raises some questions about checklist judging in general and the Cherry checklist in particular, suggesting that checklist judging is inherently flawed and that we don't know enough to establish the ideal online catalog display (if such an animal exists). A digression discusses the draft IFLA guidelines for catalog displays and suggests that they may do more harm than good in recommending particular approaches for "standard" catalog displays.
Joan M. Cherry's article in the September 1998 Information Technology and Libraries ("Bibliographic Displays in OPACs and Web Catalogs: How Well Do They Comply with Display Guidelines?") offers a real-world catalog display evaluation based on years of theory and discussion. As I read it, I found myself alternating between appreciation for what the article did well and mild distress about its underpinnings.
These comments are not intended as an attack on Ms. Cherry's article. Instead, I hope to offer some cautionary notes about checklists in general and this checklist in particular. Maybe it's time for someone to do a major new work on aspects of online catalog design; maybe some of the "old hands" should come together with new practitioners to improve our understanding of the world out there. Or maybe there isn't a single best answer, and no checklist can serve to judge a catalog.
Appreciating the Research
Cherry's article is very well done. The literature search is good, although I'm naturally disappointed that my 1992 book The Online Catalog Book: Essays and Examples wasn't used when refining the checklist. The checklist seems to have been used carefully and consistently. I have no doubt that the findings in both studies are legitimate. If the checklist is a legitimate approach to judging catalogs, the results appear to be sound.
I don't question the results as they stand. The only INNOPAC system came out best overall for bibliographic display, with a SIRSI system trivially behind and a university-developed system slightly behind that. Overall, the best systems only managed to succeed on about two-thirds of the applicable checklist items. Systems generally did worst on text handling and best on instructional information. Even though I haven't seen most of these systems in use, those results seem perfectly sensible from what I do know of the field.
I have good reasons to love the checklist and the analysis. Much of the checklist resembles points made in my own writing. Better yet, the end-user interface I designed (RLG's Eureka) comes out smelling like a rose in this evaluation. As far as I can tell, the current Eureka on the Web scores 85 percent for labels, 70 percent …