As noted by Jansen (2005), research and development in the area of online Help has largely proceeded without attention to either the evaluation of automated Help assistants or to the precursors of help-seeking behaviors within the context of IR.
In other words, the development of Help functionalities found in virtually all IR systems, including digital libraries, is proceeding without parallel attention to the
users for whom these systems are designed. Such a discrepancy is especially important in the area of digital libraries, which are proliferating at a rapid pace and are used by novice searchers who most often need assistance to achieve their information goals. At the same time, the design of Help mechanisms should also consider how to assist expert users with their unique Help needs.
For the most part, Help mechanisms have been construed as assistants in the query formulation process rather than as ongoing partners during the information-seeking episode. Furthermore, research within IR has shown that although people frequently report that they believe Help mechanisms to be important components of the overall IR system, they use these Help functions infrequently, even though the functions might potentially improve their search results (Cool & Xie, 2004).
Current research focuses on the evaluation of users’ online Help use in digital libraries.
Monopoli, Nicholas, Georgiou, and Korfiati (2002) evaluated users’ use of a digital library, including its online Help. Even though only 34.6% of the 246 respondents used online Help, the majority of those who did (61.2%) indicated that it was a useful service and easy to use. This contradicts Slack’s (1991) research examining the effectiveness and use of online Help features in five different OPACs. Using “enhanced” transaction logs, mailed surveys, and focus groups, she found that even though the Help feature was used by one-third of the novice users, it did not assist them in their help-seeking situations. Interestingly, 20%
of the respondents in Monopoli, Nicholas, Georgiou, and Korfiati’s (2002) study preferred human support, agreeing with the statement “it is a helpful service, but I prefer asking a person to help me.” While half the respondents did not feel the need to use Help, 5.1% did not know that online Help was available.
Surprisingly, 22.5% of respondents did not understand what online Help was. This echoes Connell’s (1995) finding that inexperienced users do not use Help because they do not understand how Help can be helpful to them.
The evaluation studies not only documented the current use of online Help but also provided information about user requirements for designing online Help in digital libraries. Hill et al. (2000) tested user interfaces of the Alexandria Digital Library through a series of studies; based on user evaluations, they collected feedback about users’ interactions with the interfaces, the problems of the interfaces, the requirements of system functionality, and the collection of the digital library. From these studies, they found that users require the following Help functions: 1) creation of search examples to assist user query formulation, 2) context-sensitive Help, and 3) tutorials and FAQs. This is similar to Othman’s (2004) findings, derived from users’ evaluation of the retrieval features of 12 online databases, in which search term Help, search examples, and context-sensitive Help were expected. In addition, users desired the following Help features: relevance feedback, a list of similar terms or synonyms, and assignment of weight values for search terms. Based on the findings of the above studies, it seems that while users require a variety of Help mechanisms in
their use of digital libraries, they need more guidance in search refinement and search terms in their use of online databases. More studies are needed to investigate users’ use of the Help mechanisms of digital libraries, specifically the problematic situations that lead users to look for help and the types of help desired in digital library environments.
Research has also been conducted to compare Help mechanisms in digital libraries to those in other IR systems. Xie and Cool (2006) compared 50 subjects’ usage and evaluation of the Help functionalities of the American Memory (AM) Digital Library, hosted by the US Library of Congress, and the image retrieval system at the Hermitage Museum (HM) Web site. Four ways of learning Help mechanisms emerged from the data: (1) using trial and error, (2) using past experience, (3) looking for the Help icon, and (4) using related Help functions. The major problems users encountered when using Help were (1) not knowing where to start, (2) needing direction, (3) Help that was too general, and (4) difficulty understanding the content of the provided Help. The first two problems relate to the design of online Help; the last two are associated with its content. The results of the study suggest that people prefer specific help, visual help, and help with demonstration, as presented in HM, over general help, text help, and help with description, as shown in AM. Users need help throughout the information retrieval process: assistance in identifying and expressing problems, in locating information regarding a problem, in obtaining relevant information, and in understanding the explanations provided. With regard to assistance in the overall interaction in the information retrieval process, the Hermitage Museum Help system was rated more highly than the American Memory Help system, which was rated only 2.5 on a 5.0 scale in assisting users’ interaction with the digital library. There is a discrepancy between the existing Help mechanisms of IR systems and the Help mechanisms that users need, because IR Help systems have been designed without much attention to users’ help-seeking situations and behaviors.
We need more knowledge about users’ help-seeking situations and behaviors, especially how they interact with Help mechanisms in IR systems, including digital libraries.
In order to design better Help mechanisms, researchers have taken several approaches.
The first is to understand how users learn to use new interfaces and systems.
Based on analysis of videotapes and audiotapes of people using the Illinois Digital Library system, their assumptions about the system, and transcripts from the sessions, Neumann and Ignacio (1998) examined how users learned to use the interface and the functionality of the system. Their findings revealed that users explored the interface in a structured way, even though their behavior might look like random trial-and-error use of the interface. The second approach is to understand the processes of intermediation in digital library research. Library users are accustomed to human help in using physical libraries; now they must depend on system help in using digital libraries. According to Heckart (1998, p. 251), “Unmediated access is optimized when help is built in as an aspect of user-friendly design and as an explicit option
users can invoke when needed.” Southwick (2003) reported on an exploratory case study of intermediation in a hospital digital library information service in which a user and an intermediary communicated through an asynchronous, text-based, digital medium. Nine categories of factors perceived as affecting digital intermediation emerged from the data.
The third approach addresses the need to understand how users interact with Help mechanisms. Brajnik and his colleagues (Brajnik, Mizzaro, Tasso, & Venuti, 2002) developed a conceptual framework of “collaborative coaching” between users and IR systems, stressing the importance of interaction in the design of intelligent Help mechanisms that can provide strategic support to users in help-seeking situations. Their preliminary evaluation of a prototype knowledge-based system showed that participants assessed their interaction with the strategic Help positively. Users appreciated the proposed search activities, especially help that was provided without users’ requests and without interrupting their activities. Users retain control in their interaction with Help mechanisms. Chander, Shinghal, Desai, and Radhakrishnan (1997) suggested an expert system for cataloging and searching digital libraries with an intelligent user interface that provides context-sensitive help to users.
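To illustrate the general idea behind context-sensitive Help, the following minimal Python sketch maps a user's current interface state to a matched Help message, with a general fallback. This is purely illustrative: the state names and help texts are hypothetical, and the cited systems' actual implementations are not described at this level of detail in the studies above.

```python
# Hypothetical mapping from interface states to Help messages.
# None of these strings come from the systems discussed in the text.
HELP_TOPICS = {
    "query_form": "Enter one or more keywords; use quotes for exact phrases.",
    "results_list": "Click a title to view the record; use 'Refine' to narrow results.",
    "record_view": "Use 'Similar items' to find related records in the collection.",
}

def context_help(current_state: str) -> str:
    """Return Help text matched to the user's current interface state,
    falling back to a general message when the state is unknown."""
    return HELP_TOPICS.get(current_state, "See the general Help index for an overview.")

print(context_help("results_list"))
```

The design point is simply that Help is selected by where the user is in the interaction, rather than requiring the user to search a general Help index, which addresses the "don't know where to start" and "too general" problems reported above.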
The fourth approach addresses the need to understand how users organize concepts in digital library Help systems. Faiks and Hyland (2000) employed the card sort technique, in which users impose their own organization on a set of concepts; the goal of their study was to determine how users would organize a set of concepts to be included in an online digital library Help system. The card sort technique proved to be a highly effective and valuable method for gathering user input on organizational groupings prior to full system design.
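A common way to analyze card sort data of this kind is to count, across participants, how often each pair of concepts is placed in the same pile; pairs with high agreement suggest candidate Help categories. The sketch below shows this co-occurrence analysis on invented data (the concept names and sorts are hypothetical, not Faiks and Hyland's actual materials).

```python
from itertools import combinations
from collections import Counter

# Hypothetical card-sort results: each participant sorts Help concepts into piles.
sorts = [
    [{"boolean operators", "truncation"}, {"saving results", "exporting citations"}],
    [{"boolean operators", "truncation", "exporting citations"}, {"saving results"}],
    [{"boolean operators", "truncation"}, {"saving results", "exporting citations"}],
]

# Count how often each pair of concepts lands in the same pile.
co_occurrence = Counter()
for piles in sorts:
    for pile in piles:
        for pair in combinations(sorted(pile), 2):
            co_occurrence[pair] += 1

# Pairs with the highest counts are the strongest candidates for one Help category.
for pair, count in co_occurrence.most_common():
    print(f"{pair}: grouped together by {count} of {len(sorts)} participants")
```

In practice the resulting co-occurrence matrix is usually fed into hierarchical clustering to derive the category structure, but even the raw counts make participants' shared groupings visible before system design begins.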