users can invoke when needed.” Southwick (2003) reported on an exploratory case study of intermediation in a hospital digital library information service in which a user and an intermediary communicated through an asynchronous, text-based, digital medium. Nine categories of factors perceived as affecting digital intermediation emerged from the data.

The third approach addresses the need to understand how users interact with Help mechanisms. Brajnik and his colleagues (Brajnik, Mizzaro, Tasso, & Venuti, 2002) developed a conceptual framework of “collaborative coaching” between users and IR systems, stressing the importance of interaction in the design of intelligent Help mechanisms that can provide strategic support to users in help-seeking situations. Their preliminary evaluation of a prototype knowledge-based system showed that participants assessed their interaction with strategic Help positively. Users appreciated the proposed search activities, especially help provided without users’ requests and without interrupting users’ activities; users retain control in their interactions with Help mechanisms. Chander, Shinghal, Desai, and Radhakrishnan (1997) suggested an expert system for cataloging and searching digital libraries with an intelligent user interface that provides context-sensitive help to users.

The fourth is the need to understand how users organize concepts in digital library Help systems. Faiks and Hyland (2000) employed the card sort technique, in which users impose their own organization on a set of concepts; the goal of this study was to determine how users would organize a set of concepts to be included in an online digital library Help system. The card sort technique proved to be a highly effective and valuable method for gathering user input on organizational groupings prior to full system design.
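As an illustration of the analysis behind the card sort technique (a minimal sketch, not drawn from Faiks and Hyland’s actual data or tooling; the concept names are hypothetical), one might aggregate participants’ groupings into pairwise similarity scores, which can then be clustered to suggest an organization for the Help system:

```python
# Minimal card-sort aggregation sketch: count how often participants
# place each pair of Help concepts in the same pile.
from collections import defaultdict
from itertools import combinations

# Hypothetical sorts: each participant groups Help concepts into piles.
sorts = [
    [{"boolean search", "truncation"}, {"saving results", "printing"}],
    [{"boolean search", "truncation", "saving results"}, {"printing"}],
]

pair_counts = defaultdict(int)
for piles in sorts:
    for pile in piles:
        for a, b in combinations(sorted(pile), 2):
            pair_counts[(a, b)] += 1

# Similarity = fraction of participants who grouped the pair together;
# clustering this matrix suggests candidate Help-topic groupings.
for (a, b), n in sorted(pair_counts.items()):
    print(f"{a} / {b}: {n / len(sorts):.2f}")
```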

of digital libraries. The evaluation model was tested, and the results revealed that effectiveness, efficiency, and satisfaction are interrelated. The results also identified ease of use, organization of information, terminology, attractiveness, and mistake recovery as dimensions of users’ perceptions.
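As a hedged illustration of how such interrelations among usability measures might be checked (the scores and scales below are invented, not from the study), pairwise correlations could be computed:

```python
# Pairwise Pearson correlations among hypothetical usability scores.
from statistics import correlation  # Python 3.10+

effectiveness = [0.8, 0.6, 0.9, 0.5, 0.7]   # e.g., task success rate
efficiency    = [0.7, 0.5, 0.8, 0.4, 0.6]   # e.g., inverse task time
satisfaction  = [4.2, 3.1, 4.6, 2.8, 3.9]   # e.g., Likert-scale mean

pairs = [("effectiveness/efficiency", effectiveness, efficiency),
         ("effectiveness/satisfaction", effectiveness, satisfaction),
         ("efficiency/satisfaction", efficiency, satisfaction)]
for name, x, y in pairs:
    print(f"{name}: r = {correlation(x, y):.2f}")
```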

The attributes of usability, in particular user needs and user satisfaction, have been investigated in many digital library usability studies. In order to understand users’ information needs and their perceptions of existing information systems, Fox et al. (1993) interviewed potential users and experts in related fields. Based on these interviews, they designed an interface and conducted usability testing, which led to a usable digital library prototype. Van House, Butler, Ogle, and Schiff (1996) discussed the iterative design process for the University of California Berkeley Electronic Environmental Library Project. After observing and interviewing users about design elements, including query form, fields, instructions, results displays, and formats of images and texts, they enhanced the design of the digital library. Bishop et al. (2000) characterized digital library testbed use, including extent of use, use of the digital library compared to other systems, nature of use, viewing behavior, purpose and importance of use, and user satisfaction. Data were collected from potential and actual users through focus groups, interviews, observations, usability testing, user registration and transaction logging, and user surveys.

Bishop et al.’s (2000) usability tests were extended to the “situated usability” modeled by Van House and her colleagues (Van House, 1995), in which both usability and how and why people used the system were investigated. Situated usability studies enable researchers to observe users’ contexts of use as part of the design and evaluation of digital libraries. Adopting an interpretive and situated approach, Yang (2001) evaluated learners’ problem-solving in using the Perseus digital library; the findings of the study helped designers develop and refine better intellectual tools to facilitate learners’ performance. Kassim and Kochtanek (2003) conducted usability studies of an educational digital library through focus groups, Web log analysis, database usage analysis, satisfaction surveys, remote usability testing, and so forth. These usability studies attempted to understand user needs, find problems, identify desired features, and assess overall user satisfaction.
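Several of these studies rely on transaction logging and Web log analysis. As a minimal sketch (the log format, field names, and session timeout below are assumptions, not taken from the cited studies), metrics such as action counts and session counts might be derived like this:

```python
# Sketch of transaction-log analysis: count actions and sessions,
# where a session break is any gap longer than an assumed timeout.
from collections import Counter, defaultdict
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)  # assumed session timeout

def summarize(rows):
    """rows: iterable of (user_id, iso_timestamp, action) tuples."""
    actions = Counter()
    events = defaultdict(list)
    for user, ts, action in rows:
        actions[action] += 1
        events[user].append(datetime.fromisoformat(ts))
    sessions = 0
    for times in events.values():
        times.sort()
        sessions += 1 + sum(
            1 for a, b in zip(times, times[1:]) if b - a > SESSION_GAP
        )
    return actions, sessions

rows = [
    ("u1", "2003-05-01T10:00:00", "search"),
    ("u1", "2003-05-01T10:05:00", "view"),
    ("u1", "2003-05-01T11:00:00", "search"),  # new session after the gap
]
actions, sessions = summarize(rows)
print(actions, sessions)
```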

Although some of these usability studies are part of overall digital library evaluation, they also examine the content and performance of the system in addition to interface usability. Based on data collected from observations, interviews, and document analysis, Marchionini, Plaisant, and Komlodi (1998) applied multifaceted approaches to the evaluation of the Perseus Project. Their evaluation focused on learning, teaching, the system (performance, interface, and electronic publishing), and content (scope, accuracy). Hill et al. (2000) tested user interfaces of the Alexandria Digital Library through a series of studies; they collected feedback about users’ interaction with the interfaces, interface problems, system functionality requirements, and the digital library’s collection. These user evaluation studies generated the following user requirements for the design of digital library interfaces: unified and simplified search, session management, more options for results display, a user workspace, holdings visualization, more Help functions, easy data distribution, and informing users of process status.

Indeed, user evaluation provides valuable input for the design and enhancement of digital libraries to satisfy user needs. Cherry and Duff (2002) conducted a longitudinal study of a digital library collection of Early Canadiana Materials, focusing on how the digital library was used and the level of user satisfaction with different features of the digital library, including response time, browse capabilities, comprehensiveness of the collection, print function, search capabilities, and display of document pages. These studies provide a basic understanding of how to enable digital libraries to meet, and possibly exceed, end-user needs and expectations.

Another type of usability study compares an experimental group with a control group on different interfaces; in these studies, usefulness and learnability are the main measures of comparison. Baldonado (2000) conducted two small-scale experiments to evaluate a user-centered interface (SenseMaker) for digital libraries. Her first experiment tested the value of structure-based actions by comparing the use of an early version of SenseMaker with a baseline system. The results indicated that the majority of the participants understood and used the structure-based searching and filtering mechanisms, and considered the mechanisms useful after training. The second experiment tested the learnability of structure-based actions: without training, participants exhibited varying comprehension, and two of the three participants understood the structure-based actions. The interface needs further improvement if users are to learn the structure unaided. The small samples of these experiments limit the generalizability of the study results.

The comparison of different interfaces also focuses on their effectiveness. In order to support effective user interactions with heterogeneous and distributed information resources, Park (2000) compared users’ interaction with multiple databases through a common interface versus an integrated interface. Her study was based on data collected from transaction logs, think-aloud protocols, post-search questionnaires, demographic questionnaires, exit questionnaires, and exit interviews. Most of the 28 subjects preferred the common interface over the integrated interface because of the ability to control database selection. A comparison of recall across the two interfaces indicated that users performed better with the common interface than with the integrated interface. The search characteristics of the two interfaces were also compared, and the findings revealed that users interacted more with the common interface than with the integrated interface. This study suggested that in digital libraries, users preferred to interact with databases separately rather than through a single integrated view.
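To make the kind of per-subject recall comparison reported by Park (2000) concrete, here is a minimal sketch with invented recall scores; the paired t-test shown is one plausible analysis, not necessarily the one used in the study:

```python
# Paired comparison of recall on two interfaces (hypothetical data).
from math import sqrt
from statistics import mean, stdev

# Recall achieved by the same five subjects on each interface (assumed).
common     = [0.62, 0.58, 0.71, 0.66, 0.60]
integrated = [0.55, 0.51, 0.64, 0.60, 0.57]

diffs = [c - i for c, i in zip(common, integrated)]
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))  # paired t statistic
print(f"mean recall: common={mean(common):.3f}, "
      f"integrated={mean(integrated):.3f}, paired t={t:.2f}")
```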

Besser et al. (2003) conducted usability testing with 4th and 12th graders to compare the effectiveness of an existing finding-aid-based interface with a newly developed prototype interface, based on a pretest of the existing finding aid, for broad user access in retrieving cultural heritage information from a digital library. The findings of this study indicate a need for research on adaptive and flexible systems for broad user access.

Buttenfield (1999) suggested two strategies for the usability evaluation of digital libraries: (1) the convergent method paradigm, in which evaluation data are collected throughout the system life cycle of design, development, and deployment; and (2) the double-loop paradigm, in which evaluators identify the value of a particular evaluation method under different situations. In most of the studies cited above, evaluation takes place at different stages of digital library development, which supports the iterative design and evaluation of digital libraries. One concern with these usability studies is that many have been conducted on prototypes rather than actual digital libraries, so actual contexts of use are not taken into consideration. In addition, the small convenience samples of the usability studies also limit the generalizability of their results.