
CHAPTER FIVE – RESEARCH METHODOLOGY

5.5 Data analysis


researcher explained the questions to the respondents to facilitate effective responses. The librarians also helped the researcher to identify and interview some of the users who had reduced their use of the libraries in the preceding six months. These interviews were critical in establishing why these users were no longer using the library as actively as before and which alternatives they were now using.

Again, the participation of the users so identified was based on informed consent. In some cases, such as KARI, ILRI and AMREF, where the library is open to some members of the public, the librarians helped the researcher to identify bona fide researchers both in the case institutions and in associated institutions. Only those who were invited and agreed to be interviewed participated in the research, not everyone who visited the library during the period. The interviews and observations were based on schedules in order to ensure that comparable data was collected from each site (Stark and Torrance 2005).

5.5.1 Content analysis

Content analysis proceeds through five levels:

1. Coding - This is the basic tool of content analysis. It involves simply determining the basic units of analysis (for example, each word in a particular five-minute speech), and counting how many times each word appears;

2. Categorizing - This is the next level up in content analysis. It involves creating meaningful categories to which the units of analysis (for example, terms signifying “satisfaction” and terms signifying “discontentment”) can be assigned;

3. Classifying - This level involves verifying that the units of analysis can be easily and unambiguously assigned to the appropriate categories;

4. Comparing - This is the next level. It involves comparing the categories in terms of the numbers of members in each category (for example, a speech can be coded as having 135 “satisfaction” references and three “discontentment” references) and performing any relevant statistical analysis; and

5. Concluding - This is the highest, and often most controversial, level of content analysis. It involves drawing theoretical conclusions about the content in its context. The context of any type of communicative content is very important at this level of analysis.

Using this technique, the researcher analyzed the primary documents relating to the vision, mission, role and governance of the selected case research libraries. These documents included strategic plans, policies, evaluation reports and brochures, and provided insights into how the libraries' users perceive their role, performance, challenges and future. The technique was also used to analyze data obtained through interviews and focus group discussions.
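To make the coding, categorizing and comparing levels concrete, the sketch below shows, in Python, how category counts might be derived from a piece of text; the coding frame and the sample excerpt are hypothetical illustrations, not the study's actual categories or data.

```python
from collections import Counter
import re

# Hypothetical coding frame: terms signifying "satisfaction" and "discontentment".
CATEGORIES = {
    "satisfaction": {"helpful", "useful", "satisfied", "excellent", "convenient"},
    "discontentment": {"slow", "outdated", "frustrating", "unavailable", "poor"},
}

def content_analysis(text):
    # 1. Coding: split the text into basic units of analysis (words) and count them.
    words = re.findall(r"[a-z']+", text.lower())
    word_counts = Counter(words)

    # 2-3. Categorizing and classifying: assign each unit to the appropriate category.
    category_counts = Counter()
    for word, count in word_counts.items():
        for category, terms in CATEGORIES.items():
            if word in terms:
                category_counts[category] += count

    # 4. Comparing: the category totals can now be compared or analyzed statistically.
    return category_counts

# Hypothetical interview excerpt.
sample = "The e-journals are useful and the staff are helpful, but the catalogue is slow and outdated."
print(content_analysis(sample))  # Counter({'satisfaction': 2, 'discontentment': 2})
```

The concluding level remains a human judgement: the analyst interprets counts such as these in the context of the documents and interviews from which they came.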

5.5.2 Conversation analysis

Conversation analysis is a data analysis technique in which elements of a conversation are analyzed to derive meaning (Spencer, Ritchie and O'Connor 2003). Antaki (2002) asserts that transcription of recorded conversations forms a significant part of conversation analysis. He also identifies three questions that worry conversation analysts: 1) is turning movement and expression into descriptions as accurate as writing down sounds as words? 2) can descriptions be impartial? and 3) can they be complete? No concrete answers to these questions are available so far.

According to Woodruff and Aoki (2004), conversation analysis involves two steps. First, the analyst makes a moment-by-moment, turn-by-turn transcript of the actions in each encounter. Second, the analyst examines these encounters individually and then comparatively to reveal a practice's generalizable orderliness.
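As a minimal illustration of the first step, a turn-by-turn transcript can be represented as an ordered sequence of turns. The sketch below uses Python with hypothetical speakers and utterances; it is not drawn from the study's recordings.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    """One turn in an encounter: who spoke, when, and what was said."""
    speaker: str
    start_seconds: float
    utterance: str

# Hypothetical librarian-researcher encounter, transcribed turn by turn.
transcript = [
    Turn("Researcher", 0.0, "Do you have the latest issue of this journal?"),
    Turn("Librarian", 2.5, "We stopped the print subscription; it is now online only."),
    Turn("Researcher", 6.1, "Can I access it from my office?"),
    Turn("Librarian", 8.0, "Yes, through the library portal with your staff login."),
]

# Moment-by-moment inspection of the encounter, in order.
for turn in transcript:
    print(f"[{turn.start_seconds:>5.1f}s] {turn.speaker}: {turn.utterance}")

# A simple comparative question across turns: who initiates question turns?
question_turns = [t.speaker for t in transcript if t.utterance.strip().endswith("?")]
print("Questions asked by:", question_turns)  # ['Researcher', 'Researcher']
```

Several such transcripts, examined side by side, are what allow the analyst to look for the generalizable orderliness that Woodruff and Aoki (2004) describe.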

Conversation analysis also relies heavily on what the participants see and hear. It does not depend on matters such as feelings or motivation, which cannot easily be seen or heard (Antaki 2002). Though this may appear to be a major weakness of the technique, it is also a strength in the sense that it ensures objectivity in the perception of the issues under study. This technique was useful for the analysis of the data collected from the focus group discussions and interviews with researchers and librarians.

5.5.3 Descriptive/interpretive techniques

Also known as hermeneutic techniques, these approaches concentrate on the historical meaning of the experience and its developmental and cumulative effects on the individual and society (Filippo 1991). Hermeneutics is a technique of interpreting phenomena and events by combining the literal meaning of the words used with the human experience of the phenomena described (Filippo 1991; Abulad 2007). Hermeneutics emphasizes the role of contextualized human experience in interpreting a phenomenon correctly (Ramberg 2005). Abulad (2007) argues that linguistic knowledge alone, without experience, is inadequate for unraveling events or phenomena correctly.

Filippo (1991) adds that hermeneutics is formal and systematic and attempts to analyze human phenomena from different angles. Thus, hermeneutic research design focuses on all perspectives and expressions of phenomena (Filippo 1991; Lee 1991b; Abulad 2007).

Various branches of hermeneutics are used in qualitative data analysis. The researcher used the Heideggerian approach, which focuses on how people interpret their lives and attach meaning to their experiences. This approach recognizes that the data generated by the research subjects becomes fused with the experience of the researcher during research. This means that the views of the researcher cannot be bracketed off, thereby recognizing that no researcher can come to the study with suspended preconceptions. Thus, within Heideggerian philosophy, the researcher is an active participant in the study (Bale et al. 2003). Using this technique, the researcher began by interpreting the basic terminology and then moved on to interpret fully the meanings of the issues and events observed and/or captured by other means during the study. This analysis drew on the researcher's experience and skills to condense the data and extract meaning in various contexts.


5.5.4 Computer Assisted Qualitative Data Analysis Software (CAQDAS)

Computers and software tools make data analysis faster. Though these systems do not actually analyze qualitative data, they facilitate storage, coding, retrieval, comparison and linking, while human beings do the analysis (Patton 2002).
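The kind of support these tools offer can be illustrated with a minimal sketch: coded segments of text are stored with the codes the analyst has assigned, and can then be retrieved and compared by code. The codes and segments below are hypothetical, and the sketch is not output from NVivo, SPSS or any particular package.

```python
from collections import defaultdict

# Hypothetical coded segments: (document, text segment, codes assigned by the analyst).
coded_segments = [
    ("interview_01", "I mostly search online databases from my office now.", ["alternative_sources", "remote_access"]),
    ("interview_02", "The print collection is rarely updated.", ["collection_currency"]),
    ("focus_group_1", "E-journals save me a trip to the library.", ["alternative_sources"]),
]

# Build a simple index so segments can be stored, retrieved and compared by code;
# the analyst, not the software, decides what the retrieved segments mean.
index = defaultdict(list)
for document, segment, codes in coded_segments:
    for code in codes:
        index[code].append((document, segment))

for document, segment in index["alternative_sources"]:
    print(f"{document}: {segment}")
```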

The researcher used the Non-numerical Unstructured Data Indexing, Searching and Theorizing (NUD*IST) software – also known as NVivo – to store, sort and manipulate the qualitative data, and SPSS to process the quantitative data electronically. This choice was based on the fact that both software packages are easy to use (have a good graphical user interface) and are readily available on the University of KwaZulu-Natal local area network (LAN). Moreover, Patton (2002) reports that CAQDAS specialists generally agree that most of these systems do not exhibit significant functional differences.