collected qualitative data related to the interviewees’ assessment of relevant infrastructure, human resource requirements, and policy issues, among others.
Whilst interview guides direct the conduct of the whole interview, tape recorders and notebooks are used to capture or record the proceedings of the interview. According to Smith (1995), tape recorders enable the researcher to capture a fuller record of the interview than notes taken during the interview process. A further advantage of using a tape recorder is that the researcher can always play back the recording to gain greater clarity on the notes taken during the interview. Greeff (2011) advises that permission be obtained from the interviewees where a tape recorder is used for data capture. Creswell (2013) recommends the use of a lapel microphone for both interviewer and interviewee, or a microphone adequately sensitive to the acoustics of the room, for audiotaping the interviews.
In this study, proceedings of the interviews were audio-recorded using a mobile phone, and notes were also hand-written in notebooks as a form of backup. In compliance with the University of KwaZulu-Natal's research code of ethics, the researcher asked the interviewees for permission before recording commenced. Moreover, interviewees were also asked to sign an informed consent letter for the interviews as evidence that they consented to the recording.
(Creswell, 2013; Connaway and Powell, 2010; Mouton, 1996). Therefore, the discussion in this section focuses on the measures the researcher used to ensure the validity and reliability of the research findings.
4.8.1 Adapting Questions from Well-Established Researchers
One of the commonly accepted practices in the Social Sciences is the adoption of instruments used by well-established researchers. The rationale behind this practice is that such instruments have usually been tested before use, and hence their validity is assured. In keeping with this, the researcher enhanced the content validity of the data collected through the interview guides by adapting questions from McConnell International (MI)'s Ready?Net.Go! (2001).
4.8.2 Pilot-Testing
Pilot-testing is another commonly used measure for ensuring the validity of research instruments. Walliman (2011) defines a pilot-test as a pre-test of the questionnaire, or other type of survey, on a small number of cases in order to test the procedures and the quality of responses. Although the focus of this definition is on the questionnaire, ideally all research instruments need to be pilot-tested. A pre-test gives the researcher an opportunity to identify items that are misunderstood by participants or that do not obtain the information needed, and to practise analysing the items in preparation for the actual task (Bell and Woolner, 2012; Muijs, 2012; Connaway and Powell, 2010). Besides gaining valuable experience in analysing data, and hence insight into what to expect from the actual study, the pilot-test also enables the researcher to refine the data collection instruments.
From the perspective of Muijs (2012), pilot-testing is a two-stage process. In the first stage, the researcher asks peers or other experts to comment on the appropriateness of the items in the data collection instruments; this stage is also referred to as peer-review or debriefing (Creswell, 2013). Feedback received from this exercise leads to the second stage, which is refining the items before they are actually pilot-tested. The peer-review or debriefing stage of the pilot-study was conducted at Mzuzu University, where the researcher asked senior colleagues from the Library and others from the Department of Library and Information Science to go through the questionnaires and interview guides.
A number of scholars (Muijs, 2012; De Vos and Strydom, 2011; De Vaus, 2002) recommend that pilot-testing be conducted with people who resemble those to whom the questionnaire will be administered, and that the range of interviewers represent the experience of those who will finally administer the questionnaire. Connaway and Powell (2010) add that the pre-test sample should be scientifically selected. In light of this, all research instruments were pilot-tested at Chancellor College (CHANCO), a constituent college of the University of Malawi, and the Malawi University of Science and Technology (MUST), both of which were not part of this study. The pre-testing was done by the researcher as he led the data collection team.
Leman (2010) recommends running a pilot-test with 50 respondents, whilst De Vaus (2002) indicates that between 75 and 100 respondents provides a useful pilot-test. The researcher conducted pilot-tests, each comprising 50 respondents drawn from students and academic staff, as recommended by Leman (2010), in order to obtain a manageable sample. The interview guide for the university/college librarians was pilot-tested on the College Librarian of CHANCO and the University Librarian of MUST, whilst the interview guide for the ICT directors was pilot-tested on the ICT directors of the same institutions, as there were no other comparable institutions on which to pre-test these instruments.
Data collected from the pre-tested questionnaires were analysed using SPSS Version 23. The researcher used the data to experiment with the generation of various statistical outputs such as graphs, pie charts and tables. Comments obtained from the research subjects, together with the researcher's own observations on the questionnaires, were used to improve the instruments: some questions were omitted whilst others were rephrased. Similarly, comments from interviewees and observations made by the researcher were used to improve the format and content of the interview guides.
4.8.3 Triangulation
Triangulation refers to the use of a variety of methods and techniques of data collection in a single study (Mouton, 1996). According to Creswell (2013), triangulation is not restricted to methods but also includes the use of multiple or different investigators and theories. The aim is to provide corroborating evidence from different sources to shed light on a theme or perspective. It is essentially a means of cross-checking data to establish its validity
(Bush, 2012). The underlying assumption, according to Mouton (1996), is that the various methods complement each other, and hence their respective shortcomings can be balanced out.
In this study, the researcher triangulated methods, data sources, and theories. In terms of methods, the researcher used both quantitative and qualitative methods: quantitative data were collected through questionnaires administered to students and academic staff, whilst qualitative data were collected through interviews conducted with university/college librarians and ICT directors. In other instances, the researcher asked similar questions of different respondents (for example, the issue of attitudes was examined across all the respondents) with the aim of corroborating the findings. Similarly, the researcher triangulated data sources (qualitative and quantitative data) in the interpretation of findings on a similar theme. For instance, in trying to establish the existence of ICT infrastructure for the provision of, and access to, library and information services, the researcher triangulated data obtained from university/college librarians with data from students and academic staff, whose findings pointed to the high prevalence of mobile phones amongst students and academic staff. Finally, the researcher triangulated theories, whereby the UTAUT theory and the TOE framework were used to underpin this study.
4.8.4 Cronbach’s Alpha
Several procedures exist for establishing the reliability of quantitative research instruments. They include test-retest, the split-half technique, and internal consistency. Cronbach's Alpha, whose coefficients range from 0 to 1, is often used to obtain the reliability coefficient of an instrument when the internal consistency technique is used. A number of scholars (Lwoga and Questier, 2014; Delport and Roestenburg, 2011; Cavana et al., 2001) agree that coefficients closer to 1 indicate high reliability. Generally, coefficients of less than 0.6 are considered poor, and hence not suitable for use in a study; those in the range of 0.7 are considered acceptable; and those over 0.8 are considered good (Cavana et al., 2001).
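The internal consistency measure described above can be sketched in code. The following is a minimal illustration of the standard Cronbach's Alpha formula, assuming responses are coded numerically (e.g. Likert scores) with one row per respondent and one column per item; the response data shown are hypothetical and are not drawn from this study, which used SPSS for the actual calculations.

```python
def cronbach_alpha(item_scores):
    """Cronbach's Alpha for a list of respondent rows, each a list of item scores."""
    k = len(item_scores[0])      # number of items
    n = len(item_scores)         # number of respondents

    def variance(values):        # sample variance (divisor n - 1)
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    # variance of each individual item across respondents
    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    # variance of each respondent's total score across all items
    total_var = variance([sum(row) for row in item_scores])

    # alpha = (k / (k - 1)) * (1 - sum of item variances / total variance)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses: five respondents, four Likert-scale items
responses = [
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(responses), 3))  # → 0.95
```

Because the four hypothetical items rise and fall together across respondents, the resulting coefficient is high; uncorrelated items would drive it towards 0, which is why the thresholds cited above (0.6, 0.7, 0.8) serve as benchmarks of internal consistency.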
The reliability of some of the questionnaire items in this study was determined by calculating the Cronbach's Alpha values of the variables in the questions. As shown in Table 4.4, all the Cronbach's Alpha values were close to, or above, 0.7. This shows that the items in the questionnaires used in this study had high levels of internal consistency.
Table 4.4: Cronbach's Alpha Values for Questionnaires Administered to Students and Academic Staff

Questions | Cronbach's Alpha value for students | Cronbach's Alpha value for academic staff
Main uses of mobile phones | 0.745 | 0.723
Why use mobile phones over other available means i.e. laptop computers | 0.672 | 0.671
Why use of mobile phone is good for offering library services | 0.763 | 0.730
Services to be prioritised for offering on the mobile platform | 0.859 | 0.827
Challenges respondents face/would likely face in accessing mobile library services | 0.762 | 0.788
4.8.5 Member Checking
The researcher also used another technique, called member checking, to validate the research findings. This is a commonly used technique in qualitative studies, and has been described by Lincoln and Guba (1985, p.314) as "the most critical technique for establishing credibility".
Member checking involves taking data, analyses, interpretations and conclusions back to the participants so that they can judge the accuracy of the account (Creswell, 2013). The aim is to enable participants to check not only the accuracy of the findings but also the language used. In implementing this technique, the researcher emailed the university/college librarians and ICT directors the findings drawn from the interviews conducted with them so that they could verify their accuracy.