• Trustworthiness of the electronic records-keeping systems.
The interview data collection occurred concurrently with the online questionnaire per the research design in section 4.4. The purpose of the interview data collection was to complement the questionnaire.
This advice, however, assumes the availability of resources and is not always practically possible to implement. Although the value of pretesting has been recognised as critical to the valid measurement of phenomena in survey methodology, Alaimo, Olson and Frongillo (1999) and Hilton (2017) argue that there is a lack of pretesting protocols or guidelines. However, Ruane (2005:35) and Hilton (2017) do point out that respondents in the pretest can be asked to think out loud while completing the pretest questionnaire and/or the interviewer can introduce probe questions to check that the questions are understood and interpreted as intended. Drennan (2003) and Hilton (2017) agree that cognitive interviewing is best characterised as a combination of think-aloud and probing procedures. Cognitive interviewing involves the researcher asking survey respondents to think out loud as they work through a questionnaire and report everything they are thinking (Drennan 2003). This helps ensure that the questionnaire is understood from the respondents’ perspectives rather than from that of the researcher. However, the procedures mentioned above could not be applied in the current study because of Covid-19; the questionnaire was instead pretested online without the researcher being present.
Aspects to check when pretesting questionnaires include the instructions given to respondents, the style and wording of the questions, the formality or informality of the questions (tone and presentation), the length of the questionnaire, the sequence of questions, the scales and format used, and the quality (objectivity) of individual questions (Bless, Higson-Smith and Kagee 2007).
A pretest of the survey questionnaire was conducted on 29 January 2021 at the University of Botswana’s Records Management Unit, which deals with administrative records. The University of Botswana is a parastatal and was not part of the study; Bless, Higson-Smith and Kagee (2007) emphasise that the pretesting entity should not be part of the study. The survey link was sent via email to 20 records officers in the Records Management Unit to pretest the questionnaire, and 17 officers responded.
The questionnaire was also shared with and critiqued by graduate students taking the course “Constructing Questionnaires and Surveys” at the University of Botswana (Appendix P). The class was chosen to review the instrument because, at the time the questionnaire was being constructed and refined, the class was busy with a module on questionnaire review, and as part of their class assignment they asked the researcher to make the questionnaire available for review. The feedback focused on the informed consent form and the main questions, and was as follows:
Informed consent document:
It was pointed out that obtaining consent is a crucial step in the administration of questionnaires (or any other data collection tool) and that how it is presented needs careful crafting.
• Salutation: Questionnaires are ideal for surveys of finite populations, which means you know who is in your sample; so, if possible, name the respondents. This improves response rates.
• Do not use bullets; rather, describe in sentences. This is a letter, so it needs detail.
• Do indicate when a response is expected.
• State the benefits of responding, if there are any. (Note that these need not be monetary; they may be communal or simply the advancement of knowledge.)
• This letter should show affiliation (be on letterhead).
Questionnaire:
Various suggestions were provided as follows:
• Number sections for ease of reference, especially during analysis.
• Preferably use numbers for naming (coding), for example, Gender: 1 = Male, 2 = Female (and not a and b).
• If possible, close the questions on designation, division and organisation as your sample is probably specific as to the kind of respondents you expect.
• Include transitional/introductory statements at the beginning of sections, rather than throwing respondents straight into reading what is in the tables.
All the suggestions were taken into consideration and implemented.
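The reviewers’ suggestion on numeric coding can be illustrated with a short, hypothetical sketch (the column name and code values below are illustrative assumptions, not taken from the actual instrument):

```python
import pandas as pd

# Hypothetical raw responses, as they might be exported from a survey tool
raw = pd.DataFrame({"gender": ["Male", "Female", "Female", "Male"]})

# Numeric coding as the reviewers suggested: 1 = Male, 2 = Female
codes = {"Male": 1, "Female": 2}
raw["gender_code"] = raw["gender"].map(codes)
```

Numeric codes of this kind make later analysis in SPSS or similar packages simpler than letter labels such as "a" and "b".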
4.8.2 Reliability
Reliability is concerned with the consistency of measures. Bless, Higson-Smith and Kagee (2007:150) state that reliability of measurement is the degree to which an instrument produces equivalent results when the study for which it was used is repeated. Teddlie and Tashakkori (2009:211) opine that reliability is the degree to which the results of measurement consistently and accurately represent the exact magnitude or quality of a construct. The common understanding here is that, for an instrument to be considered reliable, repeated use over time should yield the same results. For the measurements to be meaningful, there should be variance in the scores among different subjects.
The data collected from the respondents participating in the pretest were exported from REDCap into SPSS, coded and labelled, after which Cronbach’s alpha was run. Cronbach’s alpha is a statistic commonly used to demonstrate that tests and scales constructed or adapted for research projects are fit for purpose.
It is usually used to test reliability (internal consistency). Thus, the data collected were subjected to reliability analysis using Cronbach’s alpha to assess the internal consistency of the different items in the instrument.
Table 4.4: Reliability statistics

Cronbach's Alpha    Cronbach's Alpha Based on Standardised Items    N of Items
.808                .818                                            58

Source: Field data (2021)
Running Cronbach’s alpha on all the study variables yielded an acceptable coefficient (.808). In the item-by-item analysis, Cronbach’s alpha ranged between .793 and .800, which is generally considered good (Eisinga, Te Grotenhuis and Pelzer 2013). Thus, no item was removed as a threat to reliability, and the instrument was considered reliable.
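For readers unfamiliar with how the statistic in Table 4.4 is derived, Cronbach’s alpha can be computed directly from a respondents-by-items score matrix. The sketch below uses made-up scores for illustration, not the actual field data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items matrix of scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up responses: 4 respondents x 3 Likert-type items
demo = [[3, 4, 3],
        [4, 5, 4],
        [2, 3, 2],
        [5, 5, 5]]
alpha = cronbach_alpha(demo)
```

In practice, statistical packages such as SPSS report this same coefficient (along with the standardised-items variant shown in Table 4.4) from their reliability analysis procedure.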
4.8.2.1 Credibility of qualitative findings
In qualitative research, the term reliability is related to credibility, confirmability, dependability, consistency and applicability. Credibility is defined as the extent to which the data and data analysis are believable and trustworthy (Cope 2014:89). The researcher ensured proper documentation of the methodology, which increased the reliability of the tests for the study. Every research question maintained a consistent meaning and was framed in an easily understandable manner (Bless, Higson-Smith and Kagee 2007).
The study also employed member checking, also known as participant or respondent validation, to explore the credibility of the results. Creswell (2013:208) suggests that researchers take “data, analyses, interpretations, and conclusions back to the participants so they can judge the accuracy and credibility of the account.” The researcher sent narrative descriptions from the interviews to the study participants for member checking. Polit and Beck (2004) argue that if researchers claim that their interpretations are good representations of participants’ realities, then participants should be allowed to react to them. Polit and Beck (2004) advise that member checking can be carried out both informally in an ongoing way as data are being collected, and more formally after data have been fully analysed. In the current study, the researcher carried out member checking as data collection took place (ongoing). The researcher sent the interview transcripts to the interviewees to check the accuracy of the accounts once the transcriptions were complete. Furthermore, Birt, Scott, Cavers, Campbell and Walter (2016) argue that member checking enables the researcher to make claims about transcription accuracy because it focuses on confirming, modifying and verifying the interview transcript.
Peer debriefing occurs when the researcher shares the study with a few professionals from the research community who are interested in the study but not part of it. Selected peers may question the methodology and theoretical framework used in the study (Saunders, Lewis and Thornhill 2009). In this regard, the researcher sought assistance from lecturers in the University of Botswana’s Department of Library and Information Studies to review and critique the data collection tools, and their input helped improve the tools.