4.6.4.3 Internal Consistency Reliability
Internal consistency reliability assesses the consistency of results across the items of a test; it measures the extent to which the items on a test measure the same construct or idea (DeVellis, 2016). To test internal consistency, surveys must be administered simultaneously.
Surveys sent out at different phases of testing could introduce confounding variables (DeVellis, 2016; Gordon et al., 2008). Internal consistency can be tested informally by comparing responses for absolute agreement with one another (Puzenat et al., 2010). A wide variety of answers may emerge, which complicates the assessment of internal consistency (Rutledge and Sadler, 2007). A range of statistical tests is available for assessing internal consistency; the most widely used is Cronbach's Alpha (Bonett and Wright, 2015).
The Cronbach's Alpha statistic was used in this study to assess the reliability of all variables in the questionnaire. Cronbach's alpha quantifies the internal consistency of the items in a survey instrument as a measure of its reliability (Santos, 1999). Baruch (1999) and Field (2014) concur that a Cronbach's alpha value of 0.7 to 0.8 is generally acceptable. An alpha value of 0.4 is unacceptable, whilst a value greater than 0.9 suggests that the items are likely to be redundant or overly similar (Baruch, 1999). A Cronbach's alpha value of 0.5 is arguably sufficient in the early stages of piloting research in a new area (Field, 2014).
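For reference, Cronbach's alpha for a scale of k items is commonly expressed as follows (a standard formulation given here for clarity, not drawn from any one of the sources cited above):

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{i}}{\sigma^{2}_{T}}\right)

where \sigma^{2}_{i} is the variance of item i and \sigma^{2}_{T} is the variance of the total (summed) scores; alpha increases as the items covary more strongly with one another.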
4.7.1 Measurement validity
Bryman and Bell (2007) concluded that measurement validity applies largely to quantitative research and to the measurement of social scientific concepts. Measurement validity is frequently referred to as construct validity. Construct validity is an assessment of whether measurement tools measure the constructs that they are designed to measure (Welman et al., 2007). The researcher implemented a mixed methodology research approach, which augments the validity of the data collected, as a way of ensuring measurement or construct validity for this study (Tashakkori and Teddlie, 2010; Johnson and Onwuegbuzie, 2004; Denscombe, 2008). The questionnaires were pre-tested, or pilot tested, to determine whether the constructs were measuring what they were intended to measure. Demographic data collection questions used in the questionnaires were adopted from prior research studies.
4.7.1.1 Internal validity
Internal validity denotes causality and is focused on whether an inference of a causal connection between two or more variables is sound (Bryman and Bell, 2007). This type of validity, as Bryman and Bell (2007) state, prescribes how an experimental design is organized and spans all the phases of the scientific research methodology. In essence, if x is held to cause y, the researcher must ensure that x is responsible for the variation in y and that nothing else is generating that causal connection (Bryman and Bell, 2007; Brownson, Baker, Deshpande and Gillespie, 2017).
4.7.1.2 Ecological validity
Ecological validity concerns whether social scientific findings are applicable to people's everyday, natural social settings (Saunders et al., 2003; Sekaran and Bougie, 2013; Stangor, 2014). The instruments must capture the conditions, ethics, attitudes and knowledge base of people's everyday lives (Bryman and Bell, 2007). Furthermore, the use of a mixed methodology that involved in-depth, face-to-face interviews with SME finance experts informed the third stage of data collection, which ensured the ecological validity of the study.
4.7.1.3 External validity
Mitchell and Jolley (2001) suggest that external validity is concerned with the generalizability of causal inferences in scientific investigation, frequently based on experiments, and is sometimes termed experimental validity. It is the extent to which the results of a study can be generalized to other situations and to other people (Aronson, Wilson, Akert and Fehr, 2007; Bryman and Bell, 2007). Mathematical analysis of external validity is concerned with determining whether generalizations across heterogeneous populations are feasible (Pearl and Bareinboim, 2014).
4.7.2 Triangulation
The mixed methods approach was preferred as it also ensures validity of data through triangulation.
Qualitative, quantitative and secondary data were used to corroborate one another: data from the questionnaires was cross-checked against that from the in-depth interviews and secondary sources to improve the validity of the findings.
4.7.3 Pilot Testing
According to Williams and Wilkins (2007), a pilot study is a small-scale preliminary study conducted to evaluate feasibility, time, cost, adverse events and effect size, in an attempt to predict an appropriate sample size and improve the study design prior to a full-scale research project. Hair, Money, Samuel and Page (2007) assert that the sample for pre-testing may be as small as four or five individuals but should not exceed thirty. Pilot testing is carried out prior to the actual data collection for the main study (Saunders et al., 2003; Stangor, 2014; Haralambos and Holborn, 2000). It ensures that participants understand the questionnaire as the researcher expects them to (Clark et al., 2009). Other reasons for conducting pilot tests include smoothing out difficulties that arise in data recording and ensuring validity and reliability (Clark et al., 2009; Harms, Cryer, Clifford and Yazejian, 2017; McKenzie, Neiger and Thackeray, 2016).
For the purposes of this study, the pilot test was conducted by distributing questionnaires to five (5) women owners or managers of SMEs in the City of Gweru. A Cronbach's alpha analysis of the questionnaire was conducted to measure the internal consistency, or reliability, of the research instrument.
The analysis was done using SPSS 25.0.0 and yielded an alpha value of 0.7. In addition, two (2) SME finance experts also took part in pilot testing during this study. The results of the pilot test validated the use of the chosen research tools to collect the required research data, as was done in studies by DeVellis (2016), Dunn, Baguley and Brunsden (2014) and Spielberger et al. (2017).
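To illustrate the reliability check reported above, the following is a minimal sketch of how Cronbach's alpha could be computed outside SPSS, assuming the pilot responses are arranged as a respondents-by-items matrix; the function name and the example data are hypothetical and are not the study's actual pilot data.

import numpy as np

def cronbach_alpha(scores):
    # scores: respondents-by-items matrix of numeric ratings
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot data: 5 respondents answering 4 Likert-scale items
pilot_responses = [
    [4, 3, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 4, 4],
    [3, 3, 2, 3],
    [4, 5, 4, 4],
]
print(round(cronbach_alpha(pilot_responses), 2))

An alpha near 0.7, as obtained in the pilot, would fall within the range that Baruch (1999) and Field (2014) regard as generally acceptable.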