Rigorous and careful application of the research methods, tools and techniques assisted in ensuring that the results of the study were valid and reliable. Creswell (2013) explains that validity in research refers to the ability of a measuring instrument to measure what it is intended to measure. Reliability, according to Du Plooy-Cilliers, Davis & Bezuidenhout (2014), pertains to how accurate the measures are and how far they can be relied on. Joppe (2000) and Creswell (2013) further define reliability, in both quantitative and qualitative research, as the consistency of results over time: if the results can be repeated under similar conditions and methodology, then the research methodology can, for all intents and purposes, be regarded as reliable.
The reliability and validity of a research study refer to data and conclusions that can be regarded as credible, dependable and trustworthy, and that can withstand academic scrutiny and rigour. Reliability and validity in this study were pursued by engaging with multiple methods of data gathering. Du Plooy-Cilliers, Davis & Bezuidenhout (2014) favour the use of multiple methods of data collection as a credible means of demonstrating validity and reliability. Accordingly, in this study a survey was used for data collection, documentary analysis was conducted, focus group discussions were held and semi-structured interviews were conducted.
On reliability and validity, Creswell (2009: 190) contends that ‘qualitative validity means that the researcher checks for the accuracy of the findings by employing certain procedures, while qualitative reliability indicates that the researcher’s approach is consistent across different researchers and different projects.’
Reliability (Creswell, 2013; Du Plooy-Cilliers, Davis & Bezuidenhout, 2014) implies that research instruments are dependable and consistent. In pursuance of this standard, the research questions were piloted with a few participants at the same research site, and the pilot phase showed that the discussions were of a good standard, given their engaging and thought-provoking nature.
Validity of the data collection instruments was established through peer review: two members of the five-member PhD cohort, of which the researcher was part, reviewed the research instruments, particularly the interview and survey questions. The data collection instruments were also submitted to the supervisor for feedback before being submitted to the ethics office and prior to the commencement of the data collection process.
The trustworthiness and credibility of the data were ensured by using a combination of data collection methods. The researcher held focus group discussions with middle-level managers, NPO Board members from the two youth development academies and members of the social cooperatives, and conducted semi-structured interviews with two centre managers, two traditional leaders and two local municipal ward councillors. Survey questions were administered to three senior managers, thirty-two staff members from the youth development academies and eight officials from other government departments that support the youth development academies. Data that had been analysed into themes was also taken back to participants to check its accuracy and representation of the facts, since these were real people in real-life situations. Lastly, the focus group discussions and interview sessions allowed the participants to respond in their mother tongue, isiZulu. This enabled participants to engage freely in discussions in a language with which they felt more comfortable and conversant. Where concepts or phrases were in English, these were translated and explained in context.
According to Creswell (2009), it is vitally important that the reliability and validity of data are tested to ensure that acceptable scientific research standards are met. To this end, reliability and validity procedures as posited by Creswell (2009) and Singh (2015) were employed in the study.
For reliability purposes, and to ensure that no mistakes occurred during transcription, the transcripts were examined and all transcribed material was confirmed to match the audio recordings of the interviews. To guard against the meaning of codes drifting from their original definitions during the coding process, both the researcher and a neutral person checked the codes for consistency. Furthermore, the procedure suggested by Creswell (2009) and Singh (2015), which calls for cross-checking of codes used by different researchers on the same data sets, did not apply in this study, since the researcher alone was responsible for coding the themes from the different data sets.
For validity purposes, Creswell (2009) and Singh (2015) suggest that data emerging from different data sources should be triangulated to identify alignment among the different data sets.
This standard was adhered to by comparing the data emerging from the interviews, the focus groups and the examination of archival and official documentation relating to the youth development academies. Providing a final report or feedback on the findings to research participants to confirm accuracy was not possible due to time constraints and the limited availability of participants. Regarding the use of a peer debriefing process to obtain feedback on the study and guidance on how to improve certain aspects, the researcher made use of the research supervisor as well as the PhD cohort, who played the role of peer debriefers as they were neutral to the study.