

Confirmability refers to the degree of neutrality in the study’s findings, achieved by addressing the researcher’s influences and biases on the study (Rule and John, 2011; Dawadi et al., 2021).

This is achieved by confirming that the findings of the study are not merely figments of the researcher’s imagination. To achieve confirmability, the researcher systematically demonstrated that the results gathered from this study were linked to the conclusions. Confirmability was further achieved by keeping records of transcripts, consent forms and field notes that served as evidence (Rule and John, 2011).

According to Rule and John (2011), dependability focuses on the methodological rigour and coherence with which findings are generated, such that the research community can accept them with confidence.

Dependability is therefore the quality of the process of integration that takes place between the data generation methods, data analysis and the theory generated from the data (Plooy-Cilliers, Davis and Bezuidenhout, 2014). For this study to be dependable, a detailed and accurate account of the research methodology was given to allow the reader to assess the extent to which proper research practices were followed.


Figure 9: Flow model component of editing style. Author's compilation adapted from Pope and Mays (1995: 42-45).

The data were analysed using the editing style, as shown in the flow model of the qualitative data analysis component in Figure 9. This means that analysis began at the same time as data collection. According to Crabtree and Miller (1992), the editing style of data analysis is an appropriate approach when analysing data for developing grounded theory. The guidelines that were followed were extracted from Pope and Mays (1995), Tesch (1990) and Wilson and Hutchinson (1991).

3.11.1 Process of analysis

The recorded interviews were transcribed verbatim by the researcher and a written text was created for each interview. The identity of the participants was removed from the transcripts to maintain confidentiality, and pseudonyms were assigned to participants to protect their identity while still providing information relating to their backgrounds. To allow the researcher to become familiar with the data as quickly as possible, the recorded interviews were transcribed within 72 hours of being conducted. Where idioms were used, the interviewer asked the respondents to elaborate further to ensure that the meaning was not lost during the transcription of the data.
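To illustrate the kind of anonymisation described above, the sketch below shows one simple way pseudonyms could be substituted for participants' names in a transcript. It is a minimal, hypothetical Python example: the names, the pseudonym mapping and the anonymise_transcript function are illustrative assumptions only and do not form part of the study's actual procedure, which the researcher carried out manually.

```python
# Minimal, hypothetical sketch: substituting pseudonyms for real names in a
# transcript. The study's anonymisation was done manually; the names and the
# pseudonym mapping below are invented for illustration only.

def anonymise_transcript(text: str, pseudonyms: dict) -> str:
    """Replace each occurrence of a real name with its assigned pseudonym."""
    for real_name, pseudonym in pseudonyms.items():
        text = text.replace(real_name, pseudonym)
    return text

# The mapping would be kept separately from the transcripts themselves.
pseudonym_map = {
    "Participant real name 1": "Thandi",
    "Participant real name 2": "Sipho",
}

example_line = "Participant real name 1 described how orders are processed."
print(anonymise_transcript(example_line, pseudonym_map))
# Output: "Thandi described how orders are processed."
```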

The analysis was a continuous process in which the protocols were read repeatedly over time and a deeper level of analysis was reached with each reading. As a result, the data analysis and the literature review occurred in tandem. While the literature review guided the researcher in observing certain aspects of the topic under investigation during the analysis phase, the process of data analysis also informed the relevant literature that needed to be reviewed. The analysis was completed in two phases, each of which is outlined in the sections below, following Miles and Huberman (1994).

In phase one, the researcher read through the collected data to get an overall sense of the data and to get a feel for the different participants’ frameworks. While reading the individual protocols the researcher made note of how the participants constructed the meaning of the concepts that were being investigated. Each of the individual protocols was read several times to ensure that the constructions of the concepts were fully documented by the researcher and to ensure that no new interpretations emerged from each successive re-reading. The individual protocols were, therefore, read and re-read until the researcher was convinced that there was no evidence of any new trends or interpretations from the text. During the initial analysis of the different protocols, the researcher also noted similarities in terms of how the participants constructed ideas and attached meaning to their constructions.

The researcher then reduced the data from the interview transcripts by extracting information relevant to the research questions. In-vivo codes (such as “who has the control of the organisation” and “best image in the organisation”) were allocated to segments of text that were relevant to the issues under investigation. These were grouped into first-order concepts based on similarities between them (such as “customer and management leadership in the firm” and “organisational culture”). Where available, the researcher adapted their labelling to match more common literature-based terminology (Nemkova, Souchon and Hughes, 2012).
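The mapping from transcript segments to in-vivo codes and then to first-order concepts can be pictured with the small sketch below. It is a hypothetical Python illustration only: the segment identifiers are placeholders, and only the code and concept labels echo the examples mentioned above.

```python
# Hypothetical sketch of grouping in-vivo codes into first-order concepts.
# Segment identifiers are placeholders; the labels echo the examples above.
invivo_codes = {
    "segment_01": "who has the control of the organisation",
    "segment_02": "best image in the organisation",
}

first_order_concepts = {
    "customer and management leadership in the firm": [
        "who has the control of the organisation",
    ],
    "organisational culture": [
        "best image in the organisation",
    ],
}

# Invert the grouping so each coded segment can be traced to its concept.
code_to_concept = {
    code: concept
    for concept, codes in first_order_concepts.items()
    for code in codes
}

for segment, code in invivo_codes.items():
    print(f"{segment}: '{code}' -> {code_to_concept[code]}")
```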


The researcher read every piece of data as it came, line by line and paragraph by paragraph, identifying words or statements made by the participants about their experience of the phenomenon of the management of information sharing in order processing. Wilson and Hutchinson (1991) call this process substantive coding, while Glaser and Strauss (1967) refer to the same process as concept specification. Substantive codes are used to describe the dimensions, properties and consequences of the phenomena under study.

In phase two (the comparative case study), the first step of the data analysis involved compiling separate case studies for each sector. Once the data for each case had been reduced by extracting information relevant to the research questions, in-vivo codes (such as “factors with contracts”, “history building”, “achieved information”, “handling orders”, “information broadcast”, “information sharing”, “information access” and “order problems”) were allocated to segments of text that were relevant to the issues under investigation. These codes were grouped into first-order concepts based on similarities between them. With the help of NVivo software, the researcher coded every statement as she read the text over and over, line by line. Every piece of data was coded as a free node. The free nodes looked like a shopping list of words or phrases used by participants to describe the phenomenon of information sharing. Later, the free nodes were joined to form tree nodes. The tree nodes began to show relationships between the words or phrases used by the participants. Tree nodes were also linked together according to the relationships between them. The linked tree nodes formed parent nodes and the parent nodes, when linked according to their relationships, formed grandparent nodes. It was from the grandparent nodes that the themes for this study were developed.
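The progression from free nodes to tree, parent and grandparent nodes described above is essentially a growing hierarchy of codes. The sketch below is a minimal Python illustration of that structure under the assumption of a simple in-memory tree; the tree, parent and grandparent labels are hypothetical examples, and the actual coding was carried out in NVivo.

```python
# Minimal, hypothetical sketch of the coding hierarchy described above
# (free nodes -> tree nodes -> parent nodes -> grandparent nodes).
# The real coding was done in NVivo; the grouping labels are invented examples.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)


# Free nodes: raw words or phrases used by participants.
handling_orders = Node("handling orders")
order_problems = Node("order problems")
information_access = Node("information access")

# Tree nodes group related free nodes.
order_processing = Node("order processing", [handling_orders, order_problems])
information_flow = Node("information flow", [information_access])

# Parent nodes group related tree nodes; grandparent nodes group parents
# and are the level from which the study's themes were developed.
parent = Node("managing order information", [order_processing, information_flow])
grandparent = Node("information sharing in order processing", [parent])


def show(node: Node, depth: int = 0) -> None:
    """Print the hierarchy from the grandparent node down to the free nodes."""
    print("  " * depth + node.label)
    for child in node.children:
        show(child, depth + 1)


show(grandparent)
```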

The researcher joined the substantive codes together according to their relationships to form related categories. The substantive codes, in this case, were the free nodes. When free nodes were joined together, they formed the tree nodes. This was the second level of coding, called selective coding (Wilson and Hutchinson, 1991). The third level of coding is theoretical coding. Theoretical codes were developed from the researcher's interpretation of the data, using the selective codes, memos and field notes to discover the main storyline or basic social process in the phenomenon of information sharing.


The researcher utilised theoretical sampling to fill the gaps that developed in the emerging categories and concepts. This called for rephrasing questions to validate responses and even moving from one setting to another to find different participants. The researcher went back and forth through the data, verifying it with the participants and carefully reading and analysing it until all categories were fully developed and the relationships between the categories and their properties were identified. The theory, which was grounded in the data, was developed through the use of both inductive and deductive modes of reasoning (Chenitz and Swanson, 1986).