In order to find a framework within which to work, one needs first to discover one's starting point. In engineering one tends to be taught by engineers who typically reside in a positivist paradigm, often without even realising it. Hence, to get a feel for where one is situated, it is first necessary to describe the activities involved in this thesis, at the same time situating them in the paradigm in which they belong.
In order to undertake this study it was necessary to step out of the comfort zone of a clinically distant positivist paradigm (Cohen, Manion, & Morrison, 2000, p.19), and step into one in which students could be active participants. Since there were both quantitative and qualitative components to the analysis of the data, this necessitated a multi-paradigm approach. The dominant paradigms of modern social science research are the interpretive paradigm (Cohen et al., 2000, pp.22-23; Neuman, 2000, pp.70-75) and the critical paradigm (Cohen et al., 2000, p.28; Neuman, 2000, pp.75-81). The former is the preferred one here, as the idea behind this study was not that society or its participants were necessarily flawed, or needed changing (Cohen et al., 2000, p.28).
3.2.1 The Positivist Paradigm
Neuman (2000, p.66) defines positivism as “an organised method for combining deductive logic with precise empirical observations of individual behaviour in order to discover and confirm a set of probabilistic causal laws that can be used to predict general patterns of human activity”.
Within this paradigm the epistemological stance of the researcher is seen as that of an external neutral observer, i.e. external to the experiment, with no influence on the situation under observation. As Guba (1990, p.20) implies, “Values and other biasing and confounding factors are…automatically excluded from influencing the outcomes”. This can become difficult, however, when other human beings with different ideals, values and beliefs become involved in the process.
As mentioned previously in Chapter 1.4, most engineers are taught predominantly by other engineers who operate within the positivist paradigm, and thus tend to remain in this paradigm throughout their lives. Neuman (2000, p.65), in reference to positivism, points out that “most people never hear of alternative approaches”. Like many other staff members, before embarking on this study the Researcher was unaware that he too could be labelled a positivist. This was borne out in a survey, originally used by Luckett (1995, pp.9-10) at a SAAAD conference on Curriculum Development, which was posed at random to staff and students in the Mechanical Engineering Department at DUT in 2005. The majority of responses indicated that both staff and students operated within this paradigm and yet were unaware of the term.
This, then, is typically the paradigm in which students and lecturers spend most of their lives in engineering. The interests of the learner are not the most important; the utilisation of reliable and valid data is. Thus the syllabus is seen as the primary focus. Teaching in this paradigm is not always in the best interests of the students, as it can encourage a surface learning approach (Luckett, 1995, p.32).
Comparing the results of the students in the research study with the results from previous semesters is quantitative and hence falls within this paradigm. The Researcher can claim that the results are reliable in that they involve solving similar problems with unique answers (Heywood, 2000, p.21) in all the tests and examinations, and if repeated should obtain similar results (SAQA, 2001, p.18; Yin, 2009, pp.40, 45). As all the tests and examinations were set and marked by him, this provided a measure of consistency in mark allocation and judgement (SAQA, 2001, p.18). However, one can question the validity of the data, depending on the degree of validity required. If one compares the learning outcomes (Appendix W) with the questions posed in tests and examinations, then one could state that face validity and content validity (Heywood, 2000, p.21) have been met. The questions posed cover many of the learner outcome requirements (for example, ‘use the non-flow and steady-flow energy equations in the appropriate applications’, which was also required of the computer spreadsheet exercises), but the degree to which they have been met may be uncertain. However, predictive validity (Heywood, 2000, p.21), the ability to predict future performance, cannot be guaranteed, since there is no certainty that students, using the basic skills learnt, would be able to show mastery of the subject’s outcomes in future assessments of a similar nature. Construct validity, “the extent to which an assessment measures the content (aptitude, attitude, skill) it intends to assess, and predicts results on other measures of content...” (Heywood, 2000, p.22), is applicable to the semester tests and also the spreadsheet exercises, as both require certain skill levels to be achieved. Heywood (2000, p.22) discusses the use of ‘A’ level grades as an “indicator of potential” for students entering universities in the UK to be able “to cope with university studies”, but which show little correlation to the “final degree grade”. Similarly, in South Africa, the entrance requirements of the DIT are based on a student’s final senior certificate marks (DIT, 2006a, p.6), but studies in the past have shown that these too have little correlation with success, or final grade, in the programme.
A student study survey, the details of which are discussed in Chapter 3.9, was undertaken to determine whether any factor(s) might contribute to success. The survey, because of the style of presentation, limited the amount of interpretation students could give to their answers. Hence the analysis of this was mostly statistical, a positivist paradigm trait. It was assumed that the students gave honest answers about what they normally did, and not what they thought the Researcher would like to hear. However, there was also an interpretive paradigm aspect to the survey, since there was an open-ended question at the end where students could add anything further, even if it was unrelated to the questionnaire.
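Because the survey items were closed questions, the statistical side of the analysis reduces largely to frequency counts across the response options. Purely as an illustrative sketch (the question labels and responses below are invented, not the actual survey items), such a tabulation might look like the following:

```python
# Illustrative sketch only: frequency tabulation of closed survey questions.
# The question labels and response data are hypothetical, not study data.
from collections import Counter

responses = {
    "hours_studied_per_week": ["0-2", "3-5", "3-5", "6+", "0-2", "3-5"],
    "attends_tutorials": ["yes", "no", "yes", "yes", "no", "yes"],
}

for question, answers in responses.items():
    counts = Counter(answers)          # tally each response option
    total = len(answers)
    summary = ", ".join(f"{opt}: {n}/{total}" for opt, n in counts.most_common())
    print(f"{question}: {summary}")
```

Counts of this kind lend themselves to the descriptive, quantitative treatment described above, while the open-ended final question required separate qualitative reading.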
3.2.2 The Interpretivist Paradigm
Neuman (2000, p.70) describes several types of interpretive social science, namely “hermeneutics, constructivism, ethnomethodology, cognitive, idealist, phenomenological, subjectivist, and qualitative sociology”. He indicates that the aim is to grasp the social interactions of people in their normal environment, by studying “meaningful social action”. It is also usually very contextual, as it typically relates to a certain situation in which it deals with the values, norms and culture associated with people within that social setting. The role of the researcher would not be that of an external neutral observer, as in the positivist paradigm, but that of a participant in the social interaction taking place, which could have an influence on the process.
Since the students have agency (i.e. some control over their destiny), with varying ideas of the world around them, they would be likely to tackle the assignments in different ways. From an active learning perspective, using a constructivist approach as described in Chapter 2.3.5, how they interacted with their environment, the computer laboratories and other students in the class was up to them. They were free to use the time to do whatever they wished, in whatever manner they decided. This was possible when operating within an interpretivist paradigm, the research paradigm under which the teaching was investigated, since one was not trying to control the situation or the environment. The lecturer’s role was simply to be a facilitator and adviser when requested.
One might assume that a linear relationship between cause (the computer intervention) and effect (improved pass rate) existed. However, there could be other factors that contribute to or influence the success or otherwise of the intervention, making a simple linear assumption problematic. Other teaching methods, some of them mentioned in Chapter 2.2, could have been used besides the computer intervention that may equally have had an influence on pass rates, either positively or otherwise.
Yin (2009, p.40) states that “four tests are common to all social science methods”, those being construct, internal and external validity, together with reliability. For construct validity one needs to collect data from “multiple sources” to “establish a chain of evidence”, and to have a “draft case…reviewed by key informants” (ibid, p.42). Internal validity was not applicable in this instance, since the study was not “explanatory or causal” (ibid, p.40). External validity applies to how generalizable a study’s findings may be, single case studies often being a “poor basis for generalizing” (ibid, p.43). In this study construct validity could be claimed, since multiple sources and types of data were gathered, as described in Chapter 3.2.4. Finally, reliability refers to the ability to repeat the study and achieve a similar result. Considering the qualitative data collected, neither external validity nor reliability could be claimed, since the data were unique to this study and the opinions gathered would not necessarily apply to another class if the study were repeated.
As opposed to the positivist notion of determining whether the data are reliable and valid, “Lincoln and Guba (1985) suggest a different set of criteria for establishing rigour in interpretive enquiry”, these being credibility, transferability, dependability and confirmability (as cited in Stringer, 1999, pp.176-177). Credibility arises from “prolonged engagement with participants; triangulation…from multiple data sources; member checking…check and verify the accuracy of the information recorded; and peer debriefing…articulate and reflect on research procedures…”. Transferability is seen as being able to apply the “findings to other contexts”. Dependability and confirmability are gained by the rigour with which the data collection and analysis are described, and by the ability to refer back to raw data.
3.2.3 The Research Activities Compared under the Research Paradigms
As described later in this chapter, various activities were undertaken in order to collect data for this study. The scope and limitations of this project will also be described later in Chapter 3.4.
Part of the thinking involved was to enable students to build up the subject theory themselves, thus generating their own knowledge base, augmented by lectures. The teaching would fall in line with a constructivist style of learning, as discussed in Chapter 2.3.5, the students
constructing meaning of new material for themselves in the computer laboratories, assisted by the Researcher.
In an attempt to bring some clarity to the various dynamics of the research, Table 3.1 below has been included, placing the research questions and study components into perspective within their respective paradigms. The details of each component are discussed later in this chapter.
TABLE 3.1: Comparison of Research Questions, Paradigms and Research Methods

Primary: How does delivery affect student understanding?
    Interpretive: Spreadsheet Exercises; Interviews
    Positivist: Spreadsheet Exercises; also Test 1 vs Test 2 marks

Secondary 1: How do students learn thermodynamics?
    Interpretive: Spreadsheet Exercises; Study Habit Survey; Interviews
    Positivist: Study Habit Survey; Concept Test

Secondary 2: What problems do students studying thermodynamics experience, and why?
    Interpretive: Interviews; Concept Test; Study Habit Survey
    Positivist: Concept Test

Secondary 3: Did the intervention improve pass rates?
    Positivist: Comparison of other semester test and examination results with the intervention semester results (Tests 1 and 2, plus examination)
To add further clarity to the project, a summary of all activities in which students participated is included in Table 3.2, showing in which paradigm the analysis of those activities falls. Some of the analyses would move across the paradigms, since there are aspects of both qualitative and quantitative analysis in some of the activities.
TABLE 3.2: Comparison of Student Activities and Paradigm Analysis
STUDENT ACTIVITIES INTERPRETIVE POSITIVIST
Spreadsheet 1 Y Y
Marking of Spreadsheet 1 Y Y
Study Habit Survey Y Y
Spreadsheet 2 Y Y
Marking of Spreadsheet 2 Y Y
Concept Test Y
Test 1 Y
Test 2 Y
Interviews Y
Semester examination Y
3.2.4 Triangulation
Neuman (2000, p.125) defines “triangulation of method” as “mixing qualitative and quantitative styles of research and data”. As several different sources and styles of data were available, it was hoped to gain further “credibility” by “triangulation” of the data (Locke, Silverman, & Spirduso, 1998, p.100; Leedy, 1997, p.169; Denscombe, 2005, p.38). Although triangulation of methods is possible here, it was realised that the informants of this approach are from the same source, the students themselves. This may limit the generalizability (Denscombe, 2005, p.39; Yin, 2009, p.43), but the sample, for most sources of data, was reasonably large, as highlighted in Table 4.2.
The number of interviews conducted provided a smaller sample because of the time constraints involved in performing this task (Gillham, 2000, p.61), as well as the transcribing mentioned in Chapter 3.11.3. Nevertheless, a fairly wide spectrum of students was to be chosen for the interviews, as described by the sampling strategy in Chapter 3.11.1.
In an attempt to quantify and, to a certain extent, generalise the quantitative data further, statistical methods were employed to analyse and compare data from past and current semester tests; the methodology is discussed later in Chapter 3.6 and the analysis thereof in Chapter 4.5.4. In this way the triangulation was extended to a wider population, in an attempt to make the data more reliable and externally valid.
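For illustration only, a comparison of this kind could be sketched as an independent two-sample test on the semester marks. The mark lists, sample sizes and the choice of Welch's t statistic below are hypothetical assumptions for the sketch, not taken from the study itself, which describes its actual methodology in Chapter 3.6.

```python
# Illustrative sketch: comparing past-semester and intervention-semester
# test marks with Welch's two-sample t statistic (which does not assume
# equal variances). All mark data here are invented, not study data.
from statistics import mean, stdev
from math import sqrt

past_marks = [48, 52, 55, 60, 41, 47, 58, 50, 44, 53]          # earlier semester (%)
intervention_marks = [55, 61, 49, 66, 58, 52, 70, 57, 63, 54]  # intervention semester (%)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2   # sample variances
    return (mean(a) - mean(b)) / sqrt(va / na + vb / nb)

diff = mean(intervention_marks) - mean(past_marks)
print(f"mean difference: {diff:.1f} percentage points")
print(f"Welch t statistic: {welch_t(intervention_marks, past_marks):.2f}")
```

A positive t statistic alone does not establish a causal link between the intervention and the pass rate, in line with the caution above about assuming a simple linear cause-and-effect relationship.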