3.5 Research design

3.5.2 Cohorts two and three

3.5.2.2 Data collection methods and tools

In line with the approach adopted above in describing the data collection methods and tools related to cohort one, the component of the study related to cohorts two and three is considered below under two main sections entitled Examining cognitive test performance (relating primarily to Research Question 1 and Research Question 2) and Investigating student and teacher perceptions of collective self-efficacy (relating to Research Question 3).

Examining cognitive test performance

The following describes refinements for cohorts two and three of the cognitive test performance components of the study, specifically with regard to the measures and data collection procedures employed.

3.5.2.2.a.1 Measures of cognitive test performance

Student test score

As explained in the foregoing, for cohort one the measures of student academic performance were students’ single post-test scores and improvement scores (post-test score minus pre-test score) during a semester. For cohorts two and three, raw student test scores for modules (courses) attended over a two-year period (2011 and 2012) were used. Thus the datasets for both cohorts comprised a number of test score, module and teacher combinations per student. As shown in Table 3-2, three measures of student test score were variously used in the analyses to standardize the data and to control for varying class difficulty levels (student scores were spread across a variety of modules and teachers), viz. raw scores, raw scores converted to simple deviations of the student score from the class mean, and raw scores converted to z-scores (deviation of the student score from the class mean, divided by the class standard deviation).
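The three score measures can be sketched as follows (Python used purely for illustration; the records and field names are hypothetical, not drawn from the study data):

```python
from statistics import mean, stdev

# Hypothetical records: one (student, class, raw test score) row each.
records = [
    {"student": "S1", "class_id": "C1", "raw": 70},
    {"student": "S2", "class_id": "C1", "raw": 50},
    {"student": "S3", "class_id": "C1", "raw": 60},
    {"student": "S4", "class_id": "C2", "raw": 90},
    {"student": "S5", "class_id": "C2", "raw": 80},
]

# Group raw scores by class to obtain each class's mean and standard deviation.
by_class = {}
for r in records:
    by_class.setdefault(r["class_id"], []).append(r["raw"])
stats = {c: (mean(v), stdev(v)) for c, v in by_class.items()}

# Derive the two standardized measures used alongside the raw score:
# deviation from the class mean, and z-score (deviation / class SD).
for r in records:
    m, s = stats[r["class_id"]]
    r["dev"] = r["raw"] - m
    r["z"] = r["dev"] / s
```

Grouping by class before standardizing is what controls for varying class difficulty: a score of 70 in a hard class and 70 in an easy class yield different deviations and z-scores.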

Demographic match/mismatch

The analysis fundamentally requires comparisons of student academic performance across different combinations of student and teacher demographics. A match in a given demographic is where the student and teacher share that demographic (e.g. a Black student and a Black teacher would be a match; an Indian student and a Black teacher would be a mismatch). There are various ways to code match/mismatch. On a broad basis, one can simply record match or mismatch without recognition of which combination it stems from, creating binary match/mismatch data. This can also be constructed into complex multiple-factor combinations, such as match/mismatch across race, language and gender simultaneously. In addition, recognition of the exact combinations is possible, whereby, for instance, an English student with an Afrikaans teacher can be distinguished from an Afrikaans student with an English teacher, and so forth. Various approaches are attempted and analysed below for cohorts two and three.
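The three coding options described above can be sketched as follows (a minimal illustration; the demographic values and field names are hypothetical):

```python
# Hypothetical student and teacher demographic records.
student = {"race": "Black", "language": "English", "gender": "F"}
teacher = {"race": "Black", "language": "Afrikaans", "gender": "F"}

DEMOGRAPHICS = ("race", "language", "gender")

# (1) Binary match/mismatch per demographic, without recording
#     which combination produced it.
binary = {d: student[d] == teacher[d] for d in DEMOGRAPHICS}

# (2) Composite coding: match across all three demographics simultaneously.
full_match = all(binary.values())

# (3) Exact-combination labels, so that e.g. an English student with an
#     Afrikaans teacher is distinguished from an Afrikaans student with
#     an English teacher.
exact = {d: f"{student[d]} student / {teacher[d]} teacher" for d in DEMOGRAPHICS}
```

Option (1) gives the most statistical power per cell; option (3) preserves the directionality of each pairing at the cost of many more, smaller cells.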

3.5.2.2.a.2 Overview of cognitive test performance research approach

While cohort one made use of a pre- and post-assessment test per subject/class, cohorts two and three drew on institutional records of each student’s assessment results for each module for which the student had received teacher-led instruction during the academic years 2011/2012. This approach allowed for a vastly larger dataset of relevant student test scores and thus a broader analysis (see Table 3-2, which describes the increased size and variety of data included in the sample set for cohorts two and three in comparison with cohort one).

Investigating student and teacher perceptions of collective self-efficacy

The following describes refinements to the student collective teaching self-efficacy (S-CTSE) instrument used for cohorts two and three, as well as the addition of a teacher collective teaching self-efficacy (T-CTSE) instrument.

Refined research instruments

The results from cohort one’s collective self-efficacy survey suggested that collective self-efficacy may account for the match/mismatch data for some demographic groups (see 4.2.3.2.c, Summary of findings for cohort one). For cohorts two and three, the collective self-efficacy instrument was refined with a view to allowing more robust and nuanced testing of the potentially moderating effect of the collective teacher self-efficacy variable on the match/mismatch effect.

Furthermore, to test for the potentially confounding influence of teacher perceptions of collective teaching self-efficacy, a teacher collective teaching self-efficacy instrument was developed. This is based on the premise that the teacher-student interactions that influence academic performance do not necessarily relate only to the perceptions and attitudes of students. Although student perceptions and their impact on academic performance are the focus of this study, the fact that the literature abounds with studies that show a significant relationship between teacher attitudes and perceptions and the academic performance of their students cannot be ignored entirely in a study of this nature (Jussim et al., 1996; Oates, 2003; Obiakor, 2004).

Thus two survey instruments were developed for use with cohorts two and three: a student collective teaching self-efficacy questionnaire (S-CTSE) and a teacher collective teaching self-efficacy questionnaire (T-CTSE) (see Appendices D and E). The S-CTSE questionnaire was used to identify students’ perceptions of the collective teaching efficacy of reference groups they identified with (viz. race, home language and gender). This instrument was a refinement of that used with cohort one (see Appendix B), with the introduction of six-point Likert scale items and sub-scales to allow for factor analysis. The T-CTSE allowed for the analysis of teacher perceptions of the collective teaching efficacy of reference groups they identified with (viz. race, home language and gender), with a view to identifying possible moderating effects of this construct on the teacher-student match/mismatch data.

In addition to those referred to in Table 3-7, four guidelines emerge from the literature in respect of the development of self-efficacy research instruments, viz. the questionnaire should be multidimensional, should emphasise the use of the ‘I’ pronoun, should use verbs such as ‘can’ and ‘be able to’, and each item should contain ‘barrier expressions’ where possible (Skaalvik et al., 2007; Goddard et al., 2000; Tschannen-Moran et al., 2001; Bandura, 1997).

The S-CTSE and T-CTSE instruments used with cohorts two and three were developed according to the aforementioned guidelines as follows:

The questionnaire should be multidimensional: The S-CTSE (student collective teaching self-efficacy) and T-CTSE (teacher collective teaching self-efficacy) instruments include four dimensions, viz. subject expertise (SEX), instructional strategies (IS), classroom management (CM) and student engagement (SEN). This allows for a four-dimensional factor analysis of the factors that contribute to collective teaching self-efficacy scores (see Table 3-11, Data source/instrument and question number mapping to variables).

The questionnaire should emphasise the use of the ‘I’ pronoun: Skaalvik et al. (2007) point out that this guideline ensures ‘expression of subjective perception of the participant’ and explain their use of the ‘I’ pronoun as follows: "…the object in each statement was I because the aim was to assess each teacher’s subjective belief about his or her own capability" (Skaalvik et al., 2007). In line with this guideline, all items in the S-CTSE and T-CTSE begin with “I believe…”, “I have confidence in…”, “I am confident that…” or similar expressions (see Appendices D and E).

The questionnaire should use verbs such as ‘can’ and ‘be able to’: Skaalvik et al. (2007) justify the use of verbs like ‘can’ and ‘be able to’ as follows: "…the items contained verbs like can or be able to so that the items clearly asked for mastery expectations because of personal competence" (Skaalvik et al., 2007). The S-CTSE and T-CTSE items align with this guideline wherever possible. For example, item 4.1 is worded as follows: “I have confidence in the ability of teachers that are of the same race as me to teach computer related subjects effectively” (see Appendices D and E).

Each item should contain ‘barrier expressions’ where possible: Bandura (1997) notes that if “there are no obstacles to surmount, the activity is easy to perform, and everyone has uniformly high perceived self-efficacy for it” (Bandura, 1997, p. 42). The S-CTSE and T-CTSE items therefore include barrier expressions where appropriate. For example, item 4.30 includes the ‘barrier expression’ “the most difficult students”: “I believe that teachers that are of the same race as me are effective at getting through to the most difficult students” (see Appendices D and E).
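The four-dimensional subscale structure described in the first guideline can be illustrated with a small scoring sketch. The item-to-subscale grouping below follows the race-related S-CTSE items as mapped in Table 3-11; the Likert responses shown are hypothetical:

```python
from statistics import mean

# S-CTSE race-related items grouped by subscale, per Table 3-11.
SUBSCALES = {
    "SEX": ["4.1", "4.2", "4.3"],     # subject expertise
    "IS":  ["4.10", "4.11", "4.12"],  # instructional strategies
    "CM":  ["4.19", "4.20", "4.21"],  # classroom management
    "SEN": ["4.28", "4.29", "4.30"],  # student engagement
}

# Hypothetical six-point Likert responses (1-6) for one respondent.
responses = {"4.1": 5, "4.2": 6, "4.3": 4,
             "4.10": 5, "4.11": 5, "4.12": 6,
             "4.19": 3, "4.20": 4, "4.21": 4,
             "4.28": 2, "4.29": 3, "4.30": 3}

# One score per subscale (mean of its three items), giving the
# per-dimension scores that a four-factor analysis would operate on.
subscale_scores = {name: mean(responses[i] for i in items)
                   for name, items in SUBSCALES.items()}
```

The same grouping applies to the home-language and gender item blocks (4.4–4.6/4.13–4.15/… and 4.7–4.9/4.16–4.18/…), and to the T-CTSE items, which share the numbering.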

It should be noted that in the literature reviewed on collective teacher self-efficacy as it applies in education, reference is typically made to school or university ‘faculties’ as the reference group that defines the ‘collective’ (Bandura, 1995; Oettingen, 1995; Tschannen-Moran and Barr, 2004). The theoretical design principles for faculty-based collective teacher self-efficacy instruments found in the literature have in this study been adapted for use with culture-based reference groups (race, home language and gender). Similarly, while many studies in the literature focus on teachers’ own perceptions of collective teacher self-efficacy (Bandura, 1995; Oettingen, 1995; Tschannen-Moran and Barr, 2004), this study, in addition to researching the effect of teachers’ own perceptions, also explores the effect of student perceptions of the teaching efficacy of teacher reference groups (viz. race, home language and gender groupings). Furthermore, there is no evidence in the literature that collective self-efficacy has previously been explored as a potential moderating variable for the match/mismatch effect on academic performance.

Key:
S-CTSE: Student Collective Teacher Self-Efficacy Questionnaire (see Appendix D)
T-CTSE: Teacher Collective Teacher Self-Efficacy Questionnaire (see Appendix E)

| Variable | Subscales/Factors | Data Source/Instrument and Question Number |
| --- | --- | --- |
| Student ID | | S-CTSE |
| Teacher Student Match (Race) | | S-CTSE 2, T-CTSE 2 |
| Teacher Student Match (Home Language) | | S-CTSE 3, T-CTSE 3 |
| Teacher Student Match (Gender) | | S-CTSE 1, T-CTSE 1 |
| Student Race | | S-CTSE 2 |
| Student Home Language | | S-CTSE 3 |
| Student Gender | | S-CTSE 1 |
| Student CTSE (Race) | | S-CTSE 4.1, 4.2, 4.3, 4.10, 4.11, 4.12, 4.19, 4.20, 4.21, 4.28, 4.29, 4.30 |
| Student CTSE (Race) | Subject Expertise (SEX) | S-CTSE 4.1, 4.2, 4.3 |
| Student CTSE (Race) | Instructional Strategies (IS) | S-CTSE 4.10, 4.11, 4.12 |
| Student CTSE (Race) | Classroom Management (CM) | S-CTSE 4.19, 4.20, 4.21 |
| Student CTSE (Race) | Student Engagement (SEN) | S-CTSE 4.28, 4.29, 4.30 |
| Student CTSE (Home Language) | | S-CTSE 4.4, 4.5, 4.6, 4.13, 4.14, 4.15, 4.22, 4.23, 4.24, 4.31, 4.32, 4.33 |
| Student CTSE (Home Language) | Subject Expertise (SEX) | S-CTSE 4.4, 4.5, 4.6 |
| Student CTSE (Home Language) | Instructional Strategies (IS) | S-CTSE 4.13, 4.14, 4.15 |
| Student CTSE (Home Language) | Classroom Management (CM) | S-CTSE 4.22, 4.23, 4.24 |
| Student CTSE (Home Language) | Student Engagement (SEN) | S-CTSE 4.31, 4.32, 4.33 |
| Student CTSE (Gender) | | S-CTSE 4.7, 4.8, 4.9, 4.16, 4.17, 4.18, 4.25, 4.26, 4.27, 4.34, 4.35, 4.36 |
| Student CTSE (Gender) | Subject Expertise (SEX) | S-CTSE 4.7, 4.8, 4.9 |
| Student CTSE (Gender) | Instructional Strategies (IS) | S-CTSE 4.16, 4.17, 4.18 |
| Student CTSE (Gender) | Classroom Management (CM) | S-CTSE 4.25, 4.26, 4.27 |
| Student CTSE (Gender) | Student Engagement (SEN) | S-CTSE 4.34, 4.35, 4.36 |
| Teacher ID | | T-CTSE |
| Teacher Race | | T-CTSE 2 |
| Teacher Home Language | | T-CTSE 3 |
| Teacher Gender | | T-CTSE 1 |
| Teacher CTSE (Race) | | T-CTSE 4.1, 4.2, 4.3, 4.10, 4.11, 4.12, 4.19, 4.20, 4.21, 4.28, 4.29, 4.30 |
| Teacher CTSE (Race) | Subject Expertise (SEX) | T-CTSE 4.1, 4.2, 4.3 |
| Teacher CTSE (Race) | Instructional Strategies (IS) | T-CTSE 4.10, 4.11, 4.12 |
| Teacher CTSE (Race) | Classroom Management (CM) | T-CTSE 4.19, 4.20, 4.21 |
| Teacher CTSE (Race) | Student Engagement (SEN) | T-CTSE 4.28, 4.29, 4.30 |
| Teacher CTSE (Home Language) | | T-CTSE 4.4, 4.5, 4.6, 4.13, 4.14, 4.15, 4.22, 4.23, 4.24, 4.31, 4.32, 4.33 |
| Teacher CTSE (Home Language) | Subject Expertise (SEX) | T-CTSE 4.4, 4.5, 4.6 |
| Teacher CTSE (Home Language) | Instructional Strategies (IS) | T-CTSE 4.13, 4.14, 4.15 |
| Teacher CTSE (Home Language) | Classroom Management (CM) | T-CTSE 4.22, 4.23, 4.24 |
| Teacher CTSE (Home Language) | Student Engagement (SEN) | T-CTSE 4.31, 4.32, 4.33 |
| Teacher CTSE (Gender) | | T-CTSE 4.7, 4.8, 4.9, 4.16, 4.17, 4.18, 4.25, 4.26, 4.27, 4.34, 4.35, 4.36 |
| Teacher CTSE (Gender) | Subject Expertise (SEX) | T-CTSE 4.7, 4.8, 4.9 |
| Teacher CTSE (Gender) | Instructional Strategies (IS) | T-CTSE 4.16, 4.17, 4.18 |
| Teacher CTSE (Gender) | Classroom Management (CM) | T-CTSE 4.25, 4.26, 4.27 |
| Teacher CTSE (Gender) | Student Engagement (SEN) | T-CTSE 4.34, 4.35, 4.36 |

Table 3-11 Data source/instrument and question number mapping to variables