
Construct Validity of the College of Education Graduate School Admission Examination: Implications to Entrance Skills and Competencies of the Graduate School Students

Elvira L. Arellano, Shirley R. Jusayan

Abstract

This study looked into the constructs that underlie the admission examinations of the West Visayas State University College of Education Graduate School, constructs that may very well identify the entrance skills and competencies of the Graduate School students. Further, this study aims to give a clear picture of the capabilities of entering graduate students, both master's and doctoral.

Results of factor analyses show that both the master's and doctoral admission examinations are valid in terms of the critical thinking skills that are being assessed. The entrance skills and competencies of the graduate students, both master's and doctoral, are quite low. However, the verbal and analytical abilities of doctoral and master's students are significantly different, with doctoral students having better competencies. It is therefore imperative that graduate school faculty members initiate reforms and re-direct their teaching towards the development of critical thinking skills if graduate education is to do its share in strengthening higher education and the country's overall Human Resource Development (HRD) landscape. The graduate school faculty need to re-examine the academic structure and contents of the courses they teach so that the graduate credits ("units") or completed degrees of the students contextually fit either the Knowledge, Skills, Attitude and Values (KSAV) requirements of the workplace or the specific competencies required in government offices.

Keywords: admission examination, construct validity, critical thinking skills, graduate education

© WVSU Research and Development Center


As per BOR Resolution No. 22-2010, one of the admission requirements to the Graduate School is passing the entrance examination.

The West Visayas State University College of Education Graduate School conducts three admission examinations every year, once every academic term, to cater to the needs and availability of incoming graduate students. The master's programs include the Master of Arts in Education and the Master in Education, while the doctoral programs include the Doctor of Philosophy in Science Education and the Doctor of Philosophy in Education.

It is a known fact that not all primary and secondary teachers, the main applicants to graduate studies, are able to enroll at the opening of the school year (June); hence the conduct of three admission tests. These admission examinations have been administered for the past six years; however, the constructs that underlie the examinations have not been empirically established. It is hypothesized that the constructs that underlie these examinations may very well identify the entrance skills and competencies of the Graduate School students, as well as give a clear picture of the capabilities of graduate students.

The study established the construct validity of the West Visayas State University College of Education Graduate School Admission Examination.

Further, it verified the entrance skills and competencies of the Graduate students based on the identified constructs of the entrance examination.

Specifically, answers to the following questions were sought:

a. What are the constructs that underlie the WVSU College of Education Graduate School Admission examinations?

b. What are the levels of the entrance skills and competencies of the graduate students based on their performance in the admission examinations?

c. How do the entrance skills and competencies of the master's students compare with those of the doctoral students?

Construct validity is one of the most important concepts in education and psychology (Westen & Rosenthal, 2003). It is at the heart of any study in which researchers use a measure as an index of a variable that is not itself directly observable (e.g., intelligence, critical thinking, analytical reasoning).

If an educational or psychological test lacks construct validity, results obtained using this test will be difficult to interpret. Not surprisingly, the "construct" of construct validity has been the focus of theoretical and empirical attention for over half a century, especially in personality, clinical, and educational psychology, where measures of individual differences on hypothesized constructs became the center of research (Anastasi & Urbina, 1997; Nunnally & Bernstein, 1994).

Despite the importance of construct validity, no simple metric can be used to quantify the extent to which a measure can be described as construct valid. Most often, researchers establish construct validity by presenting correlations between a measure of a construct and a number of other measures that should, theoretically, be associated with it (convergent validity). The aim of construct validation is to establish the relation of a construct to other variables with which it should, theoretically, be associated positively, negatively, or practically not at all (Cronbach & Meehl, 1955). A procedure designed to help quantify construct validity should provide a summary index not only of whether the measure correlates positively, negatively, or not at all with a series of other measures, but also of the relative magnitude of those correlations.
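To make this correlational logic concrete, here is a minimal sketch in Python. The data and variable names are hypothetical illustrations, not part of the study; the sketch simply shows how one would inspect both the pattern and the magnitude of correlations between a focal measure and measures it should, or should not, be associated with.

```python
import numpy as np

# Hypothetical scores for 100 examinees: a focal critical-thinking measure
# and three other measures it should (or should not) correlate with.
rng = np.random.default_rng(0)
critical_thinking = rng.normal(size=100)
verbal_ability = 0.6 * critical_thinking + rng.normal(size=100)  # should converge
gpa = 0.4 * critical_thinking + rng.normal(size=100)             # should converge
shoe_size = rng.normal(size=100)                                 # should not (discriminant)

# Both the sign pattern and the relative magnitudes constitute validity evidence.
for name, scores in {"verbal_ability": verbal_ability,
                     "gpa": gpa,
                     "shoe_size": shoe_size}.items():
    r = np.corrcoef(critical_thinking, scores)[0, 1]
    print(f"r(critical_thinking, {name}) = {r:+.2f}")
```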

Construct Validation

Over the past several decades, psychologists have gradually refined a set of methods for assessing the validity of a measure. In the broadest sense, psychologists have distinguished a number of kinds of statements about the validity of a measure, including (a) content validity, which refers to the extent to which the measure adequately samples the content of the domain that constitutes the construct (e.g., the different behavioral expressions of critical thinking that should be included in a measure of critical thinking as a cognitive trait); (b) criterion validity, which refers to the extent to which a measure is empirically associated with relevant criterion variables, which may be assessed at the same time (concurrent validity), in the future (predictive validity), or in the past (postdictive validity); and (c) construct validity, which refers to the extent to which a measure adequately assesses the construct it purports to assess (Nunnally & Bernstein, 1994).

Two points are important to note here about construct validity. First, although researchers often describe their instruments as "validated", construct validity is an estimate of the extent to which variance in the measure reflects variance in the underlying construct. Virtually all measures (except those using relatively infallible indicators, such as measures of biological sex) include error components that reflect not only random factors but also method variance (variance attributable to the method being used, such as self-report vs. interviews) and irrelevant but nonrandom variables that have been inadvertently included in the measure. Thus, a researcher studying critical thinking would want to show that the measure correlates with trait intelligence, but would also want to demonstrate that something is left when holding intelligence constant, other than random error and method variance; that is, something unique to critical thinking over and above intelligence.

Second, construct validation is always theory dependent (Cronbach & Meehl, 1955). A statement about the validity of an instrument is a statement about the extent to which its observed associations with measures of other variables match theoretical predictions about how it should be associated with those variables. If the theory is wrong, the pattern of correlations will appear to invalidate the measure. Construct validation is a bootstrapping operation: initial theories about a construct lead to the creation of a measure designed to have content validity vis-à-vis the construct as understood at that point in time (Cronbach & Meehl, 1955). Subsequently, researchers assess the relation between the measure and relevant criterion variables and determine the extent to which (a) the measure needs to be refined, (b) the construct needs to be refined, or (c) more typically, both. Construct validation is thus not a categorical distinction between valid and invalid; it is a matter of degree, a continual, self-refining process.

In their classic article on validation, Cronbach and Meehl (1955) considered the possibility of developing an overall coefficient for indexing construct validity but noted the difficulty of providing anything more than a broad indication of the upper and lower bounds of validity. However, developments since that time, particularly the concept of the multitrait-multimethod matrix (Campbell & Fiske, 1959; Shrout & Fiske, 1995), have led to continued efforts to derive more quantitative, less impressionistic ways to index the extent to which a measure is doing its job. Thus, a number of researchers have developed techniques to try to separate true variance on a measure of a trait from method variance, often based on the principle that method effects and trait effects (and their interactions) should be distinguishable using analysis of variance (ANOVA), confirmatory factor analysis (because trait and method variance should load on different factors), structural equation modeling (SEM), and related statistical procedures (George & Mallery, 2009; Cudeck, 1988; Hammond, Hamm & Grassia, 1986; Kenny, 1995; Reichardt & Coleman, 1995; Wothke, 1995).


Critical Thinking

The College of Education Graduate School Admission examination was developed to measure the critical thinking skills of entering graduate students. Critical thinking has multiple definitions and complex, contextual evidence (Ennis, 1985; Facione, 1990). Generally, according to Lai (2011), it includes the component skills of analyzing arguments, making inferences using inductive or deductive reasoning, judging or evaluating, and making decisions or solving problems. Ennis (1985) identified twelve abilities of critical thinking. These abilities indicate ways of avoiding mistakes in evaluation when selecting the one right answer. In using the terms "meaningful", "consistent", "logical", "precise", "following rule", "accurate", "justified", "relevant", "assumption", and "true", one can think more precisely and critically in the evaluation process of critical thinking.

Critical thinking is not universal in any individual (The National Council for Excellence in Critical Thinking, 1987). It includes the examination of the structures of thought. It comprises varied components and hence can be defined in varied ways.

Critical thinking is a self-disciplined kind of thinking directed at reasoning at the highest level. In this manner, persons who think critically live reasonably; they strive to develop the intellectual virtues needed to examine life, according to Facione (2013). Facione further elaborated critical thinking as purposeful, self-regulatory judgment, which results in interpretation, analysis, evaluation, and inference. It also includes explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which the judgment is based. He further claimed that there are six core critical thinking skills: interpretation, analysis, evaluation, inference, explanation, and self-regulation. Each of these six skills has also been provided with specific evidence.

First, as Facione and Gittens (2013) quoted from the consensus statement of the panel of experts, interpretation means "to comprehend and express the meaning or significance of a wide variety of experiences, situations, data, events, judgments, conventions, beliefs, rules, procedures, or criteria." It includes the sub-skills of categorization, decoding significance, and clarifying meaning. Some questions to be considered in measuring interpretation skills are: What does this mean? What's happening? What is the author's purpose, theme, or point of view?


Second, analysis means identifying the intended and actual inferential relationships among questions, statements, concepts, descriptions, or other forms of representation formulated to express opinions, beliefs, experiences, judgments, reasons, or information. Its sub-skills include examining ideas, detecting arguments, and analyzing arguments (Facione & Gittens, 2013). Some of the questions that suggest analysis are: What is your conclusion? What are the similarities and differences between two approaches?

Third, evaluation has been defined by the experts as assessment of the credibility of statements or other representations which are accounts or descriptions of a person's perception, experience, situation, belief, or opinion. It also includes assessment of the logical strength of the actual or intended inferential relationships among statements. Some questions that involve evaluation are: How strong are the arguments? How credible is the claim?

Fourth, Facione and Gittens (2013) elaborated the experts' claim that inference involves identifying and securing the elements needed to draw reasonable conclusions, to form conjectures and hypotheses, to consider relevant information, and to deduce the consequences flowing from data, statements, principles, evidence, judgments, beliefs, opinions, concepts, descriptions, and other definitions. The three sub-skills of inference are querying evidence, conjecturing alternatives, and drawing conclusions. Inference can be manifested by students if they are able to answer the following questions: Given what we know so far, what conclusions can we draw? What does the evidence imply?

Fifth, the experts define explanation as being able to present the results of one's reasoning in a convincing and coherent way (Facione & Gittens, 2013). This implies the ability to give someone a full look at the big picture. The sub-skills under explanation are describing methods, justifying procedures, describing results, proposing and defending with good reasons one's causal and conceptual explanations of events or points of view, and presenting full and well-reasoned arguments in the context of seeking the best understanding possible. Some questions related to explanation are: What are the specific findings or results of the investigation? How did you come to that interpretation?

Sixth, self-regulation is defined by the experts as self-consciously monitoring one's cognitive activities, the elements involved in those activities, and the results deduced, particularly by applying skills in analysis and evaluation to one's own inferential judgments with a view toward questioning, confirming, validating, or correcting one's reasoning or results (Facione & Gittens, 2013). It involves two sub-skills: self-examination and self-correction. A student may possess self-regulation if he or she can answer the following questions: Our position on this issue is too vague; can we be more precise? How good was our methodology, and how well did we follow it?

Methodology

The constructs that underlie the WVSU-COE GS Admission examination were established using confirmatory factor analysis. This kind of factor analysis involves testing hypotheses about the structure of variables in terms of the expected number of significant factors and factor loadings (Lee, 1994).

Participants

The study included as participants 63 doctoral students and 219 master's students who took the admission examination from June 2012 to June 2014.

The College of Education Graduate School Admission Examination

The admission examination for the master's and doctoral programs has two parts: Part I is composed of the verbal and analytical questions, and Part II is the essay question. Part I is designed to measure the critical thinking skills of the graduate students.

The Verbal part includes questions on contextual understanding, analogy, reading comprehension, and antonyms. The analytical part includes questions on mathematical algorithm, conceptual understanding, logical reasoning, problem solving, and reading tables and graphs.

Data Analyses

Factor analysis was conducted to determine the construct validity of the Doctoral and the Master’s Admission examinations. Factor analysis is an analytic technique used to identify a relatively small number of factors that can be used to represent relationships among sets of many interrelated variables.
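As a concrete illustration, the sketch below runs the same kind of analysis with the open-source factor_analyzer package in Python. This is an assumed re-creation using synthetic data, not the study's own code (the study cites Norusis's SPSS guide, so the original analysis was likely run in SPSS); the dimensions simply mirror the 25-item verbal test and the master's sample size.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer

# Synthetic stand-in for the verbal test: 219 examinees x 25 dichotomous items.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(219, 25)).astype(float)

# Extract factors and apply the varimax rotation reported in the study.
fa = FactorAnalyzer(n_factors=10, rotation="varimax")
fa.fit(X)

loadings = fa.loadings_                        # 25 items x 10 rotated factors
original_eigenvalues, _ = fa.get_eigenvalues()
print("Factors with eigenvalue > 1:", int((original_eigenvalues > 1).sum()))
print("Items loading >= .30 on factor 1:",
      [i + 1 for i, l in enumerate(loadings[:, 0]) if abs(l) >= 0.30])
```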


Before factor analysis was allowed to proceed, two tests were undertaken. One is Bartlett's test of sphericity, which was used to determine if the correlation matrix is an identity matrix, that is, all diagonal terms are 1 and all off-diagonal terms are 0. A factor model can be used only if the hypothesis that the population correlation matrix is an identity matrix can be rejected, which happens when the observed significance level is small (lower than the set α = 0.05). The other test performed was the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy. The KMO is an index for comparing the magnitudes of the partial correlation coefficients. Large values of the KMO measure indicate that a factor analysis of the variables is appropriate, since the correlations between pairs of variables can be explained by the other variables. Kaiser (in Norusis, 1986) characterizes KMO values of 0.50 and above as acceptable.

For the Verbal Part of the Doctoral Admission examination, the Kaiser-Meyer-Olkin (KMO) Measure of Sampling Adequacy is .542, and Bartlett's Test of Sphericity is 387.726 with df = 300 and p = .000. On the other hand, the Verbal Part of the Masters Admission examination has a KMO Measure of Sampling Adequacy equal to .594, and a Bartlett's Test of Sphericity of 507.942 with df = 300 and p = .000. For the Analytical Part of the Doctoral Admission examination, the KMO Measure of Sampling Adequacy is .547, and Bartlett's Test of Sphericity is 592.721 with df = 435 and p = .000. On the other hand, the Analytical Part of the Masters Admission examination has a KMO Measure of Sampling Adequacy equal to .620, and a Bartlett's Test of Sphericity of 637.932 with df = 435 and p = .000. With these results, factor analysis was employed.

Results and Discussion

The results of the factor analysis of graduate students' responses to the admission examinations are presented in Tables 1 to 4. The tables present the factors and the factor loading associated with each item. Only factors that account for variances greater than 1 (i.e., with an eigenvalue greater than 1) were included; factors with a variance of less than 1.0 are no better than a single variable, since each standardized variable has a variance of 1.0. Likewise, only items or variables with loadings of .30 or higher were included, since items with smaller loadings represent less than 10 percent of the variance and are doubtful to be taken seriously (Lee, 1994).
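These two retention rules, eigenvalues greater than 1 and loadings of .30 or higher, are easy to express in code. The sketch below uses synthetic data and a placeholder loading matrix purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(219, 25))                    # synthetic item scores
R = np.corrcoef(X, rowvar=False)

# Kaiser criterion: keep components whose eigenvalue exceeds 1.0, i.e. those
# explaining more variance than a single standardized item contributes.
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
n_keep = int((eigenvalues > 1.0).sum())
print("Components retained:", n_keep)

# Loading threshold: keep loadings of .30 or more in absolute value, since
# .30 squared is .09, i.e. under 10 percent shared variance.
loadings = rng.uniform(-1, 1, size=(25, n_keep))  # placeholder loading matrix
flagged = np.argwhere(np.abs(loadings) >= 0.30)
print(f"{len(flagged)} item-component loadings at or above .30")
print("first few (item, component):",
      [(int(i) + 1, int(j) + 1) for i, j in flagged[:5]])
```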


Constructs of the Doctoral Verbal Examination

Results of the factor analysis did not show a clean structure. Items that were initially identified to measure one skill converged with items that measure different skills. It can be gleaned from these results that items that measure reading comprehension merged with items that measure contextual understanding, analogy, and antonyms. Perusal of the rotated component matrix reveals, however, that there are factors composed purely of antonym items and purely of analogy items. Table 1 presents the items that converged after conducting a varimax rotation.

Table 1

Factor Loadings of the Items in the Verbal Part of the Doctoral Admission Examination after Rotation (ten rotated components; each item shown with its rotated loading)

Item                               Loading
1.7   Analogy                        .710
1.13  Reading comprehension         -.707
1.1   Contextual understanding       .702
1.6   Analogy                        .615
1.21  Antonym                        .535
1.11  Reading comprehension          .523
1.25  Antonym                        .843
1.23  Antonym                        .700
1.22  Antonym                        .668
1.19  Antonym                        .551
1.10  Analogy                        .809
1.8   Analogy                        .699
1.16  Reading comprehension          .848
1.18  Antonym                        .677
1.17  Antonym                        .876
1.3   Contextual understanding       .798
1.9   Analogy                        .381
1.20  Antonym                        .876
1.5   Contextual understanding       .835
1.15  Reading comprehension         -.685
1.4   Contextual understanding      -.809
1.14  Reading comprehension          .407
1.12  Reading comprehension          .868
1.24  Antonym                        .842
1.2   Contextual understanding       .429


Figure 1. The scree plot of the components of the verbal part of the doctoral admission examination.

Constructs of the Masters Verbal Examination

Results of the factor analysis done for the master's examination were similarly unclear. Items that were initially identified to measure one skill converged with items that measure different skills; the rotated component matrix shows that many items measuring different skills converged. Results of the factor analyses of the verbal part of both the doctoral and master's admission examinations reveal that critical thinking indeed has multiple definitions and complex, contextual evidence (Ennis, 1985; Facione, 1990). Table 2 presents the items that converged after conducting a varimax rotation.


Table 2

Factor Loadings of the Items in the Masters' Verbal Admission Examination after Rotation (eleven rotated components; each item shown with its rotated loading)

Item                               Loading
1.17  Antonym                        .679
1.12  Reading comprehension          .625
1.3   Contextual understanding       .440
1.8   Analogy                        .417
1.10  Analogy                        .635
1.13  Reading comprehension          .586
1.9   Analogy                        .586
1.23  Antonym                        .533
1.18  Antonym                        .747
1.6   Analogy                        .576
1.22  Antonym                        .550
1.24  Antonym                        .715
1.25  Antonym                        .684
1.14  Reading comprehension          .461
1.2   Contextual understanding      -.759
1.19  Antonym                        .505
1.20  Antonym                        .795
1.1   Contextual understanding      -.403
1.15  Reading comprehension          .752
1.7   Analogy                        .546
1.5   Contextual understanding       .790
1.4   Contextual understanding       .427
1.11  Reading comprehension          .781
1.16  Reading comprehension          .801
1.21  Antonym                        .888


Figure 2. The scree plot of the components of the verbal part of the master’s admission examination.

Constructs of the Doctoral Analytical Examination

Results of the factor analysis show that items that were initially identified to measure one skill converged with items that measure different skills. The rotated component matrix shows that many of the items measuring different skills converged. Items initially identified to measure problem solving converged with items that measure logical reasoning, conceptual understanding, reading tables and graphs, and performing mathematical algorithms. These results are shown in Table 3.


Table 3

Factor Loadings of the Items in the Analytical Part of the Doctoral Admission Examination after Rotation (eleven rotated components; each item shown with its rotated loading)

Item                               Loading
2.22  Problem solving                .616
2.15  Problem solving              -.549
2.11  Reading tables and graphs      .546
2.28  Reading tables and graphs      .738
2.26  Problem solving                .726
2.9   Logical reasoning              .625
2.24  Problem solving                .768
2.25  Problem solving                .723
2.12  Reading tables and graphs      .554
2.29  Reading tables and graphs      .547
2.18  Problem solving                .780
2.30  Problem solving                .666
2.17  Problem solving                .630
2.7   Problem solving                .685
2.20  Problem solving                .670
2.21  Problem solving                .630
2.3   Problem solving                .497
2.5   Problem solving                .739
2.4   Conceptual understanding       .639
2.1   Mathematical algorithm         .799
2.6   Logical reasoning             -.556
2.19  Problem solving                .822
2.8   Logical reasoning             -.521
2.23  Problem solving                .780
2.2   Mathematical algorithm         .671
2.13  Reading tables and graphs      .771
2.27  Reading tables and graphs     -.542
2.16  Problem solving                .431
2.10  Reading tables and graphs      .838


Figure 3. The scree plot of the components of the analytical part of the doctoral admission examination.

Constructs of the Masters’ Analytical Examination

Results of the factor analysis show that items that were initially identified to measure one skill converged with items that measure different skills. The rotated component matrix shows that many of the items measuring different skills converged. Items initially identified to measure problem solving converged with items that measure logical reasoning, conceptual understanding, reading tables and graphs, and performing mathematical algorithms. Results of the factor analyses of the analytical part of both the doctoral and master's admission examinations reveal that critical thinking indeed has multiple definitions and complex, contextual evidence (Ennis, 1985; Facione, 1990). Table 4 shows the results.


Table 4

Factor Loadings of the Items in the Masters' Analytical Admission Examination after Rotation (twelve rotated components; each item shown with its rotated loading)

Item                               Loading
2.26  Problem solving                .719
2.3   Problem solving                .505
2.25  Problem solving                .485
2.9   Logical reasoning              .413
2.19  Problem solving               -.739
2.16  Problem solving                .546
2.13  Reading tables and graphs      .390
2.23  Problem solving                .631
2.11  Reading tables and graphs      .606
2.17  Problem solving                .590
2.10  Reading tables and graphs     -.365
2.12  Reading tables and graphs      .642
2.4   Conceptual understanding       .531
2.1   Mathematical algorithm         .530
2.14  Reading tables and graphs      .702
2.8   Logical reasoning              .607
2.22  Problem solving                .628
2.5   Problem solving                .458
2.30  Problem solving                .71
2.29  Reading tables and graphs      .545
2.7   Logical reasoning             -.515
2.24  Problem solving               -.676
2.18  Problem solving                .639
2.2   Mathematical algorithm        -.780
2.15  Problem solving                .771
2.20  Problem solving                .715
2.21  Problem solving                .655
2.27  Reading tables and graphs      .449
2.6   Logical reasoning              .797
2.28  Reading tables and graphs     -.518


Figure 4. The scree plot of the components of the analytical part of the masters admission examination.

Verbal and Analytical Abilities of the Masters and Doctoral Students

In terms of verbal ability, data in Table 5 show that doctoral and master's students have average verbal ability. However, doctoral students have better analytical skills than the master's students. This is quite expected, as doctoral students have been exposed to more graduate courses that have developed their analytical skills. Exposure to research courses at the master's level may have made doctoral students more critical, especially when making decisions.


Table 5

Verbal and Analytical Abilities of the Masters and Doctoral Students

                        Verbal                         Analytical
Group      N     Mean    SD     Description     Mean    SD     Description
Doctoral   63    13.16   3.79   Average         13.35   4.84   Average
Masters    219   10.96   3.52   Average         11.84   3.80   Low

Note: For verbal ability: 20.6 – 25 = Very high; 15.6 – 20.5 = High; 10.6 – 15.5 = Average; 5.6 – 10.5 = Low; 0 – 5.5 = Very low. For analytical ability: 24.6 – 30 = Very high; 18.6 – 24.5 = High; 12.6 – 18.5 = Average; 6.6 – 12.5 = Low; 0 – 6.5 = Very low.

Differences in the Entrance Verbal and Analytical Abilities of the Masters and Doctoral Students

Data in Table 6 show that the verbal and analytical abilities of doctoral and master’s students are significantly different with t(280) = 4.295, p = .000 and t(85.15) = 2.276, p = .025 respectively.
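For reference, an equivalent pair of tests can be run with SciPy. The sketch below uses synthetic scores drawn to match the reported group sizes, means, and standard deviations (an illustration only; it will not reproduce the exact t values). It also shows why the two reported df differ: the pooled-variance test gives df = n1 + n2 - 2 = 280, while Welch's unequal-variance test gives fractional df such as 85.15.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Synthetic verbal scores matching the reported groups (63 doctoral, 219 master's).
doctoral = rng.normal(13.16, 3.79, size=63)
masters = rng.normal(10.96, 3.52, size=219)

# Pooled-variance t-test: df = 63 + 219 - 2 = 280, as reported for verbal scores.
pooled = stats.ttest_ind(doctoral, masters, equal_var=True)
print(f"pooled: t = {pooled.statistic:.3f}, p = {pooled.pvalue:.3f}")

# Welch's test (unequal variances) yields fractional df, like the df = 85.15
# reported for the analytical scores.
welch = stats.ttest_ind(doctoral, masters, equal_var=False)
print(f"welch:  t = {welch.statistic:.3f}, p = {welch.pvalue:.3f}")
```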

Table 6

Differences in the Entrance Verbal and Analytical Abilities of the Masters and Doctoral Students

Competency   Mean (Doctoral,  Mean (Masters,  Mean        95% CI of the       t        df     Sig.
             n = 63)          n = 219)        Difference  Difference
                                                          Lower    Upper
Verbal       13.16            10.96           2.20        1.19     3.21        4.295**  280    .000
Analytical   13.35            11.84           1.51        .19      2.82        2.276*   85.15  .025

Note: * p < .05. ** p < .001.


Conclusions and Recommendations

From the results, it can be concluded that the College of Education Graduate School Admission examinations for both the Doctoral and Master’s students are valid in terms of the competencies that they purport to measure.

The items in the admission examination measure specific skills subsumed in the construct of critical thinking. This information can therefore be used when revising the items in the admission examination. It could further be said that the admission examination measures varied aspects of critical thinking skills which are expected of graduate students as they enter the age of globalization.

This is fitting if Philippine graduate education is to pursue its goal of harnessing the mind for nation-building.

The entrance skills and competencies of the graduate students, both master's and doctoral, are quite low. It is therefore imperative that graduate school faculty members initiate reforms and re-direct their teaching towards the development of critical thinking skills if graduate education is to do its share in strengthening higher education and the country's overall Human Resource Development (HRD) landscape. The graduate school faculty need to re-examine the academic structure and contents of the courses they teach so that the graduate credits ("units") or completed degrees of the students contextually fit either the Knowledge, Skills, Attitude and Values (KSAV) requirements of the workplace or the specific competencies required in government offices.

Further, results have shown that doctoral students have better entrance skills and competencies than the master's students. Indeed, this is a redeeming factor: it shows that "there really is a gradation of subject matter-knowledge in being 'master's' or 'doctoral'" (Imperial, 2011).


References

Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). New York: Macmillan.

Campbell, D., & Fiske, D. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.

Cronbach, L., & Meehl, P. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.

Cudeck, R. (1988). Multiplicative models and MTMM matrices. Journal of Educational Statistics, 13, 131-147.

Ennis, R. H. (1985). A logical basis for measuring critical thinking skills. Educational Leadership, 43(2), 44-48.

Facione, P. A. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction. Millbrae, CA: The California Academic Press.

Facione, P. A., & Gittens, C. A. (2013). Think critically (2nd ed.). U.K.: Pearson.

Fraenkel, J., & Wallen, N. (1994). How to design and evaluate research in education. New York: McGraw-Hill.

George, D., & Mallery, P. (2009). SPSS for Windows: A simple guide and reference. New York: Pearson Education.

Hammond, K. R., Hamm, R. M., & Grassia, J. (1986). Generalizing over conditions by combining the multitrait-multimethod matrix and the representative design of experiments. Psychological Bulletin, 100, 257-269.

Imperial, N. (2011). Thoughts on re-thinking postgraduate education in the Philippines. Unpublished essay.

Kenny, D. A. (1995). The multitrait-multimethod matrix: Design, analysis, and conceptual issues. In P. Shrout & S. Fiske (Eds.), Personality research, methods, and theory: A festschrift honoring Donald W. Fiske (pp. 111-124). Mahwah, NJ: Erlbaum.

Lai, E. R. (2011). Critical thinking: A literature review. Pearson's Research Reports, 6, 40-41.

Lee, E. (1994). Educational research: Instrumentation, data collection, analysis. Manila: ALJUN Printing Press.

The National Council for Excellence in Critical Thinking. (1987). Defining critical thinking. The Critical Thinking Community. Retrieved from http://www.criticalthinking.org/pages/defining-critical-thinking/766

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.

Reichardt, C., & Coleman, S. C. (1995). The criteria for convergent and discriminant validity in a multitrait-multimethod matrix. Multivariate Behavioral Research, 30, 513-538.

Shrout, P., & Fiske, S. (1995). Personality research, methods, and theory: A festschrift honoring Donald W. Fiske. Hillsdale, NJ: Erlbaum.

Westen, D., & Rosenthal, R. (2003). Quantifying construct validity: Two simple measures. Journal of Personality and Social Psychology, 84(3), 608-618. doi:10.1037/0022-3514.84.3.608

Wothke, W. (1995). Covariance components analysis of the multitrait-multimethod matrix. In P. Shrout & S. Fiske (Eds.), Personality research, methods, and theory: A festschrift honoring Donald W. Fiske (pp. 125-144). Mahwah, NJ: Erlbaum.
