
A. THE KEY PRINCIPLES

4. Test reliability

Apart from those mentioned above, the reliability factor may come from the test itself. A test that, when administered twice to the same person, yields similar results can be said to be reliable. Unreliable assessments or tests can stem from several sources, including poorly written test items, too many items for the test takers to accomplish, or timed tests that increase test takers’ anxiety.
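To make the idea of test-retest consistency more concrete, the following sketch estimates reliability as the Pearson correlation between two administrations of the same test, which is a common way to quantify it. This is only an illustrative sketch: the student scores and variable names are hypothetical assumptions, not data from this book.

    # Illustrative sketch: test-retest reliability as the Pearson correlation
    # between two administrations of the same test (hypothetical scores).
    from statistics import correlation  # requires Python 3.10+

    first_administration = [72, 85, 90, 64, 78, 88]   # hypothetical scores, time 1
    second_administration = [70, 83, 92, 66, 75, 90]  # hypothetical scores, time 2

    # A coefficient close to 1.0 means students obtained similar scores on both
    # occasions, i.e. the test behaves consistently over time.
    reliability = correlation(first_administration, second_administration)
    print(f"Estimated test-retest reliability: {reliability:.2f}")

A coefficient near 1.0 would support the claim of consistency; a markedly lower value would point back to the sources of unreliability listed above.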

Some important points regarding assessment reliability can be examined in the following list by Russell and Airasian.32

1. Reliability refers to the consistency of assessment information.

2. Reliability is not concerned with the appropriateness of the assessment information collected.

32 Russell and Airasian, Classroom Assessment: Concepts and Applications.

3. Reliability exists in degrees (e.g., highly reliable, moderately reliable)

4. Reliability is a necessary but insufficient condition for validity.

The following are some considerations for maintaining reliability.

• The proper review of content before an examination ensures that students are prepared in a uniform and consistent manner.

• Directions for any assessment activity must be clear and understood by all participants.

• Consistent administration methods must be followed during all evaluation activities. Test items or activities must be given and scored in a consistent and accurate manner (which necessitates the use of objective scoring systems).

• Maintaining a consistent and suitable completion time helps ensure equity and fairness for all participants in an assessment activity.

• During an examination, distractions and disruptions must be avoided at all costs.

c. Validity

Validity, also known as accuracy, refers to whether an assessment measures what it is supposed to measure. By analogy, when an arrow is aimed precisely at the target, it is highly likely to hit it. The arrow represents the assessment; when it precisely measures students’ performance or achievement against the standards set out, it can be said to be valid. Compared to the other principles, validity is the most important aspect of assessment.33,34

33 W. James Popham, Classroom Assessment: What Teachers Need to Know (ERIC, 1999).

34 Brown and Abeywickrama, Language Assessment Principles and Classroom Practices.

Valid findings from tests, measures, or assessment methods are required; otherwise, incorrect inferences would be drawn, and achieving consistent and dependable findings each time will not change that fact.35 However, validity is more than just a matter of accuracy, as it also entails educational decision-making. Russell and Airasian define validity as a concern with whether the information gathered is pertinent to the decision to be made.36 The assessment information garnered here must be accurate, as it informs the decision. Popham explains that the more teachers know about their students with regard to educationally relevant variables, the better the educational decisions they can make about those students.37

To illustrate these concepts, we may imagine an English teacher who discovers that her students know much more about English grammar than she previously expected; she would then likely set more advanced grammar topics in her lessons. However, her decision to tackle more sophisticated grammar should be grounded in an accurate assessment of her students’ knowledge of grammar. The more accurate the assessment data on which the decision is founded, the better the quality of the decision, and vice versa. Another example worthy of our attention is a teacher who wants to know the extent to which her students meet the curricular objectives; she would likely sample the students’ performance on the intended objectives using, for example, a test, since it is impossible to measure the students’ entire ability regarding those objectives. This sampled performance is then used to generalize about whether the students have mastered the curricular objectives as a whole.
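The sampling idea in the previous paragraph can also be pictured with a small sketch: the teacher scores a sample of items tied to the curricular objectives and uses the proportion answered correctly to infer mastery of the objectives as a whole. The item results and the 0.80 cutoff below are hypothetical assumptions chosen only for illustration.

    # Illustrative sketch: generalizing from a sample of scored items to a
    # judgment about mastery of the curricular objectives (hypothetical data).
    sampled_item_results = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # 1 = correct, 0 = incorrect
    mastery_cutoff = 0.80  # hypothetical criterion for "meets the objectives"

    proportion_correct = sum(sampled_item_results) / len(sampled_item_results)

    print(f"Proportion correct on sampled items: {proportion_correct:.0%}")
    if proportion_correct >= mastery_cutoff:
        print("Inference: the student likely meets the objectives")
    else:
        print("Inference: further instruction or assessment is needed")

The quality of such an inference depends directly on how well the sampled items represent the objectives, which is exactly the validity concern discussed here.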

To summarize, Brown and Abeywickrama outline several characteristics of test validity as follows:38

1. Measures exactly what it proposes to measure

2. Does not measure irrelevant or ‘contaminating’ variables

35 Witte, Classroom Assessment for Teachers.

36 Russell and Airasian, Classroom Assessment: Concepts and Applications.

37 Popham, Classroom Assessment: What Teachers Need to Know.

38 Brown and Abeywickrama, Language Assessment Principles and Classroom Practices.

3. Relies as much as possible on empirical evidence (performance)

4. Involves performance that samples the criterion (objective)

5. Offers useful, meaningful information about the test taker’s ability

6. Is supported by theoretical rationale or arguments

In addition to these, we need to keep in mind the nature of validity as outlined by Miller et al.:39

1. Validity refers to the appropriateness of the use and interpretation of an assessment procedure for a given group of individuals, not to the procedure itself.

2. Validity is a matter of degree; rather than being simply valid or invalid, an assessment procedure may be more or less valid.

3. Validity is always specific to a particular use or interpretation and to a particular population of test takers.

4. Validity is a unitary concept, meaning that the nature of validity is based on the standards set by professional testing organizations.

5. Validity entails an overall evaluative judgment.

Apart from these, validity is a complex construct, as it may be supported by several kinds of validity evidence.