
Making Sense of Methods and Measurement: Lawshe's Content Validity Index

Article  in  Clinical Simulation in Nursing · December 2016

DOI: 10.1016/j.ecns.2016.08.002




Departments

Making Sense of Methods and Measurement: Lawshe’s Content Validity Index

Gregory E. Gilbert, EdD, MSPH (a,b); Susan Prion, EdD, RN, CNE (c,*)

a DMI's-IRCS, Iselin, NJ 08830, USA
b Center for Teaching and Learning, Ross University School of Medicine, Roseau, Dominica, West Indies
c School of Nursing and Health Professions, University of San Francisco, San Francisco, CA 94117, USA

Calculating Lawshe’s Content Validity Index

Simulation performance evaluation is a complex and complicated process. To produce valid and reliable assessment data, the instruments used to gather the data must be empirically grounded. In previous columns, we have discussed reliability (Adamson & Prion, 2012a) and validity (Adamson & Prion, 2012b, 2012c). In this column, we extend our understanding of validity by introducing Lawshe's Content Validity Ratio (CVR) and Content Validity Index (CVI), used to quantify the validity of an assessment instrument or tool as evaluated by a panel of clinical experts.

A validity study begins with gathering evidence-based items for potential inclusion in the instrument or tool. If a skill is being assessed, locate the best practices for performing the skill and include all the steps for conducting that skill listed by all sources. Once the items to be evaluated have been collected, a Content Evaluation Panel is formed.

The Content Evaluation Panel should be composed of persons who are experts about the domain being studied.

Ideally, there should be a range of experts (also known as subject matter experts) on this panel at various professional levels. In content areas where it is difficult to find experts, the use of three experts is acceptable; normally, a panel of 5-10 experts is preferred. The use of more than 10 experts is probably unnecessary (Lynn, 1986). For example, if assessing the validity of a task trainer used for urinary catheterization, an investigator could assemble a group of nurses, nurse practitioners, and physician assistants. These are experts representing practitioners who regularly insert urinary catheters.

Each member of the panel is supplied with the list of evidence-based items chosen by the researcher to represent the construct or skill (Figure 1). Independent of the other panelists, each panelist is asked to rate each of the items as "essential," "useful," or "not necessary." A weighted value is assigned to each rating.

Responses from all panelists are pooled, and the number indicating "essential" for each item is determined.
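The pooling step can be sketched in a few lines of Python. This is a minimal illustration, not part of the article; the item names and ratings are hypothetical.

```python
from collections import Counter

# Hypothetical panel responses: one list per item, one rating per panelist.
ratings = {
    "Don sterile gloves": ["essential", "essential", "useful", "essential", "essential"],
    "Document the procedure": ["useful", "not necessary", "essential", "useful", "useful"],
}

# Pool responses and count how many panelists marked each item "essential".
essential_counts = {item: Counter(votes)["essential"] for item, votes in ratings.items()}
print(essential_counts)  # → {'Don sterile gloves': 4, 'Document the procedure': 1}
```

These per-item counts are the input to the CVR calculation described next.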

Calculating the Content Validity Ratio

The CVR is an item statistic useful in rejection or retention of individual items and is internationally recognized as the method for establishing content validity (Wilson, Pan, & Schumsky, 2012). The CVI is the mean CVR for all the items included in the final instrument (DeVon et al., 2007). When all panelists say that the tested knowledge or skill is "essential," or when none say that it is "essential," we are confident to include or delete the item. It is when there is not consensus that item issues arise. Two assumptions are made, each of which is consistent with established psychophysical principles:

Any item, performance on which is perceived to be "essential" by more than half of the panelists, has some degree of content validity.


The more panelists (beyond 50%) who perceive an item as "essential," the greater the extent or degree of its content validity.

The CVR is calculated using the formula shown in Figure 2.

When all panelists agree an item is "essential," the CVR is 1.00 (adjusted to 0.99 for ease of manipulation according to Lawshe [1975]). When the number of panelists rating an item "essential" is more than half, but less than all, the CVR is somewhere between 0 and 0.99. When exactly half of the raters mark the item "essential," the CVR is 0; when fewer than half do, the CVR is negative, reaching -1.00 if no rater marks the item "essential."
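Lawshe's ratio can be implemented directly from its definition. The sketch below is illustrative, not from the article; the function name and example panel sizes are our own.

```python
def cvr(n_essential: int, n_panelists: int) -> float:
    """Lawshe's Content Validity Ratio: CVR = (n_e - N/2) / (N/2)."""
    half = n_panelists / 2
    return (n_essential - half) / half

# With a five-member panel:
print(cvr(5, 5))  # → 1.0  (unanimous; Lawshe adjusts this to 0.99)
print(cvr(4, 5))  # → 0.6  (more than half, but not all)
print(cvr(2, 5))  # → -0.2 (fewer than half rate the item "essential")
```

Note that the raw ratio ranges from -1.00 to 1.00, with 0 at exactly half agreement.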

Item Selection

A CVR is calculated for each item. Items whose level of agreement could plausibly have occurred by chance are eliminated using a table of critical values found in Ayre and Scally (2014). Alternatively, Polit, Beck, and Owen (2007) suggest that a CVR of 0.78 or higher with three or more experts could be considered evidence of good content validity. If an item does not reach this threshold, it would normally be deleted from the final instrument.

Content Validity Index

The CVR tells us about the validity of individual items. If we want to know the content validity of the entire instrument

or tool, we can calculate a CVI. The CVI is simply the mean of the CVR values for all items meeting the CVR threshold of 0.78 and retained for the final instrument. Tilden, Nelson, and May (1990) suggest CVI values exceed 0.70; however, Davis (1992) suggests a CVI exceeding 0.80 is preferred. In many situations, it is more efficient to report the overall CVI score than each individual item CVR.
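Because the CVI is just the mean of the retained items' CVRs, it follows directly from the previous step. The retained CVR values below are hypothetical.

```python
from statistics import mean

# Hypothetical CVRs of the items retained after applying the 0.78 cutoff.
retained_cvrs = [0.99, 0.80, 0.78]

cvi = mean(retained_cvrs)  # CVI = mean CVR of the retained items
print(round(cvi, 3))  # → 0.857, above Davis's (1992) preferred 0.80
```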

The CVR is a useful statistical technique to determine the validity of individual instrument items, as rated by a panel of content experts. The CVI provides a numeric value for the overall mean CVRs of all items included in the instrument. Both the CVR and CVI can provide researchers and consumers with a quantitative measure of the validity of a simulation evaluation instrument.

References

Adamson, K., & Prion, S. K. (2012a). Making sense of methods and measurement: Reliability. Clinical Simulation in Nursing, 8(6), e259-e260.

Adamson, K., & Prion, S. K. (2012b). Making sense of methods and measurement: Validity Part II. Clinical Simulation in Nursing, 8(8), e383-e384.

Adamson, K., & Prion, S. K. (2012c). Making sense of methods and measurement: Validity Part I. Clinical Simulation in Nursing, 8(7), e319-e320.

Ayre, C., & Scally, A. (2014). Critical values for Lawshe's content validity ratio: Revisiting the original methods of calculation. Measurement and Evaluation in Counseling and Development, 47(1), 79-86. http://dx.doi.org/10.1177/0748175613513808

Davis, L. (1992). Instrument review: Getting the most from a panel of experts. Applied Nursing Research, 5(4), 194-197.

DeVon, H., Block, M. E., Moyle-Wright, P., Ernst, D. M., Hayden, S. J., Lazzara, D. J., … Kostas-Polston, E. (2007). A psychometric toolbox for testing validity and reliability. Journal of Nursing Scholarship, 39(2), 155-164.

Lawshe, C. (1975). A quantitative approach to content validity. Personnel Psychology, 28(4), 563-575. http://dx.doi.org/10.1111/j.1744-6570.1975.tb01393.x

Lynn, M. (1986). Determination and quantification of content validity. Nursing Research, 35(6), 382-385.

Polit, D., Beck, C., & Owen, S. (2007). Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Research in Nursing and Health, 30(4), 459-467. http://dx.doi.org/10.1002/nur.20199

Tilden, V., Nelson, C., & May, B. (1990). Use of qualitative methods to enhance content validity. Nursing Research, 39(3), 172-175.

Wilson, F., Pan, W., & Schumsky, D. (2012). Recalculation of the critical values for Lawshe's content validity ratio. Measurement and Evaluation in Counseling and Development, 45(3), 197-210. http://dx.doi.org/10.1177/0748175612440286

CVR = (n_e - N/2) / (N/2)

where n_e is the number of panelists identifying an item as "essential" and N is the total number of panelists (N/2 is half the total number of panelists).

Figure 2. Equation to calculate Lawshe's Content Validity Ratio (CVR).

Is the skill (or knowledge) measured by this item:
o Essential?
o Useful but not essential? Why?
o Not necessary? Why?

Figure 1. Lawshe's method for assessing content validity.

Clinical Simulation in Nursing, Volume 12, Issue 12, pp. 530-531.
