
STARD 2015 Checklist

| Section & Topic | No | Item | Reported on page # |
|---|---|---|---|
| TITLE OR ABSTRACT | 1 | Identification as a study of diagnostic accuracy using at least one measure of accuracy (such as sensitivity, specificity, predictive values, or AUC) | #3 |
| ABSTRACT | 2 | Structured summary of study design, methods, results, and conclusions (for specific guidance, see STARD for Abstracts) | #3 |
| INTRODUCTION | 3 | Scientific and clinical background, including the intended use and clinical role of the index test | #4-5 |
| | 4 | Study objectives and hypotheses | #5 |
| METHODS | | | |
| Study design | 5 | Whether data collection was planned before the index test and reference standard were performed (prospective study) or after (retrospective study) | #5 |
| Participants | 6 | Eligibility criteria | #5-6 |
| | 7 | On what basis potentially eligible participants were identified (such as symptoms, results from previous tests, inclusion in registry) | #5 |
| | 8 | Where and when potentially eligible participants were identified (setting, location and dates) | #5 |
| | 9 | Whether participants formed a consecutive, random or convenience series | #6 |
| Test methods | 10a | Index test, in sufficient detail to allow replication | #7-9 |
| | 10b | Reference standard, in sufficient detail to allow replication | #6 |
| | 11 | Rationale for choosing the reference standard (if alternatives exist) | #6 |
| | 12a | Definition of and rationale for test positivity cut-offs or result categories of the index test, distinguishing pre-specified from exploratory | #7-9 |
| | 12b | Definition of and rationale for test positivity cut-offs or result categories of the reference standard, distinguishing pre-specified from exploratory | #7-9 |
| | 13a | Whether clinical information and reference standard results were available to the performers/readers of the index test | N/A |
| | 13b | Whether clinical information and index test results were available to the assessors of the reference standard | N/A |
| Analysis | 14 | Methods for estimating or comparing measures of diagnostic accuracy | #9 |
| | 15 | How indeterminate index test or reference standard results were handled | #9-10 |
| | 16 | How missing data on the index test and reference standard were handled | #10 |
| | 17 | Any analyses of variability in diagnostic accuracy, distinguishing pre-specified from exploratory | #9 |
| | 18 | Intended sample size and how it was determined | #9 |
| RESULTS | | | |
| Participants | 19 | Flow of participants, using a diagram | #10 |
| | 20 | Baseline demographic and clinical characteristics of participants | #10-11 |
| | 21a | Distribution of severity of disease in those with the target condition | #11 |
| | 21b | Distribution of alternative diagnoses in those without the target condition | N/A |
| | 22 | Time interval and any clinical interventions between index test and reference standard | N/A |
| Test results | 23 | Cross tabulation of the index test results (or their distribution) by the results of the reference standard | #10-13 |
| | 24 | Estimates of diagnostic accuracy and their precision (such as 95% confidence intervals) | #11-13 |
| | 25 | Any adverse events from performing the index test or the reference standard | N/A |
| DISCUSSION | 26 | Study limitations, including sources of potential bias, statistical uncertainty, and generalisability | #16-17 |
| | 27 | Implications for practice, including the intended use and clinical role of the index test | #17 |
| OTHER INFORMATION | 28 | Registration number and name of registry | #6 |
| | 29 | Where the full study protocol can be accessed | #2 |
| | 30 | Sources of funding and other support; role of funders | #2 |


 

STARD 2015 

AIM  

STARD stands for "Standards for Reporting Diagnostic accuracy studies". This list of items was developed to contribute to the completeness and transparency of reporting of diagnostic accuracy studies. Authors can use the list to write informative study reports. Editors and peer-reviewers can use it to evaluate whether the information has been included in manuscripts submitted for publication.

EXPLANATION 

A diagnostic accuracy study evaluates the ability of one or more medical tests to correctly classify study participants as having a target condition. This can be a disease, a disease stage, response or benefit from therapy, or an event or condition in the future. A medical test can be an imaging procedure, a laboratory test, elements from history and physical examination, a combination of these, or any other method for collecting information about the current health status of a patient.

The test whose accuracy is evaluated is called the index test. A study can evaluate the accuracy of one or more index tests.

Evaluating the ability of a medical test to correctly classify patients is typically done by comparing the distribution of the index test results with those of the reference standard. The reference standard is the best available method for establishing the presence or absence of the target condition. An accuracy study can rely on one or more reference standards.

If test results are categorized as either positive or negative, the cross tabulation of the index test results against those of the reference standard can be used to estimate the sensitivity of the index test (the proportion of participants with the target condition who have a positive index test), and its specificity (the proportion without the target condition who have a negative index test). From this cross tabulation (sometimes referred to as the contingency or "2x2" table), several other accuracy statistics can be estimated, such as the positive and negative predictive values of the test. Confidence intervals around estimates of accuracy can then be calculated to quantify the statistical precision of the measurements.
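The calculations described above can be sketched in a few lines of Python. The counts are invented for illustration, and a simple Wald interval is used for the confidence interval (exact or Wilson intervals are often preferred in practice):

```python
import math

def accuracy_measures(tp, fp, fn, tn):
    """Accuracy statistics from a 2x2 cross tabulation of index test
    results against the reference standard."""
    sensitivity = tp / (tp + fn)   # true positives / all with condition
    specificity = tn / (tn + fp)   # true negatives / all without condition
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

def wald_ci(p, n, z=1.96):
    """Approximate 95% Wald confidence interval for a proportion p
    estimated from n participants."""
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts: 80 true positives, 10 false positives,
# 20 false negatives, 90 true negatives.
se, sp, ppv, npv = accuracy_measures(tp=80, fp=10, fn=20, tn=90)
lo, hi = wald_ci(se, 80 + 20)
print(f"sensitivity = {se:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # 0.80 (0.72-0.88)
print(f"specificity = {sp:.2f}, PPV = {ppv:.2f}, NPV = {npv:.2f}")
```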

If the index test results can take more than two values, categorization of test results as positive or negative requires a test positivity cut-off. When multiple such cut-offs can be defined, authors can report a receiver operating characteristic (ROC) curve, which graphically represents the combination of sensitivity and specificity for each possible test positivity cut-off. The area under the ROC curve summarizes the overall diagnostic accuracy of the index test in a single number.
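As a minimal sketch of these ideas (the test scores are invented; the AUC is computed via the equivalent Mann-Whitney formulation rather than by integrating the empirical curve):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen participant with the target condition scores higher
    on the index test than one without it (ties count 1/2)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def roc_point(scores_pos, scores_neg, cutoff):
    """Sensitivity and false positive rate (1 - specificity) at one
    test positivity cut-off; varying the cut-off traces the ROC curve."""
    tpr = sum(s >= cutoff for s in scores_pos) / len(scores_pos)
    fpr = sum(s >= cutoff for s in scores_neg) / len(scores_neg)
    return tpr, fpr

# Hypothetical index test scores for participants with and without
# the target condition.
pos = [0.9, 0.8, 0.7, 0.6, 0.55]
neg = [0.5, 0.4, 0.65, 0.3, 0.2]
print("AUC =", auc(pos, neg))                 # 0.92
print("at cut-off 0.6:", roc_point(pos, neg, 0.6))
```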

The intended use of a medical test can be diagnosis, screening, staging, monitoring, surveillance, prediction or prognosis. The clinical role of a test explains its position relative to existing tests in the clinical pathway. A replacement test, for example, replaces an existing test. A triage test is used before an existing test; an add-on test is used after an existing test.

Besides diagnostic accuracy, several other outcomes and statistics may be relevant in the evaluation of medical tests. Medical tests can also be used to classify patients for purposes other than diagnosis, such as staging or prognosis. The STARD list was not explicitly developed for these other outcomes, statistics, and study types, although most STARD items would still apply.

DEVELOPMENT 

This STARD list was released in 2015. The 30 items were identified by an international expert group of methodologists, researchers, and editors. The guiding principle in the development of STARD was to select items that, when reported, would help readers to judge the potential for bias in the study, to appraise the applicability of the study findings and the validity of conclusions and recommendations. The list represents an update of the first version, which was published in 2003.

 

More information can be found at http://www.equator-network.org/reporting-guidelines/stard.

TRIPOD Checklist: Prediction Model Development and Validation

| Section/Topic | Item | Checklist item | Page |
|---|---|---|---|
| Title and abstract | | | |
| Title | 1 D;V | Identify the study as developing and/or validating a multivariable prediction model, the target population, and the outcome to be predicted. | 1 |
| Abstract | 2 D;V | Provide a summary of objectives, study design, setting, participants, sample size, predictors, outcome, statistical analysis, results, and conclusions. | 3 |
| Introduction | | | |
| Background and objectives | 3a D;V | Explain the medical context (including whether diagnostic or prognostic) and rationale for developing or validating the multivariable prediction model, including references to existing models. | 4-5 |
| | 3b D;V | Specify the objectives, including whether the study describes the development or validation of the model or both. | 5 |
| Methods | | | |
| Source of data | 4a D;V | Describe the study design or source of data (e.g., randomized trial, cohort, or registry data), separately for the development and validation data sets, if applicable. | 5-6 |
| | 4b D;V | Specify the key study dates, including start of accrual; end of accrual; and, if applicable, end of follow-up. | 5 |
| Participants | 5a D;V | Specify key elements of the study setting (e.g., primary care, secondary care, general population) including number and location of centres. | 5 |
| | 5b D;V | Describe eligibility criteria for participants. | 5-6 |
| | 5c D;V | Give details of treatments received, if relevant. | N/A |
| Outcome | 6a D;V | Clearly define the outcome that is predicted by the prediction model, including how and when assessed. | 6-9 |
| | 6b D;V | Report any actions to blind assessment of the outcome to be predicted. | N/A |
| Predictors | 7a D;V | Clearly define all predictors used in developing or validating the multivariable prediction model, including how and when they were measured. | 7-8 |
| | 7b D;V | Report any actions to blind assessment of predictors for the outcome and other predictors. | N/A |
| Sample size | 8 D;V | Explain how the study size was arrived at. | 9 |
| Missing data | 9 D;V | Describe how missing data were handled (e.g., complete-case analysis, single imputation, multiple imputation) with details of any imputation method. | 10 |
| Statistical analysis methods | 10a D | Describe how predictors were handled in the analyses. | 9 |
| | 10b D | Specify type of model, all model-building procedures (including any predictor selection), and method for internal validation. | 9 |
| | 10c V | For validation, describe how the predictions were calculated. | 9 |
| | 10d D;V | Specify all measures used to assess model performance and, if relevant, to compare multiple models. | 9 |
| | 10e V | Describe any model updating (e.g., recalibration) arising from the validation, if done. | N/A |
| Risk groups | 11 D;V | Provide details on how risk groups were created, if done. | N/A |
| Development vs. validation | 12 V | For validation, identify any differences from the development data in setting, eligibility criteria, outcome, and predictors. | 10-11 |
| Results | | | |
| Participants | 13a D;V | Describe the flow of participants through the study, including the number of participants with and without the outcome and, if applicable, a summary of the follow-up time. A diagram may be helpful. | 10-11 |
| | 13b D;V | Describe the characteristics of the participants (basic demographics, clinical features, available predictors), including the number of participants with missing data for predictors and outcome. | 10-11 |
| | 13c V | For validation, show a comparison with the development data of the distribution of important variables (demographics, predictors and outcome). | 10-11, 13 |
| Model development | 14a D | Specify the number of participants and outcome events in each analysis. | 10-12 |
| | 14b D | If done, report the unadjusted association between each candidate predictor and outcome. | 11-12 |
| Model specification | 15a D | Present the full prediction model to allow predictions for individuals (i.e., all regression coefficients, and model intercept or baseline survival at a given time point). | 13 |
| | 15b D | Explain how to use the prediction model. | 12-13 |
| Model performance | 16 D;V | Report performance measures (with CIs) for the prediction model. | 13 |
| Model-updating | 17 V | If done, report the results from any model updating (i.e., model specification, model performance). | 12-13 |
| Discussion | | | |
| Limitations | 18 D;V | Discuss any limitations of the study (such as nonrepresentative sample, few events per predictor, missing data). | 16-17 |
| Interpretation | 19a V | For validation, discuss the results with reference to performance in the development data, and any other validation data. | 13 |
| | 19b D;V | Give an overall interpretation of the results, considering objectives, limitations, results from similar studies, and other relevant evidence. | 13-16 |
| Implications | 20 D;V | Discuss the potential clinical use of the model and implications for future research. | 17 |
| Other information | | | |
| Supplementary information | 21 D;V | Provide information about the availability of supplementary resources, such as study protocol, Web calculator, and data sets. | 6 |
| Funding | 22 D;V | Give the source of funding and the role of the funders for the present study. | 2 |

*Items relevant only to the development of a prediction model are denoted by D, items relating solely to a validation of a prediction model are denoted by V, and items relating to both are denoted D;V. We recommend using the TRIPOD Checklist in conjunction with the TRIPOD Explanation and Elaboration document.

CONSORT 2010 checklist of information to include when reporting a randomised trial*

| Section/Topic | Item No | Checklist item | Reported on page No |
|---|---|---|---|
| Title and abstract | 1a | Identification as a randomised trial in the title | N/A |
| | 1b | Structured summary of trial design, methods, results, and conclusions (for specific guidance see CONSORT for abstracts) | #3 |
| Introduction | | | |
| Background and objectives | 2a | Scientific background and explanation of rationale | #4-5 |
| | 2b | Specific objectives or hypotheses | #5 |
| Methods | | | |
| Trial design | 3a | Description of trial design (such as parallel, factorial) including allocation ratio | #6 |
| | 3b | Important changes to methods after trial commencement (such as eligibility criteria), with reasons | N/A |
| Participants | 4a | Eligibility criteria for participants | #5-6 |
| | 4b | Settings and locations where the data were collected | #5 |
| Interventions | 5 | The interventions for each group with sufficient details to allow replication, including how and when they were actually administered | #7-9 |
| Outcomes | 6a | Completely defined pre-specified primary and secondary outcome measures, including how and when they were assessed | #6 |
| | 6b | Any changes to trial outcomes after the trial commenced, with reasons | N/A |
| Sample size | 7a | How sample size was determined | #9 |
| | 7b | When applicable, explanation of any interim analyses and stopping guidelines | N/A |
| Randomisation: Sequence generation | 8a | Method used to generate the random allocation sequence | N/A |
| | 8b | Type of randomisation; details of any restriction (such as blocking and block size) | N/A |
| Allocation concealment mechanism | 9 | Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned | N/A |
| Implementation | 10 | Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions | N/A |
| Blinding | 11a | If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how | N/A |
| | 11b | If relevant, description of the similarity of interventions | N/A |
| Statistical methods | 12a | Statistical methods used to compare groups for primary and secondary outcomes | #9 |
| | 12b | Methods for additional analyses, such as subgroup analyses and adjusted analyses | N/A |
| Results | | | |
| Participant flow (a diagram is strongly recommended) | 13a | For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome | #10-11 |
| | 13b | For each group, losses and exclusions after randomisation, together with reasons | #10-11 |
| Recruitment | 14a | Dates defining the periods of recruitment and follow-up | #5 |
| | 14b | Why the trial ended or was stopped | #5 |
| Baseline data | 15 | A table showing baseline demographic and clinical characteristics for each group | #10-11 |
| Numbers analysed | 16 | For each group, number of participants (denominator) included in each analysis and whether the analysis was by original assigned groups | #10-11 |
| Outcomes and estimation | 17a | For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval) | #11-13 |
| | 17b | For binary outcomes, presentation of both absolute and relative effect sizes is recommended | #9 |
| Ancillary analyses | 18 | Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from exploratory | N/A |
| Harms | 19 | All important harms or unintended effects in each group (for specific guidance see CONSORT for harms) | N/A |
| Discussion | | | |
| Limitations | 20 | Trial limitations, addressing sources of potential bias, imprecision, and, if relevant, multiplicity of analyses | #16-17 |
| Generalisability | 21 | Generalisability (external validity, applicability) of the trial findings | #16-17 |
| Interpretation | 22 | Interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence | #14-16 |
| Other information | | | |
| Registration | 23 | Registration number and name of trial registry | #6 |
| Protocol | 24 | Where the full trial protocol can be accessed, if available | #5 |
| Funding | 25 | Sources of funding and other support (such as supply of drugs), role of funders | #2 |

*We strongly recommend reading this statement in conjunction with the CONSORT 2010 Explanation and Elaboration for important clarifications on all the items. If relevant, we also recommend reading CONSORT extensions for cluster randomised trials, non-inferiority and equivalence trials, non-pharmacological treatments, herbal interventions, and pragmatic trials. Additional extensions are forthcoming: for those and for up to date references relevant to this checklist, see www.consort-statement.org.
