INTERVENTION MANUAL
9.3 STRATEGIES AND METHODS FOR ASSESSING OPERATIONAL FIDELITY
9.3.2 Methods for Assessing Interventionist Adherence
Once validated, the instrument is used to gather data on the interventionists’ adherence to the intervention. Adherence data can be obtained from different sources (including research or clinical personnel, interventionists, and clients) and through different methods (including observation and self-report by interventionists and clients) (Campbell et al., 2013; McGee et al., 2018; Walton et al., 2017). Each method is described below, and its strengths or advantages and limitations or disadvantages are discussed.
9.3.2.1 Observation of Intervention Delivery
Interventionists are observed while they provide the intervention sessions. The application of this method involves training the observers and conducting the observation. Observers’ level of agreement, or inter-rater reliability, is evaluated throughout the training and the actual observation (Swindle et al., 2018).
Training of Observers
Observers include research staff members or other clinical personnel responsible for assessing the interventionists’ or health professionals’ adherence to the manual when delivering the intervention in research or practice, respectively. The training is intensive, involving didactic and hands-on phases; it aims to prepare observers to conduct the observation and to use the instrument to record adherence data. The didactic phase is informed by the instrument manual (Section 9.3.1) and covers information on (1) the intervention, with a particular focus on its components and the specific activities that should be performed in each planned session; (2) the instrument content and rating scale, with a detailed explanation of how each activity is exhibited; and (3) the observation logistics, that is, what to observe, where and when, and how to document it (Forsberg et al., 2015).
The hands-on phase consists of reviewing video or audio recordings of intervention delivery sessions by the trainee and the trainer. Both conduct the assessment and document performance of the activities on the instrument form independently. The trainer and trainee then compare their responses to determine agreement and disagreement. Disagreements are discussed and resolved. The training is extended, as necessary, until the trainee attains an acceptable (usually 80%) level of agreement or inter-observer (inter-rater) reliability.
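The agreement criterion described above can be illustrated with a short computation. The sketch below is not from the manual; the item ratings are hypothetical (1 = activity performed, 0 = not performed), and it uses simple percent agreement rather than a chance-corrected index.

```python
# Illustrative sketch: percent agreement between a trainer's and a trainee's
# ratings of the same recorded session. Ratings here are hypothetical:
# 1 = activity performed, 0 = activity not performed.

def percent_agreement(trainer, trainee):
    """Return the proportion of items on which the two observers agree."""
    if len(trainer) != len(trainee):
        raise ValueError("Both observers must rate the same set of items")
    matches = sum(a == b for a, b in zip(trainer, trainee))
    return matches / len(trainer)

trainer_ratings = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1]
trainee_ratings = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]

agreement = percent_agreement(trainer_ratings, trainee_ratings)
print(f"Agreement: {agreement:.0%}")  # 9 of 10 items match -> 90%
```

In practice, teams often supplement percent agreement with a chance-corrected statistic (e.g. Cohen’s kappa), since raw agreement can be inflated when most activities are routinely performed.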
Conducting Observation
Observation of intervention delivery is usually done for (1) each interventionist providing the intervention to clients, individually or in groups, via in-person, telephone, or other technology-based methods of offering the intervention; (2) all sessions planned to deliver all of the intervention’s components to clients; and (3) the whole or total duration of each session given. This is essential to comprehensively assess interventionists’ performance, particularly when the content of the sessions is cumulative and demands engagement in different sets of specific and nonspecific activities, as well as to enable meaningful computation of adherence scores that quantify the percentage of activities actually performed out of the total planned.
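The adherence score just described, the percentage of planned activities actually performed, can be computed directly from the observation instrument. The following sketch is illustrative only; the activity names and observer ratings are hypothetical.

```python
# Hedged illustration: a session-level adherence score computed as the
# percentage of planned activities actually performed, as recorded on the
# observation instrument. Activity names and ratings are hypothetical.

planned_activities = ["review goals", "present content", "demonstrate skill",
                      "guide practice", "give feedback", "assign homework"]

# Observer's record: True if the activity was performed during the session.
performed = {"review goals": True, "present content": True,
             "demonstrate skill": True, "guide practice": False,
             "give feedback": True, "assign homework": False}

performed_count = sum(performed[a] for a in planned_activities)
adherence_pct = performed_count / len(planned_activities) * 100
print(f"Adherence: {adherence_pct:.1f}% "
      f"({performed_count}/{len(planned_activities)} activities)")
```

Scores computed per session can then be averaged across sessions or clients to summarize an interventionist’s overall adherence.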
Although the ideal is to conduct the observation on all interventionists providing all sessions to all clients, this may not be feasible. Therefore, the observation can be done on each interventionist for 10–25% of her or his clients or of the sessions she or he delivers (Mars et al., 2013; O’Shea et al., 2016; Swindle et al., 2018).
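When only a subset of sessions can be observed, the sample is often drawn at random within the 10–25% range noted above. The sketch below is a hypothetical illustration of this sampling step; the session identifiers and the 20% fraction are assumptions, not prescriptions from the manual.

```python
# Illustrative sketch: randomly selecting about 20% of an interventionist's
# delivered sessions for observation, within the 10-25% range suggested in
# the text. Session identifiers are hypothetical.

import random

sessions = [f"session_{i:02d}" for i in range(1, 21)]  # 20 delivered sessions
sampling_fraction = 0.20                               # within the 10-25% range

rng = random.Random(42)  # fixed seed so the audit sample is reproducible
n_sampled = max(1, round(len(sessions) * sampling_fraction))
sampled = rng.sample(sessions, n_sampled)

print(f"Observing {n_sampled} of {len(sessions)} sessions: {sorted(sampled)}")
```

Fixing the random seed documents exactly which sessions were selected, which supports later auditing of the fidelity assessment procedure.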
Observation of intervention delivery can be either direct or indirect; both are done by trained observers.
Direct Observation
In direct observation, the observers are physically present during the intervention delivery. They attend the sessions with individual clients or groups of clients; however, they do not participate in any intervention-related activities. Thus, they assume the nonparticipant observer role. Observers are expected to follow the interventionist’s presentation of the content or information, performance of specific and nonspecific activities, and adaptation of content or activities as needed. The observer should be alert and cognitively able to recognize what the interventionist does, to evaluate the adequacy or consistency of performance with the activities as delineated in the instrument for assessing adherence, and to record the observations accurately.
Direct observation poses logistical challenges. Observers’ attendance at the sessions requires clients’ oral approval or formal written consent (as required by the research ethics board or committee). Observers’ presence may not be logistically possible, such as when the sessions are held with individual clients, in geographically dispersed areas, or at a place and time convenient to clients (e.g. in the client’s home, in the evening). Some observers may be overwhelmed with the responsibility of simultaneously following, interpreting, and documenting the interventionist’s performance. Two observers are needed to overcome this challenge and to reduce potential observer bias. Consequently, direct observation is resource intensive, time consuming, and costly (Toomey et al., 2016).
Direct observation is advantageous. It is considered a valid method for assessing interventionists’ adherence (DiRezze et al., 2013). It enables objective assessment of performance that is not tainted by social desirability bias. Specifically, the observer’s presence is useful in capturing nuances in interventionists’ performance of specific activities, and nonverbal responses that may be associated with some nonspecific activities, such as eye contact and smiling (Toomey et al., 2016; Walton et al., 2017).
Direct observation has several limitations. Despite intensive training, some observers may not correctly recognize, interpret, and document performance of the intervention activities. Observer fatigue may also lead to inaccurate reporting.
Observer expectancies may contribute to biased reporting (Gresham, 2009). The observer’s presence at the session may be perceived as intrusive by both the interventionist and clients, producing reactivity (Hardeman et al., 2008). In other words, the observer’s presence may not be well received by, and may change the behavior of, those observed (Walton et al., 2017). For instance, interventionists may exhibit their best performance and behave in a socially expected and appropriate manner, thereby introducing observation bias indicated by the high levels of adherence commonly reported in research and practice (e.g. Mars et al., 2013). Clients may limit their engagement in the planned activities, and interventionists may use different or additional strategies to engage clients, which may be interpreted as deviations in performance, potentially resulting in less-than-optimal adherence. Logistical challenges may constrain direct observation to a few sessions provided by each interventionist. The reliance on data gathered in these sessions leads to incomplete information on interventionists’ adherence across clients and over sessions (Berkel et al., 2019; Cross & West, 2011; Gresham, 2009). The use of these data yields inaccurate scores of the interventionists’ actual level of adherence.
Indirect Observation
Indirect observation is used when the observer’s attendance at the intervention sessions is not logistically possible or when clients’ approval of the observer’s presence is not granted. It consists of reviewing video or audio recordings of the intervention sessions. As in direct observation, trained observers are expected to recognize what the interventionist does, evaluate the consistency of the interventionist’s actual performance with the activities as planned, and record their observations on the instrument form. The review of the recordings is done at the required pace, which allows for a comprehensive and detailed examination of the interventionist’s adherence.
Indirect observation generates some challenges. Clients’ written consent must be secured prior to the video or audio recording. Video recording can be done in the presence or absence of a staff member; the presence of a staff member may be required to monitor the appropriate functioning of the recording equipment and to manipulate the video recorder, as necessary, to capture the interventionist’s performance (e.g. zooming in to capture the demonstration of a particular skill). The staff member’s presence is resource intensive, time consuming, and not always feasible. In the absence of a staff member, the video recorder is set to focus on the interventionist, with the drawback of missing the performance of some activities when the interventionist moves out of the recorder’s range. Audio recording requires high-quality equipment that can clearly record the voices of the interventionist and clients participating in face-to-face, telephone, or other technology-based methods of offering the sessions. Additional challenges reported with the use of video and audio recording include equipment failure and forgetting to turn on the recording equipment, resulting in the loss of adherence data (Hardeman et al., 2008; Mars et al., 2013).
Similar to direct observation, indirect observation is considered a valid method for objectively assessing interventionists’ performance. Compared to audio recording, video recording captures the performance of specific activities, such as demonstration of skills, and nonverbal behaviors reflecting nonspecific behaviors (Toomey et al., 2016). Indirect observation shares the same limitations (i.e. observer bias, reactivity, incomplete adherence data) as direct observation.
9.3.2.2 Interventionist Self-report on Adherence
Interventionist self-report is another method for gathering data on adherence. It is useful to complement and supplement the adherence data collected through observation. Self-report is used to obtain relevant data in situations where observation is not feasible or appropriate, such as interventions addressing sensitive topics and requiring maintenance of clients’ privacy and confidentiality. With the self-report method, interventionists are requested to document the intervention activities they performed when providing each session to individual clients or groups of clients (Campbell et al., 2013). This should be done immediately following the completion of each session in order to minimize recall bias (Melder et al., 2006).
The documentation or reporting on the activities performed can be unstructured or structured. With unstructured documentation, interventionists list, in their own words, the activities they carried out during the session. This may take some time, as the interventionists have to recall the activities and find the appropriate wording to describe them. Further, the description of the activities performed may not be quite consistent with the ones delineated in the intervention manual and the instrument measuring adherence, potentially contributing to an underestimation of levels of adherence. Nonetheless, the interventionists’ descriptions could be useful in identifying adaptations of the activities.