

of performance that should be improved; and provide guidance for revising the training content and methods to enhance the training's effectiveness.

There are no specific guidelines and standard instruments for evaluating training; the following points can be considered. The evaluation can be done upon completion of each part of the training, that is, the didactic and the experiential part, or at the end of all training. The evaluation can cover assessment of the interventionists’ knowledge of the intervention and practical competence in providing it. Assessment of knowledge is accomplished by administering a test containing close- and open-ended questions, and vignettes followed by relevant questions. The questions are designed to measure interventionists’ understanding of the theory underpinning the intervention, the operationalization of its active ingredients into respective components, and the rationale for specific content and activities. Short vignettes and associated items are generated to assess interventionists’ skills at implementing various intervention activities accurately and at handling challenges that may arise during delivery. Additional items can be incorporated to assess the interventionists’ interpersonal skills.

Formal evaluation of interventionists’ skills is planned as part of the supervised delivery of the intervention, or in a separate session scheduled prior to entrusting interventionists with delivering the intervention to clients. The latter session is comparable in content and format to the supervised delivery of the intervention: the interventionists deliver a session to actors posing as clients and the evaluator observes and rates their performance.

8.5 Investigating Interventionist Effects


Researchers planning to investigate interventionist effects should consider the following methodological features when designing evaluation studies and analyzing the data they collect:

1. The number of interventionists has to be large enough (at least 30) to obtain meaningful estimates of the interventionist effects. A large number of interventionists can easily be achieved in naturalistic studies evaluating the implementation of evidence-based interventions by health professionals in practice. When accrual of this number is not feasible, which is often the case in experimental studies or clinical trials, it is advisable to have at least two interventionists deliver the experimental intervention and at least two interventionists provide the comparison treatment. In all studies, each interventionist is assigned to deliver the respective treatment (i.e. experimental intervention or comparison treatment) to a reasonable number of clients (at least 30). The number of clients is balanced across interventionists. Assignment of clients to interventionists is done in a way that minimizes potential selection bias, so that the baseline characteristics (such as level of severity of the problem) of clients assigned to different interventionists are comparable. Differences in the number and characteristics of clients across interventionists may confound the interventionist effects (Lutz et al., 2007).
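The balancing and bias-minimization logic of this first point can be sketched in code. The following is a minimal, standard-library-only illustration of stratified round-robin assignment of clients to interventionists; the identifiers (`INT-01`, the severity labels) and the choice of severity as the stratification variable are hypothetical and only for illustration.

```python
import random
from collections import Counter, defaultdict

def assign_clients(clients, interventionists, seed=42):
    """Stratified, balanced assignment of clients to interventionists.

    `clients` is a list of (client_id, severity) tuples; stratifying by
    baseline severity before round-robin allocation keeps caseloads equal
    in size and comparable in severity across interventionists.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for client_id, severity in clients:
        strata[severity].append(client_id)
    assignment = {}
    offset = 0
    for severity in sorted(strata):
        ids = strata[severity]
        rng.shuffle(ids)  # randomize order within each stratum
        for i, cid in enumerate(ids):
            # continue the round-robin across strata so totals stay balanced
            assignment[cid] = interventionists[(offset + i) % len(interventionists)]
        offset += len(ids)
    return assignment

# Example: 60 clients, two severity levels, two interventionists
clients = [(f"C{i:02d}", "high" if i % 2 else "low") for i in range(60)]
assignment = assign_clients(clients, ["INT-01", "INT-02"])
print(Counter(assignment.values()))  # caseloads of 30 each
```

Each interventionist ends up with an equal caseload (30 clients) and an equal mix of high- and low-severity clients, which is the comparability condition the text describes.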

2. A crossed design is most appropriate to disentangle the interventionist effects from the intervention effects (Kim et al., 2006). In this design, all selected interventionists are trained in the delivery of all treatments under investigation. The treatments may be distinct interventions (e.g. stimulus control therapy and sleep restriction therapy) or an experimental intervention (e.g. cognitive-behavioral therapy) and a comparison treatment (e.g. standard care) for the management of the health problem. Each interventionist is asked to deliver each treatment to the pre-specified number of clients. In this way, all interventionists have the opportunity to deliver all treatments (Staines et al., 2006). The crossed design minimizes the confounding of interventionists with treatments and permits the examination of the interventionist main effect and the interventionist-by-treatment interaction effect. Confounding occurs when the same interventionist provides the same intervention to all clients assigned to that treatment group; in this situation, improvements in clients’ outcomes can be attributed equally to the interventionist or to the intervention. By having different interventionists deliver different treatments, variability in interventionists and in treatments is generated. Each source of variability exerts its unique influence on client outcomes, thereby making it possible to detect the interventionist influence independently of the treatment effects. Use of the crossed design requires the selection of interventionists who are willing to learn, acquire competency in, and deliver interventions with which they may not be familiar or that differ from their theoretical orientation.
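The crossed design can be made concrete by enumerating its cells: every interventionist crossed with every treatment, each cell holding the same pre-specified number of clients. This sketch uses the two example treatments named in the text; the interventionist codes, function name, and record structure are illustrative assumptions only.

```python
from itertools import product

def crossed_design(interventionists, treatments, clients_per_cell):
    """Enumerate the cells of a fully crossed design: every interventionist
    delivers every treatment to the same number of clients, so the
    interventionist and treatment factors are not confounded."""
    return [
        {"interventionist": i, "treatment": t, "n_clients": clients_per_cell}
        for i, t in product(interventionists, treatments)
    ]

cells = crossed_design(["INT-01", "INT-02"],
                       ["stimulus control", "sleep restriction"],
                       clients_per_cell=30)
for cell in cells:
    print(cell)  # 2 x 2 = 4 cells, 120 clients in total
```

Because every (interventionist, treatment) pair appears once, outcome differences between treatments cannot be an artifact of which interventionist happened to deliver them.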

3. Collection of data on the interventionists’ personal and professional qualifications, as well as performance in delivering the intervention: Data on performance relate to adherence to the intervention protocol and interpersonal skills or working alliance, using pertinent measures (discussed in Chapter 9). Interventionists’ level of adherence and working alliance were found to impact clients’ outcomes (e.g. Lingiardi et al., 2018; Pellecchia et al., 2015).

4. Tracking and documenting which clients receive which treatment from which interventionist: Code numbers are assigned to interventionists and used in pertinent data entry.
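As a small illustration of this tracking point, the record layout below shows one possible coding scheme linking clients to interventionist and treatment codes in a flat data file; all field names and code values are hypothetical, not prescribed by the text.

```python
import csv
import io

# Hypothetical coding scheme: each delivery record links a client to the
# code number of the interventionist and the code of the treatment
# received, so analyses can recover who delivered what to whom.
records = [
    {"client_id": "C01", "interventionist_code": "INT-01", "treatment_code": "EXP"},
    {"client_id": "C02", "interventionist_code": "INT-02", "treatment_code": "CMP"},
]

buf = io.StringIO()
writer = csv.DictWriter(
    buf, fieldnames=["client_id", "interventionist_code", "treatment_code"]
)
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())
```

Keeping the interventionist code on every client record is what later allows clients to be nested within interventionists in the analysis.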

5. Application of multilevel models or hierarchical linear models to analyze the client outcome data: These models are statistical techniques that account for the nesting of clients within interventionists and within treatment when estimating the effects of interventionist and of treatment on client outcomes. The outcomes are those assessed following implementation of the intervention, or the changes in the outcomes over time. It is recommended to consider the interventionists as a random factor and the treatment as a fixed factor in the data analysis. Representing interventionists as a random factor has the advantage that the observed effects can be generalized to other interventionists with characteristics similar to those of the interventionists who were involved in the delivery of treatments (Kim et al., 2006; Lutz et al., 2007).
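In practice, dedicated mixed-model software (e.g. lme4 in R, or MixedLM in Python's statsmodels) is used for such analyses. As a simplified, standard-library-only illustration of why interventionists are treated as a random factor, the sketch below simulates clients nested within interventionists and recovers the share of outcome variance attributable to interventionists (the intraclass correlation) from one-way ANOVA variance components. All numbers are simulated; this is a sketch of the idea, not a substitute for a proper multilevel model with treatment as a fixed factor.

```python
import random
from statistics import mean

# Simulate clients nested within interventionists: each interventionist j
# contributes a random effect u_j (sd = 1) and each client a residual
# (sd = 2), so the true intraclass correlation is 1 / (1 + 4) = 0.20.
rng = random.Random(1)
n_interventionists, n_clients = 30, 30  # the minimums suggested in the text
data = []                               # (interventionist index, outcome)
for j in range(n_interventionists):
    u_j = rng.gauss(0, 1.0)
    for _ in range(n_clients):
        data.append((j, u_j + rng.gauss(0, 2.0)))

# One-way ANOVA variance components: between- and within-interventionist
# mean squares, then the method-of-moments estimate of the ICC.
group_means = [mean(y for g, y in data if g == j) for j in range(n_interventionists)]
grand_mean = mean(y for _, y in data)
ms_between = (n_clients * sum((m - grand_mean) ** 2 for m in group_means)
              / (n_interventionists - 1))
ms_within = (sum((y - group_means[g]) ** 2 for g, y in data)
             / (len(data) - n_interventionists))
var_between = max(0.0, (ms_between - ms_within) / n_clients)
icc = var_between / (var_between + ms_within)
print(f"estimated interventionist ICC: {icc:.2f}")
```

A nontrivial ICC signals that outcomes vary systematically across interventionists, which is exactly the variance component a multilevel model separates from the fixed treatment effect.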

These methodological features are not always practical and feasible. The number of interventionists needed to examine their effects on client outcomes may exceed the financial resources available for a study, or the human resources that are locally accessible. Similarly, the required client sample size is large. Few competent interventionists may agree to implement different treatments, and those who do agree may not be representative of the general population of health professionals.

Investigating interventionist effects is best done in large multicenter experimental studies, or in large cohort studies conducted in the natural, real-world practice setting.

REFERENCES

Addis, M.E. & Krasnow, A.D. (2000) A national survey of practicing psychologists’ attitudes toward psychotherapy treatment manuals. Journal of Consulting and Clinical Psychology, 68(2), 331–339.

Alcázar Olán, R.J., Deffenbacher, J.L., Hernández Guzmán, L., et al. (2010) The impact of perceived therapist characteristics on patients’ decision to return or not return for more sessions. International Journal of Psychology and Psychological Therapy, 10(3), 415–426.

Anderson, T., Ogles, B.M., Patterson, C.L., Lambert, M.J., & Vermeersch, D.A. (2009) Therapist effects: Facilitative interpersonal skills as a predictor of therapist success. Journal of Clinical Psychology, 65(7), 755–768.

Baldwin, S.A. & Imel, Z.E. (2013) Therapist effects: Findings and methods. In: M.J. Lambert (ed) Bergin and Garfield’s Handbook of Psychotherapy and Behaviour Change (6th ed.). Wiley, New York, NY.

Barber, J.P., Gallop, R., Crits-Christoph, P., et al. (2006) The role of therapist adherence, therapist competence, and alliance in predicting outcome of individual drug counseling: Results from the National Institute Drug Abuse Collaborative Cocaine Treatment Study. Psychotherapy Research, 16(2), 229–240.

Becker-Haimes, E.M., Okamura, K., Wolk, C.B., et al. (2017) Predictors of clinician use of exposure therapy in community mental health settings. Journal of Anxiety Disorders, 49, 88–94.


Berghout, C.C. & Zevalkink, J. (2011) Therapist variables and patient outcome after psychoanalysis and psychoanalytic psychotherapy. Journal of the American Psychoanalytic Association, 59(3), 577–583.

Borrelli, B. (2011) The assessment, monitoring and enhancement of treatment fidelity in public health clinical trials. Journal of Public Health Dentistry, 71, S52–S63.

Brooks, K. (2010) The simulation job interview [Blog post]. Available on: http://www.psychologytoday.com/blog/career-transitions/201012/the-simulation-job-interview.

Brose, L.S., McEwen, A., Michie, S., et al. (2015) Treatment manuals, training and successful provision of stop smoking behavioral support. Behaviour Research and Therapy, 71, 34–39.

Cameron, S.K., Rodgers, J., & Dagnan, D. (2018) The relationship between the therapeutic alliance and clinical outcomes in cognitive behaviour therapy for adults with depression: A meta-analytic review. Clinical Psychology & Psychotherapy, 25, 446–456.

Campbell, B.K., Buti, A., Fussell, H.E., Srikanth, P., & Guydish, J.R. (2013) Therapist predictors of treatment delivery fidelity in community-based trial of 12-step facilitation. American Journal of Drug and Alcohol Abuse, 39, 304–311.

Castonguay, L.G., Constantino, M.J., & Holtforth, M.G. (2006) The working alliance: Where are we and where should we go? Psychotherapy: Theory, Research, Practice, Training, 43(3), 271–279.

Constantino, M.J., Manber, R., Ong, J., et al. (2007) Patient expectation and therapeutic alliance as predictors of outcomes in group cognitive-behavioral therapy for insomnia. Behavioral Sleep Medicine, 5, 210–228.

Degnan, A., Seymour-Hyde, A., Harris, A., & Berry, K. (2016) The role of therapist attachment in alliance and outcome: A systematic literature review. Clinical Psychology & Psychotherapy, 23, 47–65.

Del Re, A.C., Flückiger, C., Horvath, A.O., Symonds, D., & Wampold, B.E. (2012) Therapist effects in the therapeutic alliance–outcome relationship: A restricted-maximum likelihood meta-analysis. Clinical Psychology Review, 32(7), 642–649.

DiGennaro Reed, F.D. & Codding, R.S. (2014) Advancements in procedural fidelity assessment and intervention: Introduction to the special issue. Journal of Behavioral Education, 23, 1–18.

Dinger, U., Strack, M., Leichsenring, F., Wilmers, F., & Schauenburg, H. (2008) Therapist effects on outcome alliance in inpatient psychotherapy. Journal of Clinical Psychology, 64(3), 344–35.

Elkin, I., Falconnier, L., Martinovich, Z., & Mahoney, C. (2006) Therapist effects in the National Institute of Mental Health treatment of depression collaborative research program. Psychotherapy Research, 16, 144–160.

Eymard, A.S. & Altmiller, G. (2016) Teaching nursing students the importance of treatment fidelity in intervention research: Students as interventionists. Journal of Nursing Education, 55(5), 288–291.

Friedman, R. (2014) The Best Place to Work: The Art and Science of Creating an Extraordinary Workplace. Perigee, Penguin Group, New York, USA.

Fuertes, J.N., Mislowack, A., Bennett, J., et al. (2007) The physician-patient working alliance. Patient Education and Counseling, 66, 29–36.

Gaume, J., Gmel, G., Faouzi, M., & Daeppen, J.-B. (2009) Counselor skill influences outcomes of brief motivational interventions. Journal of Substance Abuse Treatment, 37, 151–159.

Goldberg, S.B., Hoyt, W.T., Nissen-Lie, H.A., Nielsen, S.L., & Wampold, B.E. (2018) Unpacking the therapist effect: Impact of treatment length differs for high- and low-performing therapists. Psychotherapy Research, 28(4), 532–544.

Greeson, J.K.P., Guo, S., Barth, R.P., Hurley, S., & Sisson, J. (2009) Contributions of therapist characteristics and stability to intensive in-home therapy youth outcomes. Research in Social Work Practice, 19(2), 239–250.

Herschell, A.D., Kolko, D.J., Baumann, B.L., & Davis, A.C. (2010) The role of therapist training in the implementation of psychosocial treatments: A review and critique with recommendations. Clinical Psychology Review, 30(4), 448–466.

Horvath, A.O., Del Re, A.C., Flückiger, C., & Symonds, D. (2011) Alliance in individual psychotherapy. Psychotherapy: Theory, Research, & Practice, 48, 9–16.

Huppert, J.D., Bufka, L.F., Barlow, D.H., et al. (2001) Therapists, therapist variables, and cognitive-behavioral therapy outcome in a multicenter trial for panic disorder. Journal of Consulting and Clinical Psychology, 69(5), 747–755.

Huss, R., Jhileek, T., & Butler, J. (2017) Mock interviews in the workplace: Giving interns the skills they need for success. The Journal of Effective Teaching, 17(3), 23–37.

Imel, Z.E., Baer, J.S., Martino, S., Ball, S.A., & Carroll, K.M. (2011) Mutual influence in therapist competence and adherence to motivational enhancement therapy. Drug and Alcohol Dependence, 115, 229–236.

Johnson, M.O. & Remien, R.H. (2003) Adherence to research protocols in a clinical context: Challenges and recommendations from behavioral intervention trials. American Journal of Psychotherapy, 57, 348–360.

Joyce, A.S., Ogrodniczuk, J.S., Piper, W.E., & McCallum, M. (2003) The alliance as mediator of expectancy effects in short-term individual therapy. Journal of Clinical and Consulting Psychology, 71(4), 672–679.

Kaplowitz, M.J., Safran, J.D., & Muran, C.J. (2011) Impact of therapist emotional intelligence on psychotherapy. Journal of Nervous and Mental Disorders, 199, 74–74.

Kim, D.M., Wampold, B.E., & Bolt, D.M. (2006) Therapist effects in psychotherapy: A random-effects modeling of the National Institute of Mental Health treatment of depression collaborative research program data. Psychotherapy Research, 16, 161–172.

Kitson, A., Marshall, A., Bassett, K., & Zeitz, K. (2013) What are the core elements of patient-centered care? A narrative review and synthesis of the literature from health policy, medicine and nursing. Journal of Advanced Nursing, 69, 4–15.

Krukowski, R.A., Smith West, D., Priest, J., et al. (2019) The impact of the interventionist–participant relationship on treatment adherence and weight loss. TBM, 9, 368–372.

Lingiardi, V., Muzi, L., Tanzilli, A., & Carone, N. (2018) Do therapists’ subjective variables impact on psychodynamic psychotherapy outcomes? A systematic literature review. Clinical Psychology & Psychotherapy, 25, 85–101.

Lutz, W. & Barkham, M. (2015) Therapist effects. In: R. Cautin & S. Lilienfeld (eds) Encyclopedia of Clinical Psychology. Wiley-Blackwell, Hoboken.

Lutz, W., Leon, S.C., Martinovich, Z., et al. (2007) Therapist effects in outpatient psychotherapy: A three-level growth curve approach. Journal of Counseling Psychology, 54, 32–39.

Mauricio, A.M., Rudo-Stern, J., Thomas, J., et al. (2019) Provider readiness and adaptations of competency drivers during scale-up of the family check-up. The Journal of Primary Prevention, 40, 51–68.

McDiamid Nelson, M., Shanley, J.R., Funderbuk, B.W., & Bard, E. (2012) Therapists’ attitudes toward evidence-based practices and implementation of parent-child interaction therapy. Child Maltreatment, 17, 47–55.

Moyers, T.B., Miller, W.R., & Hendrickson, S.M.L. (2005) How does motivational interviewing work? Therapist interpersonal skill predicts client involvement within motivational interviewing sessions. Journal of Consulting and Clinical Psychology, 73(4), 590–598.

Okiishi, J., Lambert, M.J., Nielson, S.L., & Ogles, B.M. (2003) Waiting for supershrink: An empirical analysis of therapist effects. Clinical Psychology & Psychotherapy, 10, 361–373.

Pellecchia, M., Connell, J.E., Beidas, R.S. et al. (2015) Dismantling the active ingredients of an intervention for children with autism. Journal of Autism and Developmental Disorders, 45, 2917–2927.

Reichow, B., Volkmar, F.R., & Cicchetti, D.V. (2008) Development of the evaluative method for evaluating and determining evidence-based practices in autism. Journal of Autism and Developmental Disorders, 38, 1311–1319.

Saxon, D., Barkham, M., Foster, A., & Parry, G. (2017) The contribution of therapist effects to patient dropout and deterioration in the psychological therapies. Clinical Psychology & Psychotherapy, 24(3), 575–588.

Schiefele, A.-K., Lutz, W., Barkham, M., et al. (2017) Reliability of therapist effects in practice-based psychotherapy research: A guide for the planning of future studies. Administration and Policy in Mental Health, 44(5), 598–613.

Schoenwald, S.K., Garland, A.F., Chapman, J.E., et al. (2011) Toward the effective and efficient measurement of implementation fidelity. Administration and Policy in Mental Health and Mental Health Services Research, 38, 32–43.

Schwantes, M. (2017) The job interview will soon be dead. Here’s what the top companies are replacing it with. Available on: https://www.inc.com/marcel-schwantes/science-81-percent-of-people-lie-in-job-interviews-heres-what-top-companies-are-.html

Sidani, S. & Fox, M. (2014) Patient-centered care: A clarification of its active ingredients. Journal of Interprofessional Care, 28(2), 134–141.

Staines, G.L., Cleland, C.M., & Blankertz, L. (2006) Counselor confounds in evaluations of vocational rehabilitation methods in substance dependency treatment. Evaluation Review, 30, 139–170.

Titzler, I., Sarwhanjan, K., Berking, M., Riper, H., & Ebert, D.D. (2018) Barriers and facilitators for the implementation of blended psychotherapy for depression: A qualitative pilot study of therapists’ perspective. Internet Interventions, 12, 150–164.

Tschuschke, V., Crameri, A., Koehler, M., et al. (2015) The role of therapists’ treatment adherence, professional experience, therapeutic alliance, and clients’ severity of psychological problems: Prediction of treatment outcome in eight different psychotherapy approaches. Preliminary results of a naturalistic study. Psychotherapy Research, 25(4), 420–434.

Verschuur, R., Huskens, B., Korzilius, H., et al. (2019) Pivotal response treatment: A study into the relationship between therapist characteristics and fidelity of implementation. Autism, 1, 16–27.

Wampold, B.E. & Brown, G.S. (2005) Estimating therapist variability: A naturalistic study of outcomes in managed care. Journal of Consulting and Clinical Psychology, 73, 914–923.

Wampold, B.E. & Imel, Z.E. (2015) The Great Psychotherapy Debate: The Evidence for What Makes Psychotherapy Work. Routledge, New York, NY.

Webb, C.A., DeRubeis, R.J., & Barber, J.P. (2010) Therapist adherence/competence and treatment outcome: A meta-analytic review. Journal of Consulting and Clinical Psychology, 78(2), 200–211.


Webster-Stratton, C.H., Reid, M.J., & Marsenich, L. (2014) Improving therapist fidelity during implementation of evidence-based practices: Incredible years program. Psychiatric Services, 65, 789–795.

Zimmermann, D., Rubel, J., Page, A. C., & Lutz, W. (2017) Therapist effects on and predic- tors of non-consensual dropout in psychotherapy. Clinical Psychology & Psychotherapy, 24(2), 312–321.


Nursing and Health Interventions: Design, Evaluation, and Implementation, Second Edition. Souraya Sidani and Carrie Jo Braden. © 2021 John Wiley & Sons Ltd. Published 2021 by John Wiley & Sons Ltd.

CHAPTER 9

Assessment of Fidelity

The development of an intervention manual, the careful selection of competent interventionists, and the intensive training of interventionists are strategies to promote fidelity of intervention delivery; however, they do not guarantee it. The actual delivery of an intervention may deviate from what is designed and described in the manual, and may vary for different clients. Interventionists, especially experienced ones, view intervention manuals as being at odds with the principles of treatment, stating that treatment should be provided with flexibility and tailored to clients’ individual characteristics, concerns, and life circumstances.

Interventionists also report that strictly adhering to the intervention manual interferes with building and maintaining a good rapport, therapeutic relationship, or working alliance, and with the quality of interactions between interventionists and clients; yet, interventionists value these interactions because they contribute to clients’ engagement in and enactment of treatment, and subsequently improvement in client outcomes (Brose et al., 2015). Accordingly, interventionists and health professionals have a tendency to not use or follow the intervention manual in delivering standardized and/or individualized interventions (Lorencatto et al., 2014; Wallace & von Ranson, 2011). This, in turn, leads to variability in intervention delivery, which has been reported widely in research (e.g. Webb et al., 2010) and in practice (e.g. Tschuschke et al., 2015; Verschuur et al., 2019).

Variability in intervention delivery results in differences in the active ingredients to which clients are exposed, potentially yielding nonsignificant intervention effects on outcomes.

Monitoring fidelity is critical for identifying deviations or variability in intervention delivery and rectifying them as necessary. Assessing fidelity is important for examining the impact of such variability on the outcomes expected of an intervention. Monitoring and assessing fidelity rest on a clear conceptualization of fidelity.

The focus of this chapter is on the conceptualization and operationalization of fidelity. Definitions and levels of fidelity are reviewed. Strategies and methods for assessing fidelity are discussed.