Chapter 2: Literature review
A. MRC framework for developing and evaluating complex interventions
than designing and planning the type of intervention that will result in behaviour change [Webb et al., 2010; Taylor et al., 2007; Hardeman et al., 2002]. Hardeman et al. (2002:149) support this with the following statement: “even when authors use the TPB to develop parts of the intervention, they seem to see the theory as more useful in identifying cognitive targets for change than in offering suggestions on how these cognitions might be changed”. The TPB is therefore particularly useful, and preferred by researchers, for intervention development that involves identifying the components of a behaviour that need to be targeted to bring about behaviour change.
2.9 Feasibility testing
explain that feasibility studies have an important role when examining matters related to context, mechanism of impact and implementation. Insufficient exploration of the feasibility of an intervention therefore increases the likelihood of failed implementation, as well as of failure to generate useful information. A feasibility study is thus essential for identifying the usefulness of an intervention and the adaptations required to enhance both the intervention design (e.g. the mode of delivery) and the evaluation design (e.g. the choice of outcome measures).
2.9.2 Feasibility of the intervention design
After the development phase of an intervention, any uncertainties regarding 1) the optimal intervention content and mode of delivery; 2) the acceptability of the intervention to participants; 3) the ability to collect appropriate data for assessment; 4) cost-effectiveness; and 5) the ability of providers to deliver the intervention within a specific setting should be explored before embarking on an effectiveness study [O’Cathain et al., 2015]. Depending on the findings, the intervention may require refinement before the full-scale study commences, and some guidance suggests that this refinement should occur on an ongoing basis within the feasibility study [Fletcher et al., 2016; Levati et al., 2016; O’Cathain et al., 2015; Bowen et al., 2009]. The role of qualitative data collection in optimising or refining an intervention was highlighted by both Fletcher et al. (2016) and O’Cathain et al. (2015). In a recent systematic review of feasibility guidance, which included thirty guidance papers, qualitative data were also recommended for obtaining information on acceptability and implementation [Hallingberg et al., 2018]. Although focus groups are considered useful for obtaining qualitative data, this setting creates the potential for differing views to remain concealed, as noted by one paper included in the systematic review [O’Cathain et al., 2015].
2.9.3 Feasibility of the evaluation design
Feasibility testing of the evaluation design aims to provide useful data to improve the quality of a full-scale evaluation [Cook et al., 2018]. The uncertainties regarding the evaluation design that could be explored may relate to: 1) recruitment, retention and sample size; 2) acceptability of randomisation; 3) duration of follow-ups; 4) choice of outcome measures; 5) floor or ceiling effects (when responses on a measure or questionnaire cluster at the bottom or top of the scale); 6) potential harms; 7) reasons for attrition; or 8) the impact of the intervention on widening health inequalities (tested in a range of contexts) [Hallingberg et al., 2018]. It is widely recommended that both qualitative and quantitative data be used to assess these criteria [Hallingberg et al., 2018; Eldridge et al., 2016a; O’Cathain et al., 2015; Taylor et al., 2015]. Eldridge et al. (2016a) further explain that a randomised controlled trial is the best design for a pilot study to estimate potential impact. However, as feasibility studies are often underpowered, the MRC guidance warns that quantitative data should be interpreted with caution when assessing effectiveness. Randomisation as a design feature in feasibility studies was deemed unnecessary for estimating cost or selecting outcomes, but was thought useful for estimating recruitment, willingness to be randomised and retention rates in the intervention and control groups [Taylor et al., 2015; Shanyinde et al., 2011], and for informing the sample size of the full-scale evaluation by estimating the effect size [Bowen et al., 2009; Campbell et al., 2000].
2.9.4 Progression criteria
Feasibility data should be tested against pre-determined progression criteria to establish whether an intervention is feasible and acceptable and whether the evaluation design is suitable to measure effectiveness in a full-scale study [Hallingberg et al., 2018]. Simpson et al. (2020) recommend that qualitative data be used to assess factors such as the acceptability of the intervention content and mode of delivery, while quantitative data be used to assess recruitment and retention rates. Other sources also recommend that qualitative findings should be considered more influential than quantitative findings [Westlund et al., 2017; O’Cathain et al., 2015].
Progression criteria should be used as guidelines, and a traffic light system may be applied to indicate varying levels of acceptability: green indicates that no issues were identified; amber indicates that identified issues can be resolved and the study should be amended; and red indicates that issues cannot be resolved and the study should stop [Eldridge et al., 2016a; Eldridge et al., 2016b]. This system allows scope for discussion with a steering committee, but the decision to continue with the study ultimately lies with the study team. Some sources suggest that if extensive changes are made to the intervention or the evaluation design, researchers should return to the feasibility or intervention development phase [O’Cathain et al., 2015; Shanyinde et al., 2011]. At present, however, there is no guidance on when movement between these two phases should occur [Hallingberg et al., 2018].
Stakeholders should be included in the design of the feasibility study and the progression criteria to ensure that the data generated, on which future decisions about the study will be made, are appropriate [Hallingberg et al., 2018; O’Cathain et al., 2015].