

In the document Child Neglect (Pages 46-54)

research. It proposes not a ‘research active practitioner’ model but a ‘practitioner as research driver’ model of practitioner engagement with research.

Theoretical and evidential issues

· apply to harm caused by observance of religious or other beliefs where the parents are concerned that health care (such as blood transfusion) or lack of certain ritualistic acts (such as circumcision) may cause greater harm to the child?

· apply equally to lack of care caused by circumstances within or not within the carers’ control? If so, then to what extent are carers responsible for lack of care due to substance abuse, or relationship conflict and separation?

In terms of scope of responsibility, the English guidance refers to parents and carers as responsible agents, so the focus on mothers as carers of children is even more pronounced than in other forms of child protection. Those deemed to have the responsibility of care who do not fulfil that responsibility, in the absence of any morally allowed exceptions (such as poverty or illness), are deemed to have committed neglect. In contrast, sexual abuse is frequently used to describe acts by non-family members (though, interestingly, physical abuse is usually limited to carers). It is possible to apply the concept of neglect more widely. In France, it is an offence for anyone, including strangers, not to come to the aid of others in certain crises and with explicit needs for help.

Similar responsibilities could be placed on individuals or organizations to respond when they know or suspect that a child’s needs are not being met for any reason.

In addition, society can be considered neglectful by allowing children to be at risk of harm or infringement of their rights. This could include risks of disease or of accidents or of other forms of abuse such as violence. Intra-familial violence can be seen as ‘an inevitable by-product of selfish, competitive and inegalitarian values and of dehumanizing, authoritarian and exploitative social structures and dynamics which permeate many contemporary societies’ (Gil 1979, p.1). Physical abuse and sexual abuse can be seen as due to societal neglect of unequal power relations between adults and children and between males and females (Dominelli 1986; Ennew 1986).

All of these, often implicit, issues within definitions of neglect will affect the population of situations and scenarios considered to be neglect when defining research samples. This will vary not just with value judgements about caring responsibilities but also with the purpose of the research. Definitions determining whether child protection responses are necessary are likely to differ from those constructed to examine the differential cause and effect of different child care situations or the prevalence of neglect in the population.

Definitions created for different purposes will vary in the relative extent to which they have been constructed on a priori ideological and theoretical grounds or built up from empirical data on cases (and the extent to which these have been qualified by data on cases falling outside the definition). Until all these issues are made explicit in the reporting of research and the presentation of cases and case typologies, conceptual clarity and practical use of research findings are likely to be limited. Not only should practitioners and policy makers be cautious about making use of research evidence that does not provide these details, but the lack of accountability of much research evidence also seriously hampers the potential for the accumulation of evidence over time.

Research evidence from primary studies

Although there may seem to be a large amount of primary research on social and psychological issues such as neglect, there are relatively few studies considering the enormous number of research questions that could be asked within all the different definitional positions available. The amount of information provided by these studies is further limited by:

· the quality of the studies and study reporting

· limits to generalizability and sampling error

· accessibility of these primary studies.

QUALITY OF THE STUDIES AND STUDY REPORTING

Many research studies, including many of those referred to in other chapters of this volume, are well executed, but many are not. A first problem commonly found in studies is a lack of clarity, and thus conceptual confusion and inconsistency, about the topic under study: for example, lack of consideration of the definition of the topic and thus of the recruitment of participants or other sources of data for the study.

A second problem is the lack of an appropriate research design to address the research question being asked. This may occur for practical reasons. A study may want to assess the impact of an intervention to reduce the likelihood of neglect within families, but this may not be possible because the research was not built into the development of the intervention, there were not sufficient resources to support such a study or there are ideological objections to the use of experimental designs (Oakley 2000).

A common problem is the use of descriptive studies to draw conclusions about the effects of a service initiative. A funder, such as a government agency, may commission research to study the impact of an intervention but require, or only provide sufficient funds for, a research design based on monitoring the implementation with some outcome data. Such descriptive studies are limited in what they can conclude about the efficacy of the service. Without some form of experimental or internal control it is not possible to determine whether other uncontrolled variables (such as selection bias or the effects of other variables over the time period of the study) are influencing the reported results.

These implementation studies may include data on participants’ views of the service. Such data can provide needs assessment and information about the acceptability of a service, ensure users’ views are represented, and identify possible adverse or positive effects of the intervention for further study. Participants’ views about the efficacy and appropriateness of a service are essential, but there is a need for further data in terms of evidence of the effects of interventions. Users are unlikely to be able to control, and thus fully know, the effects of all the variables that might be affecting their experiences. For example, many users of hormone replacement therapy believed it to be an effective therapy, but it is now known to have many adverse effects. Similarly, the views of parents in families with children considered to be neglected should inform service delivery, but parents may not know for sure which services do, or do not, change the level of care for their children.

Lack of fitness can also occur where a study failed in its original aims. Studies may have been set up to assess the impact of a service but for many reasons be unable to execute a fully fit-for-purpose design and collect good quality and relevant data. Such studies may make most use of their descriptive data to give insight into the process of intervention. The results may be of some use, but such studies would probably have been more useful if they had originally been set up with an appropriate qualitative design to assess process.

Even where a study is conceptually sound and uses an appropriate research design, the design may not be implemented well. Technical limitations may include non-systematic sampling, or inappropriate or wrongly applied measures, analysis, or interpretation. In addition, studies are often reported without full details of the methods employed, so even a reader with the relevant technical skills and background knowledge may not be able to assess the extent to which the findings are reliable (would be found again if the study were replicated) or valid in terms of measuring what they purport to represent.

LIMITS TO GENERALIZABILITY AND SAMPLING ERROR

Studies not only differ in their purpose, conceptual frameworks and definitions of neglect; they also vary in many other aspects of the context in which they occurred. In general, a study on a small number of cases will provide rich detail on those cases, which may be highly informative for developing conceptual insights and hypothesis generation, but provide less clarity about how representative these findings are for different contexts. Large-scale studies may cover more contexts but are less likely to have the richness of detail of the small-sample studies.

In statistics the extent to which a sample represents a population is often built into the logic of the analysis. In an experimental study, those receiving the experimental and control interventions are samples from a hypothetical population of all those who could receive the intervention. The statistical analysis attempts to identify whether there is a difference in outcome between the intervention and control groups; in other words, to determine whether they came from the same or different (hypothetical) populations. The problem is that any differences between the samples in outcome measures might be due to chance variation in sampling (sampling error). This can be illustrated by an example where we know that there are differences suggesting two ‘populations’. For example, we know that eight-year-olds are on average taller than six-year-olds, but if you tried to test this using only small samples of six- and eight-year-olds – relatively tall six-year-olds and relatively short eight-year-olds – you could wrongly conclude that children of different ages are the same height and that there is no problem with a child who has not grown over two years!
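The height example can be made concrete with a small simulation. The population means and spread below are assumed purely for illustration, not real growth norms; the point is only how often tiny samples reverse a real difference:

```python
import random
import statistics

random.seed(3)

# Assumed, illustrative population parameters:
# six-year-olds ~ N(116 cm, sd 8), eight-year-olds ~ N(128 cm, sd 8).
def sample_heights(mean_cm, n):
    return [random.gauss(mean_cm, 8.0) for _ in range(n)]

# Draw many tiny samples (n=3 per group) and count how often the
# eight-year-old sample mean fails to exceed the six-year-old sample
# mean - i.e. how often sampling error alone reverses a real difference.
trials, wrong = 10_000, 0
for _ in range(trials):
    six = sample_heights(116, 3)
    eight = sample_heights(128, 3)
    if statistics.mean(eight) <= statistics.mean(six):
        wrong += 1
print(f"wrong ordering in {wrong / trials:.1%} of trials with n=3")
```

Even though the underlying populations genuinely differ by 12 cm, a few per cent of these tiny samples point the wrong way; larger samples shrink that risk rapidly.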

Significance tests reported in terms of significance levels merely state the probability that any difference found in outcome measures could have arisen by chance from sampling error. Even studies reporting that an intervention has a significant effect (a difference in outcome measures) at the 0.05 level have a 5% chance of drawing a wrong conclusion (that there is a real difference, and therefore an effect, when there is not one) due to sampling error.
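This 5% false-positive rate can be demonstrated directly by simulation. In the sketch below (group size and number of trials are arbitrary choices), both groups are drawn from the same population, so any "significant" result is by definition a sampling-error artefact:

```python
import random
import statistics

random.seed(1)

# Both "intervention" and "control" groups come from the SAME population
# (no real effect), yet a test at the 0.05 level still declares a
# significant difference about 5% of the time.
N = 25
ALPHA_Z = 1.96  # two-sided 5% critical value of the z statistic
trials, false_positives = 5_000, 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    # z statistic for the difference of means (population sd known = 1)
    z = (statistics.mean(a) - statistics.mean(b)) / (2 / N) ** 0.5
    if abs(z) > ALPHA_Z:
        false_positives += 1
print(f"false positive rate: {false_positives / trials:.1%}")
```

The observed rate hovers around 5%, exactly the error rate the 0.05 threshold accepts in advance.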

How are these issues related to the study of child neglect? Many research studies on child neglect are based on small samples of cases known to health and welfare agencies. First, these cases are not likely to be representative of all the instances of neglect occurring in the community. The factors that lead to their identification by agencies may be as significant as any features of the neglect. Second, the cases may be described without reference to how they differ from other children and families. Some studies try to overcome this by comparing children experiencing neglect with children and families that are similar on a range of variables such as age, family history and structure, and socioeconomic status. The problem is that controls identified by this matching process may not be representative of ‘normal’ families. The tighter the matching, the less representative the controls are likely to be, and the control sample may even include hidden cases of neglect. Third, the samples may be small, which is useful in case studies where you are trying to obtain detailed data to develop insights into the processes by which neglect occurs or how children and families react to services provided to help. The limitation is the extent to which you can be sure that the results generalize, as the small sample may not be representative and any comparisons with control groups may identify differences which are just due to chance (sampling error).

This is not to argue that we should not try to understand the phenomenon of neglect by describing its features and its processes. For example, Gaudin and colleagues (Gaudin et al. 1996) classified neglectful families as chaotic/leaderless or dominant/autocratic and suggested that they need different sorts of preventive intervention. Such classification to enable understanding seems to be an ingrained and effective human strategy for making sense of the world around us, for everything from typologies of abuse (including child neglect) to syndromes to explain sudden infant death. These strategies are powerful for developing working hypotheses to help us deal with pressing practical issues, but the limits to the explanatory power of these models do require testing.

ACCESSIBILITY OF PRIMARY STUDIES

A further issue is the accessibility of studies to different users of research. Research on neglect is reported in a diverse range of academic journals, books, and unpublished research reports. The factors influencing academics’ decisions about where to publish do not necessarily encourage reporting in similar journals or in places which are accessible to other users of research. Research reported in publications aimed at practitioners and policy makers, and in the general media for the public, may not be from the best studies nor provide the best evidence about services. Special initiatives that seem plausible, exciting, and fitting with current fashions but have little research evidence to support (or undermine) them are frequently reported in such publications. For example, home visiting services are often enthusiastically championed in the public and professional media. These are caring and plausible strategies for helping parents who are finding it difficult to cope with the care of their children, but what evidence is there that they make any difference in practice?

There is much evidence from the USA of the positive effects of directive nurse visitation (Olds, Henderson and Eckenrode 2002), but there is less evidence to date for the effects of the more empathic model of nurse and volunteer home visitation support in the UK (Wiggins et al. 2003). Not only is there the possibility that these services may have no effect; they might even do more harm than good. There is evidence, for example, that mothers’ support groups can negatively impact upon the parents’ existing social networks (Stevenson and Bailey 1998). A more dramatic example, related to concerns that infant deaths could be due to neglect or active abuse, is the change of advice to parents on the sleeping position of infants. For years, professional knowledge stated that putting babies on their backs to sleep risked them choking on their vomit and that head turning could result in flattening of one side of their heads. Since the knowledge changed to a belief that it is safer for infants to sleep on their backs, the incidence of sudden infant deaths in the UK has dropped by two thirds (Chalmers 2003).

The problem for busy social care and health practitioners, policy makers, users of services and members of the public is that the popular public and professional media are a major source of information on research evidence. This may not give a full overview of what is known about a topic and may give greater prominence to some new initiative, idea, or research report. Individual studies may be misleading because of the quality of execution of the study or simply because of the sampling error involved in research. In medicine these problems are well known. The ‘Hitting the Headlines’ project at the Centre for Reviews and Dissemination, University of York, reviews research reports in the national media, examines the broader research evidence and places this on a website for doctors and others to check. General practitioners who are then contacted by patients interested in receiving the new treatment can access the National Electronic Library for Health website to clarify the wider picture and knowledge about the condition and its treatment. Maybe we should have a similar service for busy social work managers and practitioners.

Synthesis and quality assessment

These problems of quality of research, study reporting, generalizability and sampling error make any individual study on child neglect vulnerable to giving misleading conclusions. In addition, the diversity of places where research reports are found makes access to such evidence for social care practitioners and other non-academic users of research problematic.

EVIDENCE REVIEW AND SYNTHESIS

All of this suggests the need for some form of synthesis of the research evidence for practitioners and other users of research. This is the function of reviews of research: to identify, quality assess and summarize what is known about any particular topic. Literature reviews should enable us to be better informed about what is known, what further needs to be known, the extent to which decision-making can be evidence informed, and the extent to which any new study may add to what we already know. Until recently such reviews did not have any clear methodology. How to undertake reviews was not taught in methodology courses, and when people were expected to undertake a review they just had to attempt a logical approach to the task.

There are many types of literature review. Some are undertaken to take forward an area of research and so are focused on those research needs. Straus and Kantor (2003), for example, wished to develop knowledge on the prevalence of child neglect. A brief review of the major prevalence studies showed an over 50-fold variation in prevalence rates, from less than half a per cent to 27%.

Straus argued that, in addition to real variation in the populations studied, the variation in results was also likely to be due to methodological differences in the sources of data on neglect (such as child, parent, professional or agency report), the criterion of neglect, the reference time period, and the dimensions of neglect mentioned. This focused literature review led him to develop a new measure of neglect and undertake a large cross-national study of over 5000 students in 14 nations. The study found that even with methodological consistency there was huge variation in reporting between the research sites. The number of respondents who reported childhood experience of one form of neglect varied from 20% to 95%, and those reporting three or more forms of neglect varied from 3% to 36% (Straus and Kantor 2003).

Other reviews are undertaken to argue a case and use the research literature to support that argument. These can be powerful resources to inform policy and practice, but they need either to be exhaustive and systematic in terms of the studies they include and the way these are assessed, or to be contested by other academics to check that other evidence could not support different arguments.

Many reviews are less argument focused, attempting to review all that is known about a certain area or research question. These can provide a full account of what is known about the review question, but traditional literature reviews have not been explicit about their methods of review. It is therefore difficult to assess whether the review has been undertaken in a systematic way. Just as with primary research, secondary research that reviews primary research needs to specify its methods so that the results can be checked and potentially replicated by others, and thus be accountable and believable.

The same analysis can be applied to academic expert opinion. How do you distinguish trustworthy from untrustworthy experts or, maybe even more difficult, between two trustworthy experts who differ in unknown ways in the range of their knowledge and the assumptions that underlie their assessments of the quality and relevance of the research they are summarizing? Some experts may seemingly provide a full account of knowledge in an area, but it is difficult to assess the extent to which this has been achieved. The process of knowledge production is not explicit, and the believability of the experts’ view may thus depend upon the acceptability to the listener of the conclusions or the status of the expert. An expert with high status may have that status for good reason, but it can be difficult to assess when an expert goes beyond their area of expertise.

This can also be a problem in courts where expert witnesses with high status for practice or clinical skills provide expert views on research data.

SYSTEMATIC RESEARCH SYNTHESIS

The need for explicit systematic methods has led to the setting up of the Cochrane and Campbell Collaborations to co-ordinate the systematic review of research literature on ‘what works’ questions in health and social science respectively. The Campbell Collaboration has three main topic areas: social welfare, education, and crime and justice. The focus of both collaborations on questions of efficacy means that the reviews are principally statistical meta analyses of quantitative data from experimental studies. This has led some to believe that systematic reviews are only concerned with such research designs and data. This is a misunderstanding, as the logic of being systematic and explicit about research synthesis applies to all research questions and thus all review questions (Gough and Elbourne 2002). It is a misunderstanding that stems partly from the unfortunate polarization of research into quantitative and qualitative communities rather than seeing research methods as fit for purpose (Oakley 2000).
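To give a flavour of the statistical meta analysis such collaborations co-ordinate, a minimal inverse-variance (fixed-effect) pooling of study effect sizes can be sketched as follows. The effect sizes and standard errors here are entirely hypothetical, chosen only to show the mechanics:

```python
# Inverse-variance (fixed-effect) pooling of standardized effect sizes.
# Each study contributes (effect size d, standard error); all values
# below are hypothetical illustrations, not real study results.
studies = [
    (0.40, 0.20),
    (0.10, 0.15),
    (0.25, 0.30),
]

# Precision (1/variance) weighting: precise studies count for more.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5
print(f"pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
```

Note that the pooled standard error is smaller than any single study's: this accumulation of precision across studies is the statistical rationale for synthesis over reliance on individual studies.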

Some argue that systematic research synthesis is a mechanistic process but this is again a misunderstanding. All research requires some form of process; the intellectual work and judgement comes in the framing of the question, the conceptual assumptions within the questions, and operationalizing those ideas in every stage of undertaking the review.

Different review questions will need to consider different types of research design, so systematic reviews may have to consider all types of research data, including qualitative data to answer process questions and concepts for conceptual synthesis in approaches such as meta ethnography. For example, the types of study, and thus evidence, used in a systematic review of the efficacy of interventions for families with non-organic failure to thrive are likely to differ from those in a review of the evidence on the processes by which such interventions have their effects.

Some studies may inform both outcome and process questions, as with, for example, Iwaniec and colleagues’ twenty-year follow-up of non-organic failure to thrive families (Iwaniec 1995). Outcome and process evidence reviews also differ from a synthesis of the concepts professionals and researchers use to understand and describe neglect and failure of children to develop in the ways expected of them.

There are many different approaches to undertaking systematic reviews, ranging from statistical meta analysis, to systematic narrative reviews, to conceptual reviews including meta ethnography. The basic main stages of undertaking a review are relatively similar but differ in detail and in terms of the content and processes involved at each stage (Gough 2004; Gough and Elbourne 2002).
