Ethics, ethical theories and research ethics
The variety of ethical inquiries
St Augustine famously asked ‘What is time?’ He answered by saying that if no one asked him then he knew, but if pushed for a definition he could not satisfactorily give an answer. He simply did not ‘know’. It might seem to the reader that there is a confusion of terms regarding morality, depending on where they live and work. As will be discussed in Chapter 3, there is a terminological dispute as to whether the designations Research Ethics Committees (RECs) or Institutional Review Boards (IRBs) ought to be used.
The former is typically used in the UK, while the latter is the dominant term in the USA. We note this to distinguish clearly what is merely a terminological dispute as opposed to a conceptual one. Sometimes, though, words commonly used as synonyms do harbour important conceptual distinctions. While we can, with little or no loss of meaning, substitute RECs and IRBs, we cannot simply exchange the words ‘ethics’ and ‘morality’, because these terms are merely a front for complex conceptual disputes.
It is likely that if one took a random selection of scientists and researchers who had just submitted a proposal for review by an REC, they would declare themselves in a similar position to Augustine. Most would be able to cite examples of ethical and/or moral ideas in life as in research. A generic list might include items like ‘duty’, ‘good character’, ‘obligation’, ‘principle’, ‘respect’ or ‘rights’. They might give instantiations of these ideas in research in terms of ‘anonymity’, ‘consent’, ‘privacy’ and so on. But, properly speaking, before we can understand at a more reflective level the scheme of things that allows us to recognise all the issues presented in Chapter 1, we must interrogate our understanding of the concepts of ‘ethics’ and ‘morality’.
This will entail, partly, some linguistic analysis and some stipulation. For there are as many definitions of these terms as there are theories spun from them.
One crude way of beginning might be to consider a range of paradigmatic cases of what people would call ethical or unethical in research. We know that decisions involving issues such as harm, risks, benefits, confidentiality, data protection and so on are encountered in just about every research proposal, whether quantitative or qualitative or some combination of both. For example, in a physiology study, where a muscle biopsy may be required to answer a particular research question, a researcher needs to ask, inter alia, the following: will harm occur from the procedure; have adequate safety precautions been taken; have subjects been sufficiently informed regarding the procedures; have subjects freely consented to take part? In an ethnographic study, has one considered that a particular line of questioning may upset interviewees; do participants know that they can withdraw at any time without sanction; how will sensitive information be dealt with? These and many other questions impinge on the conduct and character of research, and of researchers and lecturers themselves. Yet what might these disparate cases have in common such that we recognise them as ethical decisions or moral matters?
It may not surprise the reader to note that many philosophers, awkward beasts that they are, distinguish ethics from morality contrary to their ordinary meanings. It might be typical for Jane or John Smith to think that one’s morality is what governs one’s personal relations while ethics refers to more impersonal or institutional relations. In contrast, philosophers tend to reverse the meanings: ‘ethics’ is the local, particular, thick stuff of personal attachments, projects and relations while ‘morality’, by contrast, consists of detached, general (even universal), impartial, thin rules or norms governing how one should treat others or be treated by them. Typically, ‘ethics’ in this broad scheme of things, which – the reader will not be surprised to learn – is hotly contested, is prefixed by the name of a particular group or institution: bioethics, business ethics, Christian ethics, feminist ethics, journalistic ethics, medical ethics, military ethics (if that is not an oxymoron), professional ethics, sports ethics and, of present concern, research ethics. This leaves us in something of a difficulty: how shall we understand ethics and research ethics in this book in a way that is coherent and defensible?
First, let us agree that ethics (at least for the purposes of this book) shall uncontroversially be taken to mean the philosophical study of morality. Yet what does ‘the philosophical study of morality’ mean? Well, many sociologists claim to be researching ethical issues in, say, ethnographies of football hooliganism, or dilemmas in nursing practice, or norms of authorship in laboratory-based experiments. These studies might entail gathering data first hand and critically commenting upon them. We are concerned with such studies only as a means to systematically reflecting upon them in order to evaluate post hoc whether the courses of action and the character of the research and researchers are good, bad, defensible, indefensible or deplorable; or to consider the merits and defects of the design, as would an REC, in order to determine whether such research is good. So our reflections will, in a clear way, be second order. We reflect on the good and the gruesome in research. But determining whether research falls under one heading and not the other is not a straightforward matter. And the means by which we decide will betray the choices as to moral theory or the ethical position that we adopt. Even if we accept this designation of morality and ethics, there are still levels of ethics that it will be helpful to distinguish. In this chapter, we will distinguish three levels of ethics: (i) meta-ethics; (ii) normative ethics; and (iii) practical ethics. We shall say only a little about each of these levels. But a word of qualification is necessary here.
We are not suggesting that these levels are either given or necessary in any absolute way, nor that they are evaluatively naïve or theoretically innocent.
The construction of these levels of philosophical endeavour is a product of writings over the centuries. And they are the product of Western philosophy. The extent to which other cultures might challenge the levels is not considered here. So, for example, many in the West have previously assumed that ethics are moral standards that apply to our general conduct as social beings, which are continuous with Christian moral teaching, such as adherence to the Ten Commandments. Different cultures have different systems of thought. In Africa, by contrast, the concept of Ubuntu regulates behaviour, and enjoins adherents to act to promote what might best be termed ‘communal humanism’. We shall not reflect further on the challenges of cross-cultural norms or values in relation to levels of ethics, but instead address more pragmatically the issues these raise for researchers engaged in transcultural projects in Chapter 10.
It will be better to think of the levels of ethics, then, as a heuristic device: a way of managing the tortuous terrain of morality. We use them to help us think systematically about the issues of research design, data collection and analysis, report and scientific writing and so on. Neither we nor you, the reader, are logically compelled to think of ethics in this way. But in attempting to ask the enduring philosophical questions – Why be moral? What are the strictures of morality? Which are the most pressing of morality’s demands? Are moral demands universal? Is respect the cornerstone of morality? – we have found these levels of ethics useful.
Meta-ethics
Meta-ethics is that field of ethics where the philosophical abstraction is greatest. While moral philosophy generally attempts to deepen, revise and systematise reflection on how we believe we ought to conduct our lives, meta-ethics reaches to the foundational claims of all moral theories and practices. What are the grounds of moral authority? Is one moral theory more complete than any other? Can there be moral knowledge? Are moral principles unique in character? Are good and evil merely non-cognitive expressions of emotions or preferences? Do moral properties exist in the world or are they merely subjective or cultural constructs? These questions are among the most fundamental for all moral philosophers, sometimes called ‘ethicists’, to pursue.
Much deliberation in research ethics does not directly address these questions and indeed simply assumes answers concerning the authority of the IRB or REC as a legitimate organ of control, an institutionally justified gatekeeper for sound research. Most research ethics deliberation, by contrast, goes on at a very applied level, which oscillates between normative and practical ethics.
Normative ethics
The impulse to systematise is among the most basic for philosophers.
Normative ethics shares with meta-ethics the need for abstraction from particular persons, or practices, or policies into clear, coherent and consistent approaches. It might be useful to think that meta-ethics addresses issues that relate in a foundational way to all moral theories. Normative ethics can then be thought of as the development of moral theory or theories. There are those who complain that if ethics is not practical then the philosophical engine is somehow idling. While meta-ethics shapes the kind of moral theory espoused, normative ethics (a theoretically informed moral position) is itself a particular kind of theory. It has been argued that normative ethics should not be thought of as a scientific theory (Williams, 1985) but rather as coherent and systematic reflection to guide our practices.
Some would argue that this thought properly belongs to meta-ethics. This dispute illustrates nicely the difficulties of looking for hermetically sealed categories in the levels of moral thought and practice. Uncontroversially, we could say that normative ethics is thought to be substantive: it is about getting one’s hands dirty in the day-to-day stuff of life and offering at least defensible solutions to practical problems of how we ought and ought not to act. But it does so at a level that is consciously theoretically informed.
In the sections that follow, we have identified five celebrated moral theories. Perhaps it is better to think of them as families of theories, since each houses a number of interpretations whose subtlety we shall not attempt to do justice to here. As with the levels of ethical reflection, it can be helpful to distinguish two kinds of moral theories in a rather traditional way.
Some might be thought of as forward-looking, others backward-looking. We do not mean to imply that some are traditional and others contemporary by these vague labels. Rather, when confronted with a problem, we may attempt to organise our reflections around things we hold important before the fact, such as certain duties, obligations or rights. On the other hand, we may look to those things that are directed towards the achievement of a certain goal, such as the greatest benefit to a given population, or the achievement of a desirable character trait such as honesty.
In moral philosophy these two perspectives are usually given the labels deontology and teleology. Deontology refers to the science of duty (from the ancient Greek deon, roughly translated as ‘duty’), while teleology refers to the pursuit of a given purpose or goal (after the Greek telos). We will deal with the theories under this description. We will first examine the deontological family of theories (duty theory; rights theory) and then the teleological families (consequentialism – specifically utilitarian theory – and virtue theory).
We will finally present what is probably the dominant approach in bioethics, principlism, an eclectic theoretical position which attempts to cater for deontological and teleological parameters of moral practice and thought.
Practical ethics
As the term implies, practical ethics is concerned with how we ought to act here and now. In the everyday contexts of research, academics find themselves asking such questions as: ‘Am I obliged to present all data in my discussion or can I leave out certain factors that I deem irrelevant to the main thesis because they do not support it?’, ‘Can I break a promise of confidentiality if I think it will save a subject from being harmed wrongly or unnecessarily?’, ‘Ought I to accept research sponsorship from the tobacco or alcohol or even sports drinks industries?’, ‘Ought I to challenge sexist attitudes of interviewees if that will alter the data or harm the project irreparably?’, ‘Should I accede to including my supervisor’s name on a project when I know that he has not contributed significantly towards the final article?’, ‘Are deceptive methods justifiable when researching the influence of drug sales representatives on doctors’ prescribing habits?’
All these practical questions apply in everyday contexts in research. How we think about them will be informed or uninformed to the degree that we are willing to engage in philosophical reflections about their theoretical base.
Whether we know it or not, indeed whether we care about it or not, our attitudes and choices with regard to the conflicts above will be nested within a set of theoretical considerations such as the duty to protect research subjects; the respect of colleagues; obligations to the profession; integrity to ourselves, and so on. What is being applied here is moral theory (knowingly or otherwise). The label ‘practical ethics’ is indeed relevant; we should not think that the term ‘practical’ means non-theoretical. Rather it depicts a feature of morality that is widely accepted by philosophers: the conclusions of moral considerations should be action. Once we decide that a given problem is best considered in a given light, the conclusion that follows should be action-guiding. So practical or applied ethics should not be inert. An idea very much like this was propounded by Socrates over 2,400 years ago. It is captured in the phrase: ‘Who knows the good chooses the good.’ While philosophers have challenged the strong cognitivist line (that simply knowing what to do will somehow transport us directly to act) from a variety of lines of argument (notably, Aristotle argued for the possibility of weakness of will in the face of knowing what we must do), we still argue that, as a general norm, knowing what ought to be done in research contexts places researchers under a certain ethical pull toward doing the right thing, and being a good person, not merely a technically effective researcher.
An awareness of central philosophical theories should, in principle, serve us well in research situations where we begin to understand the push and pull of competing courses of thought, feeling and action. An awareness of such theories can certainly help towards coherent, consistent and transparent modes of response; in short, it can make our responses accountable. We shall develop these ideas specifically in the context of research ethics below.
First, we shall consider some summary outlines of consequence-based, duty-based, rights-based and virtue-based moral theories more generally.
Again, it should be noted by way of caveat that each of these sketches represents not one theory but a whole family of theories under these labels.
Several excellent collections now exist for the reader who wishes to familiarise themselves with these families of theories, and the wreckages of many others that are to be found at the bottom of the moral philosophical ocean (see, for example, Singer, 1993; LaFollette, 2000). We have selected duty, rights and consequentialist theories since they represent the dominant modern moral theories, and we also comment on the ancient virtue-based ethics since the last quarter of a century has seen a very significant revival of interest in it.
Consequentialism
The idea that religion unequivocally provided us with moral rules justified a picture of a kind of moral law. What drove human beings to act rightly was observance of its authority. Consequentialism, by contrast, appeals to the empirical, the here and now of human welfare. It is driven by the idea that what human beings seek is that which is good for them and that they seek to avoid what is not of benefit to them. In a famous passage, Jeremy Bentham, the founder of the tradition, claimed that pleasure and pain were our sovereign masters. It followed, then, that questions of moral rightness or wrongness hinge upon an assessment of good (pleasurable) and bad (painful) consequences. At first sight, ethically evaluating research would seem a perfectly natural extension of utilitarian thinking. What we first look for in research is very often related to the question of what benefits it will bring, and what drawbacks too. This is nothing if not consequentialist thinking.
Perhaps the most well-known form of consequentialism is utilitarianism, associated most famously with John Stuart Mill and his book Utilitarianism, first published serially in 1861 and in book form in 1863. It is the clearest exposition of the theory first developed by Jeremy Bentham (see his An Introduction to the Principles of Morals and Legislation, first published in 1789).
The basic idea in this moral theory is quite simple, and is captured in this passage from Mill:
actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure, and the absence of pain; by unhappiness, pain, and the privation of pleasure.

(Mill, 1962 [1863]: 257)

Utilitarianism claims that morality is concerned with doing good, so that when we assess the morality of what we choose to do our only consideration should be the utility of acting in one way or another. ‘Utility’ or ‘good’ can have a number of meanings, including pleasure, happiness, welfare and the satisfaction of preferences. All of these conceptualisations can be considered under the heading ‘beneficence’ – a principle of action aimed toward good.
Equally, when considering the goodness of certain outcomes we must also consider potential harmful consequences in the form of pain, or general disbenefit. These are generally captured under the heading of the principle of non-maleficence, though strictly speaking this refers to non-harm and is a cornerstone of medical ethics. The Latin phrase primum non nocere (first do no harm) captures this principle most famously. But this might equally be a principle adopted by deontologists, as we shall see below.
Utilitarianism is therefore commonly described as an ‘outcome morality’:
when evaluating or attempting to justify a course of action, utilitarians weigh up potential outcomes of each possibility based on the premise that what ought to be done is always whatever produces greater utility. This is often referred to, after Jeremy Bentham, as the Greatest Happiness Principle.
A chief value of the utilitarian approach is that it provides a method for noting and evaluating benefits and harms, even if it is not quite the precise mathematical morality its founders envisaged. Utilitarianism is based on Bentham’s ‘felicific calculus’, which is essentially a means of rational calculation by such measures as the intensity, certainty, extent, nearness in time and duration of the pleasure or happiness attained by a given policy or action. A major appeal of utilitarianism, then, is that it produces a right answer in any given situation according to the criteria above. All manner of difficult choices are grouped together and solved merely by seeking a balance between competing considerations that promises to produce the best outcome.
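To give a rough flavour of the kind of weighing Bentham had in mind, the calculus can be sketched as a simple weighted sum. The formula below is our own illustrative reconstruction, not Bentham’s formulation, and the symbols are ours:

\[
U(a) \;=\; \sum_{i=1}^{n} p_i \, I_i \, D_i
\]

where the sum runs over the $n$ persons affected by action $a$ and, for each person $i$, $p_i$ is the certainty (probability) of the pleasure or pain in question occurring, $I_i$ its intensity and $D_i$ its duration, with pains entering as negative quantities. The Greatest Happiness Principle then directs us to whichever available action yields the highest value of $U$.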
However, there are a number of problems with the ‘felicific calculus’ as we shall see.
A further point must be made in praise of utilitarianism, which relates to philosophical and common-sense language. When the term ‘utilitarian’ is used in everyday contexts it is often as a term of abuse. Thus, describing a researcher’s attitude as ‘utilitarian’ means little more than conveying the opinion that the researcher merely used their participant as a means to his or her own ends, subject to their will as researcher. By contrast, the philosophical theory ‘utilitarianism’ has at its core an impartial ethic. Anyone relevantly affected by a course of action should be counted in terms of harms and benefits. Researchers, or the group they belong to, could never privilege themselves in the calculation. So to describe the Nazi researchers of our previous chapters as ‘utilitarian’ would clearly be to invoke the rather