She begins by making a distinction that neither Clifford nor James makes: between epistemic and ethical justification. And she argues that while these overlap, they are not the same. She points out that there are occasions when one is ethically justified in holding beliefs that one does not regard as epistemically justified; and that the example of belief without epistemic justification which Clifford uses, concerning the shipowner, has features that are not universal - and, moreover, it is these features that in themselves invite unfavourable moral appraisal.
She counters his position by arguing that 'my believing, on inadequate evidence, that the apples I just selected are the best in the supermarket, is, like many inconsequential beliefs, harmless', and is therefore not open to serious moral criticism. Interestingly, this example indicates that there are conditions very different from those identified by James under which belief on insufficient evidence might be acceptable. Haack also provides an example that is somewhat closer to the conditions specified by James. She claims that Clifford's case depends on two false assumptions: 'that mere potential for harm, however remote, is sufficient for unfavourable moral appraisal provided the subject is responsible for the unjustified belief; and that a subject is always responsible for unjustified believing'.
And she insists that 'remote potential for harm is not sufficient; if it were, not only drunken driving, but owning a car, would be morally culpable [for road accidents]. And a subject is not always responsible for believing unjustifiedly; the cause, sometimes, is cognitive inadequacy' (Haack). Despite this criticism, Haack concludes: 'Like James and unlike Clifford, I do not think it always morally wrong to believe on inadequate evidence.'
Moreover, she argues that when it comes to appraisal of the person as inquirer, rather than of any particular belief, the relation of epistemic to ethical appraisal may be more intimate, pointing to 'the moral importance of intellectual integrity' (Haack). In concluding her article, she quotes C. I. Lewis: 'Almost we may say that one who presents argument is worthy of confidence only if he be first a moral man, a man of integrity'. I find much of Haack's argument convincing, but there are some respects in which it needs reformulation and further development, at least for my purposes here.
For one thing, I am not convinced that the distinction between what is morally wrong and what is epistemically wrong can be sustained. To say that something is wrong is to say that, other things being equal, it should not be done - on either moral or prudential grounds. It seems to me that epistemic considerations can only enter as part of moral or prudential arguments. In that sense Clifford was quite right to talk about the ethics, rather than the epistemics, of belief. When we talk of the sufficiency of evidence, we are concerned with what ought to be sufficient in the circumstances.
So, I do not believe that there is some fixed epistemic threshold dividing what is and is not sufficient evidence. Haack herself points out that there is a gradient of credibility, so that knowledge claims are not simply either credible or not credible. What I want to suggest is that the point on this gradient we adopt as the threshold that beliefs must reach if they are to be treated as sound knowledge will legitimately vary across situations.
One consideration here, following Haack rather than James, is consequentiality. Whether or not the apples one chose in the supermarket were the best will, in most circumstances, be inconsequential. And, as a result, the threshold for what would count as sufficient evidence for or against this belief being adopted legitimately is likely to be low. We can reasonably make a judgment on whatever evidence is available and the direction in which it points; we are unlikely to spend much time inquiring into the matter to assess whether this evidence is convincing in terms of some higher threshold.
The other example that Haack provides is slightly different in character: it suggests that if there is much to be gained from believing something and little to be lost from being mistaken, then we do not need much in the way of evidence in order to be ethically or prudentially justified in treating that knowledge claim as true. What this points to, in more general terms, is that there are often differential consequences following from the two types of error that can arise in assessing knowledge claims: taking something to be true that is false, in other words a false positive; or rejecting something as false that is in fact true, a false negative.
Now the point is that, in everyday life, we may sometimes legitimately adjust our threshold of acceptance according to the consequentiality of different kinds of error, seeking to minimise the most consequential sort of error even at the expense of increasing the chances of the less consequential kind.
This argument can be taken a further step, to conclude that how we judge the legitimacy of treating something as true or false will depend to a large extent on the role we are playing, on the activity in which we are engaged. As Dorothy Emmet pointed out many years ago, ethical judgments are properly shaped by social role (Emmet). And the same goes, even more obviously, for prudential judgments. In playing particular roles we have distinctive responsibilities, and these mean that some considerations are high priorities, while others are given less significance than they would be by people performing other roles.
A classic case is the requirement that a doctor assess what treatment a patient needs independently of any judgment about the financial costs to his or her practice, or to the national exchequer. This does not mean that the best course of medical treatment should always be provided, but in determining the hierarchy of possible treatments in terms of their relative desirability, financial considerations should not play a part. Now, I want to suggest that a similarly distinctive ethical framework applies to research, or at least to academic research.
Here, the primary aim is to produce a body of sound knowledge. In other words, the function of research is to produce knowledge that is more consistently reliable, in the conventional sense of that word, than that available from other sources. And this has two implications for how researchers should operate. First, knowledge claims must be judged entirely in terms of how likely they are to be true. Other considerations, for example to do with consequentiality, should not play any role.
This means that, by contrast with many other activities, there should be a standard, across-the-board threshold for accepting or rejecting knowledge claims; it should not vary depending on the specific context. Secondly, a prudential orientation ought to operate that errs on the side of avoiding false positives, at the possible expense of accepting false negatives. Above all, researchers must be prepared to suspend belief in the validity of knowledge claims if there is insufficient evidence for them.
Here, in the context of research, more than anywhere else, the spirit of Clifford's ethics of belief applies.
Moreover, it seems to me that the evaluation of knowledge claims in the context of academic research has a collective character. What counts is not just a matter of what an individual researcher judges to be true but rather what the relevant research community judges to be true, or would judge to be true.
The researcher, as researcher, must operate in some sense as an exponent of the relevant research community, not simply as an individual. In particular, he or she should not treat as true any conclusions which are not, or would not be, generally accepted by that community. Researchers can present conclusions as true in their own opinion, but any lack of consensus about the validity of these conclusions within the research community must be made explicit. My central argument, then, is that, in the context of research, a particular approach to the evaluation of knowledge claims is an ethical priority: the threshold that has to be reached is determined entirely by epistemic considerations, and through a process of dialectical deliberation within the relevant research community.
Researchers have a primary ethical responsibility not to put forward findings as more likely to be true than they are on the basis of the available evidence, as judged by that community. The effect of this is that the threshold set will be standard, and rather high, since this is what is required if research is to produce knowledge that is consistently more reliable than that available from other sources.
What I am suggesting, therefore, is that there is an important difference in the ethics of belief between research and some other contexts. In the latter, what is sufficient evidence varies, depending on a number of considerations: the relevance of the belief to any past or future action, the consequentiality of that action, the likely practical costs of errors of different kinds, and the reversibility of the action involved or the remediability of its costs.
Where a belief has little relevance to action, or where the action is inconsequential, the threshold can legitimately be set relatively low. Where a belief is relevant to action, and that action is consequential, and a false positive would be more damaging than a false negative, or vice versa, a prudential bias should be adopted so as to minimise the more costly kind of error.
And the direction of that bias will vary according to circumstances. Where an action is reversible, or the costs of it remediable, the threshold may be set quite low, with the idea that we can learn by trial and error, correcting any errors before they produce significant costs.
In the next section, I will address some arguments that might seem to count against the view I have presented about how researchers should treat knowledge claims. A first criticism derives from the point, central to Haack's position, that all knowledge claims are fallible. She writes that 'one's judgment that another's belief is unjustified must, because of the perspectival character of judgments of justification, their dependence on one's background beliefs, be acknowledged to be thoroughly fallible' (Haack). And it might be concluded from this that all knowledge claims are equally uncertain, so that the very notion of evaluating knowledge claims in terms of their likely truth is unjustifiable. However, the conclusion does not follow from the premiss: that all beliefs are fallible does not mean that they are all equally likely to be false (see Haack). The term 'truth' is sometimes avoided or rejected by social scientists and educational researchers, or encased in scare quotes, or what Haack refers to as 'sneer quotes', on the grounds that it implies that we can have knowledge whose validity is absolutely certain.
However, this is to take over an indefensible conception of truth from foundationalist epistemology. Moreover, it is impossible to avoid reliance on the concept of truth: it is constitutive of the very activity of inquiry, and essential to the distinction between knowledge and belief. Even the epistemological sceptic relies on a conception of truth, albeit one whose requirements can never be met.
Thus, it is important here to be clear about what is meant by 'fallibilism'. It is not the same as scepticism, interpreted as the idea that we cannot know anything. Scepticism, like foundationalism, relies on a definition of knowledge as 'what is known without any possible doubt'. Fallibilism rejects that definition, treating knowledge as 'what is currently believed with good reason', and recognises degrees of legitimate confidence in the validity of various knowledge claims rather than some absolute distinction between what is and is not known. So, to say that our knowledge is fallible is not to say that it is false, or even that its validity is completely uncertain.
It is to say that while we have justifiable confidence in it, we can never be justified in having absolute confidence in the validity of any knowledge claim, even though some come very close to that limit; and even though some seem to operate as 'hinges' on which various activities depend (see Wittgenstein). So, the fact that our knowledge is always fallible does not undermine commitment to truth as a requirement of inquiry, or the application of Clifford's 'ethics of belief' to the researcher.
A second criticism of the position I have presented might be that in assessing knowledge claims researchers are entitled to rely on how plausible they find these, in terms of how strongly these claims are implied by what is currently taken to be sound knowledge. Thus, even if the evidence from existing studies does not support a claim, if it seems to fit closely with other beliefs that are themselves well-established by research evidence, then it can be accepted as true by those who judge it to be.
It seems to me that there is some truth in this argument, but it is only part of the truth. Researchers, in assessing the likely validity of any knowledge claim, should take account and, indeed, cannot avoid taking account of plausibility in the sense indicated here. The process of assessing each finding from a study involves assessing whether it is so plausible on the basis of existing knowledge as not to require any further evidence in support of it.
Only if it is insufficiently plausible in this sense is evidence required. Moreover, the evidence will itself have to be assessed in terms of its plausibility, as well as its credibility (the likelihood that errors were built into its production). However, plausibility here means how strongly what we currently take to be research-based knowledge implies the validity of this knowledge claim. Moreover, in line with the collective character of research as an activity, plausibility judgements, like those concerned with the credibility of evidence, must be made in light of the likely reaction of most members of the relevant research community.
Only knowledge claims which are likely to be accepted as plausible by most of a research community should be treated as if they are true. As noted earlier, where a researcher regards a knowledge claim as true, but it is contested or likely to be contested within the relevant research community, he or she must acknowledge this fact by not presenting it as well-established, though this does not rule out its presentation as a matter of personal belief.
Above all, it must not be treated as something that can be relied upon as being true, or presented to lay audiences as fact. So, the role of plausibility in the judgment of knowledge claims is already included in the account of how research findings are assessed that I have presented. Moreover, this rules out a researcher, or even large numbers of researchers, accepting findings on the basis that, as individuals, they find them plausible.
This, I suggest, is what has happened in the case of teacher expectation theory.
It is found plausible by many researchers, but not in contexts where the primary concern has been evaluating its validity. Moreover, other researchers have treated it as in need of investigation.
And since the studies generated have not provided convincing evidence, on the argument I have presented in this paper it cannot legitimately be treated by educational researchers as well-established knowledge; though, equally, there is no warrant for treating it as false. Its validity is uncertain. A third criticism that might be made of the claim that the ethics of belief is central to the practice of research is the argument that researchers have an at least equal obligation to respect other values than truth.
It is sometimes suggested that they must therefore incorporate into judgments about the validity of their conclusions a concern with the latter's political implications or the consequences of presenting them. Some social and educational researchers advocate this position, and we must assume that it shapes their research practice.
However, as I have argued elsewhere, it increases the potential for error being introduced into research findings (Hammersley). As a result, on my argument here, it is unethical - in the sense that it involves a deviation from the primary obligation of the researcher. In the case of inquiry, and especially of academic research, the success of the enterprise in producing conclusions that are consistently less likely to be false than those from other sources depends on researchers being committed to the production of knowledge as their only immediate goal.
As soon as other goals are allowed in, the danger of coming to erroneous conclusions is increased. And it is the primary duty of the researcher to seek to ensure that his or her conclusions are true. So I am arguing that there is a distinctive ethic, very much in the spirit of Clifford's 'ethics of belief', that applies to research as a specialised occupational activity. Moreover, I believe that this is an ethical obligation that is currently under threat, not only from methodological arguments in favour of partisanship but also as a result of rapidly growing pressure on researchers from commercial and governmental organisations.
The final criticism I will consider here concerns the fact that research is itself a practical activity; for the most part, it is carried out in the world, and must be adapted to that world. This fact is of significance because I have argued not only that other values as well as prudential considerations necessarily play a role in practical activities, but also that in such activities judgements about what is sufficient evidence for a knowledge claim can reasonably vary across circumstances.
This would seem to imply, contrary to my main argument, that prudential considerations, including the evaluation of knowledge claims partly in terms of consequences, must play a role in research. Some care is required here in distinguishing between research findings and the various assumptions on which researchers rely in doing their work.
My argument about the priority of epistemic considerations in determining what is to be treated as true, and about the process by which the threshold of belief should be determined within research, applies to the treatment of research findings. However, the situation is different in evaluating the assumptions involved in doing research.
In carrying out a piece of research one is faced with a host of decisions to which various kinds of belief are relevant: about whether access can be gained to a particular setting, whether one approach to negotiating access rather than another is likely to be more successful, how to present the research to those being studied, what data collection methods to use, what sorts of interview question might facilitate or hinder getting high quality data from informants, and so on.
These are necessarily pragmatic matters, in the sense that the sorts of consideration I outlined as relevant in many practical decisions - consequentiality, remediability, etc - will play a key role. Furthermore, in relation to many issues that they have to resolve for practical purposes, researchers are unlikely to have the evidence, or to be able to collect the evidence, that would be necessary to make a judgment in the way that is required for research findings.
However, there is no contradiction here, since the pragmatic decisions involved in research must be made with a view to maximising the likelihood that the findings produced will be true.