
Epistemology and the Directly/Merely Distinction

‘In philosophy it is significant that such and such a sentence makes no sense; but also that it sounds funny [komisch]’: Wittgenstein, Zettel 328

1. The distinction introduced

The theory of knowledge has been central to philosophy since the time of Descartes. It was not ever thus. The Western tradition started with the philosophy of chemistry (‘Everything is water’), and it has been argued more recently, particularly by those influenced by Frege, that it is the philosophy of language that is uppermost, and that ‘What do you mean?’ has replaced ‘How do you know?’ as the philosophical ur-question that leads to more detailed interrogation.

This paper will question this 20th century assumption. Epistemology has not been dethroned, only refashioned, I shall say. However, I shall also consider what sort of epistemology would be most suitable for this century, bearing in mind that the challenges we shall meet over the next decades are going to be very different from those that faced Descartes and Locke.

It will be protested that such questions are pointless — even in principle. The owl of Minerva only spreads her wings at dusk; and epistemology in the ‘early modern period’, with its artificial distinction between rationalists and empiricists, is merely a product of hindsight. So how can we choose the type of epistemology we shall have in this century? We must start with something, or else there is nothing for future historians of philosophy to examine. And, in any case, much Enlightenment epistemology is clearly no longer fit for purpose. Nowadays, we need to consider urgently whether the Amazonian rainforest is sustainable, not whether (God permitting) it would instantly cease to exist should we stop looking at it. This is not to belittle Berkeley; but I shall argue that the lessons to be learnt are not always what they appear to be.

In what ways has epistemology already advanced in this century? Two ways in particular spring to mind. First, knowledge is now considered to be a social phenomenon rather than an individual achievement. For example, it matters more whether climate change is known to be real than whether a given individual happens to know whether it is or not. Thus, stalwarts of contemporary epistemology courses, such as problems about definition (and the somewhat strangulated post-Gettier series of ever-more-prolix counterexamples), are tending to fade from the curriculum, to be replaced by 21st Century preoccupations.

Secondly (and relatedly), the status of testimony has leapt to centre stage after centuries of neglect.[1] With the striking exception of Reid (1765: ch. 6, § 24), 17th- and 18th-century philosophers tended to take a reductionist line, and assumed that sensory experience and reason are the only ultimate sources of knowledge. We thus accept the testimony of others only because our senses and our reason tell us that these others are reliable witnesses. Yet there is a problem here, for as Reid noticed, we cannot talk of ‘reliable witnesses’ unless we have already accepted a conceptual scheme that treats the general reliability of witnesses as fundamental. Moreover, we have the ubiquitous sceptical considerations presented to us most forcefully by Hume: reason does not extend beyond pure logic, and an experience tells us nothing beyond the actual content of that experience.

These two factors (i.e., sociality and testimony) are related. Consider, for instance, scientific knowledge. We still tend to think of the great scientific developments of the past as due to individual genius. Without the influence of these Great Men (and they are nearly all men), science would not have progressed, and we should have stayed in the Middle Ages with some sort of unimaginative Aristotelianism. True, Sir Isaac Newton acknowledged that he stood on the shoulders of giants, and we still assume that he was a greater thinker than Robert Hooke (who is unknown to the public at large). Historians and sociologists of science may question this, and point out that Newton the man was highly unscrupulous (for some reason, Hooke was a favourite target). But it is an axiom of analytical philosophy of science that the history and sociology of science are philosophically irrelevant, and that it is the quality of philosophical argument, not its origin (or its relation to the ‘frilly bits’ that give a background to matters), that counts.

This is all ancient history, modern scientists may protest, but I disagree. The Nobel Prize system for rewarding excellence seems to be unfit for purpose given the importance of team-work — yet the Nobel Prizes continue to be awarded. Modern climate science is multidisciplinary, and nobody is sufficiently expert in all the ingredient disciplines to be able to know directly all the relevant facts that the public at large need to know (a fact gleefully seized upon by climate change deniers). Yet, the standard route to a research grant is by completing a traditional doctorate on a highly specialized topic. Clearly, an antiquated model of research is being applied to 21st Century problems, thus showing that modern tertiary education is not fit for purpose. However, the philosophy of education is a marginal branch of the subject, so I shall let the matter drop – at least for the time being.

A useful place to start our analysis is, oddly enough, the nature of collective action (not belief). This is because action is the dual of knowledge and they both concern the chasm that apparently separates the self from the world. Knowledge involves world-to-mind input, and action concerns mind-to-world output. A robust and unified notion of action seems to be needed if we are to avoid ‘volitions’ or bare acts of will – entities viewed as unsatisfactory by a variety of recent philosophers. Wittgenstein presents us with a fundamental challenge. If we subtract from the fact that I raise my arm the fact that my arm rises, what remains? Whatever it is is both crucial and elusive. An individualist epistemologist might phrase her analogue question as: subtract from the fact that I know that I have an arm the fact that I have an arm — what remains? The question is difficult to answer even when dealing with the immediately evident and can only get worse as the target fact moves further and further away from the investigating mind.

Now, the point is that when we move from the individual to the collective, these sorts of difficulties tend to evaporate. Or rather, they disappear, but then reappear in a much more interesting form. Collective action brings with it the morally charged notion of collective responsibility, something that is more than the sum of its parts (as is illustrated by the well-known issue of just who was responsible for just which part of the Holocaust). The notion of collective responsibility, and its allied notion of collective guilt, make more sense than many suppose in the task of getting post-war Germany to face up to the horrors of its past and to move on, and this has several implications. For example, we are no longer disposed to think that the German collective soul in the 1930s and early 1940s fragmented into atoms, which then prodded the pineal glands of the various soldiers and guards directly involved in the administration of Auschwitz, which then organized individual bodily actions, which then mysteriously coalesced into something far greater and more sinister. The middle part is as unnecessary as it is bizarre and should never have been present in the first place. Gilbert Ryle (1949) famously mocked what he called the Dogma of the Ghost in the Machine, which presented itself as the unlikely claim that all human actions involve poltergeist activity in the skull, activity that would not be tolerated if we were told that something like it was happening in the next room but not concealed within a bone casing. Such ‘psychons’ (i.e., soul fragments, as understood by Sir John Eccles, a Catholic neurophysiologist) are as deplorable as they are unverifiable. By contrast, the image of how the German collective soul led to Auschwitz can be narrated satisfactorily without anything similar being introduced as an explanatory factor. The same may be true of the reverse direction involved in the parallel notion of knowledge – though the latter now requires us to take testimony seriously as an independent source of knowledge.

Is the notion of testimonial knowledge intrinsically problematic? Or are such problems mere pseudo-problems that philosophical investigation of any genre seems to create? I think the former, if only because the most natural real-world example of testimonial investigation concerns the deliberations of a jury in an English criminal court. The rules of evidence are strict, and care is taken to ensure that nothing can enter the court’s deliberations that did not come from sworn testimony of witnesses that the jury can (usually) see. The jury must not try to improve on the original police investigation (even if the trial is temporarily removed from the courtroom to the locus in quo). Still less must it try to re-fashion the case for the prosecution, even if the latter is visibly incompetent. Rather, the jury must simply decide whether the case – as constructed by others — is sound beyond all reasonable doubt.

So far, so good. Without the assumption that testimony is in general to be relied upon (aided by the fact that witnesses must swear an oath before giving evidence), the trial could not continue. And being sure beyond all reasonable doubt (as opposed to beyond all possible doubt) is not affected by the ghost of Cartesian scepticism, which is particularly unhelpful given that trials are meant to lead to verdicts and not to endless debate. Testimony is thus just fine — and obviously indispensable.

Only it is not, for the other axiom of a fair trial is that hearsay evidence is always inadmissible. Just why we do not allow a witness to state what she knows perfectly well to be true (because she has been told it by a reliable source) simply because she does not know it directly (as opposed to merely knowing it) is a neglected topic in both the philosophy of law and epistemology, and I shall return to it. In the meantime, we have — at long last — the distinction between knowing something directly and merely knowing it. Hence the title of this paper.

2. The distinction in action

So, what is this distinction, and what contrast does it mark? It is generally supposed that knowledge must be grounded in evidence, and it is axiomatic in the philosophy of science that theoretical knowledge is less basic than the hard-core empirical results yielded by careful experimentation. This seems obvious, until we remember that ordinary scientific knowledge is heavily reliant on expert testimony. I am much more confident about theoretical facts in physics – facts which are undisputed by experts, and which are comparatively accessible to the non-specialist – than I am about the precise experimental evidence on which these theories are based. So, which comes first, the theory or the experimental data? As with all chicken-and-egg questions, the answer is obvious: they evolve together, not one followed independently by the other. However, residual questions remain, and it is useful to ask, of each member of such contrasted pairs, which side of the directly/merely distinction it belongs to.

 Classical empiricism (the dominant narrative of the English-speaking world) says that what is directly known are ideas in the mind – sense data, impressions, qualia, or whatever you care to call them. Moreover, it is insisted that it is only such unprocessed mental items that can be directly known about, and it is easy to explain why. The individualist Cartesian thought experiment involving dreams, malicious demons and other unlikely obstacles to sensible cognitive processing is massively comprehensible, which is why Descartes is universally regarded as the founder of modern Western philosophy. However, his legacy remains a dark one, and that is because scepticism, like cocaine, is highly addictive and its artificially-induced insights just that – artificial. True, a potential Sherlock Holmes can get away with an occasional seven-percent solution thought experiment without becoming any the less a fiercely independent thinker, or so it obviously seemed to him. Only remember, it did not appear thus to Dr Watson, the eternal straight man whose knowledge is so pedestrian as to be almost laughable.

How do we rescue ourselves from the legacy of these sceptical pipe-dreams without jettisoning an entire philosophical tradition? My answer is again clear: apply the directly/merely distinction, but do so in a J.L. Austin sort of way, i.e., by looking at what we actually think and say and not just at what philosophical theory suggests we ought to think and say. With this in mind, we may declare, with G.E. Moore, that I know directly and non-inferentially that I have two hands, and that dissentient philosophers can park their theories elsewhere (though nowhere unreasonably rude: philosophy must always be a polite discipline). This is not to demand absolute certainty, as Moore himself rather unwisely insisted, for everything depends on context. If I wake up in hospital with my arms covered in bandages, I may well have grounds for doubt as to whether I have two hands. But that is merely an exception that proves the rule. Unless we have some tediously contrived philosophical point in mind, we may say with total confidence that I have two hands – and, further, that each hand is made of flesh and blood in the ordinary way (and not a hard, ultra-thin membrane concealing just empty space within, for example, to take an equally ludicrous possibility).

Just how do we know this? Have we looked carefully? Such questions simply evince a naïve, pre-Wittgensteinian epistemology and should be firmly resisted. This is because basic knowledge is not grounded in evidence at all, a fortiori not in questionable evidence. To put it more plainly, I do not merely know that I have two hands. I directly know it – end of epistemological story.

An old joke has it that the Mathematics Department is very cheap to run thanks to its absence of expensive laboratory equipment. All that is required are pens, paper, and bins suitable for recycling large quantities of paper. Such is the beauty of being an a priori rather than an empirical discipline. The punch-line, of course, is that the Philosophy Department is even cheaper to run, since we can dispense with the bins. Well, as jokes go it only gets about 7 out of 10 in this author’s opinion. In the Antipodes, real jobs are at risk because certain governments assume that departments of philosophy cannot be creating wealth since they do not themselves require massive investment. By contrast, we may confidently say, even with the world’s economies being destroyed by the Covid-19 emergency, that governments will never give up spending billions on medical research …

… all this is perfectly obvious, if a trifle unphilosophical, you may say. I merely point out in reply that we have already proved decisively that I have direct knowledge of the fact that I have two hands — hands with quite normal anatomical and physiological properties. So, to cut a long story short, we should ask urgently why it is that the Clinical Medicine departments, particularly those attached to NHS teaching hospitals, are the most expensive to run and Philosophy departments the cheapest. Why not the other way round, we may ask again, and with a touch of understandable resentment? However, I shall not pursue this, and shall leave such financial questions to university managers. Beyond that, I shall say nothing — except to point out the obvious, namely that my jokes are a lot better than yours.

3. An Apology

The English word ‘apology’ has several related meanings and is sometimes referred to as a ‘contranym’. That is to say, the word manages to have meanings that are the exact opposite of each other, even though they are closely related. This sounds like a linguistic a priori impossibility, yet it is refuted by example. Other examples of contranyms include ‘peer’, which may mean a peer of the realm, i.e., a social superior, but which can also mean a social equal (as when one talks about one’s peers). The most common use of ‘apology’ nowadays is to refer to an act of contrition, and one does not usually accept anyone’s ‘apology’ in this sense unless one is satisfied that there is genuine remorse. But, famously, Plato’s dialogue of that name shows Socrates at his feistiest, poking fun at his executioners, and defiantly explaining that true philosophy is not a matter of earning one’s living in a certain way, but of living one’s life in a certain way – a powerful explanation, since the life in question was due to end somewhat prematurely.

So which sense of ‘apology’ am I talking about here? It is a measure of the respect in which I (the author) hold you (the reader) that I am simply not going to answer that question. I hasten to add that I mean this in the nicest possible way. The point rather is that you may read what I write in any way you feel like; and as far as I am concerned, the more diversity of interpretation there is, the better my intended result is achieved.

Still, editorial patience is limited, and it should be clear that the style of this article is alarmingly unconventional, even if the content – strangely enough – is not. So, what is going on? Can we give a name to this Phenomenon? Well, names are just names, and the received view nowadays seems to be that they gain their semantic content by merely latching on to their bearers without the assistance of any intervening idea or descriptive content. The snag is that the bearer is no easier to identify than the name. Merely to say ‘well, just look at what you are now reading: that is the Phenomenon!’ is a bit intolerant and contemptuous of those who do not like to rush to judgement.

We could, of course, just call our mystery ingredient ‘The Phenomenon’, and be done with it, but such rude clarity is not always helpful. Will it not simply encourage a certain sort of critic to ask herself whether the Other, in this context, should be called ‘The Noumenon’? Such questions are mere distractions from the only important fact there is, namely that this article manifestly contains an idea that is genuinely new and is not merely a repackaging of something that has appeared many times and in many guises in the history of human thought.

So, what is this idea? Or, since ‘ideas’ are too subjective to be the bearers of meanings, what are the rules that govern the distinction between correct and incorrect usage of the word that has yet to be identified? Now, what have become known in analytical philosophy as the ‘rule-following considerations’ are inevitably going to be relevant here. The only snag is that we live in the 21st Century, where new ideas and inventions are being created at a dizzying and exponentially increasing rate. The idea of ‘going on in the same way’, essential to the development of stable Wittgensteinian semantics, therefore lacks a coherent meaning.

Some radical changes in our basic philosophical assumptions are needed, and since it is my article and no-one else’s, I shall just make some stipulations. Instead of God, Freedom and Immortality, we shall henceforth speak of Metaphor, Toleration and Telepathy. Why, exactly? We need not worry unduly at this stage. After all, Kant never made it particularly clear what the original triad was supposed to mean. Of course, ‘God, Freedom and Immortality’ has a kind of natural sound to it, if only because it is so familiar. However, what seems natural can be made to change, and that is precisely what I am going to do.

Let us begin with Metaphor. That all words are (dead or alive) metaphors is taken for granted by those philosophers primarily influenced by Nietzsche – modern European philosophers.[2] It is also dominant in the semiotic tradition of Umberto Eco (1979) that was so fashionable about 40 years ago. However, both continental traditions will be firmly eschewed in favour of the older (and more respectable) analytical tradition, a tradition that runs continuously from Thales to me and which embraces all (except those who misread Kant rather badly). This is not because it is somehow ‘better’ than its continental neighbours – though it is – but because I wish to prove that continental content can be expressed perfectly well – or even better — within an analytical style. If this leads the reader to conclude that she is currently reading nothing more than a literary conceit of some kind, then so be it. At the risk of sounding like a sulky adolescent schoolgirl, I must say that I am not unduly bothered by this — or by any other related accusations.

4. Metaphor and Toleration

Still, discipline must be maintained in at least some form, albeit in a more relaxed and humorous style, and it should be remembered that Ferdinand de Saussure (1857–1913), the father of modern linguistics, has yet to be introduced. Is he an Anglo or a continental, I hear someone slyly ask? Well, he should be compared with Wittgenstein (which nobody does, as Harris 1988 convincingly argues). Why? Given that Francophone Saussure was Swiss and German-speaking Wittgenstein Austrian, it is difficult to see why the English should have embraced the latter as a native son and should have regarded the former simply as the natural precursor to the poststructuralist Derrida, the very epitome of all that is evil about French culture. After all, they (the Swiss and the Austrian) talked of similar things – the image of words-in-use as chess pieces defined by legitimate chess moves rather than physical realization seems to have occurred independently to both, for example. Moreover, anyone who thinks that the Swiss writes more obscurely than does the Austrian clearly needs a think-transplant. The Course in General Linguistics is a model of sane clarity, whereas it is considered quite an achievement to say just where in the Philosophical Investigations the infamous ‘private language argument’ even begins, let alone ends.

However, I am no expert on Saussure, and I merely praise him for his two-dimensionalism (as I shall call it), a very rare quality. Let me explain. Most thinkers, when they wish to introduce a technical distinction, resort to a single contrastive dichotomy. Kant particularly exhibits this trait. Saussure, by contrast, invariably gives us a two-dimensional structure, for example in his distinction between the ‘syntagmatic’ and the ‘paradigmatic’. The former concerns how a given word is situated in a sentence regarding its order-relations with the words on either side of it. More simply, we have syntactical relations, as illustrated by the difference between the well-formed ‘The cat sat on the mat’ and the nonsensical ‘The sat cat on the mat’. By contrast, paradigmatic relations concern only similar words (listed figuratively above and below the target word when the target sentence is written horizontally). This gives us semantics, as illustrated by the difference between ‘The cat sat on the mat’ and ‘The dog sat on the mat’ (both meaningful sentences, but with different meanings).

Now, most modern theories of language require a contrast between syntax and semantics, but the contrast tends to be one-dimensional, and the things to be contrasted are not themselves contrasts. With Saussure, contrasts go all the way down — which is as well since contrasts can be visualized and hence understood, and not much else can be. Hence, the label ‘structuralist’.

There are other contrasts as well, such as that between the ‘synchronic’ and the ‘diachronic’. The latter concerns diachronic semantics, that is, how meanings evolve over time. Humanity was not born with the language of Newton and Shakespeare already in place. That human languages evolved continuously from palaeo-grunts and gestures is obvious – admittedly, we do not know this directly since we cannot observe prehistory – but we (merely) know it for all that. I shall reserve the term ‘metaphor’ to mean just whatever it is that underpins semantic evolution. Abstract nouns usually have pre-metaphorical literal meanings attached (witness ‘abstract’ itself), and this ensures that anyone who knows how to abstract a biscuit from a tin has at least the potential to understand abstract nouns qua abstract nouns. This, of course, assumes that the creature in question has something like a ‘metaphor module’ in the brain — which chimpanzees, for example, do not have.

Now does Saussure have much to say about all this? Strangely enough, he is generally thought to be important precisely because he was only interested in how languages are structured today, as opposed to how they evolved. Indeed, structural linguistics came into fashion at the same moment when philology (now called ‘historical linguistics’) went out of fashion. Yet this official story is not quite true. Saussure’s tolerant view is simply that there are other interesting questions besides philological ones. This is a fact of fundamental intellectual importance — why? In a world where publication depends on friendly referees’ reports, i.e., reports which do not speak of ‘the’ problem in a way that assumes that there is only one problem, we need rather more Saussure and rather less Wittgenstein (whose tortured, single-minded genius made him a rather disagreeable companion). Anyway, this is what I mean by Toleration, our second revised Idea of Pure Practical Reason – so called because it is a theme that unites pure thought with actual practice.

5. The Mind-Body Problem

So where does Telepathy, our third and final Idea, come into the equation? Let us start with its predecessor, Immortality, and ask the question I always ask at the beginning of teaching the relevant class: if you believe in life after death, will you please raise your hand? I have never had a group in which nobody raised their hand. The follow-up question is, of course, to raise your hand if you think that it is at least possible that there is life after death. This may sound innocent, but it in fact leads to difficulties. First, it can be difficult to explain that if you answered ‘yes’ to the first question, then you must answer ‘yes’ to the second, since the pragmatic exclusion of ‘actual’ from ‘possible’ cannot be adequately explained without addressing the whole semantics/pragmatics distinction. Second, there is the technical distinction between epistemic and metaphysical possibility that has proved to be so influential in this context. Now, both these points can be dealt with given a little time and attention. But they tend to distract from the pure narrative, so I skate over them. Instead, I emphasize that life after the death of the body is still supposed to happen regardless of what happens to the body in question. Early Christian martyrs supposed that they would soon – very soon — sit on the right hand of God even if their bodies were about to undergo irreversibly destructive changes — even total annihilation in a nuclear explosion, if one insists on taking things to extremes.

Between a quarter and a third of the student body present remains convinced of this possibility of immortality each time I ask, and I have taught such courses almost continuously for well over 30 years. This suggests a certain statistical reliability. I hastily qualify this, however, by admitting that I have only ever taught in England and have never tested North American students on this issue. Anyway, the fact that we all have powerful dualist intuitions is thus established, and that is all that matters.

The next stage, of course, is to present the Rylean counter-narrative, and to consider the sheer lunacy that is the view that conscious thought and action require continuous poltergeist activity in the synapses of the brain. The remainder of the 10-week course deals with the fall-out of the clash between these two narratives. The problem of other minds becomes unavoidable, and with it the Idea of Telepathy starts to take over from that of Immortality, since Cartesianism says (unwisely) that Telepathy is actually known to be a priori impossible. This is because other minds are even more inaccessible than other bodies, and so the evil demon can introduce uncertainty at more places. And (we might add) if it is possible to read my mind, as they say, how come you cannot guess my PIN? The Rylean narrative, by contrast, insists that your pain is as much an undeniable Moorean fact as my own, and that ordinary knowledge about other minds belongs on the ‘directly’ side of our chief distinction. True, it takes a more discursive, specialist investigation to determine just where you lie on the Asperger’s/autism spectrum, but the fact remains that it is only that sort of neuroatypicality, and not the absence of a telepathy-port in the brain, that prevents you from directly knowing what I am thinking.

If the word ‘telepathy’ is given its normal meaning, are we telepathic? And if not, is there anything we can do to increase telepathic awareness? And if there is, is there a kind of ultimate destination here, a sort of higher level of consciousness towards which we should strive? And if there is, what would a society of such telepaths be like? Would it be like a single, supercharged individual consciousness (and therefore not a society at all – think of the ‘Borg’ in Star Trek)? Or if not, then what would the untraversable boundaries between these telepathic consciousnesses consist in? Or finally, is there a particular work in science fiction that best depicts the sort of telepathy that we should (and can) strive for?

Answers to most of these questions are a little hard to give immediately. However, the answer to the last question is affirmative. The work in question is one of John Wyndham’s less well-known novels, namely The Chrysalids (1955). It is set in the distant future when much of the world has been rendered uninhabitable by a nuclear war. A consequence is that there are many genetic mutations at work, mutations that are fanatically resisted by religious conservatives who quote the Bible and destroy the offending variants whenever possible. The narration takes place in what we now call coastal Labrador, though human mutants are forced out of ordinary society into what are called the Badlands (inland), where mutation is even more frenetic. The heroes are a small group of children who find that they can communicate directly with each other in a way that they feel would be dangerous to try to explain to most adults. The denouement is delightful, and the whole novel is essentially a coming-of-age story that is full of optimism and hope (unlike most other novels by Wyndham).

Before going into details, I see that some hands are raised and that answers are required to the question, ‘How do you actually know that this novel provides the most appropriate example of telepathy?’ Ditto to the follow-up question, ‘On which side of the directly/merely distinction does this knowledge belong?’ Well, at the risk of sounding rude again, I shall just remind you once more – and in the nicest possible way – that I am the author, whereas you are merely the readers, and that this ends complaints of this kind once and for all.

So, to return to The Chrysalids:  how may I introduce into your mind knowledge of its plot and overall message of hope? If you were already accomplished telepaths, this would be easy; but you are not, and it isn’t. One way of ensuring that the reader gains a good knowledge of a work that they cannot purchase for themselves is by extensive quotation. However, as the Academic Officer of my department, with the duty of investigating and punishing plagiarism by students, I am understandably reluctant to go down that road. Good referencing will not enable you to escape a serious charge if the quoted text consists of large, continuous chunks.

There is another problem: what logicians call ‘quotational contexts’ are seriously problematic in that they exhibit what Quine calls ‘referential opacity’. This is relevant because Quine rejects psychological talk altogether, and precisely because operators such as ‘Ralph believes that …’ yield referentially opaque contexts.
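By way of a gloss that is mine rather than the author’s, the point can be set out with Quine’s stock ‘Giorgione’ illustration of referential opacity, here written as a small LaTeX fragment; the two names are co-referential, yet swapping one for the other inside the opaque construction turns a truth into a falsehood:

\[
\begin{aligned}
&\text{Giorgione} = \text{Barbarelli} &&\text{(a true identity)}\\
&\text{`Giorgione was so called because of his size'} &&\text{(true)}\\
&\text{`Barbarelli was so called because of his size'} &&\text{(false)}
\end{aligned}
\]

Substituting co-referential names inside ‘was so called’ fails to preserve truth value, which is just the sort of apparent breach of Leibniz’s Law that the next paragraph describes.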

An opaque context seals in semantic content in such a way that outside operations, such as ‘quantifying in’, do not work. This ensures that Leibniz’s Law can be broken (or can be made to appear so). This phenomenon of referential opacity is among the most abstruse and important to be found in the philosophy of language (see Cappelen et al. 2020 for definitive proof of this). One slightly neglected aspect of it is that this ‘sealing in’ is not absolute even if the context is quotational. Consider, for example, words that are offensive in themselves. Should my submitted article consist of extended repetition of what we are now pleased to call the ‘N word’, then any sane editor would reject it peremptorily. This would remain so even if the offending words were all sealed in quotation marks and were therefore never used or endorsed. It is the word itself that is offensive, and that is that.

That is an exceptional case, you might think, but there are subtler examples. Many of us were first taught formal logic via a well-known textbook, Beginning Logic, by E.J. Lemmon. Admirable in many ways, not least in that it uses natural deduction techniques rather than the technically easier proof-trees that are increasingly fashionable, it nevertheless fell afoul of the Thought Police by containing highly sexist examples of sentences to be translated. A comparatively inoffensive example was the sentence ‘Every woman owns a featherbrained dog’, but it gets worse. Now, are we to insist that Professor Lemmon (unfortunately now deceased) make an abject apology for his sexism and withdraw the book immediately (or at the very least, make the necessary changes in a new edition) — on pain of public humiliation and the withdrawal of every honorary award ever bestowed on him? Well, yes, that is pretty much what did take place, as it happened, and a teaching companion with different examples is now in print (Schumm & Lemmon 1979). Is this not a total outrage, shrieks the Alt-Right, since the poor man never said in the first place that every woman owns a featherbrained dog, so what has he to apologize for? Now, the Woke-Left has a lot to answer for in contemporary politics, to be sure, but it is a little unfair to go to town on this sort of example. The simple fact is that even the most insulating quotational context admits some semantic leakage, and it is possible to suggest offensive things even if you do not (strictly speaking) say them.

If this does not convince, then consider the most straightforwardly crude word in the English language, namely the four-letter word beginning with ‘F’. Now, repair to your local alehouse and relate, in a medium-loud voice, all the interesting things you have to say about this word – always keeping the quotation-mark gestures in full view, of course. We all know what will happen. First, the proffered swear-box, and then finally the polite but firm physical exiting that only experienced landlords know how to do. Now, my question is this. Will your demise be delayed if only you could convince the landlord that the offending words were merely being mentioned, not used? You decide, as they say on Fox News.

6. Philosophical aesthetics

The chief result here is the Intentionalist Fallacy, which is the conjunction of two related claims. The first is that we should never appeal to the intentions of the artist in order to evaluate aesthetically the work of art itself. The second is that it is, in any event, impossible to gauge the intentions of the artist simply by looking at her art. Since works of literature count as works of art in this context, we can see that the role of the author becomes significant. This takes on an interesting twist when we consider cases where the author claims some of the ownership of the fictional realm she created for her stories. A notable example of this was J.K. Rowling’s recent claim that her character Albus Dumbledore was gay. Now, she might claim that she may do this (my world, my decision, and so on), even if AD’s sexuality was never explicitly discussed in the text. Yet, I recall vividly a student in a logic class of mine saying that Rowling really needed to be told about the ‘death of the author’. Once the work is released into the world of the reading public, all such authorial privileges become null and void, my student went on to insist. I suppose there must be limits to these strictures, and that a critic who claimed that AD was really the Hogwarts caretaker and that Mr Filch was the real headmaster might not unreasonably be regarded as just ignorant, and not be thought to have produced a startlingly original interpretation. But pedantry aside, we may agree that, provided that our text may be regarded as imaginative literature, authorial intentions should not really enter the picture.

But when does prose turn into poetry, or an article in a research journal into something fundamentally different? This question cannot be easy to answer, even though it may need to be, if only because we clearly have a sliding scale rather than a strict binary divide. Still, life is short, and yet the addition of just one little poem into a lengthy surrounding text does not one big poem make. Apart from anything else, the little poem may be put into a quotational context, or something analogous, as the following illustrates:

Kate’s Poem [to get the full effect, mentally rotate the first five lines by 30 degrees anticlockwise, and move the picture a little to the right]

Warbling, wuthering, bubbling me

Looks like a bird — I just want to be free

Now I’m an Aerial — cannot you see

Oh, don’t be so disgusted at She

Now like a child, and so into Cloudbusting

But I just feel that it must need adjusting

So, the addition of Kate’s Poem will not itself affect the literary status of this paper to any significant degree. That much is very clear — and logically inevitable. We now need simply to work out the literary status of this (whole) paper, so amended. To do this, it is helpful to look at another example of a problematic text, namely 1984.

Perhaps it is just pure, literal truth, for we do not expect the author of ‘Politics and the English Language’ (Orwell 1946) to have much time for literary subterfuge. Only the language is not literal. The opening sentence, which includes the famous sub-sentence, ‘The clocks were striking thirteen’, apparently proves that Eric Blair was an appalling liar, and this is even before we comment on his going around literary circles under a false name.

Okay, we say patiently, we mean that 1984 is pretty much what it looks as though it is: an awful warning about how things might turn out if steps are not taken to prevent it. I am happy with that, but it is worth noting that, for many years, the dominant interpretation was that the text was not a dystopia at all, but a satire of life in 1948, the year in which it was written. Others, yet more adventurous, have claimed that it is first and foremost a love story (between Winston and Julia) set against a terrible backdrop. If the genre of romantic literature takes Jane Austen as its paradigm, this might work. If, however, it is Barbara Cartland who dictates the paradigm, then matters are getting a little stretched. Still, one may react impatiently here since, as everyone knows, works of art always have multiple interpretations — the more such interpretations, the better the work of art.

We are evidently getting nowhere fast, so let us start from the other end and look at the authorial intentions of those who submit orthodox articles to respectable journals. Surely, there is nothing sinister to be found here, as academics just aim at truth, do they not — it is in their very natures? To which I say yes, but …

… if I submit a piece to journal A rather than journal B, it may be for several complex reasons in which my own promotion and/or basic employment status needs to be uppermost in my mind. And why not? Academics must put food on the table for their families just like anyone else, and the needs of REF 2021, as British academics call it, may take priority over advancing the cause of pure truth. At least, this is what you might say.

But … for you to say even this little is extraordinarily difficult, and there is no better way that I know of to reward such courage than to add even more of the same, this time from my own store. I thus freely confess that my choice of style and destination of a potential article for the REF did not even depend on seeking the approval of journal editors – who are probably ignorant of my identity anyway thanks to the quadruple-blind anonymization processes. No, it is not the judgement of top academics – who get appointed to Editorial Boards of prestigious journals – but that of the other sort of hack, e.g., those whose rather less exciting job it is to manage the author’s department’s REF 2021 submission, that dominates the research environment nowadays. The former class of academic knows not who you are (and does not care), whereas the latter know all too well who you are, and are itching to confront you with the latest retirement package. To cut a long story short, the wise academic would do best to placate these latter individuals — lest they end their own careers by looking just like them.

Even poets know when danger is present, so I shall return immediately to the directly/merely distinction. A question left hanging when we discussed our a priori knowledge of the internal anatomy and physiology of our hands concerns our original topic, namely the infamous tree in the quad. Do we know that it has been around for many years without having to saw into its interior to count its tree-rings? If so, do we know this directly, or do we merely know it? Berkeley had no good answer to such questions, and neither do most contemporary graduates from the Emerald Isle. They equally know little about the basic problem of what it is to have to go on, and on, and on, and on, and on, and on — in the same way — until the tea break. However, a faux-seminar discussion of Mrs. Doyle on Kripkenstein is beyond the scope even of this paper. Some forms of satire, particularly religiously offensive satire, should be strictly off-limits to all serious scholars.

7. Theory and Practice

Now safely ensconced on the respectable side of the Irish Sea, we can look briefly at some secondary literature. To keep matters simple, this is essentially a story of two sisters, Elizabeth and Miranda Fricker, who between them have been highly influential. (Their works are cited in the bibliography, so it is not necessary to say much more about them.)

What does need talking about, however, is the general theme of this article, namely The Chrysalids and the accompanying mood music that it is the author’s privilege to impose on his audience. I thought of The Pentangle’s Cruel Sister, a folk album that was influential when I started the BPhil degree at Oxford, conscious that my new peers may well outshine me in my later career even if they were a year or so younger than me. However, these memories are full of the Freudian impulses of youth and are mostly still too intimate and painful to share. There is also the risk that the ‘death of the author’ will cease to be just a literary metaphor and will instead go on at least one person’s wish list if I continue in this vein. I shall thus choose a more classical theme.

Recall a live orchestral concert: before any actual recording takes place, there is the vital but critically neglected sound of the collective tuning up. This is genuinely anarchic, with each instrument playing whatever phrase, in whatever key, and in whatever tempo it likes. Then, finally, there is The Note. It is the clear, piercing sound of the oboe, playing a single sustained note at precisely 440Hz, a note that calls for order without any obvious authority to back it up. I often think of this sound as it gets louder and more authoritative (though before the rat-a-tat of the conductor’s baton which signals an altogether more conventional kind of authority). It is at once intensely familiar, and yet also highly alien, just like a Chrysalid.

Just one further detail. Miranda Fricker’s chief contribution to epistemology – the concept of epistemic injustice – is essentially normative, and difficulties emerge when we ask how normative facts can arise out of pre-normative ones. The gap is hard to traverse, as is the ancient problem of explaining why knowledge of the Form of the Good automatically leads to love of the Good, and hence movement in the direction of the Good. The matter is delicate, and largely concerns how an emotivist conception of ethics ties in with the fact that ethical propositions are open to reasoned debate. This is known as the ‘Frege-Geach problem’ – which is, roughly, to explain why the following normative argument works:

Telling lies is wrong [emotive]

If telling lies is wrong, then getting your little brother to tell lies is wrong [reasoned]

(Therefore) Getting your little brother to tell lies is wrong [emotive]
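Schematically, and this is my own gloss rather than anything in the original, the argument is a straightforward instance of modus ponens; the emotivist’s difficulty is to say how the first premise can have the same meaning when asserted on its own (where it allegedly merely vents a feeling) and when it sits unasserted inside the antecedent of the second premise:

\[
W(l), \qquad W(l) \rightarrow W(b), \qquad \therefore\; W(b)
\]

where $W(x)$ abbreviates ‘$x$ is wrong’, $l$ ‘telling lies’, and $b$ ‘getting your little brother to tell lies’.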

Its relevance to the matter in hand may seem obscure, but is rendered immediately obvious when we make a small adjustment:

Teasing Lizzie is wrong [emotive]

If teasing Lizzie is wrong, then teasing her little sister is wrong [reasoned]

(Therefore) Teasing her little sister is wrong [emotive]

Personally, I prefer the easier version where we substitute ‘fun’ for ‘wrong’, since all problems vanish — and emotion and reason manage to unite in beautiful harmony – just as Plato would have wished.

But enough is enough and, when combined with the message given at the end of §3, these considerations now yield a total philosophy of life — a small bonus to their contribution to 21st Century epistemology.

8. Conclusion

There is little more to do, except to unwind a little and reflect on a long career as a professional philosopher. It has not always been happy, and I retain some old-fashioned ideas that it is perhaps not entirely to my advantage to retain: a preference for the public-school humour of Private Eye over Irish sitcom; an admiration for classical music (where one need not pretend to be chained to an adolescent madwoman in order to receive pleasure); and a proper sense that young women who fail to show adequate respect for their male elders are not always feminist heroines.

Resentments? Just one. Cast your mind back a few years to an Institute of Advanced Studies conference titled ‘Aiming at Truth’, where the star of the show was a young mother from UCL who was clearly far too pretty to be entirely respectable, and who had the nerve to let her own infant daughter deliver the punchline, ‘Hello, I’m an elephant!’, to a long-forgotten joke. The crowd loved it, so it is fortunate that very few actually remember this ghastly incident.

We may console ourselves with the thought that Little-Miss-Perfect’s meteoric rise to fame has probably gone the way of all meteors – downwards until it hits the Earth. I promise I shall reveal all should any further information emerge, but … (cont’d, p. 94 – Ed.)

Bibliography

Adler, Jonathan (2017), ‘Epistemological Problems of Testimony’, The Stanford Encyclopedia of Philosophy (Winter 2017 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2017/entries/testimony-episprob/>.

Cappelen, Herman, Ernest Lepore, and Matthew McKeever (2020), ‘Quotation’, The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2020/entries/quotation/>.

Eco, Umberto (1979). The Role of the Reader. Bloomington: Indiana University Press.

Fricker, Elizabeth (1987), ‘The Epistemology of Testimony’, Proceedings of the Aristotelian Society, Supplementary Volume 61: 57–83.

––– (1995). ‘Critical Notice: Telling and Trusting: Reductionism and Anti-Reductionism in the Epistemology of Testimony’, Mind, 104: 393–411.

Fricker, Miranda (2007). Epistemic Injustice: Power and the Ethics of Knowing, Oxford: Oxford University Press.

Harris, Roy (1988). Language, Saussure and Wittgenstein. London: Routledge.

Hills, David, (2017). ‘Metaphor’, The Stanford Encyclopedia of Philosophy (Fall 2017 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/fall2017/entries/metaphor/>.

Orwell, George, (1946). ‘Politics and the English Language’. Horizon (76). (Pdf file at http://www.public-library.uk/ebooks/72/30.pdf)

––– (1949). Nineteen Eighty-Four. London: Secker & Warburg.

Quine, W.V. (1971), ‘Quantifiers and Propositional Attitudes’, in Leonard Linsky (ed.), Reference and Modality. Oxford: Oxford University Press: 101–111.

Reid, Thomas (1765), An Inquiry into the Human Mind and the Principles of Common Sense. Web version at http://www.archive.org/details/worksofthomasrei01reid (Scroll forward to page 194).

Saussure, Ferdinand de, (1998). Course in General Linguistics. London: Open Court.

Schumm, George F., and Lemmon, E.J. (1979), A Teaching Companion to Lemmon’s Beginning Logic. London: Hackett.


[1] See, for example, Elizabeth Fricker 1987, 1995, and Miranda Fricker 2007. A fuller survey (including an up-to-date bibliography) may be found in Adler 2017.

[2] See Hills 2017 for a full survey of the notion of metaphor as used in philosophy.
