Philosophy 1850

Readings

As well as a list of readings and such, this page contains links to the various papers we shall be reading. All of the files are available in the familiar PDF format. I trust that you are familiar with it and have a way of reading PDFs. Some files are also available as DjVu files, a format that was specifically designed for scanned text and that typically produces much smaller files.


Class Schedule

Date | Readings, Etc.
4 September

Introductory Meeting

Literal Meaning
6 September

H.P. Grice, "Meaning", Philosophical Review 66 (1957), pp. 377-88 (PhilPapers, PDF , DjVu, JSTOR)

Reading Notes


Grice has two main goals in this paper. The first is to distinguish 'natural' from 'non-natural' meaning. Ultimately, Grice's interest is in non-natural meaning, which is supposed to be the kind of meaning that linguistic utterances have (and so the kind of meaning that language has). The second goal is to offer an account of 'non-natural' meaning, or at least of one important variety of it.

So one main reason we are reading this paper is to try to get a little clearer about what notion of meaning concerns the philosophy of language. However, when Grice talks about 'meaning' here, even 'non-natural' meaning, he does not have in mind 'meaning' in the sense in which words have meaning. See below for more on this.

There have been attempts to reduce the notion of non-natural meaning to that of natural meaning. The motivation for this is that natural meaning seems, in some cases, to be a natural phenomenon. Consider, e.g., the case of tree rings: that the tree has 36 rings means that it is 36 years old. So some philosophers (such as Fred Dretske and Robert Stalnaker) have wanted to start with this notion of 'information' and use it to try to explain non-natural meaning, which is more puzzling. We will not discuss such issues, however, as they are more in philosophy of mind.

On p. 387, Grice mentions five 'tests' that can be used to separate natural from non-natural meaning. What exactly are the five tests? How good are they? Are some better than others? (One of the tests just seems wrong to me.)


One of Grice's tests is that, where natural meaning is concerned, "x means that p" implies that p is true. A verb with this property is said to be 'factive'. The usual example is "know": If Alex knows that p, then p must be true, so "know" is factive. Grice is thus claiming that "mean" in the natural sense is factive, whereas "meanNN" is not factive.
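
Grice's own pair of examples illustrates the contrast nicely:

  • Natural: "Those spots mean measles". If the patient does not in fact have measles, this claim is false.
  • Non-natural: "Those three rings on the bell (of the bus) mean that the bus is full". This can be true even if the bus is not full, say because the conductor rang the bell by mistake.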

One good way to check your understanding of Grice's distinction would be to give some of your own examples of the two kinds of meaning. Do Grice's 'tests' seem to characterize them correctly?

The use of 'mean' that Grice notes at the bottom of p. 378—where "A means to do X" means something like: A intends to do X—is arguably not a use of the same word. Other languages have different words for these two 'senses' of mean, which suggests that this is a genuine lexical ambiguity.

Grice very much seems in this paper to be trying to 'analyze' (roughly, define) certain ordinary uses of the verb "mean". That, presumably, is what justifies the close concern that some of his tests display for the niceties of ordinary usage. But is that really what Grice's focus is (or what it should be)? What else might one read him as trying to do?

Grice's second goal is to offer an account of non-natural meaning. He first considers and rejects a 'causal' account due to Charles Leslie Stevenson. Our main attention will be elsewhere, but you should nonetheless read this part of the paper, as it helps to motivate Grice's own view, which in some ways builds on Stevenson's account.

Note that Grice's target here is not what words or sentences mean. It is, rather, what someone might mean by an utterance of certain words, which might diverge from what those words mean in their own right. Irony and metaphor would be obvious examples. (We'll look more deeply into this matter when we read Grice's paper "Logic and Conversation", and then further when we get to the unit on Contextualism.) Note also that Grice means his analysis to apply to gestures and other non-linguistic forms of communication, as well as to the use of language. So Grice's goal is to explain what it is for a particular 'utterance' (in a very general sense) to mean something, in the non-natural sense. He initially restricts himself, for convenience, to 'informative' utterances (as opposed, e.g., to questions).

What exactly does Grice have in mind when he speaks of "the difference between 'deliberately and openly letting someone know' and 'telling'" (p. 382)?

Grice's account may be formulated as follows: S meant by the 'utterance' U, made to an audience A, that p iff S made this utterance with the intention that A should come to believe that p as a result of A's recognizing that S has that very intention. There are three main aspects to this account:

  1. It incorporates Stevenson's idea that meaning something is, in some sense, connected with trying to get someone to believe something. But, as Grice notes, there are different ways in which that can be done, so there has to be more to it than that. (Here again, a good way to check yourself is to try to come up with your own example that serves the same purpose as Grice's.)
  2. It incorporates an 'overtness' condition: that one's intention to get someone to believe something should be an intention that one intends the audience to recognize. But that is not enough, once again, to rule out certain cases that Grice thinks should not be included. (Try to formulate your own example again.)
  3. It incorporates a condition that (1) and (2) should be connected: The audience's recognition of the speaker's intention—that they should come to believe that p—is supposed to play a role in getting the audience to believe that p. Grice motivates this condition with a single example involving a photograph. Is that really enough?
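
Putting the three aspects together, the analysis is often displayed as follows (a standard reconstruction, not Grice's exact wording): S meantNN that p by uttering U iff S uttered U intending

  (i) that the audience A come to believe that p;
  (ii) that A recognize intention (i); and
  (iii) that A's recognition of intention (i) function as at least part of A's reason for coming to believe that p.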

As Grice notes, the 'self-referential intention' with which he is supposing people speak might seem puzzling. But this is not usually taken to be a problem. Self-reference by itself does not lead to paradox. (Consider "All complete English sentences have a subject and a verb". Among the English sentences in question is that very sentence.)

Can you think of any cases that pose a problem for Grice's account? (There are plenty!)

How might one try to extend Grice's account to questions? What about other sorts of 'speech acts'? (Grice mentions imperatives on p. 384, but does not offer a detailed account of them, as he has for indicatives.)

Grice briefly mentions on p. 386 that he wants to restrict the relevant intention to the 'primary' intention with which one speaks. But he only describes in general terms why and does not give an example. Can you? (Note how much easier it would be to understand this paragraph if Grice had given an example. Let that be a lesson.)

Grice finishes the paper by making some remarks about the notion of intention that is deployed in his account. What is the purpose of those remarks?

Does Grice's account of the non-natural meaning of 'utterances' presuppose some prior account of the 'meaning' of mental states, such as beliefs and intentions? Does his account presume that we're able to have beliefs and intentions with particular contents before we understand language with that same content? If so, are these problems for his view?

9 September

Sir Peter Strawson, Introduction to Logical Theory (London: Methuen, 1952), section 3.2 (PDF, DjVu)

Only the material from Chapter 3 is required, but I encourage you also to read the material from Chapter 7, which raises similar sorts of questions about quantification.

There is a huge literature on conditionals. See the articles from the Stanford Encyclopedia of Philosophy for surveys.

Reading Notes

Related Readings

  • W. V. O. Quine, "Mr Strawson on Logical Theory", Mind 62 (1953), pp. 433-51 (PhilPapers)
    ➢ Well worth reading, this is a reply on behalf of the 'formal logicians'. In effect, Quine concedes the divergences Strawson describes, but claims that they are of no interest to formal logicians. Quine may be right about that (or not), but semanticists cannot simply ignore these divergences, if such they are.
  • Dorothy Edgington, "Indicative Conditionals", Stanford Encyclopedia of Philosophy (2020) (SEP)
  • Will Starr, "Counterfactuals", Stanford Encyclopedia of Philosophy (2019) (SEP)
  • Paul Egré and Hans Rott, "The Logic of Conditionals", Stanford Encyclopedia of Philosophy (2021) (SEP)
  • Edwin Mares, "Relevance Logic", Stanford Encyclopedia of Philosophy (2020) (SEP)
    ➢ A survey of relevance logic, which is a form of non-classical logic in which the 'paradoxes of material implication' are not valid, and in which the rule that allows us to infer "p ∨ q" from p is not valid, either.
  • David I. Beaver, Bart Geurts, and Kristie Denlinger, "Presupposition", Stanford Encyclopedia of Philosophy (2021) (SEP)
    ➢ A survey of work on presupposition.

Strawson refers in this discussion to a number of 'logical laws' that he had listed earlier in the book. These are:

(6) ~(p . ~p)

(7) p ∨ ~p

(9) ~~p ≡ p

(11) p . q ≡ q . p

(19) ~p ⊃ (p ⊃ q)

(20) ~p ⊃ (p ⊃ ~q)

(21) q ⊃ (p ⊃ q)

(22) q ⊃ (~p ⊃ q)

(23) ~p ≡ [(p ⊃ q) . (p ⊃ ~q)]

(19)-(23) are all forms of the so-called 'paradoxes of material implication' (and there are some redundancies here, too). Note that Strawson uses "." for conjunction and "⊃" for the material conditional.
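
If you want to verify for yourself that (19)-(23) are tautologies of classical truth-functional logic, it is a mechanical matter of checking truth tables. Here is a minimal Python sketch; the encoding of the connectives is mine, for illustration:

    from itertools import product

    # Truth-functional connectives, in Strawson's notation:
    # "." is conjunction, "⊃" the material conditional, "≡" the biconditional.
    NOT = lambda p: not p
    AND = lambda p, q: p and q
    IMP = lambda p, q: (not p) or q
    IFF = lambda p, q: p == q

    laws = {
        "(19) ~p ⊃ (p ⊃ q)": lambda p, q: IMP(NOT(p), IMP(p, q)),
        "(20) ~p ⊃ (p ⊃ ~q)": lambda p, q: IMP(NOT(p), IMP(p, NOT(q))),
        "(21) q ⊃ (p ⊃ q)": lambda p, q: IMP(q, IMP(p, q)),
        "(22) q ⊃ (~p ⊃ q)": lambda p, q: IMP(q, IMP(NOT(p), q)),
        "(23) ~p ≡ [(p ⊃ q) . (p ⊃ ~q)]":
            lambda p, q: IFF(NOT(p), AND(IMP(p, q), IMP(p, NOT(q)))),
    }

    for name, law in laws.items():
        # A formula is a tautology iff it is true on every assignment.
        assert all(law(p, q) for p, q in product([True, False], repeat=2))
        print(name, "is a tautology")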

The main question Strawson discusses here is: What is the relationship between the truth-functional connectives studied in logic and the corresponding words of natural language? With the possible exception of negation, Strawson argues that none of the usual identifications are really correct. For example, "and", according to Strawson, does not mean truth-functional conjunction.

As said, Strawson is prepared to accept the claim that "not" means truth-functional negation, though of course there are complications involving how negation is expressed in natural language.

The remarks at the end of §7, about truth and falsity, result from Strawson's view that some sentences, though perfectly meaningful, do not always have truth-values. An example would be "Alex has stopped smoking crack", if Alex has never smoked crack. In this case, the sentence 'presupposes' that Alex used to smoke crack. For Strawson's reasons for this view, see his "On Referring", Mind 59 (1950), pp. 320-44 (PhilPapers). See the SEP article listed as optional for a survey of work on presupposition.

Strawson raises two sorts of objections to the identification of truth-functional conjunction and "and". The first is that "and", in English, has other uses than as a sentential connective. The most interesting of these involves cases such as "Alex and Drew got married", where that may not be equivalent (in any sense) to "Alex got married and Drew got married". But, as Strawson notes, the identification can reasonably be restricted to the case of sentential "and". In that case, however, Strawson observes that

(a) They got married and they had a child.

is "by no means logically equivalent" to

(b) They had a child and they got married.

This seems to show that "and", in English, can have some sort of temporal meaning.

What is Strawson's argument that (a) and (b) are not logically equivalent? What assumptions does that argument make?

Can you think of similar examples where the 'additional content' is not temporal but some other kind of relation between the clauses?

Strawson notes, somewhat obliquely, that the temporal effect of "and" in the example above seems to arise even when we are dealing with separate sentences. I.e., these two utterances:

(c) They got married. They had a child.

(d) They had a child. They got married.

seem to suggest different orders of events. What is the significance of this point for Strawson's argument?

Strawson raises lots of objections to the identification of "⊃" with "if...then...". Unfortunately, however, he plays fast and loose with the distinction between indicative and subjunctive conditionals. A standard example of the contrast involves:

(e) If Oswald did not shoot Kennedy, then someone else did.

(f) If Oswald had not shot Kennedy, then someone else would have.

Note that (e) seems clearly true—someone shot Kennedy—but (f) is much less clear. Conditionals like (f) are known as "counterfactuals", because the antecedent is contrary to (known) fact. No one thinks, or has ever thought (so far as I am aware), that the conditional in (f) was material.

See the SEP article on counterfactuals, listed as optional, for a survey of work on them.

If we set aside Strawson's objections concerning subjunctive conditionals, then there seem to be two sorts of worries left. First, Strawson claims (p. 83) that we 'use' a conditional when (i) we do not know whether the antecedent or consequent is true but (ii) believe "that a step in reasoning from [the antecedent] to [the consequent] would be a sound or reasonable step...". (See also the restatement of this point on p. 88.) This implies that it would be a misuse to assert a conditional whose antecedent one knew to be false or when there was no connection between the antecedent and consequent.

Do Strawson's two conditions seem accurately to characterize the use of the English conditional? If so, does it follow that the English conditional is not material?

Strawson's second objection is that the "joint assertion in the same context" of "If it rains, the match will be canceled" and "If it rains, the match will not be canceled" is "self-contradictory" (p. 85), whereas "p⊃q" and "p⊃~q" are consistent (and imply ~p). Is that right? If so, does it follow that the English conditional is not material?

Suppose that ~p ∨ q. I aim to show that, if p, then q. So suppose p. But of course p and ~p ∨ q together imply q. So q. Hence, the inference from p to q is reasonable. That is: If p, then q. Since Strawson concedes that "if p, then q" implies "~p ∨ q", it would seem to follow that "if p, then q" is equivalent to "~p ∨ q". (This is sometimes known as the 'direct argument' for the conclusion that the English indicative conditional is material.) What should Strawson say about this argument? See p. 92 for some ideas.
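
Set out as a derivation, the direct argument looks like this:

  1. ~p ∨ q         (premise)
  2. p              (supposition, for conditional proof)
  3. q              (from 1 and 2, by disjunctive syllogism)
  4. If p, then q   (from 2-3, by conditional proof)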

Perhaps most surprisingly, Strawson rejects the identification between "∨" and "or". He gives two very different sorts of reasons, both of which involve the rejection of the inference from p to "p or q". (There are logics, known as 'relevance' logics, in which this inference is not valid. See the Stanford Encyclopedia article listed as optional for a survey.) What are Strawson's two reasons? How do they differ? What argument does Strawson give (or what argument might he give) for the claim that "p ∨ q" can be true even if "p or q" is not true?

11 September

H.P. Grice, "Logic and Conversation", in Studies in the Ways of Words (Cambridge MA: Harvard University Press, 1989), pp. 22-40 (PDF, DjVu)

You may well notice a sort of casual sexism at certain points in this paper. Unfortunately, that was not uncommon when this lecture was originally given (1966-67)—and for some time after that, too.

Reading Notes

Related Readings

  • H.P. Grice, "Utterer's Meaning and Intentions", Philosohpical Review 78 (1969), pp. 147-77 (PhilPapers)
    ➢ Grice had a larger philosophical program to reduce the notion of linguistic meaning to intentions and other mental states. This paper articulates some of that program. For the rest of it, see the other essays in Studies in the Ways of Words.
  • Wayne Davis, "Implicature", Stanford Encyclopedia of Philosophy (2024) (SEP)
    ➢ A survey of the enormous literature on implicature.
  • Richard Kimberly Heck, "Reason and Language", in C. Macdonald and G. Macdonald, eds., McDowell and His Critics (Oxford: Blackwell Publishing, 2006), pp. 22-45 (PhilPapers)
    ➢ Section 1 of this paper develops an account of implicature.
  • Kent Bach, "The Myth of Conventional Implicature", Linguistics and Philosophy 22 (1999), pp. 327-366 (PhilPapers)
  • Christopher Potts, "Into the Conventional-Implicature Dimension", Philosophy Compass 2 (2007), pp. 665–679 (PhilPapers)

Grice's central goal in this paper is to distinguish what is "strictly and literally said" in an utterance from other things that might be "meant" or "communicated" in the making of that utterance and, in particular, to articulate a notion of what is implicated by an utterance that is not part of what is said but is part of what is 'meant'. (The term "implicate" is a technical one, but it is meant to echo one informal use of 'implies'.)

As is made clear at the outset, Grice's target here is, in part, Strawson. But he does not, in this lecture, actually formulate his reply to Strawson. His target is really more general. There was a common sort of claim at this time that Grice also wants to oppose. For example, people would often say such things as that it is only appropriate to say "X knows that p" if there is some doubt whether p, so "X knows that p" is only true if there is some such doubt (hence, the usual sorts of analyses of knowledge must be wrong); it is correct to say "X performed action A voluntarily" only if there is some question whether X's action was coerced, so the sentence is only true if there is some such question (which was alleged to have various consequences for discussions of free will); and so forth. In a larger sense, then, Grice is interested in the question when facts about 'correct usage' tell us something about the literal meaning of a word and when they do not.

The basic claim Grice wants to make is this: It is common, even normal, for utterances to suggest or convey or communicate things beyond, in addition to, or even at odds with the literal meaning of the words used. Hence, the fact that "They got married, and they had a baby" will very often suggest or convey a temporal order of these events cannot by itself show that the literal meaning of the sentence is not just: these two things both happened.

Grice begins by distinguishing what is "said" from what is, in Grice's technical sense of the term, 'implicated'. He introduces this distinction with an example (always a good idea). Its chief advantage is that the 'implicature' (e.g., that C's colleagues are treacherous) is clearly no part of what B's words mean. Grice then makes some very general remarks about the nature of this distinction. What is said is supposed to depend upon the "conventional meaning of the words...uttered" plus certain other factors, such as the resolution of ambiguity and 'context-dependence' (e.g., who was referred to by "he"). What is implicated depends upon a good deal else.

Grice distinguishes two types of implicatures, which he calls "conventional" and "conversational". Conventional implicature will not be our focus, but it is worth making sure you understand the difference between them. Make sure you can give a couple examples of your own, of each type.

It is commonly said that the difference between "and" and "but" is a conventional implicature. So, on this view, "A and B" and "A but B" will always have the same truth-value, though they will often differ in what they 'implicate'. So, "Joan is a woman, and she is good at math" and "Joan is a woman, but she is good at math" seem very different, but the latter is (on this view) still true. What might be the advantages and disadvantages of such a view? (Other cases to consider: It's gorgeous, but it's expensive; she's a great soccer player, but not such a great violinist; I have visited Paris but have not visited Rome.)

There is some controversy whether there is any such thing as conventional implicature. See the optional papers by Bach and Potts.

Grice proceeds to sketch an account of how conversational implicature works. It has two parts: The first is a very general principle that Grice dubs the "Cooperative Principle":

Our talk exchanges do not normally consist of a succession of disconnected remarks, and would not be rational if they did. They are characteristically, to some degree at least, cooperative efforts; and each participant recognizes in them, to some extent, a common purpose or set of purposes, or at least a mutually accepted direction. This purpose or direction may be fixed from the start..., or it may evolve during the exchange; it may be fairly definite, or it may be so indefinite as to leave very considerable latitude to the participants.... But at each stage, some possible conversational moves would be excluded as conversationally unsuitable. We might then formulate a rough general principle which participants will be expected (ceteris paribus) to observe, namely: Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged. (p. 26)

Grice then describes (on pp. 26-7) four "categories" that are, in effect, special cases of the Cooperative Principle, with each category containing various "maxims". These tend to apply most obviously to conversations in which information is being exchanged, though some of them apply to other sorts of conversations, as well. It is important to see that the maxims are not meant to exhaust the possible bases for implicatures. That is, Grice is open to the idea that there are implicatures that do not involve any of these maxims.

NOTE: People nowadays tend to speak of, e.g., the "maxim of Quality". Grice, by contrast, tends to speak of various 'maxims' that fall under the category of Quality.

Grice spends a bit of time on pp. 29-30 exploring the question what might be the basis of the Cooperative Principle: Is it merely an empirical fact about how we happen to talk? Or is it somehow inherent in the nature of conversation? Can you sketch an argument for one of the maxims of the form: If one wants to exchange information in an efficient way, then one should obey that maxim (i.e., that it is rational to do so, and would be irrational not to do so)? It would also be worth checking your understanding by giving your own examples of implicatures that "flout" each of the four (categories of) maxims.

There is, in a way, an even deeper underlying principle, namely, that speaking is a form of acting, so it is something that is (typically) done for reasons. ("[O]ne of my avowed aims is to see talking as a special case or variety of purposive, indeed rational, behavior..." (p. 28).) Grice's idea is that people's reasons for saying what they do can sometimes be revealing of other of their attitudes and that, when this is so, it can lead to their meaning things they do not (strictly and literally) say.

Implicatures are supposed to be generated when speakers "flout" conversational maxims: that is, when they appear not to obey a maxim, do so openly, and yet are not simply refusing to cooperate. The question then arises: "How can his saying what he did say be reconciled with the supposition that he is observing the overall Cooperative Principle" (p. 30)? Answering this question will reveal the implicature.

Grice will proceed to give a number of examples of this phenomenon. First, though, he gives a general definition:

A man who, by...saying...that p has implicated that q, may be said to have conversationally implicated that q, provided that (1) he is to be presumed to be observing the conversational maxims, or at least the Cooperative Principle; (2) the supposition that he is aware that, or thinks that, q is required in order to make his saying or making as if to say p (or doing so in those terms) consistent with this presumption; and (3) the speaker thinks (and would expect the hearer to think that the speaker thinks) that it is within the competence of the hearer to work out, or grasp intuitively, that the supposition mentioned in (2) is required. (pp. 30-1)

The key point here is that "a conversational implicature must be capable of being worked out" by the audience (p. 31, emphasis added), where that amounts to formulating a certain kind of argument, which Grice goes on to illustrate. In working out the implicature, the audience will draw upon what they know about "the conventional meaning of the words used", which means that literal meaning has a kind of primacy.
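
The general pattern of the working-out argument, as Grice sketches it on p. 31, runs roughly as follows: S has said that p; there is no reason to suppose that S is not observing the maxims, or at least the Cooperative Principle; S could not be doing this unless S thought that q; S knows (and knows that I know that S knows) that I can see that the supposition that S thinks that q is required; S has done nothing to stop me thinking that q; so S intends me to think, or is at least willing to allow me to think, that q; and so S has implicated that q.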

It's notable that Grice's definition already assumes that q is "implicated" and only purports to say when it has been conversationally implicated. Why might that be? Is this condition actually required?

Grice's account of implicature refers to what the speaker believes, but what is implicated seems to be the proposition itself. E.g., in the example at the top of p. 32, what's required to make the statement accord with the Cooperative Principle is that the speaker believe the gas station is open, but what's implicated is just that the gas station is open. By contrast, in the example at the bottom of that page, the implicature is just that the speaker doesn't know exactly where C lives. Is there a problem here? What exactly is implicated in the example involving the letter of recommendation? That the writer believes that X is no good at philosophy, or that X is no good at philosophy?

Toward the end of the paper, Grice introduces the notion of generalized conversational implicatures. These are similar to conventional implicatures, because they are supposed to be, in a way, the normal case. The examples Grice gives here are not terribly convincing, it seems to me. A better one involves the word "most". (This is a variety of what are known as 'scalar' implicatures.) If someone says, "Most of the students passed the final", then one would normally suppose that not all of the students passed. This is a conversational implicature. (Which maxim is involved?) One of the signs that it is an implicature is that it can be explicitly "canceled" (p. 39): It would be fine to follow up by saying, "In fact, all of them did". If so, "most" cannot mean most but not all. Otherwise, the follow-up would contradict the original statement.

Consider the example mentioned above, involving "most": Is the 'generalized conversational implicature' (that not all of the students passed) something that is always, normally, or typically meantNN by the speaker? What might be the significance of this point for Grice's account?

As said, Grice does not formulate his reply to Strawson here, but it is clear what its outlines will be: What Strawson has noticed are certain implicatures. Can you fill in some of the details? How might Grice use the machinery developed here to argue, say, that "or" in English really is just truth-functional disjunction?

Conversational implicatures, Grice suggests, have a number of typical features (pp. 39-40). Conversational implicatures are:

  1. Cancelable: One can say, e.g., "There is a gas station around the corner. But I'm not sure if it's open." Note the difference with conventional implicatures, which are not cancelable: "Joan is a woman, but she's good at math. But I don't mean to suggest that women aren't generally good at math!" Sorry, you just did suggest that.
  2. Non-detachable: Since implicatures depend upon what has been said, they will usually be preserved by changes of wording. For example, the exact words used in the letter of recommendation do not matter. There are, though, special cases where the implicature does depend upon the specific words used. (Generally, these are manner implicatures.)
  3. Not part of the literal meanings of the associated words but, rather, depend upon the literal meaning.
  4. Not relevant to the truth or falsity of what is strictly and literally 'said'.
  5. Often somewhat non-specific (despite Grice's often speaking as if something quite specific is implicated).

It's a reasonable thought that conversational implicature is a type of non-natural meaning, in Grice's own sense. Can we argue for this claim? I.e., can we show that what someone conversationally implicates, according to the definition Grice gives on pp. 30-31, is also meantNN by them? One interesting question here is what guarantees that the overtness condition for meaning will be satisfied.

Meaning and Truth-Conditions
13 September

Noam Chomsky, Aspects of the Theory of Syntax (Cambridge MA: MIT Press, 1965), sections 1.1–1.6 (PDF, DjVu)

You can skim (or even skip) §§2-3, on pp. 10–18, as well as §5, on pp. 27-30. I'll highlight the important points in these sections in the notes.

Chomsky quotes a number of sources in their original, non-English form. Rough translations, thanks mostly to Google, can be found in this document.

This was not Chomsky's first published book on syntax. That was Syntactic Structures (Internet Archive), first published in 1957. An even earlier book was The Logical Structure of Linguistic Theory (Internet Archive), which was written in 1955, though not published until 1975 (though it was circulated in other forms, and was very influential). Aspects was a more comprehensive treatment, and it is far more explicit about the philosophical underpinnings of Chomsky's new approach to language. It is widely considered one of the foundational documents of 'generative' linguistics.

For discussion of the history, see the talk by Robert May listed as optional, and the other talks in that series. (May's was the first in the series.)

Reading Notes

Related Readings

  • Noam Chomsky, "A Review of Verbal Behavior", (PhilPapers)
    ➢ This was a review of a book about language by B.F. Skinner, one of the leading pscyhologists of his day, and an arch-behaviorist. Chomsky's review amounts to a manifesto for what we now know as 'cognitive science' and, in many ways, signaled the end of behaviorism, though it would be a while before behaviorism's demise (and, in many ways, echoes of behaviorism continue to be felt in various fields).
  • W. V. O. Quine, "Methodological Reflections on Current Linguistic Theory", Synthese 21 (1970), pp. 386-98 (PhilPapers)
    ➢ A critique of Chomsky's ideas, from one of the giants of American philosophy, and a sympathizer with behaviorism.
  • Noam Chomsky, "Quine's Empirical Assumptions", Synthese 19 (1968), pp. 53-68 (PhilPapers)
    ➢ A direct criticism of Quine's ideas about language. Even though it was published (and presumably written) before Quine's paper, it might as well be a reply to it.
  • Peter Ladefoged and Donald E. Broadbent, "Perception of Sequence in Auditory Events", Quarterly Journal of Experimental Psychology, 12 (1960), pp. 162-170 (Sage Publications)
    ➢ An early paper arguing that grammatical structure is present in the perception of speech.
  • Jerry Fodor and Thomas Bever, "The Psychological Reality of Linguistic Segments", Journal of Verbal Learning and Verbal Behavior 4 (1965), pp. 414-20 (Science Direct)
    ➢ Another paper making the same sort of argument, though with more explicit attention to its foundational consequences.
  • Edward C. Tolman, "Cognitive Maps in Rats and Men", Psychological Review 55 (1948), pp. 189-208 (APA PsycNet)
    ➢ Another early salvo in the cognitive revolution.
  • Robert May, "The Genesis of Generative Grammar: Syntax and Logical Form", 11 November 2023 (YouTube)
    ➢ One of a number of talks concerning the foundations and history of generative grammar, delivered by people who were part of that history. May's dissertation introduced the level of linguistic description now known as LF (or Logical Form). The actual talk starts at about 7:40.

Our main goal in reading this material from Aspects is to get clear about some basic distinctions that structure linguistic theory, even today.

The first of these is the distinction between competence and performance. The basic idea is that performance, "the actual use of language in concrete situations" (p. 4), is the result of a speaker's applying some general ability that they have: their competence. They might make mistakes of various kinds, or for some other reason be unable to make use of the ability they have. Performance is not a direct reflection of competence.

Chomsky offers a number of examples to illustrate this point. I'll mention some others, for the sake of variety. Consider:

(1) The horse raced past the barn fell.

This might seem like gibberish, but careful consideration shows that (1) is in fact a grammatical sentence of English with a perfectly good meaning. It means the same as:

(1') The horse which was raced past the barn fell.

(1) is a so-called 'garden path' sentence. What makes it hard to parse is that you tend to hear "raced" as the main verb of the sentence, which then seems to end with "barn", so what is "fell" doing? Or consider:

(2) No eye injury is too trivial to ignore.

This sentence was once found on a sign in a hospital, and most people, on first encountering it, think it means: No eye injury is so trivial that you should ignore it. But careful consideration shows that it means the opposite: No eye injury is so trivial that you shouldn't ignore it; i.e., no matter how trivial an eye injury may be, you should ignore it. It's (in part) because that makes little sense that we tend to hear the sentence differently.

Other examples, which Chomsky discusses later, concern possible ambiguities. To modify one of his examples (in a way I owe to Jim Higginbotham), consider:

(3) I almost had my wallet stolen.

As Chomsky notes, there are (at least) three things this sentence can mean, though one of them is almost impossible to hear, because it makes no sense. (It is, as it is sometimes said, 'pragmatically excluded'.)

What are the three possible meanings of (3)? Which one is so hard to hear, and why? Can you explain in detail what point Chomsky is trying to make with this example?

Linguistics, Chomsky insists, is interested in this underlying ability, that is, in competence. He also insists that this ability consists in certain knowledge (or, maybe better, information) that speakers have about their language. The question then becomes: What knowledge do speakers possess about their language that allows them to speak it? In thinking of competence in this way, Chomsky was at the forefront of the 'cognitive revolution' in psychology (though he was by no means the only one). In particular, Chomsky is here rejecting the behavioristic psychology that was still popular, indeed dominant, in the 1960s. (For detailed criticism of behavioristic accounts of language, see Chomsky's review of Skinner's Verbal Behavior, listed as optional. For an even earlier anti-behavioristic work, one of the very first papers in what we now call 'cognitive science', see the optional paper by Tolman.)

Chomsky also insists that the grammar should be 'fully explicit'. What he means is clearest from the examples he gives: traditional grammars tended to focus on exceptions (e.g., 'irregular' constructions) and to say little about the rules that govern the normal cases. By contrast, Chomsky is insisting that the rules that govern the normal cases are precisely what should be of interest to linguistics.

Even more importantly, Chomsky notes that the range of possible sentences a competent speaker might understand (or reject as ungrammatical) is unbounded. Speakers' knowledge cannot, therefore, be 'list-like': We don't just have a list of grammatical sentences, or know one-by-one what the meanings of sentences are. So one has to state some general principles that speakers are using to interpret utterances, and that they can use to interpret sentences they have never previously encountered. Chomsky took as inspiration work in logic and the theory of computation: What he is seeking is something like a set of rules for 'generating' the set of grammatical sentences, or for assigning a meaning (or range of possible meanings) to a sentence.
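
To make the idea concrete, here is a minimal Python sketch of a 'generative grammar' in roughly this sense. The rules and the tiny vocabulary are invented for illustration (they are not Chomsky's); the point is just that finitely many rules, one of them recursive, generate unboundedly many sentences:

    import random

    # Finitely many rewrite rules; the recursive rule S -> S and S
    # is what makes the set of generated sentences unbounded.
    GRAMMAR = {
        "S":  [["NP", "VP"], ["S", "and", "S"]],
        "NP": [["Alex"], ["Drew"], ["the", "horse"]],
        "VP": [["runs"], ["smokes"], ["fell"]],
    }

    def generate(symbol="S", depth=0):
        """Rewrite a symbol until only vocabulary items remain."""
        if symbol not in GRAMMAR:
            return [symbol]  # a terminal: an actual word
        # Avoid the recursive rule at greater depths, so generation halts.
        options = GRAMMAR[symbol][:1] if depth > 3 else GRAMMAR[symbol]
        rule = random.choice(options)
        return [word for part in rule for word in generate(part, depth + 1)]

    print(" ".join(generate()))  # e.g.: Alex runs and the horse fell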

On p. 8, Chomsky writes: "Obviously, every speaker of a language has mastered and internalized a generative grammar that expresses his knowledge of his language". Is that so obvious? Why does it seem so to Chomsky? What sorts of assumptions might he be making? What does he mean when he adds: "Any interesting generative grammar will be dealing... with mental processes that are far beyond the level of actual or even potential consciousness"?

You can skim or skip §2. This is mostly a critique of approaches that attempted to account for linguistic performance without reference to an underlying theory of the structure of linguistic competence. Behavioristic psychology is Chomsky's main target here. The notion of 'acceptability' is what behavioristic theories of language used in place of grammaticality.

You can also skim or skip §3. Here, Chomsky outlines the overall structure of a linguistic theory, dividing it into syntactic (concerned with what expressions are grammatical), semantic (concerned with meaning), and phonological (concerned with sound) components. Chomsky also distinguishes "deep structure" from "surface structure", but this will not be particularly important to us. (Although something like this distinction is still made, it is drawn in a very different way.)

In §4, Chomsky addresses some methodological questions about how the study of competence should proceed. The basic idea is fairly simple: The data for the theory are actual linguistic performances (including 'intuitions' of native speakers about whether a given sentence is grammatical and, if so, what it can or must mean). The goal is then to formulate a set of rules that will explain this data—that is, to specify the 'tacit' (roughly: unconscious) knowledge that gives rise to the observed performance.

Chomsky's repeated dismissal of 'objective tests' and 'operational procedures' is again bound up with his rejection of behaviorism.

This brings us to the second crucial distinction that Chomsky draws in this chapter: between descriptive adequacy and explanatory adequacy. In the case of syntax, a grammar is descriptively adequate if it correctly categorizes each sentence as grammatical or ungrammatical. But explanatory adequacy demands more: that the theory be one that a child might 'acquire' on the basis of the limited exposure to 'primary linguistic data' that they actually enjoy.

The term 'explanatory adequacy' suggests, rather, that what we are looking for is a theory that does not just correctly categorize sentences, but also explains why those sentences are grammatical, or not. Or perhaps: a theory that explains why speakers so characterize them. How might this be related to the way Chomsky explains 'explanatory adequacy'?

In §5, Chomsky introduces a third important idea (though one that will not be our central focus): that the child brings with it, to the setting of language acquisition, innate knowledge of certain 'linguistic universals' (that is, principles that apply to all possible human first languages). This is what is known nowadays as universal grammar, or UG.

In §6, Chomsky sketches a general picture of what an explanatorily adequate linguistic theory might look like. This amounts to sketching a picture of what the process of language acquisition might itself look like, and how a child's innate linguistic knowledge might be put to use in developing a grammar for the language to which they are being exposed.

Part of what Chomsky is elaborating here is what have been called 'poverty of the stimulus' arguments. The thought is that the actual linguistic data to which actual children are exposed are extremely limited: too limited to determine any set of principles that would suffice to characterize arbitrary strings as grammatical or not, or to assign the grammatical ones determinate meanings. If so, then the child must bring something to the learning environment which, when added to the data, does suffice to select a particular 'grammar' for its language. The way Chomsky imagines this happening, in Aspects, is that the set of possible grammars is itself severely restricted: If the problem were to settle upon an arbitrary set of principles that accounted for the 'primary linguistic data', the problem would be unsolvable; but if the problem is to select one of a limited range of possible grammars on the basis of that same data, then the problem becomes much easier. Indeed, the smaller the range of possible grammars, the easier the problem.

The papers we will be reading next all concern, though they do not speak in quite these terms, what a generative semantic theory might look like.

16 September

Donald Davidson, "Theories of Meaning and Learnable Languages", in his Inquiries into Truth and Interpretation (Oxford: Oxford University Press, 1984), pp. 3-15 (PhilPapers, PDF, DjVu)

This paper was originally published in 1965, in a somewhat obscure volume.

You can skip the last example that Davidson discusses, on pp. 14-5.

Reading Notes

Related Readings

  • Zoltan Gendler Szabó, "Compositionality", Stanford Encyclopedia of Philosophy (SEP)
  • Donald Davidson, "Quotation", Theory and Decision 11 (1979), pp. 27-40 (PhilPapers)
    ➢ Davidson's own attempt to sort out the problems with quotation.
  • Herman Cappelen and Matthew McKeever, "Quotation", Stanford Encyclopedia of Philosophy (SEP)

What is distinctive of language, as opposed to other forms of communication, is that words have "conventional meaning" as Grice put it, and that this meaning is partly determinative of one core part of the meaning of an utterance: what is said. But note that we think of both individual words and sentences as having meaning: Somehow, it seems, the meanings of the words in the sentence "Snow is white" 'combine' to give one the meaning of the sentence. The central question of the next couple papers we will read is: What does that really mean? What is sentence meaning? What is word meaning? And how do the meanings of words "combine"?

In this paper, Davidson is concerned to motivate what has come to be called the 'principle of compositionality', which asserts that the meaning of a sentence is a function of the meaning of the words of which it is composed, and of how that sentence is composed from those words. Additionally, he argues that compositionality imposes genuine constraints on theories of meaning, i.e., that some theories of how some constructions work can be ruled out on the ground that they conflict with compositionality.

Davidson begins by considering a dispute between Strawson and Quine over whether there could be a natural language that contained no singular terms (roughly, names of objects). You need not worry too much about the details of this discussion. Davidson's main point is just that claims about what is required for a language to be learned are empirical and cannot be established a priori.

Note the need for care in reading here. Davidson outlines a 'theory' of language learning on pp. 3-4, and you might well get the impression, from the way he presents it, that he means to endorse it. But then he says that it "is now discredited in most details".

At the beginning of section II, Davidson outlines, very quickly, a set of considerations that are supposed to motivate compositionality. He first introduces an analogy with syntax: It must be possible, he says, for us to "define a predicate of expressions...that picks out the class of meaningful expressions" (p. 8), that is, roughly, the grammatical ones. He does not explain why, but the thought, presumably, is that this is something that ordinary speakers know (e.g., which expressions are grammatical sentences of their language), and so that there must be some answer to the question how they are able to tell whether an expression is a (grammatical) sentence; if so, then (in principle) we should be able to tell some story about what features of an expression make it a sentence (or not).

Similarly, then, for semantics (the study of meaning): Ordinary speakers are able to determine, even of many sentences they have never previously encountered, what those sentences mean; there must be some way they do this; there must be some story to be told about how; the only reasonable hypothesis seems to be that they know what the words comprising some "novel sentence" mean, and they are able to put those word-meanings together to determine the meaning of the sentence.

Do not worry too much yet about the reference to Tarski's theory of truth. Davidson will have more to say about that in the next paper we read.

Consider now a language that (i) contained, as natural languages do, infinitely many sentences and (ii) did not satisfy the principle of compositionality, so that there was no story to be told about how one could generate, from some finite basis (of word-meanings), the meanings of all the sentences of that language. Then, Davidson claims, the language would be unlearnable in principle. What is his argument for this claim? The argument is on pp. 8-9.

Davidson goes on to argue that English, e.g., must contain finitely many "semantic primitives" from which the meanings of all other expressions can be generated. To what extent does the argument depend upon there really being infinitely many sentences of English? Are there really infinitely many sentences of English? In what sense? Is each of us actually capable of understanding infinitely many sentences? In what sense?

Davidson proceeds to give four examples of theses philosophers have held about language that would imply that English contains infinitely many semantic primitives. We will work through Davidson's discussion of quotation in class, in some detail. The key claim here is that a quotation name (such as: "snow") cannot be thought of as having its meaning determined as some function of the meaning of the word contained within the quotation marks. (This is what Davidson means when he says that quotation marks are not a functional expression.) The contained word is not used at all, and its meaning is irrelevant. This is particularly clear in the case of quotation names of non-words, e.g., in the sentence: The expression "snaw" is not a word of English.

Davidson alludes on p. 9 to paradoxes involving quotation. But it is not so much quotation as semantic notions like denotation and truth that threaten to give rise to paradox. For an example of one involving denotation, consider Berry's paradox (a close relative of Richard's). Only finitely many numbers can be uniquely picked out by expressions of less than twenty-two syllables. This is because there are only finitely many such expressions. So some number is not so named and, by the least number principle, there is a least such number. But then "The least number not named by any expression of less than twenty-two syllables" picks out that number, and does so in twenty-one syllables! Contradiction.

Here are some questions to think about ahead of class: Why might all of that have led Church to say that quotation is "misleading"? What is misleading about it? Why does Davidson insist that the claim that a quotation names its interior (i.e., what is within the quotes) does "not provide even the kernel of a theory" (p. 9)? Quine's proposal, which Davidson mentions but does not explain, was to replace quotation by a form of spelling out: Thus, instead of "'snow'" we would have: ess + en + oh + double-u. How would that help solve the problem? Does it solve the problem?

Exactly why can we not think of quotation marks as a kind of functional expression? If we write "q(·)" for quotation of, then what is wrong with: q(snow)? Note that this is meant to be analogous to thinking of, say, "the color of" as a function: so color-of(snow) = white, color-of(grass) = green, etc. Why not something similar for quotation?

Davidson considers three more examples which we may not have time to discuss in class. You can skip the last one. But you should read the second and third examples, and think them through, as they will be mentioned in later readings.

18 September

Donald Davidson, "Truth and Meaning", Synthese 17 (1967), 304-23; reprinted in Inquiries, pp. 17-36 (PhilPapers, PDF, DjVu, Springer)

We'll primarily focus on pp. 304-18. There is a very complicated, and very compressed, argument that Davidson gives at the top of p. 306 which you are welcome to skim. Do not worry if you do not understand it. It will not play a significant role in what follows. You can also skim the discussion of Tarski on pp. 313-6 and pick up again at the bottom of p. 316.

Reading Notes

Related Readings

  • Richard Kimberly Heck, "Is Compositionality a Trivial Principle?" Frontiers of Philosophy in China 8 (2013), pp. 140-55 (PhilPapers)
    ➢ Essentially develops Davidson's arguments on pp. 306-8.
  • Richard Kimberly Heck and Robert May, "The Function is Unsaturated", in M. Beaney, ed., The Oxford Handbook of The History of Analytic Philosophy (Oxford: Oxford University Press, 2013), pp. 825-50 (PhilPapers)
    ➢ Discusses and defends Frege's views that predicates are 'unsaturated' or 'incomplete'.
  • Richard Kimberly Heck and Robert May, "Truth in Frege", in M. Glanzberg, ed., The Oxford Handbook of Truth (Oxford: Oxford University Press, 2018), pp. 193-213 (PhilPapers)
    ➢ Discusses Frege's views about truth.
  • Richard Kimberly Heck and Robert May, "The Composition of Thoughts", Noûs 45 (2010), pp. 126-66 (PhilPapers)
    ➢ Attempts to explain how, according to Frege, the 'meanings' (or, in his terminology, 'senses') of words combine to form the meanings of sentences.

In "Theories of Meaning and Learnable Languages", Davidson argued that, if a language is learnable, we should be able to give a compositional theory of meaning for it: a theory that shows how the meaning of a sentence depends upon, and is determined by, the meanings of its parts. In this paper, Davidson argues that a theory of truth can play this role and, moreover, that nothing else can. This amounts (or is intended to amount) to an argument that the meaning of a sentence is the condition under which it is true, i.e., for what is known as a truth-conditional theory of meaning. This sort of view is well represented in the history of analytic philosophy, having been held by Gottlob Frege, Ludwig Wittgenstein (in the Tractatus), Rudolf Carnap, and many others.

The central question of the paper is thus: What is required to give a genuine explanation of how sentence-meaning is determined by word-meaning? The paper largely proceeds by dismissing various proposals until only one is left.

One desideratum on a proper solution is to avoid a certain regress that Davidson mentions in the second paragraph. If you just take each word to denote some entity as its meaning, then you have the problem how to combine them into the meaning of the sentence. What you cannot do is identify some further thing (instantiation, e.g.) as what does that. For then you just have yet another thing that needs to be combined with the rest. (How do the meaning of "Theaetetus", the meaning of "flies", and instantiation combine to give the meaning of "Theaetetus flies"?)

One of the papers by May and me listed as optional, "The Function Is Unsaturated", attempts to unpack and defend the view of Frege's that Davidson derides.

Davidson then considers the problem of explaining the meaning of all terms of the form "the father of...the father of Annette". He notes that the following two axioms will allow us to do so:

  • "Annette" refers to Annette
  • "the father of t" refers to the father of whoever the term t refers to

From these two principles, one can prove, e.g., that "The father of the father of Annette" refers to the father of the father of Annette. And, Davidson says, we have done that without assigning any "entity" as meaning to "the father of". Implicit here is the idea that an "explanation" of sentence-meaning in terms of word-meaning might amount to a theory with axioms specifying what the words mean, which will then allow us to derive theorems saying what the sentences mean.
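
As a sketch of how such a derivation might be mechanized, here is the pair of axioms rendered as a recursive Python function. The particular family facts are invented for illustration; only the recursive structure matters:

    # Terms are either the name "Annette" or nested tuples
    # of the form ("the father of", t).
    FATHER_OF = {"Annette": "Armand", "Armand": "Abel"}  # hypothetical facts

    def referent(term):
        if term == "Annette":              # axiom 1: "Annette" refers to Annette
            return "Annette"
        _, t = term                        # axiom 2: "the father of t" refers to
        return FATHER_OF[referent(t)]      # the father of whatever t refers to

    # "the father of the father of Annette" refers to Annette's
    # father's father:
    print(referent(("the father of", ("the father of", "Annette"))))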

The discussion in the full paragraph on p. 305 seems to me confused. Don't the axioms 'give the meanings' of the atomic parts? Probably what Davidson has in mind is that these axioms only state the reference of "Annette" and 'what contribution' the phrase "the father of" makes to determining the reference of larger expressions. But it seems odd to treat such a phrase as syncategorematic.

Davidson then argues that, although we can try to extend this treatment to sentences, that will not work. His argument for this claim is known as the Slingshot, and it is a large topic in its own right. Suffice it to say that Frege's own approach takes sentences to refer to their truth-values. (The Slingshot—which is not an argument Frege himself gives—purports to show that this choice is forced: that if sentences refer to anything, it must be to their truth-values.) But then it turns out that all true sentences refer to the same thing, as do all false ones, which seems a bit odd if we're thinking of the truth-value as the meaning of the sentence.

The notation x̂(...x...) means: the class (or set) of x such that ...x.... See the paper by May and me, "Truth In Frege", for discussion of Frege's views about truth.

As Davidson notes, however, Frege himself insisted (in part for this kind of reason) that we distinguish meaning from reference. (What Davidson is calling "meaning" here, Frege called "sense".) E.g., Frege thought that the names "Mark Twain" and "Samuel Clemens", though they refer to the same person, have different meanings (i.e., senses), because "Mark Twain is Mark Twain" seems to be a logical truth, whereas "Mark Twain is Samuel Clemens" is not (but looks to be something one could discover). So the next set of proposals attempts to work with the notion of meaning rather than that of reference. This is where the real work of the paper begins.

Davidson considers this approach on pp. 306-7. The first problem is that we have been given no substantive account of what the meaning of "flies" is or how it "yields" the meaning of "Theaetetus flies" when given the meaning of "Theaetetus" as argument: The meaning of "flies", on this account, just is whatever function maps the meaning of "Theaetetus" to the meaning of "Theaetetus flies", and so forth. It is also explanatorily useless: One can't specify the function without already knowing what all sentences containing "flies" mean.

The second problem, which is the really serious one, and which is shared with the suggestion Davidson considers on pp. 307-8, is that such a theory makes no real use of the structure of a sentence, that is, of how its words are combined into a sentence.

The paper by May and me, "The Composition of Thoughts", attempts to explain how Frege thought this was supposed to work. The paper of mine, "Is Compositionality a Trivial Principle?" develops these arguments of Davidson's in detail.

We'll discuss this point in class, but one important instance with which you should already be familiar involves quantification: In effect, what Davidson is pointing out is that, while quantifiers appear in essentially the same places that names do, they work semantically in a completely different way: To understand how the meaning of "Someone smokes" is determined by the meanings of "Someone" and "smokes", we need to understand this difference. Can you develop that point a bit?

Davidson traces the underlying problem to a lack of clarity about what 'meaning' is and a corresponding lack of clarity about what a 'theory of meaning' is supposed to do: We want the theory to show us how the meaning of a sentence is determined by the meanings of its parts, but we do not even seem to know what kind of thing the meaning of a word or sentence is! (I.e., we do not know how properly to specify these things in our theory.) Davidson concludes that the shift from reference to meaning has not accomplished what we had hoped it would.

On p. 309, Davidson introduces his positive proposal. It seemed, Davidson suggests, that what we needed was a theory that would somehow deliver, from axioms that specified the meanings of the words, theorems of the form:

  • S means that p

where "S" is replaced by a name of a sentence (that is close enough, for our purposes, to what Davidson means by a 'structural description') and "p" is replaced by that very sentence (or a translation of it), e.g.:

  • "Alex runs" means that Alex runs
  • "Drew smokes" means that Drew smokes

The problem is that it is very unclear how to do this. Davidson's remarks about the non-extensionality of "means that" amount to an observation that we do not even know what the logic of such a theory might be like.

When Davidson speaks of the 'meta-language', he means the language in which we are stating our theory of meaning. By contrast, the 'object language' is the language for which we are giving the theory.

But, Davidson suggests, maybe "the success of our venture depends not on the filling but on what it fills" (p. 309), that is, not on the words "means that" but on the relation between "S" and "p" that the theory secures. So we might instead take the target to be theorems of the form:

  • S is true if and only if p

where, again, "S" is replaced by a name of a sentence and "p" is replaced by that very sentence. (This is Tarski's "Convention T", which says, more or less, that such sentences will always be true, and that any predicate for which they are all true will be co-extensive with "true" itself.) What we want, that is to say, is for our theory to deliver such theorems as:

  • "Alex runs" is true iff Alex runs
  • "Drew smokes" is true iff Drew smokes

on the basis, again, of axioms specifying the meanings of the words. Note that, on this approach, saying what a sentence means amounts to saying what is required if it is to be true, i.e.: the meaning of a sentence is the condition for it to be true, its 'truth-condition'.

The great advantage to this proposal, for Davidson, is that we have some idea from logic how to do this, at least for a reasonable range of sentences, thus:

  • "Alex" refers to Alex
  • The extension of "runs" is the set of things that run
  • A sentence of the form "name one-place-predicate" is true iff the thing to which the name refers is in the extension of the predicate

Those three principles imply that "Alex runs" is true iff Alex runs. And we can do similar things for relational predicates, like "loves", and even for quantifiers (although the formal details get messy in that case).
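
Here is a minimal sketch, in Python, of how such a toy theory can be made mechanical. All the names here (the reference and extension tables, and the toy objects) are my own inventions for illustration, not Davidson's formalism; the point is just that two lookup tables plus one combination rule suffice to generate the T-sentence.

    # Axioms of the toy theory (hypothetical names, for illustration only):
    reference = {"Alex": "alex", "Drew": "drew"}        # "Alex" refers to Alex, etc.
    extension = {"runs": {"alex"}, "smokes": {"drew"}}  # the extension of "runs"

    def is_true(name, predicate):
        # Combination axiom: "name one-place-predicate" is true iff the thing
        # to which the name refers is in the extension of the predicate.
        return reference[name] in extension[predicate]

    print(is_true("Alex", "runs"))    # True, just as "Alex runs" is true iff Alex runs
    print(is_true("Alex", "smokes"))  # False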

As it happens, Davidson prefers a slightly different theory:

  • "(1) runs" is true of x iff x runs
  • A sentence of the form "name one-place-predicate" is true iff the predicate is true of the thing to which the name refers

We'll see other people talk this way, but for now the difference will not matter. This is more or less how Alfred Tarski did things. Davidson mentions him here because he is the one who first sorted out all the details (though most of them are already present in Frege).

Question: Does Davidson really have an argument for this proposal? If so, what is it?

Davidson suggests, on p. 311, that there is no trouble telling when we have a correct theory. At least in the case where the meta-language contains the object language (i.e., we are giving a theory of meaning for English in English), "anyone can tell whether it is right". Is this consistent with his claim, earlier in the same paragraph, that the theory is "empirical"?

On pp. 311-2, Davidson considers an important objection, one that hearkens back to the Slingshot. Both the following sentences are true:

  • "Snow is white" is true iff snow is white
  • "Snow is white" is true iff grass is green

(That is because "iff" here is the material biconditional. If it were not, we'd be back in the intensional soup.) For all that has been said so far, then, a theory that yielded the latter would be as good as one that yielded the former. But surely that has to be wrong? Surely the latter gets the meaning of "Snow is white" wrong?

What is Davidson's response? How good is it? (We will consider this issue in more detail in a week or so.)

One case that might pose a challenge for Davidson would be a theory of meaning for the language whose sentences are just arithmetical equations involving 0, 1, +, ×, =, and <. Since truth for this language is decidable, it is possible to write down a theory which would, for every sentence S, allow us to derive a (true) theorem of one of these two forms:

  • S is true iff 0 = 0
  • S is true iff 0 = 1

Would such a theory count as one that 'gave the meaning' of each sentence? Why or why not? (Note: "It doesn't seem like it" or "That's not intuitive" is not an answer.)
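
Since the language contains no quantifiers or variables, the decidability claim is easy to make concrete. Here is a minimal sketch in Python, using my own encoding of terms and sentences as nested tuples; nothing about the encoding is in the original.

    def eval_term(t):
        # Terms: 0, 1, ("+", t1, t2), ("x", t1, t2).
        if t in (0, 1):
            return t
        op, a, b = t
        return eval_term(a) + eval_term(b) if op == "+" else eval_term(a) * eval_term(b)

    def t_sentence(s):
        # Sentences: ("=", t1, t2) or ("<", t1, t2). Evaluate the sentence and
        # sort it into one of the two degenerate T-sentence forms.
        rel, a, b = s
        holds = eval_term(a) == eval_term(b) if rel == "=" else eval_term(a) < eval_term(b)
        return "S is true iff 0 = 0" if holds else "S is true iff 0 = 1"

    print(t_sentence(("=", ("+", 1, 1), ("x", ("+", 1, 1), 1))))  # true: 1+1 = (1+1)x1
    print(t_sentence(("<", 1, 0)))                                # false: 1 < 0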

The end of the paper is a catalog of open problems that such an approach to semantics suggests. Why, in particular, does "Bardot is a good actress" pose a problem? How is that similar to or different from the problem posed by "Katelyn is a tall gymnast"? (Hint: She might be short for a 21-year-old woman.)

20 September

Richard Kimberly Heck, "Tarski's Theory of Truth" (PDF)

You can skip §§4-6 of the PDF. You also do not have to do any of the exercises! Do not worry if some of the formal details (especially in the section on quantifiers) are not entirely clear. The goal here is to get a general idea for how theories of truth can be formalized.

Show Reading Notes

Related Readings

  • Alfred Tarski, "The Concept of Truth in Formalized Languages", in his Logic, Semantics, and Metamathematics (Oxford: Clarendon Press, 1956), pp. 152-278 (PDF, DjVu)
    ➢ The technical paper in which Tarski introduced his ideas about truth.
  • Alfred Tarski, "The Semantic Conception of Truth and the Foundations of Semantics", Philosophy and Phenomenological Research 4 (1944), pp. 341-76 (PhilPapers)
    ➢ A more 'philosophical' discussion of Tarski's ideas about truth.
23 September

Sir Peter Strawson, "Meaning and Truth", in his Logico-Linguistic Papers (London: Methuen, 1971), pp. 170-89 (PDF, DjVu)

In case you are curious about the beginning of the paper, this was Strawson's "inaugural lecture" when he became the Waynflete Professor of Metaphysical Philosophy at Oxford University. It is customary on such occasions to praise one's predecessor, in this case, Gilbert Ryle.

Show Reading Notes

Related Readings

  • Donald Davidson, "Communication and Convention", Synthese 59 (1984), pp. 3-17 (PhilPapers)
    ➢ Not a response to Strawson, but some of the ideas in this paper might be used to construct one on Davidson's behalf. (Doing that might make for a good final paper.)

Strawson is concerned in this paper to mediate a dispute between two approaches to questions about meaning. As you will see, he is not exactly a neutral party.

The central question, as Strawson sees it, concerns meta-semantics: what it is (that's a metaphysical question) for sentences to have meaning (that is, to have semantic properties). More precisely: "What is it for a particular sentence to have the meaning or meanings it does?" As Strawson sees it, there are two broad approaches to the problem, represented here by Grice and Davidson.

  • The 'theorists of communication intention' hold that a sentence's meaning what it does consists, somehow, in speakers' having certain communicative intentions: their using that sentence with the intention to communicate some particular proposition. Of course, that will not always be true (due to implicature, irony, and the like), but the hope is that those issues can be finessed somehow. Strawson sketches some of the ideas about how to implement this approach on pp. 173-6, but you need not worry too much about them. The crucial issue does not lie there (at least according to Strawson).
  • The 'theorists of formal semantics', on the other hand, hold that there are 'rules' that determine the meaning of sentences but that these rules are not, in any sense, rules for communicating. Rather, the 'rules' fix truth-conditions. As Strawson sees it, Davidson's claim is that someone who knows the truth-condition of a sentence knows what it means, without necessarily having any idea how that sentence might be used to communicate with someone.

Most of the beginning of the paper is spent outlining these two views.

On p. 179, Strawson arrives at what he takes to be the crucial question. Strawson is prepared to agree that the meaning of a (declarative) sentence can be identified with its truth-condition. But we "cannot be satisfied that we have an adequate general understanding of the notion of meaning unless we are satisfied that we have an adequate general understanding of the notion of truth" (p. 180). So the question is: How is the notion of truth-condition to be explained, without any appeal to the notion of a communicative intention? Or, more precisely, how is the notion of truth itself to be explained?

In the middle of p. 180, Strawson considers the response that truth itself is explained by Convention T, or by a definition of truth that satisfies it. He alleges that this is circular. Why?

One might worry that Strawson's formulation of the "crucial question" is imprecise. What is really at issue here is not so much the notion of truth-condition but what it is for a given utterance to have a certain truth-condition. To answer that question, we do need to know, among other things, what it is for an utterance to be true. Can Strawson just accept this clarification? Or does it affect his argument?

What Strawson initially suggests is that truth should be regarded, primarily, as a property of 'sayings', characterized roughly as follows: "One who makes a statement or assertion makes a true statement if and only if things are as, in making that statement, he states them to be" (p. 180). If so, Strawson continues, then the very notion of truth (as applied to sayings) is bound up with the notion of what is stated by someone who makes a given utterance. And that, one might think, can only be explained in terms of the general notion of 'making a statement', that is, in terms of that sort of 'speech act', which is, in the normal case, performed with certain communicative intentions. Indeed, Strawson suggests, what it is to 'make a statement' is, at least in the normal case, to speak with such intentions.

You will note that Strawson speaks here as if the communicative intention characteristic of assertion is that one's audience should come to believe not, say, that there are pigs in the garden, but that the speaker themselves believes that there are pigs in the garden. This is because of counterexamples to the former view that led to the latter view. Is the view he's articulating more or less plausible depending upon which of these two views one holds?

On this conception, then, the 'rules' that determine the meaning of a sentence determine what that sentence can be used to say, in Grice's sense, and so what that sentence might (in otherwise unexceptional circumstances) be used to mean or communicate (see p. 181).

I think the line of thought here might be articulated as follows:

  1. An utterance of "Snow is white" is true iff snow is white. (This is the 'rule' for that sentence.)
  2. An utterance of "Snow is white" is true iff what the speaker states to be the case is in fact the case. (Connection between truth and stating.)
  3. What is in fact the case if things are as someone who utters "Snow is white" states them to be is that snow is white. (From 1 and 2.)
  4. How things are stated to be by someone who utters "Snow is white" is that snow is white. (From 3.)

So it is in that way that the 'rule' articulated at (1) fixes what is stated when someone utters "Snow is white". Does that seem right (as an account of Strawson's line of thought)? More substantively, should 'theorists of formal semantics' object here? Or can they accept this much? Strawson clearly thinks that, if they do, then they are committed to the view that communicative intentions are essential to the analysis of meaning. But where do communicative intentions enter here?

The discussion on pp. 182-3 pushes the issue from a slightly different direction, noting that merely 'associating' the sentence "Snow is white" (say) with the 'state of affairs' of snow's being white cannot suffice.

The discussion recalls, for me, a question asked by Sir Michael Dummett. Suppose that we take Davidson's theory but substitute some nonsense word for "true", say, "alby". So we have such theorems as: "Snow is white" is alby iff snow is white. Clearly, this does not carry any information about the meaning of "Snow is white". But why not? What was it about the use of the word "true" that was so important? Why should truth-conditions seem to have any essential connection with meaning? Does Davidson have an answer to this question?

Seeing no possibility of progress in that direction, Strawson offers the formal semanticists another option: to connect the notion of 'what is stated' not with the notion of belief-communication but rather with the notion of belief-expression, where expressing a belief need not involve any intention to communicate it to someone else. So, on this account, "...the meaning-determining rules for a sentence of the language are the rules which determine what belief is conventionally articulated by one who...utters the sentence" (p. 184). (Chomsky, to whom Strawson also refers, has certainly made this kind of suggestion.) As Strawson notes, this is very reminiscent of his own view: If one looks again at p. 181, one can see Strawson himself suggesting that the truth-condition of a sentence determines what belief someone who makes a literal utterance of that sentence would thereby be expressing (that is, what the 'content' of that belief is).

The main objection Strawson pushes here (on pp. 185-6) is that it is less clear than it might seem what 'expressing a belief' really is. Dialectically, the key point is that this cannot—from the point of view of the formal semanticists—be something one does with the intention of getting other people to think one has that belief. That would be a communicative intention, which is what we're supposed to be trying to avoid. But if we don't think of it that way, then what is it?

Strawson goes on to suggest that, even waiving that point, such a view would make it a mystery why people who live in the same area tend to speak the same language, rather than all having their own way of expressing their beliefs. The thought, I take it, is that, if it's just incidental whether one's expressions of belief are understood as such by other people, then why can't one just have one's own special way of expressing one's beliefs? But Strawson does not really have much to say about this matter, other than that it seems to him to lead to a view that is "too perverse and arbitrary to satisfy the requirements of an acceptable theory" (p. 188). But this issue, though interesting in its own right, is very complex and it's a reasonable guess that the theorists of formal semantics will have wanted to get off this train long before it reached this station.

Here's another way one might try to raise the question in which Strawson is interested. Davidson's idea seems to be that someone who knows the truth-conditions of all the sentences of some language will therefore be able to speak it. But why exactly? How is one to put this 'knowledge of truth-conditions' to work? What precisely does knowledge of truth-conditions enable one to do, and how?

What difference might it make to this discussion if we make room for implicature? It's not, in general, true that someone who utters a sentence wants their audience to believe its content. The sentence might be obviously false, and the fact that it's obviously false is being used to allow the speaker to implicate something else. Will one of the two views have an easier time explaining this phenomenon? Or is it somehow unfair even to raise this issue?

25 September

David Lewis, "Languages and Language", Minnesota Studies in the Philosophy of Science 7 (1975), pp. 3-35 (reprinted in Lewis's Philosophical Papers, vol.1 (New York: Oxford University Press, 1983), pp. 163-88) (PhilPapers, PDF, DjVu, PDF (original), DjVu (original), Minnesota Studies)

I have provided versions of both the original publication and the reprint. The former comes from the journal (as linked), and the scans are not great. I've provided page references to both in the notes.

You should concentrate on pp. 3-10 (163-9 of the reprint), in which Lewis summarizes a modified version of the account of linguistic meaning given in his book Convention (Cambridge MA: Harvard University Press, 1969). You can stop when Lewis starts to discuss how the account given in this paper differs from the one given in Convention. You should also read pp. 17-23 (pp. 174-9), where Lewis discusses a series of objections connected to compositionality. You are of course free to read the rest, but this is a long and complicated paper, and that much will be more than enough to keep us busy.

Lewis was one of the most influential American philosophers of the second half of the 20th century. His work on presupposition was particularly important.

Show Reading Notes

Related Readings

  • Stephen R. Schiffer, Meaning (Oxford: Clarendon Press, 1972) (PhilPapers)
    ➢ A book-length synthesis of Grice and Lewis. Very influential.
  • David K. Lewis, "General Semantics", Synthese 22 (1970), pp. 18-67 (PhilPapers)
    ➢ An important paper, historically, in which Lewis outlines the general structure of a semantic theory. To some extent, his discussion of 'grammars' in "Languages and Language" recapitulates much of the discussion in this paper.
  • David Lewis, "Scorekeeping in a Language Game", Journal of Philosophical Logic 8 (1979), pp. 339-59 (PhilPapers)
    ➢ Lewis's extremely influential discussion of presupposition.

Lewis here offers a different perspective on the sorts of questions that exercise Strawson in "Meaning and Truth". Lewis himself advertises it as a 'synthesis' of the two sorts of positions that Strawson considers (though he does not mention Strawson; in fact, this paper pre-dates Strawson's, though it was only published later). In particular, Lewis argues that (what he calls) languages, which are the concern of "formal semanticists", are central to an account of (what he calls) language, which is the social phenomenon of concern to philosophers like Grice. In particular, Lewis wants to explain what it is for a sentence to have a certain meaning in a given speech community by analyzing the relation: L is the language used by some population P, where that relation is itself explained in terms of a particular convention that prevails in P. In the background is a general account of what a convention is.

Lewis suggests that we think of a language, as far as meaning is concerned, as a function from sentences to 'meanings', which he takes to be truth-conditions. Following many then, and inspiring many now, Lewis suggests that we may think of truth-conditions as sets of "possible worlds" (situations, circumstances, ways that the world might be). So, e.g., the truth-condition of "Dogs meow" is the set of possible worlds in which dogs meow. Fortunately, this aspect of Lewis's view will not matter much for our purposes.

Lewis has much to say elsewhere about exactly what 'possible worlds' are supposed to be. But this metaphysical question will not be our concern. If you're curious about these issues, see the SEP article on possible worlds.

The notion of language as a social phenomenon is harder to characterize, but Lewis characterizes it in terms that should, at this point, be recognizably Gricean. What he emphasizes (even more than Grice does) is the conventional character of the association between "sounds and marks" and certain kinds of communicative intentions.

Lewis then offers a philosophical analysis of the general notion of a convention. Whether it is correct is a controversial issue in its own right, quite independent of questions about language. So we'll not pursue that issue and, for now, let Lewis have this analysis.

Test your understanding of Lewis's account of convention by explaining why, according to it, it really is a convention (in the US) to drive on the right.

On pp. 7-8 (166-7), Lewis turns to the question what connects languages (as formal objects) with language (the social phenomenon). His answer is: certain kinds of conventions. As usual, Lewis focuses on informative uses of language (assertion and the like). The trick is to say what the convention is. To focus on a single sentence (instead of a whole language), Lewis's proposal is that S means that q in the population P iff there prevails amongst the members of P a convention (i) to utter S only if one believes that q (the convention of truthfulness) and (ii) to "tend to respond" to someone else's utterance of S by coming to believe that q (the convention of trust).
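
Here is a minimal sketch, in Python, of what checking Lewis's two clauses might look like for a single sentence S and a made-up population; the data and names are wholly invented. Note that truthfulness is coded as a one-way conditional (utterance implies belief), which bears on the quick question below.

    # Each toy agent records the situations in which she utters S, the
    # situations in which she believes that q, and whether she tends to
    # come to believe q upon hearing S (trust, crudely modeled as a flag).
    population = [
        {"utters_S_in": {"s1"}, "believes_q_in": {"s1", "s2"}, "trusts": True},
        {"utters_S_in": {"s2"}, "believes_q_in": {"s2"}, "trusts": True},
    ]

    def truthful(agent):
        # (i) Utter S only if one believes that q: there is no situation in
        # which the agent utters S without believing that q.
        return agent["utters_S_in"] <= agent["believes_q_in"]

    def convention_prevails(pop):
        return all(truthful(a) and a["trusts"] for a in pop)

    print(convention_prevails(population))  # True: S means that q in this toy population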

Quick question: Why not "iff" in (i)?

Less quick question: Might it be better to modify (ii) so that people tend to believe that the speaker believes that q? Why or why not? (If you look closely at the two papers from Grice that we read, you will see that his view changes between them in exactly this way.)

Paper- or book-length question: How exactly is Lewis's view related to the Grice-Strawson view? (Lewis does address this question, obliquely, later in the paper.)

On pp. 8-9 (167-8), Lewis argues that, if some language L is used by a population P, then there always will be conventions of truthfulness and trust in L that prevail in P. The argument consists in checking that the six conditions that characterize conventions are all satisfied.

It would be worth working through these for the case of a single sentence. (In doing so, it is fine to ignore or bracket context-dependence, e.g., tense. Lewis discusses how his view accommodates context-dependence on pp. 13-5 (171-2).) Must all these conditions be satisfied? Are there really conventions of truthfulness and trust?

On pp. 17-23 (175-9), Lewis considers a series of objections based upon the fact that languages, as he describes them, only pair sentences with meanings and do not involve any sort of compositionality. The first of these is clearly a version of Davidson's main idea in "Theories of Meaning and Learnable Languages". In response, Lewis articulates an account of what (perhaps following Chomsky) he calls grammars. These assign meanings to sub-sentential components of sentences (i.e., words, roughly) and explain how these combine to determine the meanings of complete sentences. The details are not crucial, so you can skim pp. 18-19 (175-6), until the objection is restated (at the top of p. 20 (bottom of p. 176)).

The key semantic idea here is to relativize assignments of reference to possible worlds. So we'll have clauses like

"is green" is true of x, in possible world w, iff x is green in w

That will lead to relativized T-sentences like:

"Grass is green" is true, in possible world w, iff grass is green in w

This is essentially the alternative we shall shortly see John Foster offer Davidson.
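
A minimal sketch, in Python, of the relativization; the worlds and extensions are invented for illustration. The truth-condition of the sentence then falls out, in Lewis's style, as the set of worlds at which it is true.

    worlds = ["w1", "w2"]
    green_at = {"w1": {"grass"}, "w2": set()}  # toy world-relative extensions

    def true_of(x, w):
        # "is green" is true of x, in possible world w, iff x is green in w.
        return x in green_at[w]

    def true_at(w):
        # "Grass is green" is true, in possible world w, iff grass is green in w.
        return true_of("grass", w)

    print({w for w in worlds if true_at(w)})  # {'w1'}: the sentence's truth-condition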

The new version of the objection is: We need to be able to say that grammars are used by populations, not just languages, i.e., that words have certain meanings for speakers, not just sentences. In response, Lewis says that he simply does not know how to make sense of that idea or, more precisely, that he does not know what a convention to use a certain grammar would be like. Of course, one might reply that that only reveals a limitation of the notion of convention. But what other option might there be?

This kind of worry was originally raised by Quine (who was Lewis's PhD supervisor). The rough idea is that only sentences ever really get used by speakers, so everything about meaning must ultimately cash out in terms of sentences. But if two grammars generate the same language, then they make all the same predictions about sentences.

The second objection seems to echo Chomsky: If we want to explain how it is that speakers can have an 'infinite capacity' to understand novel sentences, then we need to think of them as 'knowing' grammars, i.e., knowing what their words mean and how those combine to determine sentence meanings.

Lewis's response is that this is an empirical hypothesis. As such, it can have no role in an analysis of what it is for a language to be used by a population. What does this say about how Lewis understands his project? How might Chomsky respond?

Oddly enough, Davidson ends up holding a similar view in "Reality Without Reference", but I have never understood how to reconcile that view with what he argues in "Theories of Meaning and Learnable Languages".

On pp. 22-3, Lewis considers the suggestion that the convention ought just to be to 'bestow' certain meanings on certain sentences. His objection is that this is not a convention at all, because conventions are regularities in action and belief, and 'bestowing meaning' is neither.

So consider the following view: S means that q in the population P iff everyone in P believes that S is true iff q (relativizing to worlds, if one so likes), and the fact that everyone so believes is common knowledge in P. Why is this not a convention? Does it matter if it is a convention? When Lewis says, on p. 7 (166), that "It is a platitude—something only a philosopher would dream of denying—that there are conventions of language...", is that meant to rule out this kind of view?

27 September

Discussion Meeting

First short paper due

Meaning and Truth-Theory: The Foster Problem
30 September

John Foster, "Meaning and Truth-Theory", in G. Evans and J. McDowell, eds., Truth and Meaning: Essays in Semantics (Oxford: Oxford University Press, 1976), pp. 1-32 (PDF, DjVu)

You need only read sections 1-2, on pages 1-16, carefully. The discussion in section 3 concerns Davidson's "revised thesis", which we have not yet encountered, and section 4 contains Foster's emendation of Davidson's position, which, as we shall see, falls to a version of Foster's own objection to Davidson.

Do not worry if the details of the toy truth-theory described on pp. 12-3 are not clear to you. The point of all this will be made plain in the notes.

Show Reading Notes


Foster's paper is important for a certain sort of objection that it brings against what Foster calls "Davidson's Initial Thesis". As we shall see, the objection threatens rather more than just that.

In the first section of the paper, Foster attempts to motivate a certain sort of approach to philosophical questions about meaning that puts "theories of meaning" (in the sense in which Davidson uses that term) at the center. To some extent, this just reiterates the way Davidson himself motivates the principle of compositionality in "Theories of Meaning and Learnable Languages". As Foster initially motivates the proposal, the goal is to give an articulate statement of what competent speakers know about their languages, specifically, what they know about the meanings of words, and the significance of various modes of combination, which allows them to understand sentences they have not previously encountered.

However, Foster worries that the explicit statements of these principles—think, perhaps, of the semantic principles governing words like "every" and "some"—are not ones many speakers would even be able to understand. So Foster proposes to recast the original project: What we seek is a theory explicit knowledge of which would suffice for competence. Foster argues that this approach implicitly requires that the theory be 'scrutable'. He illustrates how this condition might be violated by a theory that, nonetheless, did assign the right meanings to the right sentences.

Would it be reasonable to interpret Foster as requiring that the meaning of a sentence should be effectively calculable from a 'structural description' of that sentence? Should we require something even stronger?

We shall soon turn to questions about the nature of semantic knowledge. But it is worth asking now what difference it might make whether we think of a semantic theory as, in some sense, known by competent speakers. Why might it be important to say that it is?

Foster then raises the question what philosophical significance theories of meaning might have. His answer is that "if we put the right constraints on what giving-the-meanings involves, then characterizing the general method by which such theories can be constructed and verified reveals what meaning-in-language really amounts to" (p. 4). That is: If we know (i) what a theory would need to tell us if it is to tell us what "Snow is white" means and (ii) what would make such a theory correct (or, at least, 'confirmed'), then we will have the answers to the philosophically interesting questions about the nature of (linguistic) meaning.

The argument Foster gives on pp. 4-6 is a version of one that has been given by many philosophers, including Davidson and Lewis. It concerns the question whether a translation manual could serve as a theory of meaning. What is Foster's argument that it cannot? How good is that argument?

It is in the second section that Foster criticizes Davidson's Initial Thesis. That is the thesis that a theory of meaning may take the form of a theory of truth: that it is enough to have a theory that, for every sentence S, yields a theorem of the form:

S is true iff p

where p is a translation of S, and so can serve to 'give the meaning' of S. This is, of course, the view put forward in "Truth and Meaning".

One way to think about the issue here is to focus on the following inference, which I'll call the 'T-M inference':

  1. S is true iff p
  2. So, S means that p

In general, such an inference is invalid: Davidson already notes in "Truth and Meaning" that "Snow is white" is true iff grass is green, but we should not want to conclude that "Snow is white" means that grass is green.

Foster emphasizes and elaborates this point on pp. 10-1. Davidson remarks in "Truth and Meaning" that his theory "works by giving necessary and sufficient conditions for the truth of every sentence, and to give truth conditions is a way of giving the meaning of a sentence" (p. 310). Foster argues that Davidson's theory does no such thing. The truth-condition of a sentence, Foster says, is what would have to be true for the sentence to be true. But the "iff" occurring in T-sentences is the material biconditional, so

(SW) "Snow is white" is true iff snow is white

only tells us that "Snow is white" is true iff snow is white. It does not tell us that "Snow is white" would have been false had snow not been white. In fact, (SW) has nothing whatsoever to say about the truth-value of "Snow is white" in any circumstances other than the actual circumstances.

But one might think that if we add some additional premise, then the T-M inference would be valid. And now the question becomes: What should that additional premise be? Davidson's original suggestion, on pp. 311-2, was that it is enough if we expand our view from the T-sentences themselves to the theory as a whole: If (1) issues from a theory which states correct T-sentences for all sentences of the language, then the inference goes through. I.e., we have something like:

  1. S is true iff p
  2. (1) issues from a truth-theory that implies correct T-sentences for all sentences of the language.
  3. So, S means that p

Foster's counter-example to this revised T-M inference (slightly simplified) is as follows. Waiving a lot of complications, let us imagine that the semantic clause for "is white" is:

(W1) "is white" is true of x iff x is white

Suppose this clause, which is of course true (at least in this simplified form), is part of a theory that issues in correct T-sentences for all sentences of the language. If we replace (W1) with some other clause, then that will change what T-sentences are implied by the theory. But so long as we replace (W1) with some other true clause, the implied T-sentences will still be true. And that, Foster notes, is easy to do:

(W2) "is white" is true of x iff (x is white and the earth moves)

This is still true, since the earth does indeed move. Since true axioms imply true theorems, the T-sentences we get if we replace (W1) with (W2) will still be true. And they will still be generated in a compositional way. But the T-sentence for "Snow is white" will now be:

(SW2) "Snow is white" is true iff snow is white and the earth moves

And so the revised T-M inference would lead to both of these conclusions:

  • "Snow is white" means that snow is white
  • "Snow is white" means that snow is white and the earth moves

And those two things cannot both be true.
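
Foster's trick is easy to replay mechanically. Here is a minimal sketch in Python, with invented toy extensions: the two clauses agree on every object (since the earth does move), so swapping one for the other preserves the truth of every T-sentence the theory generates, while changing which T-sentences get stated.

    earth_moves = True        # a standing truth, as in Foster's example
    white_things = {"snow"}   # toy extension of "is white"

    def w1(x):
        # (W1) "is white" is true of x iff x is white
        return x in white_things

    def w2(x):
        # (W2) "is white" is true of x iff (x is white and the earth moves)
        return x in white_things and earth_moves

    # The clauses never disagree, so both theories prove only true T-sentences:
    print(all(w1(x) == w2(x) for x in ["snow", "grass", "coal"]))  # True
    # But the T-sentences they prove for "Snow is white" differ:
    print('"Snow is white" is true iff snow is white')                      # via (W1)
    print('"Snow is white" is true iff snow is white and the earth moves')  # via (W2)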

We can pull a similar trick even for sentential connectives. What might a Foster-style clause for "and" look like? What is the lesson of that clause for theories of meaning? (Foster does discuss this, very briefly, on p. 16.)

At the end of §II, Foster considers the suggestion that this problem would not arise if we were dealing with a language that contained intensional constructions, such as "X believes that p". The thought is that theories with (W1) and (W2) would then generate T-sentences like the following:

  • "Fred believes that snow is white" is true iff Fred believes that snow is white
  • "Fred believes that snow is white" is true iff Fred believes that (snow is white and the earth moves)

And the second might not even be true (and we can definitely get a false case if we choose the extra conjunct appropriately). Foster argues, however, that this is not the right response to the problem he has raised. What are his reasons? Are there other reasons that might be given as well or instead?

Foster proposes, in the later parts of the paper, to resolve the 'Foster problem' by strengthening (SW) to:

(SWN) Necessarily: "Snow is white" is true iff snow is white

or, equivalently:

(SWW) "Snow is white" is true, in a possible world w, iff snow is white in w

As we shall see, this arguably falls to a version of Foster's own objection. Can you see how? (There are possible replies to this version of the objection, if one is prepared to accept that all necessarily equivalent sentences have the same meaning.)

2 October

Donald Davidson, "Radical Interpretation", Dialectica 27 (1973), pp. 314-328 (also in Inquiries, pp. 125-39) (PhilPapers, PDF, DjVu, Wiley Online)

Show Reading Notes

Related Readings

  • Donald Davidson, "Reply to Foster", in Truth and Meaning, pp. 33-42, and reprinted in Inquiries, pp. 171-9 (PDF, DjVu)
    ➢ Davidson's reply, originally published in the same volume as Foster's paper. Most of it concentrates on Foster's criticism of Davidson's 'revised thesis', which is the view being presented in this paper.
  • David Lewis, "Radical Interpretation", Synthese 23 (1974), pp. 331-44 (PhilPapers)
    ➢ A critique of and alternative to Davidson's approach.

Recall that, in "Truth and Meaning", Davidson observed that both of these "T-sentences" are correct:

  1. "Snow is white" is true iff snow is white
  2. "Snow is white" is true iff grass is green

But, whereas the former seems in some sense to 'give the meaning' of the sentence "Snow is white", the latter does not. So it cannot, in general, be true that a correct T-sentence is also meaning-giving. Nor, as Foster pointed out, is it enough simply to require that these T-sentences be compositionally generated, since

  1. "Snow is white" is true iff snow is white and grass is green

can be compositionally derived. (Of course, we're ignoring all the actual complexities here.) As it happens, Davidson had discovered the "Foster problem" for himself, and "Radical Interpretation" is (among other things) his attempt to solve it. (He also addresses it in "Reply To Foster", which is listed as optional.)

Davidson mentions in the paper a notion he calls satisfaction. This notion comes from Tarski, and it is a purely technical tool. It is just what we call, in logic, "truth under an assignment of objects to variables". So if we have a formula, say "x > y", where ">" means what it usually does, then we would say, in Phil 0640, that the formula is true when "x" is assigned 5 and "y" is assigned 3, but false if "x" is assigned 5 and "y" is assigned 7. Tarski simplifies this (in one sense) by thinking of the variables as coming in some fixed order, which is easiest if we think of them as being x1, x2, x3, etc. Then, instead of our needing to assign objects to variables, we can think of ourselves as just being given a sequence of objects, e.g., [5,3,4,6,...]. The first thing in the sequence goes with x1, the second thing with x2, etc. These sequences are sometimes thought of as infinite, i.e., as assigning objects to all the variables, even though only finitely many variables will ever be involved at any one time.

So Tarski would say: The sequence [5,3,4,6,...] satisfies the formula "x1 > x2", but the sequence [5,7,4,6,...] does not satisfy the formula. The reason this notion is important is just because of the quantifiers: To say what it is for "∃x1∃x2(x1 > x2)" to be true, we need to be able to talk about when "x1 > x2" is true, for different assignments to the variables; i.e., we need to be able to talk about satisfaction.
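
Here is a minimal sketch, in Python, of satisfaction for this little fragment; the encoding of formulas as tuples is my own. Sequences are modeled as dicts from variable indices to objects, since only finitely many variables ever matter at once.

    def satisfies(seq, formula, domain):
        kind = formula[0]
        if kind == "gt":        # ("gt", i, j) encodes "xi > xj"
            _, i, j = formula
            return seq[i] > seq[j]
        if kind == "exists":    # ("exists", i, phi) encodes "there is an xi such that phi"
            _, i, phi = formula
            # Try every way of reassigning xi, leaving the rest of the sequence alone.
            return any(satisfies({**seq, i: d}, phi, domain) for d in domain)
        raise ValueError(kind)

    domain = range(10)
    print(satisfies({1: 5, 2: 3}, ("gt", 1, 2), domain))  # True: [5,3,...] satisfies "x1 > x2"
    print(satisfies({1: 5, 2: 7}, ("gt", 1, 2), domain))  # False: [5,7,...] does not
    # A sentence (no free variables) is satisfied by every sequence or by none:
    print(satisfies({}, ("exists", 1, ("exists", 2, ("gt", 1, 2))), domain))  # True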

At the beginning of the paper, Davidson elaborates an approach to questions about meaning. He takes the central issue to be: What could we know that would allow us to interpret (understand) the utterances of other people? How could we come to know that? As he emphasizes, these are hypothetical rather than empirical questions.

It's obvious why the empirical question how we do manage to understand 'novel' sentences is worth asking. What's the interest of Davidson's 'doubly hypothetical' question? Could the answer to it explain our ability to understand novel sentences? In what sense? Another important question, which we'll spend significant time discussing shortly, is: What notion of knowledge does Davidson have in mind here when he denies that ordinary speakers must know such things? What notion should one have in mind?

What does Davidson mean when he writes: "All understanding of the speech of another involves radical interpretation"? How does that relate to his claim that the question in which he's interested is 'doubly hypothetical'?

Davidson proceeds to eliminate a number of possible approaches and to elaborate other aspects of the problem, such as the need to account for the unbounded capacity to interpret novel utterances (as discussed in the other papers of Davidson's we have read). He emphasizes that we need not think of a theory of interpretation as defining a function from utterances (or, in the simplest case, sentences) to 'meanings' or 'interpretations', contra Lewis, but instead simply as enabling someone who knew the theory to 'interpret' the language in question, that is, to understand it (and, presumably, to speak it, though Davidson seems most focused on the case of language comprehension).

On p. 316 (128, in the reprint), Davidson claims that "a theory of interpretation" must be able to "be supported or verified by evidence plausibly available to an interpreter". But he gives no explicit reason for this claim. What might he have in mind? How does this relate to the 'doubly hypothetical' character of his investigation? (Davidson will come back to the other claims he makes here.)

On pp. 316-7 (128-30), Davidson considers Quine's suggestion that such a theory may take the form of a translation between the 'new' language and one's own. What is Davidson's objection to this approach? How good is it?

Unsurprisingly, given Davidson's earlier writings, the alternative he suggests is a theory of truth that issues in T-sentences like:

  1. "Es regnet" is true at time t iff it is raining at t.

I.e., his claim is that, if you knew a theory of truth for (say) German, then you could use that theory to interpret (speak and understand) German.

On p. 319 (131), Davidson lists three questions about this approach. The first, whether a theory of truth can actually be given for a natural language, will not concern us (though it does much concern people who work on natural language semantics). The second and third questions are the crucial ones philosophically.

The second question is: Can a theory of truth be verified by evidence plausibly available to a radical interpreter? Note that the question here is simply whether we can decide, on the basis of such evidence, whether the T-sentences the theory implies are true—so we are not yet trying to distinguish (1) from (2). The question is especially pressing because Davidson has a very austere conception of what evidence is available to the radical interpreter. It amounts to facts about what sentences are 'held true' under what sorts of circumstances. So, roughly, if Kurt tends to 'hold true' the sentence "Es regnet" when it is raining, that is some evidence for (4).

Contra Grice and Strawson, Davidson denies the radical interpreter access to "finely discriminated" facts about beliefs and intentions. (He seems to have in mind any mental states more finely discriminated than that a given sentence is true: see p. 322 (135).) He argues for this restriction first on p. 315 (127), and then states a general restriction on the possible evidence on p. 316 (128): "[I]t must be evidence that can be stated without essential use of such linguistic concepts as meaning, interpretation, synonymy and the like". What is Davidson's argument for this restriction? How good is it?

How much of an argument does Davidson offer for an affirmative answer to his second question? It might help to imagine that we have to deal only with a very simple language, say one with just ten proper names, a few one-place predicates, a few two-place predicates, and (perhaps) the propositional connectives. (As we'll see shortly, Gareth Evans discusses this sort of simple language.)

The third question is: Could such a theory, if known to be justified by such evidence, be used to interpret the target language? It is here, on pp. 325-6 (138-9) that the Foster problem rears its head. It might seem plausible that a truth-theory will allow you to interpret German if the theory proves (4), but not so much if it proves

  1. "Es regnet" is true at time t iff it is raining at t and there are everywhere continuous real-valued functions that are nowhere differentiable.

even though such a T-sentence is still true. Hence, not just any correct theory of truth will suffice for interpretation. A theory of truth that is 'adequate for interpretation' must thus meet some additional condition. The additional condition is supposed to be, precisely, that the theory is verifiable on the basis of the sort of evidence that is available to a radical interpreter. What Davidson is proposing is thus that the inference from T-sentences to meanings might be understood as follows:

  1. "Schnee ist weiß" is true iff snow is white.
  2. The T-sentence (i) is a theorem of a truth-theory that is verifiable on the basis of the sort of evidence that is available to a radical interpreter.
  3. So, "Schnee ist weiß" means that snow is white.

Another way to think of this is that the inference from (i) to (iii) goes through so long as one's justification for (i) is of the kind described by (ii). I.e., we treat (ii) more as a background, 'enabling' condition for the inference than as an additional premise.

What Davidson means by a 'canonical proof' is one that derives the T-sentence in a particularly direct way. (We can define precisely what that means but will not try to do so here.) Some such restriction is necessary because any theory of truth will, in fact, prove lots of T-sentences for any given sentence. For suppose that the theory proves "S is true iff p". Then if q is any other statement the theory proves, it will also prove "S is true iff (p and q)". (Convince yourself of that.) So, really, we are interested only in the 'canonically proven' T-sentences. Those are the ones that are supposed to be 'interpretive'. Still, Foster's objection shows that this cannot, by itself, resolve the problem posed by (4) and (5): If there is a correct theory of truth that canonically proves (4)—that is, one whose canonically proven T-sentences are all true—then there is also a correct theory that canonically proves (5).
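
The parenthetical 'convince yourself' claim is easy to verify by brute force. Here is a quick check in Python that, on every assignment of truth-values, if "S is true iff p" holds and q holds, then "S is true iff (p and q)" holds as well.

    from itertools import product

    ok = all(s == (p and q)
             for s, p, q in product([True, False], repeat=3)
             if (s == p) and q)
    print(ok)  # True: conjoining any further theorem preserves the T-sentence's truth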

Davidson's argument that the inference described is valid (or usually valid, or something in that vicinity) is contained in the final two paragraphs of the paper. What is the argument? One very important question, which Davidson does not really address, is: What is it to 'interpret' a sentence anyway?

4 October

Scott Soames, "Truth, Meaning, and Understanding", Philosophical Studies 65 (1992), pp. 17-35 (PhilPapers, PDF, DjVu, JSTOR)

First short paper returned

You need only read pp. 17-29 carefully. The last couple pages sketch an alternative to the Davidsonian picture that we shall not consider in any detail.

Show Reading Notes

Related Readings

  • Scott Soames, "Semantics and Semantic Competence", Philosophical Perspectives, Vol. 3, Philosophy of Mind and Action Theory (1989), pp. 575-596 (PhilPapers)
    ➢ An earlier paper that raises many of the same issues as the one we are reading.
  • Scott Soames, "Linguistics and Psychology", Linguistics and Philosophy 7 (1984), pp. 155-79 (PhilPapers)
    ➢ Argues against Chomsky's view that linguistics is a branch of cognitive psychology.
  • Scott Soames, "Semantics and Psychology", in J. Katz, ed., The Philosophy of Linguistics (New York: Oxford University Press, 1985), pp. 204-26 (PhilPapers, PDF, DjVU)
    ➢ Argues that semantic theories are not theories of speakers' knowledge.

Our main interest here is in Soames's re-articulation of the Foster problem, on pp. 19-25, and his criticism of Davidson's response to it, on pp. 25-9. In between, on p. 25 itself, are two paragraphs (and two very long footnotes) about James Higginbotham's view, which we shall read next. (The two papers were delivered as part of a symposium.) Since we have not yet read Higginbotham's paper, you need not worry about that material now, but you should come back to it after you do read Higginbotham's paper.

Soames begins with a very nice formulation of the motivation for Davidson's approach: "[I]f a theory tells us everything we need to know in order to understand a language, then it must be counted as specifying all essential facts about meaning, even if it does not issue in theorems of the form 'S' means in L that p, which state the meanings of individual sentences one by one" (pp. 17-8).

Soames claims that there are basically two ways to interpret the claim that a theory of truth may "serve as" a theory of meaning.

  1. Knowledge of what an interpretive (or, as Soames puts it, "translational") truth-theory states is sufficient for understanding.
  2. Knowledge of what a truth-theory states is necessary for understanding.

The latter view comes in two forms, as well: that knowledge of everything the theory states is necessary for understanding, or that the theory states everything knowledge of which is necessary for understanding.

Soames argues on pp. 19-21 that all forms of the sufficiency view fall to a form of the Foster problem. The crucial argument is the one that surrounds (3) on p. 21. Can you summarize this argument in your own words? How might Davidson respond?

Note that Soames's example does not just threaten Davidson's view but also views that would strengthen T-sentences in the way suggested by Foster:

"Snow is white" is true in world w iff snow is white in w.

Or, alternatively, by requiring the "iff" to be not material but 'strict', so that the equivalence is necessary. The point is that we can make the same move Foster makes but make the extra conjunct itself be some necessary truth. That is why Soames claims that even knowledge of truth-conditions is insufficient for knowledge of meaning.

Soames is a bit sloppy here, using "arithmetic is incomplete" as his stock example of an English sentence that is necessarily true. Arithmetic itself is not 'incomplete'. Rather, formal theories of arithmetic, satisfying certain sorts of conditions, are always incomplete. But, of course, the point does not matter here.

The argument against the first form of the necessity view (on pp. 22-3) has two parts:

  (i) Knowledge of the compositional axioms, etc., does not seem necessary for understanding, since ordinary people do not have such knowledge. (What Soames has in mind here are, e.g., the axioms that govern quantifiers, or attributive adjectives, or whatever.)
  (ii) Even a totally trivial theory would state things necessary for understanding.
  • Concerning (i), could one grant the point and suggest, nonetheless, that knowledge of the T-sentences is necessary for understanding, so that a theory of truth would at least 'serve as a theory of meaning' in that sense? What would be lost if one went that route? Alternatively, might there be a way of insisting nonetheless that ordinary speakers do know the axioms that govern quantifiers, etc.?
  • Concerning (ii), could one reply that even a trivial theory will at least say something about meaning even if it does not tell us everything there is to know about meaning? Theories of truth, on this view, are attempts to state things knowledge of which is necessary for understanding. Is it a threat if some otherwise correct theories do not go very far in this direction?

The argument against the second form of the necessity view is that the Foster problem applies to it, as well. The central claim here is that no truth theory can ever state everything knowledge of which is necessary for understanding, since knowledge of the truth-theory is compatible with false beliefs about what sentences mean.

Here again, see if you can sketch Soames's argument in your own words.

All that might make one wonder whether the second form of the necessity view is really very different from the sufficiency view. What is the difference between saying that knowledge of what some theory T states is sufficient for understanding and saying that T states everything knowledge of which is necessary for understanding? If you know everything necessary for understanding, won’t you then understand? If there's a gap there, is it one that will matter for this argument?

On p. 25, Soames begins consideration of Davidson's response to the Foster problem. He interprets Davidson as holding that, if one adds to one's knowledge of a truth-theory the knowledge that the theory is "translational"—i.e., that the sentences that appear on the right-hand sides of the T-sentences it generates translate the sentences mentioned on the left-hand sides—then that will be enough information to allow one to draw inferences about what the various sentences mean. He formulates this, on pp. 27-8, as a six-step argument. Soames finds almost every step of the argument to be problematic.

  (i) Soames notes that the compositional structure of the theory plays no role. Was it supposed to?
  (ii) One of Soames's objections is that truth essentially plays no role in the argument. What role was it meant to play in Davidson's account?
  (iii) Might it help to replace step (1) by "The following is a translational truth-theory for Maria's fragment of Italian: ...", where the dots are filled by the theory? The thought is that, in that case, step (2) is unnecessary, and we can move directly to step (3).
  (iv) Alternatively, could one skip step (3) and reformulate step (4) so that it did not mention T-sentences?
  (v) Regarding the move from step (3) to step (4), Soames is of course correct that any such theory will imply lots of T-sentences for any given sentence. (This is just because "p ≡ q" and "r" always imply "p ≡ (q & r)". So you can always conjoin any theorem of the theory to the right-hand side of any of the T-sentences it proves, and that will still be a theorem.) But any such view will have to deal with this kind of problem, i.e., find some way to specify what's meant by the 'canonical' T-sentence that the theory proves for a given sentence. There are a variety of ways to do that.

Does that take care of Soames's objections? Or are there others not affected by these reformulations?

The sort of objection mentioned at (iii) above is what I call a 'fiddly technical objection'. Such objections can sometimes reveal serious problems with a view, so they cannot just be dismissed. But they tend to have 'fiddly technical answers'—which is to say that they often do not go very deep.

Here's a different sort of view we might consider. One way to think of the Foster problem is as raising the question what additional premise might permit the inference from "S is true iff p" to "S means that p". As Soames interprets Davidson, and as Davidson states his view in "Radical Interpretation", the idea is that the additional premise is that the T-sentence is a theorem of a truth-theory that is verifiable on the basis of the sort of evidence that is available to a radical interpreter. So what's needed is knowledge about how the theory can be justified. The problem is that, if we think of it that way, then we have to explain how one can get from (i) knowledge of a T-sentence and (ii) knowledge of how one might justify a theory that proves that T-sentence to (iii) a claim about what the sentence means. And that looks complicated. This complication is what is driving Soames's objection to the six-step argument.

But if we're thinking of the theory as one that someone is actually using to interpret other speakers, then we do not necessarily have to suppose that they have knowledge about how the theory is justified. Rather, we could say that they will be able to make the truth-to-meaning inference so long as their knowledge of the T-sentence is in fact justified in a certain way, namely, on the basis of the sort of evidence that is available to them as a radical interpreter. The thought is that the mode of justification for the T-sentence is working as a kind of background, 'enabling' condition for the inference.

One might compare this to so-called 'rules of proof' in certain sorts of logical theories, such as modal logic. In such systems, there is an inference rule of the following form:

A.
So, it is a necessary truth that A.

Obviously, such an inference is invalid, in general. But, in such systems, the inference is permitted only when A is itself a theorem, i.e., if it has been justified in a certain way (namely, by means of a proof with no undischarged assumptions).
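
To make the analogy vivid, here is a minimal sketch, in Python, of a toy proof system of my own devising, in which necessitation checks the pedigree of its premise rather than taking an extra premise.

    class Step:
        def __init__(self, formula, is_theorem):
            self.formula = formula
            self.is_theorem = is_theorem  # proved with no undischarged assumptions?

    def necessitate(step):
        # Rule of proof: from A, infer "Necessarily A", but only when A is
        # itself a theorem; how A was justified gates the inference.
        if not step.is_theorem:
            raise ValueError("necessitation applies only to theorems")
        return Step("Necessarily(" + step.formula + ")", True)

    print(necessitate(Step("p -> p", is_theorem=True)).formula)  # Necessarily(p -> p)
    # necessitate(Step("p", is_theorem=False)) would raise, just as a merely
    # true T-sentence, justified the wrong way, is not meaning-giving.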

How much help (or how little) might all that be to Davidson?

7 October

James Higginbotham, "Truth and Understanding", Philosophical Studies 65 (1992), pp. 3-16 (PhilPapers, PDF, DjVu, JSTOR)

You should focus on sections 1 and 2 (pp. 3-10). We will not discuss section 3 directly.

Higginbotham was both a philosopher and a linguist, and he made important contributions to both fields.

Show Reading Notes

Related Readings

  • Richard Larson and Gabriel Segal, Knowledge of Meaning: An Introduction to Semantic Theory (Cambridge MA: MIT Press, 1995), Chs. 1-2
    ➢ Develops an approach different from, but similar to and inspired by, Higginbotham's. Most of the book is devoted to developing formal semantic theories that accord with those ideas.
  • James Higginbotham, "Knowledge of Reference", in A. George, ed., Reflections on Chomsky (Oxford: Basil Blackwell, 1989), pp. 153-74 (DjVu, PDF)
    ➢ More on knowledge of reference.
  • James Higginbotham and Robert May, "Questions, Quantifiers and Crossing", Linguistic Review 1 (1981), pp. 41-80 (PhilPapers)
    ➢ Maybe Higginbotham's most important contribution to linguistics.
  • Ernest Lepore and Barry Loewer, "What Davidson Should Have Said", in E. Villanueva, ed., Information, Semantics and Epistemology (Cambridge: Blackwell, 1990), pp. 190-9 (PDF, DjVu)
    ➢ Discussed by Higginbotham, this paper attempts to solve the Foster problem on Davidson's behalf.
  • Ernest Lepore and Barry Loewer, "Translational Semantics", Synthese 48 (1981), pp. 121-33 (PhilPapers)
    ➢ Mentioned below.

This is an extremely difficult paper—maybe the hardest paper we will read. Part of the difficulty is that the dialectical structure of the paper is very complicated. I'll provide guidance by outlining the paper and asking some questions along the way. Note that by ¶1, I mean the first full paragraph on a page; ¶0 is the paragraph continuing from the previous page, if any.

In section 1, Higginbotham introduces his own account of the relationship between truth and meaning. It is perhaps worth emphasizing that Higginbotham is looking for an account of what meaning is that is "in harmony with semantic theory as it is actually practiced" (p. 3).

On pp. 3-4¶2, Higginbotham quickly sketches an account of why reference is essential to the theory of meaning. This essentially summarizes the arguments of "Truth and Meaning". (I.e., reference is the key to compositionality.) The rest of p. 4 quickly introduces the Foster problem, arguing that no appeal to structure (as Davidson had suggested in "Truth and Meaning") can solve it. The basic problem is that, so long as different words can have the same reference, there will be different sentences with the same content. But speakers can, in principle, 'respond' to those sentences differently.

In the rest of section 1 (pp. 5-6), Higginbotham articulates a positive conception of what role a theory of truth might play in an account of a speaker's linguistic competence, in particular, of their knowledge of meaning. He proposes to regard knowledge of meaning as knowledge of facts about reference. E.g., someone's understanding the sentence "Alex runs" would be regarded as consisting in their knowledge that "Alex" refers to Alex, that "runs" is true of things that run, and so that "Alex runs" is true iff Alex runs. But that is not quite all. The thought is that these are facts one not only knows but also expects other speakers to know, somewhat the way that Lewis regards conventions as commonly known. Thus:

From this point of view, meaning does not reduce to reference, but knowledge of meaning reduces to the norms of knowledge of reference [that is, to facts about what one is supposed to know about reference]. Such norms are iterated, because knowledge of meaning requires knowledge of what others know, including what they know about one's own knowledge. To a first approximation, the meaning of an expression is what you are expected, simply as a speaker, to know about its reference. (p. 5)

Higginbotham does not here provide much motivation for this view. As he notes in footnote 6, however, there is an obvious link to notions like overtness that figure prominently in Grice and Lewis.

"[K]nowledge of meaning reduces to the norms of knowledge of reference." What precisely does that mean? What norms does Higginbotham have in mind?

How might one's own knowledge of T-sentences, and ability to rely upon others' knowing them, figure in an account of interpretation and communication? Take a simple case: Sam says to Toni, "Alex runs", in an effort to communicate that Alex runs. How, on Higginbotham's account, might this lead Toni to believe that Alex runs? How does it compare to the way that Lewis uses common knowledge in his account of conventions of truthfulness and trust and their role in communication?

What is most distinctive of this view, compared with others we have seen, is that it is unapologetically psychological: It is a view about what competent speakers actually know that allows them to speak and understand their language. Note, however, that some such knowledge is, as Higginbotham remarks at the bottom of p. 5, "tacit": These are not things one knows explicitly, but only sub-consciously. We'll spend a good deal of time, shortly, talking about this notion (which plays a major role in linguistic theory and in other parts of cognitive science).

In section 2, Higginbotham argues against certain other types of solutions to the Foster problem, and then argues that his own view does solve it. The first page or so rehearses Soames's way of formulating the problem. The question is: Why isn't knowledge of the truth-conditions of "Firenze è una bella città" compatible with false beliefs about what it means?

At p. 7¶1, Higginbotham distinguishes two sorts of responses to the Foster problem, very briefly articulating the "immanent" response, but then turning his attention to the "transcendent" response. He'll return to the 'immanent' response later. (This is one of the places the structure of the paper is most confusing.)

The transcendent response is discussed from p. 7¶2 through p. 8¶1. The idea is to deny that there are any facts about meaning beyond what would be apparent to a radical interpreter. That is: If there really is a difference of meaning of the sort on which the Foster problem rests (e.g., between "Snow is white" and "Snow is white and arithmetic is incomplete"), then that difference will have to be one that "disrupt[s] communication". If it does not, then the theories do not really differ in any way that matters.

The criticism is that, insofar as that seems plausible, it is because we are appealing to facts about the contents of speech acts, in which case such an account "swallows meaning whole" (p. 8). One might see this line of thought as continuous with Strawson's: What I need to know, on this account, to decide what theory of truth is correct for Gianni, is what his communicative intentions are when he speaks to me. (See esp. p. 8, ¶1: "...I guessed at Gianni's meaning based upon my beliefs about what he would be likely to be interested in telling me".) But if we have access to facts about communicative intentions, then we already know quite a lot about meaning. (And, of course, Davidson explicitly denies that he wants to appeal to such facts.)

How would Davidson respond to this argument? One option would be to insist that there are facts that would be apparent to a radical interpreter that would distinguish the two theories in question. What might those facts be? (Remember, again, Davidson's insistence that the radical interpreter does not have access to 'fine grained' facts about beliefs and intentions.)

The other option would, indeed, be to deny that there is any significant difference between the two theories. That makes the transcendent response essentially a version of Quine's thesis of the indeterminacy of translation, for which he argues in Word and Object. Davidson endorses a version of this claim at the very end of "Radical Interpretation". But one might wonder just how much indeterminacy it is possible to tolerate. Can we simply accept that there is no difference of meaning of the sort Soames and Foster are claiming?

Higginbotham criticizes the immanent response from p. 8¶2 through p. 9¶0. This part of the argument is a bit easier to understand. What is the argument?

Lepore and Loewer would definitely reject the claim that their response to the Foster problem "amounts to the thesis that translation into one's own speech...is sufficient for understanding..." (pp. 8-9). See their paper "Translational Semantics", listed as optional. But Higginbotham is not claiming that this is explicitly their view, only that their arguments commit them to it.

At p. 9¶2, Higginbotham explains his own response to the Foster problem: The account outlined in section 1 is immune to it. On his account, we are actually to think of Gianni as knowing that "Firenze è una bella città" is true iff Florence is a beautiful city, and that this is what others also know and expect one another to know. This is of course an empirical fact (assuming it is a fact). But, Higginbotham is insisting, knowing that is different from 'knowing' that "Firenze è una bella città" is true iff Florence is a beautiful city and arithmetic is incomplete, and that this is what others also know and expect one another to know.

The crucial question here is parallel to one asked above: How would 'knowing' one of these things rather than the other affect one's use of language to communicate beliefs to other people, and to acquire new beliefs from them? That is: How might such knowledge be put to use in communication? The key to answering this question is implicit in Higginbotham's remark that "Statements of truth conditions that go beyond these bounds are irrelevant to understanding, resting as it does on common knowledge, and so irrelevant to meaning as well" (p. 10).

In the remainder of the section, Higginbotham briefly explores the role that knowledge, and common knowledge, play in his account. He remarks that "...the theory of truth...is not something that one starts with, augmenting it with conditions or constraints so as to make it acceptable as a theory of meaning. Rather, truth comes in as something [a competent speaker] knows about, and the deliverances of the theory are of interest only insofar as knowledge of them is part of [their] linguistic competence" (p. 10). This echoes Higginbotham's earlier remark that his account "makes use of the information that a person tacitly possesses about the truth conditions of her own utterances" (p. 5).

There is an implicit criticism of Davidson here, and an attempt to re-orient the focus of the theory of meaning (that is, of semantics). What is the criticism, and what is the new focus supposed to be?

9 October

Ian Rumfitt, "Truth Conditions and Communication", Mind 104 (1995), pp. 827-62 (PhilPapers, PDF, DjVu, JSTOR)

This is a(nother) long and difficult paper, but you really only need to read the introduction and Part I, on pp. 827-44. The rest is well worth reading, and I definitely suggest that you at least skim it. But we obviously cannot discuss the whole of this paper in one class (and I certainly do not expect you to work carefully through all 36 pages).

Show Reading Notes

Related Readings

  • Richard Kimberly Heck, "Meaning and Truth-Conditions", in D. Griemann and G. Siegwart, eds., Truth and Speech Acts: Studies in the Philosophy of Language (New York: Routledge, 2007), pp. 349-76; originally published under the name "Richard G. Heck, Jr." (PDF)
    ➢ Develops a view similar to Rumfitt's.
  • Hartry Field, "Tarski's Theory of Truth", Journal of Philosophy (1972), pp. 347-75 (PhilPapers)
    ➢ An early paper on the philosophical significance of Tarski's theory of truth, with a viewpoint very different from Davidson's.
  • John McDowell, "Meaning, Communication, and Knowledge", in Z. van Straaten, ed., Philosophical Subjects (Oxford: Oxford University Press), 1980; reprinted in McDowell's Meaning, Knowledge, and Reality (Cambridge MA: Harvard University Press, 1998), pp. 29-50 (PhilPapers, DjVu, PDF)
    ➢ Another important discussion of this same set of issues.
  • Sir Michael Dummett, "Language and Communication", in his Seas of Language (Oxford: Clarendon Press, 1996), pp. 166-87 (DjVu, PDF)
    ➢ Another discussion of Strawson's paper.
  • H.P. Grice, "Utterer’s Meaning and Intentions", Philosophical Review 78 (1969), pp. 147-77 (PhilPapers)
    ➢ An important paper in Grice's development of his ideas about meaning. Discussed by Rumfitt.

You will recall that Strawson claimed that 'formal semanticists', such as Davidson, could give no reasonable account of the notion of expressing a belief, without appealing to the notion of a communicative intention. Rumfitt here means to answer Strawson, though he is going to argue that, to explain that notion (or one in the same vicinity), we need both the sorts of resources typically deployed by Griceans and the notion of truth-conditions. Rumfitt's focus, for our purposes, is on the notion of expressing a thought. This is something one does not only when one makes an assertion, but even when one makes a conjecture or a supposition.

The first two sections mostly summarize material we have already discussed. But there are a couple of new points. In §II, Rumfitt argues, on Strawson's behalf, that the notion of truth, as it appears in the T-sentences that are supposed to 'give meaning', cannot be one that is simply defined in a certain way. This kind of point has a long history in the literature on Tarski. (See e.g. the paper by Hartry Field listed as an optional reading.) As Rumfitt says, Davidson does not (by 1990, anyway) disagree, so we need not worry too much about this point, a version of which we have already seen in Strawson himself.

The short version of this point is that if truth is defined by a Tarski-style recursive definition, then T-sentences like:
(*) "Snow is white" is true iff snow is white
turn out to be things we can prove mathematically. But the ordinary version of (*) is simply not a theorem of mathematics: You cannot find out what "Snow is white" means by doing math.
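
For concreteness, here is a toy version of the point in Python (a sketch of my own; the mini-language and its clauses are invented, and this is not Tarski's or Rumfitt's own apparatus). Once the recursive clauses are stipulated, the T-sentence for any sentence of the language can be verified by pure calculation:

    # A toy recursive truth definition. The clauses are stipulated, so
    # every T-sentence the definition yields is a consequence of the
    # definition itself, checkable by computation alone.
    facts = {"Snow is white": True, "Grass is green": True}

    def true_in(sentence, facts):
        # Naive parsing: conjunction binds loosest, then negation,
        # then the base clause for atomic sentences.
        if " and " in sentence:
            left, right = sentence.split(" and ", 1)
            return true_in(left, facts) and true_in(right, facts)
        if sentence.startswith("not "):
            return not true_in(sentence[4:], facts)
        return facts[sentence]

    # The T-sentence for a complex sentence holds as a matter of
    # calculation, not as a discovery about what the sentence means:
    assert true_in("Snow is white and not Grass is green", facts) == (
        facts["Snow is white"] and not facts["Grass is green"]
    )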

It is also important to note that it is simply not clear what kind of 'account' of truth is needed here, if any. As Rumfitt notes, Frege famously insisted that truth is not definable at all, and Davidson once wrote a paper titled "The Folly of Trying to Define Truth" (PhilPapers). Perhaps we can simply treat truth as 'primitive'. Or perhaps the most we can hope is to explain how truth is related to certain other equally basic concepts: meaning, belief, and the like.

In §III, Rumfitt considers whether Strawson is right that the notion of assertion, and that of the content of an assertion, can be explained in terms of communicative intentions. The question that Rumfitt presses is what, precisely, my communicative intention is supposed to be when I assert, say, that (my cat) Lily is asleep. We've discussed this question before, and Rumfitt recounts some of the back and forth that led to the idea that my intention is to get my audience to believe that I believe that Lily is asleep, not to believe that Lily is asleep. But Rumfitt argues that there are problems with that view, too, and suggests at the beginning of §IV that there is not going to be a satisfying resolution to this problem: There just isn't any intention that one must always have when asserting that Lily is asleep.

Formulate, as clearly and precisely as you can, Rumfitt's argument against Gricean analyses of assertion. Explain how that argument would equally threaten views that take communicative intentions to involve knowledge.

Why not just say that the intention constitutive of asserting that Lily is asleep is: that one intends to express the thought that Lily is asleep?

For additional discussion of accounts of assertion, see the SEP article thereon.

What Rumfitt suggests is that we should abandon the attempt to explain in such terms what assertion is and attempt instead to explain a more general notion: that of expressing a thought, e.g., the thought that Lily is asleep. The key idea is to appeal to the structure of a speaker's reasons for speaking as they do, and to try to extract from those reasons an understanding of what it is to express a thought. The main point in this section is that, whatever the speaker's reasons are, they will always have to involve (and, in a certain sense, terminate in) the speaker's reasons for uttering a particular sentence. This is what is known as a 'basic action': It is something one can 'just do'; one does not have to do it by doing anything else. (By contrast, Rumfitt is claiming, expressing a thought is not something I can 'just do'. I have to do it by uttering a sentence and, indeed, by choosing a particular sentence which will express that thought in the context in which I find myself.)

There are some complex issues about the nature of action in the background here. To utter a sentence, I have to move my lips and tongue, etc., in a certain way, and contract and relax my vocal cords. But these are not things I can do as such: They happen when I speak, but they are not directly under my intentional control. (Compare: To type these words, the muscles in my fingers have to move in certain ways; but the very specific ways in which they move are not something I directly control.) See the SEP article on action for further information.

Formulate, as clearly and precisely as you can, Rumfitt's argument that our reason for speaking must always involve reasons to utter a particular sentence. What would speaking be like if that were not so?

As Rumfitt remarks, there is no assumption here that people consciously go through the kind of reasoning he discusses. The point of his reconstructions is to allow us to get a sense for what people's reasons actually are, and how our beliefs and desires lead us to perform the linguistic actions we do. This is a common and relatively uncontroversial way to think about reasons for action. Suppose I walk across the room and pick up the eraser. One might ask why I did so. The answer might be that I wanted to erase something on the board, and I wanted to use the eraser to do so (rather than my hand, or my shirt). I need not consciously have thought, "Oh, I should use the eraser rather than my shirt" for that to be a correct answer to the question why I did what I did—even in a really strict sense according to which my 'reasons' must also be causes of my action.

Rumfitt's analysis is developed in §V. The key thought here is that the speaker's reasons for uttering a given sentence will, in typical cases, include something that establishes a link between that sentence and a certain proposition that they mean to express. (Maybe one just wants to hear that particular sound, but that is not the typical case.) T-sentences are then supposed to make this connection between a sentence and the proposition one expresses.

We can pose the central question this way. Consider the (A) syllogism on p. 838. Why will the students learn that the Battle of Waterloo was fought in 1815 if the instructor utters the sentence "The Battle of Waterloo was fought in 1815" (that sentence is the one dubbed S0)? The connection between the sentence and what the students learn is left out of this account of the instructor's reasons for speaking as they do. Rumfitt's idea is that we can fill in the missing details as follows:

  (i) The students know that an utterance of "The Battle of Waterloo was fought in 1815" is true iff the Battle of Waterloo was fought in 1815. (Why? Because they understand English.)
  (ii) The students will regard my utterance as true. (Why? Because I'm their teacher, and they trust me to speak the truth about such matters.)

If I utter "The Battle of Waterloo was fought in 1815", then I can expect the students to use (i) and (ii) to reach the conclusion that the Battle of Waterloo was fought in 1815. So it is rational for me to utter that sentence, if my goal is to get them to believe that the Battle of Waterloo was fought in 1815.

Something similar applies in the case of the (B) syllogism. Fill in the details.

So the idea is: What thought someone expresses when uttering a given sentence is determined by the T-sentence that the speaker expects their audience to deploy in interpreting and responding to their utterance.

A form of the Foster problem arises on p. 841, as an objection to the analysis stated at the top of that page. The observation is that various T-sentences might appear among my reasons for speaking. (Rumfitt gives a very complicated example to show this.) The idea behind his response is that, nonetheless, among these, there will be one that is most basic. Here is a simpler case. Suppose I want you to come to believe that Alex has left, but that I already know that you know that Drew has left if, and only if, Alex has left. (They always leave together.) Then I could utter "Drew has left" and leave the inference to you. So one of my reasons for uttering "Drew has left" is that

  (iii) You will know that my utterance of "Drew has left" is true iff Alex has left.

But my utterance does not express the thought that Alex has left, but rather the thought that Drew has left. Rumfitt's idea is that (iii) is not my most basic reason for uttering "Drew has left" but is itself grounded upon other things, namely:

  (iv) You will know that my utterance of "Drew has left" is true iff Drew has left.
  (v) You know, independently, that Drew has left iff Alex has left.

So it is (iv), the more basic T-sentence, that tells us what thought was expressed.

By the way, if you replace "Drew has left", in the foregoing, with an arbitrary sentence and "Alex has left" with some arbitrary other sentence, then this kind of example can be used to prove that the utterance of any sentence can implicate any proposition, given an appropriate context. For that purpose, one only needs the left-to-right part of (v), not the biconditional. (Rumfitt's argument needs the biconditional in (v), because otherwise (iii) would not follow from (iv) and (v).)

To explain this notion of a 'most basic' T-sentence, Rumfitt introduces the notion of an "I-source" of knowledge of a biconditional. It is certainly worth considering whether that notion, as Rumfitt explains it, is adequate to the task for which it is introduced. However, it's also worth remembering that the underlying idea is simply that one of these T-sentences is 'most basic'. There are other ways one might try to explain that. (I pursue a somewhat different strategy in the paper of mine listed as optional.)

Say, as clearly and precisely as you are able, in what sense (iv) is meant to be 'most basic'. (As just indicated, this is far from trivial. It could make for a good final paper topic.)

Restricted version of the previous question: Is the notion of an I-source of knowledge really needed in the analysis Rumfitt finally settles on at p. 842? That is: Could the T-sentence that figures in the 'lowest practical pro-syllogism' be one that did not issue from an I-source of knowledge? Why or why not? If not, can we explain what is special about those cases? (This again could make for a good final paper.)

There is arguably a close relationship between Rumfitt's proposal and Higginbotham's answer to the Foster problem. In particular, both of them talk about "expectations" about what one's conversational partners will know. What more can be said about the relation between these two views? (Exploring that question would also make for a good final paper topic.)

Part II, which you do not have to read, but are invited at least to skim, explores the same sort of issue, but in connection with "taking in" or "apprehending" a thought expressed by someone else. The analysis Rumfitt eventually gives, at the bottom of p. 850, is parallel to the analysis of expressing a thought, and many of the arguments Rumfitt gives in favor of it are parallel to those that are given in Part I.

11 October

Discussion Meeting

Revised first short paper due

Tacit Knowledge
14 October

No Class: Indigenous Peoples' Day

16 October

Noam Chomsky, Knowledge of Language: Its Nature, Origin, and Use (London: Praeger, 1986), Chs. 1-2 (PDF, DjVu)

There is more reading here than usual, but it is not nearly so dense as most of what we have been reading. You can also skip from the bottom of p. 28 through the end of §2.4.2, on p. 40. And you can stop, if you wish, at the top of p. 44.

Show Reading Notes


The main reason we are reading this material is to get an idea of the way that (most) theoretical linguists think about knowledge of language, and their reasons for endorsing a broadly 'cognitive' approach to linguistic competence. We have seen some of these ideas in our earlier readings, but the views presented here are more developed than those in Aspects. Probably the most important distinction that Chomsky makes, for our purposes, is between E-language and I-language. We will need to appreciate both what the distinction is and why Chomsky thinks that I-language should be the focus of linguistic inquiry.

Chomsky sometimes uses the term "grammar" to mean I-language and "language" to mean E-language. This is an older terminology that he had used in Aspects of the Theory of Syntax. It is potentially confusing, since, if we speak this way, then Chomsky's view is that 'language' is irrelevant to linguistics!

Chomsky begins by motivating the study of language from a psychological point of view. He urges that there are three basic questions to be asked, which I will rephrase as:

  1. What does a competent speaker of a language know that allows them to be able to speak their language?
  2. How is that knowledge acquired?
  3. How is that knowledge deployed and employed in the actual day-to-day practice of speech?

Chomsky himself has suggested that, if we're uncomfortable talking about speakers' 'knowing' linguistic principles, we can instead talk of speakers 'cognizing' them. Another option would be to reformulate (1) as: What information does a competent speaker have...?

Question (1) can be asked in more or less specific ways. One might be interested in what some particular speaker knows; or what speakers of some language like English know; or what sort of knowledge is common to all human language-users (that being 'universal grammar', or UG). All these questions are empirical—including the question whether there is any knowledge common to all human language-users. But Chomsky thinks there are good reasons to pursue these questions.

What kinds of reasons does Chomsky give for thinking that these are the right questions to ask? (See especially pp. 7-13.) Which do you find most (or least) impressive?

In §2.1, Chomsky discusses a number of idealizations that are (he suggests) commonly made in theoretical linguistics. He defends these idealizations on pp. 17-8. It's a reasonable question, though, how the study of an idealized situation can give us information about more realistic, usually very un-ideal situations. How might Chomsky answer that question?

Regarding the distinction between E- and I-language: When Chomsky introduces this terminology in §§2.2-2.3, he remarks that E-language is external language, and I-language is internal language, and many of his remarks accord with that usage. But there is another distinction that is at least as important, and that is the one he really has in mind: between language thought of extensionally and language thought of intensionally.

If we restrict ourselves to semantics for the moment, then, on the extensional conception, a language would be thought of as a function from sentences to meanings (or something like that); this is how Lewis thinks of 'languages'. On the intensional conception, language would be thought of as a system of rules for assigning meanings to sentences. Note that, since there can obviously be many sets of rules that yield the same function from sentences to meanings, languages in the intensional sense are finer-grained than languages in the extensional sense: Many I-languages will generate the same E-language.

A similar distinction is often made in mathematics: between functions thought of extensionally and functions thought of intensionally. Extensionally, a function is just a pairing of inputs (arguments) and outputs (values). Intensionally, a function is a rule for calculating the output from the input. So, extensionally, f(x) = x² - 1 and g(x) = (x+1)(x-1) are the same function; intensionally, they are different functions.
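
The same point in code (a trivial sketch, using the same f and g):

    # f and g are one function extensionally: they agree on every input.
    def f(x):
        return x**2 - 1

    def g(x):
        return (x + 1) * (x - 1)

    # Same input-output pairs (spot-checked on a finite sample):
    assert all(f(x) == g(x) for x in range(-100, 101))
    # But as rules, i.e., intensionally, they differ: f squares and then
    # subtracts; g multiplies two shifted copies of its input.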

A similar distinction is also made in logic (from which Chomsky is borrowing the terminology). There are two different ways to think of a 'theory': as a set of theorems, or as a set of axioms that 'generate' a set of theorems (namely, the theorems that can be proved from the axioms). The former is the 'extensional' way of thinking of theories; the latter, the 'intensional' way. Both have their uses, but the intensional notion is arguably the more fundamental one. (In fact, there is an even finer-grained 'intensional' notion of a theory, which concerns not just what the axioms are but how they are 'presented'. This is especially important when there are infinitely many axioms, so that one cannot just present them as a list.)
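
To see the idea in miniature, here is a toy example of my own (a single arithmetical 'inference rule' stands in for real proof theory): two different axiom sets, that is, two different 'presentations', that generate exactly the same theorems.

    def theorems(axioms, bound=20):
        # Close the axiom set under one rule: from n, infer n + 2.
        derived, frontier = set(axioms), list(axioms)
        while frontier:
            n = frontier.pop()
            if n + 2 <= bound and n + 2 not in derived:
                derived.add(n + 2)
                frontier.append(n + 2)
        return derived

    # Intensionally different theories (different axioms)...
    # ...that are extensionally one theory (same theorems).
    assert theorems({0}) == theorems({0, 2, 4})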

Think just of the case of syntax, where we are concerned not with meaning but just with grammaticality. How then would we think of languages on the extensional and intensional conceptions?

The focus on I-language is arguably connected to the conscious inattention to 'normative-teleological' aspects of the ordinary notion of language. How so?

Note that I-languages, in Chomsky's sense, are every bit as abstract as 'languages' in Lewis's sense (see pp. 22-3). But they are not just mappings from sentences to meanings. They are collections of rules by means of which a meaning can be determined for a given sentence. So whereas Lewis asked: What E-language L does a given population P use? And what is it for L to be the language of P? Chomsky wants us to ask: What I-language does a given speaker 'know'? What is it for them to know it? How are I-languages acquired and shaped by experience? Universal Grammar then becomes the theory of the 'initial state' from which language acquisition begins and so, derivatively, a theory of what I-languages it is possible for human beings to acquire.

What is the relation between these two ways of interpreting Chomsky's distinction? There are four possible categories: extensional external; extensional internal; intensional external; and intensional internal. Which does Chomsky have in mind? Might some of the others also be interesting and worth studying? Do some of them seem hard to make sense of?

The distinction between E- and I-languages might also seem to be related to the distinction between public or common languages, such as English or German, and 'idiolects': language as understood by a particular speaker at a particular time. Are these related? If so, how? (One might want to consider here Chomsky's remarks in Ch. 1.)

Recall Lewis's remark that he "know[s] of no promising way to make objective sense of the assertion that a grammar Γ is used by a population P whereas another grammar Γ', which generates the same language as Γ, is not", which Chomsky partially quotes. How would Chomsky respond? (See esp. pp. 23, 30-1, 37-8, and 39.)

Chomsky ends up, in §2.4.1, being very dismissive of the notion of E-language, as well as of 'common' languages. Is there something he's missing? Are there, at least, questions he should be taking more seriously? Just how dismissive is he of 'common' languages?

Note: When Chomsky speaks of the 'language of arithmetic' here, you can think instead of the language of basic logic.

Much of Chomsky's discussion revolves around syntax and, in one striking passage (pp. 41ff), phonology. To emphasize a point that may seem obscure: Despite how it will seem to you, there is not actually an "n" sound in the word "bent" (as it is normally pronounced by English speakers). You hear the "n", but it is not really there in the sounds themselves. What you hear is determined not just by the actual sounds but by the phonological representation that your mind constructs of that word. Similarly in the other cases: What is in column (II) represents what you hear; what is in column (III) represents the sounds that are actually present.

It would be natural—despite what Chomsky says on pp. 44-5—to want to apply these same sorts of considerations to the case of semantics. That is what we saw Higginbotham explicitly propose to do: "I am applying to semantics a research program that goes forward in syntax and phonology, asking, 'What do you know when you know a language, and how do you come to know it?'" ("Truth and Understanding", p. 13).

How might the sort of program in semantics that Higginbotham is proposing (and pursuing) be motivated, in a broadly Chomskyan way? One way to answer this question would be to consider the three "basic questions" that Chomsky asks on p. 3: How do they arise in the case of semantics? How are those questions similar to or different from the ones that arise in the case of syntax?

For those with relevant background: Chomsky clearly thinks that the psychological perspective he favors demands an internalist conception of mind and language. Whether that is so is controversial. My own view, and that of many others, is that it does not. But we shall not pursue the matter, since it will not significantly affect the issues we will be discussing.

18 October

Jerry Fodor, "Special Sciences, or Disunity of Science as a Working Hypothesis", Synthese 28 (1974), pp. 97-115 (PhilPapers, PDF, DjVu)

The paper also appears (in somewhat modified form, if I remember correctly) as Ch. 1 of Fodor's 1975 book The Language of Thought.

Show Reading Notes

Related Readings

  • Jerry Fodor, "You Can Fool Some of the People All of the Time, Everything Else Being Equal: Hedged Laws and Psychological Explanation", Mind 100 (1991), pp. 19-34 (PhilPapers)
    ➢ General discussion of ceteris paribus laws.
  • Jaegwon Kim, "Multiple Realization and the Metaphysics of Reduction", Philosophy and Phenomenological Research 52 (1992), pp. 1-26 (PhilPapers)
    ➢ A now-classic reply to Fodor, but one that is focused primarily on the metaphysical issues raised by multiple realizability. (Kim spent much of his career at Brown, and is as responsible as anyone for the fact that we have a world-class department.)
  • Jerry Fodor, "Special Sciences: Still Autonomous After All These Years", Philosophical Perspectives 11 (1997), pp. 149-63 (PhilPapers)
    ➢ Fodor's reply to Kim.
  • Alexander Reutlinger, Gerhard Schurz, Andreas Hüttemann, and Siegfried Jaag, "Ceteris Paribus Laws", Stanford Encyclopedia of Philosophy (2024) (SEP)
    ➢ A general survey of the state of the art.
  • Steven Yalowitz, "Anomalous Monism", Stanford Encyclopedia of Philosophy (SEP)
    ➢ Discusses a view due to Donald Davidson that has some points of contact with the view Fodor is developing here.

This paper is about some general issues in philosophy of science, but with important applications to philosophy of mind (and so to philosophy of linguistics). They have become especially pressing recently, with developments in artificial intelligence.

One way to think about the central question of the paper is: Why are there sciences other than physics? Physics is meant to be 'general' in the sense that everything that happens in the physical world admits of physical explanation. That is as true of monetary transactions and animal development as it is of billiard balls bumping into each other. In some sense, that is to say, the various events (e.g., hand movements, computer transactions, etc.) that constitute my paying for groceries can all be explained, in principle, in physical terms; the same goes for, say, the transformation of a caterpillar into a butterfly. Really, it's all just atoms moving around (or various sorts of fields transforming in certain ways).

Why, then, are there such sciences as economics or biology? or linguistics and (other parts of) cognitive science? The epistemological answer would be that the physical explanations are simply too complicated for us to understand. The sorts of explanations given in economics and biology are, so to speak, high-level summaries of physical explanations. But ultimately, and ideally, they would be replaced by the really fundamental explanations, which would be given entirely at the physical level. We have examples of this kind of 'reduction' in the molecular theory of heat and in the reduction of chemical valence to atomic structure. Hence, as Fodor says, on this view, "the more the special sciences succeed, the more they ought to disappear" (p. 97), because the explanations given in the special sciences will ultimately be replaced by ones at more fundamental levels.

In the case of cognitive science, this idea takes a very particular form. Setting aside Cartesian dualism, mental states are brain states; cognitive development involves certain sorts of changes in the brain; mental events and processes are neurological events and processes; etc. So, one might think, every mental phenomenon must ultimately be explicable in purely neurological terms, without reference to such notions as belief, perception, information, and the like. We do cognitive science and psychology only because neurology is too complicated; once we know enough about the brain, we won't have to do cognitive science any more.

The position just described is sometimes known as 'eliminativism' and is closely identified with the work of Paul and Patricia Churchland. See the SEP article on eliminative materialism for more information.

This is not just an abstract philosophical position but one that concretely affects the practice of science. For example, it affects who gets grants, what kinds of departments receive university support, and so forth. The excitement caused by advances in neuroscience and artificial intelligence is today leading (otherwise sensible) people to suggest that linguistics and cognitive science are soon to be out-dated, because we will soon be able to give explanations of linguistic and cognitive phenomena in more basic terms.

Fodor is out to argue that this view is fundamentally mistaken. Fodor considers a very simplified form of 'reduction', one that is focused entirely on 'laws'. The central question, then, is what is required to 'reduce' a scientific law at one level to laws at a more basic level. (I think Fodor would claim that the simplification can only hurt his case: The more kinds of 'reductions' we have to carry out, the harder it will be.)

Fodor mentions Gresham's Law, from economics, as an example. For an example of a psychological law, consider Weber's Law, which states that the change in a stimulus needed to produce a change appreciable by an organism is proportional to the original stimulus. For example, if you have an array of dots, then how many dots you need to add to get it to look noticeably different to an organism is proportional to the original number of dots. (Adding two dots to two dots will do it; adding two dots to a hundred dots will not.) The same sort of thing is true for the number of clicks in an auditory stimulus, or the number of bumps in a tactile stimulus.
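
Schematically, the law says that the just-noticeable change is a fixed proportion of the original stimulus. A minimal sketch in Python (the constant k is invented for illustration; real Weber fractions vary by modality and organism):

    def noticeable(original, added, k=0.15):
        # Weber's Law: a change registers only if it is at least a
        # fixed proportion k of the original stimulus.
        return added >= k * original

    assert noticeable(2, 2)        # adding 2 dots to 2: easily seen
    assert not noticeable(100, 2)  # adding 2 dots to 100: below threshold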

Fodor frequently mentions that laws must 'support counterfactuals'. What this means is that laws do not just concern what actually happens but also what would have happened had things been different. Not only is it true that my pen just fell to the floor when I let go of it. It is also true that, if I had let go of it a few moments before, it would have fallen to the floor then.

If we take the special science law to be "S1 events cause S2 events", then what's required to 'reduce' this law to physics is:

  (i) Bridge laws that say what physical events constitute the special science events: S1s are really just P1s; S2s are really just P2s.
  (ii) A corresponding physical law: P1 events cause P2 events.

I'll speak here in terms of 'events', since Fodor does so. But we could also speak in terms of 'states'.

Fodor argues on p. 99 that the bridge laws have to be formulated as identities: Psychological states just are brain states; no weaker claim will do. What is the argument?

Those who are familiar with Saul Kripke's work may balk at Fodor's claim that bridge laws express "contingent event identities". There are things to be said about this, which I'd be happy to discuss outside of class. For now, let me simply assert that nothing Fodor is claiming is actually inconsistent with Kripke's thesis that identities are always necessary. (But Kripke himself is guilty, in my view, of overlooking one of Fodor's main points here in his argument for dualism in the third chapter of Naming and Necessity.)

Fodor then argues for two crucial claims.

The first claim is that, in order for the reduction to work, kinds of special science events must 'reduce' to kinds of physical events. This is because physical laws relate physical kinds. There can't be a law of the sort mentioned at (ii) unless P1 and P2 are physical kinds. Fodor suggests that, in so far as this is not obvious, it is because we forget that not every physical description determines a physical kind: for example, "is less than three miles from the Eiffel Tower" does not determine a physical kind; there are no laws about the things less than three miles from the Eiffel Tower. Of course, other physical laws will apply to those objects, but not in virtue of their satisfying that description.

The second crucial point is that 'kind identity' is not guaranteed by what Fodor calls "token physicalism": the claim that every individual event that falls within the domain of any special science is (identical to) a physical event. The reason is that token physicalism (as opposed to "type" physicalism) allows events that are of the same kind from the point of view of the special science to be (identical to) events that are of different kinds from the point of view of physics.

And this is how Fodor thinks things are: Events that are of the same kind, from the point of view of a special science, are typically 'multiply realizable'; even though each S1 event is identical to some physical event, they are not all identical to the same kind of physical event. From the point of view of physics, that is to say, S1 events might be 'wildly disjunctive' (that is, of arbitrarily many different physical kinds that have little or nothing in common, physically speaking). And yet, there might be "interesting generalizations" (i.e., laws) to be stated about S1 events (p. 103). The special sciences, as Fodor sees it, are in the business of making just such generalizations.
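
One can picture the structure of the claim in a crude sketch (the events and kind labels below are invented; this is just Fodor's point restated, not anything from the paper):

    # (event id, special-science kind, physical kind)
    events = [
        ("payment-1", "monetary_exchange", "ink_marks_on_paper"),
        ("payment-2", "monetary_exchange", "voltage_changes_in_server"),
        ("payment-3", "monetary_exchange", "metal_changing_hands"),
    ]

    # Token physicalism: every token event has some physical description.
    assert all(physical for (_, _, physical) in events)
    # But no type identity: one special-science kind is realized by many
    # physical kinds, so there is no type-type bridge law of the form
    # 'monetary exchanges just are P1 events'.
    assert {s for (_, s, _) in events} == {"monetary_exchange"}
    assert len({p for (_, _, p) in events}) > 1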

To see what kind of thing Fodor has in mind, note just how unlikely it is that all utterances of the word "extraordinary" have anything physically in common. Not only can the word be spoken, signed, and written (with ink, pixels, or even condensed water vapor, in the sky); even spoken forms differ significantly: As Linda Wetzel once pointed out, "extraordinary" can be pronounced with anywhere from two to six syllables. (The two-syllable version is the British: strord-nreh.) But there are interesting things to say about utterances of that word (for example, how they are perceived), in all these different forms.

It is important to see that the argument here is not at all special to the case of the human sciences. Fodor argues elsewhere that similar remarks can be made about geology and aerodynamics: Aerofoils, for example, can be made out of just about anything. In particular, the explanation of why planes can fly is pretty much the same as the explanation of how it is possible to sail into the wind, which is pretty much the same as the explanation of how hydrofoils work (both on dolphins and on boats), even though sails and wings and flippers tend to be made of different things and to have quite different shapes, and even though hydrofoils move through water, whereas wings and sails move through air. Aerodynamics is in the business of telling us what all these things, physically different as they may be, have in common.

As Fodor remarks elsewhere, then, there can be no general reasons to think that psychology must one day give way to neurology that would not equally show that aerodynamics, geology, and biology must also give way one day to more basic sciences. But, simply as a matter of sociological fact, there does not seem to be anything like the same clamor for the latter as for the former.

Is there some significant difference between the human sciences and non-basic physical sciences that Fodor might have overlooked?

In §III, Fodor sketches an alternative account of how special sciences relate to more basic sciences. The discussion proceeds in terms of yet another simplifying assumption: that there is some disjunction of physical event types to which a special science kind will correspond. One might think that this assumption is less innocent: that it matters just how 'wild' the disjunction is. For example, if there were just two physical kinds that 'realized' a special science kind, that would frustrate reduction strictly speaking, but perhaps not in spirit. In practice, however, special science kinds are, indeed, wildly disjunctive.

The basic idea here is that the more basic sciences can help us understand how the special science laws are 'implemented': for each of the possible realizations of S1, we can see how it gives rise to a corresponding realization of S2. But why it is worth grouping the possible realizations together in this way is opaque from the point of view of the more basic science.

Another important point Fodor makes here is that the laws of special sciences very often have exceptions, and that it is hard to see how that could be if reductionism were true. Special science laws only hold, as it is put, ceteris paribus, that is, other things being equal. (A simple if boring way to see this is to note that there's always the possibility, however small, of some bizarre quantum mechanical phenomenon that leads to a violation even of the laws of chemistry, let alone those of economics or psychology.) Fodor's idea is that this is because some realizations of S1 will not be lawfully related to realizations of S2, even though almost all of them are. A different idea would be that the reduced laws—that P1s cause P2s—might not hold in full generality, but only 'other things equal'. (This need not imply that real physical laws have exceptions.) For our purposes, this point will not be particularly important, but see the optional readings for more, if you're interested.

The moral of the story is that special sciences do not exist simply because it would be so difficult to understand a purely physical explanation of, say, the relation between inflation and unemployment. Rather:

[T]here are special sciences not because of the nature of our epistemic relation to the world, but because of the way the world is put together: not all natural kinds (not all the classes of things and events about which there are important, counterfactual supporting generalizations to make) are, or correspond to, physical natural kinds. (p. 113).

Explain exactly what Fodor means in the passage just quoted and why it implies that there are, and will continue to be, special sciences. How does it ensure the 'autonomy' of psychology and so 'protect' it from eventual elimination by neurology?

21 October

Gareth Evans, "Semantic Theory and Tacit Knowledge", in his Collected Papers (Oxford: Oxford University Press, 1985), pp. 322-42 (PDF, DjVu)

You should read section III of Evans's paper at least quickly, as there is an important point to be gathered from it, but our discussion will mostly focus on the rest of the paper.

Evans was one of the most talented philosophers of his generation. Tragically, he died of lung cancer at the age of 34.

Show Reading Notes

Related Readings

  • Crispin Wright, "Rule-following, Objectivity, and the Theory of Meaning", in S. Holtzman and C. Leich, eds., Wittgenstein: To Follow a Rule (London: Routledge and Kegan Paul, 1981), pp. 99-117 (PDF, DjVu)
    ➢ Evans's paper was written as a commentary on and response to this paper by Wright.
  • Hilary Putnam, "The 'Innateness Hypothesis' and Explanatory Models in Linguistics", Synthese 17 (1967), 12-22; reprinted in his Mind, Language, and Reality: Philosophical Papers, v. 2 (Cambridge: Cambridge University Press, 1975), pp. 107-16 (PhilPapers)
    ➢ An earlier paper that expresses worries not unlike Wright's.
  • Crispin Wright, "Theories of Meaning and Speakers' Knowledge", in his Realism, Meaning, and Truth (Oxford: Blackwell, 1986), pp. 204-38 (PDF, DjVu)
    ➢ A later paper in which Wright continues the discussion.
  • Martin Davies, "Tacit Knowledge and Semantic Theory: Can a Five per cent Difference Matter?", Mind 96 (1987), pp. 441-6 (PhilPapers)
    ➢ A reply to one of Wright's criticisms of Evans in the preceding paper.

Evans's main goal in this paper is to try to explain what motivates the requirement that theories of meaning should be compositional, and to explain as well what might allow us to distinguish "extensionally equivalent" theories (that is, theories that prove all the same T-sentences, but use different sets of axioms to do so).

As was the case in Lewis's discussion, Evans is assuming, at least for the purposes of this paper, that speakers' linguistic behavior ultimately concerns sentences, and that facts about the meanings of words (e.g.) must somehow supervene on facts about the meanings of sentences. One could question this assumption.

Evans begins by insisting that, if compositionality is to be motivated at all, then it must be motivated in terms of speakers' actual abilities to understand 'novel' sentences. The question is what the relation is between a compositional (or, as Evans often says, "structure reflecting") theory of truth and actual speakers of the language. There are two options. On the weaker, one 'knows' such a theory if one acts as if one knew it. But then there is no distinguishing between extensionally equivalent theories. (Recall Lewis's skepticism about the possibility of making good sense of the claim that one 'grammar' rather than another is correct.) The stronger interpretation is that speakers somehow unconsciously deploy the information contained in the theory to figure out the meanings of novel sentences. The rest of the paper tries to give some substantial content to this view.

What would be the analogous distinction in the case of syntax? A much larger question is: How might Evans's arguments be adapted to that case?

In section II, Evans discusses a toy language with ten unary predicates and ten names, and so 100 sentences, and considers two theories of truth, T1 and T2, that (respectively) do not and do ascribe any sort of structure to the sentences of this language. Evans suggests that, considered as accounts of what a speaker of the language knows, these two theories can be distinguished empirically. The core idea is that the axioms of the theories can be associated with certain abilities (or dispositions) that the speaker has.
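
Here is a toy rendering of the contrast in Python (my own sketch, with two names and two predicates instead of Evans's ten of each; the sample assignments, 'a' to John, 'F' to bald things, and so on, match the examples used below):

    # T1: no structure. One unexplained axiom per sentence.
    T1 = {
        "Fa": '"Fa" is true iff John is bald',
        "Fb": '"Fb" is true iff Harry is bald',
        "Ga": '"Ga" is true iff John is happy',
        "Gb": '"Gb" is true iff Harry is happy',
    }

    # T2: axioms for names and predicates, plus a compositional rule.
    names = {"a": "John", "b": "Harry"}
    predicates = {"F": "bald", "G": "happy"}

    def t_sentence_T2(s):
        # Compositional axiom: "Pn" is true iff the referent of n has
        # the property associated with P.
        pred, name = s[0], s[1]
        return f'"{s}" is true iff {names[name]} is {predicates[pred]}'

    # Extensionally equivalent: both theories yield the same T-sentences.
    assert all(t_sentence_T2(s) == T1[s] for s in T1)
    # But T2 projects: one new name axiom covers every sentence built
    # from it, while T1 would need a new axiom for each such sentence.
    names["c"] = "Mary"
    assert t_sentence_T2("Fc") == '"Fc" is true iff Mary is bald'

The empirical question is then which of these theories, if either, correctly describes what a given speaker tacitly knows; that is what the associated dispositions are supposed to settle.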

Evans specifies these dispositions using what are known as "substitutional" quantifiers, but we can (for our purposes, at least) rephrase the disposition that constitutes knowledge that 'a' refers to John as:

(*) For any predicate P and any property φ, if U (tacitly) knows that P is true of the things that have the property φ, and if U hears an utterance of "Pa", then U will judge that the utterance is true iff John has the property φ.

This is supposed to capture the idea that U (whoever that is) takes sentences that contain 'a' to be 'about' John, to the effect that he has whatever property is associated with the predicate P.

Test your understanding by re-writing Evans's statement of the disposition associated with the predicate 'F' in a similar fashion.

Explain in detail how these two dispositions give rise to a third disposition: Whenever U hears an utterance of 'Fa', they will judge that the utterance is true iff John is bald. (Do you need to appeal as well to what, on the bottom of p. 327, Evans calls the 'compositional axiom'? Why or why not? Note that Evans never tells us what disposition is supposed to correspond to it.)

Evans insists that these dispositions must "be understood in a full-blooded sense". Evans is alluding to a distinction generally made about dispositional properties. For something to be fragile, for example, is for it to be disposed to break easily. But (we ordinarily suppose) that is not all there is to the story: Something that is fragile has some other property that explains why it breaks easily. It is in its possession of this other property that the fragility of the thing consists. (The underlying property is sometimes known as the 'categorical basis' for the disposition.)

A famous example often used to illustrate this distinction is 'dormativity': the disposition something might have to make one sleepy. If we ask, "Why does this pill make one sleepy?" the answer can't just be: Because the pill is dormative. That just means that the pill tends to make people sleepy. It would be viciously circular to say that it makes people sleepy because it tends to make people sleepy. There has to be some other (presumably chemical, in this case) property the pill has that explains why it tends to make people sleepy. So "there is a common explanation to all those episodes of" the pill's making people sleepy.

So, in this case, Evans wants us to think of (*) not just as saying something about what beliefs U will form under what circumstances but as saying that there is some underlying (presumably cognitive) state corresponding to this disposition that explains why U tends to form the beliefs they do.

Explain as clearly as you can why Evans thinks that the dispositions in which (he is arguing) tacit knowledge consists must be 'full-blooded'. (The argument is on pp. 329-30, and the basic point is that, otherwise, we can't distinguish extensionally equivalent theories.)

Evans goes on to argue (pp. 331-4) that there are empirical differences between the two theories, i.e., that there could be differences in the way a speaker behaved that would justify our saying that they knew one or the other of T1 and T2. What are these differences? Can you think of any other such differences we might expect to find?

It's perhaps worth noting that Evans's 'psychological model' for T1 is compatible with, and seems even to embody, the assumption that the subject's thoughts are structured. What's at issue is simply whether the subject's language has a similar structure.

The paper by Ladefoged and Broadbent to which Evans refers in note 5 was listed as optional for the excerpts from Aspects. It's really quite a fascinating experiment, so the paper is well worth reading.

In section III, Evans outlines some reasons for thinking that 'tacit knowledge' is quite different from ordinary knowledge (or belief). His main point is that it has what one might call restricted application: One's tacit knowledge of the meanings of words and the principles by which they combine is used only in one's comprehension of language. It is not available, in particular, for one to report verbally. Otherwise, semantics would be easy: If you wanted to know how adverbs work, say, you could just ask someone; after all, we all know how they work, right? But we only know such facts tacitly, and our knowledge is (as it is sometimes said) "modular" and unavailable to us outside certain specific applications.

The idea that the mind consists of a number of distinct systems that are 'modular' in this way is largely due to Jerry Fodor. The view is often combined with the idea that there is also an 'executive' system that is not modular, but which co-ordinates the activity of the various modules. See the SEP article on modularity for more information.

In section IV, Evans discusses the question whether attribution of tacit knowledge of a structured theory of meaning to a speaker can explain their capacity to understand novel sentences. Evans first concedes that, by itself, it cannot. It is, honestly, not entirely clear to me why Evans makes this concession. I think the reason is this. Suppose that the speaker regards some utterance of the sentence "Gb", for example, as being true iff Harry is happy. And suppose that we were to regard that judgement as the exercise of the dispositions that constitute their tacit knowledge of the theory T2. Then the judgement cannot also be regarded as explained by their having those same dispositions: that would be like explaining a sleeping pill's effectiveness in terms of its dormativity.

But the usual response to this kind of objection is that it assumes that the disposition is "thin" rather than "full-blooded", as Evans's were meant to be. So why doesn't Evans just say that, if the dispositions are full-blooded, then there is an explanation to be had, in terms of whatever the categorical basis of those dispositions is? Perhaps this is the point Evans is making when he says that "to say that a group of phenomena have a common explanation is obviously not yet to say what the explanation is": To give a satisfying explanation, we would need to know what the states underlying the dispositions actually were.

I suggest that Evans may be guilty here of roughly the same sort of confusion Fodor identifies in "Special Sciences". How so?

Evans goes on to suggest that we can get an explanation of "creativity" if we embed the attribution of tacit knowledge in a larger story about language acquisition. How is that supposed to go? How does it relate to the "full-bloodedness" of the dispositions? (Developing a proper answer to the various questions I've been asking here would make a nice final paper.)

23 October

Martin Davies, "Meaning, Structure, and Understanding", Synthese 48 (1981), pp. 135-61 (PhilPapers, PDF, DjVu, JSTOR)

This is a long-ish paper and, although not terribly dense, it is made more difficult by Davies's insistence upon trying to formulate everything at the highest possible level of generality. Imagine how much easier the paper would be if he simply stuck to the Evans-like example with which he begins! Or, at least, worked primarily with the example, and formulated the generalizations afterwards (or even in footnotes). There is a lesson there for us all.

You do not need to dig too deeply into the first three objections discussed in Part I, and you can skim or skip §6. You can probably also skim (or even skip) the first three paragraphs of §8, as well as the last two paragraphs (which use mass terms as an example). Finally, you can stop at the end of §12.

Show Reading Notes

Related Readings

  • Martin Davies, "Tacit Knowledge, and the Structure of Thought and Language", in C. Travis, ed., Meaning and Interpretation (Oxford: Blackwell, 1986), pp. 127-58 (PhilPapers)
    ➢ A later paper by Davies on tacit knowledge.
  • Martin Davies, "Meaning and Structure", Philosophia 13 (1983), pp. 13-33 (PhilPapers)
    ➢ Further consideration of similar issues.

Davies is interested in the question whether there is a way of making sense of the idea that a language (in roughly Lewis's sense) has a certain structure without having to claim that speakers of the language must in any sense, including tacitly, be aware of that structure. So, in particular, in the case of Evans's 100-sentence language, Davies wants to claim that, even if speakers fail to have the sorts of dispositions Evans specifies, it can still be true that the language they speak has a certain sort of semantic structure. Davies is thus arguing that semantics can be neutral on the question whether there is a cognitive basis for semantic competence.

Davies talks both about truth-theories and about semantic theories that pair sentences with meanings, i.e., theories issuing in statements like: S means that p. This difference does not much matter for the present discussion. I'll speak about 'theories of meaning' below, meaning to include 'interpretational' theories of truth.

Davies argues by example, on p. 137, that the best theory of meaning for a language need not be one to which speakers of that language bear any interesting relation. Is the example convincing? Are the imagined speakers of L0 "obliged" (p. 139) to revise their opinions about what, say, either "Fa" or "Gb" means if they have revised their opinion about what "Ga" means? Why or why not?

Recall that Evans suggested that a theory of meaning for a language should be 'structure reflecting' if there were certain patterns of acquisition and loss among speakers of that language. As I'll remark below, Davies is implicitly adding a third such constraint, concerning change of meaning.

Davies suggests that semantic theories should instead meet what he calls the structural constraint:

If, but only if, there could be speakers of L who, having been taught to use and know the meanings of sentences (of L) s1, ...,sn..., could by rational inductive means go on to use and know the meaning of the sentence s..., then a theory of truth for L should employ in the canonical derivations of truth condition specifying biconditionals for s1, ...,sn resources already sufficient for the canonical derivation of a biconditional for s. (p. 138)

The kind of case Davies had in mind would be one in which someone had learned the meanings of "Fa" and "Gb" and was, on that basis, able to figure out the meanings of "Fb" and "Ga".
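
To fix ideas, here is a minimal sketch of the sort of truth-theory at issue, with the assignments to "a", "b", "F", and "G" borrowed from examples used elsewhere in these notes:

    (A1) "a" refers to John.
    (A2) "b" refers to Harry.
    (A3) "F" is true of an object x iff x is bald.
    (A4) "G" is true of an object x iff x is happy.
    (A5) A sentence Pn, consisting of a predicate P followed by a name n, is true iff P is true of what n refers to.

A canonical derivation for "Fb" then runs: "Fb" is true iff "F" is true of what "b" refers to (by A5); so "Fb" is true iff Harry is bald (by A2, A3). The structural constraint requires that the resources deployed in the canonical derivations for "Fa" and "Gb" (namely, (A1)-(A5)) already suffice for the derivations of biconditionals for "Fb" and "Ga", which is just what a speaker would exploit in 'projecting' the meanings of the latter from the former.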

Note (as Davies emphasizes) that this involves what a hypothetical speaker could do, much along the lines of what Davidson suggests in "Radical Interpretation". The basic point here is supposed to be, again, that semantics need not have anything particular to do with facts about how speakers actually do understand their language. It is enough if someone could 'project' meanings in a certain way. (You need not worry too much about the discussion of the alternative Davies calls S*. It is still hypothetical.)

In part I of the paper, Davies considers four objections to the structural constraint. You can skip the discussion of the second, in §6, and the last paragraph of §7, on pp. 145-6.

The first objection is that there might be cases in which one can 'work out' the meaning of some sentence, given what one knows about the meanings of other sentences, but where one would not want to say that there was 'semantic' structure that made such projection possible. (The third objection is in the same ballpark.) Are there better examples where meaning is systematic in some way but where there is no syntactic structure to which it corresponds? If so, how much of a problem is that for Davies?

The most important of the objections is the fourth. Davies's initial presentation of the objection is somewhat complicated (because of the insistence upon generality), but he gives a concrete example on p. 146 (to which you can probably just skip). The core of the objection is that certain actual speakers of a language might lack the ability to "project" the meanings of certain sentences they have not encountered (e.g., "Bug moving"), whereas other speakers might have that ability. If so, then it seems odd to say that the language of the speakers who cannot "project" has the same structure as that of the speakers who can project, simply because someone else could project meaning in that way. Davies's response is to deny that, in such a case, the common sentences of the two languages (e.g., "Bug") actually do have the same meaning.

How exactly does this save the structural constraint? What does it say about how the meanings of sentences are being specified when languages are identified as Davies identifies them at the very beginning of the paper? Can we really say what the meanings of the sentences are without knowing whether they are semantically structured? That is: Is this response compatible with Davies's insistence that, even if speakers understand Evans's 100-sentence language in a non-compositional way, still the right semantics for that language is the compositional one? (Consider carefully Davies's remark that "Members of the S group either do not have, or else do not employ here, the concept of a bug..." (pp. 146-7).) We'll return to this below.

In part II, Davies discusses what is involved in crediting speakers with "implicit" knowledge of a theory of meaning for their language. Davies's first step is to introduce a notion of "full understanding" of a language: Someone fully understands a language L if they are in a "differential state" with respect to the various words (semantic primitives) of the language.

The terminology here may be confusing. But, roughly, you are in the "<σ, L> state" if you know what the meanings of all the L-sentences containing the 'word' σ are. You are in the "σ differential state" if your knowledge of the meanings of those sentences is derived from your knowledge of what σ means. What Davies is trying to do here is to say how these two 'states' are to be distinguished.
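
Concretely, in terms of the toy language (my gloss, not Davies's notation):

    The <"F", L> state: knowing, for each L-sentence containing "F", what it means, e.g., that "Fa" means that John is bald and that "Fb" means that Harry is bald.
    The "F" differential state: having that same knowledge, but derived from a single underlying state carrying the information that "F" means bald, together with one's knowledge of what "a" and "b" mean.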

Exercise: Come up with better terminology. (This is often an important part of writing.)

Ultimately, Davies's view is that someone 'fully understands' a language just in case they have the sorts of dispositions concerning acquisition and loss that Evans discusses, as well as dispositions concerning change of meaning that Evans does not mention. (This allows for answers to certain sorts of objections to Evans.) That is: If you change what you think "Fa" means, then you must either change what you think "Fb", "Fc", etc, mean, or you must change what you think "Ga", "Gb", etc, mean. The idea is that, if you understand "Fa" as being structured in a certain way, and derive your knowledge of its meaning from your knowledge of what its parts mean, then you must have changed your mind either about what "F" means or about what "a" means. (Note again how much easier it would be to understand Davies's paper had he used an example, as I just have, rather than formulating the point in complete generality, as he does on p. 149.)

This leads to a different objection to what Davies argued in part I. Suppose that the members of some group G speak the Evans language but do not have the kinds of dispositions just mentioned. Does the language they speak have 'semantic structure' or does it not? It must at least be a possibility that it does, if the structural constraint is to be different from the mirror constraint. But suppose further that what "Fa" means to them changes, without the meanings of any other sentences changing. (This is possible, since they lack the mentioned dispositions.) Does this new language have semantic structure? If not, does that show that their old language didn't? (If so, with what consequences for Davies's argument?) Or should we say that their new language does not have any semantic structure but that the old one did?

Formulate and discuss a version of the objection just given that involves acquisition of new expressions.

In §12, Davies worries that mere dispositions are not enough—compare Evans's insistence on 'full-blooded' dispositions—and so invokes causal and explanatory relations as well in giving a full account. In the case of Evans's language, for example, the claim would be that a 'full understander' would be in certain states Sa, Sb, SF, and SG that are "causally operative in the production of" their beliefs about what "Fa", etc, mean; moreover, their believing that "Fa" means that John is bald is explained by their being in the states Sa and SF.

In the rest of the paper, which you do not have to read, Davies raises the question whether we should attribute, to a full understander, tacit or implicit knowledge or belief that, say, "a" refers to John. He grants, in §14, that some kind of 'mechanism' must be in place that 'mirrors' the derivation of T-sentences from the axioms. But the question is whether Sa, SF, etc, are beliefs. In §§15-17, he considers, and dismisses, a series of objections to the claim that we should attribute such beliefs. In §18, though, he offers a reason not to do so, namely: There seems to be no need to attribute anything like implicit or tacit desires with which such beliefs might interact to produce intentions. His reasons seem closely related to Evans's point that such states are not "at the service of many projects".

But one might concede that point and suggest that the emphasis on belief here is inappropriate. How would Davies's discussion be affected if we insisted upon talking instead of informational states? So we would think of our full understander as possessing information about the reference of "a" and about what things "F" is true of, and of these information-bearing states as interacting in certain ways, but not say that these are 'beliefs'.

25 October

Elizabeth Fricker, "Semantic Structure and Speakers' Understanding", Proceedings of the Aristotelian Society, New Series 83 (1982-1983), pp. 49-66 (PhilPapers, PDF, DjVu, JSTOR)

Fricker is probably best known for her work on the epistemology of 'testimony': that is, how we can acquire genuine knowledge from other speakers. She is not the author of Epistemic Injustice. That is her sister, Miranda Fricker.

Show Reading Notes

Related Readings

  • R. M. Sainsbury, "Understanding and Theories of Meaning", Proceedings of the Aristotelian Society 80 (1980), pp. 127-144 (PhilPapers)
    ➢ Mentioned by Fricker.

One might think of Fricker's goal in this paper as being to answer Lewis's claim that there is "no promising way to make objective sense of the assertion that a grammar Γ is used by a population P whereas another grammar Γ', which generates the same language as Γ, is not" (L&L, p. 20)—and to do so on Lewis's own terms. (Fricker does not mention Lewis in this connection, though.) That is, Fricker accepts that semantic facts about a language must supervene on how it is used—on the linguistic abilities of its speakers—and that these abilities relate only to what whole sentences mean. (These are principles (α) and (A), on pp. 52 and 56, more or less.) And yet, she wants to claim, each language has a semantic structure that is essential to it and to which speakers of the language bear some non-trivial relation. (These are principles (β) and (γ) on p. 52.) This is a bold, possibly even 'heroic', view. The strategy, to a large extent, is to effect a synthesis of Evans and Davies.

There's a lot of talk in this paper about 'a priori principles governing interpretation'. This is not language with which I find myself entirely comfortable. But one might simply think of Fricker as claiming that there are certain methodological principles to which we ought to adhere when constructing theories of meaning—never mind if they are a priori or empirical—and that among these is, e.g., a principle of compositionality.

Note the strategy here: If there's some aspect of a view with which you find it difficult to agree, see if you can re-interpret it. It may be that the bit with which you disagree isn't really that central.

Fricker begins, in section I, by rehearsing what is, very roughly, the metaphysical project elaborated by Davidson in "Radical Interpretation": explaining how semantic facts are determined by (grounded in) facts about the use of language. Section II introduces the problem with which she is concerned and argues that both Davies and Evans fail properly to vindicate the idea of semantic structure.

Section III formulates an argument that the assumptions mentioned above lead to Lewis's conclusion: that a theory of meaning for a language need not uncover structure. One aspect of Fricker's presentation that is somewhat different from what we have seen before is her characterization of what is distinctive of speakers of a language: that they have "a recognitional capacity for sentence meanings": the ability, when presented with an utterance in their language, to recognize what it means. Note that this is a psychological, cognitive matter. Fricker suggests that, if we take this to be all that is distinctive of speakers (as far as semantics is concerned), then we are liable to be led to Lewis's conclusion.

The main argument of the paper is in sections IV and V. Fricker's way out begins by noting that "...a TRSA for a language L...will be neutral with respect to L's structure only if the correctness of its ascriptions of meanings to sentences of L is independent of facts about its structure" (p. 58). If sentences that have different semantic structures therefore have different meanings, then it will not, after all, be possible for the sentences of two languages to have the same meanings but for one of them to be significantly structured and the other one not. Fricker gives three sorts of reasons in favor of this claim:

  1. Unless there were "some form of Structural Postulate requiring that sentences be seen to be composed out of a stock of semantically primitive elements which make the same characteristic contribution to sentence-meanings in all their occurrences" (p. 59), interpretation would be too indeterminate. In particular, the interpretation of one sentence needs to be constrained by how other sentences are interpreted if we're to make any progress at all.
  2. There are many sentences of, e.g., English that will never be uttered and yet that have determinate meanings. But if facts about meaning supervene on facts about use, then facts about the meanings of un-uttered sentences must supervene on facts about uttered sentences, and that can only be so if the language is compositional.
  3. There is a connection between the meanings we assign to sentences and the contents we assign to speakers' beliefs. And the latter, typically, will themselves be structured, involving certain concepts we take the speakers to possess. This "will tend to ensure that OL sentences are not interpreted by ML sentences with a greater degree of structural complexity" than the speakers are capable of understanding (p. 60).

Fricker concludes that "facts about sentence-meanings are not independent of facts about their structure" (p. 60).

Regarding (i): Perhaps one way to think of this is that, if we're to regard the speakers of the language as rational, then we need to be able to see them as reasoning. But, at least in the case of elementary logical reasoning, we need to be able to see the premises and conclusions of inferences as related in various ways, and that will require articulating structure. Can you develop this idea? The really crucial ingredient is that the structure must reflect not just syntactic recurrence but what we might call semantic recurrence.

Regarding (ii): it's important to remember that Fricker is trying to argue here not just for compositionality, but that the structural facts about the sentences of a language are essential to it. We've of course seen considerations about creativity and productivity before, and not everyone has wanted to draw the same conclusion Fricker does. What is it about how she's understanding (ii) that leads her to the stronger conclusion?

Regarding (iii), Fricker's conclusion is that these sorts of considerations will tend to restrict the degree of structural complexity ascribed to sentences. One might think this cannot help her argument. Why?

The arguments just mentioned are elaborated and interwoven in section V. Here, Fricker adapts Evans's proposal to generate a "transcendental" definition of the notion of a semantic primitive. The idea is that the semantic structure of a language must mirror the causal structure of its speakers' linguistic abilities: If, but only if, speakers can project meaning in a certain way should we regard their sentences as being structured. This leads directly to the question whether different speakers' abilities might have different structures—perhaps some speakers of English can project meaning in a certain way, while others cannot—which is what leads to Lewis's pessimistic conclusion. (See pp. 63-4.)

The "immanent/transcendent" language is used in this connection by Evans, who borrows it from Quine. One might think of an 'immanent' definition of F-ness as saying what it is for a theory to treat something as F; a 'transcendent' definition says, rather, what it is to be F, which will then have implications for how a theory ought to treat certain things (namely, as F, if they in fact are F). In our case: It's clear enough what it is for a theory to treat a sentence as structured (that's the immanent bit); what's wanted is an account of what it is for a sentence to be structured (the transcendental bit); if a sentence is structured, then of course a theory should treat it as such.

Fricker's response is on p. 64: She wants to insist that, if speakers understand a sentence as having different structures, then they cannot understand it in the same way. This is because it will be implausible to hold that the beliefs these speakers associate with the sentence deploy the same concepts. So Fricker concludes that "...the initially plausible thought", which we find in Davies, that "there could be structure in a language, though its speakers were blind to it, is wrong" (p. 65).

I take Fricker's point here to be a version of one Davies makes in response to an objection. Consider an unstructured sentence "Bugmoving" and its structured counterpart "Bug moving". It might be that the two sentences have the same truth-condition, in the sense that the conditions in which the former is true are exactly the conditions in which the latter is true. But one might nonetheless want to say that the meaning of the latter sentence involves predicating movement of a particular bug, whereas the meaning of the former does not. The structure thus turns out to be present in the meaning of the sentence itself and to reflect the conceptual resources that speakers of the language deploy when they use this sentence. The upshot is that, even though you only need to know what a sentence means to understand it, you can't know what the sentence means unless (in some sense) you appreciate its structure (p. 61).

Can such a tight link between semantic structure and the conceptual capacities of speakers be maintained? This would make a good topic for a final paper, though no one paper could really hope to answer this question. But one could try to make some progress. One might even make a bit of progress in a Canvas post....

28 October

Louise Antony, "Meaning and Semantic Knowledge", Proceedings of the Aristotelian Society, sup. vol. 71 (1997), pp. 177-209 (PhilPapers, PDF, DjVu, JSTOR)

This is a long-ish paper, but it contains a lot of (helpful, to my mind) exposition of views we have already encountered. You can probably read through those parts fairly quickly. It's when Antony starts giving arguments against those views that you'll want to slow down.

Show Reading Notes

Related Readings

  • Crispin Wright, "Theories of Meaning and Speakers' Knowledge", in his Realism, Meaning, and Truth (Oxford: Blackwell, 1986), pp. 204-38 (PDF, DjVu)
    ➢ The paper in which Wright develops Reconstructive Rationalism.
  • Steven Gross, "Knowledge of Meaning, Conscious and Unconscious", in The Baltic International Yearbook of Cognition, Logic, and Communication, Vol. 5: Meaning, Understanding, and Knowledge (2010), pp. 1-44 (PhilPapers)
    ➢ Discusses different sorts of reasons one might have for attributing semantic knowledge to speakers.

Antony's main goal in this paper is to argue that semantics should be understood as inextricably linked to psychology: to questions about what speakers actually come to know about their languages when they learn to speak (and how they come to know it). She distinguishes three sorts of alternatives to her position: Platonism, represented by Devitt, Katz, and Soames; Instrumentalism, represented by Foster and Davidson; and Reconstructive Rationalism, represented by Dummett and Wright. (Except for the first, these are my terms.) She discusses Platonism only briefly, claiming that the facts about (say) English simply cannot be completely independent of facts about how English speakers behave and gesturing in the direction of Fricker.

The discussion of Instrumentalism and Reconstructive Rationalism, which dominates section I, mostly consists in playing them off against one another. The latter view insists that semantic theory should be concerned with speakers' knowledge, but only with a "systematized" (Dummett) or "idealized" (Wright) version of such knowledge, not with the knowledge actual speakers possess. Antony carefully reconstructs Wright's reasons for thinking we do need to attribute semantic knowledge to speakers. She then notes (on p. 185) that what Wright keeps insisting is that there are certain explanatory purposes for which we need to attribute such knowledge to speakers. If not, then the attribution would be merely heuristic (and thus no different from what Instrumentalism offers). But, she suggests, "...the rational reconstruction of linguistic meaning cannot explain the rationality of human language use if the posited linguistic structure is not available to speakers" (p. 188). That is: Just saying what an idealized speaker might know that would make their use of language rational cannot show that ordinary speakers' use of language is rational. But that was what we were supposed to be explaining.

How fair a criticism is this? Is there some other explanatory goal that Wright's "idealized epistemology of understanding" might serve?

Antony goes on to argue that the idealizations inherent in Reconstructive Rationalism are difficult to reconcile with the professed goals of its proponents. There is, she says, a "tension between, on the one hand, appealing to human capacities in order to justify features of meaning-theoretic projects, and on the other, ignoring the actual nature of those capacities" (p. 188). The original sin here is Davidson's appeal to our finitude in motivating compositionality. The problem this observation poses for Reconstructive Rationalism is supposed to be that it is hard to see why, if we are not going to idealize away from the finitude of the language-learner, we should think it justified to idealize away from any of the other circumstances under which language is, in fact, acquired by human beings. I.e., the question is: Which idealizations are legitimate and which are not, and why?

I take Antony's more fundamental point to be something like this. What motivates the principle of compositionality is the fact that actual speakers are actually able to interpret novel sentences. Presumably, there is some answer to the question how they do it. Either we think they do so in a way that makes their appreciation of the meaning of the new sentence rationally justified or we do not. If we don't, it's unclear why we should think that there is, as Wright supposes, any rational basis for such knowledge, even in principle. But if we do, then why not just ask how people actually come by knowledge of the meanings of novel sentences? Of what real interest is the question how some idealized agent might do so? (As Antony notes, this criticism is very much of a piece with one that Quine makes of 'rational reconstruction' more generally.)

This point then morphs into a criticism of Instrumentalism. The discussion is directed at Quine's restriction on the data on which radical translation must be based, but could equally be directed at Davidson's corresponding restriction on the data on which radical interpretation must be based (what's held true under what circumstances). Such restrictions are motivated by claims about what sort of evidence is available, in principle, to a language-learner (or, though Antony does not mention the point, to an actual speaker who is attempting to determine if someone else speaks her language—see the first couple pages of "Radical Interpretation").

Antony argues that the evidence that is allowed to a radical whatever-er is both wider and narrower than what is available to actual language-learners. How so?

But the most interesting claim Antony makes is this one:

Considered in the context of Quine's metaphysical goals, the idealization involved in permitting the linguist an unlimited amount of behavioural evidence appears concessive to the meaning realist; in fact, it is a slick piece of bait-and-switch. The cooperative tone distracts us from the fact that Quine has already begged the crucial question, by assuming that whatever physical facts metaphysically determine meaning must be identical with the physical facts that constitute the evidence children have available to them during language acquisition. (pp. 192-3)

What does Antony mean here? What might be examples of physical facts on which semantic facts supervene that are not among the facts available as evidence to a child learning language? How, if there are such facts, might they, in some other (non-evidential) sense, be "available" to ordinary speakers interpreting one another? Here's a question that might help you get at a possible answer: Suppose that, as a matter of empirical fact, there were exactly 1000 concepts humans could possess, and that they were all innate, so that we were all born already possessing all of them and could never come to possess any others. How would that affect language acquisition and interpretation?

There is thus a common criticism of both Instrumentalism and Reconstructive Rationalism: "[T]he epistemic strategies of 'ideal' learners are of no theoretical value to the task of understanding human cognitive competencies if the idealizations abstract away from epistemic constraints that are in fact constitutive of the learning task confronting humans" (p. 193).

How good is Antony's argument for the claim just mentioned?

Section II of the paper turns to questions about tacit knowledge. Antony begins (on pp. 195-8) by rehearsing some of the reasons people have found tacit knowledge puzzling. She then argues (on pp. 198-200) that the strategy pursued by Evans and Davies has a fatal flaw: It purports to justify the attribution of tacit knowledge entirely on the basis of an isomorphism between the structure of a semantic theory and the structure of a speaker's abilities. But "...isomorphisms are cheap: the mere fact that the formal structure of a particular theory can be projected onto some structure of causally related states is not enough to make it true that the set of states actually embodies the theory", that is, that those states carry the sort of information that the theory does (p. 200).

By contrast, Antony insists, what we want is for the causal processes that underlie the understanding of novel sentences to be sensitive to information about the meanings of sub-sentential constituents. And she proposes that we can have that only if we take seriously the idea that there are states in the speaker that encode the very information articulated by a semantic theory, and that these states interact causally in ways that are sensitive to that information. Accepting those assumptions means accepting a strongly 'realist' conception of the mind and of mental processes. But, or so Antony argues, only such a conception can give us a plausible account of tacit knowledge.

Here again, then, is the big point Antony is trying to make. Almost everyone we have read seems willing to accept something like the following reasoning, which Fricker makes explicit: Whatever the truth about what expressions of natural language mean, that truth must somehow be knowable on the basis of the sort of evidence that's available to a child learning language, or to an ordinary speaker who is just trying to make sure that their conversational partners really do mean what they seem to mean. So, in some sense, meaning must be 'manifest' in linguistic behavior (as Sir Michael Dummett famously put it). Different people seem to have different ideas what 'linguistic behavior' is (that is, what 'use' is), and then they argue about how determinate meaning really is, whether we can really think of words (and not just sentences) as having meaning, and so forth. But Antony thinks this whole set-up is a con game. Why?

30 October

Christopher Peacocke, "When Is a Grammar Psychologically Real?", in A. George, ed., Reflections on Chomsky (Oxford: Basil Blackwell, 1989), pp. 111-30 (PhilPapers)

Show Reading Notes

Related Readings

  • Christopher Peacocke, "Explanation in Computational Psychology: Language, Perception and Level 1.5", Mind and Language 1 (1986), pp. 101-123 (PhilPapers, PDF, DjVu)
    ➢ Introduces the basic idea that is used in the paper we are reading.

Reading Notes TBA

1 November

Discussion Meeting

Second short paper due

Contextualism, For and Against
4 November

John Searle, "Literal Meaning", Erkenntnis 13 (1978), pp. 207-24 (PhilPapers, PDF, DjVu, JSTOR)

Show Reading Notes


Searle is concerned in this paper to argue against a certain conception of the (literal) meaning of a sentence and in favor of a different conception. He describes his target as follows:

Every unambiguous sentence...has a literal meaning which is absolutely context free and which determines for every context whether or not an utterance of that sentence in that context is literally true or false. (p. 214)

His preferred view is:

For a large class of unambiguous sentences...the notion of the literal meaning of the sentence only has application relative to a set of background assumptions. The truth conditions of the sentence will vary with variations in these background assumptions; and given the absence or presence of some background assumptions the sentence does not have determinate truth conditions. These variations have nothing to do with indexicality, change of meaning, ambiguity, conversational implication, vagueness or presupposition as these notions are standardly discussed in the philosophical and linguistic literature. (p. 214)

And Searle argues that these "background assumptions" cannot, even in principle, all be "specifi[ed] as part of the semantic content of the sentence, [since] they are not fixed and definite in number" (pp. 214-5). More importantly, no such specification can ever be complete: No matter how precisely we try to specify the "background assumptions", there will always be other background assumptions in play which can, by clever construction of examples, be brought to our attention and varied, so as to lead to variation in truth-conditions.

The general strategy of argument is to consider a sentence that seems to have a perfectly definite literal meaning. (In practice, we focus on one word and the contribution it is making.) We then consider certain peculiar contexts and note that it is simply not clear whether to regard a perfectly literal utterance of the sentence as true or false in that context. Indeed, there will be ways of developing the example so that a perfectly literal utterance of the sentence would be true; and there will be other ways of developing it so that such an utterance would be false. Hence, what the truth-condition of the utterance is depends upon exactly which "background assumptions" are in play.

To check your own understanding, it is worth trying to construct examples similar to Searle's for sentences other than the ones he considers.

A more general, principled question concerns why Searle is so focused on the literal meaning of sentences. In many ways, what Searle argues could be rephrased in terms not of sentence meaning but in terms of Grice's notion of what is said. Searle's point would then be that what is said, even in the most literal utterance of a very ordinary sentence, is not completely determined by "stable" features of the sentence that is uttered, but depends also upon background assumptions that cannot, even in principle, be completely specified. How, if that is true, might it affect the conception of semantic theory with which we have been operating throughout our discussions so far? What should we think of a competent speaker as knowing about such ordinary sentences in virtue of being a competent speaker? What might we say about what competent speakers know, say, about the meaning of the word "on", as it occurs in sentences such as "The cat is on the mat"?

6 November

No Class

8 November

Robyn Carston, "Implicature, Explicature, and Truth-theoretic Semantics", in R. Kempson, ed., Mental Representations: The Interface Between Language and Reality (New York: Cambridge University Press, 1988), pp. 155-82 (PhilPapers, PDF, DjVu, ResearchGate)

Second short paper returned

The details of the 'relevance-theoretic framework' will not matter for our purposes. All you should really need to know is that it amounts to an attempt to let something like Grice's Maxim of Relevance do all the work. You can skim, and probably even skip, §5 and §8.

Show Reading Notes

Related Readings

  • Barbara Hall Partee, "Some Structural Analogies Between Tenses and Pronouns in English", Journal of Philosophy 70 (1973), pp. 601-9 (PhilPapers)

Grice writes in "Logic and Conversation":

In the sense in which I am using the word say, I intend what someone has said to be closely related to the conventional meaning of the words (the sentence) he has uttered. (p. 25)

Grice is of course aware that contextual factors may play various sorts of roles in determining what is said, with respect to words like "I" and "you" and "this". But the fixed, stable meanings of the words used are supposed to play an especially important role.

We have more or less been following Grice in this respect, and assuming that we can derive the truth-condition of a sentence from semantic axioms governing its component parts. But Carston, in this paper, challenges this sort of assumption. She is particularly concerned with the requirements of a psychologically adequate account of linguistic competence. And she argues that, if we want such an account, then the difference between what is said—which she calls the explicature associated with an utterance—and what is implicated is much less stark than Grice seems to suppose.

The first lesson to learn from this paper is that it is not at all clear how we should draw the distinction between what is said and what is meant. Carston presents a whole battery of examples to show this. Her discussion of "and" is especially intriguing, since that was one of the examples in which Grice was especially interested. In particular, Carston argues that an utterance of "A and B" can assert the existence of all sorts of different relations between A and B: temporal, causal, rational, and so forth. And she argues further that neither the view that "and" is multiply ambiguous nor Grice's view that the assertion of such relations is always an implicature can be sustained. The view she proposes is instead that speakers use various sorts of pragmatic processes, very similar to those that generate implicatures, to "enrich" the linguistically specified content so as to arrive at what is said.
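
Schematically, for the case of "and" (the glosses are illustrative):

    Uttered: "A and B"
    Linguistically encoded content: A ∧ B (bare truth-functional conjunction)
    Candidate enriched explicatures, depending on context:
        A and then B (temporal)
        A and as a result B (causal)

On Carston's view, the hearer arrives at one of the enriched contents by pragmatic inference, but what is thereby recovered is part of what is said, not an implicature.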

More specifically, Carston opposes what she calls the "linguistic direction principle", which claims that any "explicating" process must be in response to something in the linguistic form that calls for it. She sees the more traditional view as supposing that "what is said" must be truth-evaluable and that the only work context can do to fix what is said is whatever needs to be done to get us something truth-evaluable. So, e.g., the reference of a demonstrative has to be determined, since otherwise one has nothing truth-evaluable; but one does not need to find any relation for "and" to express beyond truth-functional conjunction, since that is already truth-evaluable. What do you think of her arguments against this traditional view?

Our main interest will be in the sorts of arguments Carston gives that, e.g, the temporal aspect of certain uses of "and" must be part of what is said. There are four of these:

  • Functional: Carston has views about the different roles the explicature and implicature play, and in particular about how the implicature is generated. These are supposed to imply that the implicature cannot logically entail the explicature. Too little seems to be said here to make it clear why she holds this view and, frankly, it does not seem overwhelmingly plausible on its face. (Can you think of any clear counterexamples to that claim?) So feel free to set it aside if you wish, though it would be nice to hear if anyone has some idea how to justify this restriction, which plays a large role early in the paper.
  • Relevance: Carston argues that relevance plays a central role in communication. In particular, considerations of relevance enter not just into the determination of implicatures but into the determination of what is said, e.g., the resolution of ambiguity, the determination of the reference of demonstratives, and so forth. Claims about relevance thus fuel some of Carston's claims about what is part of the explicature. Here, though, Carston mostly gestures at work by Dan Sperber and Deirdre Wilson, so these arguments may be difficult for us to evaluate. (That is why I said you can just skim or skip those sections.)
  • Negation: Consider Grice's gas station case. Joe says, "Yo, Bill, I'm out of gas!" and Bill answers, "There's a gas station around the corner". Grice says that Bill implicates that the station is open and has gas to sell. Suppose Fred knows that the station is closed. Then it seems clear that Fred cannot say to Joe, "Bill's wrong", thereby meaning that the station is closed. As it's sometimes said, negation cannot "target" the implicated content. (An illustration of how this test bears on explicature follows this list.)
  • Conditionals: There is something similar to be said about conditionals. I'll leave it as an exercise (i.e., feel free to do this in your response) to explain how the "conditional test" is supposed to work.
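
To illustrate how the negation test is supposed to favor enriched explicatures, here is a standard sort of example from this literature (not one of Grice's or Carston's exact cases): Suppose someone says, "It's not true that they got married and had a child; they had a child and got married". Both conjuncts are true on either ordering, so if "and" contributed only truth-functional conjunction, the negation would be unintelligible. Intuitively, though, the negation targets the temporal ordering, which suggests that the ordering is part of what is said, not merely implicated. Contrast the gas station case, where negation cannot target the implicated content.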

Carston uses the "negation test" and the "conditional test" to argue, in a variety of cases, that the explicature is much richer than one might have supposed. As I said before, there is a whole battery of examples here. Which of these seem to you to be the strongest? which the weakest? and why? What strategies do you think might be available for resisting the conclusion that Carston wants to draw: that pragmatic processes play a surprisingly large role in determining what is said?

It's not always clear that Carston's various tests deliver the same result. She argues based on functional considerations that "The park is some distance from where I live" says something non-trivial. But how does it interact with negation and conditionals?

11–13 November

Jason Stanley and Zoltán Gendler Szabó, "On Quantifier Domain Restriction", Mind and Language 15 (2000), pp. 219-61 (PhilPapers, PDF, DjVu, Wiley Online)

Show Reading Notes

Related Readings

  • Jason Stanley, "Making It Articulated", Mind and Language 17 (2002), pp. 149-68 (PhilPapers)
    ➢ A more general presentation and defense of the 'binding argument' that plays such a large role in the paper we are reading.
  • Jason Stanley, "Context and Logical Form", Linguistics and Philosophy 23 (2000):391-434 (PhilPapers)
    ➢ Argues that all context-dependence must be traceable to effects on covert variables.

This paper is concerned with a particular case of the general problem raised by Searle and Carston: quantifier domain restriction. That is, it is concerned with the question how an utterance of a sentence like "Every bottle is empty" comes to express, not the absurd proposition that every bottle in the universe is empty, but some sensible proposition to the effect that every bottle in some particular group G is empty.

Stanley and Szabó begin by distinguishing between descriptive and foundational problems of context dependence. The core descriptive questions are which aspects of the utterance give rise to context sensitivity and what has to be done, exactly, to resolve it. Foundational questions concern how context does whatever needs to be done, e.g., how the value of a demonstrative pronoun is in fact fixed. Stanley and Szabó explain the distinction by reference to an example involving demonstratives, which is worth studying carefully.

I would suggest that this distinction should already go some way towards lessening one's sense of panic in the face of the examples offered by Searle and Carston, on the ground that at least some of what is troubling about those examples concerns the foundational problem, whereas semantics itself need be concerned only with the descriptive problem. How might that suggestion be developed?

Stanley and Szabó then distinguish three ways in which context can affect interpretation.

  1. Syntactic: Context is called upon to resolve both lexical and structural ambiguity. One might think there are also cases in which something less than a sentence is uttered, and a complete sentence has to be reconstructed from the context.
  2. Semantic: Context may be called upon to fix the values of contextual parameters, such as demonstratives and indexicals, but also e.g. to provide a comparison class for an attributive adjective. Note that, as Stanley and Szabó use the term, any contextual effect that affects what is said, in Grice's sense, is semantic.
  3. Pragmatic: Context is of course critical to determining what is implicated by a speaker in making a certain utterance.

How does the distinction between descriptive and foundational questions apply in each of these cases?

With that distinction in place, Stanley and Szabó raise the question which of these roles context plays in the case of quantifier domain restriction. So there are three options.

  1. Syntactic: The "missing material" is essentially elided, so context has to reconstruct the complete sentence that determines what is said.
  2. Semantic: Either (i) there is an unpronounced expression, present in the "logical form" of the uttered sentence, to which context assigns a value; or (ii) the semantic clause for quantifiers somehow introduces a "domain" over which the quantifier is supposed to range.
  3. Pragmatic: What is said in an utterance of "Every bottle is empty" always is the absurd proposition that every bottle in the universe is empty, but a more sensible proposition is usually communicated through pragmatic processes.

To which sort of view do you think Searle or Carston might incline? If none of them, what sort of view do you think has been left out of account? It is also worth checking your understanding here by considering what the relevant options would be in other cases we have discussed.

In §5, Stanley and Szabó criticize the syntactic approach. Their main objection is what they call the "underdetermination" objection, which is that it is very hard to see how context could provide a unique 'restrictor' for each quantificational phrase. I.e., they claim that this view makes the foundational problem nearly impossible. This objection is not developed in much detail, so it would be well worth trying to explore it a bit. Here's one crucial question: How exactly does "context" resolve structural or lexical ambiguity? If context is what resolves it, then would it be possible for someone to utter an ambiguous sentence, fully intending that the sentence should have one particular interpretation, but somehow fail to utter that sentence, since context determined the other interpretation? Might a judicious application of the distinction between descriptive and foundational problems help here? If so, how? and how much?

In §6, Stanley and Szabó argue against the pragmatic approach. The core of their criticism is what has come to be known as the binding argument. Here's a simple example. Consider the sentence:
(*) Every senator is reviled by most voters.
It seems reasonable to suppose that an utterance of (*) could mean that every senator is reviled by most voters in that senator's state, not by most voters in the country. So which voters are in question depends upon which senator is in question. Can you think of other sorts of examples along these lines? Why are such examples supposed to be a problem for the pragmatic view? Obviously, utterances of (*) can implicate almost anything. So why isn't it enough to point out that they can implicate that thing, too? Part of an answer would involve considering:
(**) Every senator is reviled by most voters. So are most representatives.
and noting that this sentence is ambiguous. How so?
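
One schematic way to represent the two readings of (*) (the notation is illustrative, not exactly Stanley and Szabó's):

    Unbound: [every x: senator(x)] [most y: voter(y) & y ∈ D] (x is reviled by y), with D a single, contextually supplied domain.
    Bound: [every x: senator(x)] [most y: voter(y) & y ∈ f(x)] (x is reviled by y), with f mapping each senator to the voters of that senator's state.

The thought, roughly, is that pragmatic processes operate on a complete proposition and so can shrink the domain once and for all; but on the bound reading the restriction covaries with the value of "x", which seems to require something variable-like in the interpretation of the quantified phrase itself.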

Finally, in §7, Stanley and Szabó discuss semantic approaches, considering three versions of the view:

The quantifier domain is provided by context in much the way a "domain" is provided in an interpretation. The argument against this view is that different domains can be needed for different quantifiers. Can you come up with better examples of that sort of phenomenon?

A sequence of quantifier domains is provided by context (much as context provides a sequence of objects to be the referents of various demonstratives). The objection to this view is that it too falls to the binding argument. How so?

The quantifier domain is represented explicitly in the syntactic structure (logical form) of the sentence, though it is not pronounced. We will discuss this view a bit in class. You do not need to worry too much about the different implementations of this view and the arguments about which should be preferred, on pp. 254-8.

How plausible does this view seem?

15 November

Discussion Meeting

Revised second short paper due

18 November

Emma Borg, "Minimalism versus Contextualism in Semantics", in Gerhard Preyer and Georg Peter, eds., Context-Sensitivity and Semantic Minimalism (Oxford: Oxford University Press, 2007), pp. 339-60 (PhilPapers, PDF, DjVu)

Show Reading Notes

Related Readings

  • Herman Cappelen and Ernie Lepore, Insensitive Semantics: A Defense of Semantic Minimalism and Speech Act Pluralism (Malden MA: Wiley Blackwell, 2005)
    ➢ Argues for a view similar to Borg's, but a bit less radical.
  • Emma Borg, Minimal Semantics (Oxford: Oxford University Press, 2007)
    ➢ A book length treatment of Borg's ideas.

This paper is one in a volume of essays responding to and commenting upon Herman Cappelen and Ernie Lepore's book Insensitive Semantics, in which they argue for the view now known as semantic minimalism. This is the view that, apart from the obvious exceptions ("I", "here", "this", etc), every sentence expresses a unique proposition, so that context-sensitivity is limited to those exceptions. Borg also defends a form of this view, though an even stronger one. Our interest is in how carefully Borg sets out the different positions and how she distinguishes the sorts of views Carston and Stanley defend. She does not argue for any particular view of her own in this paper.

Borg identifies four sorts of arguments against minimalism, that is, in favor of the view that some particular expression is context-sensitive.

  1. Context-shifting Arguments: These claim that some particular sentence, e.g., "Michael is tall", can express a truth in some contexts, but not in others, even if the relevant facts have not changed.
  2. Incompleteness: These claim that particular sorts of sentences, such as "Mary is ready", rarely, if ever, express propositions on their own, but require some supplementation.
  3. Inappropriateness: These claim that, although there is a proposition that certain sorts of sentences, such as "Every bottle is empty", could always express, they cannot express that proposition, because it is too obviously not what speakers mean.
  4. Indeterminacy: These arguments claim that, in certain sorts of cases, even the thought expressed is indeterminate. It's much less clear what an argument of this type would be like, and Borg does not give any examples. But consider a sentence like "The apple is green", and suppose it clear which apple is in question and that we are talking about color (not ripeness). Is it supposed to be green on the outside or on the inside? Is it OK if it's a red apple painted green? (Note what that use of the word "red" meant.)
Part of the point Borg wants to make is that some of these arguments, if accepted, lead to more radical departures from minimalism than others.

Following C&L, Borg distinguishes two sorts of contextualism: radical and moderate. C&L had characterized the difference in terms of the scope of context-sensitivity, so that more moderate views regard fewer terms as context-sensitive. As Borg notes, however, this is not a particularly illuminating characterization. Rather, it is one thing to hold that there are terms outside the "basic set" ("I", "here", "tomorrow", and the like) that are context-sensitive in the same way that those terms are. It is an entirely different thing to hold that "there are forms of context-sensitivity that are not capturable on the model of...the Basic Set" (p. 344).

Thus, Borg regards the crucial questions as being: What are the mechanisms of context-sensitivity? Can the context of utterance affect semantic content even when such action is not demanded by the syntax of the sentence? Radical contextualists think it can; moderates think it cannot. The moderate view is thus much closer in spirit to minimalism, since it regards all context-sensitivity as traceable to syntactic 'triggers'. (So moderate contextualists accept a form of the 'linguistic direction principle', or what Carston had earlier called the Isomorphism Principle.) The disagreement between these views concerns not what kind of phenomenon context-sensitivity is, so to speak, but only how extensive that phenomenon is. Radical contextualists, on the other hand, think we need "an entirely different picture of the relationship between semantics and pragmatics" (p. 346), i.e., they think that there is something fundamentally wrong with the model of context-sensitivity that informs minimalism and moderate contextualism.

Using this distinction, Borg then defends moderate contextualism against a charge made by C&L. In their book, they had argued that moderate contextualism collapses into radical contextualism: Once one allows for the possibility of context-sensitivity outside the basic set, using the sorts of arguments mentioned above to argue for that claim, one will find it difficult not to accept that context-sensitivity is all but ubiquitous. But Borg argues that moderates can have reasons to limit the scope of context-sensitivity. Her observation is that if what distinguishes moderate from radical contextualism is a view about the mechanisms of context-dependence (and not just about how many expressions are context-dependent), then no such argument can possibly work. But now there's another question to ask: If the distinction is one about 'mechanisms'—if, in particular, moderate contextualists accept the linguistic direction principle—might that not provide some way for moderate contextualists to resist the idea that all expressions turn out to be context-dependent?

Borg spends the remainder of the paper offering a characterization of minimalism. It has four parts.

  • Sentences (not just utterances of them) typically do express propositions (modulo any obviously context-dependent elements they might contain).
This most obviously matters with examples like "Every bottle is empty" or "Jill is ready", though what's weird about it differs in these cases. How so?
  • There is something special about the expressions in the 'basic set' and any extension of context-dependence beyond these is momentous.
What reasons does Borg give, in the first paragraph of §3.2, in favor of this claim? How good are those reasons? (The rest of §3.2 discusses a sectarian dispute between Borg and C&L and can be skipped.)
  • Minimalists draw a strong distinction between semantic content and speech act content, where the latter corresponds to what is said in Grice's sense.
Most of Borg's own discussion is, again, concerned with a disagreement between her and C&L about what role this 'minimal proposition' is supposed to play. But there's a deeper worry here for both Borg and C&L. If what's said when someone utters "Jill is ready" isn't determined by the minimal proposition allegedly expressed, then what does determine it? In practice, it would seem to be the sorts of pragmatic processes (enrichment, etc) to which radical contextualists draw attention. So it seems as if it turns out that, if our interest is in what's said, then minimalism looks to be closer to radical contextualism than one might have thought it should.
  • "...[T]here is an entirely formal route to semantic content".
Borg distinguishes two different forms of this view. On the weaker, it amounts to the linguistic direction principle. On the stronger, it implies that the 'semantic content' of a sentence should be wholly determined by its formal features and so independently of any contextual facts. This is an incredibly strong view. I take the main question about it to be: Why should one think, as Borg seems to think (see e.g., fn. 42), that there needs to be something propositional or truth-evaluable that is formally determined in this way?

20 November

Ishani Maitra, "How and Why To Be a Moderate Contextualist", in Gerhard Preyer and Georg Peter, eds., Context-Sensitivity and Semantic Minimalism (Oxford: Oxford University Press, 2007), pp. 112-32 (PhilPapers, PDF, DjVu)

Show Reading Notes

Related Readings

  • Richard Kimberly Heck, "Semantics and Context-Dependence: Towards a Strawsonian Account", in A. Burgess and B. Sherman, eds., Metasemantics: New Essays on the Foundations of Meaning (Oxford: Oxford University Press, 2014), pp. 327-64; originally published under the name "Richard G. Heck, Jr" (PhilPapers)
    ➢ Argues for a view similar to Maitra's, though with a focus on demonstratives.

Maitra first takes up a topic also discussed by Borg: what divides Moderate from Radical Contextualism. Like Borg, she starts with Cappelen and Lepore's idea that the issue concerns the "extent" of context-sensitivity. But Maitra suggests that we should understand this in terms of:

  • The Meaning Question: How much does the "standing meaning" of an expression constrain the content it can have on a given occasion of use?
  • The Context Question: How rule-governed is the determination of content from context?
The thought, then, is: "The more constrained and rule-governed the semantic contents of an expression are on a given Contextualist view, the less context-sensitive that expression is taken to be" (p. 116). This is an important point: One does not have to choose between the different sorts of views here once and for all, but can hold different views about different sorts of expressions. E.g., almost everyone agrees that "I" is much more constrained than, say, "that" and it seems not unreasonable to think that determining the reference of "I" is more rule-governed than determining the reference of "that".
It does not seem particularly important to be able to compare all possible views along these dimensions: see pp. 116-7. It's enough to have distinguished what the right questions are.
Maitra mentions the term "character", which was introduced in work on this kind of topic by David Kaplan. In this sense, 'character' is supposed to model 'standing meaning', that is, the character of an expression is (roughly) the meaning that expression has in its own right (as opposed to on any given occasion). Kaplan treats character as a function from contexts (circumstances in which an utterance is made) to 'contents', so the character of "you", for example, would be a function from contexts to whoever is denoted by "you" in that context (the 'addressee', as it is sometimes put).

Maitra goes on to argue, in §3, that this way of categorizing the different views makes the question how many expressions are context-sensitive not the crucial question. What matters is the way in which they are context-sensitive. There are thus some obvious similarities between how Maitra and Borg characterize these views. Are there differences as well, or do they really amount to the same idea? If there are differences, is there some way to put the two views together to get a stronger one?

The main focus of the paper is on what Maitra calls the "Miracle of Communication Argument" (MCA) against Radical Contextualism. The worry here is that, if pragmatic processes affect semantic content as dramatically as Radical Contextualists propose, then, since almost any piece of information one has can prove relevant, it is obscure how speakers and hearers ever manage to converge on a particular interpretation of some bit of language.

Maitra raises two sorts of objections to this argument.

  1. Something along these lines seems obviously to be true of implicature, and yet we do manage to communicate by implicature. Hence, it looks as if there must be some explanation to be given of how this works. Any idea what that might be? (How specific are implicatures?)
    Maitra argues that something similar is true, on C&L's own view, of 'what is asserted', which is what Borg called "speech act content". You need not worry about this part of the argument.
  2. Something along these lines seems to be true of uncontroversially context-sensitive expressions. Indeed, there seem to be very, very few expressions in the "basic set" whose content on a given occasion of utterance is completely determined by rule: "I", "today", "tomorrow", and "yesterday" seem plausible candidates. (These are the so-called 'automatic indexicals'.) But neither "here" nor "now" nor "we" nor "you" has its reference completely determined by rule. Can you construct examples to illustrate this point? (Maitra gives one for "we", but it is not developed much.)
Still, Maitra thinks (or at least concedes) that the MCA does pose some sort of challenge to Contextualism. In particular, we need to "explain why hearers are generally more confident about what is communicated via semantic contents, than about what is communicated in other ways" (p. 125), such as implicature.

Maitra argues in §5 that an appropriately Moderate form of Contextualism has an explanation to offer. Focusing on comparative adjectives, like "tall", she suggests that:

  1. Their standing meaning highly constrains their content, since the only locus of variation is in the comparison class. I.e., "tall" always means: tall for an F, and the only issue is what F is. (A schematic rendering follows below.)
  2. It might be possible to say something fairly definite about how different contexts make "natural" comparison classes available.
How does Maitra develop this latter idea? What does she mean by "natural" readings of sentences? (How might this compare to an example we discussed earlier, according to which there are 1000 possible concepts humans can have, and they are all innate?) How is this view supposed to answer the MCA?
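
To fix ideas about the first of these points, here is one schematic way of rendering the comparison-class view in a broadly degree-theoretic notation (my gloss, not Maitra's own formalism):

$[\![\text{tall}]\!]^{c}(x) = \text{True iff } \mathrm{height}(x) > s(F_c)$

where $F_c$ is the comparison class supplied by the context $c$ and $s(F_c)$ is the threshold for counting as tall relative to that class. On this rendering, standing meaning fixes everything except $F_c$: context's only job is to supply the comparison class, which is why Maitra can say that the content of "tall" is highly constrained.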

On p. 128, Maitra considers an objection that a radical contextualist might reasonably bring: Even once we know what the comparison class for "fast", say, is, "there are many ways of being fast for a snail". So the worry is that, if we just specify the comparison class as being snails, we have still not specified a truth-evaluable content. Maitra offers two replies, the first of which is ad hominem. What is the second reply and how effective is it? (It might help here to recall Borg's discussion of 'incompleteness arguments'.)

Finally, Maitra considers the question whether a Contextualist might concede that, since so much information is potentially relevant for determining the comparison class, say, there will be failures of perfect communication, but then respond that communication does not need to be perfect to be successful. She does not really develop an example to illustrate this possibility. Can you construct one, say, involving "fast" or "tall"?

Slurs
22 November

Christopher Hom, "The Semantics of Racial Epithets", Journal of Philosophy 105 (2008), pp. 416-440 (PhilPapers, PDF, PDCNet)

NOTE: Please be advised that, within this literature, it is customary for racial epithets and other slurs to be written out. So you will, for example, see the n-word written out, and not just referred to as "the n-word". Elisabeth Camp addresses the question why people do this in her paper (where she does so, too), writing:

I am going to mention (though not use) a variety of slurs in contemporary use. This will offend some readers, for which I apologize. But I believe we can understand slurs' actual force only by considering examples where we ourselves experience their viscerally palpable effects. I hope the offense is offset by a commensurate gain in understanding.

We will follow this practice in class, too, and I shall follow it below, as well, though we shall also try not to overdo it. I shall usually use as an example the word "faggot", since it is an epithet that was often directed at me in my younger days. My apologies, too, then, to those for whom that word carries special force.

Show Reading Notes

Related Readings

  • Christopher Hom and Robert May, "Moral and Semantic Innocence", Analytic Philosophy 54 (2013), pp. 293-313 (PhilPapers)
    ➢ Develops more of the linguistic details of Hom's view.

The central question of Hom's paper is how we should think about the derogatory content of racial epithets. In fact, the discussion seems to apply more broadly, to other slurs, such as "faggot". Is the derogatory content of that term part of what is said when someone uses it? Or is it just part of what is meant?

Hom defends the 'semantic' view, which, to a first approximation, is that "faggot" means: gay man (that part is the 'non-pejorative correlate') and despicable because of it. Such an account faces two main challenges: (i) explaining why only some uses of epithets derogate (in particular, how there can be 'appropriated' uses); and (ii) explaining how different epithets can have different force (e.g., why "faggot" is worse than "fairy").

In §II, Hom considers (and criticizes) various 'pragmatic' accounts of epithets, though these are not all 'pragmatic' in the same sense. The first of these is a 'contextualist' account that would hold that the meaning of an epithet varies according to the context. (Note that this view could be understood as either concerning what is said or what is meant.) Hom's complaint about it is that it doesn't specify a 'rule' that would allow us to predict the meaning of a given utterance of an epithet. Generally speaking, contextualists have not been worried by such complaints. Hom does give some reasons they should be worried here. How good are those reasons?

The second pragmatic strategy borrows from Frege, who suggested that synonymous words sometimes differ in 'tone' or 'coloring'. My own view is that what Frege meant here is probably best elaborated in terms of conventional implicature, which is the third view Hom discusses. Here, Hom is trying to see if there is a different way of understanding the view. So, to a large extent, this discussion can be skimmed.

As for the third view just mentioned, Hom has several criticisms to make of it. The first is that, if the derogatory content of a word is a conventional implicature, then it is not 'cancelable' and so would have always to be present. But Hom claims (and will argue later) that there are non-derogatory uses of words like "faggot" (and not just appropriated uses). Another is that, on this view, "Male homosexuals are faggots" is not only true but analytic, since it is synonymous with "Male homosexuals are gay men". Moreover, pairs like:

  1. Alex said that Fred is a gay man.
  2. Alex said that Fred is a faggot.

might seem to have different truth-values although, again, on the view we are considering, one might expect them to be synonymous. Do you have any ideas about what a defender of the pragmatic view might say in response? (Compare: "Fred believes that a fortnight is a fortnight" and "Fred believes that a fortnight is two weeks".)

In §III, Hom lays out a number of facts he thinks a good theory of epithets will need to explain. Many of these are fairly obvious, but a couple merit special attention. What Hom calls "derogatory autonomy" is the fact that the power of an epithet does not seem to depend upon a speaker's own attitudes but to be some kind of social fact (possibly a convention, in Lewis's sense). What he calls "evolution" is related: the fact that the derogatory force of an epithet can change over time.

The most important of these, though, is the last: Hom claims that there are non-derogatory, non-appropriated uses of epithets. Hom gives a number of examples to support this claim. How compelling do you find them to be? (It's worth reformulating these for yourself in terms of other epithets. It's also worth considering what might seem to be minor variants of these, such as "Is Naomi Osaka a chink?" or "Naomi Osaka is not a chink" or "Are Japanese people chinks?")

In §IV, Hom develops his positive view, which he calls "combinatorial externalism". Hom's view is that the derogatory force of a racial epithet, at a given time, and in a given society, derives, in large part, from racist ideologies and racist practices prevalent in that society at that time. So his view is that what the epithet means is: a person who is a member of a certain group and who, as such, ought to be treated in certain ways (that is the practice part) because members of that group have certain properties (that is the ideology part). It's crucial to Hom's view that, although this content is always present when the epithet is used, uses of it are not always derogatory. For example, "Fred is not a faggot" denies that Fred ought to be mistreated because he is gay.

In the last section, Hom argues that his view does better with the desiderata listed in §III than the other views he considered. I'll leave it to you to comment upon these and to raise any objections to Hom's view that might seem to you to emerge from that. (I find it hard myself to understand what Hom wants to say about appropriation. If you do understand it, please feel free to explain.)

25 November

Elisabeth Camp, "Slurring Perspectives", Analytic Philosophy 54 (2013), pp. 330-49 (PhilPapers, PDF, Wiley Online)

Camp also deploys the notion of 'perspective' in her account of metaphor. See the optional reading.

Show Reading Notes

Related Readings

  • Elisabeth Camp, "Metaphor and That Certain 'Je Ne Sais Quoi'", Philosophical Studies 129 (2006), pp. 1-25 (PhilPapers)
    ➢ Develops more of Camp's positive view of metaphor.

Camp offers in this paper an account of slurs in which they encode what she calls 'perspectives'. She here attempts to remain neutral on how such perspectives are 'encoded': whether as part of what's said or in some other way. Her claims here are meant only to concern, as she puts it, "what the 'other' component of slurs is" (p. 331), besides what is present in the 'non-pejorative correlate'.

Camp first considers some alternatives to the view she wants to defend. The first is a kind of expressivism, according to which the use of a slur expresses (in the sense of moral expressivism) a negative attitude about the target group. Camp lodges two objections against this view. The first is that not all uses of slurs seem to express such attitudes. I find these examples somewhat difficult to analyze, but it's important to keep in mind here that, at least as Camp understands the view she is criticizing, uses of slurs are supposed to express negative attitudes. This needs to be distinguished from whether use of a slur reveals such attitudes.

On the other end are views like Hom's. Camp does a nice job explaining why there is a sense that someone who uses a slur (in the typical way) is making a mistake. But she notes that it is difficult, at best, to say exactly what, besides membership in a given group, the use of a slur asserts.

One important observation here (which is not original to Camp) is that the derogatory aspect of slurs seems to 'project out' from negation, conditionals, and many other constructions. I.e., "John is not a faggot" and "If John is a faggot, then..." seem every bit as offensive as "John is a faggot". This is a problem for views like Hom's: It seems as if I should be able straightforwardly to negate the derogatory component, if it is part of what is said. I.e., "John is not a faggot" should mean something like: It is not the case that John is a gay man and, as such, is an appropriate target of discrimination. But it clearly does not.
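
To make the scope point vivid (the notation is mine, not Camp's or Hom's): abbreviate "John is a gay man" as $G(j)$ and "John, as such, is an appropriate target of discrimination" as $D(j)$. On a view like Hom's, "John is a faggot" asserts $G(j) \wedge D(j)$, so its negation should assert

$\neg(G(j) \wedge D(j))$

which is true so long as either conjunct fails, and so ought to be assertable, without any offense, by someone who simply rejects $D(j)$. That the negated sentence nonetheless derogates is precisely what 'projection' comes to here.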

In §3, Camp begins developing her view, "that slurs are so rhetorically powerful because they signal allegiance to a perspective: an integrated, intuitive way of cognizing members of the targeted group" (p. 335). §3.1 is devoted to outlining what a perspective is: a conception, usually implicit, of what features of members of the target group (in this case) are most important, in the sense that they explain or ground other features. (You might think about this in connection with implicit bias.) Camp offers a number of suggestions about how such a perspective might influence both thought and affect. She also insists that "getting a perspective", that is, understanding what it is, "even temporarily, requires actually structuring one's thoughts in the relevant" way (p. 336). This will turn out to be one of those cases where a philosopher says something, almost in passing, that one might not even have noticed but which turns out to be really important.

In §3.2, Camp applies this notion to slurs, arguing that use of a slur "signals a commitment to an overarching perspective on the targeted group as a whole" (p. 337) or, at least, to the claim that being a member of that group is an important feature of a member of it, one that explains or grounds other features. It is then because the relevant perspective evinces disrespect for members of the target group that use of the slur derogates.

In §3.3, Camp argues that the connection between slurs and perspectives is, in some sense, semantic, i.e., that the associated perspective is part of the meaning of the slur itself. The main argument here is simply that use of that word, as opposed to the neutral counterpart, seems to 'insert a way of thinking about the group into the conversation'. Precisely how that is part of meaning, Camp does not say, but she does identify it as 'not-at-issue content', one standard example of which is conventional implicature (so one might, for the moment, take that to be the view).

Finally, Camp suggests that we can explain, in such terms, "why the [slurs] produce a feeling of complicity in their hearers..." (p. 343). She distinguishes two forms of this. First, someone who hears the slur is, in effect, forced to adopt, if only temporarily, the perspective associated with it; this is 'cognitive' complicity. Second, the utterance of the slur is a reminder of the social structures that give it weight; this is 'social' complicity. And silence equals agreement. Do both of those points seem right?

If you read a lot of the footnotes, you can see that the view that Camp expresses in this paper is a successor to a view she held earlier. It seems to me that, in some ways, it is actually easier to understand her current view if you see it in that light. So let me fill in this part of the story.

Camp's original view seems to have been as follows. Each slur is associated with a stereotype of the members of the group to which the slur purports to apply. I am guessing, given what else I know about Camp's views, that the notion of "stereotype" she had in mind was not (just) the ordinary one, but that it at least had elements of the notion going by that name that psychologists sometimes discuss. Stereotypes, in this sense, are supposed to play a role in how we sort objects into categories, how we organize our thought about them (what's typical, what goes with what), and so forth. We might then think of the "pejorative aspect" of a slur as, e.g., signalling a commitment to the appropriateness of the stereotype. And one can now imagine relatively easily what kind of story one might tell about why the use of slurs typically derogates the targeted group.

It would seem, however, that Camp became convinced that that view is too narrow: Some slurs, the claim seems to have been, are not associated with stereotypes. So Camp reformulates the view so that the role previously played by stereotypes is now played by what she calls 'perspectives'. These play a similar psychological role, and one might think of stereotypes as a kind of perspective. So much of the structure of the original view is preserved. But one might wonder if something else has been lost: Is it a good thing or a bad thing if these 'perspectives' can be so thin that there is little more to them than that a person's being a member of the 'targeted group' is supposed to be regarded as an important fact about them (e.g., important to who that person is)?

27–29 November

No Class: Thanksgiving Holiday

2 December

Luvell Anderson and Ernie Lepore, "Slurring Words", Noûs 47 (2013), pp. 25-48 (PhilPapers, PDF, Wiley Online)

Show Reading Notes

Related Readings

  • Daniel Whiting, "It's Not What You Said, It's How You Said It: Slurs and Conventional Implicatures", Analytic Philosophy 54 (2013), pp. 364-77 (PhilPapers)
    ➢ Defends a view in the vicinity of Camp's, on which the pejorative content of slurs is conveyed by conventional (rather than conversational) implicatures.

One welcome fact about A&L's approach is how explicit they are about what they want to explain: the offensiveness of slurs. (We'll return to this point.) Their goal is (i) to argue against all views according to which the offensiveness of slurs is to be explained in terms of some offensive content that the slur conventionally expresses (in one way or another) and thereby (ii) to motivate their alternative view according to which the offensiveness of slurs is to be explained simply in terms of the fact that they are taboo, i.e., that their use is prohibited.

Section 2 considers whether (e.g.) "faggot" and "gay man" 'express the same concept', i.e., are synonymous. (So you can think of them as arguing against Hom here.) One complaint that A&L make here is that everything that anyone has ever said about how these words differ in meaning would make "faggot" and "fairy" have the same meaning, even though the former is much more offensive than the latter. (They give a different example.) A second worry is that, on Hom's view (e.g.), it seems as if
(a) Elton John is not a faggot.
should (assuming the negation is 'normal' and not 'meta-linguistic') mean something like: Elton John is not a gay man who should be discriminated against for that reason. That seems straightforwardly true and, indeed, laudatory, whereas (a) is in fact every bit as offensive as "Elton John is a faggot". (The case where we are dealing with meta-linguistic negation is more complicated.)

Section 3 addresses issues about embedding and speech reports. The point here is that, if "faggot" had a different meaning from "gay man", then something like:
(b) Eric said that Elton John is a faggot.
should, one might have thought, simply attribute such an assertion to Eric. But A&L argue that it is the person who utters (b) who slurs: (b) need not attribute a slur to Eric (though it might suggest that). The data here seem very muddy indeed.

Section 4 discusses presuppositional views. There are many cases in which use of a term seems to 'presuppose' certain facts. Thus, even the question "Has Bill stopped smoking?" 'presupposes' that Bill has smoked in the past; the same goes for "Bill has not stopped smoking". But presuppositions do not always 'project out', and one case where they do not is speech reports, as A&L note. So if the use of "faggot" presupposes that gay men are vile, then one would expect (b) above not to presuppose that, and so not to be offensive. But it is.

And here, I think, an interesting issue emerges. In a footnote that Whiting seems to have added late to the optional paper mentioned above, he writes:

Though I do not doubt that the mere mention of a slur can cause offence, I doubt that merely to mention a slur is really to derogate. (p. 369, fn. 11)

Whiting's point is that the offensiveness of slurs is not what he is trying to explain. It is, rather, the way in which slurs are used to derogate (i.e., belittle or demean). That is quite different. So what should we be trying to explain? There's a similar issue here about Camp: Some of her examples of 'non-derogatory' uses are quite similar to A&L's (28)-(30).

Sections 6-7 consider the view that slurs conventionally implicate offensive content. A&L note several advantages of this view: Most importantly, it explains why the offensive content of slurs tends to 'project out' so well, indeed, always. But they argue that the view then over-generates, since there are non-offensive uses of slurs. On the other side, though, if what makes a slur offensive is something about its meaning, then it is puzzling why the mere mention (quotation) of a slur might be offensive, since its meaning is irrelevant when it is just mentioned. Here again, though, one might wonder about the contrast just mentioned, between offense and derogation.

A&L present their positive view in section 8: "slurs are prohibited words" (p. 38), and the group who prohibits the use, in such cases, is usually the 'target' group. So the view is that "faggot" is a slur, and its uses are offensive, because gay men have deemed the word prohibited. But is it really true that the N-word is offensive because someone has prohibited its use? (Whiting raises this question in the optional reading.) It wasn't even very long ago that lots of white people used the N-word routinely. They thought that was perfectly fine, because they were racists and actually did think black people were inferior, etc. If there was a "prohibition" on use of the N-word, that would have been news to them. An objection that seems to me more principled, but in much the same area, is that Prohibitionism gets the explanatory order backwards: Slurs don't offend because they are taboo; they are taboo because they offend (better: because they derogate and therefore offend).

4 December

Renée Jorgensen, "The Pragmatics of Slurs", Noûs 51 (2017), pp. 439-62; published under the name Renée Jorgensen Bolinger (PhilPapers, PDF, Wiley Online)

Topic for final paper due

Jorgensen is also a talented artist, as you can see here. She graciously agreed to my using her painting of Frege for the cover of my book Modes of Representation.

Show Reading Notes

Related Readings

  • Geoffrey Nunberg, "The Social Life of Slurs", in D. Fogal, D. W. Harris, and M. Moss, eds., New Work on Speech Acts (Oxford: Oxford University Press, 2018), pp. 237–295 (PhilPapers)
    ➢ If I had a view on slurs, this would probably be it. But the paper is very long, 73 pages, so you will see why we are not reading it. If you find yourself interested in these topics, though, then you really should read it. It is close in some ways to Jorgensen's view, and in other ways to Anderson and Lepore's, in so far as it sees slurs as largely a sociological rather than a linguistic phenomenon. But it is much more self-conscious about that fact. (Maybe there is a lesson here about the limits of both philosophy of language and linguistic theory.)

In this paper, Jorgensen offers an account of the offensiveness of slurs that is entirely grounded in pragmatics. It is important to understand that her goal is limited in precisely that respect: It is only the offensiveness of slurs for which Jorgensen is aiming to account. Slurs may have many other features, and she even mentions some of them on p. 2, and then again in the conclusion. Perhaps a semantic account is needed to explain some of those features. But Jorgensen is claiming that no semantic story is required to account for offensiveness. In that respect, her view agrees with Anderson and Lepore's, though the details are of course different.

Section 1 makes a number of useful distinctions concerning the notion of offense itself. Jorgensen distinguishes actual offense from (morally) warranted and (epistemically) rational offense, and notes that the hearer's offense may have its source either in the speaker's intention to offend, or in the inappropriateness of a remark (e.g., if I were to use (not mention) the word "fuck" during class), or in various associations the word has. The latter are what matter most to Jorgensen. As she notes, almost all views regard there as being some such association. Her idea, basically, is to take that association as a basic sociological fact, and use it to explain offensiveness.

The central idea behind Jorgensen's account is that the "offense-generation profile" of slurs is strikingly similar to that of "rude" expressions, which include such terms as "damn", "bastard", and "motherfucker". So, she claims, we ought to seek an account of offensiveness that covers both rude terms and slurs. In this case, too, Jorgensen claims (drawing on work on impoliteness) that there is, simply as a matter of empirical fact, a significant correlation between the use of such terms and the holding of certain sorts of attitudes (e.g., anger, frustration, etc.). If this correlation is widely enough known, then the use of a particular expletive may reliably signal that the user holds those attitudes.

Jorgensen's account of slurs is similar. To borrow some language from Geoffrey Nunberg (in the optional paper mentioned above), the idea is that it is not that racists use slurs because they are offensive but that slurs are offensive because they are the words racists use. For example, someone's using the term "faggot" to refer to gay men is reliably correlated with their holding hateful attitudes towards gay men, and that fact is widely known (as are the exceptions). To use that term, rather than "gay man" or some other non-pejorative equivalent, thus 'signals' that one holds such attitudes, precisely because that correlation is widely known.
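
A minimal way to model the signaling idea (my gloss; Jorgensen's own discussion is informal) is probabilistic. Where $A$ is the relevant hateful attitude, $w$ is the slur, and $w^*$ is its neutral counterpart, the claim is that

$P(A \mid \text{speaker chose } w) \gg P(A \mid \text{speaker chose } w^*)$

and that this inequality is widely known. A speaker's choosing $w$ when $w^*$ was available thus licenses a strong inference to $A$, and that, roughly, is what it is for the choice to 'signal' the attitude.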

Note that 'signals' in Jorgensen's sense need not be conventional in Lewis's sense (Lewis discusses 'signals' extensively in Convention), though Jorgensen does speak of some of them becoming 'conventionalized'. It's a nice question whether mere correlation is sufficient here or whether we need a stronger notion. As already noted, it is essential to Jorgensen's account that the correlation be known, and one might now wonder whether it needs to be commonly known, in Lewis's sense, at least for certain purposes.

One question worth pursuing here is what work the "contrastive" part of Jorgensen's account actually does. If there's a strong connection between use of a certain term and the holding of certain attitudes, why can't mere use of that term signal that one has such attitudes, whether or not there is some alternative term one might have used instead? I do not have an example to hand, but the idea would be that there is some group that one might otherwise have had no need to pick out at all, and some other group decides it needs a term for them, maybe because its members hold negative attitudes about that group. In that case, it seems to me, the term would be a slur, even if there is no neutral equivalent. A possible example would be "TERF" (short for: trans-exclusionary radical feminist). Some people targeted by that term have claimed that it is a slur, and they have, for that reason, introduced other terms by which they might be characterized.

In section 4.2, Jorgensen offers an explanation of why the use of slurs is offensive. What is that account? How convincing do you find her explanation of why the use of the N-word "by upper-class black males, particularly if they happen to be hip-hop artists" is not offensive?

In section 5, Jorgensen argues that her account provides a good explanation of the facts about offense she had articulated earlier. Perhaps the most striking of these is in section 5.4, where she discusses speech reports and quotation. How important does the 'contrastive' aspect of Jorgensen's account seem in this case? One thing Jorgensen does not really explain, however, is why slurs are such an appropriate vehicle for derogation (which is a focus of many other accounts). Do you have any idea what she might say here?

There's a very interesting discussion of the uses of slurs in fiction, theater, and the like in section 6. I'll leave it to you, though, to comment on it should you wish to do so.

6 December

Discussion Meeting

12 December

Final paper due

Richard Kimberly Heck, Department of Philosophy, Brown University