Philosophy 1760: Syllabus

Note: You may download the original syllabus as a PDF. The syllabus may (and probably will) change during the semester. The version here should always be current.

Readings

As well as a list of readings and such, this page contains links to the various papers we shall be reading.1 The files are usually available in two forms. There are (i) a DjVu file and (ii) a PDF file. Why both forms? They are intended for different uses.

There is another advantage to DjVu. Because DjVu is a file format specifically designed for scanned text, the DjVu encoder produces files that are typically much smaller than the corresponding PDFs. For example, the PDF for Davidson's "Truth and Meaning" is 1.2 MB; the DjVu, which was created from the PDF, is 490K, less than half the size, and that includes the embedded text, for searchability. The contrast is even greater in other cases.

To view the PDFs, you will of course need a PDF reader. For the DjVu files, you will need a DjVu reader. Linux users can likely just install the djvulibre package using their distro's package management system. There are also free (as in beer and as in speech) readers for Windows and Mac OS X. If you follow those links, you will see a list of files you can download. Just download the most recent one. (Do not download the file mentioned above the list of files as the "latest version". That is source code.) And there is a browser plugin for Google Chrome that should work on any OS.

Another option is Okular, which was originally written for the KDE Desktop Environment on Linux but which can now be run, experimentally, on Windows and OS X, as well. A list of other DjVu resources is maintained at djvu.org. There are also DjVu readers available for Android and whatever proprietary garbage other folks are peddling these days. Go to Play Store or whatever to find them.

The program I've used to convert PDFs to DjVu is a simple Bash script I wrote myself, pdf2djvu. It relies upon other programs to do the real work and should run on most varieties of Unix.
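For the curious, here is a minimal sketch of how such a conversion pipeline might look. The tools named (poppler's pdftoppm and DjVuLibre's c44 and djvm) are real, but this is an illustrative reconstruction under those assumptions, not the actual pdf2djvu script used for this course.

```shell
# Sketch of a PDF-to-DjVu pipeline. Assumes poppler (pdftoppm) and
# DjVuLibre (c44, djvm) are installed; not the author's actual script.
pdf_to_djvu() {
    pdf="$1"
    out="$2"
    tmp=$(mktemp -d)
    # Render each PDF page as a PPM image at 300 dpi.
    pdftoppm -r 300 "$pdf" "$tmp/page"
    # Encode each page image as a single-page DjVu file.
    for ppm in "$tmp"/page*.ppm; do
        c44 "$ppm" "${ppm%.ppm}.djvu"
    done
    # Bundle the per-page files into one multi-page DjVu document.
    djvm -c "$out" "$tmp"/page*.djvu
    rm -rf "$tmp"
}
```

A real script would also run an OCR pass to embed the searchable text layer mentioned above.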


Class Schedule

Date | Readings, Etc.
27 January Introductory Meeting
Literal Meaning
29 January

H.P. Grice, "Meaning", Philosophical Review 66 (1957), pp. 377-88 (DjVu, JSTOR)

Grice has two main goals in this paper. The first is to distinguish 'natural' from 'non-natural' meaning. Grice offers several tests that he claims separate them. How good are these tests? Are some better than others? Give some examples of your own of both kinds of meaning. Are there cases of meaning that don't seem to fit either of Grice's categories?
Grice's second goal is to offer an account of non-natural meaning. What is Grice's account? How does he develop and motivate it? How good is the account? Can you think of any cases that pose a problem for it? (There are plenty!)
Grice finishes the paper by making some remarks about the notion of intention that is deployed in his account. What is the purpose of those remarks?

1 February

P. F. Strawson, Introduction to Logical Theory (London: Methuen, 1952), sections 3.2 and 7.1 (DjVu, PDF)

Feel free to focus on the material from Chapter 3.
For a reply from the "formal logicians", see W. V. O. Quine, "Mr Strawson on Logical Theory", Mind 62 (1953), pp. 433-51 (DjVu, JSTOR)

Strawson is critical of the usual identifications between the stipulated meanings of the logical symbols for conjunction and the like and certain words of ordinary language. In fact, Strawson seems to think that none of the usual identifications are correct, except possibly the one between "~" and "not".

  • Strawson raises several objections to the identification of "∧" (which he writes as ".") and "and". Which of these is most important? What kinds of replies might one make to these objections?
  • Strawson raises lots of objections to the identification of "⊃" with "if...then...". Can you figure out some way to characterize and order these? Which of them seem most worrying? Strawson definitely plays fast and loose with the distinction between indicative and subjunctive (especially counterfactual) conditionals. If we set aside all the objections concerning the latter, what is left?
  • Perhaps most surprisingly, Strawson rejects the identification between "∨" and "or". He gives two very different sorts of reasons, both of which involve the rejection of the inference from P to "P or Q". What are these? How do they differ?
More generally: How important is it if Strawson is right? Why?

3 February

H.P. Grice, "Logic and Conversation", in Studies in the Ways of Words (Cambridge MA: Harvard University Press, 1989), pp. 22-40 (DjVu)

Grice's central goal in this paper is to distinguish what is "strictly and literally said" in an utterance from other things that might be "meant" or "communicated" in the making of that utterance and, in particular, to articulate a notion of what is implicated by an utterance that is part of what is 'meant' or 'communicated', but is not part of what is said.

  • What is supposed to be the difference between what is "said" and what is merely "meant"? (See the first full paragraph on p. 25.)
  • Among the things that are merely "meant" are what Grice calls "implicatures". He distinguishes two types of these, which he calls "conventional" and "conversational". What is the difference between them? Make sure you can give a couple examples of your own, of each type.
  • Grice offers an account of how implicature works. It consists of two parts: The "co-operative principle" (on p. 26) and four "maxims" (on pp. 26-7). How is the principle related to the maxims? See if you can give your own examples of implicatures that "flout" each of the four maxims.
  • It's a reasonable thought that conversational implicature is a type of non-natural meaning, in the sense of the paper of Grice's we read earlier. Can one argue for this claim? I.e., can one show that what someone conversationally implicates, according to the definition Grice gives on pp. 30-31, is also meant by them? How, in particular, is the overtness condition for meaning supposed to be satisfied?
  • Grice indicates at the beginning of the paper that he intends to use the notion of implicature to answer Strawson's argument that there are "divergences in meaning between, on the one hand, ...the formal devices...and, on the other, what are taken to be their analogues or counterparts in natural language". But he does not say how. (This happens in a later lecture in the series.) Can you see how he might do it, though?
Looking forward to material we will cover at the end of the semester: Grice makes some very sketchy remarks about how metaphor might fit into this sort of account. How plausible do they seem?

Meaning and Truth-Theory: Davidson's Proposal
5 February

Donald Davidson, "Theories of Meaning and Learnable Languages", in his Inquiries into Truth and Interpretation (Oxford: Oxford University Press, 1984), pp. 3-15 (DjVu)

The central purpose of Davidson's paper is to motivate what has come to be called the "principle of compositionality", which asserts that the meaning of a sentence is a function of the meaning of the words of which it is composed, and of how that sentence is composed from those words.

  • Davidson wants to argue that a language that (a) contained, as natural languages do, infinitely many expressions and (b) did not satisfy the principle of compositionality would be unlearnable in principle. What is his argument for this claim? Why does it imply that English, e.g., must contain finitely many "semantic primitives"? To what extent does the argument depend upon there really being infinitely many expressions in English? Are there really infinitely many expressions in English? Does each of us actually understand infinitely many expressions?
  • Davidson gives four examples of theses philosophers have held about language that would imply that English contains infinitely many semantic primitives. Which of these examples do you find most compelling? Can you think of other, not totally implausible accounts of certain sorts of constructions, that would have similar problems?
  • We will work through Davidson's discussion of quotation in class, in some detail. What is it about quotation that is supposed to be "misleading"? Why does the remark that a quotation names its interior "not provide even the kernel of a theory"? How is Quine's proposal, which is e.g. that "'snow'" should be replaced by "ess + en + oh + double-u", supposed to solve the problem?

8 February

Donald Davidson, "Truth and Meaning", Synthese 17 (1967), 304-23; reprinted in Inquiries, pp. 17-36 (DjVu, Springer)

In "Theories of Meaning and Learnable Languages", Davidson argued that, if a language is learnable, we should be able to give a compositional theory of meaning for it: a theory that shows how the meaning of a sentence depends upon, and is determined by, the meanings of its parts. In "Truth and Meaning", Davidson argues that a theory of truth can play this role, and he claims, moreover, that nothing else can.

  • At the beginning of the paper, Davidson considers the problem of explaining the meaning of all terms of the form "the father of ... the father of Annette". What is the intended lesson of that discussion? Why does Davidson think this kind of treatment cannot be extended to sentences?
  • Eventually, Davidson comes around to the idea that we need to talk not about reference but about meaning. But then he argues, on pp. 307-8, that any such theory is vacuous. What is Davidson's argument?
  • On p. 309, Davidson proposes that it would be enough if we had a theory that would yield, for every sentence S, a theorem of the form: "S" is true iff S, e.g.: "snow is white" is true iff snow is white, and do so in a compositional way. What is Davidson's argument for this claim? Or, perhaps better, how does he motivate it, after the fact?
  • On pp. 311-2, Davidson considers the objection that, just as "snow is white" is true iff snow is white, so "snow is white" is true iff grass is green. Hence, a theory that yielded the latter would be as good as one that yielded the former. What is his response? How good is it?
The end of the paper constitutes a catalog of open problems that such an approach to semantics suggests. Do you understand why these would be open problems? Why isn't it trivial to give a theory that yields the correct results for "snow" and "grass", for example? Why does "Bardot is a good actress" pose a problem? What problems does it pose?

10 February

P.F. Strawson, "Meaning and Truth", in his Logico-Linguistic Papers (London: Methuen, 1971), pp. 170-89 (DjVu, PDF)

Strawson is concerned in this paper to mediate a dispute between two approaches to questions about meaning. The first ten pages or so are devoted to explaining the two sides, which are represented by Grice and Davidson, among others.

  • On p. 179, Strawson arrives at what he takes to be the crucial question: how the notion of truth-condition might be explained, without any appeal to the notion of a communicative intention. More specifically, one might say that Strawson wants to know why truth-conditions are supposed to be so central to the explication of meaning. Why does he think this is the right question to ask? Why does he think it cannot just be avoided? (A slightly later discussion, on pp. 182-3, about why it is not sufficient simply to "correlate" each sentence with some state of affairs or other, may throw some light on this question.)
  • Strawson goes on to suggest that, in so far as there is anything general to be said about truth, it lies in connecting truth with such notions as stating and asserting. But that, he suggests, gives the game away to Grice. Why?
  • From that discussion emerges the suggestion that the "rules" that determine the meaning of a sentence do so by determining what belief someone who uttered that sentence would conventionally be expressing to the audience. Strawson then proposes that Davidson might suggest that we simply eliminate "to the audience" and fasten upon the idea of belief-expression. After exploring how this idea might be developed, Strawson simply dismisses it as "too perverse" to take seriously. Is it? Why or why not?

12 February

David Lewis, "Languages and Language", Minnesota Studies in the Philosophy of Science 7 (1975), pp. 3-35 (reprinted in Lewis's Philosophical Papers, vol.1 (New York: Oxford University Press, 1983), pp. 163-88) (DjVu, Minnesota Studies)

You should concentrate on sections I-III, in which Lewis summarizes a modified version of the more extensive account of linguistic meaning given in his book Convention (Cambridge MA: Harvard University Press, 1969), and on pp. 17-23 (pp. 174-9 of the reprint), where Lewis discusses a series of objections connected to compositionality.

Lewis here is returning to the sorts of questions that exercise Strawson in "Meaning and Truth". He argues that languages, which are the concern of "formal semanticists", are central to an account of language, which is the social phenomenon of concern to people like Grice. In particular, Lewis wants to explain what it is for a sentence to have a certain meaning in terms of the relation: L is the language used by some population P, where that relation is itself explained in terms of a particular convention that prevails in P. In the background is a general account of what a convention is.

  • What exactly is Lewis's account of: S means that p?
  • On pp. 8-9 (pp. 167-8), Lewis argues that, if some language L is used by a population P, then there always will be a convention of truthfulness and trust in L that prevails in P. The argument consists in checking that the six conditions that characterize conventions are all satisfied. Must they be?
  • On pp. 17-22 (pp. 174-8), Lewis considers a couple objections based upon the fact that languages, as he describes them, only pair sentences with meanings and do not involve any sort of compositionality. What exactly are the worries to which he is responding?
  • Lewis's reply to these worries involves an insistence that compositionality is an empirical hypothesis that has no role in an analysis. What does this say about how he understands his project?
  • On pp. 22-3 (pp. 178-9), Lewis considers the objection that we ought to define what it is for L to be used by P in terms of the members of P assigning certain meanings to the sentences of L. His response is that "assigning a meaning" is neither an action nor a belief, and so cannot be the content of a convention. Is there, in fact, no way to think of "assigning a meaning" as an action or belief? Who might be Lewis's target here?

Topics for first short paper announced

Meaning and Truth-Theory: The Foster Problem
15 February

John Foster, "Meaning and Truth-Theory", in G. Evans and J. McDowell, eds., Truth and Meaning: Essays in Semantics (Oxford: Oxford University Press, 1976), pp. 1-32 (DjVu)

You need only read sections 1-2, on pages 1-16, carefully. The discussion in section 3 concerns Davidson's "revised thesis", which we have not yet encountered, and section 4 contains Foster's emendation of Davidson's position, which, as we shall see, falls to a version of Foster's own objection to Davidson.

Foster's paper is important for a certain sort of objection that it brings against what Foster calls "Davidson's Initial Thesis". As we shall see, the objection threatens rather more than just that.

  • In the first section of the paper, Foster attempts to motivate a certain sort of approach to philosophical questions about meaning that puts "theories of meaning", in the sense of Davidson's "Truth and Meaning", at the center. What is that motivation? How good is it? How is such a theory supposed to relate to the actual capacities of actual speakers?
  • It is in the second section that Foster criticizes Davidson's Initial Thesis. But he never says very clearly what that thesis is. What is it? One way to think about the issue here is to focus on the following inference:
    S is true iff p
    So, S means that p
    In general, such an inference is completely invalid. But one might think that if we add some additional premise, then the inference would be valid. And now the question becomes: What should that additional premise be? What does Foster think Davidson originally thought it was? How does his objection show that this premise is insufficient?

17 February

Donald Davidson, "Radical Interpretation", Dialectica 27 (1973), pp. 314-328 (also in Inquiries, pp. 125-39) (DjVu, Wiley Online), and "Reply to Foster", in Inquiries, pp. 171-9 (DjVu, PDF)

You need only read through about p. 174 or so in "Reply to Foster". The remainder discusses Foster's objections to Davidson's new view, which we did not read.
Davidson's treatment owes, as he notes, a great deal to Quine's notion of radical translation, for which see W. V. O. Quine, Word and Object (Cambridge MA: MIT Press, 1960), Ch. 2.

Davidson's reply to Foster consists, more or less, of pointing to the view he develops in "Radical Interpretation". So most of our discussion will focus on that paper.

  • At the beginning of the paper, Davidson elaborates an approach to questions about meaning that is very similar to Foster's. What are Davidson's reasons for regarding what he thinks are the right questions to ask as hypothetical, rather than empirical? What are the advantages or disadvantages of each approach?
  • How does Davidson propose to bridge the gap between truth and meaning that is revealed by the failure of the inference:
    S is true iff p
    So, S means that p
    What might be the best way to think of this inference, according to Davidson? An answer can perhaps be extracted from the second full paragraph on p. 173 of "Reply to Foster".
On p. 319 (or p. 131), Davidson lists three questions about the approach he outlines. The second and third are the crucial ones.
  • Can a theory of truth be verified by evidence plausibly available to a radical interpreter?
    Davidson has a very austere conception of what evidence is available to the radical interpreter. In particular, and contra Grice, no "finely discriminated" facts about beliefs and intentions are meant to be included. He argues for this restriction first on p. 315 (or p. 127). What is the argument?
  • Could such a theory, if known to be justified by such evidence, be used to interpret the target language?
    Davidson's argument for a positive answer, such as it is, is contained in the final two paragraphs of the paper. What is the argument?

19 February

Discussion

First short paper due

22 February

No Class: Presidents' Day Holiday

24 February

Scott Soames, "Truth, Meaning, and Understanding", Philosophical Studies 65 (1992), pp. 17-35 (DjVu, Springer)

See also Scott Soames, "Semantics and Semantic Competence", in S. Schiffer and S. Steele, eds., Cognition and Representation (Boulder CO: Westview Press, 1988), pp. 185-207 (DjVu).

Our main interest here is in Soames's re-articulation of the Foster problem, on pp. 19-25, and his criticism of Davidson's response to it, on pp. 25-8. In between, on p. 25 itself, there are a handful of remarks about Higginbotham's response, which we shall read next. But since we have not yet read Higginbotham's paper, you need not worry about them now.
Soames thinks there are basically two ways to interpret the claim that a theory of truth may "serve as" a theory of meaning. On one of these, knowledge of what an interpretive (or, as he puts it, "translational") truth-theory states is supposed to be sufficient for understanding. On the other, knowledge of what a truth-theory states is supposed to be necessary for understanding. That view comes in two forms, as well: that knowledge of everything the theory states is necessary for understanding, or that the theory states everything knowledge of which is necessary for understanding.

  • Soames claims that all forms of the sufficiency view fall to the Foster problem. Why so?
  • The argument against the first form of the necessity view has two parts:
    1. Knowledge of the compositional axioms, etc, does not seem necessary for understanding, since ordinary people do not have such knowledge.
    2. Even a totally trivial theory would state things necessary for understanding, so the condition seems too weak.
    Concerning the first, could one grant the point and suggest, nonetheless, that knowledge of the T-sentences was necessary? What would be lost? If too much, might there be a way of insisting nonetheless that ordinary speakers do have such knowledge? Concerning the second objection, is there a way to modify the view to avoid this consequence?
  • The argument against the second form of the necessity view is that the Foster problem applies to it, as well. That might make one wonder whether it is very different from the sufficiency view. Is it?
  • Soames interprets Davidson as holding that, if one adds to one's knowledge of a truth-theory the knowledge that theory is "translational"—i.e., that the sentences that appear on the right-hand sides of the T-sentences it generates translate the sentences mentioned on the left-hand sides—then that will be enough information to allow one to draw inferences about what the various sentences mean. Soames finds every step of the argument to be problematic. But how plausible is it that Davidson had any such argument in mind?

26 February

James Higginbotham, "Truth and Understanding", Philosophical Studies 65 (1992), pp. 3-16 (DjVu, Springer)

For an approach that is different from but similar to Higginbotham's, see Richard Larson and Gabriel Segal, Knowledge of Meaning: An Introduction to Semantic Theory (Cambridge MA: MIT Press, 1995), Chs. 1-2

You should focus on sections 1 and 2. We will not discuss section 3 directly.
This is an extremely complex paper. Part of the difficulty is that the dialectical structure of the paper is fairly complicated. I'll try to provide a bit of guidance by outlining the paper, and then asking some questions along the way. Note that by ¶1, I mean the first full paragraph on a page; ¶0 is the paragraph continuing from the previous page, if any.
In section 1, Higginbotham introduces his own account of the relationship between truth and meaning.

  • On pp. 3-4¶2, Higginbotham quickly sketches an account of why reference is essential to the theory of meaning. This essentially summarizes the arguments of "Truth and Meaning".
  • The rest of p. 4 quickly introduces the sort of problem that Foster presses, arguing that no appeal to structure can solve the problem.
  • In the rest of the section, Higginbotham articulates a positive conception of what role a theory of truth might play in an account of a speaker's linguistic competence, in particular, of her knowledge of meaning. He summarizes it as follows:
    From this point of view, meaning does not reduce to reference, but knowledge of meaning reduces to the norms of knowledge of reference. Such norms are iterated, because knowledge of meaning requires knowledge of what others know, including what they know about one's own knowledge. To a first approximation, the meaning of an expression is what you are expected, simply as a speaker, to know about its reference.
    Higginbotham does not here provide much motivation for this view. As he notes in footnote 6, however, there is an obvious link here to notions like overtness that figure prominently in Grice and Lewis. How might common knowledge of T-sentences figure in an account of interpretation and communication? How does it compare to the way Lewis uses common knowledge in his account of linguistic conventions and their role in communication? (There is an intriguing, but somewhat telegraphic, remark on this point at the very end of the paper.)
In section 2, Higginbotham argues against certain other types of solutions to the Foster problem, and then argues that his own view does solve it. The first page or so rehearses Soames's way of formulating the problem.
  • At p. 7¶1, Higginbotham distinguishes two sorts of responses to the Foster problem, very briefly articulating the "immanent" response, but then turning his attention to the "transcendent" response.
  • The transcendent response is discussed from p. 7¶2 through p. 8¶1. The idea here is to deny that there are any linguistic facts beyond what would be apparent to a radical interpreter. Hence, if there really is a difference of meaning of the sort on which the Foster problem rests (e.g., between "Snow is white" and "Snow is white and arithmetic is incomplete"), then that difference will have to be one that "disrupts communication with him". The criticism is that, insofar as that seems plausible, it is because we are, in effect, appealing to facts about speech acts, in which case such an account "swallows meaning whole". How good is that criticism?
  • Higginbotham criticizes the immanent response from p. 8¶2 through p. 9¶1. This part of the argument is a bit easier to understand. What is it?
  • At p. 9¶2, Higginbotham explains his own response to the Foster problem, viz: The account outlined in section 1 is immune to it. But there is not much explicit argument. Can you fill in the details? The first full paragraph on p. 10 may be of some assistance.
  • In the remainder of the section, Higginbotham briefly explores the role that knowledge, and common knowledge, play in his account. He remarks that "...the theory of truth...is not something that one starts with, augmenting it with conditions or constraints so as to make it acceptable as a theory of meaning. Rather, truth comes in as something [a competent speaker] knows about, and the deliverances of the theory are of interest only insofar as knowledge of them is part of [her] linguistic competence" (p. 10). This echoes an earlier remark that his account "makes use of the information that a person tacitly possesses about the truth conditions of her own utterances" (p. 5). There is an implicit criticism of Davidson here, and an attempt to re-orient the focus of the theory of meaning. What is the criticism, and what is the new focus supposed to be?

29 February

Michael Dummett, "What Do I Know When I Know a Language?", in The Seas of Language (Oxford: Oxford University Press, 1993), pp. 94-105 (DjVu)

As usual with Dummett's writings, this paper is dense and difficult. The central question of the paper is why, in explaining the concept of linguistic meaning, we should need to invoke a speaker's mastery (understanding) of her language, and why we should need to invoke the concept of knowledge in explaining the latter.

  • The first few pages, up to p. 97, are devoted to the question whether we should think of "knowledge" of language as involving knowledge at all, in a sense different from any in which "purely practical" abilities, like the ability to swim, do so. (This amounts to answering a question raised by Quine on the first couple pages of "Methodological Reflections on Current Linguistic Theory" (DjVu, Springer).) Why does Dummett think there is any such difference? Dummett goes on to suggest that the knowledge one has in such a case is "implicit". What is that notion? Does it seem adequate?
  • On pp. 97-9, Dummett considers the question: "Is the significance of language to be explained in terms of a speaker’s knowledge of his language?" He ends up agreeing that it should be, but he first rejects a particular understanding of speakers' semantic knowledge: what he calls the "code conception of language". What is the code conception? What is the central objection to it?
    Dummett argues that the lesson we should draw from that objection is that "we cannot appeal to the speaker’s prior grasp of [a] concept in explaining what it is for him to associate that concept with that word" (p. 99), even if he has such a grasp. Is this compatible with the view that understanding a sentence amounts to knowing a T-sentence for it? or that understanding a word amounts to knowing the truth-theoretic axiom for it?
  • On pp. 99-102, Dummett explores the idea that a speaker's mastery of a language is to be explained in terms of her implicitly knowing a "theory of meaning" for that language. Dummett argues that it is not sufficient to say what the speaker knows: "we have to go on to give an account of what it is to have such knowledge". Why does Dummett think this additional step is required?
    Dummett goes on to propose we should give this additional account "in terms of the practical ability which the speaker displays in using sentences of the language" (p. 101). The thought, that is, is that we should explain what it is for the speaker to know (say) that "snow is white" is true iff snow is white in terms of the way the speaker uses this sentence. What reason does Dummett give for thinking we should give the account in these terms?
    All of this amounts to an argument for a version of the thesis that meaning is determined by use. Dummett then uses that thesis to rebut a skeptical objection that, if we conceived of linguistic competence as involving implicit knowledge, then "it would remain a possibility, which you could never rule out, save by faith, that I systematically attached different senses to my words from those you associated with them" (p. 102), and Dummett seems to take this as additional reason to insist that the "implicit knowledge ascribed to the speakers must be manifested in their use of the language" (p. 102). Is another answer to this sort of objection possible?
  • On pp. 102-5, Dummett considers a different version of this same objection: Why not simply describe the practice of using the language directly, as one might describe what it is to swim, and then say that "knowing a language" is simply being able to do that? i.e., to engage in that practice? Then the detour through knowledge is dispensable. Dummett argues in reply that we cannot understand the practice of speaking a language unless we distinguish "regularities" in that practice of which the speaker is aware from regularities that "might be uncovered by a psychologist or neurologist" (p. 104): as Lewis might have put it, between (mere) regularities and linguistic conventions. How is this supposed to make the appeal to what the speaker knows something other than a detour?

2 March

Ian Rumfitt, "Truth Conditions and Communication", Mind 104 (1995), pp. 827-62 (DjVu, JSTOR)

See also Michael Dummett, "Language and Communication", in The Seas of Language (Oxford: Clarendon Press, 1993), pp. 166-87; John McDowell, "Meaning, Communication, and Knowledge", in Meaning, Knowledge, and Reality (Cambridge: Harvard University Press, 1998), pp. 29-50

This is a long and difficult paper. You should focus on pp. 827-44. I'll make some comments about the material that follows, which is well worth reading, but we obviously cannot discuss the whole of the paper in one class.
The central goal of the paper is to offer accounts of two notions: expressing the thought that p, and "taking in" the thought that p, when it has been expressed to one. The claim is that we will need both the sorts of resources typically deployed by Griceans and the notion of truth (or truth-conditions) to do this.

  • The first three sections mostly summarize material we have already discussed, though it is important to Rumfitt that no analysis of assertion, let alone of expressing a thought, can be given in terms of communicative intentions. The problem is supposed to be that there is no satisfactory way of saying what "response" the speaker intends from the audience. Make sure you understand why Rumfitt thinks this.
  • Rumfitt begins to develop his account of expressing a thought in section IV. The key idea is to appeal to a speaker's reasons for speaking as she does. The main point in this section is that, whatever those reasons are, they will always have to involve (and, in a certain sense, terminate in) the speaker's reasons for uttering a particular sentence. What is the argument for that claim? (Actually, Rumfitt talks about uttering certain words, in a certain order, or something of the sort, but this is relevant only to the discussion at the end of the paper.)
  • Rumfitt's analysis is developed in section V. The key thought here is that the speaker's reasons for speaking will always have to include something that establishes a link between the sentence she intends to utter and a certain proposition. The speaker's knowledge of a T-sentence is then supposed to be able to play this role. How so? The key passage here is: "What expressions of a particular thought have relevantly in common is not the presence in the speaker of any particular intention, but a particular expectation about the knowledge that the audience will deploy in responding to the utterance" (p. 840).
  • A form of the Foster problem arises on p. 841, as an objection to the analysis stated at the top of that page. What is the objection? Why did I just call it a form of the Foster problem? What is Rumfitt's response to it? Why does he think strengthening "iff" won't help?
  • There is a close relationship between Rumfitt's proposal and Higginbotham's answer to the Foster problem. In particular, both of them talk about "expectations". There is also a connection between Rumfitt's notion of an "I-source of knowledge" and Higginbotham's idea that the meaning of a sentence is given by what you expect other speakers to know about its truth-conditions, simply in so far as they are competent speakers. Can you develop this connection a bit?
Section 2 of Rumfitt's paper explores the same sort of issue, in connection with "taking in" or "apprehending" a thought expressed by someone else. The analysis he eventually gives, at the bottom of p. 850, is parallel to the analysis of expressing a thought. You should read this part of the paper, but needn't read it as carefully as the first part.
Section 3 of the paper raises some questions about compositionality, and how that figures into the analysis. I personally find this part of the paper to be extremely interesting, but we probably will not have time to discuss it in class, so you should feel free to skip it, if you so wish.

4 March

Richard Heck, "Reason and Language", in C. Macdonald and G. Macdonald, eds., McDowell and His Critics (Oxford: Blackwell Publishing, 2006), pp. 22-45 (PDF)

Focus here on the arguments in the first two sections of the paper. The first section argues that speech is "propositionally rational", that is, that speech is intentional under propositional descriptions, such as: saying that snow is white. The second section argues that speech is also intentional under verbal descriptions, such as: uttering the sentence "snow is white". What are the arguments for these two claims? And how convincing are they?
The third and fourth sections use the results of the first and second sections, respectively, to argue against the views that understanding a language is just being able to use it correctly or just being able to use it to put one's thoughts into words. The conclusion is supposed to be that understanding a language involves having semantic knowledge: knowledge of what various sentences do or would mean if uttered in certain circumstances. How do those arguments seem?
The final section considers a version of the objection, pressed by Soames, that the semantic knowledge ordinary speakers have is due to their understanding of their language, not constitutive of it.

7 March

Discussion

You may wish to have a look at Richard Heck, "Meaning and Truth-Conditions", in D. Griemann and G. Siegwart, eds., Truth and Speech Acts: Studies in the Philosophy of Language (New York: Routledge, 2007), pp. 349-76 (DjVu).

Topics for second short paper announced

Tacit Knowledge
9 March

Noam Chomsky, Knowledge of Language: Its Nature, Origin, and Use (London: Praeger, 1986), Chs. 1-2 (DjVu)

You might also want to look at Noam Chomsky, Aspects of the Theory of Syntax (Cambridge MA: MIT Press, 1965), chapter 1, sections 1-6 (DjVu, PDF), and W. V. O. Quine, "Methodological Reflections on Current Linguistic Theory", Synthese 21 (1970), pp. 386-98 (DjVu, Springer).

The main thing we will want to discuss is Chomsky's distinction between E-language and I-language: what the distinction is and why Chomsky thinks that I-language should be the focus of linguistic inquiry.

Regarding the former issue, when Chomsky introduces this terminology, he remarks that E-language is external language, and I-language is internal language, and some of his remarks accord with that usage. But there is another distinction that is at least as important: between language thought of extensionally, e.g., as a set of grammatical sentences, or of sentence-meaning pairs; and language thought of intensionally, in terms of a system of rules for determining grammaticality or what meaning (if any) to assign to a sentence. What is the relation between these two ways of thinking of the distinction?

Much of Chomsky's discussion revolves around syntax and, in one striking passage (pp. 41ff), phonology. But it would be natural to want to apply these same sorts of considerations to the case of semantics—as, indeed, Higginbotham explicitly proposes to do: "I am applying to semantics a research program that goes forward in syntax and phonology, asking, 'What do you know when you know a language, and how do you come to know it?'" ("Truth and Understanding", p. 13). How might that sort of program in semantics be motivated, in a broadly Chomskyan way?

On pp. 44-5, however, Chomsky alleges that the shift from E- to I-language essentially excludes semantics—conceived as the study of the relation between language and world—from linguistic theory. How serious do the concerns Chomsky expresses here seem to you to be?

11 March

Gareth Evans, "Semantic Theory and Tacit Knowledge", in his Collected Papers (Oxford: Oxford University Press, 1985), pp. 322-42 (DjVu)

Evans is responding to Crispin Wright, "Rule-following, Objectivity, and the Theory of Meaning", in S. Holtzman and C. Leich, eds., Wittgenstein: To Follow a Rule (London: Routledge and Kegan Paul, 1981), pp. 99-117. Similar worries can be found in other authors. See e.g. Hilary Putnam, "The 'Innateness Hypothesis' and Explanatory Models in Linguistics", Synthese 17 (1967), 12-22 (Springer), reprinted in his Mind, Language, and Reality: Philosophical Papers, v. 2 (Cambridge: Cambridge University Press, 1975), pp. 107-16. Wright continues the discussion in his "Theories of Meaning and Speakers' Knowledge", in his Realism, Meaning, and Truth (Oxford: Blackwell, 1986), pp. 204-38 (DjVu). In reply to Wright, see Martin Davies, "Tacit Knowledge and Semantic Theory: Can a Five per cent Difference Matter?", Mind 96 (1987), pp. 441-6 (JSTOR)
There is an extensive literature on tacit knowledge generally. For further reading, see Martin Davies, "Tacit Knowledge, and the Structure of Thought and Language", in C. Travis, ed., Meaning and Interpretation (Oxford: Blackwell, 1986), pp. 127-58; see also Martin Davies, "Tacit Knowledge and Subdoxastic States", Crispin Wright, "The Rule-following Arguments and the Central Project of Theoretical Linguistics", and Christopher Peacocke, "When is a Grammar Psychologically Real?", all in Alexander George, ed., Reflections on Chomsky (Oxford: Blackwell, 1989).

Evans's main goal in this paper is to try to explain what motivates the requirement that theories of meaning should be compositional, and to explain as well what might allow us to distinguish "extensionally equivalent" theories that prove all the same T-sentences.

In section II, Evans discusses a toy language with ten unary predicates and ten names, and so 100 sentences, and considers two theories of truth, T1 and T2, that do and do not ascribe any sort of structure to the sentences of this language. Evans suggests that, considered as accounts of what a speaker of the language knows, these two theories can be distinguished empirically. The core idea is that the axioms of the theories can be associated with certain dispositions that the speaker has. What are the empirical differences between the theories? Can you think of any other such differences we might expect to find? Why is it important that the dispositions in question should be "full-blooded"?

In section IV, Evans discusses the question whether attribution of tacit knowledge of a structured theory of meaning can explain a speaker's capacity to understand novel sentences. Evans first concedes that, by itself, it cannot. It is, honestly, not entirely clear to me why. I think the reason is this: if the speaker's regarding the sentence "J(e)", for example, as true iff whatever is itself regarded as an exercise of the dispositions that constitute her tacit knowledge of the theory T2, then that judgement cannot also be regarded as explained by her having those same dispositions: that would be like explaining a sleeping pill's effectiveness in terms of its dormativity—its disposition to make people sleepy. But the usual response to this kind of worry is that this objection is only good if the disposition is "thin" rather than "full-blooded", as Evans's were meant to be. So why doesn't Evans just say that, if the dispositions are full-blooded, then there is an explanation to be had, in terms of whatever the categorical basis of those dispositions is? Is that perhaps the point Evans is making when he says that "to say that a group of phenomena have a common explanation is obviously not yet to say what the explanation is"?
Anyway, Evans goes on to suggest that we can get an explanation of "creativity" if we embed the attribution of tacit knowledge in a larger story about language acquisition. How is that supposed to go? How does it relate to the "full-bloodedness" of the dispositions? (Developing a proper answer to the various questions I've been asking here would make a nice term paper, by the way.)

14 March

Discussion

Second short paper due

16 March

Martin Davies, "Meaning, Structure, and Understanding", Synthese 48 (1981), pp. 135-61 (DjVu, Springer)

Davies is interested in the question whether there is a way of making sense of the idea that a language (in roughly Lewis's sense) has a certain structure without having to claim that speakers of the language must in any sense, including tacitly, be aware of that structure. So, in particular, in the case of Evans's 100-sentence language, Davies wants to claim that, even if speakers fail to have the sorts of dispositions Evans specifies, it can still be true that the language they speak has a certain sort of semantic structure. Davies thus suggests that semantic theories should meet what he calls the structural constraint:

If, but only if, there could be speakers of L who, having been taught to use and know the meanings of sentences (of L) s1, ...,sn..., could by rational inductive means go on to use and know the meaning of the sentence s..., then a theory of truth for L should employ in the canonical derivations of truth condition specifying biconditionals for s1, ...,sn resources already sufficient for the canonical derivation of a biconditional for s. (p. 138)
Note (as Davies more or less mentions) that this involves what a hypothetical speaker could do, much along the lines of what Davidson suggests in "Radical Interpretation". The basic point here is supposed to be that semantics need not have anything particular to do with facts about how speakers understand their language.

In section I of the paper, Davies considers four objections to the structural constraint. The most important of these is the fourth. Davies' initial presentation of the objection is somewhat complicated, but he gives a concrete example on p. 146. The core of the objection is that certain actual speakers of a language might lack the ability to "project" the meanings of sentences they have not encountered (e.g., "Bug moving"), whereas other speakers might have that ability; and, if so, then it seems odd to say that the language of the speakers who cannot "project" has the same structure as that of the speakers who can project, simply because someone could project meaning in that way. Davies's response is to deny that, in such a case, the common sentences of the two languages (e.g., "Bug") do in fact have the same meaning. How exactly does this save the structural constraint? What does it say about how the meanings of sentences are being specified when languages are identified as Davies identifies them at the very beginning of the paper? Is this response compatible with Davies's insistence that, even if speakers understand Evans's 100-sentence language in a non-compositional way, still the right semantics for that language is the compositional one?

In section II, Davies discusses what is involved in crediting speakers with "implicit" knowledge of a theory of meaning for their language. Davies's first step is to introduce a notion of "full understanding" of a language: Someone fully understands a language L if she is in a "differential state" with respect to the various words (semantic primitives) of the language. This requires her to have the sorts of dispositions concerning acquisition and loss that Evans discusses, as well as the dispositions concerning change of meaning that were mentioned in class. But Davies worries that mere dispositions are not enough and so invokes causal and explanatory relations as well in giving a full account. Try to explain that account as best you can, in your own words. The question then arises: Can't we at least imagine that an earthling and a Martian (see p. 150) might have very different such dispositions? What follows if we can?

Davies emphasizes, on pp. 152-3, that the account of full understanding does not itself involve attributing tacit or implicit knowledge. Davies first considers, and dismisses, a series of objections to the claim that we should attribute such knowledge. Beginning on p. 156, however, he offers a reason not to do so, namely, that there seems to be no need to attribute anything like implicit or tacit desires with which such beliefs might interact. How does this relate to Evans's idea that such informational states are not "at the service of many projects"? How is Davies proposing that we should think of the relation between a full understander and the best semantic theory for her language? (See the last paragraph of §18.) How would this discussion be affected if we replaced talk of "belief" with talk of "information", and made it clear that the information in question need not be available to the speaker, at the personal level?

18 March

Elizabeth Fricker, "Semantic Structure and Speakers' Understanding", Proceedings of the Aristotelian Society, New Series 83 (1982-1983), pp. 49-66 (DjVu, JSTOR)

One might think of Fricker's goal in this paper as to try to answer Lewis's claim that there is "no promising way to make objective sense of the assertion that a grammar Γ is used by a population P whereas another grammar Γ', which generates the same language as Γ, is not" (L&L, p. 20)—and to do so on Lewis's own terms. That is, Fricker accepts that semantic facts about a language must supervene on how it is used—on the linguistic abilities of its speakers—and that these abilities relate only to what whole sentences mean. (These are principles (α) and (A), on pp. 52 and 56, more or less). And yet she wants to claim that each language has a semantic structure that is essential to it and to which speakers of the language bear some non-trivial relation. (These are principles (β) and (γ) on p. 52.) This is a bold, which is not necessarily to say 'heroic', view. The strategy, to a large extent, is to effect a synthesis of Evans and Davies.

Fricker begins, in section I, by rehearsing what is, very roughly, the metaphysical project elaborated in "Radical Interpretation". Section II introduces her problem and argues that both Davies and Evans fail properly to vindicate the idea of semantic structure. Section III formulates an argument that the Ludovician assumptions mentioned above lead to Lewis's conclusion: that a theory of meaning for a language need not uncover structure.

The main argument of the paper is in sections IV and V. Fricker's conclusion is that "...facts about sentence-meanings are not independent of facts about their structure" (p. 60). She mentions three sorts of reasons in favor of this claim:

  • Unless there were "some form of Structural Postulate requiring that sentences be seen to be composed out of a stock of semantically primitive elements which make the same characteristic contribution to sentence-meanings in all their occurrences" (p. 59), interpretation would be too indeterminate.
  • There are many sentences of, e.g., English that will never be uttered and yet that have determinate meanings. But if facts about meaning supervene on facts about use, then facts about the meanings of un-uttered sentences must supervene on facts about uttered sentences, and that can only be so if the language is compositional.
  • There is a connection between the meanings we assign to sentences and the contents we assign to speakers' beliefs. And the latter, typically, will themselves be structured, involving certain concepts we take the speakers to possess. This "will tend to ensure that OL sentences are not interpreted by ML sentences with a greater degree of structural complexity" (p. 60).
These arguments just mentioned are elaborated and interwoven in section V. Here, Fricker adapts Evans's proposal to generate a "transcendental" definition of the notion of semantic primitive according to which the semantic structure of a language mirrors the causal structure of its speakers' linguistic abilities. This leads directly to the question whether different speakers' abilities might have different structures, which is what leads to Lewis's conclusion. Fricker's response is on p. 64: She wants to insist that, if speakers understand a sentence to have different structures, then they cannot understand it in the same way. This is because it will be implausible to hold that the beliefs these speakers associate with the sentence deploy the same concepts. (Compare Davies, §8.) So Fricker concludes that "...the initially plausible thought there could be structure in a language, though its speakers were blind to it, is wrong" (p. 65).

What do you think of these arguments? Perhaps what is most striking about them is that they lead to the suggestion that the sorts of principles Davies and Evans propose are a priori principles that govern radical interpretation. How plausible is that suggestion?

21 March

Louise Antony, "Meaning and Semantic Knowledge", Proceedings of the Aristotelian Society, sup. vol. 71 (1997), pp. 177-209 (DjVu, JSTOR)

Antony's main goal in this paper is to argue that semantics should be understood as inextricably linked to psychology: to questions about what speakers actually come to know about their languages when they learn to speak. She distinguishes three sorts of alternatives to her position: Platonism, represented by Devitt, Katz, and Soames; Instrumentalism, represented by Davidson; and Reconstructive Rationalism, represented by Dummett and Wright. (Except for the first, these are my terms.) She discusses Platonism only briefly, claiming that the facts about (say) English simply cannot be completely independent of facts about how English speakers behave and gesturing in the direction of Fricker.

The discussion of Instrumentalism and Reconstructive Rationalism, which dominates section I, consists, largely, in playing them off against one another. The latter view insists that semantic theory should be concerned with speakers' knowledge, but only with a "systematized" (Dummett) or "idealized" (Wright) version of such knowledge, not with the knowledge actual speakers possess. Antony argues that the attribution of semantic knowledge, if it is not to be merely heuristic (and thus no different from what Instrumentalism offers), must have some explanatory work to do. But, she suggests, "...the rational reconstruction of linguistic meaning cannot explain the rationality of human language use if the posited linguistic structure is not available to speakers" (p. 188). How fair a criticism is this? Is Antony right simply to dismiss Wright's project of an "idealized epistemology of understanding"?

Antony goes on to argue that the idealizations inherent in Reconstructive Rationalism are difficult to reconcile with the professed goals of its proponents. There is, she says, a "tension between, on the one hand, appealing to human capacities in order to justify features of meaning-theoretic projects, and on the other, ignoring the actual nature of those capacities" (p. 188). The original sin here is Davidson's attempt to motivate compositionality, and the problem this observation poses for Reconstructive Rationalism is supposed to be that it is hard to see why, if we are not going to idealize away from the finitude of the language-learner, we should think it justified to idealize away from any of the other circumstances under which language is, in fact, acquired by human beings. Does this seem a fair criticism?

This point then morphs into a criticism of Instrumentalism. The discussion is directed at Quine's restriction on the data on which radical translation must be based, but could equally be directed at Davidson's corresponding restriction on the data on which radical interpretation must be based. Such restrictions are motivated by claims about what sort of evidence is available, in principle, to a language-learner (or, though she does not mention the point, to an actual speaker who is attempting to determine if someone else speaks her language—see the first couple pages of "Radical Interpretation"). Antony argues in response that the evidence that is allowed to a radical whatever-er is both wider and narrower than what is available to actual language-learners. But the most interesting claim she makes is this one:

Considered in the context of Quine's metaphysical goals, the idealization involved in permitting the linguist an unlimited amount of behavioural evidence appears concessive to the meaning realist; in fact, it is a slick piece of bait-and-switch. The cooperative tone distracts us from the fact that Quine has already begged the crucial question, by assuming that whatever physical facts metaphysically determine meaning must be identical with the physical facts that constitute the evidence children have available to them during language acquisition.

What does Antony mean here? What might be examples of physical facts on which semantic facts supervene that are not among the facts available as evidence to a child learning language? An even more interesting question is how, if there are such facts, they might, in some other (non-evidential) sense, be "available" to ordinary speakers interpreting one another. (Suppose that, as a matter of empirical fact, there were exactly 1000 concepts humans could possess, and we were all born with all of them. How would that affect language acquisition and interpretation?)

There is thus a common criticism of both Instrumentalism and Reconstructive Rationalism: "...[T]he epistemic strategies of 'ideal' learners are of no theoretical value to the task of understanding human cognitive competencies if the idealizations abstract away from epistemic constraints that are in fact constitutive of the learning task confronting humans" (p. 193). Yes or no?

Section II of the paper turns to questions about tacit knowledge. Antony argues that the strategy pursued by Evans and Davies (think, in his case, of what he says about full understanders) has a fatal flaw: It purports to justify the attribution of tacit knowledge entirely on the basis of an isomorphism between the structure of a semantic theory and the structure of a speaker's abilities. But "...isomorphisms are cheap: the mere fact that the formal structure of a particular theory can be projected onto some structure of causally related states is not enough to make it true that the set of states actually embodies the theory" (p. 200). By contrast, Antony insists, what we want is for the causal processes that underlie the understanding of novel sentences to be sensitive to information about the meanings of sub-sentential constituents. And she proposes that we can have that only if we take seriously the idea that there are states in the speaker that encode the very information articulated by a semantic theory, and that these states interact causally in ways that are sensitive to that information. Out of this emerges a criticism of Fricker. What is that criticism?

23 March

Steven Gross, "Knowledge of Meaning, Conscious and Unconscious", in The Baltic International Yearbook of Cognition, Logic, and Communication, Vol. 5: Meaning, Understanding, and Knowledge (2010), pp. 1-44 (Baltic Yearbook)

Gross notes that there are two kinds of arguments for attributing (propositional) semantic knowledge to competent speakers. On the one hand, speech is a conscious, rational activity, and the reasons for which we speak seem to involve knowledge of the semantic properties of expressions. On the other hand, productivity, systematicity, etc., seem to demand explanation in terms of information that speakers possess and deploy. The issue in which Gross is interested concerns the relation between these two sorts of knowledge: How, in particular, is the rationalizing belief that "John runs slowly" is true iff John runs slowly related to the T-theorem that "John runs slowly" is true iff (∃e)[Agent(e, John) & Running(e) & Slow(e)]? The question is generated by the fact that these two sorts of knowledge seem to be very different:

  1. Epistemic Status: We have specifically first-personal, introspective grounds for attribution of the former, but not the latter.
  2. Cognitive Role: The former seems to be consciously accessible, whereas the latter is not.
  3. Content: The former seems to be "homophonic", in the sense that saying what a sentence means seems to involve simply repeating that sentence, whereas the T-sentences produced by actual semantic theories involve concepts that may not appear to be present in the original sentence.

It is (3) that plays the most important role in Gross's paper. Is it the most fundamental of the three? What implications seem to hold between them, generally?

In section 2, Gross elaborates these two sorts of reasons to attribute semantic knowledge. His discussion in §2.1 of the rationality of language is both similar to and different from those of Heck and Rumfitt. How so? What seems to you to be the most original of Gross's contributions here? How plausible is it? The discussion of compositionality, etc., in §2.2, is more cursory, but there is an observation worth noting. Gross notes that one goal of semantic theory is to explain why certain sentences are ambiguous and others are not, and more precisely why certain readings are available and others are not. So a semantic theory needs to be able to generate different T-sentences for the various readings of, e.g., "Visiting relatives can be annoying" or "Everyone loves someone", and Gross takes that to be a reason that the T-theorems of a semantic theory cannot be homophonic. But is there not a similar point to be made about rationalizing semantic knowledge? If so, how might that bear upon Gross's relation question?

In section 3, Gross considers six sorts of answers to the "relation question".

  • The first of these, discussed in §3.1, at which we have gestured repeatedly, is that our tacit knowledge is of the axioms of the theory, and the product of this knowledge (the "output") is what is consciously accessible. But Gross raises a problem for this view: The output seems to be non-homophonic, whereas what rationalizes is homophonic. How serious is that worry?
  • Gross then discusses three responses.
    • §3.2: Perhaps speakers do, despite appearances, have the sophisticated concepts deployed in the T-theorems. The reply is that this does not address the problem, since we are still dealing with states that have different contents, and one wants to know how they are related.
    • §3.3: Perhaps some additional processing at the "semantic-conceptual interface" converts the complicated eventish stuff into something simpler. Here, the worry seems to be only that this is an empirical commitment, perhaps a surprising one. Does that seem right? or is there some deeper worry about this kind of move?
    • §3.4: Perhaps we could re-do the semantics to avoid generating funky T-theorems. Here, the worry seems to be that this is just to make a bet on the future development of semantics that we have no reason to think will pan out.
  • The last two options attempt to avoid the problem by insisting either (i) that there is no difference of content between the two bits of knowledge (§3.5) or (ii) that, even if there is, the mental states involved are the same (§3.6). Discussion of the latter becomes extremely complex and is, perhaps, best skimmed for now.

Gross does not come to any definite conclusions here: His point is simply that all of these options have their costs. Which seems to be most plausible to you?

Finally, in section 4, Gross considers the question what sort of relation there might be between the two sorts of knowledge, if it cannot be as "intimate" as one might have hoped. His proposal is that tacit knowledge of a semantic theory might be necessary for both possession of the rationalizing knowledge (his (A)) and the ability to express it (his (B)). And, moreover, one's tacit knowledge might be causally responsible for one's rationalizing knowledge, even if it does not bear any rational or inferential relation to it. How plausible does that view seem?

Contextualism, For and Against
25 March

John Searle, "Literal Meaning", Erkenntnis 13 (1978), pp. 207-24 (DjVu, JSTOR)

Searle is concerned in this paper to argue against a certain conception of the (literal) meaning of a sentence and in favor of a different conception. He describes his target as follows:

Every unambiguous sentence...has a literal meaning which is absolutely context free and which determines for every context whether or not an utterance of that sentence in that context is literally true or false. (p. 214)

His preferred view is:

For a large class of unambiguous sentences...the notion of the literal meaning of the sentence only has application relative to a set of background assumptions. The truth conditions of the sentence will vary with variations in these background assumptions; and given the absence or presence of some background assumptions the sentence does not have determinate truth conditions. These variations have nothing to do with indexicality, change of meaning, ambiguity, conversational implication, vagueness or presupposition as these notions are standardly discussed in the philosophical and linguistic literature. (p. 214)

And Searle argues that these "background assumptions" cannot, even in principle, all be "specifi[ed] as part of the semantic content of the sentence, [since] they are not fixed and definite in number" (pp. 214-5). More importantly, no such specification can ever be complete: No matter how precisely we try to specify the "background assumptions", there will always be other background assumptions in play which can, by clever construction of examples, be brought to our attention and varied, so as to lead to variation in truth-conditions.

The general strategy of argument is to consider a sentence that seems to have a perfectly definite literal meaning. (In practice, we focus on one word and the contribution it is making.) We then consider certain peculiar contexts and note that it is simply not clear whether to regard a perfectly literal utterance of the sentence as true or false in that context. Indeed, there will be ways of developing the example so that a perfectly literal utterance of the sentence would be true; and there will be other ways of developing it so that such an utterance would be false. Hence, what the truth-condition of the utterance is depends upon exactly which "background assumptions" are in play.

To check your own understanding, it is worth trying to construct examples similar to Searle's for sentences other than the ones he considers.

A more general, principled question concerns why Searle is so focused on the literal meaning of sentences. In many ways, what Searle argues could be rephrased in terms not of sentence meaning but in terms of Grice's notion of what is said. Searle's point would then be that what is said, even in the most literal utterance of a very ordinary sentence, is not completely determined by "stable" features of the sentence that is uttered, but depends also upon background assumptions that cannot, even in principle, be completely specified. How, if that is true, might it affect the conception of semantic theory with which we have been operating throughout our discussions so far? What should we think of a competent speaker as knowing about such ordinary sentences in virtue of being a competent speaker? What might we say about what competent speakers know, say, about the meaning of the word "on", as it occurs in sentences such as "The cat is on the mat"?

28 March–1 April No Class: Spring Break
4 April

Robyn Carston, "Implicature, Explicature, and Truth-theoretic Semantics", in R. Kempson, ed., Mental Representations: The Interface Between Language and Reality (New York: Cambridge University Press, 1988), pp. 155-82 (DjVu). You can skip, or skim, the final section.

Grice writes in "Logic and Conversation":

In the sense in which I am using the word say, I intend what someone has said to be closely related to the conventional meaning of the words (the sentence) he has uttered. (p. 25)

Grice is of course aware that contextual factors may play various sorts of roles in determining what is said, as he goes on to discuss. But the fixed, stable meanings of the words used are supposed to play an especially important role.

We have more or less been following Grice in this respect, and assuming that we can derive the truth-condition of a sentence from semantic axioms governing its component parts. But Carston, in this paper, challenges this sort of assumption. She is particularly concerned with the requirements of a psychologically adequate account of linguistic competence. And she argues that, if we want such an account, then the difference between what is said—which she calls the explicature associated with an utterance—and what is implicated is much less stark than Grice seems to suppose.

The first lesson to learn from this paper is that it is not at all clear how we should draw the distinction between explicature (what is said) and implicature (what is meant). This was already argued in Searle, but Carston presents a whole battery of examples, and her discussion of "and" is especially intriguing, since that was one of the examples in which Grice was especially interested. In particular, Carston argues that an utterance of "A and B" can assert the existence of all sorts of different relations between A and B: temporal, causal, rational, and so forth. And she argues further that neither the view that "and" is multiply ambiguous nor Grice's view that the assertion of such relations is always an implicature can be sustained. The view she proposes is instead that speakers use various sorts of pragmatic processes, very similar to those that generate implicatures, to "enrich" the linguistically specified content so as to arrive at the explicature.

More specifically, Carston opposes what she calls the "linguistic direction principle", which claims that any "explicating" process must be in response to something in the linguistic form that calls for it. She sees the more traditional view as supposing that "what is said" must be truth-evaluable and that the only work context can do to fix what is said is whatever needs to be done to get us something truth-evaluable. So, e.g., the reference of a demonstrative has to be determined, since otherwise one has nothing truth-evaluable; but one does not need to find any relation for "and" to express beyond truth-functional conjunction, since that is already truth-evaluable. What do you think of her arguments against this traditional view?

Our main interest will be in the sorts of arguments Carston gives that, e.g., the temporal aspect of certain uses of "and" must be part of what is said. There are four of these:

  • Functional: Carston has views about the different roles the explicature and implicature play, and in particular about how the implicature is generated. These are supposed to imply that the implicature cannot logically entail the explicature. Too little seems to be said here to make it clear why she holds this view and, frankly, it does not seem overwhelmingly plausible on its face. So feel free to set it aside if you wish, though it would be nice to hear if anyone has some idea how to justify this restriction, which plays a large role early in the paper.
  • Relevance: Carston argues that relevance plays a central role in communication. In particular, considerations of relevance enter not just into the determination of implicatures but into the determination of what is said, e.g., the resolution of ambiguity, the determination of the reference of demonstratives, and so forth. Claims about relevance thus fuel some of Carston's claims about what is part of the explicature. Here, though, Carston mostly gestures at work by Dan Sperber and Deirdre Wilson, so these arguments may be difficult for us to evaluate.
  • Negation: Consider Grice's gas station case. Joe says, "Yo, Bill, I'm out of gas!" and Bill answers, "There's a gas station around the corner". Grice says that Bill implicates that the station is open and has gas to sell. Suppose Fred knows that the station is closed. Then it seems clear that Fred cannot say to Joe, "Don't listen to Bill, 'cause he's wrong", thereby meaning that the station is closed. As it's sometimes said, negation cannot "target" the implicated content.
  • Conditionals: There is something similar to be said about conditionals. I'll leave it as an exercise (i.e., feel free to do this in your response) to explain how the "conditional test" is supposed to work.

Carston uses the "negation test" and the "conditional test" to argue, in a variety of cases, that the explicature is much richer than one might have supposed. As I said before, there is a whole battery of examples here. Which of these seem to you to be the strongest? which the weakest? and why? What strategies do you think might be available for resisting the conclusion that Carston wants to draw, that pragmatic processes play a surprisingly large role in determining what is said?

Finally, what kind of threat, if any, do such examples pose to truth-conditional semantics as we have been discussing it? Carston herself thinks the threat is large, claiming that the right sorts of representations for which to define a truth-conditional semantics are the mental representations that are the result of explicature, not the linguistic representations that are the input to pragmatic processes. How plausible is that claim?

6 April and 8 April

Jason Stanley and Zoltán Gendler Szabó, "On Quantifier Domain Restriction", Mind and Language 15 (2000), pp. 219-61 (DjVu, Wiley Online, via Gendler Szabó's site)

This paper, and the next one we shall read, are reprinted in Jason Stanley, Language in Context (Oxford: Oxford University Press, 2007), together with several other essays on context-sensitivity.
Replies to this paper by Stephen Neale and Kent Bach, together with a reply by Stanley and Szabó, were published in the same issue of Mind and Language.

This paper is concerned with a particular case of the general problem raised by Searle and Carston: quantifier domain restriction. That is, it is concerned with the question how an utterance of a sentence like "Every bottle is empty" comes to express, not the absurd proposition that every bottle in the universe is empty, but some sensible proposition to the effect that every bottle in some particular group G is empty.

Stanley and Szabó begin by distinguishing between descriptive and foundational problems of context dependence. The core descriptive questions are which aspects of the utterance give rise to context sensitivity and what has to be done, exactly, to resolve it. Foundational questions concern how context does whatever needs to be done, e.g., how the value of a demonstrative pronoun is in fact fixed. Stanley and Szabó explain the distinction by reference to an example involving demonstratives, which is worth studying carefully.

I would suggest that this distinction should already go some way towards lessening one's sense of panic in the face of the examples offered by Searle and Carston, on the ground that at least some of what is troubling about those examples concerns the foundational problem, whereas semantics itself need be concerned only with the descriptive problem. How might that suggestion be developed?

Stanley and Szabó then distinguish three ways in which context can affect interpretation.

  • Syntactic: Context is called upon to resolve both lexical and structural ambiguity. One might think there are also cases in which something less than a sentence is uttered, and a complete sentence has to be reconstructed from the context.
  • Semantic: Context may be called upon to fix the values of contextual parameters, such as demonstratives and indexicals, but also e.g. to provide a comparison class for an attributive adjective. Note that, as Stanley and Szabó use the term, any contextual effect that affects what is said, in Grice's sense, is semantic.
  • Pragmatic: Context is of course critical to determining what is implicated by a speaker in making a certain utterance.

How does the distinction between descriptive and foundational questions apply in each of these cases?

With that distinction in place, Stanley and Szabó raise the question which of these roles context plays in the case of quantifier domain restriction. There are thus three options.

  • Syntactic: The "missing material" is essentially elided, so context has to reconstruct the complete sentence that determines what is said.
  • Semantic: Either (i) there is an unpronounced expression, present in the "logical form" of the uttered sentence, to which context assigns a value; or (ii) the semantic clause for quantifiers somehow introduces a "domain" over which the quantifier is supposed to range.
  • Pragmatic: What is said in an utterance of "Every bottle is empty" always is the absurd proposition that every bottle in the universe is empty, but a more sensible proposition is usually communicated through pragmatic processes.

To which sort of view do you think Searle or Carston might incline? If none of them, what sort of view do you think has been left out of account? It is also worth checking your understanding here by considering what the relevant options would be in the other cases we have discussed.

In §5, Stanley and Szabó criticize the syntactic approach. Their main objection is what they call the "underdetermination" objection, which is that it is very hard to see how context could provide a unique 'restrictor' for each quantificational phrase. I.e., they claim that this view makes the foundational problem nearly insoluble. This objection is not developed in much detail, so it would be well worth trying to explore it a bit. Here's one crucial question: How exactly does "context" resolve structural or lexical ambiguity? If context is what resolves it, then would it be possible for someone to utter an ambiguous sentence, fully intending that the sentence should have one particular interpretation, but somehow fail to utter that sentence, since context determined the other interpretation? Might a judicious application of the distinction between descriptive and foundational problems help here? If so, how? and how much?

In §6, Stanley and Szabó argue against the pragmatic approach. The core of their criticism is what has come to be known as the binding argument. Here's a simple example. Consider the sentence:

(*) Every senator is reviled by most voters.

It seems reasonable to suppose that an utterance of (*) could mean that every senator is reviled by most voters in that senator's state, not by most voters in the country. So which voters are in question depends upon which senator is in question. Can you think of other sorts of examples along these lines? Why are such examples supposed to be a problem for the pragmatic view? Obviously, utterances of (*) can implicate almost anything. So why isn't it enough to point out that they can implicate that thing, too? Part of an answer would involve considering:

(**) Every senator is reviled by most voters. So are most representatives.

and noting that this sentence is ambiguous. How so?

Finally, in §7, Stanley and Szabó discuss semantic approaches, considering three versions of the view:

  1. The quantifier domain is provided by context in much the way a "domain" is provided in an interpretation. The argument against this view is that different domains can be needed for different quantifiers. Can you come up with better examples of that sort of phenomenon?
  2. A sequence of quantifier domains is provided by context (much as context provides a sequence of objects to be the referents of various demonstratives). The objection to this view is that it too falls to the binding argument. How so?
  3. The quantifier domain is represented explicitly in the syntactic structure (logical form) of the sentence, though it is not pronounced. We will discuss this view a bit in class. You do not need to worry too much about the different implementations of this view and the arguments about which should be preferred, on pp. 254-8.

How plausible does this view seem?

11 April

Jason Stanley, "Making It Articulated", Mind and Language 17 (2002), pp. 149-68 (DjVu, Wiley Online)

This issue of Mind and Language was devoted to this sort of topic, and many of the other papers are also worth reading.

Topics for third short paper announced.

This paper continues Stanley's articulation and defense of the "binding argument" that is central to the previous paper we read. As he makes clear in the introduction, his larger goal is to defend "the view that all the constituents of the propositions hearers would intuitively believe to be expressed by utterances are the result of assigning values to the elements of the sentence uttered, and combining them in accord with its structure" (pp. 150-1). In particular, there are no "unarticulated constituents". More generally still, this is supposed to contribute to the defense of the view that context can affect what proposition is expressed by an utterance only by affecting the interpretation of elements of the syntactic structure of the sentence uttered.

The first section of the paper elaborates the binding argument and places it in the context of the sorts of arguments often given in linguistic theory for the existence of "hidden" or "unpronounced" elements. If these sorts of arguments are unfamiliar, don't worry about it. The main point here is simply that the binding argument is very much of a piece with the sorts of arguments in favor of "covert elements" that linguists standardly give.

The second section recounts a debate between Sellars and Strawson over the proper treatment of so-called "incomplete descriptions", such as "the table" (which, for the obvious sort of reason, are not straightforwardly amenable to Russell's treatment of descriptions). The main point here is the one made at the end of the section concerning what a proper response to the binding argument would have to be like. One cannot simply say that there is some "magical" process through which the right interpretation is generated. One has actually to explain what that process is.

In the third section, Stanley elaborates such a response, drawing upon work by Robyn Carston and Kent Bach. The idea is roughly as follows. Fans of "free enrichment" already accept that, during the process of semantic interpretation, additional material can be added to the (possibly incomplete) proposition that is provided simply by the literal meanings of the words used and whatever compositional rules there might be. For example, if someone utters the sentence "Michael is tall", then this can be 'enriched' to "Michael is tall for a human male" or "Michael is tall for a basketball player". Similarly, then, the thought is that the same process could also provide a pronoun that can then be bound by a higher operator. E.g., in the course of interpreting "Every senator is reviled by most voters", one might 'enrich' it to: Every senator is reviled by most voters in his or her state, thus recovering the bound reading.

In the final section, Stanley argues against this sort of move by arguing that it over-generates. It is important to appreciate here that it is every bit as important that one be able to explain why certain readings of sentences are not available as that one should be able to explain why certain readings are available. For example, we want to know not just why, in "John's brother said Tom kissed him", the pronoun can take either "John" or "John's brother" as its antecedent, but also why it cannot take "Tom" as antecedent. So the worry here is that, if "free enrichment" can provide bindable material, then we would expect to get readings of sentences we cannot in fact get.

Stanley claims that:

(15) Everyone has had the privilege of having John greet.

is ungrammatical, but that it could be rescued from ungrammaticality by the addition of the word "her" to the end of the sentence:

(16) Everyone has had the privilege of having John greet her.

(This is itself ambiguous, but we are interested here in the reading where "her" is bound by "everyone".) Since that is the sort of thing that "free enrichment" is supposed to be able to do, it is then a mystery why (15) is not grammatical after all. But (15) is grammatical (and a similar point applies to (13)). Greeting is a task one might have at a church or a meeting, and so (15) can perfectly well mean that everyone has had the privilege of John's performing a certain task for them. Still, if "enrichment" could add the word "her" to the end of (15), then (15) would be ambiguous, and now the objection is that (15) simply isn't ambiguous in that way. Thus, this account "over-generates", in the sense that it predicts that (15) should have a reading it simply does not have, unless some way can be found to stop (15) from being "enriched" to (16). Could it be responded here that we don't have the option of adding "her" to (15), since (15), we now see, already expresses a perfectly sensible proposition? (There is relevant discussion of the contrary move on pp. 163-4.) Are there other responses worth considering?

It is extremely important here to keep clearly in mind that the issue is supposed to be whether utterances of (15) can express what (16) does. It is not relevant if utterances of (15) can communicate what (16) does. This is the point made on pp. 165-6.

Finally, then, let me mention a different series of examples.

  1. Everyone who read a book skipped some pages.
  2. Everyone who read skipped some pages.
  3. Everyone skipped some pages.

I claim that there are readings of (i) that are not available for (ii) or (iii) but that should be if quantifier domain restriction worked through "enrichment". Can you develop this argument?

13 April

Emma Borg, "Minimalism versus Contextualism in Semantics", in Gerhard Preyer and Georg Peter, eds., Context-Sensitivity and Semantic Minimalism (Oxford: Oxford University Press, 2007), pp. 339-60 (DjVu)

See also her books Minimal Semantics (Oxford: Oxford University Press, 2007) and Pursuing Meaning (Oxford: Oxford University Press, 2012).

This paper is one in a volume of essays responding to and commenting upon Herman Cappelen and Ernie Lepore's book Insensitive Semantics, in which they argue for the view known as semantic minimalism. This is the view that, setting aside the obvious exceptions, every sentence expresses a unique proposition, so that context-sensitivity is limited to those obvious exceptions. Borg also defends a form of this view, though an even stronger one. Our interest is in how carefully Borg sets out the different positions. She does not argue for any of them in this paper.

Borg identifies four sorts of arguments against minimalism, that is, in favor of the view that some particular expression is context-sensitive.

  1. Context-shifting Arguments: These claim that some particular sentence, e.g., "Michael is tall", can express a truth in some contexts, but not in others, even if the relevant facts have not changed.
  2. Incompleteness: These claim that particular sorts of sentences, such as "Mary is ready", rarely, if ever, express propositions on their own, but require some supplementation.
  3. Inappropriateness: These claim that certain sorts of sentences, such as "Every bottle is empty", although there is a proposition they could always express, cannot express that proposition, because it is too obviously not what speakers mean.
  4. Indeterminacy: These arguments claim that even the thought expressed is in certain sorts of cases indeterminate.
The thought, then, is that some of these arguments, if accepted, lead to more radical departures from minimalism than others.

Following C&L, Borg distinguishes two sorts of contextualism: radical and moderate. C&L had characterized the difference in terms of the scope of context-sensitivity, so that more moderate views regard fewer terms as context-sensitive. As Borg notes, however, this is not a particularly illuminating characterization. Rather, it is one thing to hold that there are terms outside the "basic set" ("I", "here", "tomorrow", and the like) that are context-sensitive in the same way that those terms are. And it is an entirely different thing to hold that "there are forms of context-sensitivity that are not capturable on the model of...the Basic Set" (p. 344).

Thus, Borg regards the crucial questions as being: What are the mechanisms of context-sensitivity? Can the context of utterance act on semantic content even when such action is not demanded by the syntax of the sentence? Radical contextualists think it can; moderates think it cannot. The moderate view is thus much closer in spirit to minimalism. The disagreement between these views concerns not what context-sensitivity is, so to speak, but only how extensive the phenomenon is. Radical contextualism, on the other hand, thinks we need "an entirely different picture of the relationship between semantics and pragmatics" (p. 346), i.e., that there is something fundamentally wrong with the model of context-sensitivity that informs minimalism and moderate contextualism.

Using this distinction, Borg then defends moderate contextualism against a charge made by C&L: that once one allows for the possibility of context-sensitivity outside the basic set, established by the sorts of arguments typically used for that purpose, then one will find it difficult not to accept that context-sensitivity is all but ubiquitous. But Borg argues that moderates can have reasons to limit the scope of context-sensitivity. What are those reasons? One nice way to answer this question would be to reflect upon the different sorts of arguments that Borg distinguishes and try to see if they match up in any sensible way with the moderate-radical distinction as she draws it.

Borg spends the remainder of the paper offering a characterization of minimalism. We probably will not have time to consider this part of the paper in detail. But it is worth reflecting on the most distinctive feature of her characterization, which is what she calls formalism. What is the motivation for that feature of minimalism? How plausible is it? Can you see how it might lead to a really radical minimalism according to which there has to be a unique proposition that even sentences like "I am a philosopher" and "You aren't very funny" express, independent of context?

15 April

Ishani Maitra, "How and Why To Be a Moderate Contextualist", in Gerhard Preyer and Georg Peter, eds., Context-Sensitivity and Semantic Minimalism (Oxford: Oxford University Press, 2007), pp. 112-32 (DjVu)

There are several other essays in the Preyer and Peter volume that are well worth reading.

Maitra first takes up a topic also discussed by Borg: what divides Moderate from Radical Contextualism. Like Borg, she starts with Cappelen and Lepore's idea that the issue concerns the "extent" of context-sensitivity. But Maitra suggests that we should understand this in terms of:

  • The Meaning Question: How much does the "standing meaning" of an expression constrain the contents it can have on a given occasion of use?
  • The Context Question: How rule-governed is the determination of content from context?
The thought, then, is: "The more constrained and rule-governed the semantic contents of an expression are on a given Contextualist view, the less context-sensitive that expression is taken to be" (p. 116). This is an important point: One does not have to choose between the different sorts of views here once and for all, but can hold different views about different sorts of expressions.

Maitra goes on to point out that this way of categorizing the different views makes the question how many expressions are context-sensitive not the crucial question. What matters is the way in which they are context-sensitive. How might this compare to Borg's way of characterizing the views, in terms of different mechanisms of context-sensitivity?

The main focus of the paper, though, is on what Maitra calls the "Miracle of Communication Argument" (MCA) against Radical Contextualism. The worry here is that, if pragmatic processes affect semantic content, then, since almost any piece of information one has can prove relevant, it is obscure how speakers and hearers ever manage to converge on a particular interpretation of some bit of language. As Maitra notes, however, something along these lines seems obviously to be true of implicature, and yet we do manage to communicate by implicature. Hence, it looks as if there must be some explanation to be given of how this works. Any idea what that might be?

Another point is that something along these lines seems to be true of uncontroversially context-sensitive expressions, such as "that" and "we". Indeed, there seem to be very, very few expressions in the "basic set" whose content on a given occasion of utterance is completely determined by rule: "I", "today", "tomorrow", and "yesterday" seem plausible candidates. But neither "here" nor "now" (nor, as Maitra mentions, "we") is. Can you construct examples to illustrate this point?

Still, Maitra concedes that the MCA does pose some sort of challenge to Contextualism. In particular, we need to "explain why hearers are generally more confident about what is communicated via semantic contents, than about what is communicated in other ways" (p. 125). Maitra goes on to argue that an appropriately Moderate form of Contextualism has an explanation to offer. Focusing on comparative adjectives, like "tall", she suggests that (i) their standing meaning highly constrains their content, since the only locus of variation is in the comparison class, and (ii) it might be possible to say something fairly definite about how different contexts make "natural" comparison classes available. How does Maitra develop this latter idea? What does she mean by "natural" readings of sentences? How satisfying is what she has to say on this score? How is this view supposed to answer the MCA? To what extent is that reply undermined if, in the end, there isn't much to be said about the "Context Question"?

On p. 128, Maitra considers an objection that sounds very much as if it might have been offered by Searle: Even once we know what the comparison class for "fast", say, is, "there are many ways of being fast for a snail". So the worry is that, if we just specify the comparison class as being snails, we have yet to specify a truth-evaluable content. Maitra offers two replies, the first of which is ad hominem. What is the second reply?

Finally, Maitra considers the question whether a Contextualist might concede that, since so much information is potentially relevant for determining the comparison class, say, there will be failures of perfect communication, but then respond that communication does not need to be perfect to be successful. She does not really develop an example to illustrate this possibility. Can you do so?

18 April

Discussion

Third short paper due

Metaphorical Meaning
20 April

Donald Davidson, "What Metaphors Mean", Critical Inquiry 5 (1978), pp. 31-47, also in Inquiries, pp. 245-64 (DjVu, JSTOR).

The classic paper on metaphor, to which much of the early literature responds, is Max Black, "Metaphor", Proceedings of the Aristotelian Society 55 (1955), pp. 273-94 (PDF). Black responds to Davidson in "How Metaphors Work: A Reply to Donald Davidson", Critical Inquiry 6 (1979), pp. 131-43 (PDF).

The central question of Davidson's paper is what we should say about the meanings of metaphorical utterances, of which, as you will note, his paper is quite full. His view is that the only meaning a metaphorical utterance has is its literal meaning. This is a bold view, and one central problem is to understand how Davidson does think metaphor functions.

Davidson first argues that simply saying that using an expression metaphorically gives it a special, metaphorical meaning (and a special, metaphorical extension) cannot be right, because, if so, then "there is no difference between metaphor and the introduction of a new term into our vocabulary" (p. 34). What that view leaves out is the fact that metaphorical meaning, if such there is, depends upon literal meaning. So, Davidson argues, metaphor is not just a form of ambiguity, either.

The most developed form of the ambiguity theory is what Davidson calls the 'Fregean' theory. His argument against it spans pp. 36-8. What is the central idea in this argument? How is it supposed to refute the 'Fregean' view? Perhaps the key passage is this one:

If metaphor involved a second meaning, as ambiguity does, we might expect to be able to specify the special meaning of a word in a metaphorical setting by waiting until the metaphor dies. The figurative of the should be immortalized in the literal meaning living metaphor of the dead.

The next theory considered is the standard, grade school account: A metaphor is a simile without "like" or "as". Davidson complains that the "corresponding" simile is not always easy to identify. But his deeper complaints are that this view (i) "den[ies] access to what we took to be the literal meaning of the metaphor" and (ii) trivializes metaphor, since the literal meaning of a simile is simply that this is like that. If there is more to the meaning of the simile—if some particular ways in which this is like that are part of the meaning of the simile—then, Davidson complains, the "reduction" of metaphor to simile is unhelpful. Does that seem right?

To this point, then, Davidson seems to have taken himself to have disposed of the idea that metaphorical meaning should be regarded as part of what is said. Thus, he insists, on p. 40, that the particular comparisons a simile might lead you to notice are part of what is meant. He then goes on to say that "[w]hat words do with their literal meaning in simile must be possible for them to do in metaphor". The next topic to be explored, then, is whether we should think of metaphorical meaning also in terms of what is meant.

The question, then, is whether a given metaphor has any sort of "cognitive content". Davidson's first question is why, if it does, it is so difficult to say what it is, i.e., to replace the metaphor with a literal paraphrase. This reveals what Davidson calls

a tension in the usual view of metaphor. For on the one hand, the usual view wants to hold that a metaphor does something no plain prose can possibly do and, on the other hand, it wants to explain what a metaphor does by appealing to a cognitive content—just the sort of thing plain prose is designed to express.

His suggestion, then, is that we should "give up the idea that a metaphor carries a message, that it has a content or meaning (except, of course, its literal meaning)" (p. 45). Thus, he writes:

The central error about metaphor is most easily attacked when it takes the form of a theory of metaphorical meaning, but behind that theory, and statable independently, is the thesis that associated with a metaphor is a cognitive content that its author wishes to convey and that the interpreter must grasp if he is to get the message. This theory is false, whether or not we call the purported cognitive content a meaning. (p. 46)

Davidson thus wants to deny that we should think of metaphorical meaning in terms of what is meant, either.

Thus, we are left with the question how Davidson does think metaphor works. What he seems to say is that a metaphor can make us aware of, or lead us to appreciate, certain sorts of similarities (e.g.), but that this is not because of any "coded message" that the metaphor carries. The language here is broadly causal. How might the view properly be understood? What advantages or disadvantages does it seem to have?

22 April

Elizabeth Camp, "Contextualism, Metaphor, and What is Said", Mind and Language 21 (2006), pp. 280-309 (Wiley Online, via Camp's Website, DjVu).

The sort of Gricean view of metaphor Camp defends was originally elaborated by John Searle in "Metaphor", in his Expression and Meaning (Cambridge: Cambridge University Press, 1979), pp. 76-116 (DjVu). This sort of idea was already mentioned by Grice ("Logic and Conversation", p. 34).

Camp has published a few other papers on this topic, which can be found on her personal web site.

Camp endorses a largely Gricean view of metaphor, according to which one who utters a metaphor "say[s] one thing in order to communicate something different" (p. 280). Much of her paper defends this view against a "contextualist" treatment of metaphor according to which metaphorical content is part of what is said. (Note that what Camp means by "contextualism" seems to be a form of "radical" contextualism.) To that end, Camp discusses four sorts of arguments for the "contextualist" view.

One interesting question to consider as you read this paper is whether metaphor is a unified category from the perspective of semantic theory. I'll raise this kind of issue at a couple of places below.

The first argument is that speakers are willing to report someone who has uttered "Bill is a bulldozer" as having said, e.g., that Bill is a tough guy. The simple response to this argument is that the ordinary use of "said" should not be presumed to be any kind of guide to the theoretical notion of what is said. (This point is strongly associated with Cappelen and Lepore (1997), to which Camp refers.) A deeper response, contained in the very last paragraph of this section, is that metaphor "patterns with" implicature as regards its interaction with indirect speech reports. Can you elaborate this response? Can you think of other examples that might support it?

The second argument is that the metaphorical interpretation is in some sense "direct" and independent of the (alleged) literal meaning, whereas an implicature of course does depend upon what is said and so is, in that sense, "indirect". The difficulty, according to Camp, is to explain clearly what that is supposed to mean. It cannot mean, as Recanati seems to suggest, that "indirect" meanings have to be worked out consciously, since many implicatures are not. A better suggestion is that the process of working out "indirect" meanings has to be "available", in the sense in which one thinks one's reasons for action are "available". But then, Camp claims, metaphorical interpretation is "indirect" in the relevant sense. This is simply because, as Davidson emphasized, metaphorical meaning depends upon literal meaning. But one might wonder whether there is not a contrast here between what Camp calls "ordinary conversational metaphors" and "poetic metaphors". Does literal meaning play the same role in these two cases? For a somewhat different example, consider:

The demon in charge of this portion of Hell is a bulldozer.
said by someone commenting upon their department chair. Does the literal meaning of that sentence play a significant role in its interpretation?

The third argument is that metaphorical content can serve as input for the process of calculating implicatures. Camp argues, however, that acknowledged forms of indirect speech, such as sarcasm, can do so as well. The hard case is to get implicatures to trigger further implicatures. How convincing is Camp's example of that? How plausible is her explanation for when this can happen and when it cannot? Still, there is, as Camp notes, an asymmetry: Implicature must follow metaphorical interpretation; you cannot have a metaphorical interpretation of an implicature. What is her explanation of this asymmetry (borrowed from Josef Stern)? Why might one think it showed that metaphorical meaning is part of what is said? Her argument, in response, is again that this does not clearly differentiate metaphor from other forms of indirect speech, such as sarcasm. This is a complex argument involving example (23). Can you unpack it? (The crucial claim is that "the manner-generated implicatures must fall within the scope of the sarcasm".)

The fourth argument is that one can explicitly agree or disagree with the metaphorical content of an utterance, by saying things like "Yes, that's true", or "No, he's not". In cases like the letter of recommendation, on the other hand, one cannot disagree with what is merely implicated that way. But Camp argues (somewhat tentatively) that there are cases of implicature where one can use such language, and that it certainly can be used with sarcasm, malapropisms, and certain sorts of speaker's meaning. She also argues that respondents can insist upon a literal construal, saying that

the crucial point is this: if the original speaker’s utterance had genuinely ‘lodged’ a new metaphorical meaning in the words uttered, or even just had established a new, temporary use for them, then that meaning should necessarily be inherited by any later use of those same words in that same context which responds to the initial claim.
Care to elaborate? How good is that argument?

One might wonder if another argument might be available here, namely, that the difference here tracks explicitness and obviousness, rather than revealing a difference between metaphor and implicature. How might that go?

The remainder of the paper sketches a positive account of how to delineate "what is said", which Camp initially explains as a "notion of 'first meaning'—first in the rational order of interpretation" (p. 300). This part of the paper is well worth studying, but our focus at the moment is on metaphor, and it is doubtful we will have time to discuss it. You should read it, however, as it throws important light on Camp's discussion of the four objections. (Camp has, by the way, continued to develop this sort of account in her more recent work.)

25 April

Catherine Wearing, "Metaphor and What Is Said", Mind and Language 21 (2006), pp. 310-332 (DjVu, Wiley Online).

See also Dan Sperber and Deirdre Wilson, "A Deflationary Account of Metaphor", in R. Gibbs, ed., The Cambridge Handbook of Metaphor and Thought (Cambridge: Cambridge University Press, 2008), pp. 84-105 (via Sperber's Site).

27 April

Josef Stern, "Metaphor and Minimalism", Philosophical Studies 153 (2011), pp. 273-98 (DjVu, Springer).

Stern has published a number of papers on metaphor. The original piece elaborating his view is "Metaphor as Demonstrative", Journal of Philosophy 82 (1985), pp. 677-710 (JSTOR). He has also published a book on the topic, Metaphor in Context.

3 May, 5pm

Topic for final paper must be cleared with instructor

10 May, 5pm

Final Paper Due

1Where possible, links to publicly accessible electronic copies of the papers are included. For copyright reasons, however, many of the links require a username and password available only to those enrolled in the course.

Richard Heck Department of Philosophy Brown University