Note: You may download the original syllabus as a PDF. The syllabus may (and probably will) change during the semester. The version here should always be current.
As well as a list of readings and such, this page contains links to the various papers we shall be reading. The files are usually available in two forms: (i) a DjVu file and (ii) a PDF file. Why both forms? They are intended for different uses.
There is another advantage to DjVu. Because DjVu is a file format specifically designed for scanned text, the DjVu encoder produces files that are typically much smaller than the corresponding PDFs. For example, the PDF for Davidson's "Truth and Meaning" is 1.2 MB; the DjVu, which was created from the PDF, is 490K, less than half the size, and that includes the embedded text, for searchability. The contrast is even greater in other cases.
To view the PDFs, you will of course need a PDF reader. For the DjVu files, you will need a DjVu reader. Linux users can likely just install the djvulibre package using their distro's package management system. There are also free (as in beer and as in speech) readers for Windows and OSX.
If you follow those links, you will see a list of files you can download. Just download the most recent one. (Do not download the file mentioned above the list of files as the "latest version". That is source code.) And there is a browser plugin for Google Chrome that should work on any OS.
Another option is Okular, which was originally written for Linux's KDE Desktop Environment but which can now be run, experimentally, on Windows and OSX, as well. A list of other DjVu resources is maintained at djvu.org. There are also DjVu readers available for Android and whatever proprietary garbage other folks are peddling these days. Go to Play Store or whatever to find them.
The program I've used to convert PDFs to DjVu is pdf2djvu, a simple Bash script I wrote myself. It relies upon other programs to do the real work and should run on most varieties of Unix.
|27 January||Introductory Meeting|
Grice has two main goals in this paper. The first is to distinguish 'natural' from 'non-natural' meaning. Grice offers several tests that he claims separate them. How good are these tests? Are some better than others? Give some examples of your own of both kinds of meaning. Are there cases of meaning that don't seem to fit either of Grice's categories?
Strawson is critical of the usual identifications between the stipulated meanings of the logical symbols for conjunction and the like and certain words of ordinary language. In fact, Strawson seems to think that none of the usual identifications are correct, except possibly the one between "~" and "not".
H.P. Grice, "Logic and Conversation", in Studies in the Ways of Words (Cambridge MA: Harvard University Press, 1989), pp. 22-40 (DjVu)
Grice's central goal in this paper is to distinguish what is "strictly and literally said" in an utterance from other things that might be "meant" or "communicated" in the making of that utterance and, in particular, to articulate a notion of what is implicated by an utterance that is part of what is 'meant' or 'communicated', but is not part of what is said.
|Meaning and Truth-Theory: Davidson's Proposal|
Donald Davidson, "Theories of Meaning and Learnable Languages", in his Inquiries into Truth and Interpretation (Oxford: Oxford University Press, 1984), pp. 3-15 (DjVu)
The central purpose of Davidson's paper is to motivate what has come to be called the "principle of compositionality", which asserts that the meaning of a sentence is a function of the meaning of the words of which it is composed, and of how that sentence is composed from those words.
In "Theories of Meaning and Learnable Languages", Davidson argued that, if a language is learnable, we should be able to give a compositional theory of meaning for it: a theory that shows how the meaning of a sentence depends upon, and is determined by, the meanings of its parts. In "Truth and Meaning", Davidson argues that a theory of truth can play this role, and he claims, moreover, that nothing else can.
Strawson is concerned in this paper to mediate a dispute between two approaches to questions about meaning. The first ten pages or so are devoted to explaining the two sides, which are represented by Grice and Davidson, among others.
David Lewis, "Languages and Language", Minnesota Studies in the Philosophy of Science 7 (1975), pp. 3-35 (reprinted in Lewis's Philosophical Papers, vol.1 (New York: Oxford University Press, 1983), pp. 163-88) (DjVu, Minnesota Studies)
You should concentrate on sections I-III, in which Lewis summarizes a modified version of the more extensive account of linguistic meaning given in his book Convention (Cambridge MA: Harvard University Press, 1969), and on pp. 17-23 (pp. 174-9 of the reprint), where Lewis discusses a series of objections connected to compositionality.
Lewis here is returning to the sorts of questions that exercise Strawson in "Meaning and Truth". He argues that languages, which are the concern of "formal semanticists", are central to an account of language, which is the social phenomenon of concern to people like Grice. In particular, Lewis wants to explain what it is for a sentence to have a certain meaning in terms of the relation: L is the language used by some population P, where that relation is itself explained in terms of a particular convention that prevails in P. In the background is a general account of what a convention is.
Topics for first short paper announced
|Meaning and Truth-Theory: The Foster Problem|
John Foster, "Meaning and Truth-Theory", in G. Evans and J. McDowell, eds., Truth and Meaning: Essays in Semantics (Oxford: Oxford University Press, 1976), pp. 1-32 (DjVu)
You need only read sections 1-2, on pages 1-16, carefully. The discussion in section 3 concerns Davidson's "revised thesis", which we have not yet encountered, and section 4 contains Foster's emendation of Davidson's position, which, as we shall see, falls to a version of Foster's own objection to Davidson.
Foster's paper is important for a certain sort of objection that it brings against what Foster calls "Davidson's Initial Thesis". As we shall see, the objection threatens rather more than just that.
You need only read through about p. 174 or so in "Reply to Foster". The remainder discusses Foster's objections to Davidson's new view, which we did not read.
Davidson's reply to Foster consists, more or less, of pointing to the view he develops in "Radical Interpretation". So most of our discussion will focus on that paper.
First short paper due
No Class: Presidents' Day Holiday
See also Scott Soames, "Semantics and Semantic Competence", in S. Schiffer and S. Steele, eds., Cognition and Representation (Boulder CO: Westview Press, 1988), pp. 185-207 (DjVu).
Our main interest here is in Soames's re-articulation of the Foster problem, on pp. 19-25, and his criticism of Davidson's response to it, on pp. 25-8. In between, on p. 25 itself, there are a handful of remarks about Higginbotham's response, which we shall read next. But since we have not yet read Higginbotham's paper, you need not worry about them now.
For an approach that is different from but similar to Higginbotham's, see Richard Larson and Gabriel Segal, Knowledge of Meaning: An Introduction to Semantic Theory (Cambridge MA: MIT Press, 1995), Chs. 1-2
You should focus on sections 1 and 2. We will not discuss section 3 directly.
Michael Dummett, "What Do I Know When I Know a Language?", in The Seas of Language (Oxford: Oxford University Press, 1993), pp. 94-105 (DjVu)
As usual with Dummett's writings, this paper is dense and difficult. The central question of the paper is why, in explaining the concept of linguistic meaning, we should need to invoke a speaker's mastery (understanding) of her language, and why we should need to invoke the concept of knowledge in explaining the latter.
See also Michael Dummett, "Language and Communication", in The Seas of Language (Oxford: Clarendon Press, 1993), pp. 166-87; John McDowell, "Meaning, Communication, and Knowledge", in Meaning, Knowledge, and Reality (Cambridge: Harvard University Press, 1998), pp. 29-50
This is a long and difficult paper. You should focus on pp. 827-44. I'll make some comments about the material that follows, which is well worth reading, but we obviously cannot discuss the whole of the paper in one class.
Section 3 of the paper raises some questions about compositionality, and how that figures into the analysis. I personally find this part of the paper to be extremely interesting, but we probably will not have time to discuss it in class, so you should feel free to skip it, if you so wish.
Richard Heck, "Reason and Language", in C. Macdonald and G. Macdonald, eds., McDowell and His Critics (Oxford: Blackwell Publishing, 2006), pp. 22-45 (PDF)
Focus here on the arguments in the first two sections of the paper. The first section argues that speech is "propositionally rational", that is, that speech is intentional under propositional descriptions, such as: saying that snow is white. The second section argues that speech is also intentional under verbal descriptions, such as: uttering the sentence "snow is white". What are the arguments for these two claims? And how convincing are they?
You may wish to have a look at Richard Heck, "Meaning and Truth-Conditions", in D. Griemann and G. Siegwart, eds., Truth and Speech Acts: Studies in the Philosophy of Language (New York: Routledge, 2007), pp. 349-76 (DjVu).
Topics for second short paper announced
Noam Chomsky, Knowledge of Language: Its Nature, Origin, and Use (London: Praeger, 1986), Chs. 1-2 (DjVu)
You might also want to look at Noam Chomsky, Aspects of the Theory of Syntax (Cambridge MA: MIT Press, 1965), chapter 1, sections 1-6 (DjVu, PDF), and W. V. O. Quine, "Methodological Reflections on Current Linguistic Theory", Synthese 21 (1970), pp. 386-98 (DjVu, Springer).
The main thing we will want to discuss is Chomsky's distinction between E-language and I-language: what the distinction is and why Chomsky thinks that I-language should be the focus of linguistic inquiry.
Regarding the former issue, when Chomsky introduces this terminology, he remarks that E-language is external language, and I-language is internal language, and some of his remarks accord with that usage. But there is another distinction that is at least as important: between language thought of extensionally, e.g., as a set of grammatical sentences, or of sentence-meaning pairs; and language thought of intensionally, in terms of a system of rules for determining grammaticality or what meaning (if any) to assign to a sentence. What is the relation between these two ways of thinking of the distinction?
Much of Chomsky's discussion revolves around syntax and, in one striking passage (pp. 41ff), phonology. But it would be natural to want to apply these same sorts of considerations to the case of semantics—as, indeed, Higginbotham explicitly proposes to do: "I am applying to semantics a research program that goes forward in syntax and phonology, asking, 'What do you know when you know a language, and how do you come to know it?'" ("Truth and Understanding", p. 13). How might that sort of program in semantics be motivated, in a broadly Chomskyan way?
On pp. 44-5, however, Chomsky alleges that the shift from E- to I-language essentially excludes semantics—conceived as the study of the relation between language and world—from linguistic theory. How serious do the concerns Chomsky expresses here seem to you to be?
Gareth Evans, "Semantic Theory and Tacit Knowledge", in his Collected Papers (Oxford: Oxford University Press, 1985), pp. 322-42 (DjVu)
Evans is responding to Crispin Wright, "Rule-following, Objectivity, and the Theory of Meaning", in S. Holtzman and C. Leich, eds., Wittgenstein: To Follow a Rule (London: Routledge and Kegan Paul, 1981), pp. 99-117. Similar worries can be found in other authors.
See e.g. Hilary Putnam, "The 'Innateness Hypothesis' and Explanatory Models in Linguistics", Synthese 17 (1967), 12-22
reprinted in his Mind, Language, and Reality: Philosophical Papers, v. 2 (Cambridge: Cambridge University Press, 1975), pp. 107-16.
Wright continues the discussion in his "Theories of Meaning and Speakers' Knowledge", in his Realism, Meaning, and Truth (Oxford: Blackwell, 1986), pp. 204-38
In reply to Wright, see Martin Davies, "Tacit Knowledge and Semantic Theory: Can a Five per cent Difference Matter?", Mind 96 (1987), pp. 441-6
Evans's main goal in this paper is to try to explain what motivates the requirement that theories of meaning should be compositional, and to explain as well what might allow us to distinguish "extensionally equivalent" theories that prove all the same T-sentences.
In section II, Evans discusses a toy language with ten unary predicates and ten names, and so 100 sentences, and considers two theories of truth, T1 and T2, that do and do not ascribe any sort of structure to the sentences of this language. Evans suggests that, considered as accounts of what a speaker of the language knows, these two theories can be distinguished empirically. The core idea is that the axioms of the theories can be associated with certain dispositions that the speaker has. What are the empirical differences between the theories? Can you think of any other such differences we might expect to find? Why is it important that the dispositions in question should be "full-blooded"?
In section IV, Evans discusses the question whether attribution of tacit knowledge of a structured theory of meaning can explain a speaker's capacity to understand novel sentences. Evans first concedes that, by itself, it cannot. It is, honestly, not entirely clear to me why. I think the reason is that, if the speaker's regarding the sentence "J(e)", for example, as true iff whatever is regarded as the exercise of the dispositions that constitute her tacit knowledge of the theory T2, then that judgement cannot also be regarded as explained by her having those same dispositions: that would be like explaining a sleeping pill's effectiveness in terms of its dormativity—its disposition to make people sleepy. But the usual response to this kind of worry is that this objection is only good if the disposition is "thin" rather than "full-blooded", as Evans's were meant to be. So why doesn't Evans just say that, if the dispositions are full-blooded, then there is an explanation to be had, in terms of whatever the categorical basis of those dispositions is? Is that perhaps the point Evans is making when he says that "to say that a group of phenomena have a common explanation is obviously not yet to say what the explanation is"?
Second short paper due
Davies is interested in the question whether there is a way of making sense of the idea that a language (in roughly Lewis's sense) has a certain structure without having to claim that speakers of the language must in any sense, including tacitly, be aware of that structure. So, in particular, in the case of Evans's 100-sentence language, Davies wants to claim that, even if speakers fail to have the sorts of dispositions Evans specifies, it can still be true that the language they speak has a certain sort of semantic structure. Davies thus suggests that semantic theories should meet what he calls the structural constraint:
If, but only if, there could be speakers of L who, having been taught to use and know the meanings of sentences (of L) s1, ..., sn..., could by rational inductive means go on to use and know the meaning of the sentence s..., then a theory of truth for L should employ in the canonical derivations of truth condition specifying biconditionals for s1, ..., sn resources already sufficient for the canonical derivation of a biconditional for s. (p. 138)

Note (as Davies more or less mentions) that this involves what a hypothetical speaker could do, much along the lines of what Davidson suggests in "Radical Interpretation". The basic point here is supposed to be that semantics need not have anything particular to do with facts about how speakers understand their language.
In section I of the paper, Davies considers four objections to the structural constraint. The most important of these is the fourth. Davies's initial presentation of the objection is somewhat complicated, but he gives a concrete example on p. 146. The core of the objection is that certain actual speakers of a language might lack the ability to "project" the meanings of sentences they have not encountered (e.g., "Bug moving"), whereas other speakers might have that ability; and, if so, then it seems odd to say that the language of the speakers who cannot "project" has the same structure as that of the speakers who can project, simply because someone could project meaning in that way. Davies's response is to deny that, in such a case, the common sentences of the two languages (e.g., "Bug") do in fact have the same meaning. How exactly does this save the structural constraint? What does it say about how the meanings of sentences are being specified when languages are identified as Davies identifies them at the very beginning of the paper? Is this response compatible with Davies's insistence that, even if speakers understand Evans's 100-sentence language in a non-compositional way, still the right semantics for that language is the compositional one?
In section II, Davies discusses what is involved in crediting speakers with "implicit" knowledge of a theory of meaning for their language. Davies's first step is to introduce a notion of "full understanding" of a language: Someone fully understands a language L if she is in a "differential state" with respect to the various words (semantic primitives) of the language. This requires her to have the sorts of dispositions concerning acquisition and loss that Evans discusses, as well as the dispositions concerning change of meaning that were mentioned in class. But Davies worries that mere dispositions are not enough and so invokes causal and explanatory relations as well in giving a full account. Try to explain that account as best you can, in your own words. The question then arises: Can't we at least imagine that an earthling and a Martian (see p. 150) might have very different such dispositions? What follows if we can?
Davies emphasizes, on pp. 152-3, that the account of full understanding does not itself involve attributing tacit or implicit knowledge. Davies first considers, and dismisses, a series of objections to the claim that we should attribute such knowledge. Beginning on p. 156, however, he offers a reason not to do so, namely, that there seems to be no need to attribute anything like implicit or tacit desires with which such beliefs might interact. How does this relate to Evans's idea that such informational states are not "at the service of many projects"? How is Davies proposing that we should think of the relation between a full understander and the best semantic theory for her language? (See the last paragraph of §18.) How would this discussion be affected if we replaced talk of "belief" with talk of "information", and made it clear that the information in question need not be available to the speaker, at the personal level?
One might think of Fricker's goal in this paper as to try to answer Lewis's claim that there is "no promising way to make objective sense of the assertion that a grammar Γ is used by a population P whereas another grammar Γ', which generates the same language as Γ, is not" (L&L, p. 20)—and to do so on Lewis's own terms. That is, Fricker accepts that semantic facts about a language must supervene on how it is used—on the linguistic abilities of its speakers—and that these abilities concern only what whole sentences mean. (These are principles (α) and (A), on pp. 52 and 56, more or less.) And yet she wants to claim that each language has a semantic structure that is essential to it and to which speakers of the language bear some non-trivial relation. (These are principles (β) and (γ) on p. 52.) This is a bold, which is not necessarily to say 'heroic', view. The strategy, to a large extent, is to effect a synthesis of Evans and Davies.
Fricker begins, in section I, by rehearsing what is, very roughly, the metaphysical project elaborated in "Radical Interpretation". Section II introduces her problem and argues that both Davies and Evans fail properly to vindicate the idea of semantic structure. Section III formulates an argument that the Ludovician assumptions mentioned above lead to Lewis's conclusion: that a theory of meaning for a language need not uncover structure.
The main argument of the paper is in sections IV and V. Fricker's conclusion is that "...facts about sentence-meanings are not independent of facts about their structure" (p. 60). She mentions three sorts of reasons in favor of this claim:
What do you think of these arguments? Perhaps what is most striking about them is that they lead to the suggestion that the sorts of principles Davies and Evans propose are a priori principles that govern radical interpretation. How plausible is that suggestion?
Antony's main goal in this paper is to argue that semantics should be understood as inextricably linked to psychology: to questions about what speakers actually come to know about their languages when they learn to speak. She distinguishes three sorts of alternatives to her position: Platonism, represented by Devitt, Katz, and Soames; Instrumentalism, represented by Davidson; and Reconstructive Rationalism, represented by Dummett and Wright. (Except for the first, these are my terms.) She discusses Platonism only briefly, claiming that the facts about (say) English simply cannot be completely independent of facts about how English speakers behave and gesturing in the direction of Fricker.
The discussion of Instrumentalism and Reconstructive Rationalism, which dominates section I, consists, largely, in playing them off against one another. The latter view insists that semantic theory should be concerned with speakers' knowledge, but only with a "systematized" (Dummett) or "idealized" (Wright) version of such knowledge, not with the knowledge actual speakers possess. Antony argues that the attribution of semantic knowledge, if it is not to be merely heuristic (and thus no different from what Instrumentalism offers), must have some explanatory work to do. But, she suggests, "...the rational reconstruction of linguistic meaning cannot explain the rationality of human language use if the posited linguistic structure is not available to speakers" (p. 188). How fair a criticism is this? Is Antony right simply to dismiss Wright's project of an "idealized epistemology of understanding"?
Antony goes on to argue that the idealizations inherent in Reconstructive Rationalism are difficult to reconcile with the professed goals of its proponents. There is, she says, a "tension between, on the one hand, appealing to human capacities in order to justify features of meaning-theoretic projects, and on the other, ignoring the actual nature of those capacities" (p. 188). The original sin here is Davidson's attempt to motivate compositionality, and the problem this observation poses for Reconstructive Rationalism is supposed to be that it is hard to see why, if we are not going to idealize away from the finitude of the language-learner, we should think it justified to idealize away from any of the other circumstances under which language is, in fact, acquired by human beings. Does this seem a fair criticism?
This point then morphs into a criticism of Instrumentalism. The discussion is directed at Quine's restriction on the data on which radical translation must be based, but could equally be directed at Davidson's corresponding restriction on the data on which radical interpretation must be based. Such restrictions are motivated by claims about what sort of evidence is available, in principle, to a language-learner (or, though she does not mention the point, to an actual speaker who is attempting to determine if someone else speaks her language—see the first couple pages of "Radical Interpretation"). Antony argues in response that the evidence that is allowed to a radical whatever-er is both wider and narrower than what is available to actual language-learners. But the most interesting claim she makes is this one:
Considered in the context of Quine's metaphysical goals, the idealization involved in permitting the linguist an unlimited amount of behavioural evidence appears concessive to the meaning realist; in fact, it is a slick piece of bait-and-switch. The cooperative tone distracts us from the fact that Quine has already begged the crucial question, by assuming that whatever physical facts metaphysically determine meaning must be identical with the physical facts that constitute the evidence children have available to them during language acquisition.
What does Antony mean here? What might be examples of physical facts on which semantic facts supervene that are not among the facts available as evidence to a child learning language? An even more interesting question is how, if there are such facts, they might, in some other (non-evidential) sense, be "available" to ordinary speakers interpreting one another. (Suppose that, as a matter of empirical fact, there were exactly 1000 concepts humans could possess, and we were all born with all of them. How would that affect language acquisition and interpretation?)
There is thus a common criticism of both Instrumentalism and Reconstructive Rationalism: "...[T]he epistemic strategies of 'ideal' learners are of no theoretical value to the task of understanding human cognitive competencies if the idealizations abstract away from epistemic constraints that are in fact constitutive of the learning task confronting humans" (p. 193). Yes or no?
Section II of the paper turns to questions about tacit knowledge. Antony argues that the strategy pursued by Evans and Davies (think, in his case, of what he says about full understanders) has a fatal flaw: It purports to justify the attribution of tacit knowledge entirely on the basis of an isomorphism between the structure of a semantic theory and the structure of a speaker's abilities: But "...isomorphisms are cheap: the mere fact that the formal structure of a particular theory can be projected onto some structure of causally related states is not enough to make it true that the set of states actually embodies the theory" (p. 200). By contrast, Antony insists, what we want is for the causal processes that underlie the understanding of novel sentences to be sensitive to information about the meanings of sub-sentential constituents. And she proposes that we can have that only if we take seriously the idea that there are states in the speaker that encode the very information articulated by a semantic theory, and that these states interact causally in ways that are sensitive to that information. Out of this emerges a criticism of Fricker. What is that criticism?
Steven Gross, "Knowledge of Meaning, Conscious and Unconscious", in The Baltic International Yearbook of Cognition, Logic, and Communication, Vol. 5: Meaning, Understanding, and Knowledge (2010), pp. 1-44 (Baltic Yearbook)
Gross notes that there are two kinds of arguments for attributing (propositional) semantic knowledge to competent speakers. On the one hand, speech is a conscious, rational activity, and the reasons for which we speak seem to involve knowledge of the semantic properties of expressions. On the other hand, productivity, systematicity, etc, seem to demand explanation in terms of information that speakers possess and deploy. The issue in which Gross is interested concerns the relation between these two sorts of knowledge: How, in particular, is the rationalizing belief that "John runs slowly" is true iff John runs slowly related to the T-theorem that "John runs slowly" is true iff (∃e)[Agent(e, John) & Running(e) & Slow(e)]? The question is generated by the fact that these two sorts of knowledge seem to be very different:
It is (3) that plays the most important role in Gross's paper. Is it the most fundamental of the three? What implications seem to hold between them, generally?
In section 2, Gross elaborates these two sorts of reasons to attribute semantic knowledge. His discussion in §2.1 of the rationality of language is both similar to and different from those of Heck and Rumfitt. How so? What seems to you to be the most original of Gross's contributions here? How plausible is it? The discussion of compositionality, etc, in §2.2, is more cursory, but there is an observation worth noting. Gross notes that one goal of semantic theory is to explain why certain sentences are ambiguous and others are not, and more precisely why certain readings are available and others are not. So a semantic theory needs to be able to generate different T-sentences for the various readings of, e.g., "Visiting relatives can be annoying" or "Everyone loves someone", and Gross takes that to be a reason that the T-theorems of a semantic theory cannot be homophonic. But is there not a similar point to be made about rationalizing semantic knowledge? If so, how might that bear upon Gross's relation question?
In section 3, Gross considers six sorts of answers to the "relation question".
Gross does not come to any definite conclusions here: His point is simply that all of these options have their costs. Which seems to be most plausible to you?
Finally, in section 4, Gross considers the question what sort of relation there might be between the two sorts of knowledge, if it cannot be as "intimate" as one might have hoped. His proposal is that tacit knowledge of a semantic theory might be necessary for both possession of the rationalizing knowledge (his (A)) and the ability to express it (his (B)). And, moreover, one's tacit knowledge might be causally responsible for one's rationalizing knowledge, even if it does not bear any rational or inferential relation to it. How plausible does that view seem?
|Contextualism, For and Against|
Searle is concerned in this paper to argue against a certain conception of the (literal) meaning of a sentence and in favor of a different conception. He describes his target as follows:
Every unambiguous sentence...has a literal meaning which is absolutely context free and which determines for every context whether or not an utterance of that sentence in that context is literally true or false. (p. 214)
His preferred view is:
For a large class of unambiguous sentences...the notion of the literal meaning of the sentence only has application relative to a set of background assumptions. The truth conditions of the sentence will vary with variations in these background assumptions; and given the absence or presence of some background assumptions the sentence does not have determinate truth conditions. These variations have nothing to do with indexicality, change of meaning, ambiguity, conversational implication, vagueness or presupposition as these notions are standardly discussed in the philosophical and linguistic literature. (p. 214)
And Searle argues that these "background assumptions" cannot, even in principle, all be "specifi[ed] as part of the semantic content of the sentence, [since] they are not fixed and definite in number" (pp. 214-5). More importantly, no such specification can ever be complete: No matter how precisely we try to specify the "background assumptions", there will always be other background assumptions in play which can, by clever construction of examples, be brought to our attention and varied, so as to lead to variation in truth-conditions.
The general strategy of argument is to consider a sentence that seems to have a perfectly definite literal meaning. (In practice, we focus on one word and the contribution it is making.) We then consider certain peculiar contexts and note that it is simply not clear whether to regard a perfectly literal utterance of the sentence as true or false in that context. Indeed, there will be ways of developing the example so that a perfectly literal utterance of the sentence would be true; and there will be other ways of developing it so that such an utterance would be false. Hence, what the truth-condition of the utterance is depends upon exactly which "background assumptions" are in play.
To check your own understanding, it is worth trying to construct examples similar to Searle's for sentences other than the ones he considers.
A more general, principled question concerns why Searle is so focused on the literal meaning of sentences. In many ways, what Searle argues could be rephrased in terms not of sentence meaning but in terms of Grice's notion of what is said. Searle's point would then be that what is said, even in the most literal utterance of a very ordinary sentence, is not completely determined by "stable" features of the sentence that is uttered, but depends also upon background assumptions that cannot, even in principle, be completely specified. How, if that is true, might it affect the conception of semantic theory with which we have been operating throughout our discussions so far? What should we think of a competent speaker as knowing about such ordinary sentences in virtue of being a competent speaker? What might we say about what competent speakers know, say, about the meaning of the word "on", as it occurs in sentences such as "The cat is on the mat"?
|28 March–1 April||No Class: Spring Break|
Robyn Carston, "Implicature, Explicature, and Truth-theoretic Semantics", in R. Kempson, ed., Mental Representations: The Interface Between Language and Reality (New York: Cambridge University Press, 1988), pp. 155-82 (DjVu). You can skip, or skim, the final section.
Grice writes in "Logic and Conversation":
In the sense in which I am using the word say, I intend what someone has said to be closely related to the conventional meaning of the words (the sentence) he has uttered. (p. 25)
Grice is of course aware that contextual factors may play various sorts of roles in determining what is said, as he goes on to discuss. But the fixed, stable meanings of the words used are supposed to play an especially important role.
We have more or less been following Grice in this respect, and assuming that we can derive the truth-condition of a sentence from semantic axioms governing its component parts. But Carston, in this paper, challenges this sort of assumption. She is particularly concerned with the requirements of a psychologically adequate account of linguistic competence. And she argues that, if we want such an account, then the difference between what is said—which she calls the explicature associated with an utterance—and what is implicated is much less stark than Grice seems to suppose.
The first lesson to learn from this paper is that it is not at all clear how we should draw the distinction between explicature (what is said) and implicature (what is meant). This was already argued in Searle, but Carston presents a whole battery of examples, and her discussion of "and" is especially intriguing, since that was one of the examples in which Grice was especially interested. In particular, Carston argues that an utterance of "A and B" can assert the existence of all sorts of different relations between A and B: temporal, causal, rational, and so forth. And she argues further that neither the view that "and" is multiply ambiguous nor Grice's view that the assertion of such relations is always an implicature can be sustained. The view she proposes is instead that speakers use various sorts of pragmatic processes, very similar to those that generate implicatures, to "enrich" the linguistically specified content so as to arrive at the explicature.
More specifically, Carston opposes what she calls the "linguistic direction principle", which claims that any "explicating" process must be in response to something in the linguistic form that calls for it. She sees the more traditional view as supposing that "what is said" must be truth-evaluable and that the only work context can do to fix what is said is whatever needs to be done to get us something truth-evaluable. So, e.g., the reference of a demonstrative has to be determined, since otherwise one has nothing truth-evaluable; but one does not need to find any relation for "and" to express beyond truth-functional conjunction, since that is already truth-evaluable. What do you think of her arguments against this traditional view?
Our main interest will be in the sorts of arguments Carston gives that, e.g., the temporal aspect of certain uses of "and" must be part of what is said. There are four of these:
Carston uses the "negation test" and the "conditional test" to argue, in a variety of cases, that the explicature is much richer than one might have supposed. As I said before, there is a whole battery of examples here. Which of these seem to you to be the strongest? which the weakest? and why? What strategies do you think might be available for resisting the conclusion that Carston wants to draw, that pragmatic processes play a surprisingly large role in determining what is said?
Finally, what kind of threat, if any, do such examples pose to truth-conditional semantics as we have been discussing it? Carston herself thinks the threat is large, claiming that the right sorts of representations for which to define a truth-conditional semantics are the mental representations that are the result of explicature, not the linguistic representations that are the input to pragmatic processes. How plausible is that claim?
|6 April and 8 April||
This paper, and the next one we shall read, are reprinted in Jason Stanley, Language in Context (Oxford: Oxford University Press, 2007), together with several other essays on context-sensitivity.
This paper is concerned with a particular case of the general problem raised by Searle and Carston: quantifier domain restriction. That is, it is concerned with the question how an utterance of a sentence like "Every bottle is empty" comes to express, not the absurd proposition that every bottle in the universe is empty, but some sensible proposition to the effect that every bottle in some particular group G is empty.
Stanley and Szabó begin by distinguishing between descriptive and foundational problems of context dependence. The core descriptive questions are which aspects of the utterance give rise to context sensitivity and what has to be done, exactly, to resolve it. Foundational questions concern how context does whatever needs to be done, e.g., how the value of a demonstrative pronoun is in fact fixed. Stanley and Szabó explain the distinction by reference to an example involving demonstratives, which is worth studying carefully.
I would suggest that this distinction should already go some way towards lessening one's sense of panic in the face of the examples offered by Searle and Carston, on the ground that at least some of what is troubling about those examples concerns the foundational problem, whereas semantics itself need be concerned only with the descriptive problem. How might that suggestion be developed?
Stanley and Szabó then distinguish three ways in which context can affect interpretation.
How does the distinction between descriptive and foundational questions apply in each of these cases?
With that distinction in place, Stanley and Szabó raise the question which of these roles context plays in the case of quantifier domain restriction. So there are three options.
To which sort of view do you think Searle or Carston might incline? If none of them, what sort of view do you think has been left out of account? It is also worth checking your understanding here by considering what the relevant options would be in others of the cases we have discussed.
In §5, Stanley and Szabó criticize the syntactic approach. Their main objection is what they call the "underdetermination" objection, which is that it is very hard to see how context could provide a unique 'restrictor' for each quantificational phrase. I.e., they claim that this view makes the foundational problem nearly impossible. This objection is not developed in much detail, so it would be well worth trying to explore it a bit. Here's one crucial question: How exactly does "context" resolve structural or lexical ambiguity? If context is what resolves it, then would it be possible for someone to utter an ambiguous sentence, fully intending that the sentence should have one particular interpretation, but somehow fail to utter that sentence, since context determined the other interpretation? Might a judicious application of the distinction between descriptive and foundational problems help here? If so, how? and how much?
In §6, Stanley and Szabó argue against the pragmatic approach. The core of their criticism is what has come to be known as the binding argument. Here's a simple example. Consider the sentence:
(*) Every senator is reviled by most voters.
It seems reasonable to suppose that an utterance of (*) could mean that every senator is reviled by most voters in that senator's state, not by most voters in the country. So which voters are in question depends upon which senator is in question. Can you think of other sorts of examples along these lines? Why are such examples supposed to be a problem for the pragmatic view? Obviously, utterances of (*) can implicate almost anything. So why isn't it enough to point out that they can implicate that thing, too? Part of an answer would involve considering:
(**) Every senator is reviled by most voters. So are most representatives.
and noting that this sentence is ambiguous. How so?
Finally, in §7, Stanley and Szabó discuss semantic approaches, considering three versions of the view:
How plausible does this view seem?
This issue of Mind and Language was devoted to this sort of topic, and many of the other papers are also worth reading.
Topics for third short paper announced.
This paper continues Stanley's articulation and defense of the "binding argument" that is central to the previous paper we read. As he makes clear in the introduction, his larger goal is to defend "the view that all the constituents of the propositions hearers would intuitively believe to be expressed by utterances are the result of assigning values to the elements of the sentence uttered, and combining them in accord with its structure" (pp. 150-1). In particular, there are no "unarticulated constituents". More generally still, this is supposed to contribute to the defense of the view that context can affect what proposition is expressed by an utterance only by affecting the interpretation of elements of the syntactic structure of the sentence uttered.
The first section of the paper elaborates the binding argument and places it in the context of the sorts of arguments often given in linguistic theory for the existence of "hidden" or "unpronounced" elements. If these sorts of arguments are unfamiliar, don't worry about it. The main point here is simply that the binding argument is very much of a piece with the sorts of arguments in favor of "covert elements" that linguists standardly give.
The second section recounts a debate between Sellars and Strawson over the proper treatment of so-called "incomplete descriptions", such as "the table" (which, for the obvious sort of reason, are not straightforwardly amenable to Russell's treatment of descriptions). The main point here is the one made at the end of the section concerning what a proper response to the binding argument would have to be like. One cannot simply say that there is some "magical" process through which the right interpretation is generated. One has actually to explain what that process is.
In the third section, Stanley elaborates such a response, drawing upon work by Robyn Carston and Kent Bach. The idea is roughly as follows. Fans of "free enrichment" already accept that, during the process of semantic interpretation, additional material can be added to the (possibly incomplete) proposition that is provided simply by the literal meanings of the words used and whatever compositional rules there might be. For example, if someone utters the sentence "Michael is tall", then this can be 'enriched' to "Michael is tall for a human male" or "Michael is tall for a basketball player". Similarly, then, the thought is that the same process could also provide a pronoun that can then be bound by a higher operator. E.g., in the course of interpreting "Every senator is reviled by most voters", one might 'enrich' it to: Every senator is reviled by most voters in his or her state, thus recovering the bound reading.
In the final section, Stanley argues against this sort of move by arguing that it over-generates. It is important to appreciate here that it is every bit as important that one be able to explain why certain readings of sentences are not available as that one should be able to explain why certain readings are available. For example, we want to know not just why, in "John's brother said Tom kissed him", the pronoun can take either "John" or "John's brother" as its antecedent, but also why it cannot take "Tom" as antecedent. So the worry here is that, if "free enrichment" can provide bindable material, then we would expect to get readings of sentences we cannot in fact get.
Stanley claims that:
(15) Everyone has had the privilege of having John greet.
is ungrammatical, but that it could be rescued from ungrammaticality by the addition of the word "her" to the end of the sentence:
(16) Everyone has had the privilege of having John greet her.
(This is itself ambiguous, but we are interested here in the reading where "her" is bound by "everyone".) Since that is the sort of thing that "free enrichment" is supposed to be able to do, it is then a mystery why (15) is not grammatical after all. But (15) is grammatical (and a similar point applies to (13)). Greeting is a task one might have at a church or a meeting, and so (15) can perfectly well mean that everyone has had the privilege of John's performing a certain task for them. Still, if "enrichment" could add the word "her" to the end of (15), then (15) would be ambiguous, and now the objection is that (15) simply isn't ambiguous in that way. Thus, this account "over-generates", in the sense that it predicts that (15) should have a reading it simply does not have, unless some way can be found to stop (15) from being "enriched" to (16). Could it be responded here that we don't have the option of adding "her" to (15), since (15), we now see, already expresses a perfectly sensible proposition? (There is relevant discussion of the contrary move on pp. 163-4.) Are there other responses worth considering?
It is extremely important here to keep clearly in mind that the issue is supposed to be whether utterances of (15) can express what (16) does. It is not relevant if utterances of (15) can communicate what (16) does. This is the point made on pp. 165-6.
Finally, then, let me mention a different series of examples.
I claim that there are readings of (i) that are not available for (ii) or (iii) but that should be if quantifier domain restriction worked through "enrichment". Can you develop this argument?
Emma Borg, "Minimalism versus Contextualism in Semantics", in Gerhard Preyer and Georg Peter, eds., Context-Sensitivity and Semantic Minimalism (Oxford: Oxford University Press, 2007), pp. 339-60 (DjVu)
See also her books Minimal Semantics (Oxford: Oxford University Press, 2007) and Pursuing Meaning (Oxford: Oxford University Press, 2012).
This paper is one in a volume of essays responding to and commenting upon Herman Cappelen and Ernie Lepore's book Insensitive Semantics, in which they argue for the view known as semantic minimalism. This is the view that, setting aside the obvious exceptions, every sentence expresses a unique proposition, so that context-sensitivity is limited to those obvious exceptions. Borg also defends a form of this view, though an even stronger one. Our interest is in how carefully Borg sets out the different positions. She does not argue for any of them in this paper.
Borg identifies four sorts of arguments against minimalism, that is, in favor of the view that some particular expression is context-sensitive.
Following C&L, Borg distinguishes two sorts of contextualism: radical and moderate. C&L had characterized the difference in terms of the scope of context-sensitivity, so that more moderate views regard fewer terms as context-sensitive. As Borg notes, however, this is not a particularly illuminating characterization. Rather, it is one thing to hold that there are terms outside the "basic set" ("I", "here", "tomorrow", and the like) that are context-sensitive in the same way that those terms are. And it is an entirely different thing to hold that "there are forms of context-sensitivity that are not capturable on the model of...the Basic Set" (p. 344).
Thus, Borg regards the crucial questions as being: What are the mechanisms of context-sensitivity? Can the context of utterance act on semantic content even when such action is not demanded by the syntax of the sentence? Radical contextualists think it can; moderates think it cannot. The moderate view is thus much closer in spirit to minimalism. The disagreement between these views concerns not what context-sensitivity is, so to speak, but only how extensive the phenomenon is. Radical contextualism, on the other hand, thinks we need "an entirely different picture of the relationship between semantics and pragmatics" (p. 346), i.e., that there is something fundamentally wrong with the model of context-sensitivity that informs minimalism and moderate contextualism.
Using this distinction, Borg then defends moderate contextualism against a charge made by C&L: that once one allows for the possibility of context-sensitivity outside the basic set, established by the sorts of arguments typically used for that purpose, then one will find it difficult not to accept that context-sensitivity is all but ubiquitous. But Borg argues that moderates can have reasons to limit the scope of context-sensitivity. What are those reasons? One nice way to answer this question would be to reflect upon the different sorts of arguments that Borg distinguishes and try to see if they match up in any sensible way with the moderate-radical distinction as she draws it.
Borg spends the remainder of the paper offering a characterization of minimalism. We probably will not have time to consider this part of the paper in detail. But it is worth reflecting on the most distinctive feature of her characterization, which is what she calls formalism. What is the motivation for that feature of minimalism? How plausible is it? Can you see how it might lead to a really radical minimalism according to which there has to be a unique proposition that even sentences like "I am a philosopher" and "You aren't very funny" express, independent of context?
Ishani Maitra, "How and Why To Be a Moderate Contextualist", in Gerhard Preyer and Georg Peter, eds., Context-Sensitivity and Semantic Minimalism (Oxford: Oxford University Press, 2007), pp. 112-32 (DjVu)
There are several other essays in the Preyer and Peter volume that are well worth reading.
Maitra first takes up a topic also discussed by Borg: what divides Moderate from Radical Contextualism. Like Borg, she starts with Cappelen and Lepore's idea that the issue concerns the "extent" of context-sensitivity. But Maitra suggests that we should understand this in terms of:
Maitra goes on to point out that this way of categorizing the different views makes the question how many expressions are context-sensitive not the crucial question. What matters is the way in which they are context-sensitive. How might this compare to Borg's way of characterizing the views, in terms of different mechanisms of context-sensitivity?
The main focus of the paper, though, is on what Maitra calls the "Miracle of Communication Argument" (MCA) against Radical Contextualism. The worry here is that, if pragmatic processes affect semantic content, then, since almost any piece of information one has can prove relevant, it is obscure how speakers and hearers ever manage to converge on a particular interpretation of some bit of language. As Maitra notes, however, something along these lines seems obviously to be true of implicature, and yet we do manage to communicate by implicature. Hence, it looks as if there must be some explanation to be given of how this works. Any idea what that might be?
Another point is that something along these lines seems to be true of uncontroversially context-sensitive expressions, such as "that" and "we". Indeed, there seem to be very, very few expressions in the "basic set" whose content on a given occasion of utterance is completely determined by rule: "I", "today", "tomorrow", and "yesterday" seem plausible candidates. But neither "here" nor "now" (nor, as Maitra mentions, "we") is. Can you construct examples to illustrate this point?
Still, Maitra concedes that the MCA does pose some sort of challenge to Contextualism. In particular, we need to "explain why hearers are generally more confident about what is communicated via semantic contents, than about what is communicated in other ways" (p. 125). Maitra goes on to argue that an appropriately Moderate form of Contextualism has an explanation to offer. Focusing on comparative adjectives, like "tall", she suggests that (i) their standing meaning highly constrains their content, since the only locus of variation is in the comparison class, and (ii) it might be possible to say something fairly definite about how different contexts make "natural" comparison classes available. How does Maitra develop this latter idea? What does she mean by "natural" readings of sentences? How satisfying is what she has to say on this score? How is this view supposed to answer the MCA? To what extent is that reply undermined if, in the end, there isn't much to be said about the "Context Question"?
On p. 128, Maitra considers an objection that sounds very much as if it might have been offered by Searle: Even once we know what the comparison class for "fast", say, is, "there are many ways of being fast for a snail". So the worry is that, if we just specify the comparison class as being snails, we have yet to specify a truth-evaluable content. Maitra offers two replies, the first of which is ad hominem. What is the second reply?
Finally, Maitra considers the question whether a Contextualist might concede that, since so much information is potentially relevant for determining the comparison class, say, there will be failures of perfect communication, but then respond that communication does not need to be perfect to be successful. She does not really develop an example to illustrate this possibility. Can you do so?
Third short paper due
The classic paper on metaphor, to which much of the early literature responds, is Max Black, "Metaphor", Proceedings of the Aristotelian Society 55 (1955), pp. 273-94 (PDF). Black responds to Davidson in "How Metaphors Work: A Reply to Donald Davidson", Critical Inquiry 6 (1979), pp. 131-43 (PDF).
The central question of Davidson's paper is what we should say about the meanings of metaphorical utterances, of which, as you will note, his paper is quite full. His view is that the only meaning a metaphorical utterance has is its literal meaning. This is a bold view, and one central problem is to understand how Davidson does think metaphor functions.
Davidson first argues that simply saying that using an expression metaphorically gives it a special, metaphorical meaning (and a special, metaphorical extension) cannot be right, because, if so, then "there is no difference between metaphor and the introduction of a new term into our vocabulary" (p. 34). What that view leaves out is the fact that metaphorical meaning, if such there is, depends upon literal meaning. So, Davidson argues, metaphor is not just a form of ambiguity, either.
The most developed form of the ambiguity theory is what Davidson calls the 'Fregean' theory. His argument against it spans pp. 36-8. What is the central idea in this argument? How is it supposed to refute the 'Fregean' view? Perhaps the key passage is this one:
If metaphor involved a second meaning, as ambiguity does, we might expect to be able to specify the special meaning of a word in a metaphorical setting by waiting until the metaphor dies. The figurative meaning of the living metaphor should be immortalized in the literal meaning of the dead.
The next theory considered is the standard, grade school account: A metaphor is a simile without "like" or "as". Davidson complains that the "corresponding" simile is not always easy to identify. But his deeper complaints are that this view (i) "den[ies] access to what we took to be the literal meaning of the metaphor" and (ii) trivializes metaphor, since the literal meaning of a simile is simply that this is like that. If there is more to the meaning of the simile—if some particular ways in which this is like that are part of the meaning of the simile—then, Davidson complains, the "reduction" of metaphor to simile is unhelpful. Does that seem right?
To this point, then, Davidson seems to have taken himself to have disposed of the idea that metaphorical meaning should be regarded as part of what is said. Thus, he insists, on p. 40, that the particular comparisons a simile might lead you to notice are part of what is meant. He then goes on to say that "[w]hat words do with their literal meaning in simile must be possible for them to do in metaphor". The next topic to be explored, then, is whether we should think of metaphorical meaning also in terms of what is meant.
The question, then, is whether a given metaphor has any sort of "cognitive content". Davidson's first question is why, if it does, it is so difficult to say what it is, i.e., to replace the metaphor with a literal paraphrase. This reveals what Davidson calls
a tension in the usual view of metaphor. For on the one hand, the usual view wants to hold that a metaphor does something no plain prose can possibly do and, on the other hand, it wants to explain what a metaphor does by appealing to a cognitive content—just the sort of thing plain prose is designed to express.
His suggestion, then, is that we should "give up the idea that a metaphor carries a message, that it has a content or meaning (except, of course, its literal meaning)" (p. 45). Thus, he writes:
The central error about metaphor is most easily attacked when it takes the form of a theory of metaphorical meaning, but behind that theory, and statable independently, is the thesis that associated with a metaphor is a cognitive content that its author wishes to convey and that the interpreter must grasp if he is to get the message. This theory is false, whether or not we call the purported cognitive content a meaning. (p. 46)
Davidson thus wants to deny that we should think of metaphorical meaning in terms of what is meant, either.
Thus, we are left with the question how Davidson does think metaphor works. What he seems to say is that a metaphor can make us aware of, or lead us to appreciate, certain sorts of similarities (e.g.), but that this is not because of any "coded message" that the metaphor carries. The language here is broadly causal. How might the view properly be understood? What advantages or disadvantages does it seem to have?
The sort of Gricean view of metaphor Camp defends was originally elaborated by John Searle in "Metaphor", in his Expression and Meaning (Cambridge: Cambridge University Press, 1979), pp. 76-116 (DjVu). This sort of idea was already mentioned by Grice ("Logic and Conversation", p. 34).
Camp has published a few other papers on this topic, which can be found on her personal web site.
Camp endorses a largely Gricean view of metaphor, according to which one who utters a metaphor "say[s] one thing in order to communicate something different" (p. 280). Much of her paper defends this view against a "contextualist" treatment of metaphor according to which metaphorical content is part of what is said. (Note that what Camp means by "contextualism" seems to be a form of "radical" contextualism.) To that end, Camp discusses four sorts of arguments for the "contextualist" view.
One interesting question to consider as you read this paper is whether metaphor is a unified category from the perspective of semantic theory. I'll raise this kind of issue at a couple places below.
The first argument is that speakers are willing to report someone who has uttered "Bill is a bulldozer" as having said, e.g., that Bill is a tough guy. The simple response to this argument is that the ordinary use of "said" should not be presumed to be any kind of guide to the theoretical notion of what is said. (This point is strongly associated with Cappelen and Lepore (1997), to which Camp refers.) A deeper response, contained in the very last paragraph of this section, is that metaphor "patterns with" implicature as regards its interaction with indirect speech reports. Can you elaborate this response? Can you think of other examples that might support it?
The second argument is that the metaphorical interpretation is in some sense "direct" and independent of the (alleged) literal meaning, whereas an implicature of course does depend upon what is said and so is, in that sense, "indirect". The difficulty, according to Camp, is to explain clearly what that is supposed to mean. It cannot mean, as Recanati seems to suggest, that "indirect" meanings have to be worked out consciously, since many implicatures are not. A better suggestion is that the process of working out "indirect" meanings has to be "available", in the sense in which one thinks one's reasons for action are "available". But then, Camp claims, metaphorical interpretation is "indirect" in the relevant sense. This is simply because, as Davidson emphasized, metaphorical meaning depends upon literal meaning. But one might wonder whether there is not a contrast here between what Camp calls "ordinary conversational metaphors" and "poetic metaphors". Does literal meaning play the same role in these two cases? For a somewhat different example, consider:
The demon in charge of this portion of Hell is a bulldozer.

said by someone commenting upon their department chair. Does the literal meaning of that sentence play a significant role in its interpretation?
The third argument is that metaphorical content can serve as input for the process of calculating implicatures. Camp argues, however, that agreed-upon forms of indirect speech, such as sarcasm, can do so as well. The hard case is to get implicatures to trigger further implicatures. How convincing is Camp's example of that? How plausible is her explanation for when this can happen and when it cannot? Still, there is, as Camp notes, an asymmetry: Implicature must follow metaphorical interpretation; you cannot have a metaphorical interpretation of an implicature. What is her explanation of this asymmetry (borrowed from Josef Stern)? Why might one think it showed that metaphorical meaning is part of what is said? Her argument, in response, is again that this does not clearly differentiate metaphor from other forms of indirect speech, such as sarcasm. This is a complex argument involving example (23). Can you unpack it? (The crucial claim is that "the manner-generated implicatures must fall within the scope of the sarcasm".)
The fourth argument is that one can explicitly agree or disagree with the metaphorical content of an utterance, by saying things like "Yes, that's true", or "No, he's not". In cases like the letter of recommendation, on the other hand, one cannot disagree with what is merely implicated that way. But Camp argues (somewhat tentatively) that there are cases of implicature where one can use such language, and that it certainly can be used with sarcasm, malapropisms, and certain sorts of speaker's meaning. She also argues that respondents can insist upon a literal construal, saying that
the crucial point is this: if the original speaker’s utterance had genuinely ‘lodged’ a new metaphorical meaning in the words uttered, or even just had established a new, temporary use for them, then that meaning should necessarily be inherited by any later use of those same words in that same context which responds to the initial claim.

Care to elaborate? How good is that argument?
One might wonder whether another argument is available here, namely, that the difference here tracks explicitness and obviousness, rather than revealing a difference between metaphor and implicature. How might that go?
The remainder of the paper sketches a positive account of how to delineate "what is said", which Camp initially explains as a "notion of 'first meaning'—first in the rational order of interpretation" (p. 300). This part of the paper is well worth studying, but our focus at the moment is on metaphor, and it is doubtful we will have time to discuss it. You should read it, however, as it throws important light on Camp's discussion of the four objections. (Camp has, by the way, continued to develop this sort of account in her more recent work.)
See also Dan Sperber and Deirdre Wilson, "A Deflationary Account of Metaphor", in R. Gibbs, ed. The Cambridge Handbook of Metaphor and Thought (Cambridge: Cambridge University Press, 2008), pp. 84-105 (via Sperber's Site)
Stern has published a number of papers on metaphor. The original piece elaborating his view is "Metaphor as Demonstrative", Journal of Philosophy 82 (1985), pp. 677-710 (JSTOR). He has also published a book on the topic, Metaphor in Context.
|3 May, 5pm||
Topic for final paper must be cleared with instructor
|10 May, 5pm||
Final Paper Due
1Where possible, links to publicly accessible electronic copies of the papers are included. For copyright reasons, however, many of the links require a username and password available only to those enrolled in the course.