Logical Constants

Ken Warmbrod

Alfred Tarski’s model-theoretic account of logical consequence is assumed, in one form or another, in nearly all contemporary discussions of logical theory, and it is the only theory of consequence a student is likely to encounter in a logic textbook. Given the infrequency with which philosophical theories are accorded this degree of general acceptance, one would tend to assume that the theory’s foundations are reasonably secure. Tarski, however, was not so confident of those foundations. His theory of consequence depended on an underlying distinction between logical vocabulary (that is, logical constants) and extra-logical vocabulary. Tarski was disarmingly candid about his inability to elucidate the distinction: “no objective grounds are known to me which permit us to draw a sharp boundary between the two groups of terms” (1936, pp. 418-9).(1) The terms most widely recognized today as logical constants are the truth-functional connectives and first-order quantifiers. It remains a problem, however, to explain why these terms, and no others, should make it onto the list. That problem will be the main focus of this essay.

The widely accepted strategy of characterizing logical constants by specifying necessary and sufficient conditions for constancy will be examined in §1. I argue that this approach holds little prospect of explaining why certain terms, but not others, are assigned constant meanings in a theory of logical truth and consequence. §2 considers the alternative strategy of formulating notions of “logical consequence” and “logical truth” based on various pre-theoretic intuitions about necessity, apriority and form. I argue that the intuitions in question are too philosophically controversial to provide a suitable basis for logical theory. §3 defends a conception of core logical theory which avoids most philosophically controversial intuitions and which aims for the more modest goal of a logical theory adequate to the task of deductively systematizing scientific theories. In §4 I argue for the adequacy of a particular set of constants for core logic and consider the need for non-standard constants such as modal operators and second-order quantifiers. Finally, §5 examines the differences between core logic and extended logical theories which recognize logical constants not needed in the core theory.

Some writers have not acknowledged a need for any rationale for the choice of logical constants. Since the class of terms accorded this status is usually finite, it is common practice to simply list them. Quine, for example, explicitly endorsed a policy of stipulation at one point: “Logical vocabulary is specified only, I suppose, by enumeration” (1953, p. 141). A stipulated list does have a virtue worth noting: it is flexible. A logician has the option of specifying an initial list which is adequate for some basic logical theory and then expanding the list later when developing more advanced extensions of the basic theory. However, absent some conscious rationale for the choice of logical terms, a stipulated list is also troubling. As Quine noted himself, such stipulation of constants carries with it an “element of apparent arbitrariness” (1953, p. 141). The choice of logical constants determines which sentences are counted as logical truths. If the selection of constants is arbitrary, there is nothing to prevent an unorthodox logician from assimilating “Clinton” and “has been a US President” to the status of logical constants. In that case, “Clinton has been a US President” will be true under all reinterpretations of the non-logical terms and hence logically true. Logicians who entertain the idea that logical truths should be necessary will find this obviously unacceptable. The result is troubling, however, even if one invests no weight at all in a notion of necessity. If there is no rationale for the choice of logical constants, then there will be no rationale for designating some truths as logical and others as ordinary truths. Ultimately, such arbitrariness calls into question the basis for distinguishing logic from the rest of science. Even if there is no such thing as necessary truth, it is useful to have a motivated division of labour between problems that logicians try to solve and those that other scientists investigate.

1. Against criteria for logical constancy

One alternative to acceptance of an apparently arbitrary list has been to specify some criterion (a set of necessary and sufficient conditions) for logical constancy. If a criterion of constancy is to be considered successful, it is obviously not sufficient for it simply to single out the traditional set of truth-functional connectives and quantifiers. That much can be accomplished with a list. A successful criterion must provide a rationale for the choice of constants which explains and justifies the type of treatment accorded to such terms in logical theory. The nature of the problem becomes apparent when we consider the functional nature of the concept “logical constant”. Logical constants are terms that play a certain type of role in a theory of logical consequence and logical truth. The logical constants of a theory are the terms whose meaning assignments are held fixed while the assignments to other terms vary through some admissible range of assignments. The question which must be addressed by any criterion of logical constancy, then, is this: why should those terms that satisfy the criterion, and only those terms, have fixed meanings in a theory of logical truth and consequence?

There is neither need nor space here to survey all extant proposals for criteria of constancy. Nevertheless, it will be useful to examine a few well-known proposals as illustrations of how a criterion may fail to address the critical question. I begin with Christopher Peacocke’s (1976) suggestion of a criterion that depends on a notion of a priori knowledge. Suppose that α is an expression which applies to formulas or singular terms β₁, …, βₙ, and assume that one is given knowledge as follows for each βᵢ:

(1) If βᵢ is a singular term, one knows which object is assigned to it by each sequence.

(2) If βᵢ is a formula, one knows which sequences satisfy it.

(3) One knows the assignment clause or satisfaction condition for α.

Under Peacocke’s criterion α is a logical constant if and only if, given such knowledge, one can know a priori which sequences satisfy α(β₁, …, βₙ), if this expression is a formula, or which object is assigned to α(β₁, …, βₙ) if this expression is a singular term.

Peacocke argues that the standard first-order connectives and quantifiers count as logical constants under his criterion. In addition, perhaps somewhat surprisingly, temporal operators such as “In the past …” also qualify. On the other hand, identity, modal operators (except on some readings), and set membership are excluded. Peacocke argues persuasively that his account accords with intuitions about a priori knowledge which have often been thought to be important to logical theory.

There are, of course, intuitions and arguments that conflict with Peacocke’s account. The case of identity illustrates the difficulties well enough. Quine (1970, pp. 61-4), for example, appeals to the completeness and topic neutrality of identity theory as a basis for counting “=” as logical. In addition, Quine is skeptical about a priori knowledge in general. So in his view intuitions about apriority provide no justification for classifying terms as logical or not.

Is there any reasonable prospect that Peacocke’s criterion will provide a reason for preferring his treatment of identity as non-logical over the contrary intuitions of other logicians? The identity predicate fails to qualify as a logical constant under Peacocke’s criterion because the required condition (I) is false.

(I) If someone knows which objects are assigned to “x” and “y” by each sequence, and if he knows the satisfaction condition for “=”, then he can know a priori which sequences satisfy “x = y”.

According to Peacocke, condition (I) is false because a person who understands the satisfaction condition for “=” and who knows that a sequence assigns a and b to “x” and “y”, respectively, may still not know that a and b are identical. Hence he would not know that the sequence in question satisfies “x = y”.

For the sake of argument, let us ignore skeptical misgivings about a priori knowledge and assume that Peacocke is right in claiming that (I) is false. Does this mean that “=” is not a logical constant? The issue is not simply whether the term “logical constant” should be applied. The problem is how “=” should be treated in a semantic theory about logical truth and logical consequence. In particular, should the meaning assigned to “=” be fixed or allowed to vary through interpretations? The shortcoming of Peacocke’s criterion is that one can agree that (I) is false and still wonder why this fact makes it a mistake for a logician to assign a fixed meaning to “=”. The criterion identifies a property common to a number of acknowledged constants, but nothing about the property in question indicates why a fixed meaning should be assigned to all and only terms that possess the property. It is not surprising, therefore, that the criterion is unpersuasive for logicians who are inclined to treat “=” as logical. Exactly similar considerations hold for other terms whose status as logical constants is controversial.

More recently, Gila Sher (1989, 1991, 1996) has proposed a criterion of constancy quite different from Peacocke’s. As we shall see, however, it ultimately raises similar problems. Sher notes an interesting characteristic common to the standard universal and existential quantifiers: both quantifiers can be viewed as indicating the size of the extension of an open formula relative to a domain of discourse. For sets, sameness of size is determined by whether there is a one-to-one mapping between the sets. For interpretations of a language, the notion which is most analogous to sameness of size is that of isomorphism. Simplifying somewhat, two interpretations I and J are isomorphic if there is a one-to-one mapping f between their two domains such that, for any n-ary predicate P, ⟨a₁, …, aₙ⟩ satisfies P under I if and only if ⟨f(a₁), …, f(aₙ)⟩ satisfies P under J. Intuitively, when interpretations are isomorphic, there is a mapping which establishes sameness of size of the domains and which preserves extensions of the predicates. In formal terms, then, Sher’s intuition about the job of the two standard quantifiers can be expressed by saying that both quantifiers satisfy condition (S).

(S) If I and J are isomorphic, then for any open formula p[x], Qxp[x] will be true under I if and only if it is true under J.(2)

Satisfaction of (S) is fundamental for a logical term, Sher argues, because, quoting Mostowski, such a term “does not allow us to distinguish between different elements of [the universe]”.(3) Sher thus proposes that any quantifier satisfying condition (S) be allowed as a constant of logical theory.

Sher’s proposal is somewhat awkward when understood as a general criterion for logical constancy since it has no straightforward application to sentential connectives. In fact, she allows truth-functional connectives as logical constants via an ad hoc extension of the criterion, and the criterion offers no guidance at all on whether modal operators should be accorded status as logical constants (1989, p. 354; 1991, p. 54). Nevertheless, Sher’s proposal can still be viewed as providing necessary and sufficient conditions for whether a quantifier should count as a logical constant.

Acceptance of Sher’s criterion would considerably expand the set of quantifiers recognized as logical constants. Generalized quantifiers such as “Most x”, “Exactly 5 things x are such that …”, “An even number of things x are such that …”, “Uncountably many things x are such that …”, and many others, will clearly qualify as logical constants. In addition, Sher’s proposal makes branching quantifiers (at least, those that satisfy (S)) logical constants. Though many such new quantifiers seem to be natural additions to the standard first-order pair, Sher’s criterion clearly lets in some unwelcome additions. For example, let n be some randomly chosen positive integer. Read “Qx” as “∃x” in an interpretation if the domain contains n or fewer objects, and read it as “∀x” otherwise. “Qx” satisfies condition (S), but it would be difficult to motivate such an addition to logical theory.(4)

The intuition underlying Sher’s proposal, very roughly, is that a quantifier’s job is to indicate the size of a set (that is, the extension of an open formula) but nothing else about the membership of that set. The intuition is plausible if one takes typical mathematical discourse to provide the paradigms for the use of quantifiers. But one can also take natural language discourse to provide paradigms for quantifier use. Based on typical English usage, there is an (at least) equally plausible intuition according to which a quantifier may indicate not only the size of a set but almost anything about the membership. It is interesting in this regard to compare Sher’s proposals with those of Barwise and Cooper (1981) who also advocate adding generalized quantifiers to the standard set of first-order quantifiers. According to them, a quantifier is to be viewed as a complex noun phrase D(η) where D is a determiner and η is a set term. Allowable determiners include “every”, “some”, “exactly 3”, “most” and many others. A set term can be any term regarded as denoting a set including, for example, “thing”, “man”, and “winner of the Nobel Prize”. Some of the quantifiers allowed by Barwise and Cooper satisfy Sher’s proposal (S) (for example “Some (thing)”, “Most (thing)”), and others do not (for example “Every (man)”, “Most (winner of the Nobel Prize)”).
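The contrast can be made concrete with a toy sketch. None of the following is from the text: the finite interpretations, the fixed set playing the role of “man”, and the choice n = 3 for the gerrymandered “Qx” are all invented for illustration. Modelling an interpretation as a domain paired with the extension of a single open formula, a brute-force check shows that “∀x”, “∃x”, and even “Qx” satisfy the isomorphism-invariance condition (S), while a restricted quantifier in the style of “Every (man)” does not:

```python
from itertools import permutations

# An "interpretation" is modelled as (domain, extension): a finite domain
# plus the extension of one open formula p[x] within that domain.

def isomorphic(dom1, ext1, dom2, ext2):
    """Antecedent of (S): is there a bijection between the domains that
    carries the one extension onto the other? (Brute force; finite only.)"""
    if len(dom1) != len(dom2):
        return False
    d1 = sorted(dom1)
    for image in permutations(sorted(dom2)):
        f = dict(zip(d1, image))
        if {f[x] for x in ext1} == set(ext2):
            return True
    return False

# Quantifiers as maps from (domain, extension) to a truth value.
def forall(dom, ext):
    return set(ext) == set(dom)

def exists(dom, ext):
    return len(ext) > 0

N = 3  # the "randomly chosen positive integer" of the Qx example
def Q(dom, ext):
    # "Exists" on domains with N or fewer objects, "forall" otherwise.
    return exists(dom, ext) if len(dom) <= N else forall(dom, ext)

MAN = {1, 3}  # a fixed set, playing the role of the set term "man"
def every_man(dom, ext):
    # "Every (man)": every man in the domain falls under the open formula.
    return (MAN & set(dom)) <= set(ext)

def satisfies_S(quantifier, interps):
    """Does the quantifier satisfy (S) across the given interpretations?"""
    return all(quantifier(d1, e1) == quantifier(d2, e2)
               for d1, e1 in interps for d2, e2 in interps
               if isomorphic(d1, e1, d2, e2))

interps = [({1, 2}, {2}), ({1, 2}, {1}),
           ({1, 2, 3, 4}, {1, 2, 3, 4}), ({5, 6, 7, 8}, {5, 6, 7, 8})]

assert satisfies_S(forall, interps)
assert satisfies_S(exists, interps)
assert satisfies_S(Q, interps)              # invariant, yet hard to motivate
assert not satisfies_S(every_man, interps)  # distinguishes elements
```

The last two assertions display the two sides of the dispute: “Qx” passes (S) only because isomorphic interpretations have equal-sized domains, while “Every (man)” fails because a bijection need not preserve the fixed set of men.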

Clearly, both Sher’s proposal and that of Barwise and Cooper are based on plausible intuitions about what quantifiers (in different settings) do. But, as was the case with Peacocke’s criterion, it is not clear that the disagreement can be resolved satisfactorily by appeal to a criterion for constancy. It is not enough simply to identify some interesting property common to a class of acknowledged constants, and then deny constancy for terms that lack the property. To be sure, Sher’s criterion identifies a property of the standard quantifiers, namely, indication of the size of an open formula’s extension. Equally clearly, quantifiers such as “Every (man)” and “Some (winner of the Nobel Prize)” do not confine themselves to indicating the size of a set. But, by itself at least, this fact provides no obvious indication of what mistake would be committed by a logician who assigns the same meaning to such quantifiers under every interpretation. The issue that must be addressed by a satisfactory criterion of constancy is that of why a candidate term should be accorded or denied such special treatment in a semantic theory. As was the case with Peacocke’s criterion, Sher’s criterion simply does not appear to address the critical question.

It is worth noting that there is a very different argument, unrelated to Sher’s condition (S), which is sometimes offered as a reason for adding generalized and/or branching quantifiers to the standard set of logical constants. Such quantifiers, it is held, are needed to express facts that cannot be expressed using “∀x” and “∃x” alone. If, indeed, there are facts that cannot be expressed without such exotic quantifiers, then we have a compelling reason to accord such quantifiers a place in formalized theories. However, even this reason does not automatically entail that the appropriate semantic role for such quantifiers is that of terms whose meanings are fixed over all interpretations. Imagine, for example, that we introduce a quantifier “Qx” and stipulate that the truth condition associated with “Qx” will be assigned differently (within some admissible range, naturally) by each interpretation. Thus under one interpretation, “Qx” might be read as “Exactly 5 things x are such that …”; under a second interpretation it might mean “Exactly 100 things x are such that …”; under a third it might mean “Uncountably many things x are such that …”. Such a quantifier would provide a means for expressing facts for which generalized quantifiers are sometimes wanted, but it would not commit us to any new logical constants. We shall return to the issue of the need for generalized and branching quantifiers in §4 below.

Ian Hacking (1979) has offered a very different criterion for logical constancy, and in this case there is at least an initial basis for thinking that the criterion addresses the problem of why a term should be assigned a fixed meaning. A logical constant, Hacking argues, is a term that can be introduced into a language by certain Gentzen-style rules of inference. The rules that introduce a term determine the truth conditions of sentences containing the term. Hence, those rules also determine the meaning of the term. But this provides a reason for saying that the meanings of such terms should be held constant in a semantic theory. If a term’s meaning is determined by inference rules, and the rules do not change across interpretations, then the meaning should not change either. So the term must be a logical constant.

The Gentzen-style rules Hacking appeals to all have a certain general form: if certain metalinguistic statements about derivability hold, then a certain new statement about derivability also holds. Conjunction, for example, is introduced into a language by the following three rules.
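In standard sequent notation (the formulation below is the familiar textbook one, given here as a reconstruction rather than a transcription of Hacking’s own typography), three such rules for conjunction can be stated as:

```latex
\[
\frac{\Gamma \vdash A \qquad \Gamma \vdash B}{\Gamma \vdash A \wedge B}
\qquad\qquad
\frac{\Gamma, A \vdash C}{\Gamma, A \wedge B \vdash C}
\qquad\qquad
\frac{\Gamma, B \vdash C}{\Gamma, A \wedge B \vdash C}
\]
```

Each rule has the required metalinguistic form: if the derivability claims above the line hold, the derivability claim below the line holds as well, and the component formulas in the premises are subformulae of the conjunction in the conclusion.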


Details for recovery of a truth-table from such rules need not be rehearsed here (see Peacocke 1981, p. 172). What matters is that the relationship between inference rules and the semantic interpretation of a term appears to provide a basis for thinking that the meaning assigned to the term should be held constant. However, things are actually not quite so straightforward. To get the “right” set of traditional constants, Hacking must impose various constraints on which Gentzen-style rules are admissible for introducing constants. For example, such rules must have the “subformula property”: roughly, the component sentences in the premises of the rule must be subformulae of the principal formula in the conclusion. In addition, admissible rules must conform to requirements such as preservation of cut-elimination, dilution, and deducibility of identicals.

While Hacking does try to motivate the constraints he imposes on acceptable rules, he acknowledges that no absurdity results if one adopts rules that violate his constraints. Of course, rules that violate the constraints lead to non-standard logical constants. One may reasonably ask what is wrong with adding rules for some new constant if such rules violate Hacking’s requirements. “Nothing”, Hacking replies, “except that one is not then defining logical constants in connection with some previous language fragment. Rather one is creating, as a totality, a new system of logistic” (1979, p. 298). When rules violate Hacking’s constraints, the introduction of a constant into a language represents a genuine increase in the expressive power of the language, not simply an abbreviated means of expressing what could already be said. But this is surely not a reason to avoid introducing non-standard constants. On the contrary, a logician who thinks that modal operators or second-order quantifiers deserve to be counted as logical constants is likely to hold that opinion at least partly because such operators allow us to express truths that could not be expressed without them.

The upshot is that our problem, that is, which terms should be considered logical constants, has simply been converted to a different form. The meaning of a term must be held constant because the meaning of the term is determined by inference rules governing the term. But we have no identification of any error that would result from recognizing similar rules for other terms. Since it is unclear why departures from Hacking’s constraints are wrong, it is unclear why it is a mistake to recognize logical constants that do not satisfy his criteria.(5)

One might still seek to save a criterion by showing that it is flexible and thus, by design, can allow the introduction of non-standard constants. When he turns to the subject of non-standard constants Hacking remarks: “A good criterion is one which is sharp but which can also be relaxed in various ways” (1979, p. 308). Flexibility, as we noted at the beginning of the paper, is one of the few virtues of characterizing logical constants by means of an explicit list. However, flexibility in a criterion of logical constancy simply betrays weakness in the arguments that were meant to establish the criterion. If there are arguments that establish the correctness of some set of necessary and sufficient conditions for constancy, those same arguments also establish that terms that fail to satisfy the criterion are not logical constants. If there are good reasons, on the other hand, for promoting a non-standard term to the status of logical constant, those reasons are also reasons for rejecting any criterion which excludes the term.

Only a few proposed criteria have been examined, but there is reason to think that the problems identified are general in nature. The appeal of a criterion is that it would provide a basis for adjudicating proposals for new logical constants. Terms that satisfy it would be automatically counted as logical constants, and terms that do not satisfy it would be routinely excluded. However, as we have seen, there is nothing inherent in the idea of a set of necessary and sufficient conditions for constancy which guarantees an answer to the critical question, namely, why should terms satisfying the criterion, and only those terms, have their meanings held constant while the meanings of other terms vary? Moreover, the idea that a criterion would suffice to adjudicate proposals for new constants does injustice to the character of such proposals. Decisions about potential new logical constants are anything but routine. The proponents of a non-standard constant typically believe that logical theory will be better off in some way, will come closer to achieving its objectives, if the new term is accorded the desired status. Hence, the proponents will argue, any criterion that excludes the new term must be mistaken. It is thus unlikely, for example, that debates about whether modal operators or second-order quantifiers are logical terms could be settled by simply showing that such operators satisfy, or fail to satisfy, some proposed criterion of constancy. Assuming that no obvious catastrophe results from assigning a fixed meaning to a new term, consideration surely must be given to the benefits achieved by treating the new term as a constant and whether, indeed, such treatment furthers the fundamental purposes of logical theory.

2. Logic and pre-theoretic intuitions

We have encountered reason to be dissatisfied both with arbitrary lists of logical constants and with criteria formulated as necessary and sufficient conditions for constancy. However, there is another way of approaching the problem of logical constancy. One’s understanding of the goals of logical theory and of the way in which those goals are achieved may determine which terms must be recognized as logical constants. That is, terms recognized by the theory as constants are those that need to have this status in order for the theory to achieve its purposes. Given a clear understanding of the objectives of logical theory, then, the choice of constants will be non-arbitrary, and it need not depend on any specification of necessary and sufficient conditions for constancy. This way of thinking about the issue of constancy, of course, assumes some understanding of the goals of logical theory and of the means by which those goals can be achieved. This issue has itself sometimes been a matter of significant controversy.

Here is one understanding of the goal of logical theory and the way in which this goal is to be achieved. Logic, like much of the rest of philosophy, is driven by uniquely philosophical intuitions. The logician’s aim is to formally characterize logical consequence and logical truth on the basis of various pre-theoretic intuitions (beliefs) about matters such as necessity, apriority and form. Any formalization is to be judged by whether it accords with these pre-theoretic intuitions. Logical constants, on this approach, are simply terms that need to be assigned fixed meanings in order to arrive at a theory of logical consequence and logical truth that accords with the relevant pre-theoretic intuitions.

Tarski himself provides the obvious historical illustration of this approach. Immediately after introducing his definition of “consequence”, Tarski appealed to common usage and intuitions about necessity to justify the definition:

It seems to me that everyone who understands the content of the above definition must admit that it agrees quite well with common usage…. In particular, it can be proved, on the basis of this definition, that every consequence of true sentences must be true…. (1936, p. 417)

In the same essay he invokes formality and, apparently, the apriority of logical consequence:

… since we are concerned here with the concept of logical, that is, formal, consequence, and thus with a relation which is to be uniquely determined by the form of the sentences between which it holds, this relation cannot be influenced in any way by empirical knowledge…. (1936, p. 414)

The appeal to intuitions about necessity, apriority and form opens Tarski’s theory to controversy. John Etchemendy, for example, has argued at length that Tarski was simply mistaken: “Briefly put, my claim is that Tarski’s analysis is wrong, that his account of logical truth and logical consequence does not capture, or even come close to capturing, any pretheoretic conception of the logical properties” (1990, p. 6). For the moment, we can set aside the issue whether the particular analysis of the consequence relation proposed by Tarski actually conforms to pre-theoretic intuitions. Our immediate concern is the more fundamental problem of whether an account of consequence should be judged by whether it conforms to such intuitions. More specifically, should logical theories be judged by whether they respect pre-theoretic intuitions, and if so, which intuitions?

Three distinct kinds of intuitions are commonly considered to be essential in justifying or criticizing an account of logical consequence. Intuitions about necessity are widely appealed to: if A is a logical consequence of Γ, then it is presumed that A must be a necessary consequence of Γ. But most logicians do not recognize all the necessary consequences of a sentence as logical consequences. For example, “The glass contains atoms of hydrogen” is generally acknowledged to be a necessary consequence of “The glass contains water”, but few logicians would count this connection as logical. According to William Hanson, such an inference is not logical because “we cannot know a priori the link between premise and conclusion” (1997, p. 377). Therefore, Hanson argues, intuitions about a priori knowledge must also be considered in evaluating a theory of logical consequence: if A is a logical consequence of Γ, this fact must be knowable a priori. Finally, logicians sometimes appeal directly to intuitions about logical form. Witness, for example, Gila Sher:

The distinction between logical and extralogical terms is founded on our pre-theoretical intuitions that logical consequences are distinguished from material consequences in being necessary and formal. To reject this intuition is to drop the foundation of Tarski’s logic. To accept it is to provide a ground for the division of terms into logical and extralogical. (1991, p. 51)

The major problem with all such intuitions, of course, is that they are highly controversial. Disputes about what is necessary and about what, if anything, is known a priori have historically been rife and nearly impossible to resolve. Philosophical accounts of necessity and a priori knowledge also have been subjected to serious and well-known criticism from Quine and others. It seems foolish for logicians blithely to ignore those criticisms. Consider the consequences if the skeptics are right. If it turns out that no truths are necessary or that there is no such thing as a priori knowledge, then an account which requires logical truths to be necessary or a priori will have the result that there are no logical truths. Likewise, no sentences would logically imply other sentences if logical implication is required to be necessary or a priori. The foundations of logical theory would thus be undercut: there would literally be nothing for logicians to talk about. Clearly, there is a compelling incentive to avoid appeal to such disputed intuitions at the foundations of logical theory.(6)

Intuitions about logical form are even more tenuous. Native speakers of a language will generally agree in assenting to sentences which seem to report intuitions about what must be the case, what could not be otherwise, etc., though it is still debatable whether these untrained intuitions derive from any underlying notion of necessity of the kind philosophers typically assume. With respect to logical form, on the other hand, it is doubtful that such untrained intuitions even exist. That is, it is unclear whether ordinary people have any intuitions about logical form apart from those that may have been learned through instruction in logic or mathematics. The absence of untrained intuitions doubtless explains the fact that, when logicians have attempted to characterize logical form, they have had to choose between competing accounts. Russell (1919, pp. 167-9), for example, held that the logical forms of “I met Jones” and “I met a man” must be viewed as different, though he acknowledged that traditional grammar assigns the same form to both. The problem is also illustrated by the simple inference from “Tom is a bachelor” to “Tom is unmarried”. According to a lore which is popular among logicians, the inference is logically invalid because the forms of the sentences are “x is F” and “x is G”. But imagine a renegade whose intuitions lead him to attribute the forms “x is a bachelor” and “x is unmarried”. The renegade’s formalization conflicts with the tradition and common practice of logicians, but one would be hard pressed to demonstrate any conflict with untrained intuitions about logical form in speakers of English. Moreover, the renegade can justify his formalization on the basis of the fact that it captures an intuition about necessity, an intuition which one could expect to find reflected in discriminations by ordinary speakers. It is thus doubtful that appeals to pre-theoretic intuitions about necessity, apriority and form will suffice to settle which attributions of logical form to a sentence are correct.

Gila Sher (1996, pp. 672-8) has sought to solve the problem of choice in assigning logical forms to sentences by appealing to forms of non-linguistic entities. On her account, the logical terms of a language are those that assign formal properties to non-linguistic objects, in particular, to sets. Thus “∀x” attributes the formal property of being universal, “∃x” attributes the formal property of being non-empty, and “There are at least five things x” attributes the formal property of having at least five members (1996, p. 675). The sets that have these formal properties are the extensions of open formulas, for example, the extension of “x is a cat”.

But Sher’s move simply transforms the problem of choice about the forms of sentences into the problem of choice in deciding which properties of non-linguistic objects should be recognized as formal. According to Sher, the formal properties of a set are, roughly, those that indicate the size of a set. Thus the properties of being universal, being non-empty, and having five members are formal properties. But as was the case with sentences, it seems that there are still alternatives as to which features of a set should be counted as formal. It is not clear, for example, why a logician would be wrong to count shared colour or shared shape of members as formal features of a set. Further, why should we not acknowledge formal properties for entities other than sets? If size is a formal property of a set, it seems at least as reasonable to treat length as a formal property of a line and height as a formal property of a man. Taking logical constants still to be terms that attribute formal properties, we could recognize “is six feet long (tall)” as logical.

An additional problem with Sher’s appeal to formal features of non-linguistic entities is that it seems reasonable to require that any attributes singled out as “formal” properties ought to be objective features of entities in the sense that they do not vary depending on context or point of view. However, on Sher’s account, the formal properties of a set hold relative to, and will vary with, whatever happens to be the current universe of discourse. The set of cats, for example, has the formal property of being non-empty relative to the universe of four-legged creatures. But the same set has the formal property of being empty relative to a universe of dogs, and it is universal relative to itself as a domain. It is thus not clear that Sher’s account ascribes any formal features at all to the set of cats considered in itself, apart from any universe of discourse.(7)

Appeals to intuitions about necessity, apriority and the like clearly tie the foundations of logical theory to philosophical assumptions that are highly controversial. The idea that logical theory is an effort to formalize such disputed intuitions thus conflicts with another widely held view of logic: the foundations of logic ought to be as solid and free of controversy as possible. According to this alternative view, the job of logic should be to provide a relatively uncontroversial, common framework within which conflicting and controversial theories can be stated, compared and evaluated. Theories about necessity and apriority would presumably be among the theories open for such comparison.

It is not necessary to systematically banish controversial intuitions to accommodate the view that logical theory, or at least some important part of it, should be safe. Logical theory can be understood as comprising two major components. The first component–what we might refer to as “core” logical theory–would seek to characterize “logical consequence” and “logical truth” in a way that avoids appeal to contentious intuitions as much as possible. Ideally, such a core theory would provide a framework within which other, more controversial theories, both logical theories and non-logical theories, could be formulated. The second component–what I shall refer to as “extended” logical theory–would consist of various additional theories formulated usually as extensions of the core theory. Such extensions would explicitly seek to formalize intuitions about necessity, apriority or whatever. The skeptic thus need not be put in the position of claiming implausibly that theories that depend on intuitions about necessity (for example, modal logic) are not part of logic. Such theories can be viewed as extensions of a safer core theory which does not depend on such contested intuitions. Of course, the skeptic will still claim that such extensions are bad or defective in some respect.

The viability of this two-tiered conception of logic obviously depends on there being a motivation for the core theory. If pre-theoretic intuitions about necessity, apriority and form are to be considered off limits in evaluating formalizations of logical truth and logical consequence in the core theory, then how are such formalizations to be justified? That justification, as I shall argue below, turns on a recognition of the fact that there is another purpose for logical theory besides formalization of pre-theoretic intuitions about necessity, apriority and the like. Logic also plays a role in the larger scientific enterprise. Its function is to provide a framework for the deductive systematization of scientific theories. That purpose can be pursued in a way that depends on few, if any, controversial philosophical intuitions. Moreover, I shall argue that it leads ultimately to a familiar body of logical theory.

3. Core logic: the theory of deductive systematization

Consider the potential contribution of logical theory to a scientist’s task of constructing and testing theories about the world. We have first to imagine a hypothetical scientist toying with some theory which she has heretofore understood only vaguely. She hopes to enlist formal logic as an aid in developing and evaluating the theory. Her principal needs are threefold: (a) to clarify exactly which claims are made by the theory, (b) to communicate the theory to other scientists, and (c) to enable systematic testing of the theory. Initially, I shall assume, she has no firm opinions about which logical theory, or theories, should be considered correct. She does understand, however, that a logical theory would provide her with a definition of “logical consequence” and, in the best cases, with an effective proof procedure which is sound and complete with respect to the notion of consequence.

The scientist’s hope is that such tools will help with her problems by allowing her to formulate the scientific theory as a deductive system. As she imagines things going, deductive systematization of the theory would achieve her three objectives as follows. First, given a definition of “logical consequence”, the set of claims of the theory can be clarified (objective (a)) by simply choosing an appropriate set of axioms. The assertions of the theory will then be just those that are either axioms or logical consequences of the axioms. Equivalently, assuming soundness and completeness of a proof procedure, the claims of the theory will be those that are derivable from the axioms by means of the proof procedure. The problem of communicating the theory to others (objective (b)) is resolved as well. To understand the solution, however, it is helpful first to reflect briefly on the nature of the problem. If the scientist’s theory happened to make only a finite number of claims, she could communicate the whole theory by simply listing the claims. For any serious scientific theory, however, the set of assertions of the theory will likely be infinite. Hence, no finite list can convey all the claims. Without a developed logical theory, the best our scientist could hope to do is to convey the flavour of her theory by listing typical examples of assertions the theory makes. Once a logical theory is in hand, however, she can communicate the scientific theory to others by simply listing its axioms and indicating the logical theory assumed. The proof procedure allows other scientists to identify any sentences that are claims of the theory by deriving them from the axioms. Finally, testing of the theory (objective (c)) is facilitated provided that the notion of logical consequence is truth-preserving in the following minimal sense: whenever p is a logical consequence of a set [Gamma], either some member of [Gamma] is false or p is true.
There is no need here to assert (or deny) necessity for the logical consequence relation. So long as the notion of consequence is truth-preserving in this minimal sense, any investigator can test the scientific theory by testing its consequences. If someone derives a consequence from the axioms and shows it to be false, he has demonstrated some error in the axioms.(8)

What the scientist hopes will be provided by logic, then, might be termed a theory of deductive systematization. Though she has a particular scientific theory in mind that she wants systematized, the scientist seeks a set of conceptual tools that will be generally useful for deductively systematizing scientific theories (her own theory and competing theories). The three specific objectives identified above determine a few desirable characteristics of such a theory of systematization: the theory needs to define a notion of “logical consequence” which is at least truth-preserving, and the theory should provide a complete proof procedure.

Though the desiderata are not yet sufficient to determine a single definition of “consequence”, they are suggestive of the general character of a theory of consequence. Objective (c), in particular, is important in this regard. If one is to show that a consequence relation is truth-preserving, the concept of consequence must be linked somehow to the concept of truth, a concept that is overtly semantic in nature. On the face of it, this appears to rule out any purely proof-theoretic account of the consequence relation. The scientist must be able sensibly to ask whether a proposed proof procedure is correct in the sense of being truth-preserving.

Further reflection on the needed link between truth and consequence suggests much more about the likely shape of the logical theory. As I have argued, it is undesirable in the core theory to claim necessity for the logical consequence relation. But this does not relieve us of the burden of somehow making a plausible case that, indeed, a proposed consequence relation is always truth-preserving. The most natural strategy for doing this is to appeal to a general theory of truth, a theory which specifies a truth condition for each sentence of the language. If one can then define the consequence relation in terms of the notion of truth, one can argue from the theory of truth and the definition of “consequence” that if p is a consequence of [Gamma] and all members of [Gamma] are true, p will be true. Since the class of sentences is infinite, the most plausible way to assign a truth condition to each sentence is for the semantic theory to parse sentences into structural components and assign meanings to the components in a way that allows one to derive a truth condition for each sentence. It is a relatively small step from here to Tarski’s suggestion about how to tie the consequence relation to truth. We identify certain terms of the language as having fixed assignments and allow others to have assignments that vary through some permissible range. Logical consequence is then defined as the relation that holds when all permitted assignments that make a premise true also make a certain conclusion true. The claim that the consequence relation is truth-preserving thus inherits its plausibility from the theory of truth and does not depend on any assumption that the relation is necessary.
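The Tarskian strategy can be illustrated in miniature. The following sketch (Python, with names of my own choosing) fixes the interpretation of two connectives, lets the sentence letters vary over all permitted truth-value assignments, and defines consequence as preservation of truth across those assignments:

```python
from itertools import product

# A toy sentence is an atom (a string) or a tuple headed by a fixed,
# "logical" connective: ("not", p) or ("or", p, q).
def atoms(s):
    if isinstance(s, str):
        return {s}
    return set().union(*(atoms(t) for t in s[1:]))

def truth(s, v):
    # v assigns a truth value to each atom; the connectives are fixed.
    if isinstance(s, str):
        return v[s]
    if s[0] == "not":
        return not truth(s[1], v)
    if s[0] == "or":
        return truth(s[1], v) or truth(s[2], v)

def consequence(premises, conclusion):
    # Tarski: every permitted assignment that makes all the premises
    # true must also make the conclusion true.
    letters = sorted(set().union(atoms(conclusion),
                                 *(atoms(p) for p in premises)))
    for vals in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, vals))
        if all(truth(p, v) for p in premises) and not truth(conclusion, v):
            return False
    return True
```

Thus `consequence([("or", "p", "q"), ("not", "p")], "q")` comes out true while `consequence(["p"], "q")` does not, and a logical truth is simply a consequence of the empty set of premises.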

The Tarskian strategy clarifies the general nature of a theory of consequence, but it is still an open question at this point which terms should be assigned fixed meanings (thereby qualifying as logical constants). Clearly, other desiderata are needed. One consideration that surely must influence that decision is the scientist’s beliefs about the theory she wants systematized. I characterized her initial understanding of the scientific theory as vague. Nevertheless, it is reasonable to suppose that, at the outset, she has various beliefs about the claims of the theory. There is first a collection of beliefs about which sentences are claims of the theory. Since these beliefs are about the content of a theory, I shall refer to them as content intuitions. There is, for example, a finite set of sentences which the scientist has actually identified as claims of the theory. In addition, the content intuitions include a large set of sentences which she would accept as claims of the theory if presented with the sentences and given a reasonable amount of time to reflect on them. Finally, the content intuitions include beliefs about sentences which have actually been rejected, or which would be rejected, as claims of the theory. As a tentative general rule,(9) it seems reasonable to require that sentences thus pre-theoretically identified or identifiable as claims of the theory should normally be included in the final deductively systematized theory. In addition, sentences that have actually been rejected, or which would be rejected, as claims of the theory should not normally be included in the systematized theory either as axioms or as consequences of axioms. From the point of view of deductive systematization, it is presumably a matter of indifference whether the sentences counted as claims of the theory are included as axioms or as consequences of axioms.

The scientist’s content intuitions will presumably not be the only intuitions that affect the shape of the systematized theory. Like almost everyone else, she can be expected to have intuitions of the philosophically controversial kinds already discussed concerning necessary, analytic and a priori relations and properties of sentences of her language. I shall refer to these simply as modal intuitions. The scientist recognizes (2) as a claim of her theory, perhaps, because she previously accepted (1) as part of the theory (a content intuition), and she thinks that (2) is necessary given (1).

(1) The Pacific contains more water than the Atlantic.

(2) The Atlantic contains less water than the Pacific.

The scientist’s modal intuitions thus clearly have an influence on her content intuitions. However, we can acknowledge this fact without assuming that the choice of logical constants must be directly tailored to those modal intuitions. As far as the project of deductive systematization is concerned, what matters is just that both (1) and (2) should be included as claims of the final systematized theory. That result might be achieved in various ways. One of the definitions of “consequence” that should surely be considered is the first-order definition. Under that account, the scientist can accept (1) and “[inverted a]x [inverted a]y(x contains more water than y [right arrow] y contains less water than x)” as axioms. (2) is then included as a theorem. A competing, non-first-order alternative might be to formulate a notion of consequence deliberately aimed at capturing the scientist’s modal intuitions. If both predicates “contains more water than” and “contains less water than” were recognized as logical constants, every interpretation which makes (1) true would make (2) true. Hence, if only (1) is recognized as an axiom, (2) belongs to the scientific theory as a direct logical consequence. The non-first-order alternative clearly does better justice to a modal intuition about necessity. However, if our aim is simply deductive systematization of a theory, the non-first-order alternative has no advantage over a first-order formulation. Both satisfy the purposes of clarifying, communicating and testing the theory.

There are reasons, moreover, for preferring a set of logical constants which is as modest as possible. Like most other people, the scientist recognizes the necessary connection noted earlier between “The glass contains water” and “The glass contains atoms of hydrogen”, but she is unlikely to be comfortable having this relation built into the theory of logical consequence. The discomfort need not arise from any intuition that logical consequence relations must be knowable a priori. Indeed, let it be assumed, for the sake of argument, that the scientist believes that there is no such thing as a priori knowledge. Still, she wants a notion of consequence capable of systematizing both her own theory and competing theories. Part of the virtue of her theory, she thinks, is that it can be shown to withstand evidence which falsifies other theories. Hence, she prefers a logical theory which allows the formulation of consistent alternative theories which assert the first sentence above but deny the second. The upgrade of predicates to logical constants is thus undesirable from her point of view because it builds too much science into logical theory.

Unfortunately, the most straightforward formulation of this last point as a desideratum for logical theory is quickly seen as question begging. The maxim “Avoid building scientific theory into logic” assumes that we already know where logic leaves off and the rest of science begins, and that is just another way of stating the question at issue. The desideratum is best formulated as a requirement of minimalism: logical theory should be as simple, as modest in its assumptions, and as flexible as possible given the goal of providing a conceptual apparatus adequate for the project of systematization. In practice, the minimalist constraint dictates that the set of terms recognized as logical constants should be as small as possible. Hence, if a given body of theory can be systematized without recognizing logical constants such as “contains water” and “contains hydrogen atoms”, then it is preferable to do so. The same constraint dictates avoidance of logical constants such as “contains more water than” and “contains less water than” where the body of theory in question can be systematized using only a smaller set of constants such as those of first-order logic.

The minimalist constraint has clear motivation in the scientist’s need for comparative testing of theories. The logical theory which is best suited to the task of systematizing competing, controversial theories is the theory which itself resolves as few of those controversies as possible. A logic which recognizes constants such as “contains hydrogen atoms” decides certain controversies that will be left unresolved in a theory that avoids such constants. Hence, the minimal conceptual apparatus adequate for systematizing pre-theoretic, content intuitions is the apparatus that has the best chance of providing an uncontroversial common ground for scientists advocating contrary theories. Such a minimalist theory is also an inherently fundamental theory since, in the nature of the case, the minimal apparatus required for systematization will appear in one form or another in any systematized scientific theory.

The minimalist constraint finally helps to clarify the nature of the mistake one may commit in promoting a non-standard term to the status of logical constant. Consider again the controversial case of the identity predicate. Recognition of “=” as a logical constant does nothing for the task of deductive systematization. We can systematize the same sets of sentences by recognizing only the truth-functional connectives and first-order quantifiers as constants, treating “=” as an ordinary predicate, and adopting appropriate axioms for identity. The identity predicate, then, is not part of the minimal conceptual apparatus needed to deductively systematize scientific theories. To treat “=” as if it were needed as a constant of core logic–the theory of deductive systematization–is thus simply a mistake. On the other hand, if one’s aim is to systematize some body of intuitions about topic neutrality or generality, one might well want a theory in which “=” (as well as other terms possibly) is treated semantically as a constant. But we are talking now about an extended theory, an addition to the minimal, core theory which is needed for any scientific purpose.

In sum, appeals to pre-theoretic intuitions about necessity and apriority, as well as to intuitions about logical form, are not critical to the project of deciding between alternative proposals for a core notion of “logical consequence”. The job of core logical theory is simply to provide the least controversial, minimal apparatus adequate to the task of clarifying, communicating, and testing scientific theories. As we have seen, these aims by themselves constrain the choice of a notion of logical consequence and of a set of logical constants. Merit of a limited sort may thus be acknowledged in Etchemendy’s criticism of Tarski’s account of consequence. Tarski’s theory, at least as it is realized by first-order logic, fails in certain respects to capture pre-theoretic modal intuitions about necessary (a priori, etc.) relations between sentences. But the point can be conceded without acknowledging any serious damage to the Tarskian approach to logical theory. Tarski’s only serious mistake was in assuming that an account of consequence needs to conform to pre-theoretic intuitions about necessity, apriority and form.

4. Adequacy of the truth-functional constants

Given that a minimal theory of deductive systematization is the main objective of core logic, a set of constants will be appropriate or correct provided that it is just rich enough to allow systematizations of the kinds of theories scientists need to formulate and evaluate. The case for admitting truth-functional connectives as constants is straightforward. Any scientific theory will posit entities and make claims to the effect that an entity satisfies, or fails to satisfy, some condition (for example, “Particle a has negative charge”, “Particle b has no negative charge”). A theory will also sometimes need to claim that an entity satisfies one of two conditions without specifying which (for example, “Particle c has a negative or positive charge”). The language will thus require some means of expressing negation and disjunction. Moreover, if a theory contains negations and disjunctions, it should also contain all the truth-functional consequences of such sentences. A theory which asserts p, for example, will also assert ¬¬p, ¬¬¬¬p, etc. But to ensure this is simply to recognize truth-functional operators as logical constants. A theory which is adequate for core logic will thus recognize some expressively complete set of truth-functional connectives (that is, a set capable of expressing any truth-function) as logical constants.

One might still complain that it may not be necessary to recognize a full expressively complete set of truth-functional connectives as logical. However, expressive completeness can be achieved even if only one such connective (that is, joint denial or alternative denial) is accorded this status. Moreover, nothing substantive turns on which expressively complete set of truth-functional connectives we adopt. Any sentence formed with one expressively complete set of operators can be viewed as an abbreviation of a sentence formed with any other such set. Since the extensions of “logically true” and “logically implies” will not vary, a logician who eschews “&” and “[disjunction]” and restricts his set of truth-functional connectives to the Sheffer stroke gains no advantage comparable to that achieved in avoiding a logical constant such as “contains hydrogen atoms”. Different sets of expressively complete truth-functions are equivalent in terms of the philosophical and scientific issues that they decide or leave open.
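The expressive completeness of alternative denial can be checked mechanically. The following sketch (Python; the function names are my own) defines the Sheffer stroke and recovers negation, conjunction and disjunction from it, verifying the definitions against the usual truth tables:

```python
def nand(p, q):
    # Alternative denial (the Sheffer stroke): true unless both are true.
    return not (p and q)

# Standard definitions of the other connectives from the stroke alone.
def neg(p):      return nand(p, p)
def conj(p, q):  return nand(nand(p, q), nand(p, q))
def disj(p, q):  return nand(nand(p, p), nand(q, q))

# Exhaustively check the definitions against the intended truth tables.
for p in (True, False):
    for q in (True, False):
        assert neg(p) == (not p)
        assert conj(p, q) == (p and q)
        assert disj(p, q) == (p or q)
```

The exhaustive check over the four truth-value pairs is all that is needed, since a truth-function is fixed entirely by its table.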

It might also be objected that acceptance of the classical truth-functions already assumes too much. The conservativeness of the minimalist constraint suggests intuitionist logic as an even more modest starting point. The main difficulty with this suggestion is that there is serious doubt that intuitionism is really more conservative than classical logic. The alternative view is that intuitionist and classical logicians simply assign different readings to the standard connectives. The reading of “¬”, in particular, is at issue. Classical logic can be understood as reading “¬” as follows:

(Neg) ¬p is true if p is not true; ¬p is false otherwise.

Two considerations weigh in favour of the view that the intuitionist reading of “¬” is other than that given by (Neg). First, intuitionist formulations of the excluded middle strongly suggest an alternative reading. Brouwer, for example, expresses excluded middle as the claim that “every mathematical assertion … either is a truth or cannot be a truth” (1940, p. 78). Dummett formulates the principle as a claim about what can be proved: “The assertion of A [disjunction] ¬A is … a claim to have, or to be able to find, a proof or disproof of A” (1977, p. 21). Second, there appears to be no way to make a case for a failure of p [disjunction] ¬p without assuming a different reading of “¬”. If p [disjunction] ¬p fails to be true, neither p nor ¬p will be true. But under (Neg), any failure of truth for p requires that ¬p be true. The denial of the excluded middle thus falls into incoherency unless (Neg) is denied.(10) In what follows, therefore, I will assume that intuitionism is not a more conservative theory than classical logic.(11)

Apart from the truth-functions, it is plausible that a scientific language will need to contain at least one first-order quantifier. If no scientific theory posited more than a finite set of entities, we could get by, in principle at least, with long conjunctions and/or disjunctions of atomic sentences. But some theories will posit infinite collections, and without quantifiers this would require infinite conjunctions and/or disjunctions. Such sentences raise problems for the objective of communication. A theoretician could never fully write out such a sentence, and those to whom she wished to communicate could never finish reading one. The problem is resolved by simply employing a first-order quantifier. Given that a scientific theory will contain quantified assertions, it also seems reasonable to require that it should contain all the first-order consequences of any such assertions. However, this does not automatically compel us to recognize first-order quantifiers as logical constants. As was the case for “=” noted earlier, we can deductively systematize first-order logic by simply supplementing truth-functional logic with quantified axioms. There is no need to recognize additional relations of logical consequence or additional logical truths.

As a preliminary hypothesis, then, a logical theory which is adequate for the purposes of deductive systematization will at least need to recognize truth-functional connectives as logical. The logical truths will consist of tautologies, and logical consequence can be understood as tautological consequence. The language will contain first-order quantifiers, and axioms for quantification are needed. But these axioms need not be counted as logically true. Given just this modest logical theory, a scientist can formulate and compare a wide range of theories simply by introducing new predicates or other terms and adding appropriate axioms. Still, there are various issues that need to be considered in deciding whether this quantifier-enhanced truth-functional logic must be further strengthened to serve the purposes of deductive systematization of scientific theories.

There is first the problem of the language in which scientific theories are formulated. I have so far assumed that the task of systematization is a straightforward matter of axiomatizing sets of sentences of some fixed language–presumably the scientist’s natural language. Benefits can be gained, however, if this assumption is relaxed slightly. Scientists in fact regularly modify the natural language on a patchwork basis by inventing new technical vocabulary and revising existing terminology when a new use of words suits some theoretical purpose. It seems reasonable to modify the rule suggested in the last section to allow that sentences formulated initially in the scientist’s native language may be paraphrased, where possible, in first-order form. The most straightforward and obviously harmless examples of such paraphrases involve replacement of English truth-functional connectives that operate on predicates with connectives that operate only on complete sentences or open formulas. “Fred is old and wise” routinely becomes “Fred is old & Fred is wise”. Somewhat more innovative paraphrases are required for English sentences containing predicate modifiers. “John signed the contract with his pen in the library” apparently involves a predicate “signed” and a number of modifiers which characterize the way in which the signing is done. Under a well-known proposal of Davidson’s (1967), the sentence can be paraphrased in first-order form by construing it as making an existential commitment to an event: “[exists]x(x was a signing & x was by John & x was of the contract & x was done with a pen & x occurred in the library)”. Similar paraphrases are available for first-order reformulations of a variety of natural language constructions that might otherwise require the introduction of new logical constants (see, for example, Wheeler 1972).

There is no need to make any claim that such first-order paraphrases constitute exact translations of English sentences. Formal substitutes for pre-theoretic musings will in many cases be more accurately characterized as replacements serving similar purposes rather than as translations. Pre-theoretic, natural language sentences often carry unwanted nuances which make them less than optimal as formulations of scientific theory. The English “and”, for example, frequently conveys temporal order, but its closest formal analogue–the purely truth-functional “&”–deliberately strips away that nuance. It is reasonable, therefore, to revise our initial rule concerning content intuitions along the following lines: if a sentence is pre-theoretically identified as a claim of a theory, the final systematized theory should contain either the sentence or a first-order paraphrase which is similar enough to serve the same theoretical purpose.

Other questions that bear on the adequacy of the truth-functional constants concern proposals for specific additional constants such as modal operators, generalized and branching quantifiers, and second-order quantifiers. For the purposes of core logical theory, the issues here are whether such additional operators are required in the language and whether they must be treated semantically as logical constants in order to formulate theories about the world. Let us consider the case of modal operators first. If “Necessarily” and “Possibly” are introduced as operators and accorded status as logical constants, it becomes possible to deductively systematize sets of sentences that cannot be systematized using only the first-order constants. This fact creates prima facie pressure in favour of recognizing modal operators as logical constants. What is unclear, however, is whether the theoretical claims in question can be made only in a language which contains modal operators.

Let us assume, for the sake of argument, that modal sentences make claims of the kind normally attributed to them by the now familiar semantics of modal logic: modal sentences make claims about possible worlds. We have already noted that new logical constants can sometimes be avoided if satisfactory means can be found for paraphrasing the sentences in question using only the expressive resources of a first-order language. The possible world semantics provides a straightforward means for formulating such paraphrases. Given suitable predicates, a first-order language can also make claims about possible worlds. Indeed, modal sentences which are interpreted by means of a possible world semantics can be accurately paraphrased, one-for-one, using appropriately interpreted first-order sentences which contain no modal operators. The basic idea of the paraphrase is that claims in a modal language about n-ary relations between objects can be replaced by claims in an ordinary first-order language about (n+1)-ary relations between objects and a world. “Socrates is older than Plato”, for example, is understood to contain an implicit reference to the actual world and hence to translate as “Socrates is older than Plato at world a” where “a” names the actual world. “Necessarily, Socrates is older than Plato” becomes “For all w, if w is accessible from a, Socrates is older than Plato at w”.

Since I am claiming that any modal sentence can be paraphrased in first-order form, it will be necessary to characterize the translation scheme in general terms and to show that meaning is preserved. Assume that L is a modal language which is in all respects the same as an ordinary first-order language except for the presence of an operator “[square]” understood as expressing necessity. The modal language is assumed to be interpreted in ways standard for modal theories such as quantified versions of T, S4, and S5. A possible worlds interpretation I of L will consist of a non-empty domain [D.sub.I](12), a non-empty set of worlds [W.sub.I], a binary relation of accessibility [R.sub.I] on [W.sub.I], an assignment of a member I(c) of [D.sub.I] to each individual constant c of L, and an assignment of a set of n-tuples I(P, w) of members of [D.sub.I] to each n-ary predicate letter P at each world w [element of] [W.sub.I]. Truth of a sentence p of L at a world w under interpretation I is defined in the standard way.

Sentences of L can be translated into a corresponding first-order language L’ provided that L’ is related to L as follows:

(1) L’ contains all the individual constants of L and a further infinite supply of individual constants not in L.

(2) For each n-ary predicate letter P of L, L’ contains an (n+1)-ary predicate letter P’.

(3) L’ contains a binary predicate letter “R'” and a unary predicate letter “D'” not in L.

The binary predicate “R'” will be used in L’ to make explicit claims about accessibility between worlds. The predicate “D'” is used to identify objects (as opposed to worlds). As we noted above, translation of an L sentence into an L’ sentence requires making explicit an unstated reference to a world. Hence, translation of L sentences always requires introducing a term (an individual constant or variable) to refer to a world. We thus define the translation Tr(p, t) of p with respect to a term t as follows:(13)

(1) Tr(P[t.sub.1] … [t.sub.n], t) = P'[t.sub.1] … [t.sub.n] t.

(2) Tr(p [right arrow] q, t) = Tr(p, t) [right arrow] Tr(q, t).

(3) Tr(¬p, t) = ¬Tr(p, t).

(4) Tr([inverted a]vp[v], t) = [inverted a]v(D’v [right arrow] Tr(p[v], t)).

(5) Where v is a variable not in p, Tr([square]p, t) = [inverted a]v(R’tv [right arrow] Tr(p, v)).

The translation of “Socrates is older than Plato” with respect to a constant “a” is thus simply “Socrates is older than Plato at a”. “[square] Socrates is older than Plato” becomes “[inverted a]x(R’ax [right arrow] Socrates is older than Plato at x)”. The Barcan formula “[inverted a]x[square]Fx [right arrow] [square][inverted a]xFx ” becomes “[inverted a]x(D’x [right arrow] [inverted a]y (R’ay [right arrow] F’xy)) [right arrow] [inverted a]y(R’ay [right arrow] [inverted a]x(D’x [right arrow] F’xy))”.
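The recursive clauses of Tr can be rendered directly as a short program. The following is a minimal sketch, assuming a tagged-tuple representation of formulas; all names here (`tr`, the tags `"pred"`, `"box"`, and so on) are illustrative rather than drawn from the text.

```python
def tr(formula, t, _fresh=[0]):
    """Translate a modal formula into first-order form with respect to term t,
    following clauses (1)-(5) of the translation scheme."""
    tag = formula[0]
    if tag == "pred":                        # (1) P t1...tn  ->  P' t1...tn t
        _, p, args = formula
        return ("pred", p + "'", args + [t])
    if tag == "imp":                         # (2) distribute over the conditional
        _, a, b = formula
        return ("imp", tr(a, t), tr(b, t))
    if tag == "neg":                         # (3) distribute over negation
        return ("neg", tr(formula[1], t))
    if tag == "all":                         # (4) relativize the quantifier to D'
        _, v, body = formula
        return ("all", v, ("imp", ("pred", "D'", [v]), tr(body, t)))
    if tag == "box":                         # (5) quantify over worlds accessible from t
        _fresh[0] += 1
        v = "w%d" % _fresh[0]                # a variable not occurring in the formula
        return ("all", v, ("imp", ("pred", "R'", [t, v]), tr(formula[1], v)))
    raise ValueError("unknown tag: %r" % tag)

# "Box Older(s, p)" with respect to "a" becomes
# "forall w1 (R' a w1 -> Older' s p w1)":
print(tr(("box", ("pred", "Older", ["s", "p"])), "a"))
```

Iterating the same clauses on both halves of the Barcan formula yields the longer first-order sentence displayed above.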

A paraphrase scheme cannot be called a “translation” unless there is a way of insuring faithfulness of the paraphrase. One way to justify a translation is to appeal to independently justifiable theories of truth for the languages in question. The possible world semantics which I assumed earlier is widely accepted in one form or another by modal logicians. With respect to first-order quantifiers, the truth-functional account which suffices for our notion of logical consequence will be inadequate for purposes of translation. Translation requires a more discriminating theory which shows how the meaning of a quantified sentence is determined by meanings of its sub-sentential components. The Tarskian semantics for quantifiers is clearly more appropriate for this purpose. Thus we take a first-order interpretation J to consist of a non-empty domain [D.sub.J], an assignment of an individual J(c) of [D.sub.J] to each constant c, and an assignment of a set of n-tuples J(P) of members of [D.sub.J] to each n-ary predicate letter P. Truth under an interpretation is then defined in the usual Tarskian way.

What we want to insure is that a sentence p of the modal language L is true under one of its intended possible world interpretations if and only if the sentence’s non-modal translation into L’ is true under an appropriate, corresponding interpretation. Assume that I is an interpretation of the modal language L and that L’ corresponds to L as indicated above. Then I’ will be an appropriately corresponding first-order interpretation of L’ provided that I’ satisfies the following conditions:

(1) [D.sub.I’] = [D.sub.I] [union] [W.sub.I].

(2) For each constant c of L’: if c is in L, then I'(c) = I(c); otherwise I'(c) [element of] [W.sub.I].

(3) For each (n+1)-ary predicate letter P’ corresponding to a predicate letter P in L: ⟨[o.sub.1], …, [o.sub.n], w⟩ [element of] I'(P’) if and only if ⟨[o.sub.1], …, [o.sub.n]⟩ [element of] I(P, w).

(4) For the binary predicate letter “R'”: ⟨[o.sub.1], [o.sub.2]⟩ [element of] I'(R’) if and only if [o.sub.1] [R.sub.I] [o.sub.2].

(5) For the unary predicate letter “D'”: o [element of] I'(D’) if and only if o [element of] [D.sub.I].

Faithfulness of Tr as a translation scheme can now be assured. Let I'[c:w] be an interpretation like I’ except that it assigns a world w to the individual constant c. So long as the two languages (L and L’) and interpretations (I and I’) are related as described above, one can show by straightforward mathematical induction that any sentence p of L is true at w under I if and only if Tr(p, c) is true under I'[c:w].
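The induction can be spot-checked mechanically on a small finite case. The sketch below, with illustrative data throughout, verifies that “[square]Older(s, p)” is true at a world w under a toy interpretation I just in case its translation is true under the corresponding interpretation with “a” assigned to w.

```python
# Toy possible-worlds interpretation I (all data illustrative)
D = {"socrates", "plato"}                       # domain of objects
W = {"w0", "w1"}                                # worlds
R = {("w0", "w0"), ("w0", "w1")}                # accessibility R_I
OLDER = {"w0": {("socrates", "plato")},         # I(Older, w) at each world
         "w1": set()}

def box_older_true_at(w):
    """Modal truth: Box Older(s, p) at w under I."""
    return all(("socrates", "plato") in OLDER[v] for (u, v) in R if u == w)

# Corresponding first-order interpretation I', per conditions (1)-(5):
# domain is D union W, and Older' holds of (o1, o2, w) iff (o1, o2) is in I(Older, w).
OLDERP = {(a, b, w) for w in W for (a, b) in OLDER[w]}

def translation_true_at(w):
    """First-order truth of 'forall v (R' a v -> Older' s p v)' with I'(a) = w."""
    return all(("socrates", "plato", v) in OLDERP
               for v in D | W if (w, v) in R)

for w in W:
    assert box_older_true_at(w) == translation_true_at(w)
print("modal truth and translated truth agree at every world")
```

The check agrees at w0 (where the sentence is false, since Older fails at the accessible world w1) and at w1 (where it is vacuously true), in line with the induction.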

One might still object that this line of argument establishes only material equivalences between sentences of L and L’. No necessary connection has been demonstrated. Hence, one might hold that we have not really demonstrated that the translation scheme preserves meaning. The objection raises the issue of what is required to insure that a translation is correct. If, indeed, a necessary connection is needed, then the above argument establishes nothing about translation. However, though only material equivalences are demonstrated, it is significant that the equivalences are derived from independently plausible semantic theories about the languages L and L’. If the semantic theories are reasonable, then translational equivalences derived from them are presumably also reasonable. The semantic theories, of course, are open to question. Someone might argue, for example, that the possible world semantics for “[square]” is wrong fundamentally or in some detail. Likewise, one might object that the standard Tarskian truth conditions for first-order sentences are mistaken. If such criticisms proved to be correct, then of course the semantic theories could not provide any basis for translation. In that event, it might be necessary to recognize modal operators as logical constants of the core theory.

On the other hand, if the Tarskian and possible world semantic theories are correct, translations produced by Tr allow the enterprise of deductive systematization to proceed with a logical theory no richer than quantifier-enhanced truth-functional logic. The elimination of modal operators from core logic achieves an additional benefit that is surely worthwhile: the controversies of modal logic are now relocated to a special science. This includes both issues that divide modal believers and issues that separate believers from skeptics. For example, the “internal” question whether sentences of form “[square]p [right arrow] [square][square]p” should be considered logically true is converted to the problem whether the accessibility relation between worlds is transitive. The modal skeptic’s “external” questions about the meaning of “Necessarily” are reformulated as questions about whether possible worlds exist, about how possible worlds are distinguished from worlds that are not possible, about whether objects can exist in more than one world, and so forth. The elimination of modal operators from core logic does nothing, of course, to resolve these controversies. But this is surely just what one expects of accurate translation. The paraphrase simply relocates the debates from the heart of logical theory to a special science, thus leaving core logical theory uncontroversial and safe, as it should be. Questions and theories about the existence and nature of possible worlds can be entertained and debated in the same way as issues of other special sciences concerning, for example, the nature of light, imaginary numbers, neural networks, or disembodied spirits.

The flexibility of the translation scheme bears emphasizing. Worlds can be understood as harmless mathematical abstractions or as robust realist entities. Worlds could also be entities that are open to further analysis in terms of propositions, states of affairs, properties or any other entities a proponent of modal theories cares to invoke. Further, the translation scheme is easily modifiable to accommodate different versions of the possible world semantics such as versions that assign different domains of entities to different worlds.

The strategy used here for paraphrasing modal claims can be put to work to provide first-order versions of claims using other exotic operators such as generalized and branching quantifiers. If a plausible truth-conditional semantics for the non-standard constants can be formulated in first-order form, the semantics itself provides the means for constructing a paraphrase in a language equipped with suitably interpreted predicates. Thus consider Sher’s proposal (1991, p. 57) for a two-place quantifier “most”. “Mx(Bx, Cx)” (in effect, “Most B are C”) is true provided that the set of Bs that are C is larger than the set of Bs that are not C. The idea is expressed in a first-order set theory as “{x|Bx} [intersection] {x|Cx} > {x|Bx} - {x|Cx}”, where “>” expresses “is larger than”.(14) Likewise, “UxBx” (for “Uncountably many things are B”) has a first-order set-theoretic paraphrase in “{x|Bx} > [Omega]” where “[Omega]” denotes the set of non-negative integers.
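On a finite domain the equivalence between Sher’s truth condition and the set-theoretic paraphrase can be checked directly; the data below are illustrative.

```python
B = {1, 2, 3, 4, 5}          # the Bs
C = {2, 3, 4, 9}             # the Cs

# Sher's condition: the Bs that are C outnumber the Bs that are not C
most_direct = sum(1 for x in B if x in C) > sum(1 for x in B if x not in C)

# The first-order set-theoretic paraphrase: |B intersect C| > |B - C|
most_paraphrase = len(B & C) > len(B - C)

assert most_direct == most_paraphrase
print(most_paraphrase)       # True here: 3 Bs are C, 2 are not
```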

First-order set theory thus allows us to achieve the effect of introducing generalized quantifiers but not treating them as logical constants. Under a standard interpretation, the appropriate first-order sentence expresses the facts for which the generalized quantifiers were wanted, but the facts are expressed without any cost in enlargement of logical theory. Of course, it should be recognized that minimalism in logical theory is not the only consideration that bears on the acceptability of such a first-order paraphrase. The formal paraphrase for “Most B are C”, for example, carries an ontological commitment to sets which is not apparent in English. Someone who objects to the commitment has a reason to reject the paraphrase and hence, perhaps, to admit “Most” as a logical operator in its own right. Clearly, the demand for minimalism in logical theory must be balanced against other considerations such as the demand for minimalism in ontology.

Similar issues are raised by branching quantifiers. Branching quantifiers have been controversial, in part, because it has proved difficult to find convincing examples of English sentences which require a branching reading. However, Barwise has argued that this difficulty can be overcome by considering sentences in which the quantifiers are not limited to the standard “every” and “some”. Thus “Most cats and most dogs hate each other” plausibly has a branching structure in which neither occurrence of “most” depends on the other.

Under the analysis proposed by Barwise (1979), the sentence is to be understood as asserting the existence of a class containing most cats and a class containing most dogs where every member of each class hates every member of the other class. The analysis is overtly set-theoretical and can be expressed straightforwardly in a first-order set theory.(15)
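Barwise’s branching truth condition is finitely checkable by brute force. The following sketch, with invented data, searches for a class containing most cats and a class containing most dogs whose members all hate one another.

```python
from itertools import combinations

cats = {"c1", "c2", "c3"}
dogs = {"d1", "d2"}
# "hates" is stipulated to be mutual here, so one direction suffices
hates = {(c, d) for c in ("c1", "c2") for d in ("d1", "d2")}

def most_of(sub, whole):
    """sub contains most of whole: |sub| > |whole - sub|."""
    return len(sub) > len(whole - sub)

def subsets(s):
    return (set(c) for n in range(len(s) + 1) for c in combinations(s, n))

# Barwise's condition: there are classes A, B with most cats in A, most dogs
# in B, and every member of A and every member of B hating each other.
branching = any(
    most_of(A, cats) and most_of(B, dogs)
    and all((c, d) in hates for c in A for d in B)
    for A in subsets(cats) for B in subsets(dogs))
print(branching)   # True: A = {c1, c2}, B = {d1, d2} witness the claim
```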


Second-order quantifiers are also candidates for the status of logical constants.(16,17) One potentially persuasive reason for introducing such constants is the fact that they allow at least a partial success in the effort to deductively systematize arithmetic. One can identify a set of axioms whose second-order consequences include all the truths of elementary arithmetic (see Boolos and Jeffrey 1989, Ch. 18). By comparison, Gödel demonstrated that no consistent, finite set of first-order axioms will have all of these truths as first-order consequences. Second-order logic thus apparently succeeds in one respect in which first-order logic fails. However, second-order logic suffers in a different respect since it lacks a complete proof procedure.

It is debatable, of course, whether the lack of a complete proof procedure constitutes a serious shortcoming. George Boolos (1975) has asked why it should be important that there is no complete proof procedure for second-order logic but not equally damning that first-order logic lacks a decision procedure. The existence of a complete proof procedure was one of our original desiderata in [section] 3 above for evaluating efforts at deductive systematization. A theory of consequence, by itself, allows us to characterize the claims of a theory as those sentences that are either axioms or consequences of the axioms. In effect, a theory of consequence and a set of axioms create a set of objective facts of the matter concerning which sentences are claims of a scientific theory. What the theory of consequence does not do, however, is provide any assurance that the claims of a theory will actually be recognizable as such. When a complete proof procedure is available, any claim of the theory will eventually be identified as such assuming that an investigator is allowed unlimited time to apply the procedure.(18) Without such a procedure, however, there will be some assertions of the theory that cannot be identified as such by any routine method. Lack of a complete proof procedure thus impacts on the clarity and communicability of the systematized theory, and these were fundamental objectives of the project of deductive systematization. In one obvious sense, the formulator of a theory which lacks a complete proof procedure, as well as other scientists to whom she wishes to communicate the theory, will have less of a firm grasp on just which assertions are claims of the theory. Absence of a complete proof procedure also means that the systematized theory will not be testable in the usual sense. There will be sentences that are consequences of the axioms but which cannot be demonstrated to be such by routine methods. 
If such a sentence happens to be false, knowledge that the sentence is false may still not enable us to refute the theory. With respect to Boolos’s point, the lack of a decision procedure in first-order logic is a shortcoming, but it is a less serious shortcoming from the point of view of the aims of deductive systematization. It means, for one thing, that there are cases where a sentence is not an assertion of a first-order theory, but no routine method exists for demonstrating that it is not. But this limitation does not appear to impair our ability to clarify, communicate or test theories.

It is clear, then, that any benefits derived from the introduction of second-order quantifiers are secured only at the cost of serious compromise of the basic goals of deductive systematization. With respect to the prospects for systematizing elementary arithmetic, the fair conclusion seems to be that no axiomatization, first or second-order, achieves everything that we could want in a systematization of elementary arithmetic. Arithmetic, in fact, appears to mark the limitations of the enterprise of deductive systematization.

The arguments above clearly do not demonstrate that truth-functional logic supplemented with first-order quantifiers is adequate for all purposes of deductive systematization of scientific theories. For one thing, there are many other proposals for non-standard constants, perhaps indefinitely many others. It would go far beyond the scope of this essay to attempt to demonstrate that all such proposals are really unneeded. Nevertheless, the arguments above do suggest strategies and desiderata that may be applicable to other proposals for non-standard constants. If, as in the case of modal operators, a truth-conditional semantics for a non-standard set of constants can itself be formulated in first-order form, then the semantics itself may provide a basis for paraphrasing the non-standard theory.(19)

There is a deeper reason, however, for the fact that there can be no a priori proof of the adequacy of a quantifier-enhanced truth-functional logic. I have argued that the choice of a core logical theory is constrained by the needs of science. In particular, it is constrained by the types of theories scientists need to formulate. Presumably, the nature of the needed theories is at least partly determined by the nature of the real world scientists are investigating. Since we cannot presume to know what future investigation will reveal, the types of scientific theories needed, and hence the logical theory needed, must remain to some extent an open question. According to the vision of logic developed here, therefore, logical theory can never be the sort of “closed and completed body of doctrine” once envisaged by Kant (1965, Preface to the Second Edition).

5. Extended logical theories

Core logic seeks to identify a minimal set of conceptual resources adequate for the deductive systematization of scientific theories. Axioms, predicates and even operators can be added as needed to systematize specific theories, but new logical constants and resultant changes in the theory of logical consequence are admitted only when the relevant body of scientific theory cannot be systematized without them.

Though core logic clearly has its place and purpose in the scientific enterprise, there are also reasons for entertaining systematizing theories that do not aim for such minimal resources. One might, for example, seek a theory that is richer than core logic but still topic-neutral in the sense that it presupposes nothing about the kinds of entities that exist. Such objectives lead naturally to recognition of the first-order quantifiers and identity as logical constants. Complete proof procedures are available as in the core theory, and the change of status has the effect that additional inferences commonly thought of as necessary are recognized as logical. Tradition still applies the name “logic” when such constants are recognized, and that classification is unproblematic here even though the theories are extensions of the core theory.

Likewise, it is sometimes desirable to formalize not just theories about the world but particular ways of talking about the world. One theme that runs through much philosophical writing about logic is the idea that logic is a formalization or regimentation of one or another aspect of ordinary discourse. It is commonplace, for example, to speak of the “logic of tense”, “epistemic logic”, and the “logic of questions”. In such usage, the term “logic” is applied to the study of virtually any properties or relations of sentences that are pre-theoretically considered to be necessary, a priori, analytic, etc. Modal intuitions of various kinds that have no role in a minimalist systematization assume paramount importance in evaluating theories of this sort. It will not be surprising, therefore, if theories that formalize some aspect of the natural language recognize additional terms whose meanings are fixed across interpretations.

One reasonably well developed illustration is tense logic. Tense logic can be understood as an effort to formalize a feature of the grammar of natural language. Since tense can be expressed in a variety of ways in English, it is useful in a formalization to identify only a limited set of operators which perform the functions in question. We thus introduce “P” to express “It was the case that” and “F” to express “It will be the case that”. When such operators are added to an otherwise truth-functional language, we achieve something akin to a laboratory setting in which the desired feature of the natural language can be studied more or less independently of other features. In particular, we can consider the effects of various assumptions about the range of acceptable interpretations in an effort to formalize modal intuitions about necessary properties and relations of sentences. Consider, for example, the intuition that if a sentence p were true, then it follows that PFp is true. We let TL be a language which contains the usual truth-functional operators plus “P” and “F”. An interpretation I of TL consists of a non-empty set of times [T.sub.I], a binary relation [E.sub.I] (earlier than) on [T.sub.I], and an assignment I(p, t) of a truth-value T or F to each simple sentence p at each time t [element of] [T.sub.I]. Truth in TL is defined in the standard way for sentences governed by truth-functional connectives and as follows for the temporal operators:

Pq is true at [t.sub.1] under I if and only if [exists][t.sub.2]([t.sub.2] [element of] [T.sub.I] & [t.sub.2] [E.sub.I] [t.sub.1] & q is true at [t.sub.2] under I).

Fq is true at [t.sub.1] under I if and only if [exists] [t.sub.2]([t.sub.2] [element of] [T.sub.I] & [t.sub.1] [E.sub.I] [t.sub.2] & q is true at [t.sub.2] under I).

Formalization of the desired intuition requires an assumption, in effect that each time is preceded by an earlier time, which we impose as a single constraint on the relation [E.sub.I] used in interpretations of TL: [inverted a][t.sub.1][exists][t.sub.2] [t.sub.2] [E.sub.I] [t.sub.1]. Given only this constraint, we can claim to have captured the intuition about a necessary connection between p and PFp: every interpretation that makes p true at a time will make PFp true at that time.
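The claim just made can be confirmed by model-checking p [right arrow] PFp on a small interpretation of TL that satisfies the constraint; a cyclic “earlier than” relation gives every time a predecessor. The structure below is illustrative.

```python
T = {"t0", "t1"}
E = {("t0", "t1"), ("t1", "t0")}   # E_I ("earlier than"); cyclic, so every time has a predecessor
VAL = {"t0": True, "t1": False}    # I(p, t): truth-value of the simple sentence p at each time

def P_true(q, t):
    """Pq is true at t iff q is true at some earlier time."""
    return any(q(t2) for (t2, u) in E if u == t)

def F_true(q, t):
    """Fq is true at t iff q is true at some later time."""
    return any(q(t2) for (u, t2) in E if u == t)

def p(t):
    return VAL[t]

# the single constraint: forall t1 exists t2 (t2 E_I t1)
assert all(any(b == t for (_, b) in E) for t in T)

# p -> PFp holds at every time of this interpretation
assert all((not p(t)) or P_true(lambda u: F_true(p, u), t) for t in T)
print("p -> PFp verified at every time")
```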

Within the formalized theory of tense, the operators “P” and “F” play a role exactly analogous to that of logical constants in a minimalist theory of deductive systematization. That is, interpretations of TL can vary in a number of respects–in the assignments they make to simple sentences, in the set [T.sub.I], and even in the order of times represented by [E.sub.I]. However, they do not vary in the truth conditions associated with “P” and “F”. What distinguishes “P” and “F” from logical constants of core logic is not the type of role the operators play but the nature and purposes of the theory in which it is played. The theory of tense deliberately ignores the minimalist constraint. The aim of the theory is not simply to systematize beliefs about the nature of time. If that were the only objective, the tense operators “P” and “F” would be unnecessary. All of the desired generalizations about time can be recast in an ordinary first-order language containing no tense operators. The point of a theory employing tense operators is to systematize a particular way of talking about time, a feature of the grammar of the natural language. Since the aim is to formalize the behaviour of tense operators, the minimalist constraint which would dispense with such operators is relaxed. “P” and “F” thus play the role of logical constants, but they play it in a theory whose purposes are quite different from those of core logic.

Modal logic, in standard formulations such as T, S4, S5 or quantified versions thereof, provides more examples of such non-minimalist theories. The aim is again to formalize a particular way of talking, specifically, of the behaviour of “Necessarily” and “Possibly”. Clearly, a translation scheme like the one considered in the last section that simply eliminates such terms would miss the point completely. The objective of a theory of modal operators is not to eliminate them but to understand them and characterize them in formal terms. Hence, theories with modal operators are best understood as theories of extended logic with purposes quite different from those of core logic. Exactly similar considerations hold for theories about generalized quantifiers and branching quantifiers.

Given the differences in objectives, one may still wonder whether it is appropriate to apply the term “logic” both to core logic and to extended theories. There is clearly a deep difference between the objective of deductive systematization, as such, and other aims such as that of understanding features of language through formalization. One project seeks the minimal resources required for systematizing theories, irrespective of the language in which the theories are formulated. The other project holds the language variable fixed, thus deliberately passing over possible simplifications that might be achieved through paraphrase. A minimalist systematization is inherently a more fundamental theory since linguistic systematizations inevitably presuppose at least this minimal apparatus. Tradition and the actual practice of logicians apply the same term to both kinds of theories, and it would be anomalous to characterize the discipline in a way that implies that much of the theoretical work done by actual logicians is not logic. But it should also probably be acknowledged that the use of the name “logic” for both enterprises has tended to blur the differences just noted.(20)

(1) Tarski’s uncertainty was not a permanent condition, and he subsequently advanced his own account of the distinction (Tarski, 1986). His proposal was similar to but less well developed than Sher’s (1991) theory of constancy which will be discussed below. See also Sher’s discussion (1991, pp. 63-4) of Tarski’s theory.

(2) (S) omits some provisions of Sher’s criterion of constancy, though I think it expresses the key intuition underlying her criterion. For the full criterion, see Sher (1991, pp. 54-5).

(3) Mostowski (1957) quoted in Sher (1991, p. 14).

(4) For similar problematic examples, see Machover (1994) and Hanson (1997, pp. 391-2).

(5) Similar problems arise for Arnold Koslow’s (1992) more recent analysis of logical operators. Koslow shows that, given a notion of implication, one can define the logical operators in terms of the implication relation. But different choices of assumptions about the implication relation result in different sets of logical operators. Hence, the problem of which operators should be recognized as logical operators has simply been replaced by a different problem which is no less difficult to solve: which assumptions should be adopted concerning implication?

(6) Even if one assumes that the controversies associated with necessity and apriority could be resolved, it is by no means certain that the payoff would be a single, sharp distinction between logical and non-logical terms. Hanson, for example, invokes both necessity and apriority in his analysis of logical consequence. Still, he holds that “the selection of terms to serve as logical constants is ultimately a pragmatic matter” (1997, p. 365). But if different pragmatic considerations can lead to different choices of logical constants, those considerations will also lead to different extensions for “logical consequence” and “logical truth”.

(7) I have concentrated on problems relating to intuitions about necessity, apriority and form mainly because these intuitions are widely regarded as essential to the logical enterprise, and they are also subjects of serious controversy. Similar considerations apply to intuitions about topic neutrality and generality, though the philosophical issues involved may be slightly less fundamental. Ryle (1954, p. 116), for example, considered “is a member of” to be topic-neutral, though many logicians reject the predicate as a logical constant on the grounds that it is not topic-neutral. Likewise, temporal operators have seemed to many to make claims about time, but Peacocke (1976, pp. 229-30) counts them as logical and specifically asserts their topic neutrality. For discussion of problems relating to the vagueness of topic neutrality see Haack (1978, pp. 5-6) and Sainsbury (1991, pp. 313-4).

(8) I do not assume that such testing is limited to deriving observational consequences and testing them by experiment. Nothing is assumed here about the means that can be used to determine that a consequence is false. One might, for example, test a mathematical theory by showing that it has a previously unrecognized but obviously absurd consequence.

(9) We shall have occasion to revise the rule in the next section.

(10) An intuitionist might respond that, though he does not deny any instance of p [disjunction] ¬p, he simply does not assert certain instances. Unquestionably, there are truths which, for various reasons, we do not assert. It is at least conceivable that some instances of the excluded middle might fall into this category. However, it seems likely that, for some intuitionists at least, a difference in the reading of “[disjunction]” is also at work here. Dummett’s formulation of the excluded middle quoted above strongly suggests this. He holds that the meanings of all logical constants are to be given as conditions of proof: “The meaning of any given constant is to be given by specifying, for any sentence in which that constant is the main operator, what is to count as a proof of that sentence” (1977, p. 12). His meaning specification for “[disjunction]” says that “A proof of A [disjunction] B is anything that is a proof either of A or of B” (1977, p. 12). This reading of “[disjunction]” seems clearly different from that given by the familiar two-valued truth-table. We can imagine explicitly introducing a new operator “[disjunction]*” to express Dummett’s reading: “p [disjunction]* q” would mean “We can prove that p or we can prove that q”. Clearly a classical logician would have no problem withholding assent to some instances of p [disjunction]* ¬p. But this does not conflict with the classical logician’s claim that p [disjunction] ¬p is always true when “¬” and “[disjunction]” are understood in the way he reads them.

(11) I should acknowledge that logicians disagree on this issue, and some may prefer to identify core logic with intuitionist logic. My claim that intuitionist logic assigns a different reading to “¬” naturally raises the question what that reading is. One promising suggestion, which is perhaps implicit in the passage quoted above from Brouwer, is the proposal by McKinsey and Tarski (1948) that the intuitionist ¬p be read as ¬[diamond]p where “[diamond]” is the possibility operator of the Lewis system S4. Also see Fitting (1969). For an interesting argument that the intuitionist still needs classical negation, see Hossack (1990).

(12) To keep things simple, I consider only modal interpretations involving a single domain rather than a separate domain for each world. Modifications of the translation scheme that would allow different domains for different worlds are straightforward.

(13) The translation scheme proposed here is similar to one suggested by Forbes (1985, Appendix). Forbes does not address the problem of insuring that the translation preserves meaning as characterized by the semantic interpretations.

(14) It is convenient to employ a limited vocabulary of first-order function symbols such as “[intersection]” and “–” as well as set abstraction. These devices are eliminable in well-known ways. The first-order set theory assumed is discussed in Mendelson (1987, Ch. 4).

(15) Sher (1991, Ch. 5) proposes a slightly different analysis, but it readily submits to the same sort of paraphrase into first-order set theory.

(16) As understood here, first-order and second-order logic are distinguished in formal rather than ontological terms. Second-order logic employs quantifiable variables for which predicates may be substituted. Only singular terms may be substituted for quantifiable variables in first-order logic. Ontologically, however, a first-order theory can commit to any type of entity that one can commit to in a second-order theory.

(17) Unlike the case of modal operators, the semantics of second-order quantifiers does not support translation of second-order statements into assertions that contain only first-order quantifiers. Truth for “[inverted a]XXa” under an interpretation I is defined in terms of whether another sentence, “Fa”, is true under the class of interpretations that differ from I at most in what they assign to “F”. Since truth for “[inverted a]XXa” is not defined in terms of whether “Fa” is true under a single interpretation of a different type, we have no way of pairing up each second-order sentence and its interpretation with a corresponding first-order sentence and interpretation.

(18) I assume here that there is a decision procedure for the axioms of the theory, and that any inference rule is decidable in the sense that one can always determine whether a given conclusion results from given premises by the rule. Such assumptions are presumably reasonable given the objectives of deductive systematization. In any such system, the theorems (hence, by completeness, the logical consequences of the axioms) can be effectively enumerated. That is, one can use Gödel numbers assigned to sentences to mechanically generate a list which will eventually include any given theorem. See Mendelson (1987, p. 68) for an illustration of the technique. Of course, effective enumerability of the theorems does not constitute a decision procedure for theoremhood since it still does not give us a routine method for demonstrating that a sentence is not a theorem.
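The enumeration technique mentioned in this note can be illustrated with a sketch. The formal system below is a toy invented purely for illustration (it is not the first-order calculus under discussion): the alphabet is {a, b}, the sole axiom is "a", and the one rule licenses inferring x + "ab" from x. Because axiomhood and rule-application are decidable, every finite candidate derivation can be mechanically checked, and dovetailing over all candidates eventually lists every theorem, even though no decision procedure for non-theoremhood is thereby provided.

```python
from itertools import count, product

# Toy system (illustrative only): axiom "a"; rule: from x infer x + "ab".
AXIOMS = {"a"}

def follows_by_rule(premise, conclusion):
    # Decidable check: does `conclusion` result from `premise` by the rule?
    return conclusion == premise + "ab"

def is_valid_derivation(seq):
    # Decidable check: every line is an axiom or follows from an earlier line.
    for i, line in enumerate(seq):
        if line in AXIOMS:
            continue
        if not any(follows_by_rule(prev, line) for prev in seq[:i]):
            return False
    return True

def enumerate_theorems():
    # Dovetail over all finite sequences of strings (stand-ins for
    # Goedel-numbered derivations); any theorem eventually appears.
    seen = set()
    for n in count(1):
        strings = ["".join(p) for k in range(1, n + 1)
                   for p in product("ab", repeat=k)]
        for length in range(1, n + 1):
            for seq in product(strings, repeat=length):
                if is_valid_derivation(list(seq)):
                    theorem = seq[-1]
                    if theorem not in seen:
                        seen.add(theorem)
                        yield theorem
```

As the note observes, this procedure confirms any given theorem after finitely many steps but never certifies that a string is *not* a theorem: a non-theorem simply fails to appear, no matter how long the enumeration runs.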

(19) For discussion of other issues relating to the adequacy of the first-order constants see Tharp (1975), Lindström (1969), and Zucker (1978).

(20) I am grateful for many helpful suggestions by a referee for Mind. This research was supported by grant #410-92-1072 from the Social Sciences and Humanities Research Council of Canada.


Barwise, J. 1979: “On Branching Quantifiers in English”. Journal of Philosophical Logic, 8, pp. 47-80.

Barwise, J. and Cooper, R. 1981: “Generalized Quantifiers and Natural Language”. Linguistics and Philosophy, 4, pp. 159-219.

Boolos, G. 1975: "On Second-Order Logic". Journal of Philosophy, 72, pp. 509-27.

Boolos, G. and Jeffrey, R. 1989: Computability and Logic, 3rd edition. New York: Cambridge University Press.

Brouwer, L. E. J. 1940: “Consciousness, Philosophy and Mathematics”, in P. Benacerraf and H. Putnam (eds.) Philosophy of Mathematics, Englewood Cliffs, NJ: Prentice-Hall, 1964, pp. 78-84. Originally published in 1940, Proceedings of the Tenth International Congress of Philosophy, Amsterdam: North Holland.

Davidson, D. 1967: “The Logical Form of Action Sentences”, in his Essays on Actions and Events. Oxford: Oxford University Press, 1980, pp. 105-48. Originally published in 1967 in N. Rescher (ed.) The Logic of Decision and Action, Pittsburgh: University of Pittsburgh Press.

Dummett, M. 1977: Elements of Intuitionism. Oxford: Oxford University Press.

Etchemendy, J. 1990: The Concept of Logical Consequence. Cambridge, MA: Harvard University Press.

Fitting, M. C. 1969: Intuitionistic Logic, Model Theory and Forcing. Amsterdam: North Holland.

Forbes, G. 1985: The Metaphysics of Modality. Oxford: Oxford University Press.

Haack, S. 1978: Philosophy of Logics. Cambridge: Cambridge University Press.

Hacking, I. 1979: “What is Logic?” Journal of Philosophy, 76, pp. 285-319.

Hanson, W. H. 1997: “The Concept of Logical Consequence”. The Philosophical Review, 106, pp. 365-409.

Hossack, K. G. 1990: “A Problem about the Meaning of Intuitionist Negation”. Mind, 99, pp. 207-19.

Kant, I. 1965: Critique of Pure Reason, translated by N. K. Smith. New York: St Martin’s Press.

Koslow, A. 1992: A Structuralist Theory of Logic. Cambridge: Cambridge University Press.

Lindström, P. 1969: "On Extensions of Elementary Logic". Theoria, 35, pp. 1-11.

Machover, M. 1994: “Review of Gila Sher, The Bounds of Logic”. British Journal for the Philosophy of Science, 45, pp. 1078-83.

McKinsey, J. C. C. and Tarski, A. 1948: “Some Theorems about the Sentential Calculus of Lewis and Heyting”. Journal of Symbolic Logic, 13, pp. 1-15.

Mendelson, E. 1987: Introduction to Mathematical Logic, 3rd edition. Monterey, CA.: Wadsworth and Brooks/Cole.

Mostowski, A. 1957: “On a Generalization of Quantifiers”. Fundamenta Mathematicae, 44, pp. 12-36.

Peacocke, C. 1976: “What is a Logical Constant?” Journal of Philosophy, 73, pp. 221-40.

–1981: "Hacking on Logic: Two Comments". Journal of Philosophy, 78, pp. 168-75.

Quine, W. V. 1970: Philosophy of Logic. Englewood Cliffs, NJ: Prentice-Hall.

–1953: "Mr. Strawson on Logical Theory", in his The Ways of Paradox and Other Essays. Cambridge, MA: Harvard University Press, 1976, pp. 136-57. Originally published in 1953 in Mind, 62.

Russell, B. 1919: Introduction to Mathematical Philosophy. London: George Allen and Unwin.

Ryle, G. 1954: Dilemmas. Cambridge: Cambridge University Press.

Sainsbury, M. 1991: Logical Forms. Oxford: Basil Blackwell Ltd.

Sher, G. 1989: “A Conception of Tarskian Logic”. Pacific Philosophical Quarterly, 70, pp. 341-68.

–1991: The Bounds of Logic. Cambridge, MA: MIT Press.

–1996: “Did Tarski Commit `Tarski’s Fallacy’?”. Journal of Symbolic Logic, 61, pp. 653-86.

Tarski, A. 1936: "On the Concept of Logical Consequence", in his Logic, Semantics, Metamathematics. Indianapolis: Hackett Publishing, 1983, pp. 409-20. Originally published in Polish in 1936 in Przegląd Filozoficzny, 39.

–1986: “What are Logical Notions?” History and Philosophy of Logic, 7, pp. 143-54.

Tharp, L. 1975: “Which Logic is the Right Logic?” Synthese, 31, pp. 1-21.

Wheeler, S. 1972: "Attributives and their Modifiers". Noûs, 6, pp. 310-34.

Zucker, J. I. 1978: “The Adequacy Problem for Classical Logic”. Journal of Philosophical Logic, 7, pp. 517-35.


Department of Philosophy, University of Manitoba, Winnipeg, Manitoba, Canada R3T 2N2. Warmbro@cc.umanitoba.ca

COPYRIGHT 1999 Oxford University Press