Rationality in action
In everyday life people pass in corridors without colliding. They heed promises, do one another favours and cooperate beyond the call of self-interest. To a game theorist these facts can be as perplexing as the character of the external world to an epistemologist. In both cases there are leading accounts of practical or theoretical reason which result in paradoxes. Such paradoxes can often seem merely idle to the sensible inhabitants of the external world, and yet be so deep, disconcerting and fertile that the leading account is worth rethinking at its roots. The aim of this paper is to survey some recent developments in the theory of choice which are causing great turbulence in economic theory and bear significantly on the philosophy of rational action. Progress is leading to paradox, however, and we hope to spread the turbulence to philosophy in search of fresh ideas.
The paper is in four main parts. The first prepares the way by offering a brief historical sketch of how economic theory has come to make utility a core concept, bleached of specific content but not innocent of philosophical presuppositions. The second deploys the notion of strategically rational choice which is central to the theory of games, and points up some philosophically intriguing developments. It identifies paradoxical results which are disturbing in themselves and have wider implications for the analysis of practical reason, not least in ethics. The third explores the assumption that ideally rational agents choose in the light of the knowledge that other agents are ideally rational, and tests the suspicion that this assumption is incoherent. The fourth suggests that utility theory is not and cannot be innocent of all philosophy of mind, and considers some alternative accounts of motivation.
Economics was defined by Robbins (1932, p. 15) as “the science which studies the relationship between ends and scarce means which have alternative uses”. In that case rational choice theory could be the starting point for a very general science of practical reason, which is descriptively accurate, predictively successful and has explanatory power. So it is disconcerting when experience seems to show that even economic behaviour, even in the market situations central to the theory of rational choice, often fails to conform to it. A common response is to argue that appearances are misleading, since the behaviour can usually be shown to conform to the theory, if properly interpreted. Here lies much that is of philosophical interest. But this paper takes a different tack. Any apparent lack of fit is not necessarily bad news for the theory. To identify a rational choice is to say that an agent would, in some sense and circumstances, do well to make it. If actual agents do not, they, rather than the theory, may be at fault. The theory of rational choice has a large prescriptive or normative aspect which can be isolated by considering an ideal world where all agents are fully rational. This theoretical exercise is our topic, and we shall focus on critical moments where, it seems, ideally rational agents are either paralysed, when reasonable people would not hesitate, or are rationally required to make choices which reasonable people would reject.
1. The origins of rational choice theory
To become puzzled that people do not collide in corridors, one first needs to find an “economic” analysis of rational action and practical reason highly persuasive. This is much helped by giving rational choice theory a skeletonic history, before presenting its theoretical elements. Although its origins can be traced back to Plato and Aristotle, the modern theory stems from the scientific aspirations of the Enlightenment. It derives from the ambitious but conflicting attempts at a moral science of mind made by Hobbes, Hume and Kant, and was then given a mathematical structure by Bentham and the utilitarians, before being abstracted as an all-but-formal exercise in what might be termed epistemic logic.
Hobbes opens Leviathan with a mechanistic account of human beings–“life is but a motion of the limbs”. Our actions are driven by our appetites and aversions: the will is simply “the last appetite, or aversion, immediately adhering to the action, or the omission thereof”. Reason comes into the account when a person has conflicting appetites and aversions, so that he is drawn to mutually exclusive courses of action. This leads to “deliberation”. In this process, passion and reason play distinct roles:
Deliberation is expressed subjunctively; which is a speech proper to signify suppositions, with their consequences; as, if this be done, then this will follow; and differs not from the language of reason, save that reasoning is in general words; but deliberation for the most part is of particulars. The language of desire, and aversion, is imperative; as do this, forbear that [.] (1651, Ch. VI)
Reason, then, may be able to tell us how best to satisfy given desires, but the desires themselves lie beyond the appraisal of deliberative reason. Hobbes is subjectivist about the nature of good. Something can be called good for a person, only if it is an object of his desires:
For these words of good, evil, and contemptible [i.e. neither good nor bad], are ever used with relation to the person that useth them: there being nothing simply and absolutely so. (Ch. VI)

Hobbes defines “felicity” as “continual success in obtaining those things which a man from time to time desireth” (Ch. VI). Felicity is an unspecific good, in some respects like the later concept of utility. But Hobbes does not present felicity as a common currency in which different desires can be measured. The pursuit of felicity is a much more open-ended and dynamic process than that of calculating how to maximize some given index of desire-satisfaction. “The felicity of this life”, he says, “consisteth not in the repose of a mind satisfied… Felicity is a continual progress of the desire, from one object to another; the attaining of the former, being still but the way to the latter.” It animates “a perpetual and restless desire of power after power, that ceaseth only in death” (Ch. XI).
Nevertheless, Hobbes argues that there are “precepts” or “general rules” of action which apply to all persons and which can be “found out by reason”. Such rules are possible because certain passions–particularly the desire to preserve one’s own life–are common to all human beings. By reason, we discover that we can best preserve our lives by following certain general rules, such as “to seek peace, and follow it” and “by all means we can, to defend ourselves.” It is rational for us to follow these rules because, by doing so, we satisfy our most urgent desires. In the same way, reason can help us to understand “the art of making and maintaining commonwealths”, thus removing the constant fear of violent death and securing a peace which will allow commodious living. Life in a state of nature is a war of every man against every man, but rational individuals can escape it by creating “a common power to keep all in awe”–a seminal thought for today’s theory of games. Meanwhile an Enlightenment spirit gleams amid Hobbes’ often dour reflections. As he declares in Chapter V, “Reason is the pace; increase of science, the way; and the benefit of mankind, the end”.
Writing a century later, Hume is sceptical about the pretensions which Reason acquired in the seventeenth century as an a priori guide to science and the benefit of mankind. But he is not at all sceptical about the prospects for an empirical science of mind. His Treatise is intended to lay its foundations. Like Hobbes, Hume holds that all action is the product of two elements, passion and reason, with passion (which is “an original existence”) as the motivating force. Crucially, “reason alone can never be a motive to any action of the will”, since “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them” (1740, Book III, Part II, section II). Thus ultimate ends are not subject to rational criticism and the basic account of practical reason is similar in Hobbes and Hume.
Hume takes a more genial view of human nature than Hobbes does, however, crediting us with some kindlier passions, like a natural sympathy for people in distress. But, like Hobbes, he thinks that the basic structure of human desires is constant across time and across societies:
Ambition, avarice, self-love, vanity, friendship, generosity, public spirit: these passions, mixed in various degrees and distributed through society, have been, from the beginning of the world, and still are, the source of all the actions and enterprises, which have ever been observed among mankind… Mankind are so much the same, in all times and places, that history informs us of nothing new or strange in this particular. (1748, section VIII, Part I, 65)
This allows Hume to argue–again, like Hobbes–that there are certain rules of action that can be recommended as means for satisfying desires that are common to all human beings. Thus, having argued that the institution of property is a human contrivance which works to everyone’s benefit, Hume can remark that “nature provides a remedy in the judgement and understanding, for what is irregular and incommodious in the affections” (1740, Book III, Part II, section II). Reason recommends the institution of property to us, in virtue of that institution’s tendency to satisfy universal human desires.
In marked contrast to this instrumental notion of rational action stands Kant’s moral psychology, in which reason can be and often ought to be a motive to the will. This underpins his moral philosophy and his attempt to rationalise the categorical imperative, “Act only on that maxim which you can at the same time will that it should become a universal law”. The mark of a morally right action is that it would be right for anyone so placed, and is thus chosen from an impersonal and impartial point of view. Freed from personal inclinations and biases, the moral, ideally rational agent is autonomous and respects the autonomy of others, who must therefore never be treated as means to one’s own ends. The moral worth of an action arises from its being done from the right motives and regardless of its consequences–even its consequences for the sum of human happiness.
That would be an idle theory, unless reason alone can be a motive to the will. Kant held that moral action is indeed rational action. Fully rational agents who recognize that there is a moral reason for an action thereby make the reason their own and act on it. Thinkers in line of descent from Hume remain sceptical. But Kantian moral psychology offers an impersonal and impartial standpoint which distances agents from their inclinations and makes room for a notion of noninstrumental rationality. This will become attractive presently, when the theory of rational choice starts to run into paradoxes because agents are apparently locked into their preferences and cannot escape foreseeably inferior consequences.
The main line of economic thinking has followed Hume more often than Kant, but with a slant largely due to Bentham. For Bentham, the requirements of rationality are expressed in the “principle of utility”, which “approves or disapproves of every action whatsoever, according to the tendency which it appears to have to augment or diminish the happiness of the party whose interest is in question”. Utility is defined as “that property in any object, whereby it tends to produce benefit, advantage, pleasure, good, or happiness (all this in the present case comes to the same thing) … to the party whose interest is considered”. An action promotes the interest of an individual “when it tends to add to the sum total of his pleasures” (1789, Ch. 1).
Bentham’s account of rational action is in terms of the mental states which result from actions, and not in terms of the desires which, in Hume’s theory, provide the motive power for action. This gives reason a more active role. Bentham makes it rational to do what maximises happiness over the rest of one’s life, even if one’s current desires point in a different direction. To this extent, reason can overrule passion, although only for the sake of achieving a greater overall balance of “benefit, advantage, pleasure, good, or happiness.” It is significant also that pleasure (and hence utility) is seen as a portmanteau quality which all actions possess, that it can be measured in a unitary way, and that quantities of pleasure can be added together. Thus every problem of rational choice becomes an exercise in maximisation. This crucial feature of Bentham’s utilitarianism is not an implication of Hume’s theory of practical reason. If reason is the slave of the passions and if a passion is an original existence, then it cannot be a requirement of reason that the passions are susceptible to any particular mathematical representation. If a person’s desires do not have the properties of commensurability or “consistency” required for reason to identify a best action, that is too bad for reason. In order to arrive at a maximising theory of rational choice, we have to do what Bentham does: add a psychological hypothesis about the nature of desires or pleasures.
Bentham’s utilitarian approach dominated economics until well into this century. The idea that rationality requires the maximisation of utility was given greater mathematical sophistication from the 1870s, with the development of the theory of marginal utility and the application of the mathematics of calculus. But there was continual unease about which psychological concept best captured the generality of human motivation, granted that Bentham’s quintet do not “all come to the same thing”. J.S. Mill favoured “happiness” as the most general term and proposed a distinction between “higher” and “lower” pleasures, thus activating the question of whether all motivating factors are commensurable. In the ensuing complications a common move by the “neoclassical” economists of the late 19th century was to defend their psychological assumptions not as universal truths but as simplifications sufficiently realistic for the purposes of economics. For example, Jevons (1871, p. 93) accepted the incommensurability of higher and lower pleasures, but argued that economics could safely confine its attention to “the lowest rank of feelings”. Similarly, Edgeworth (1884, p. 16) qualified his claim that “The first principle of economics is that every agent is actuated solely by self-interest” with the remark that this principle was especially applicable to commerce, even if less so to other realms of behaviour.
Later economists took a radically different line, consciously removing all dependence on psychological assumptions, while retaining the mathematical structure of utilitarian theory. This approach was pioneered by Pareto, who set out to show that the then current theory of consumer choice could be derived without using any psychological assumptions about utility. Instead, Pareto begins with indifference curves–sets of bundles of consumption goods among which a consumer is indifferent. The notion of indifference, he claims, is “given directly by experience” (1972, p. 391). To say that a person is indifferent between two bundles of goods is to say that he would just be willing to exchange one for the other; nothing needs to be said about desire or pleasure. Given a family of indifference curves, we may assign a numerical index to each curve. The mathematical function which assigns indices to indifference curves is a representation of the person’s preferences. We may speak of these indices as indices of “utility” (Pareto preferred the term “ophelimity”), and we may describe the person’s behaviour as utility-maximizing; but this is merely a convenient language for expressing the idea that the person chooses in accordance with a given set of preferences. Summing up this approach, Pareto claims: “The theory of economic science thus acquires the rigour of rational mechanics; it deduces its results from experience, without bringing in any metaphysical entity” (1972, p. 113).
The programme initiated by Pareto was carried further in Samuelson’s (1947) theory of revealed preference, in which consumer theory is derived from a compact set of axioms about behaviour in situations of choice. The central axiom in Samuelson’s theory is one of consistency between choices. (It says that if, in any choice situation, x is revealed preferred to y–i.e. if x is chosen when y could have been chosen–then there is no choice situation in which y is revealed preferred to x.) Von Neumann and Morgenstern (1947) extended Pareto’s approach to choice in the face of risk, showing that, if probabilities were known, expected utility theory could be derived from axioms about preferences. In a parallel development, Ramsey (1931) showed that subjective probabilities could be derived from axioms about choice among gambles.
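Samuelson's consistency axiom lends itself to a mechanical check. The sketch below tests a set of observed choices for the weak axiom; the encoding of observations as (chosen item, feasible set) pairs, and the function name, are our own illustrative assumptions rather than Samuelson's notation.

```python
# A minimal sketch of Samuelson's consistency axiom (the Weak Axiom of
# Revealed Preference). Each observation records what was chosen and what
# was available -- a hypothetical encoding for illustration only.

def satisfies_warp(observations):
    """Return True if no pair of observations reveals x preferred to y
    and also y preferred to x."""
    revealed = set()  # pairs (x, y): x was chosen when y was available
    for chosen, feasible in observations:
        for alternative in feasible:
            if alternative != chosen:
                revealed.add((chosen, alternative))
    # A violation occurs if both (x, y) and (y, x) are revealed.
    return not any((y, x) in revealed for (x, y) in revealed)

# Consistent choices: x over y in one situation, x over z in another.
print(satisfies_warp([("x", {"x", "y"}), ("x", {"x", "z"})]))  # True
# Inconsistent: x over y in one situation, y over x in another.
print(satisfies_warp([("x", {"x", "y"}), ("y", {"x", "y"})]))  # False
```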
This body of work was synthesized by Savage (1954) into what is still generally regarded as the most satisfactory statement of the foundations of the theory of rational choice. Savage offers his theory as a quasi-logical analysis of choice under uncertainty. He is concerned with “the implications of reasoning for the making of decisions”, and interprets his work as an attempt to “extend” the principles of logic “by principles as acceptable as those of logic itself to bear more fully on uncertainty” (p. 6). Just as logic can be used to detect inconsistencies among our beliefs and to derive new beliefs from existing ones, so the principles of Savage’s theory can be used “to police [one’s] own decisions for consistency, and, where possible, to make complicated decisions depend on simpler ones” (p. 20). Thus, for Savage, rationality is understood in terms of the consistency of decisions with one another.
In Savage’s theory the primitive concepts are states of the world, consequences and preferences. Uncertainty is described by a set of mutually exclusive states of the world, one and only one of which obtains; sets of states of the world are events. A consequence is interpreted as “anything that may happen” to the person who is choosing (p. 13). Choices are made between acts, where an act is a list of conceivable consequences, one for each state of the world. Thus, when gambling on the toss of a coin, one might take “Heads” and “Tails” as the events and the payoffs in each event as the corresponding consequences. Then a typical choice might be between the acts “if heads, gain nothing; if tails, gain nothing” and “if heads, gain £10; if tails, lose £10”. The agent’s preference between acts is defined in terms of the choice made between them (p. 17).
Savage’s account of rational choice is formalised in four main postulates (and three more of a technical nature). The first postulate ensures that a rational agent has a complete and consistent preference ordering over all conceivable acts. The second and third postulates allow us to define the agent’s preference between any given pair of acts, conditional on any given event’s obtaining; such preferences depend only on the consequences of those acts in that event, and are independent of the description of the event itself. The fourth postulate allows us to define a relation of “subjectively more probable than” between events; roughly, one event is subjectively more probable than another if the agent would prefer to bet on the first than on the second.
Savage proves the following remarkable result: a person whose preferences satisfy all the postulates will choose as if maximizing expected utility. For such a person, it will be possible to assign a number between 0 and 1 to each event such that these numbers have the standard properties of probabilities: we may call these numbers the “subjective probabilities” of the corresponding events. It will also be possible to assign a number to every consequence, such that the ranking of consequences by numbers is the same as the person’s preference ranking: we may call these numbers the “utilities” of the corresponding consequences. And it will be possible to choose these numbers so that for every pair of acts, the ranking of the acts in terms of expected utility will correspond with their ranking in the preference ordering.
Notice that Savage has made no explicit assumptions about desires or beliefs. He has required only that a person’s decisions satisfy certain conditions of mutual consistency. This, then, is not a theory of instrumental rationality. A person whose decisions are consistent in Savage’s sense acts as if making complicated utilitarian calculations, using measures of utility and probability to work out the expected utility of each of the acts amongst which he has to choose. But these utility and probability measures do not explain or justify choices. Rather, they derive their meaning from the choices which reveal them. Savage retains a utilitarian mathematics while abandoning not only the utilitarian psychology but also the broadly Humean theory of practical reason on which this mathematics previously relied. The notion of utility has thus been bleached of all psychological content and we are left with an abstract scheme of quasi-logical relations.
2. The theory of games
The theory of rational choice, as presented so far, is focused on a single agent choosing within the parameters of an independent environment. The next step is to consider the making of choices when more than one agent is involved. This brings us to the theory of games. Crucially it begins where our historical sketch ends, with a presumption that analysis can and should proceed without making any psychological assumptions and without any but the most minimal and skeletonic theory of practical reason. This presumption is attractive in its resulting elegance and economy, but, to give warning, it will turn out to be haunted by the ghosts supposedly exorcised. Having identified these spectres, we shall argue that they are there to perform a necessary function and hence that the attempt to dispense with all moral psychology could not have succeeded. The point will be pursued in section 4.
The theory of games considers rational choice in situations where the outcome depends on the choices of more than one agent and the agents are aware of one another’s rationality. It does so by an austere feat of abstraction which we can best illustrate with the help of a very simple game, Heads and Tails. There are two players, A and B. They are not allowed to communicate with one another, except through their actions in the game itself. (Games without communication are termed non-cooperative. To keep things simple, we shall confine our attention to two-player games throughout the paper, calling A “he” and B “she”.) Each player has a coin to place showing either heads or tails, without seeing how the other coin is being placed. If both players choose Heads, each will be paid £10; if both choose Tails, each will be paid £5; otherwise, neither will be paid anything. Each player prefers more money to less.
In the language of game theory, each player has a choice between two pure strategies, Heads and Tails. (A pure strategy is one that has no element of randomness, in contrast to a mixed strategy, where a player randomizes between two or more strategies, using appropriate probabilities.) Thus there are four combinations of pure strategies which might be chosen in Heads and Tails, each of which produces a consequence (£10, £5 or nothing) for each player. If each player is rational in the sense given by Savage’s theory, each of these consequences can be assigned a utility number. Suppose the players are “risk neutral”, so that we can assign the utility numbers 2, 1 and 0 to the consequences £10, £5 and nothing. (No significance attaches to the origin or units of the utility scale, or to inter-personal comparisons. We might equally well have used the numbers 10, 6 and 2 for A, and 100, 90 and 80 for B.)
Then the game may be described as in Figure 1. (This is the normal form of the game. The first entry in each cell is A’s utility, the second entry is B’s.)

              B: Heads   B: Tails
  A: Heads      2, 2       0, 0
  A: Tails      0, 0       1, 1

  Figure 1
In classical game theory, it is assumed that the structure of the game, thus described, is common knowledge between the players. The idea of common knowledge, although part of game theory from the outset, was formalized by Lewis (1969, p. 56). A proposition P is common knowledge if each player has reason to believe P, has reason to believe that the other has reason to believe P, and so on. Game theorists then make a further assumption, which adds a new tier of logical complexity to the analysis: it is common knowledge that the players are perfectly rational. Here “perfectly rational” implies first, that the players are rational decision-makers in the Savage sense (so that each player treats the strategies open to his opponent as events, and attaches subjective probabilities to them) and, second, that they know the truth of every theorem that can be proved about the game. We shall call this set of assumptions common knowledge of rationality, or CKR. Whether CKR is a coherent concept will be considered in section 3.
Although there is fairly general agreement about what game theory assumes, there are two schools of thought about its purpose. For any given game, one might expect the theory to answer the question: “What is it rational for the players to do?” (or “What is it rational for them to believe?”). Some game theorists try to answer these questions, but others merely look for equilibria.
The core notion of equilibrium in game theory is Nash equilibrium. The strategies of two players are in Nash equilibrium if each is a best reply to the other–that is, if each player’s strategy maximises his expected utility, given the other player’s strategy. Heads and Tails has two Nash equilibria in pure strategies: in one of these equilibria, A and B both play Heads; in the other, they both play Tails. (There is also a mixed-strategy equilibrium in which each player plays Heads with probability 1/3 and Tails with probability 2/3, thus equalising the expected utilities of the two strategies.)
Such pairs of strategies are equilibria in the general sense that the players’ beliefs are mutually consistent. As a rational agent, each player maximises his expected utility, given his beliefs about what the other player will do. But in Nash equilibrium specifically, a player’s action is also expected utility maximising in relation to what the other player actually does. Thus the equilibrium condition is stronger than CKR. Whereas CKR implies only that each player holds separately consistent beliefs about the other’s choices and beliefs, the equilibrium condition adds that there must be mutual consistency and hence that each player holds true beliefs about what the other will do. This would follow from CKR alone only if CKR prescribed for each player a unique set of beliefs; and, as far as we can tell, it does not. Hence CKR does not in itself imply that the players’ beliefs must be in Nash equilibrium. (Nevertheless, there are some grounds for thinking that CKR implies some degree of mutual consistency among the players’ beliefs, and thus that it is itself a kind of equilibrium condition. We shall look at this issue in section 3.)
CKR clearly does imply, however, that each player’s choice of strategy is constrained by knowledge that the other is rational and so similarly constrained. That lets them (and us) rule out strategies inconsistent with CKR. (If a strategy is not expected utility maximising in relation to any possible set of beliefs, it can be ruled out as an irrational choice. Having ruled out such a strategy for one player, we can rule out any beliefs on the part of the opponent which attach a positive probability to that strategy’s being played. This may allow us to rule out further strategies as irrational choices for the opponent, and so on.) Any strategies which survive this filtering are said to be rationalisable–a concept due to Bernheim (1984) and Pearce (1984). Provided that CKR is a coherent assumption, it implies that the players will choose strategies that are rationalisable.
Applying these ideas to Heads and Tails, we find that both (pure) strategies are rationalisable for both players. Heads is optimal for A, provided A attaches a subjective probability of at least 1/3 to B’s playing Heads. And Tails is optimal for A, provided A attaches a subjective probability of at least 2/3 to B’s playing Tails. Similarly, Heads and Tails are each optimal for B, given different beliefs that B could hold. Thus the filtering process of rationalisability does not eliminate any strategies. To put this conclusion another way, either choice by either player can be supported by an infinite chain of beliefs, none of which are inconsistent with CKR. (For example: A may justify choosing Tails on the grounds of his belief that B is very likely to play Tails. A may justify this belief on the grounds that he believes that B is very likely to believe that he is very likely to play Tails. And so on.) That both strategies in this game are rationalisable seems innocuous but acts like a time bomb when we turn to the problem of coordination.
The problem of coordination
Like our everyday passage through corridors without collision, the game of Heads and Tails typifies the elementary fact of social life that coordination often benefits everybody. This fact is elementary in all senses. It is basic to the existence of societies, crucial for understanding at least some of their institutions, and obvious to a dunce. Since there are often several ways to coordinate, it invites the far-reaching thought that conventions are devices for fastening on a particular way and thereby making it stable. Where one particular way suits everyone best, it seems no less elementary that everyone will opt for it and that a stable convention to that effect will emerge. If there is a problem of coordination, one is inclined to say, it cannot lie here and we can look to the theory of games to uphold our confidence. In Heads and Tails, there are no conflicts of interest between the players. Each player wants to win as much money as possible, and this can be achieved only if the other player also wins as much money as possible. If both players choose Heads, they reach the outcome that is the best possible for both of them. Presumably rationality, as defined above, requires each player to choose Heads. Or does it? Startlingly, this apparently common-sense conclusion is not an implication of CKR.
As we have already shown, Tails is a rationalisable strategy; and the situation in which each player expects the other to play Tails is a Nash equilibrium. Given CKR, what it is rational for A to do depends on what he expects B to do. If A expects B to choose Tails, then rationality requires A to choose Tails too. So, if we are to rule out A’s choice of Tails as irrational, we must show that it would be irrational for A to expect B to choose Tails. But how can we do this, except by showing that it is irrational for B to choose Tails? Yet, by parity of reasoning, we would have first to show that it was irrational for B to expect A to choose Tails. An infinite regress ensues: to show that Tails is irrational for one player, we first need to show that it is irrational for the other.
The apparent implication is that there is indeed a problem about how rational agents coordinate even in the most elementary of situations, where one of the equilibria is better for everyone. This is so disconcerting that many people find it frankly incredible. But we shall insist on it and shall argue that it is insoluble without retracing the historical trail of our opening section. Meanwhile, so as to keep the present focus firmly on CKR, we wish to postpone queries about how A and B are motivated. We have presented Heads and Tails as a game between self-interested players, but the paradox would remain if the players were altruistic or were moved by their common good. For example, suppose that each player’s objective is to maximize the total amount of money won by the two players together. These preferences, just like the original self-interested ones, could be represented by the utility numbers in Figure 1. (Indeed, Hodgson (1967) and Regan (1980) have presented the Heads and Tails paradox as a challenge to the morality of act utilitarianism.) Thus it does not matter what the source of the utility numbers is, since the paradox can be derived merely from the sparse information given in Figure 1. Deeper thoughts about motivation can therefore wait.
To escape the impasse, it is tempting to invoke some general principle of rationality which would prescribe Heads as the unique rational strategy for each player in Heads and Tails. A weak version of such a principle might be that if one outcome is strictly preferred to every other outcome by both players, then rationality requires each player to choose the strategy that leads to that outcome. Let us call this the Principle of Coordination. (Stronger versions of this principle have been proposed by many theorists, including Luce and Raiffa (1957), Gauthier (1975), and Harsanyi and Selten (1988).) Proponents of this principle generally recognize that it is not an implication of CKR, but suggest that it is a minor addition to–and in the same spirit as–the other rationality assumptions of economics and game theory. Gauthier, for example, claims that his version of the principle “completes” and is “consonant with the spirit of” the conventional account of rationality (which he characterizes as act-consequentialism):
What distinguishes the … act-consequentialist, is, first, that he attends to, and only to, the consequences of particular actions, and second, that he is concerned with the maximisation of utility. The coordination principle satisfies these two requirements. (1975, p. 206)

Yes; but the principle solves the problem only if followed by both players. Since we are dealing with a conception of rationality whose unit of agency is the individual, concerned only with maximising utility, the reason for each to follow the principle would be instrumental and directed to this end. Clearly, it would be rational for each player to follow this principle if he expected the other to follow it; but what grounds this expectation? A player who was concerned only about maximising utility might well wish that the Principle of Coordination were a requirement of rationality, since he could then count on the other player’s following it. The Principle of Coordination “completes” the conventional theory in the sense that it satisfies this wish. But this does not make the principle a natural extension of the theory, while each player can still rationally ask, “Why does rationality require me to follow this principle?” There is no adequate answer while, in order to show one player that it is rational for him to follow the principle, we need to show that it is rational for the other to follow it.
Admittedly, the situation would change if the players were to conceive of themselves collectively as a single unit of agency. Then, if the players ask, “Why does rationality require us to follow this principle?”, there is an obvious answer: “Because you (plural) are better off if you (plural) follow it”. Granted a team as an elementary unit, instrumental rationality at the level of the team requires them to follow the Principle of Coordination, at least in so far as the decision problem is now no longer one of strategic choice. But the idea of supra-individual units of agency implies deep revisions to the conventional theory. As Margaret Gilbert (1989) points out, this idea of agency requires that there be collective desires and collective beliefs, so that these collective agents can behave as individual agents. She sets herself to make sense of agency in collective terms and we are not criticising her when we say that the revisions thus demanded are radical.
Daunted perhaps, game theorists have not given much attention to collective agency. Arguably, however, Schelling’s (1960) analysis of salience is a significant exception. Schelling asks how players manage to coordinate in games with two or more Nash equilibria. In many games, he suggests, one equilibrium sticks out from the others by virtue of some feature whose salience is common knowledge between the players. Even though this feature may have no connection with the payoffs of the game, measured in utility terms, it can serve as a “focal point” on which the players’ expectations can converge. In the Heads and Tails game, for instance, the payoffs for (Heads, Heads) are higher. But, even if (Heads, Heads) scored the same as (Tails, Tails), it could be that Heads is salient in practice, as indeed it commonly is, when the game is tested out empirically. But how and why, we may ask, do rational players make use of such a signal?
Here is an example of how Schelling answers this question. He is considering a game for two players in which there are two equilibria (which we may call X and Y). These equilibrium outcomes are identical in terms of payoffs, and are preferred by both players to the game’s other outcomes. There are certain signals which make Y salient. Schelling suggests that a “perceptive” player might reason as follows:
Comparing just [X and Y] my partner and I have no way of concerting our choices. There must be some way, however, so let’s look for it. The only other place to look is [at the signals]. Do they give us the hint we need …? Yes, they do; they seem to “point toward” [Y]. They provide either a reason or an excuse for believing or pretending that [Y] is better than [X]; since we need an excuse, if not a reason, for pretending, if not believing, that one of [X and Y] is better, or more distinguished, or more prominent, or more eligible, than the other, and since I find no competing rule or instruction to follow or clue to pursue, we may as well agree to use this rule to reach a meeting of minds. (1960, pp. 297-8)
Since the players are unable to communicate with one another, they do not literally “agree”, and this whole line of reasoning seems to presuppose some kind of collective agency. Schelling’s player is not reasoning strategically. (If the other player will do this, then I ought to do that.) Instead, he is trying to find a line of reasoning which he can share with his “partner” (not “opponent”). He is trying to reason as if the two of them were reasoning together. Recognizing that the principle of heading for the salient equilibrium is the best principle for them to follow, he deduces that they should follow it.
A crucial difficulty with the idea of collective agency is to explain how the players of a game like Heads and Tails might come to conceive of themselves as a single unit. Is it a requirement of rationality that they conceive of themselves in this way? If we are to stay at all close to the account of rationality that derives from Hobbes, Hume, Bentham, Pareto and Savage, we must answer “No”. On this account, choices are rational in relation to the desires or preferences of the agent doing the choosing: a choice can be rational only for a particular agent. Thus a theory of rationality cannot tell us what kinds of agents there should be. All we can say, then, is that the Principle of Coordination is a principle of rationality for players who conceive of themselves as a team, but not for players who do not.
A radically different line is taken by Susan Hurley (1989). She argues that an adequate theory of rationality should address the question of “what the unit of agency, among those possible, should be” (p. 145). If a theory of rationality is to do this, it has to find a standpoint which is not that of any particular agent. Hurley seems to suggest that each of us, as an autonomous person, can find such a standpoint in our “substantive goals and ethical views” (p. 147). There are Kantian overtones here. With a Kantian conception of rationality, the move from “This is the rule which, if generally followed, would give the right results” to “Therefore I should follow it” would be straightforward. But Kantian reasoning is certainly not consonant with the spirit of conventional rational choice theory (although we shall return to it when we reach Motivation). Meanwhile, it seems to us, the problem of coordination has to be taken seriously.
Commitments and dispositions
Another elementary fact of social life is that people keep promises and do one another favours. This too is game-theoretically mysterious. In Chapter XIII of Leviathan, Hobbes gives perhaps the earliest and most resonant statement of the Prisoner’s Dilemma. (We offer Figure 2 as a bare reminder of this by-now-familiar game, with its apparent implication that Defect is the only rational strategy for each player.) Hobbes realizes that interaction among agents each striving for felicity can easily produce misery for all. All do better if all live at peace rather than at war, but in a state of nature each will rationally seek to destroy or subdue the others, for fear that otherwise they will destroy or subdue him. They can escape a situation in which the life of man is “solitary, poor, nasty, brutish, and short” only by combining to create “a common power to keep them all in awe”. This is the nub of Hobbes’s theory of the social contract and it anticipates a continuing crux for the theory of non-cooperative games.
We shall follow Hobbes in examining “covenants of mutual trust”, in which one party performs some service for the other, in return for a promise that the other will perform some service later. (Hobbes (1651, Ch. XIV) gives the example of a prisoner of war who is released on the promise that he will pay a ransom.) This kind of situation is grist to the mill of game theory. Without too much simplification, it can be represented by the Promising Game shown in Figure 3. This is a game in what game theorists term extensive form–that is, in which the sequence of the players’ moves is shown as a tree diagram. The status quo is represented by the utilities (0,0). Player A (the captive in Hobbes’s example) moves first: he decides whether or not to promise to perform, conditional on B’s performing first. (A “promise” is to be understood simply as the act of speaking the words, “I promise …”; no costs are involved.) If A decides not to promise, the game ends. If he promises, it is the turn of B (the captor) to move. She has to decide whether or not to perform (i.e. release A). If she decides not to perform, the game ends with the status quo payoffs. If B performs, she incurs a loss of one unit of utility and A benefits by three units. It is then A’s turn to decide whether or not to perform (i.e. pay the ransom). If he decides not to pay, the game ends with A having gained three units of utility and B having lost one. If he decides to perform, he then loses one unit while B gains three, so that the game ends with a net gain of two units for each person.
Given CKR, the analysis of the game is straightforward. The standard method is to start at the end of the game and work back towards the beginning. Suppose that the third “node” of the game (i.e. the point at which A has to make his second move) is reached. Then A, being a rational utility-maximiser, will choose Not Perform (since this gives him a payoff of 3, while Perform gives him only 2). Now suppose that the second node is reached and that B chooses Perform. By our previous conclusion, A will then choose Not Perform at the third node, giving B a payoff of -1. But this contradicts CKR, since if the second node is reached, B can be sure of 0 by choosing Not Perform. Thus the supposition that B chooses Perform is false. This tells us that, whether A promises or not, the outcome will be (0, 0): covenants of mutual trust are not possible for rational players.
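The backward-induction reasoning can be checked with a few lines of code. The following sketch is our own; the tree simply transcribes the payoffs described above, and the solver works from the leaves of the game back to the root, letting each mover pick the branch that maximises his or her own utility.

```python
# A minimal backward-induction solver for the Promising Game of Figure 3.
# Leaves are payoff pairs (utility to A, utility to B); decision nodes are
# (player, {move: subtree}) pairs.
promising_game = ("A", {
    "Not Promise": (0, 0),
    "Promise": ("B", {
        "Not Perform": (0, 0),
        "Perform": ("A", {
            "Not Perform": (3, -1),
            "Perform": (2, 2),
        }),
    }),
})

def backward_induction(node):
    """Return the outcome of rational (utility-maximising) play from this node."""
    if isinstance(node[0], str):  # a decision node, not a payoff pair
        player, moves = node
        index = 0 if player == "A" else 1
        # The mover picks the branch whose induced outcome maximises his own utility.
        return max((backward_induction(sub) for sub in moves.values()),
                   key=lambda outcome: outcome[index])
    return node  # a leaf: the game has ended

print(backward_induction(promising_game))  # (0, 0): no covenant is concluded
```

The solver returns (0, 0) from the root, confirming that, under CKR, no covenant of mutual trust is concluded.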
This analysis is uncontroversial among game theorists. Yet it threatens to undermine social life among rational agents by destroying reciprocity and hence to undermine the claim of game theory to idealise the basis of everyday social intercourse also. Remedies are therefore sought, which might restore reciprocity without breach of instrumental rationality. These usually take the form of sanctions or commitments. Sanctions add to the cost of not performing and thus affect payoffs. Commitments either work similarly, for instance by loading a defector with the pangs of a bad conscience, or make it physically impossible to defect. (An example of the latter, made focal by Elster (1979), is Ulysses’ decision to have himself tied to the mast of his ship so that he could hear the song of the Sirens and survive.) We shall contend that such devices serve as sticking plaster where a deeper diagnosis and remedy are needed.
Hobbes’s own analysis, however, is more complex. In a state of nature, he argues, covenants of mutual trust will rarely be honoured: since “the bonds of words are too weak to bridle men’s … passions”, “covenants, without the sword, are but words, and of no strength to secure a man at all” (1651, Chs XIV, XVII). The weakness of such covenants in the state of nature is that B (the party who has to perform first) lacks an adequate assurance that A will perform afterwards. B is entitled to declare the covenant void “upon any reasonable suspicion” that A will not perform. The state of nature will provide many grounds for such reasonable suspicions, and so few covenants will survive. But what if B does perform? Modern theories of rational choice say that it would still be irrational for A to perform; indeed, it is because it would be irrational for A to perform second that B will not perform first. Hobbes, however, argues the contrary in his well-known reply to the “fool” who “questioneth, whether injustice … may not sometimes stand with that reason, which dictateth to every man his own good”. Hobbes insists that “it is not against reason” to honour a covenant if the other party has already performed. He reminds the fool that in a state of nature, everyone needs the help of confederates for self-protection:
He therefore that breaketh his covenant, and consequently declareth that he thinks he may with reason do so, cannot be received into any society, that unite themselves for peace and defence, but by the error of them that receive him; nor, when he is received, be retained in it, without seeing the danger of their error; which errors a man cannot reasonably reckon upon as the means of his security[.] (1651, Ch. XV)
This line of thought leads in two directions. One is to treat the honouring of commitments as an investment in a reputation for reliability. This seems straightforward enough, but leads to surprising problems in a world of fully rational agents: we shall consider these problems in section 3. The other is to make it rational for an agent to acquire a disposition to keep promises. Thus Gauthier (1986) suggests that one can adopt a state of mind in which “having promised to do x” becomes an independent reason for doing x, when dealing with agents who have adopted the same disposition. In the Promising Game, for instance, two players who recognise one another as “constrained maximisers” will arrive at an outcome of (2, 2) by both performing. Since this leaves both better off than in a theory where rationality is defined by direct reference to payoffs, we are being offered a philosophically interesting manoeuvre (reminiscent of rule- or motive-utilitarianism). Although Gauthier’s analysis is too subtle to discuss here, it issues a powerful challenge to Savage’s fusion of preference with choice.(3)
This kind of approach requires there to be a conceptual distance between utility and choice. If A is rational in Gauthier’s sense, he chooses Perform rather than Not Perform at the third node of the game, even though Not Perform leads with certainty to a utility of 3, while Perform leads with certainty to a utility of 2. If these utility indices were given the standard interpretation–that is, as representations of revealed preferences–then the possibility considered by Gauthier would be incoherent. Binmore (1993) is probably speaking for most game theorists when he uses this argument against Gauthier’s analysis of a variant of the Promising Game. He says that game theorists “see no merit in such an analysis. For them, it is tautological that [A will not perform] if given the opportunity”. It is tautological because the utility indices attached to the outcomes mean that if A has to choose between performing and not performing, he will not perform. Thus Gauthier’s supposed analysis of the Promising Game is in fact “an analysis of another game.”
As we have shown, the standard interpretation of “utility” depends on the standard theory of rational choice, as embodied in Savage’s axioms. Binmore is right to remind us that we cannot keep this interpretation of the utility numbers in a game while questioning the received theory of rational choice. But Binmore seems to be suggesting something more: that a disposition to keep one’s promises can be described (in “another game”) in the language of utility indices, and thus understood within the standard theory. Presumably the idea is that if A is genuinely disposed to keep his promises, he must prefer to keep them, and so the utilities that A derives from Perform and Not Perform should be revised to take account of this preference. It is true that rational players of the Promising Game are unable to make promises, but this does not mean that rational individuals are unable to make promises in the real cases that the Promising Game appears to model. We do not need to question the theory of rational choice in order to make sense of promises; we merely need to find a different game to model the situations in which promises are made. The line of argument used by Binmore is one that theorists of rational choice often use to escape apparent paradoxes. We shall say more about it in section 4, when we discuss motivation.
Gauthier’s (1986, p.21) way round this difficulty is to retain Savage’s theory for decisions made in “games against nature”, while remaining agnostic about its applicability to games between rational agents. Then we can attach utility indices to the consequences of games by looking at the players’ preferences over the same consequences when they occur in games against nature. This interpretation of utility is consistent with Savage’s, since Savage presents his theory only for games against nature. However, Gauthier’s approach leads to a new set of problems. By severing the link between utility and choice within games, we make space for new kinds of motivation, such as constrained maximisation; but we can no longer appeal to Savage’s axioms to support the idea that game players attach subjective probabilities to each other’s decisions. If the theory of probability does not apply to games, much of game theory–including some of the components that Gauthier needs for his own theory–is nullified. But the conventional approach may not fare any better in this respect. As we have said, Savage’s theory of subjective probability is formulated for parametric environments. It is not clear that the theory can be extended so that it applies to strategic, mutually self-referenced decisions. We shall say more about this issue, too, in section 4.
The aim of injecting a conceptual gap between utility and choice is to credit a rational agent with powers of strategic reflection. This cannot help, unless the results of such deliberation can be conveyed to other players. It cannot be done solely by making particular choices, since the innovation renders the meaning of choices ambiguous. But a ready suggestion is that language (or speech-acts) can convey whatever is relevant. Unfortunately, however, there is a deep-seated belief among game theorists that words like “I promise” are cheap talk and convey nothing. Recall Hobbes’s remark that “covenants, without the sword, are but words, and of no strength to secure a man at all”. Suppose that the mere saying of the words “I promise to perform later, if you perform now” were a reliable indicator that the speaker would do as he said. Then the utility-maximizing strategy for A in the Promising Game would be to say the words, thus inducing B to perform, but then not to perform in return. But if people in A’s position generally acted in this way, the speaking of the words would not be a reliable indicator of future performance.
When there is common knowledge of rationality, the effective meaning of a message has to be found by asking why it was in the sender’s interest to send it. Messages that are costless to send are unlikely to have much information content. In contrast, significant messages may be sent by performing actions that incur costs. Figure 4 provides an example. The 2 x 2 matrix represents the classic game of Battle of the Sexes, in which A and B are spouses or partners who have to decide independently whether to go to the theatre or to a football match. If both choose the same entertainment, they can enjoy each other’s company, but A particularly enjoys the theatre while B prefers football. If they go to different entertainments, both will be miserable. In the basic game, as represented by the matrix alone, both strategies are rationalisable for both players, and there are two pure-strategy Nash equilibria–one in which both go to the theatre, and one in which both go to the football match. This leaves the problem of coordination unresolved.
An intriguing extra twist is provided, however, by giving A an “outside option”: he can choose not to play the game at all, in which case A and B each gain a payoff of 2. B does not have to choose her strategy until she knows that A has chosen to play the game. A widely accepted principle of “forward induction”, due to Pearce (1984) and Kohlberg and Mertens (1986), implies that A will choose to play the game and that the pair will coordinate on the theatre. Why? Suppose A chooses to play the game. He has turned down the possibility of a payoff of 2. Since he is rational, his subjective probabilities must be such that his expected utility from playing the game is at least 2. But were he to choose Football, he could gain no more than 1. Thus his decision to play the game can be accounted for only by supposing that he intends to play Theatre. So B does best to play Theatre. A can foresee this, and so does best to play the game.
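The arithmetic of this forward-induction argument can be set out explicitly. In the sketch below the Battle of the Sexes payoffs (4 and 1 for the two matching outcomes, 0 for mismatches) are our own illustrative assumptions, since Figure 4 is not reproduced here; only their ordering relative to the outside option of 2 matters.

```python
# Illustrative payoffs (assumptions, consistent with the text): A prefers the
# theatre, B prefers football, mismatches are worst for both, and A's outside
# option is worth 2 to him.
A_payoff = {("Theatre", "Theatre"): 4, ("Football", "Football"): 1,
            ("Theatre", "Football"): 0, ("Football", "Theatre"): 0}
B_payoff = {("Theatre", "Theatre"): 1, ("Football", "Football"): 4,
            ("Theatre", "Football"): 0, ("Football", "Theatre"): 0}
OUTSIDE_OPTION = 2

# Step 1: whatever B does, Football cannot repay A's decision to play.
max_from_football = max(A_payoff[("Football", b)] for b in ("Theatre", "Football"))
assert max_from_football < OUTSIDE_OPTION

# Step 2: so a rational A who plays must intend Theatre; B's best reply is Theatre.
b_best_reply = max(("Theatre", "Football"), key=lambda b: B_payoff[("Theatre", b)])
assert b_best_reply == "Theatre"

# Step 3: anticipating this, A does best to play the game and choose Theatre.
assert A_payoff[("Theatre", "Theatre")] > OUTSIDE_OPTION
print("Forward induction selects (Theatre, Theatre)")
```

Each assertion corresponds to one step of the verbal argument: A's refusal of the outside option is informative only because Football could never have justified it.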
The game of Burning the Bank-note, due to Van Damme (1989) and shown in Figure 5, is a teasing variation on this theme. A and B are to play Battle of the Sexes. Instead of having an outside option, A (but not B) has the option of burning a bank-note. B sees whether A burns or not, and so burning the bank-note is a kind of message. In contrast to cheap talk, we might call it an expensive gesture. The effect of burning the bank-note is to reduce all A’s payoffs by two units, while leaving B’s unaffected. Van Damme argues that, with common knowledge of rationality, the bank-note will not be burned and A and B will meet at the theatre. His argument may be reconstructed as follows. If A plays Not Burn, he is sure of a payoff of at least 0. So if he plays Burn, his expected utility must be at least 0. This can be the case only if he intends to follow Burn by Theatre. B can work this out too, so if A plays Burn, B will play Theatre. Thus A can be sure of 2 by playing Burn, followed by Theatre. But then Burn is equivalent to Not Play in the previous game: if A plays Not Burn, his expected utility must be at least 2, and so he must intend to play Theatre. B can work this out too, and so if A plays Not Burn, B will play Theatre. Thus A can be sure of 4, his highest possible payoff, by playing Not Burn, followed by Theatre: this, then, is what he will do. We leave it to readers to decide whether this argument is valid, partly as a lively example of the infuriating character of many puzzles about strategic rationality and partly because the underlying question of whether talk has to be cheap cannot be quickly disposed of.
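One way of making Van Damme’s reconstruction precise, though not the only way, is as iterated elimination of dominated strategies in the strategic form of the game. The sketch below is our own formalisation; the Battle of the Sexes payoffs (4 and 1 for matches, 0 for mismatches) are illustrative assumptions, since Figure 5 is not reproduced here. Readers who distrust the verbal argument may at least verify that this dominance-theoretic version of it terminates at Not Burn followed by Theatre.

```python
from itertools import product

# Assumed Battle of the Sexes payoffs: for A, (T,T)=4, (F,F)=1, mismatch=0;
# for B, (T,T)=1, (F,F)=4, mismatch=0. Burning costs A two units, B nothing.
bos_A = {("T", "T"): 4, ("F", "F"): 1, ("T", "F"): 0, ("F", "T"): 0}
bos_B = {("T", "T"): 1, ("F", "F"): 4, ("T", "F"): 0, ("F", "T"): 0}

# A's strategies: (burn?, own action). B's: (reply to Burn, reply to Not Burn).
A_strats = list(product([True, False], ["T", "F"]))
B_strats = list(product(["T", "F"], ["T", "F"]))

def payoffs(sa, sb):
    burn, a_move = sa
    b_move = sb[0] if burn else sb[1]
    ua = bos_A[(a_move, b_move)] - (2 if burn else 0)
    ub = bos_B[(a_move, b_move)]
    return ua, ub

def dominated(strat, own, other, index):
    """Is `strat` weakly dominated by some other remaining strategy?"""
    def u(s, t):
        return payoffs(s, t)[0] if index == 0 else payoffs(t, s)[1]
    for alt in own:
        if alt == strat:
            continue
        if all(u(alt, t) >= u(strat, t) for t in other) and \
           any(u(alt, t) > u(strat, t) for t in other):
            return True
    return False

changed = True
while changed:
    changed = False
    for s in list(A_strats):
        if dominated(s, A_strats, B_strats, 0):
            A_strats.remove(s); changed = True
    for s in list(B_strats):
        if dominated(s, B_strats, A_strats, 1):
            B_strats.remove(s); changed = True

print(A_strats, B_strats)  # [(False, 'T')] [('T', 'T')]: no burning, both to the theatre
```

With these assumptions the elimination halts at a unique profile, matching Van Damme’s conclusion; whether iterated weak dominance is the right rendering of his argument is itself one of the questions we leave open.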
3. Common knowledge of rationality
The problem of coordination is disconcerting because the theory of choice seemingly cannot explain why each rational player should choose Heads in the game of Heads and Tails. This is disappointing and deeply subversive of the attempt to treat reason as the pace which leads to the benefit of mankind (to echo Hobbes). But at least the theory does not direct the players to do what is plainly foolish. We turn next to a case where it does.
Hobbes saw clearly that rational individuals are prone to make collective fools of themselves, and his reply to the fool attempts to introduce a vital corrective. Gauthier’s suggestion that a rational agent will adopt a disposition to keep promises is directly inspired by Hobbes. So, indirectly, is game theorists’ interest in the making of commitments and the acquiring of reputations. The connecting thought is that a mechanical application of rational choice theory can be self-defeating and that rational agents, foreseeing this, will set themselves to prevent it. Thus, if agents who trust one another all do better than those who do not, then prudence advises us to acquire a name for trustworthiness. It might seem that such advice fits easily into the framework of game theory. But here, too, paradox lurks.
When a game is to be played several times by the same players, early choices of strategy can plausibly be thought of as signals for later choices. Thus promises kept in the opening rounds of the Promising Game offer the prospect of future trust. If the Prisoner’s Dilemma is to be played repeatedly, it might pay to signal a strategy of playing tit-for-tat, which answers cooperation with cooperation and defection with defection. But, alas, if the series is of known, finite length, such thoughts are inconsistent with CKR. The final game in the series is, in effect, a one-shot game and what is rational there is thereby rational throughout. If it is irrational to keep a promise or to cooperate in the final game, it is irrational to do so in the game before the final one; and so on. This conclusion has often been seen as paradoxical. For example, Luce and Raiffa (1957, pp. 97-102) endorse it as formally correct, but remark that intelligent players, even after careful consideration of the theoretical argument, would probably not defect in every game in a long sequence of Prisoner’s Dilemmas. Similarly, Selten (1978, p. 138) accepts the conclusion as “game-theoretically correct”, while denying that it is good practical advice.
The core of this paradox can be isolated by looking at a simple Centipede game (named after the shape of its extensive-form diagram). Suppose that a pile of gold doubloons is placed before A and B, who are told that each in turn may take (and keep) three doubloons or two. Each time two doubloons are taken, the turn passes; as soon as three are taken, or if only one is left, the game stops and the remaining doubloons vanish. The situation with nine doubloons at the start is shown in Figure 7, where the downstrokes mark a possible choice of three doubloons at that turn and the numbers in brackets show the resulting total from the game for A and then B. Thus, if A opens by taking three, the result is (3, 0). (We shall assume the players to be risk-neutral, so that “doubloons” and “utility” are interchangeable.) Moves which take three doubloons (and thus stop the game) are labelled “S”; those which take two (and thus allow the game to continue) are labelled “C”.
Given CKR, we can prove that A will play S1 (i.e. take three doubloons on his first turn, and thus stop the game). Informally, the reasoning runs like this: if the third node of the game is reached, A will play S3; so if the second node is reached, it pays B to play S2; therefore it pays A to play S1. (We shall present a full proof in a moment.) It is easy to see that the same conclusion would follow, no matter how many doubloons are on the table at the start. This is puzzling. Imagine the Centipede game with 100 doubloons. Is it really irrational for A to take two doubloons, in the expectation that this will start a process in which both gain many more? (Notice that, even if B cooperates only once before calling a halt, A will be better off for having started the process.)
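A general-purpose backward-induction solver makes it easy to confirm that the size of the pile is irrelevant. The code below is our own sketch of the game just described, with risk-neutral players who simply maximise their haul of doubloons.

```python
def solve(pile, a_total, b_total, mover):
    """Backward induction for the Centipede game: each player in turn takes
    three doubloons (stopping the game) or two (passing the turn); if at most
    one doubloon would remain, the game stops and the remainder vanishes.
    Returns the pair of totals (A's, B's) resulting from rational play."""
    outcomes = []
    # Take three: the game stops at once.
    if pile >= 3:
        outcomes.append((a_total + 3, b_total) if mover == "A" else (a_total, b_total + 3))
    # Take two: the game continues unless at most one doubloon remains.
    if pile >= 2:
        a2 = a_total + 2 if mover == "A" else a_total
        b2 = b_total + 2 if mover == "B" else b_total
        if pile - 2 <= 1:
            outcomes.append((a2, b2))
        else:
            outcomes.append(solve(pile - 2, a2, b2, "B" if mover == "A" else "A"))
    index = 0 if mover == "A" else 1
    return max(outcomes, key=lambda o: o[index])

print(solve(9, 0, 0, "A"))   # (3, 0): A stops at once
print(solve(99, 0, 0, "A"))  # the same conclusion with a much larger pile
```

However large the pile, the solver has A take three doubloons immediately.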
There is clearly a paradox here; but there are different views about the nature of the paradox.(4) We start out with a model of two rational persons, who have common knowledge of their rationality, being confronted with the Centipede Game. We prove that the game will stop at the first move, no matter how many potential moves there may be. On one interpretation, the original model is entirely coherent, and what we have discovered is a property of the behaviour of perfectly rational agents. Then this result is paradoxical in much the same way as is the result that rational players of Heads and Tails have no reason to choose Heads: the behaviour of agents in the model is so peculiar that we may doubt whether they are reasonable or “really” rational (that is, in a sense that does not presuppose the standard theory of rational choice). On another interpretation, however, the paradox of the Centipede Game casts doubt on the internal coherence of CKR. We shall consider this possibility first.
We need to begin with a formal proof that CKR implies that A chooses S1, thus killing the game at the start. Throughout the proof, which goes by reductio ad absurdum, we assume CKR to be true. Thus, any statement of the form “suppose …” should be read as “suppose that the players are rational, and that this is common knowledge, and further suppose…”
To begin the proof, suppose that the third node of the game is reached. A, being rational, will choose S3 (since he prefers (5, 2) to (4, 5)). This gives us the proposition (a): either the third node is reached and A chooses S3 or the third node is not reached.
Now suppose that the second node is reached and that B chooses C2. In this case, the game proceeds to the third node and B ends with a utility of 2 (see above). By CKR, B knows this at the second node. But B, being rational, will not choose C2, when he could end with a utility of 3 by choosing S2. This contradicts the original supposition (that the second node is reached and that B chooses C2). By reductio ad absurdum, we arrive at the proposition (b): either the second node is reached and B chooses S2 or the second node is not reached.
Finally, suppose that A chooses C1 at the first node. The game will proceed to the second node, at which B will choose S2 (see above); A will end the game with a utility of 2. By CKR, A knows this at the first node. But, being rational, A will not choose C1, when he could end with a utility of 3 by choosing S1. Hence: A will choose S1 at the first node.
Notice that propositions (a) and (b) follow trivially from the conclusion of the proof (that A will choose S1 at the first node). These propositions tell us nothing further about what would happen, were the second and third nodes to be reached. All we know from the proof is that these nodes will not be reached.
This may seem surprising. In particular, it is tempting to think that (b) tells us that, were A to play C1, B would play S2. If (b) did tell us that, then it would also tell us why A, as a rational person, ought to play S1. But what (b) in fact tells us is that if the second node of the game is reached, and if there is common knowledge of rationality, then B plays S2. (Notice that, in order to show that B plays S2, we use the assumption that B knows A to be rational: this is what rules out the possibility that B plays C2 in the expectation that A will then play C3.) But we know from the conclusion of the proof that, if the second node were to be reached, there could not be common knowledge of rationality.
We have proved that, if CKR is true, S1 will be chosen. Is this equivalent to a recommendation of S1 as the uniquely rational choice for A? As a rational person, A will choose whichever act, among those open to him, maximises his expected utility. If we are to recommend an act to him as uniquely rational, we need to show him that this act leads to a greater expected utility than any other, given his preferences (as described by the game’s payoffs) and his (internally consistent) subjective beliefs. But can we produce such a recommendation?
By playing S1, A gets a utility of 3. So we need to show him that, were he to play C1, his expected utility would be less than 3. In order to do this, we must consider what B would do, were A to play C1. Recall that nothing in the proof we have been through tells us anything about what would happen in this contingency. So what would B do? If she is to make a rational choice, she must assign a conditional probability–let us denote it by π–to A’s playing S3 in the event that B chooses C2. If π > 2/3, it is uniquely rational for B to choose S2; but if π < 2/3, it is uniquely rational for her to choose C2. If, at the third node, A acts as a rational utility-maximiser, he will of course choose S3, and so if B were certain that A would act rationally at the third node, she would assign the probability π = 1 and thus choose S2. But we cannot appeal to CKR to justify B’s certainty on this score. Remember that we are considering the counterfactual case in which A chooses C1, and that we have already proved that this case is inconsistent with CKR. Having observed C1, how much confidence should B place in A’s propensity to act rationally? This question is in part empirical: B needs to make judgements about the kind of behaviour to be expected of people who do not always act rationally. It cannot be answered merely by appeal to a priori propositions about rationality and common knowledge.
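The threshold of 2/3 comes from a simple expected-utility comparison at the second node: S2 gives B 3 for certain, while C2 gives her 2 if A then stops (the (5, 2) outcome) and 5 if he continues (the (4, 5) outcome). A minimal sketch:

```python
# B's choice at the second node, as a function of her conditional probability
# pi that A would play S3 after C2. Payoffs to B, from Figure 7: S2 gives 3
# for certain; C2 gives 2 if A then plays S3, and 5 if A plays C3.
def best_choice_for_B(pi):
    eu_S2 = 3
    eu_C2 = pi * 2 + (1 - pi) * 5   # = 5 - 3*pi
    if eu_S2 > eu_C2:
        return "S2"
    if eu_S2 < eu_C2:
        return "C2"
    return "indifferent"            # exactly at pi = 2/3

print(best_choice_for_B(0.9))  # S2: B is nearly sure A would stop the game
print(best_choice_for_B(0.5))  # C2: B expects A to continue
```

B’s choice thus turns entirely on her estimate of π, which is just the empirical judgement that, we have argued, the a priori theory cannot supply.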
The puzzle is that, apparently, this question can be answered a priori. CKR implies that A will choose S1. Thus, CKR implies that A believes that, were he to play C1, B would judge π to be greater than 2/3. How can an a priori theory of rationality impose constraints on people’s empirical judgements about non-rational behaviour?
This paradox arises because CKR has been treated as an assumption that can apply universally. That CKR has this status is a very basic presupposition in much of game theory; but perhaps it is mistaken. Consider what it means to recommend a particular action, on grounds of rationality, to a person who has to choose one action from a set of options. To make such a recommendation, we have to consider what would happen, were each of these actions to be taken, and find the expected utility of each set of consequences. By comparing these expected utilities, we determine which action should be chosen. This procedure is well-defined if the consequences of choosing an action are independent of whether or not the choice of that action is rational. This property of independence is clearly satisfied for games against nature. (Nature, we might say, takes no notice of our rationality or irrationality.) It is also satisfied for games in which each player makes only one decision, and in which these decisions are made simultaneously. (Player A’s move may tell player B whether A is rational or irrational; but by the time B has learned this information, it is too late for her to make any use of it.) But it is not necessarily satisfied for games in which one player makes a move after observing how the other player has moved.
The Centipede Game provides an example of how this independence property can break down. At the first node of this game, A has to choose between S1 and C1. If he chooses S1, he gets a utility of 3: this, at least, can be said without asking whether S1 is a rational move. But what would happen, were he to choose C1, depends on how B responds; how B responds depends on the predictions she makes about A’s subsequent decisions; and what predictions she makes may depend on whether C1 is a rational move. The paradoxical features of the Centipede Game arise from the fact that if C1 is deemed to be a rational move, its consequences are inferior to those of S1, and so playing it is not justified; but if C1 is deemed to be an irrational move, its consequences may be better than those of S1, and so playing it might be justified.
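The backward-induction reasoning that underlies the CKR proof can be sketched as follows. Only A's payoff of 3 from S1 and the (4, 5) payoff after C3 are given in the text; the remaining payoffs are illustrative assumptions, chosen so that stopping is rational at each node:

```python
# Backward induction in a three-node Centipede, as the CKR proof runs.
# Only A's payoff of 3 at S1 and the (4, 5) payoff after C3 come from
# the text; the other numbers are illustrative assumptions.

nodes = [("A", (3, 1)),   # node 1: A stops with S1 or continues with C1
         ("B", (2, 3)),   # node 2: B stops with S2 or continues with C2
         ("A", (5, 2))]   # node 3: A stops with S3 ...
terminal = (4, 5)         # ... or continues with C3, ending the game

def backward_induction(nodes, terminal):
    value = terminal                  # payoffs if play continues past the last node
    plan = []
    for mover, stop in reversed(nodes):
        i = 0 if mover == "A" else 1
        if stop[i] >= value[i]:       # stopping is at least as good: stop
            value = stop
            plan.append((mover, "stop"))
        else:
            plan.append((mover, "continue"))
    return value, plan[::-1]

value, plan = backward_induction(nodes, terminal)
print(plan)    # every mover stops; A kills the game at node 1
print(value)   # A gets 3, as in the text
```

The induction works from the last node back: A would stop at node 3, so B stops at node 2, so A stops at node 1, reproducing the conclusion that under CKR the game ends immediately.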
Thus, if the independence property does not hold, the concepts of justification and rationality take on a circular character. We can ask whether the assertion that particular actions are “rational” is internally consistent, without necessarily being able to say whether those actions really are rational in any absolute sense. In Savage’s theory, of course, rationality at the level of the individual agent means no more than a certain kind of internal consistency. But CKR seems to depend on a concept of internal consistency which applies across agents: the consequences of one player’s actions, and hence their rationality or irrationality, may depend on whether particular actions are rational or irrational for the other player. Given the individualistic presuppositions of rational choice theory, it is hard to see how a criterion of inter-agent consistency can have recommendatory force. (To whom could the recommendation “Be consistent” be addressed?) This suggests the disturbing thought that CKR is no more than an equilibrium condition. “If there were common knowledge of rationality”, we may say, “then these particular actions would be taken by these particular agents”; but there need be no implication that those actions are justified or recommended.
At any rate, this may be as much as can be said without raising far-reaching questions about the motivation of a game-theoretically rational agent. Meanwhile, a pervasive problem emerges about the relation between the idealised world postulated by game theory after bleaching all content out of the notion of utility and the world which it seeks to help us understand. To sharpen the problem, ask what sense attaches to the idea of probability employed in discussing the value of π in the preceding paragraphs. It is central to CKR that each player chooses rationally “in the Savage sense”, which includes the idea that each player treats his opponent’s decisions as events and assigns probabilities to them. But Savage’s axioms are not designed to apply to games against rational opponents; for Savage, “events” are states of nature. As we shall show in subsection 4, it is far from clear that Savage’s axioms can provide an interpretation of subjective probabilities within games. Until a satisfactory interpretation of “strategic” probability is found, some scepticism about the coherence of CKR seems to be in order.
It is important to stress that CKR is not an optional refinement, to be simply discarded if its implications are peculiar. Modern economic theory is too deeply committed to bleaching the content out of “utility”, as shown in our historical section, for retreat to be easy. Binmore is entirely correct to say that the theorems of game theory are tautologous, given the standard axioms. These axioms capture the basis of an “economic” approach to the analysis of behaviour very neatly. Paradoxical theorems thus call the whole approach into question. All the same, it may still seem that CKR introduces quirks due solely to its being a limiting case of an analysis which becomes sensible enough, given a small measure of uncertainty. It is easy to believe that uncertainty is what makes the human world go round, whereas complete transparency would bring it to a standstill. We shall therefore next explain exactly why we take the quirks more seriously than that.
Several kinds of uncertainty are filtered out by CKR. If A and B were, in Gauthier’s language, translucent, rather than transparent, then each might be unsure of the preferences, the information or the computational skill of the other. Thus, where a quirky result depends on an infinite regress of interlocked expectations, it might collapse if humans could manage only, say, four or five stages of complexity in their deliberations. If humans do not, or cannot, reason beyond a few levels of beliefs about beliefs, choices that would be irrational for the ideal agents of game theory might become rational for humans.(5) But for most of the games we have considered so far it does not take great sophistication to grasp the logic of CKR and to work out its implications. Thus, the regress in Heads and Tails is infinite but is readily spotted without having to work through it in an infinite number of steps. In the Promising Game the source of trouble is patent to anyone eyeing the final round. Relatedly, in the everyday coordination games, like motoring, which game theory supposedly models, A surely depends, when choosing to keep left (in Britain), not on B’s being too stupid to be aware that the regress of expectations goes to infinity but on B’s decision being guided in a way which has yet to emerge. In everyday cases of promising, B’s reason for trusting A is surely not that she has failed to see that it will pay A to break his promise. The implication is unmistakably a deficiency in the model.
There might still be relevant uncertainty about preferences (rather than about reasoning powers). In a paper frequently quoted by economists, Kreps and Wilson (1982) overcome the paradoxical implications of CKR for games like Centipede by appealing to a residual element of uncertainty about preferences. Having described a game in terms of the payoffs of each player, Kreps and Wilson let there be a small probability that, for one or both players, the payoffs are in fact different or, as we shall say, non-standard. Applying this idea to Centipede, we might suppose there to be a small probability that, were the third node to be reached, A would prefer to play C3. In this case, the utility payoff if C3 is chosen is, say, (6, 5) instead of (4, 5), everything else remaining unchanged. We might say that an A with such preferences is trustworthy: were B to take only two doubloons at her turn, a trustworthy A would reciprocate. The significance of this uncertainty is that it provides players who are not trustworthy with a reason for acting as if they were–at least, in the early stages of a game. (An untrustworthy A might take only two doubloons at his first turn, hoping that B would interpret this as evidence of his trustworthiness, but with the intention of taking three doubloons at his second turn.) When Centipede-like games are very long, as in the game with 100 doubloons, it turns out that very low probabilities of trustworthiness are sufficient to induce the players to behave cooperatively for most of the game. Nevertheless, there has to be some genuine trustworthiness before rational agents can profit by feigning it.
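A one-node sketch can show how a small probability of trustworthiness changes B's calculation. The payoffs are the same illustrative numbers as before, and the sketch is ours, not Kreps and Wilson's model, which also lets untrustworthy players mimic trustworthy ones:

```python
# B at the second node, with prior probability eps that A is
# "trustworthy" (would play C3 at the third node).
# B's payoffs are illustrative assumptions: 3 from S2, 5 if A plays C3,
# 2 if A plays S3. The full Kreps-Wilson model also allows untrustworthy
# As to mimic trustworthy ones, which this one-node sketch ignores.

def b_cooperates(eps, u_s2=3.0, u_c3=5.0, u_s3=2.0):
    eu_c2 = eps * u_c3 + (1 - eps) * u_s3
    return eu_c2 > u_s2     # B plays C2 iff continuing pays in expectation

print(b_cooperates(0.5))    # above the threshold of 1/3: True
print(b_cooperates(0.1))    # a small prior alone is not enough here: False
```

In this single-node sketch the prior must exceed 1/3 before B cooperates; it is the possibility of reputation-building through mimicry, absent here, that lets very low priors sustain cooperation in long games.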
Qualifying CKR by introducing a residual element of uncertainty about payoffs is perhaps a reasonable enough move, but there is more to Kreps and Wilson’s argument than this. As they themselves emphasize (pp. 276-7), cooperation in Centipede games depends on the uncertainty being of the right kind. In our Centipede game, for example, we assumed that if A had non-standard preferences, he would derive two units of extra utility from playing C3. But why not assume instead that a non-standard A derives extra utility from playing S3? In that case, there would be no cooperation. Or what if some non-standard As have one type of preference and some the other? Then everything depends on the relative probabilities of the two types of preferences. Kreps and Wilson’s model is plausible only to the extent that the preferences it attributes to the non-standard agents are themselves plausible. The difficulty is to explain why we should expect non-standard preferences to take the assumed form, without appealing to conceptions of rationality that are incompatible with the standard theory of rational choice.
Kreps and Wilson’s explanation of cooperative behaviour exemplifies a common move in the theory of rational choice: to appeal to special kinds of preferences to escape apparently paradoxical conclusions. Binmore’s response to Gauthier (see p.17 above) is another example: according to Binmore, a disposition to keep one’s promises is just another kind of preference. We need to ask what kinds of motivation can, without inconsistency, be allowed within the standard theory of rational choice.
“Reason is the pace,” said Hobbes, “and the benefit of mankind the end”. Yet it emerges that, when the standard analysis of rational choice is applied to choices in strategic settings, the benefit of mankind is out of reach. In Heads and Tails, rational agents cannot coordinate by deliberating, even when offered a superior equilibrium as an obvious focal point. In the Promising and Centipede Games, they have compelling reason to avoid a mutually preferred outcome. (In the case of the Centipede Game, the unfortunate implications of the standard analysis might be attributable to incoherence in CKR; but this does not seem to be the source of the problem in the other two games.) Leaving readers to draw their own conclusions about CKR, we turn next to the analysis of motivation assumed by the model.
It is not obvious that the model has one. In the received theory of rational choice the work is done by utility numbers, which represent preferences, provided that those preferences satisfy Savage’s axioms of coherence. These axioms are often claimed to be purely formal and to impose no substantive motivational constraints. Indeed that is why we introduced Savage as the culmination of attempts to purge utility theory of all psychological elements. We shall now argue, however, that substantive psychological-cum-philosophical constraints remain.
The broad reason why this is so is that the utility numbers in the description of a game have two very definite functions. One is to close any gap between preference and choice. If strategy x dominates strategy y, a rational agent automatically rejects y. There is no scope for hesitation because y is, for instance, more honourable or even because it would work out better, if the other player were also motivated in ways which game theory does not admit. There is simply no space between utilities and rational choices for reflective hesitation. Once the utilities are in place, they provide a reliable representation of all sources of motivation and serve as reliable information for all other players. This does not in itself rule out “ethical” or any other sort of motivating elements consistent with the Savage axioms. But every motivating element operates only as a source of utility and can have no further special role once the utilities have been identified. In the Promising Game, for example, it could be that A or B attaches value to the keeping of promises on moral grounds. But, if this makes any difference to the outcome, the utilities have been misstated in our diagram. Granted utilities as stated, the game is sure to proceed as we said; it does not matter how the utilities came to be as they are.
The second function of utility numbers is to ensure that all reasons for action are forward-looking reasons. Reasons which appear to be of any other character are either powerless to motivate or can be represented as forward-looking. Thus rational agents are not inherently untrustworthy (because selfish perhaps, or at any rate amoral) but do not keep their word just because they have given it. They need some reason which can be represented in the utility numbers. For instance, the numbers might reflect the discomfort of a bad conscience or the future advantage of a reputation for steadfastness. This treatment, however, removes all trace of their grounds or origins, and converts them into utilities of the familiar sort. That is essentially why game theorists are right to comment that words are cheap talk. Agents as transparent as CKR makes them do only what they would have done without words, and CKR merely underlines the wider point that, if words make a difference to expectations where there is uncertainty, it is nevertheless a forward-looking difference. To all this, it might be objected that Savage’s approach can encompass any consistent pattern of choice. If an agent keeps his promise for no other reason than that he has made a promise, might he not still be said to derive utility (in Savage’s sense) from promise-keeping? For reasons that will emerge shortly, we think such an objection would be mistaken.
These two functions of utility numbers are clearly discernible in Savage’s theory. Recall that Savage begins by defining a set of conceivable consequences and a set of possible states of the world. We are free to construct “acts” by arbitrarily assigning consequences to states of the world. Between every pair of such acts, the agent has a preference, and preference is interpreted in terms of choice. It is essential for Savage’s expected utility theorem, and hence for the assignment of utility numbers to consequences, that preferences are “complete” in this sense. Suppose that x is a consequence in the Savage sense. Then we must be free to create meaningful acts by combining x with any other consequences we wish, and by assigning x to any event we wish. For example, we might consider the toss of a coin, define two events “the coin falls Heads” and “the coin falls Tails”, and then define an act in which the relevant agent receives x on Heads and some other consequence–whichever we wish to use–on Tails. In the theory of expected utility, utility numbers are derived by considering an agent’s preferences over exactly such artificial acts.
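The construction of artificial acts can be made concrete in a few lines. Everything here (the state names, the consequence x, the utility and probability numbers) is illustrative; the point is only that any consequence can be slotted into any event:

```python
# A sketch of Savage-style "artificial acts": an act is an arbitrary
# assignment of consequences to states of the world, and utilities are
# read off preferences over such acts. All names and numbers below are
# illustrative assumptions.

def expected_utility(act, prob, util):
    # act: state -> consequence; prob: subjective probabilities over states
    return sum(p * util[act[s]] for s, p in prob.items())

prob = {"heads": 0.5, "tails": 0.5}
util = {"x": 1.0, "nothing": 0.0}

# Slot the consequence x into the event "heads", and whatever other
# consequence we wish into "tails":
act = {"heads": "x", "tails": "nothing"}
print(expected_utility(act, prob, util))   # 0.5
```

Preferences over gambles of this artificial kind are what pin down the utility numbers; that freedom of construction is exactly what the next paragraph shows to be constraining.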
This procedure can work only if each consequence is so described that it can be slotted into any event, into any act, and into any choice problem. This imposes the crucial constraint that the description of a consequence cannot include any reference to a particular event, act or choice problem. In other words, the description of a consequence must say nothing about how that consequence was brought about. Thus, for example, any history of encounters or agreements between agents must be expunged from the description of a consequence. This effectively commits us to an instrumental and forward-looking account, in which the rationality of an act depends solely on its consequences, as compared with the consequences of alternative acts. It is as if every round of a game were the start of a new game.
Consider the Promising Game. Suppose A acts on the principle of keeping promises. It may be tempting to represent his sense of principle by, say, putting (1, -1) rather than (3, -1) as the result of his choosing “not perform” at the third node of the game, while leaving all the other payoffs unchanged. Then we might say that A prefers keeping his promise (a utility of 2) to not keeping it (a utility of 1). But this would require treating “having broken a promise” as a characteristic of a consequence, thus referring to the history of the game, and thus differentiating acts and events by how they came about. Similarly, a tempting way to rationalise the revision in the utility numbers is to regard it as a measure of the psychological satisfaction gained from acting on principle. But, if this rationalisation were to have any further influence, it would reimport the utilitarian theory of action which it was one aim of Savage to expel. Savage’s axioms serve to rule out accounts of motivation where the source or character of satisfactions affects the agent’s attitude towards consequences or where principles enter into the description of acts. To this extent, then, the axioms presuppose a particular account of motivation.
Before exploring the motivation of Savage’s agents, however, we shall look briefly at another implication of his use of the concepts of “consequence” and “event”. If each consequence is to be capable of being slotted into each event, this imposes constraints on the descriptions of events, just as it does on the descriptions of consequences. The description of an event cannot include any reference to any particular consequence, act, or choice problem. This is crucial for Savage’s interpretation of subjective probability, in which the probability of an event is defined in terms of an agent’s willingness to take bets on its occurrence. This approach cannot work unless we are free to construct artificial acts (the hypothetical betting options) by assigning suitable consequences to the event whose probability we wish to measure. If, as Savage intended, events are understood as states of nature, then we do indeed have this freedom. But it is not clear that we have the same freedom if events are identified with the strategic choices of rational agents. For example, the event “Player B chooses Heads in the game of Heads and Tails” cannot be understood without knowing that player A faces a particular choice problem, with particular consequences if that event obtains. How, then, are we to construct the acts which will allow us to measure the subjective probability that A assigns to the event? Game theory assumes that players assign subjective probabilities to their opponents’ strategies, but surprisingly little has been said in support of this assumption, beyond brief appeals to Savage.(6) We remain agnostic as to whether a satisfactory account of subjective probability is possible for strategic settings.
Savage’s theory, we have said, requires a particular account of motivation. This latent “moral psychology”, to use an old but instructive term, blocks any attempt to let agents rise above the earlier impasses by reflection. If we ask why it is there and why it takes its particular form, the answers are perhaps historical. From Hobbes and Hume game theory inherits a presumption that only passions, sentiments and desires can motivate. Following Hobbes and Bentham, it presumes that, in theory at least, reason can always weigh competing desires in a single balance. When these presumptions are rolled together under the general heading of “preference”, they hardly seem to be there. But they are there and we ask next what can be done by challenging them.
Since Hume endorses the first but not the second, let us start with him. Hume propounds a substantive moral psychology. It includes motives like natural sympathy, which can incline people to keep promises, for example, despite whatever avarice or self-love might recommend. Although Hume thinks there is a universal set of passions, what particular agents do in a specific situation depends on what particular mixture of passions motivates them. This might not matter, if reason could always weigh conflicting passions and arrive at a recommended course of action. But nothing guarantees that reason, “the slave of the passions, whose only office is to serve and obey them”, can always succeed in this task. Whether it can is a matter of fact to be settled by observation and experiment, not resolved by imposing a coherent scheme of preference a priori. Moreover, since all our reasonings concerning matters of fact themselves rest on custom, it is no surprise if regularities in behaviour are to be explained by appealing to custom and not solely to calculation. Nor is it surprising, therefore, if game theory lacks a full rationale for coordination and promise-keeping. The missing principle is “Custom or Habit”, and it is “the great guide of human life” (Hume, 1748, Section V, Part I, 36).
In that case game theory should accept defeat and return to an empirical moral psychology which seeks behavioural regularities, without presuming that these have a coherent rational reconstruction in the mind of the agent. Some economists have indeed been moving in this direction.(7) Yet even Hume has other moods. We quoted him earlier as saying that “Nature provides a remedy in the judgment and understanding for what is irregular and incommodious in the affections”, and the main line remains the one suggested by Hobbes and pursued by Bentham, where the internal coherence of the agent’s moral psychology is crucial. In that case, however, we need to know exactly what remedy nature provides in the judgement and understanding. Even to raise this question, we need to recognise that Savage, far from being neutral, embodies a moral psychology which blocks all answers.
An attractive idea here is to complicate the theory of action by introducing two tiers into deliberation. Kant beckons invitingly. Here reason emphatically can be a motive to an action of the will. An autonomous agent is one who overrides inclination in the name of reason, when occasion demands. Especially when acting morally, the agent adopts an impartial and universal point of view and does whatever it would be right for any and everyone so placed to do. In that case the problems of the coordination and promising games dissolve readily by construing them as moral problems. If someone objects that this fails to show how reason alone manages to cause action, Kant would reply that the connection is not causal. A rational agent who realises that there is sufficient reason for him to do x thereby makes that reason his own; and there is no gap between acquiring a reason and acting on it when appropriate. This will not satisfy all critics but it should tempt rational choice theorists. If abstraction to an ideal-type world turns out not to avoid all questions of how action is motivated, a Kantian theory at least avoids having to produce a causal account.
But, even if one credits Kant with a psychology to sustain his ethics, it would be rash to assimilate all awkward rational choices to moral choices. In Kantian ethics the agent tests the rightness of a contemplated action by asking whether everyone so placed should do it and, having established a suitable maxim, then acts on it, even if believing that no one else in fact will. By contrast, everyday strategic problems like coordination presumably call only for a reflective prudence, where reason suggests that whether it is rational to choose x depends on whether (and sometimes how many) others will do likewise. Yet prudence too seems to demand the point of view of an impartial spectator, even if it also needs an assurance that other players are similarly prudent. That is enough to make a two-tier psychology attractive, so as to give scope for reflection to pass judgement on the promptings of preference. It looks possible to borrow this much from Kant without landing ourselves deep in ethics.
The lower tier can comprise a standard preference ordering, ranging over possible outcomes. The upper tier is usually taken to comprise a second preference ordering whose domain is these first-order preferences, as suggested by Frankfurt (1971). The idea is to let agents act sometimes not on the preferences they have but on those which they prefer to have. Thus players of the Prisoner’s Dilemma could escape the foreseeably inferior result of mutual defection by acting as if blessed with preferences which made it rational to play cooperatively. In the Promising Game both will fare better if each can act out a preference akin to one for being a Rule-Utilitarian, rather than an Act-Utilitarian.
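The two-tier idea can be sketched for the Prisoner's Dilemma. The first-order payoffs below are standard illustrative numbers, and the premium of 2 for cooperating stands in for the second-tier preference; both are our assumptions:

```python
# A sketch of two-tier preferences in the Prisoner's Dilemma.
# First-order payoffs (for the row player) are standard illustrative
# numbers; the premium of 2 for cooperating is an assumed stand-in for
# the preferences the agent would prefer to have.

first_order = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1}

def dominant_move(payoff):
    # return a move that is a best reply to both of the other's moves, if any
    for mine in ("C", "D"):
        other = "D" if mine == "C" else "C"
        if all(payoff[(mine, his)] >= payoff[(other, his)] for his in ("C", "D")):
            return mine
    return None

# Second-tier preferences: the agent would prefer to be someone who
# values cooperating as such.
preferred = {k: v + (2 if k[0] == "C" else 0) for k, v in first_order.items()}

print(dominant_move(first_order))   # D: defection dominates
print(dominant_move(preferred))     # C: cooperation dominates
```

Acting on the preferred preferences makes cooperation dominant, which is the escape route the two-tier proposal offers; the snags discussed next concern why acting on the higher tier should be rational.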
But snags remain. When preferences conflict, the mere fact that one is of higher order does not always make it rational to act on it. Even if a reluctant alcoholic would do better to put the gin out of reach or fight the desire for it, might a guilt-ridden homosexual do better to shed the guilt rather than school his passions? In that case the agent needs to be found a further standpoint from which to umpire between first and second order preferences. Equally, second order preferences seem not to be all of a piece. Some are to do with utility maximisation, as when an agent reflects that rejecting a dominant strategy might be worthwhile. Others turn on questions of self-respect or ethics, as when an agent reflects that a strategy would be profitable but dishonourable. So we need a standpoint from which to judge not only between the recommendations of different tiers but also between dissimilar kinds of preference on the upper tier.
A Kantian could try saying that the crucial distinction is between inclinations (lower tier) and duties (upper tier), with duty always serving as trumps. That would restore consistency and avoid the need for yet higher tiers. It might also help with the coordination problem, if a case could be made for treating all encounters surrounded with normative expectations as, somehow, morally charged, so that each player was obliged to consider the interests of both. But none of this is clearly compelling. Kant’s distinction between inclination and duty sets problems of its own; and the proposed treatment of all social norms as, somehow, moral obligations is unlikely to carry general conviction. So we shall merely leave the door open for Kantians who think the line worth exploring further.
Mention of social norms and normative expectations invites a Wittgensteinian comment. Although a Humean account is causal and a Kantian one is not, they agree that action is to be analysed by reference to the mental states of the agent. Hence social norms are presumed to be conventions emerging from the interplay of individuals. Wittgensteinians may take a radically different view of action. In so far as what gives behaviour its meaning is a matter of a rule followed in an institutional context, interaction is analytically prior to action. The description of the Promising Game, for instance, ceases to be context-neutral and we need to be clear whether an institution of promising is presupposed and how deeply it penetrates the players’ own conception of their strategies. At present Savage’s axioms ensure that the games of game theory are far removed from everything said about games in Philosophical Investigations, where moves and motives make sense only in antecedently given contexts. We are not suggesting a quick Wittgensteinian fix. But, if the attempt to analyse interaction in a social vacuum fails to explain even coordination, it is worth thinking further about the basis of convention; and, if the open-endedness which even constitutive rules have in a Wittgensteinian constructivism leaves room for reasoned choices, game theory could perhaps be adapted so as to offer a constructivist analysis of strategic deliberation. Here too, however, we mean only to float a thought which we think worth exploring (see Hollis, 1990).
The attempt to view social life as strategic interaction is proving immensely fertile yet disturbingly prone to paradox. Game theory provides an elegant, universal logic of practical reason, offering much to anyone whose notion of rationality is instrumental and whose view of the social world is individualist. Yet paradoxes beset its account even of coordination, trust and the keeping of promises. We conclude that there is something amiss with the general aim of abstracting from all “moral psychology” to a world of fully transparent agents with common knowledge of rationality and a synchromesh between preferences and choices. But the doubts raised point in different directions.
CKR is definitely suspect, in our opinion. It may even be an incoherent condition, as suggested by the Centipede, where CKR allows us to prove that a rational player would kill the game at the start, even though there seems to be no way to show that he maximises his expected utility by so doing. Were this only a quirk of a limiting case, all might be well if abstraction rested content with incompletely transparent and finitely rational agents. But we doubt it, not least because it is disingenuous to credit them with probabilistic reasoning without supplying a suitable theory of probability. As far as we can see, ideas of probability which make sense for “games against nature” cease to do so where A’s probability judgements make mutual cross-reference to B’s. A theory of strategic probability is needed.
Equally suspect is the claim to dispense with all moral psychology. The claim is false in any case, because the theory relies on assuming that agents are motivated (solely) by forward-looking reasons which refer only to the final consequences of acts. Here we suggest returning initially to an older moral psychology. Hume offers one, where passions do not reduce to preferences over outcomes; Kant offers another, where reason need not be the slave of the passions and reflective agents can override their inclinations. Both thinkers would let us treat commitments as a source of backward-looking reasons, to the relief of the players in the Promising Game.
We also have more cautious doubts about the individualism pervading standard game theory, as pointed up by the regress in the Coordination game. If the players saw themselves as a team with a common aim, they might escape. That is no minor amendment, however. It may call for a notion of collective agency and for more essentially social agents than game theory can accommodate. We gestured to Wittgenstein. But the normative relationships of Wittgensteinian games are so unlike the instrumental relationships of game theory that rapprochement is at best a distant prospect. Much thought is needed before rational agents, meant to exemplify the benefit of mankind, can become reasonable persons who pass freely in corridors and whose promises are not just cheap talk.
(1) This paper was written as part of the project, “The Foundations of Rational Choice Theory”, supported by the Economic and Social Research Council (award number R 000 23 2269). In developing the ideas presented here, we have been much influenced by the other members of the project team: Robin Cubitt, Shaun Hargreaves Heap, Judith Mehta and Chris Starmer. Equally, we have benefited from our collaboration with Hargreaves Heap, Bruce Lyons and Albert Weale in writing on the theory of choice (Hargreaves Heap et al., 1992). We are grateful to James Hopkins for valuable comments on an earlier draft.
(2) Some game theorists interpret the probabilities in mixed strategies as representing each player’s subjective uncertainty about what the other will do and not, as we have done here, in terms of deliberate randomization. On this interpretation, Nash equilibrium implies that each player holds true beliefs about the other player’s beliefs: see Sugden (1991).
(3) For discussion of Gauthier’s Morals by Agreement see Gauthier and Sugden (1993), including the articles by Hollis and Sugden. For a related set of ideas about “steadfastness” or “resolution”, see McClennen (1990).
(4) One of the authors is tempted by the first of these interpretations, the other by the second. Our respective views can be found in Hollis (1991), Pettit and Sugden (1989) and Sugden (1992). For further discussion of the Centipede Game and related problems, see Binmore (1987), Bicchieri (1989), Basu (1990), Bonanno (1991) and Reny (1992).
(5) This is how Binmore (1987, 1988) proposes to deal with some of the trouble. See also Bicchieri (1989).
(6) Aumann (1987, p. 2) is typical: “We assume only that it is common knowledge that all the players are Bayesian utility maximizers, that they are rational in the sense that each one conforms to the Savage theory. Such an assumption underlies most of game theory, and of economic theory as well [.]”
(7) Expected utility theory is increasingly being called into question, both as a descriptive theory and as an account of rational choice, as a result of experimental investigations which reveal that people make systematically “irrational” choices. To explain these observations, new theories of choice are being developed which incorporate the mental routines which people actually use when making decisions: see, e.g., Kahneman and Tversky (1979). A parallel development is the application of evolutionary ideas to game theory. This approach views equilibria, not as the consequences of ideal rationality, but as historically contingent conventions: see, e.g., Sugden (1986).
Aumann, R.J. 1987: “Correlated Equilibrium as an Expression of Bayesian Rationality”. Econometrica, 55, pp. 1-18.
Basu, K. 1990: “On the Non-Existence of a Rationality Definition for Extensive Games”. International Journal of Game Theory, 19, pp. 33-44.
Bentham, J. 1789: An Introduction to the Principles of Morals and Legislation. 1970 edn, London: Athlone Press.
Bernheim, B. 1984: “Rationalizable Strategic Behavior”. Econometrica, 52, pp. 1007-28.
Bicchieri, C. 1989: “Self-Refuting Theories of Strategic Interaction: A Paradox of Common Knowledge”. Erkenntnis, 30, pp. 69-85.
Binmore, K. 1987: “Modeling Rational Players: Part I”. Economics and Philosophy, 3, pp. 179-214.
–1988: “Modeling Rational Players: Part II”. Economics and Philosophy, 4, pp. 9-55.
–1993: “Bargaining and Morality”, in Gauthier and Sugden.
Bonanno, G. 1991: “The Logic of Rational Play in Games of Perfect Information”. Economics and Philosophy, 7, pp. 37-65.
Edgeworth, F.Y. 1881: Mathematical Psychics. London: Kegan Paul.
Elster, J. 1979: Ulysses and the Sirens. Cambridge: Cambridge University Press.
Frankfurt, H. 1971: “Freedom of the Will and the Concept of a Person”. Journal of Philosophy, 68, pp. 5-20.
Gauthier, D. 1975: “Coordination”. Dialogue, 14, pp. 195-221.
–1986: Morals by Agreement. Oxford: Oxford University Press.
Gauthier, D. and Sugden, R. (eds.) 1993: Rationality, Justice and the Social Contract. Hemel Hempstead: Harvester Wheatsheaf.
Gilbert, M. 1989: On Social Facts. London: Routledge.
Hargreaves Heap, S., Hollis, M., Lyons, B., Sugden, R. and Weale, A. 1992: The Theory of Choice: A Critical Guide. Oxford: Blackwell.
Harsanyi, J.C. and Selten, R. 1988: A General Theory of Equilibrium Selection in Games. Cambridge, Mass.: MIT Press.
Hobbes, T. 1651: Leviathan. 1991 edn., Cambridge: Cambridge University Press.
Hodgson, D.H. 1967: Consequences of Utilitarianism. Oxford: Clarendon Press.
Hollis, M. 1990: “Moves and Motives in the Games We Play”. Analysis, 50, pp. 49-62.
–1991: “Penny Pinching and Backward Induction”. Journal of Philosophy, 88, pp. 473-88.
Hume, D. 1740: A Treatise of Human Nature. 1978 edn, Oxford: Clarendon Press.
–1748: An Enquiry Concerning Human Understanding. 1975 edn, Oxford: Clarendon Press.
Hurley, S. 1989: Natural Reasons. Oxford: Oxford University Press.
Jevons, W.S. 1871: The Theory of Political Economy. Page references to 1970 edn, Harmondsworth: Penguin.
Kahneman, D. and Tversky, A. 1979: “Prospect Theory: An Analysis of Decision under Risk”. Econometrica, 47, pp. 263-91.
Kohlberg, E. and Mertens, J.-F. 1986: “On the Strategic Stability of Equilibria”. Econometrica, 54, pp. 1003-37.
Kreps, D.M. and Wilson, R. 1982: “Reputation and Imperfect Information”. Journal of Economic Theory, 27, pp. 253-79.
Lewis, D.K. 1969: Convention: A Philosophical Study. Cambridge, Mass.: Harvard University Press.
Luce, R.D. and Raiffa, H. 1957: Games and Decisions. New York: John Wiley.
McClennen, E.F. 1990: Rationality and Dynamic Choice. Cambridge: Cambridge University Press.
Neumann, J. von and Morgenstern, O. 1947: Theory of Games and Economic Behavior, 2nd edn. Princeton: Princeton University Press.
Pareto, V. 1972: Manual of Political Economy, translated by A.S. Schweir. London: Macmillan. Published in French 1927.
Pearce, D.G. 1984: “Rationalizable Strategic Behavior and the Problem of Perfection”. Econometrica, 52, pp. 1029-50.
Pettit, P. and Sugden, R. 1989: “The Backward Induction Paradox”. Journal of Philosophy, 86, pp. 169-82.
Ramsey, F.P. 1931: “Truth and Probability”, in his The Foundations of Mathematics and Other Logical Essays. London: Routledge and Kegan Paul.
Regan, D. 1980: Utilitarianism and Cooperation. Oxford: Clarendon Press.
Reny, P.J. 1992: “Backward Induction, Normal Form Perfection and Explicable Equilibria”. Econometrica, 60, pp. 627-49.
Robbins, L. 1932: An Essay on the Nature and Significance of Economic Science. London: Macmillan.
Samuelson, P.A. 1947: Foundations of Economic Analysis. Cambridge, Mass.: Harvard University Press.
Savage, L.J. 1954: The Foundations of Statistics. New York: John Wiley.
Schelling, T.C. 1960: The Strategy of Conflict. Cambridge, Mass.: Harvard University Press.
Selten, R. 1978: “The Chain-Store Paradox”. Theory and Decision, 9, pp. 127-59.
Sugden, R. 1986: The Economics of Rights, Cooperation and Welfare. Oxford: Basil Blackwell.
–1991: “Rational Choice: A Survey of Contributions from Economics and Philosophy”. Economic Journal, 101, pp. 751-85.
–1992: “Inductive Reasoning in Repeated Games”, in R. Selten (ed.), Rational Interaction: Essays in Honor of John C. Harsanyi. Berlin: Springer-Verlag, pp. 201-21.
Van Damme, E. 1989: “Stable Equilibria and Forward Induction”. Journal of Economic Theory, 48, pp. 476-96.
COPYRIGHT 1993 Oxford University Press