First published Fri Jul 27, 2001; substantive revision Fri Aug 16, 2019
Games between two players, of the kind where one player wins and one loses, became a familiar tool in many branches of logic during the second half of the twentieth century. Important examples are semantic games used to define truth, back-and-forth games used to compare structures, and dialogue games to express (and perhaps explain) formal proofs.
1. Games in the History of Logic
The links between logic and games go back a long way. If one thinks of a debate as a kind of game, then Aristotle already made the connection; his writings about syllogism are closely intertwined with his study of the aims and rules of debating. Aristotle’s viewpoint survived into the common medieval name for logic: dialectics. In the mid twentieth century Charles Hamblin revived the link between dialogue and the rules of sound reasoning, soon after Paul Lorenzen had connected dialogue to constructive foundations of logic.
There are close links between games and teaching. Writers throughout the medieval period talk of dialogues as a way of ‘teaching’ or ‘testing’ the use of sound reasoning. We have at least two textbooks of logic from the early sixteenth century that present it as a game for an individual student, and Lewis Carroll’s *The Game of Logic* (1887) is another example in the same genre. There are plenty of modern examples too, though probably there has not been enough continuity to justify talking of a tradition of teaching logic by games.
Mathematical game theory was founded in the early twentieth century. Although no mathematical links with logic emerged until the 1950s, it is striking how many of the early pioneers of game theory are also known for their contributions to logic: John Kemeny, J. C. C. McKinsey, John von Neumann, Willard Quine, Julia Robinson, Ernst Zermelo and others. In 1953 David Gale and Frank Stewart made fruitful connections between set theory and games. Shortly afterwards Leon Henkin suggested a way of using games to give semantics for infinitary languages.
The first half of the twentieth century was an era of increasing rigour and professionalism in logic, and to most logicians of that period the use of games in logic would probably have seemed frivolous. The intuitionist L. E. J. Brouwer expressed this attitude when he accused his opponents of causing mathematics ‘to degenerate into a game’ (as David Hilbert quoted him in 1927, cited in van Heijenoort 1967). Hermann Weyl (cited in Mancosu 1998) used the notion of games to explain Hilbert’s metamathematics: mathematical proofs proceed like plays of a meaningless game, but we can stand outside the game and ask meaningful questions about it. Wittgenstein’s language games provoked little response from the logicians. But in the second half of the century the centre of gravity of logical research moved from foundations to techniques, and from about 1960 games were used more and more often in logical papers.
By the beginning of the twenty-first century it had become widely accepted that games and logic go together. The result was a huge proliferation of new combinations of logic and games, particularly in areas where logic is applied. Many of these new developments sprang originally from work in pure logic, though today they follow their own agendas. One such area is argumentation theory, where games form a tool for analysing the structure of debates.
Below we will concentrate on those games that are most closely associated with pure logic.
2. Logical Games
From the point of view of game theory, the main games that logicians study are not at all typical. They normally involve just two players, they often have infinite length, the only outcomes are winning and losing, and no probabilities are attached to actions or outcomes. The barest essentials of a logical game are as follows.
There are two players. In general we can call them \(\forall\) and \(\exists\). The pronunciations ‘Abelard’ and ‘Eloise’ go back to the mid 1980s and usefully fix the players as male and female, making reference easier: her move, his move. Other names are in common use for the players in particular types of logical game.
The players play by choosing elements of a set \(\Omega\), called the *domain* of the game. As they choose, they build up a sequence
\[a_0, a_1, a_2,\ldots\]
of elements of \(\Omega\). Infinite sequences of elements of \(\Omega\) are called *plays*. Finite sequences of elements of \(\Omega\) are called *positions*; they record where a play might have got to by a certain time. A function \(\tau\) (the *turn function* or *player function*) takes each position \(\mathbf{a}\) to either \(\exists\) or \(\forall\); if \(\tau(\mathbf{a}) = \exists\), this means that when the game has reached \(\mathbf{a}\), player \(\exists\) makes the next choice (and likewise with \(\forall\)). The game rules define two sets \(W_{\forall}\) and \(W_{\exists}\) consisting of positions and plays, with the following properties: if a position \(\mathbf{a}\) is in \(W_{\forall}\) then so is any play or longer position that starts with \(\mathbf{a}\) (and likewise with \(W_{\exists}\)); and no play is in both \(W_{\forall}\) and \(W_{\exists}\). We say that player \(\forall\) *wins* a play \(\mathbf{b}\), and that \(\mathbf{b}\) is a *win* for \(\forall\), if \(\mathbf{b}\) is in \(W_{\forall}\); if some position \(\mathbf{a}\) that is an initial segment of \(\mathbf{b}\) is in \(W_{\forall}\), then we say that player \(\forall\) *wins already* at \(\mathbf{a}\). (And likewise with \(\exists\) and \(W_{\exists}\).) So to summarise, a logical game is a 4-tuple \((\Omega, \tau, W_{\forall}, W_{\exists})\) with the properties just described.
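The 4-tuple \((\Omega, \tau, W_{\forall}, W_{\exists})\) can be written down directly in code for games of finite length, where wins are decided at the final position. The class name `LogicalGame`, the player labels `'A'` and `'E'`, and the toy parity game are illustrative assumptions, not part of the article; the point is only that the definition is concrete enough to encode.

```python
from dataclasses import dataclass
from itertools import product
from typing import Callable

@dataclass
class LogicalGame:
    """A logical game (Omega, tau, W_forall, W_exists), restricted to
    games of finite length where wins are tested at terminal positions."""
    domain: tuple                    # Omega: the elements players choose
    turn: Callable                   # tau: position -> 'A' (forall) or 'E' (exists)
    win_A: Callable                  # is this terminal position in W_forall?
    win_E: Callable                  # is this terminal position in W_exists?

# Toy game: four bits are chosen alternately; Eloise wins even sums.
g = LogicalGame(
    domain=(0, 1),
    turn=lambda pos: 'A' if len(pos) % 2 == 0 else 'E',
    win_A=lambda pos: len(pos) == 4 and sum(pos) % 2 == 1,
    win_E=lambda pos: len(pos) == 4 and sum(pos) % 2 == 0,
)

# The game is total: every completed play lands in exactly one win set.
plays = list(product(g.domain, repeat=4))
print(all(g.win_A(p) != g.win_E(p) for p in plays))  # True
```

The final check is exactly the totality condition of the next paragraph: no play is a draw and no play is in both win sets.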
We say that a logical game is *total* if every play is in either \(W_{\forall}\) or \(W_{\exists}\), so that there are no draws. Unless one makes an explicit exception, logical games are always assumed to be total. (Don’t confuse being total with the much stronger property of being determined—see below.)
It is only for mathematical convenience that the definition above expects the game to continue to infinity even when a player has won at some finite position; there is no interest in anything that happens after a player has won. Many logical games have the property that in every play, one of the players has already won at some finite position; games of this sort are said to be *well-founded*. An even stronger condition is that there is some finite number \(n\) such that in every play, one of the players has already won by the \(n\)-th position; in this case we say that the game has *finite length*.
A *strategy* for a player is a set of rules that describe exactly how that player should choose, depending on how the two players have chosen at earlier moves. Mathematically, a strategy for \(\forall\) consists of a function which takes each position \(\mathbf{a}\) with \(\tau(\mathbf{a}) = \forall\) to an element \(b\) of \(\Omega\); we think of it as an instruction to \(\forall\) to choose \(b\) when the game has reached the position \(\mathbf{a}\). (Likewise with a strategy for \(\exists\).) A strategy for a player is said to be *winning* if that player wins every play in which he or she uses the strategy, regardless of what the other player does. At most one of the players has a winning strategy (since otherwise the players could play their winning strategies against each other, and both would win, contradicting that \(W_{\forall}\) and \(W_{\exists}\) have no plays in common). Occasionally one meets situations in which it seems that two players have winning strategies (for example in the forcing games below), but closer inspection shows that the two players are in fact playing different games.
A game is said to be *determined* if one or other of the players has a winning strategy. There are many examples of games that are not determined, as Gale and Stewart showed in 1953 using the axiom of choice. This discovery led to important applications of the notion of determinacy in the foundations of set theory (see the entry on large cardinals and determinacy). Gale and Stewart also proved an important theorem that bears their name: Every well-founded game is determined. It follows that every game of finite length is determined—a fact already known to Zermelo in 1913. (A more precise statement of the Gale-Stewart theorem is this. A game \(G\) is said to be *closed* if \(\exists\) wins every play of \(G\) in which she hasn’t already lost at any finite position. The theorem states that every closed game is determined. The proof of the theorem is basically easy: Let us call a position winning for \(\forall\) if he has a winning strategy starting from this position. Suppose \(\forall\) does not have a winning strategy in the game, that is, in the beginning the position is not winning for \(\forall\). If the first move is a move of \(\forall\), after his move the position is still not winning for him. If the first move is a move of \(\exists\), she must have a move after which the position is still not winning for \(\forall\), for otherwise the previous position would have been winning for \(\forall\). The game goes on in this way through infinitely many moves, always through positions which are not winning for \(\forall\). Because the game is closed, \(\exists\) wins.)
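For games of finite length, Zermelo's determinacy fact can be seen concretely: backward induction over the game tree always finds a winner. The sketch below assumes a toy encoding (positions as tuples of chosen elements, players labelled `'A'` for \(\forall\) and `'E'` for \(\exists\)); none of these names come from the article.

```python
def winner(position, turn, moves, wins_for):
    """Return the player ('A' or 'E') who has a winning strategy from
    `position`, where it is `turn`'s move.  `moves(pos)` lists the legal
    successor positions; a position with no successors is terminal and
    `wins_for(pos)` names its winner."""
    successors = moves(position)
    if not successors:
        return wins_for(position)
    other = 'E' if turn == 'A' else 'A'
    # The player to move has a winning strategy iff some move leads to a
    # position from which they still have one; otherwise the opponent does.
    if any(winner(p, other, moves, wins_for) == turn for p in successors):
        return turn
    return other

# Toy game: players alternately pick bits until four are chosen;
# Eloise (who moves second and fourth) wins iff the sum is even.
moves = lambda pos: [pos + (b,) for b in (0, 1)] if len(pos) < 4 else []
wins_for = lambda pos: 'E' if sum(pos) % 2 == 0 else 'A'
print(winner((), 'A', moves, wins_for))  # 'E': she can mirror each of his bits
```

Because every branch of the tree is finite, the recursion terminates and returns one of the two players at every position: exactly the determinacy of games of finite length.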
Just as in classical game theory, the definition of logical games above serves as a clothes horse that we can hang other concepts onto. For example it is common to have some laws that describe what elements of \(\Omega\) are available for a player to choose at a particular move. Strictly this refinement is unnecessary, because the winning strategies are not affected if we decree instead that a player who breaks the law loses immediately; but for many games this way of viewing them seems unnatural. Below we will see some other extra features that can be added to games.
The definitions of game and strategy above were purely mathematical. So they left out what is probably the single most important feature of games, which is that people play them (at least metaphorically). The players aim to win, and by studying the strategies open to them we study what behaviour is rational for a person with a particular aim. In most games there are several players, so we can study what is a rational response to somebody else’s behaviour. By restricting the players’ moves and possible strategies, we can study bounded rationality, where an agent has to make rational decisions under conditions of limited information, memory or time.
In short, games are used for modelling rationality and bounded rationality. This is independent of any connection with logic. But some logics were designed for studying aspects of rational behaviour, and in recent years it has become increasingly common to link these logics to suitable games. See Section 5 (‘Semantic Games for Other Logics’) and its bibliography.
But until recently, logical games were connected with rational behaviour in a quite different way. On the surface, the logic in question had no direct connection with behaviour. But logicians and mathematicians noticed that some ideas could be made more intuitive if they were linked to possible aims. For example in many applications of logical games, the central notion is that of a winning strategy for the player \(\exists\). Often these strategies (or their existence) turn out to be equivalent to something of logical importance that could have been defined without using games—for example a proof. But games are felt to give a better definition because they quite literally supply some motivation: \(\exists\) is trying to win.
This raises a question that is not of much interest mathematically, but it should concern philosophers who use logical games. If we want \(\exists\)’s motivation in a game \(G\) to have any explanatory value, then we need to understand what is achieved if \(\exists\) does win. In particular we should be able to tell a realistic story of a situation in which some agent called \(\exists\) is trying to do something intelligible, and doing it is the same thing as winning in the game. As Richard Dawkins said, raising the corresponding question for the evolutionary games of Maynard Smith,
The whole purpose of our search … is to discover a suitable actor to play the leading role in our metaphors of purpose. We … want to say, ‘It is for the good of … ’. Our quest in this chapter is for the right way to complete that sentence. (*The Extended Phenotype*, Oxford University Press, Oxford 1982, page 91.)
For future reference let us call this the *Dawkins question*. In many kinds of logical game it turns out to be distinctly harder to answer than the pioneers of these games realised. (Marion 2009 discusses the Dawkins question further.)
3. Semantic Games for Classical Logic
In the early 1930s Alfred Tarski proposed a definition of truth. His definition consisted of a necessary and sufficient condition for a sentence in the language of a typical formal theory to be true; his necessary and sufficient condition used only notions from syntax and set theory, together with the primitive notions of the formal theory in question. In fact Tarski defined the more general relation ‘formula \(\phi(x_1,\ldots,x_n)\) is true of the elements \(a_1,\ldots,a_n\)’; truth of a sentence is the special case where \(n = 0\). For example the question whether
‘For all \(x\) there is \(y\) such that \(R(x, y)\)’ is true
reduces to the question whether the following holds:
For every object \(a\) the sentence ‘There is \(y\) such that \(R(a, y)\)’ is true.
This in turn reduces to:
For every object \(a\) there is an object \(b\) such that the sentence ‘\(R(a, b)\)’ is true.
In this example, that’s as far as Tarski’s truth definition will take us.
In the late 1950s Leon Henkin noticed that we can intuitively understand some sentences which can’t be handled by Tarski’s definition. Take for example the infinitely long sentence
For all \(x_0\) there is \(y_0\) such that for all \(x_1\) there is \(y_1\) such that … \(R(x_0, y_0, x_1, y_1,\ldots)\).
Tarski’s approach fails because the string of quantifiers at the beginning is infinite, and we would never reach an end of stripping them off. Instead, Henkin suggested, we should consider the game where a person \(\forall\) chooses an object \(a_0\) for \(x_0\), then a second person \(\exists\) chooses an object \(b_0\) for \(y_0\), then \(\forall\) chooses \(a_1\) for \(x_1\), \(\exists\) chooses \(b_1\) for \(y_1\), and so on. A play of this game is a win for \(\exists\) if and only if the infinite atomic sentence
\[R(a_0, b_0, a_1, b_1,\ldots)\]
is true. The original sentence is true if and only if player \(\exists\) has a winning strategy for this game. Strictly Henkin used the game only as a metaphor, and the truth condition that he proposed was that the skolemised version of the sentence is true, i.e. that there are functions \(f_0, f_1,\ldots\) such that for every choice of \(a_0, a_1, a_2\) etc. we have
\[R(a_0, f_0(a_0), a_1, f_1(a_0, a_1), a_2, f_2(a_0, a_1, a_2),\ldots).\]
But this condition translates immediately into the language of games; the Skolem functions \(f_0\) etc. define a winning strategy for \(\exists\), telling her how to choose in the light of earlier choices by \(\forall\). (It came to light sometime later that C. S. Peirce had already suggested explaining the difference between ‘every’ and ‘some’ in terms of who chooses the object; for example in his second Cambridge Conference lecture of 1898.)
Soon after Henkin’s work, Jaakko Hintikka added that the same idea applies with conjunctions and disjunctions. We can regard a conjunction ‘\(\phi \wedge \psi\)’ as a universally quantified statement expressing ‘Every one of the sentences \(\phi, \psi\) holds’, so it should be for the player \(\forall\) to choose one of the sentences. As Hintikka put it, to play the game \(G(\phi \wedge \psi)\), \(\forall\) chooses whether the game should proceed as \(G(\phi)\) or as \(G(\psi)\). Likewise disjunctions become existentially quantified statements about sets of sentences, and they mark moves where the player \(\exists\) chooses how the game should proceed. To bring quantifiers into the same style, he proposed that the game \(G(\forall x\, \phi(x))\) proceeds thus: player \(\forall\) chooses an object and provides a name \(a\) for it, and the game proceeds as \(G(\phi(a))\). (And likewise with existential quantifiers, except that \(\exists\) chooses.) Hintikka also made an ingenious suggestion for introducing negation. Each game \(G\) has a *dual* game which is the same as \(G\) except that the players \(\forall\) and \(\exists\) are transposed in both the rules for playing and the rules for winning. The game \(G(\neg\phi)\) is the dual of \(G(\phi)\).
One can prove that for any first-order sentence \(\phi\), interpreted in a fixed structure \(A\), player \(\exists\) has a winning strategy for Hintikka’s game \(G(\phi)\) if and only if \(\phi\) is true in \(A\) in the sense of Tarski. Two features of this proof are interesting. First, if \(\phi\) is any first-order sentence then the game \(G(\phi)\) has finite length, and so the Gale-Stewart theorem tells us that it is determined. We infer that \(\exists\) has a winning strategy in exactly one of \(G(\phi)\) and its dual; so she has a winning strategy in \(G(\neg\phi)\) if and only if she doesn’t have one in \(G(\phi)\). This takes care of negation. And second, if \(\exists\) has a winning strategy for each game \(G(\phi(a))\), then after choosing one such strategy \(f_a\) for each \(a\), she can string them together into a single winning strategy for \(G(\forall x\, \phi(x))\) (namely, ‘Wait and see what element \(a\) player \(\forall\) chooses, then play \(f_a\)’). This takes care of the clause for universal quantifiers; but the argument uses the axiom of choice, and in fact it is not hard to see that the statement that Hintikka’s and Tarski’s definitions of truth are equivalent is itself equivalent to the axiom of choice (given the other axioms of Zermelo-Fraenkel set theory).
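On a finite structure, Hintikka's game can be played out exhaustively: a player has a winning strategy at a position iff some choice (for the player to move) leads to a position they still win. A minimal sketch, assuming a toy tuple encoding of formulas that is not from the article; negation is handled by dualising, which is legitimate here precisely because these games have finite length and so are determined.

```python
def eloise_wins(phi, A, env=None):
    """True iff Eloise has a winning strategy in Hintikka's game G(phi)
    on the finite structure A = (domain, relations), under the variable
    assignment `env`."""
    domain, rels = A
    env = env or {}
    op = phi[0]
    if op == 'atom':                     # immediate win or loss
        _, name, args = phi
        return tuple(env[x] for x in args) in rels[name]
    if op == 'not':                      # dual game: swap the players' roles
        return not eloise_wins(phi[1], A, env)
    if op == 'or':                       # Eloise chooses a disjunct
        return any(eloise_wins(q, A, env) for q in phi[1:])
    if op == 'and':                      # Abelard chooses a conjunct
        return all(eloise_wins(q, A, env) for q in phi[1:])
    if op == 'exists':                   # Eloise chooses an element
        _, x, body = phi
        return any(eloise_wins(body, A, {**env, x: a}) for a in domain)
    if op == 'forall':                   # Abelard chooses an element
        _, x, body = phi
        return all(eloise_wins(body, A, {**env, x: a}) for a in domain)
    raise ValueError(op)

# ForAll x Exists y R(x,y) on domain {0,1} with R read as equality:
A = ((0, 1), {'R': {(0, 0), (1, 1)}})
phi = ('forall', 'x', ('exists', 'y', ('atom', 'R', ('x', 'y'))))
print(eloise_wins(phi, A))  # True: Eloise simply copies Abelard's choice
```

By the determinacy argument in the paragraph above, `eloise_wins(('not', phi), A)` agrees with `not eloise_wins(phi, A)` on every finite structure.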
It’s puzzling that we have here two theories of when a sentence is true, and the theories are not equivalent if the axiom of choice fails. In fact the reason is not very deep. The axiom of choice is needed not because the Hintikka definition uses games, but because it assumes that strategies are deterministic, i.e. that they are single-valued functions giving the user no choice of options. A more natural way of translating the Tarski definition into game terms is to use nondeterministic strategies, sometimes called quasistrategies (see Kolaitis 1985 for details). (However, Hintikka 1996 insists that the correct explication of ‘true’ is the one using deterministic strategies, and that this fact vindicates the axiom of choice.)
Computer implementations of these games of Hintikka proved to be a very effective way of teaching the meanings of first-order sentences. One such package was designed by Jon Barwise and John Etchemendy at Stanford, called ‘Tarski’s World’. Independently another team at the University of Omsk constructed a Russian version for use at schools for gifted children.
In the published version of his John Locke lectures at Oxford, Hintikka in 1973 raised the Dawkins question (see above) for these games. His answer was that one should look to Wittgenstein’s language games, and the language games for understanding quantifiers are those which revolve around seeking and finding. In the corresponding logical games one should think of \(\exists\) as Myself and \(\forall\) as a hostile Nature who can never be relied on to present the object I want; so to be sure of finding it, I need a winning strategy. This story was never very convincing; the motivation of Nature is irrelevant, and nothing in the logical game corresponds to seeking. In retrospect it is a little disappointing that nobody took the trouble to look for a better story. It may be more helpful to think of a winning strategy for \(\exists\) in \(G(\phi)\) as a kind of proof (in a suitable infinitary system) that \(\phi\) is true.
Later Jaakko Hintikka extended the ideas of this section in two directions, namely to natural language semantics and to games of imperfect information (see the next section). The name *Game-Theoretic Semantics*, GTS for short, has come to be used to cover both of these extensions.
The games described in this section adapt almost trivially to many-sorted logic: for example the quantifier \(\forall x_{\sigma}\), where \(x_{\sigma}\) is a variable of sort \(\sigma\), is an instruction for player \(\forall\) to choose an element of sort \(\sigma\). This immediately gives us the corresponding games for second-order logic, if we think of the elements of a structure as one sort, the sets of elements as a second sort, the binary relations as a third and so on. It follows that we have, quite routinely, game rules for most generalised quantifiers too; we can find them by first translating the generalised quantifiers into second-order logic.
4. Semantic Games with Imperfect Information
In this and the next section we look at some adaptations of the semantic games of the previous section to other logics. In our first example, the logic (the independence-friendly logic of Hintikka and Sandu 1997, or more briefly IF logic) was created in order to fit the game. See the entry on independence friendly logic and Mann, Sandu and Sevenster 2011 for fuller accounts of this logic.
The games here are the same as in the previous section, except that we drop the assumption that each player knows the previous history of the play. For example we can require a player to make a choice without knowing what choices the other player has made at certain earlier moves. The classical way to handle this within game theory is to make restrictions on the strategies of the players. For example we can require that the strategy function telling \(\exists\) what to do at a particular step is a function whose domain is the family of possible choices of \(\forall\) at just his first and second moves; this is a way of expressing that \(\exists\) doesn’t know how \(\forall\) chose at his third and later moves. Games with restrictions of this kind on the strategy functions are said to be of *imperfect information*, as opposed to the games of *perfect information* in the previous section.
To make a logic that fits these games, we use the same first-order language as in the previous section, except that a notation is added to some quantifiers (and possibly also some connectives), to show that the Skolem functions for these quantifiers (or connectives) are independent of certain variables. For example the sentence
\[(\forall x)(\exists y/\forall x)\,R(x, y)\]
is read as: “For every \(x\) there is \(y\), not depending on \(x\), such that \(R(x, y)\)”.
There are three important comments to make on the distinction between perfect and imperfect information. The first is that the Gale-Stewart theorem holds only for games of perfect information. Suppose for example that \(\forall\) and \(\exists\) play the following game. First, \(\forall\) chooses one of the numbers 0 and 1. Then \(\exists\) chooses one of these two numbers. Player \(\exists\) wins if the two numbers chosen are the same, and otherwise player \(\forall\) wins. We require that \(\exists\), when she makes her choice, doesn’t know what \(\forall\) chose; so a Skolem function for her will be a constant. (This game corresponds to the IF sentence above with \(R\) read as equality, in a structure with a domain consisting of 0 and 1.) It’s clear that player \(\exists\) doesn’t have a constant winning strategy, and also that player \(\forall\) doesn’t have a winning strategy at all. So this game is undetermined, although its length is only 2.
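Since both players' strategy spaces are tiny here, the failure of determinacy can be checked by brute force. The encoding is an illustrative assumption: Eloise's strategies are just the constants (she cannot see Abelard's move), and Abelard's strategies are just his possible first moves.

```python
# The 'matching' game of imperfect information on the domain {0, 1}:
# Abelard picks a, Eloise picks b without seeing a, and Eloise wins iff a == b.
domain = (0, 1)

# Eloise has a winning strategy iff some fixed constant b beats every a.
eloise_has_winning = any(all(b == a for a in domain) for b in domain)

# Abelard has a winning strategy iff some first move a beats every constant b.
abelard_has_winning = any(all(b != a for b in domain) for a in domain)

print(eloise_has_winning, abelard_has_winning)  # False False: undetermined
```

Neither quantifier sweep succeeds, so neither player has a winning strategy, even though the game lasts only two moves.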
One corollary is that Hintikka’s justification for reading negation as dualising (‘players swap places’), in his games for first-order logic, doesn’t carry over to IF logic. Hintikka’s response has been that dualising was the correct intuitive meaning of negation even in the first-order case, so no justification is needed.
The second comment is that already in games of perfect information, it can happen that winning strategies don’t use all the available information. For example in a game of perfect information, if player \(\exists\) has a winning strategy, then she also has a winning strategy where the strategy functions depend only on the previous choices of \(\forall\). This is because she can reconstruct her own previous moves using her earlier strategy functions.
When Hintikka used Skolem functions as strategies in his games for first-order logic, he made the strategies for a player depend only on the previous moves of the other player. (A Skolem function for \(\exists\) depends only on universally quantified variables.) Because the games were games of perfect information, there was no loss in this, by the second comment above. But when he moved to IF logic, the requirement that strategies depend only on moves of the other player really did make a difference. Hodges 1997 showed this by revising the notation, so that for example \((\exists y/x)\) means: “There is \(y\) independent of \(x\), regardless of which player chose \(x\)”.
Consider now the sentence
\[(\forall x)(\exists z)(\exists y/x)(x=y),\]
played again on a structure with two elements 0 and 1. Player \(\exists\) can win as follows. For \(z\) she chooses the same as player \(\forall\) chose for \(x\); then for \(y\) she chooses the same as she chose for \(z\). This winning strategy works only because in this game \(\exists\) can refer back to her own previous choices. She would have no winning strategy if the third quantifier was \((\exists y/xz)\), again because any Skolem function for this quantifier would have to be constant. The way that \(\exists\) passes information to herself by referring to her previous choice is an example of the phenomenon of *signalling*. John von Neumann and Oskar Morgenstern illustrated it with the example of Bridge, where a single player consists of two partners who have to share information by using their public moves to signal to each other.
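The signalling contrast can also be checked by enumerating Eloise's strategy functions on the two-element domain. This is only a sketch under an assumed encoding: a strategy for her is a pair of functions, \(z = f(x)\) and \(y = g(z)\) when \(y\) may see \(z\), versus \(z = f(x)\) and a constant \(y = c\) when it may not.

```python
from itertools import product

domain = (0, 1)

def wins_with_signalling():
    """(forall x)(exists z)(exists y/x)(x = y): y may depend on z but
    not on x, so Eloise can signal x to herself through z."""
    for f in product(domain, repeat=2):        # f[x] = her choice of z
        for g in product(domain, repeat=2):    # g[z] = her choice of y
            if all(g[f[x]] == x for x in domain):
                return True
    return False

def wins_without_signalling():
    """With (exists y/xz) instead, y is independent of both x and z,
    so her Skolem function for y is a constant c."""
    return any(all(c == x for x in domain) for c in domain)

print(wins_with_signalling(), wins_without_signalling())  # True False
```

The winning pair in the first case is exactly the strategy described above: \(f\) and \(g\) both the identity, so that \(y = g(f(x)) = x\).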
The third comment is that there is a dislocation between the intuitive idea of imperfect information and the game-theoretic definition of it in terms of strategies. Intuitively, imperfect information is a fact about the circumstances in which the game is played, not about the strategies. This is a very tricky matter, and it continues to lead to misunderstandings about IF and similar logics. Take for example the sentence
\[(\exists x)(\exists y/x)(x=y),\]
again played on a structure with elements 0 and 1. Intuitively one might think that if \(\exists\) isn’t allowed to remember at the second quantifier what she chose at the first, then she can hardly have a winning strategy. But in fact she has a very easy one: ‘Always choose 0’!
Compared with first-order logic, IF logic is missing a component that the game semantics won’t supply. The game semantics tells us when a sentence is true in a structure. But if we take a formula with \(n\) free variables, what does the formula express about the ordered \(n\)-tuples of elements of the structure? In first-order logic it would define a set of them, i.e. an \(n\)-ary relation on the structure; the Tarski truth definition explains how. Is there a similar definition for arbitrary formulas of IF logic? It turns out that there is one for the slightly different logic introduced by Hodges 1997, and it leads to a Tarski-style truth definition for the language of that logic. With a little adjustment this truth definition can be made to fit IF logic too. But for both of these new logics there is a catch: instead of saying when an assignment of elements to free variables makes a formula true, we say when a *set* of assignments of elements to free variables makes the formula true. Väänänen 2007 made this idea the basis for a range of new logics for studying the notion of dependence (see the entry on dependence logic). In these logics the semantics is defined without games, although the original inspiration comes from the work of Hintikka and Sandu.
In Väänänen’s logics it is easy to see why one needs sets of assignments. He has an atomic formula expressing ‘\(x\) is dependent on \(y\)’. How can we interpret this in a structure, for example the structure of natural numbers? It makes no sense at all to ask for example whether 8 is dependent on 37. But if we have a set \(X\) of ordered pairs of natural numbers, it does make sense to ask whether in \(X\) the first member of each pair is dependent on the second; the answer Yes would mean that there is a function \(f\) such that each pair \((a,b)\) in \(X\) has the form \((f(b),b)\).
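The condition 'each pair \((a,b)\) in \(X\) has the form \((f(b),b)\)' amounts to checking that equal second coordinates force equal first coordinates. A minimal sketch, with the function name `depends` an illustrative assumption:

```python
def depends(X):
    """True iff there is a function f with every (a, b) in X of the
    form (f(b), b), i.e. the second coordinate determines the first
    throughout the set X of pairs."""
    f = {}
    for a, b in X:
        # Record f(b) = a the first time b appears; any later pair
        # with the same b must repeat the same a, else no f exists.
        if f.setdefault(b, a) != a:
            return False
    return True

print(depends({(3, 1), (3, 2), (5, 2)}))  # False: 2 maps to both 3 and 5
print(depends({(3, 1), (5, 2)}))          # True: f(1)=3, f(2)=5 works
```

As the paragraph says, the question only makes sense for the set \(X\) as a whole, not for any single pair, which is why these semantics evaluate formulas on sets of assignments.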
5. Semantic Games for Other Logics
Structures of the following kind give rise to interesting games. The structure \(A\) consists of a set \(S\) of elements (which we shall call *states*, adding that they are often called *worlds*), a binary relation \(R\) on \(S\) (we shall read \(R\) as *arrow*), and a family \(P_1,\ldots,P_n\) of subsets of \(S\). The two players \(\forall\) and \(\exists\) play a game \(G\) on \(A\), starting at a state \(s\) which is given them, by reading a suitable logical formula \(\phi\) as a set of instructions for playing and for winning.
Thus if \(\phi\) is \(P_i\), then player \(\exists\) wins at once if \(s\) is in \(P_i\), and otherwise player \(\forall\) wins at once. The formulas \(\psi \wedge \theta\), \(\psi \vee \theta\) and \(\neg\psi\) behave as in Hintikka’s games above; for example \(\psi \wedge \theta\) instructs player \(\forall\) to choose whether the game shall continue as for \(\psi\) or for \(\theta\). If the formula \(\phi\) is \(\Box\psi\), then player \(\forall\) chooses an arrow from \(s\) to a state \(t\) (i.e. a state \(t\) such that the pair \((s, t)\) is in the relation \(R\)), and the game then proceeds from the state \(t\) according to the instructions \(\psi\). The rule for \(\Diamond\psi\) is the same except that player \(\exists\) makes the choice. Finally we say that the formula \(\phi\) is *true at* \(s\) *in* \(A\) if player \(\exists\) has a winning strategy for this game based on \(\phi\) and starting at \(s\).
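On a finite Kripke structure these game rules can again be played out exhaustively, giving a truth check that follows the clauses above one by one. The tuple encoding of formulas and the function name `true_at` are illustrative assumptions.

```python
def true_at(phi, s, R, P):
    """True iff Eloise has a winning strategy in the game for phi
    starting at state s, i.e. iff phi is true at s."""
    op = phi[0]
    if op == 'atom':                    # immediate win or loss at s
        return s in P[phi[1]]
    if op == 'not':                     # dual game: players swap roles
        return not true_at(phi[1], s, R, P)
    if op == 'and':                     # Abelard chooses a conjunct
        return all(true_at(q, s, R, P) for q in phi[1:])
    if op == 'or':                      # Eloise chooses a disjunct
        return any(true_at(q, s, R, P) for q in phi[1:])
    if op == 'box':                     # Abelard moves along an arrow from s
        return all(true_at(phi[1], t, R, P) for (u, t) in R if u == s)
    if op == 'dia':                     # Eloise moves along an arrow from s
        return any(true_at(phi[1], t, R, P) for (u, t) in R if u == s)
    raise ValueError(op)

# Two states with arrows 0 -> 1 and 1 -> 1; the proposition p holds only at 1.
R = {(0, 1), (1, 1)}
P = {'p': {1}}
print(true_at(('box', ('atom', 'p')), 0, R, P))  # True: every arrow from 0 hits p
```

This agrees with the usual Kripke-style clause for \(\Box\): Abelard's arrow choices are exhausted by the `all`, Eloise's by the `any`.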
These games stand to modal logic in very much the same way as Hintikka’s games stand to first-order logic. In particular they are one way of giving a semantics for modal logic, and they agree with the usual Kripke-type semantics. Of course there are many types and generalisations of modal logic (including closely related logics such as temporal, epistemic and dynamic logics), and so the corresponding games come in many different forms. One example of interest is the computer-theoretic logic of Matthew Hennessy and Robin Milner, used for describing the behaviour of systems; here the arrows come in more than one colour, and moving along an arrow of a particular colour represents performing a particular ‘action’ to change the state. Another example is the more powerful modal \(\mu\)-calculus of Dexter Kozen, which has fixed point operators; see Chapter 5 of Stirling 2001.
One interesting feature of these games is that if a player has a winning strategy from some position onwards, then that strategy never needs to refer to anything that happened earlier in the play. It’s irrelevant what choices were made earlier, or even how many steps have been played. So we have what the computer scientists sometimes call a ‘memoryless’ winning strategy.
In the related ‘logic of games’, proposed by Rohit Parikh, games that move us between the states are the subject matter rather than a way of giving a truth definition. These games have many interesting aspects. In 2003 the journal *Studia Logica* ran an issue devoted to them, edited by Marc Pauly and Parikh.
Influences from economics and computer science have led a number of logicians to use logic for analysing decision making under conditions of partial ignorance. (See for example the entry on epistemic logic.) There are several ways to represent states of knowledge. One is to take them as states or worlds in the kind of modal structure that we mentioned at the beginning of this section. Another is to use IF logic or a variant of it. How are these approaches related? Johan van Benthem 2006 presents some thoughts and results on this very natural question. See also the papers by Johan van Benthem, Krister Segerberg, Eric Pacuit and K. Venkatesh and their references, in Part IV ‘Logic, Agency and Games’ of Van Benthem, Gupta and Parikh 2011, and the entry on logics for analyzing games for a sample of recent work in this area.
6. Back-and-Forth Games
In 1930 Alfred Tarski formulated the notion of two structures \(A\) and \(B\) being elementarily equivalent, i.e. that exactly the same first-order sentences are true in \(A\) as are true in \(B\). At a conference in Princeton in 1946 he described this notion and expressed the hope that it would be possible to develop a theory of it that would be ‘as deep as the notions of isomorphism, etc. now in use’ (Tarski 1946).
One natural part of such a theory would be a purely structural necessary and sufficient condition for two structures to be elementarily equivalent. Roland Fraïssé, a French-Algerian, was the first to find a usable necessary and sufficient condition. It was rediscovered a few years later by the Kazakh logician A. D. Taimanov, and it was reformulated in terms of games by the Polish logician Andrzej Ehrenfeucht. The games are now known as Ehrenfeucht-Fraïssé games, or sometimes as back-and-forth games. They have turned out to be one of the most versatile ideas in twentieth-century logic. They adapt fruitfully to a wide range of logics and structures.
In a back-and-forth game there are two structures \(A\) and \(B\), and two players who are commonly called Spoiler and Duplicator. (The names are due to Joel Spencer in the early 1990s. More recently Neil Immerman suggested Samson and Delilah, using the same initials; this casts Spoiler as the male player \(\forall\) and Duplicator as the female \(\exists\).) Each step in the game consists of a move of Spoiler, followed by a move of Duplicator. Spoiler chooses an element of one of the two structures, and Duplicator must then choose an element of the other structure. So after \(n\) steps, two sequences have been chosen, one from \(A\) and one from \(B\):
\[a_0,\ldots,a_{n-1};\quad b_0,\ldots,b_{n-1}.\] This position is a win for Spoiler if and only if some atomic formula (of one of the forms ‘\(R(v_0,\ldots,v_{k-1})\)’ or ‘\(\mathrm{F}(v_0,\ldots,v_{k-1}) = v_k\)’ or ‘\(v_0 = v_1\)’, or one of these with different variables) is satisfied by \((a_0,\ldots,a_{n-1})\) in \(A\) but not by \((b_0,\ldots,b_{n-1})\) in \(B\), or vice versa. The condition for Duplicator to win is different in different forms of the game. In the simplest form, \(EF(A,B)\), a play is a win for Duplicator if and only if no initial part of it is a win for Spoiler (i.e. she wins if she hasn’t lost by any finite stage). For each natural number \(m\) there is a game \(EF_m(A,B)\); in this game Duplicator wins after \(m\) steps provided she has not yet lost. All these games are determined, by the Gale-Stewart Theorem. The two structures \(A\) and \(B\) are said to be back-and-forth equivalent if Duplicator has a winning strategy for \(EF(A,B)\), and \(m\)-equivalent if she has a winning strategy for \(EF_m(A,B)\).
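On small finite structures the question of who wins \(EF_m(A,B)\) can be settled by brute-force search over the game tree. The sketch below is our own illustration (the function names are not from the text): it plays the game on two finite linear orders, checking at each position that the chosen sequences form a partial isomorphism.

```python
from itertools import product

def is_partial_iso(xs, ys, rel_a, rel_b):
    """Check that the map a_i -> b_i preserves equality and the binary
    relation in both directions; Spoiler wins at this position otherwise."""
    for i, j in product(range(len(xs)), repeat=2):
        if (xs[i] == xs[j]) != (ys[i] == ys[j]):
            return False
        if ((xs[i], xs[j]) in rel_a) != ((ys[i], ys[j]) in rel_b):
            return False
    return True

def duplicator_wins(A, B, rel_a, rel_b, m, xs=(), ys=()):
    """True iff Duplicator has a winning strategy in EF_m from this position."""
    if not is_partial_iso(xs, ys, rel_a, rel_b):
        return False
    if m == 0:
        return True
    # Spoiler may pick in A, Duplicator answers in B -- and vice versa.
    in_a = all(any(duplicator_wins(A, B, rel_a, rel_b, m - 1, xs + (a,), ys + (b,))
                   for b in B)
               for a in A)
    in_b = all(any(duplicator_wins(A, B, rel_a, rel_b, m - 1, xs + (a,), ys + (b,))
                   for a in A)
               for b in B)
    return in_a and in_b

# Linear orders with 3 and 4 elements:
A, B = range(3), range(4)
ra = {(i, j) for i in A for j in A if i < j}
rb = {(i, j) for i in B for j in B if i < j}
print(duplicator_wins(A, B, ra, rb, 2))  # True: Duplicator survives 2 steps
print(duplicator_wins(A, B, ra, rb, 3))  # False: Spoiler wins EF_3
```

This matches the classical fact that Duplicator wins \(EF_m\) on two finite linear orders exactly when they are equal or both have at least \(2^m - 1\) elements.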
One can prove that if \(A\) and \(B\) are \(m\)-equivalent for every natural number \(m\), then they are elementarily equivalent. In fact, if Eloise has a winning strategy \(\tau\) in the Hintikka game \(G(\phi)\) on \(A\), where the nesting of quantifier scopes of \(\phi\) has at most \(m\) levels, and Duplicator has a winning strategy \(\varrho\) in the game \(EF_m(A,B)\), then the two strategies \(\tau\) and \(\varrho\) can be composed into a winning strategy for Eloise in \(G(\phi)\) on \(B\). On the other hand a winning strategy for Spoiler in \(EF_m(A,B)\) can be converted into a first-order sentence that is true in exactly one of \(A\) and \(B\), and in which the nesting of quantifier scopes has at most \(m\) levels. So we have our necessary and sufficient condition for elementary equivalence, and a bit more besides.
If \(A\) and \(B\) are back-and-forth equivalent, then certainly they are elementarily equivalent; but in fact back-and-forth equivalence turns out to be the same as elementary equivalence in an infinitary logic which is much more expressive than first-order logic. There are many adjustments of the game that give other kinds of equivalence. For example Barwise, Immerman and Bruno Poizat independently described a game in which the two players have exactly \(p\) numbered pebbles each; each player has to label his or her choices with a pebble, and the two choices in the same step must be labelled with pebbles carrying the same number. As the game proceeds, the players will run out of pebbles and so they will have to re-use pebbles that were already used. The condition for Spoiler to win at a position (and all subsequent positions) is the same as before, except that only the elements carrying labels at that position count. The existence of a winning strategy for Duplicator in this game means that the two structures agree for sentences which use at most \(p\) variables (allowing these variables to occur any number of times).
The theory behind back-and-forth games uses very few assumptions about the logic in question. As a result, these games are one of the few model-theoretic techniques that apply as well to finite structures as they do to infinite ones, and this makes them one of the cornerstones of theoretical computer science. One can use them to measure the expressive strength of formal languages, for example database query languages. A typical result might say that a certain language can’t distinguish between ‘even’ and ‘odd’; we would prove this by finding, for each level \(n\) of complexity of formulas of the language, a pair of finite structures for which Duplicator has a winning strategy in the back-and-forth game of level \(n\), but where one structure has an even number of elements and the other has an odd number. Semanticists of natural languages have found back-and-forth games useful for comparing the expressive powers of generalised quantifiers. (See for example Peters and Westerståhl 2006, Section IV.)
There is also a kind of back-and-forth game that corresponds to our modal semantics above in the same way as Ehrenfeucht-Fraïssé games correspond to Hintikka’s game semantics for first-order logic. The players start with a state \(s\) in the structure \(A\) and a state \(t\) in the structure \(B\). Spoiler and Duplicator move alternately, as before. Each time he moves, Spoiler chooses whether to move in \(A\) or in \(B\), and then Duplicator must move in the other structure. A move is always made by going forwards along an arrow from the current state. If between them the two players have just moved to a state \(s'\) in \(A\) and a state \(t'\) in \(B\), and some predicate \(P_i\) holds at just one of \(s'\) and \(t'\), then Duplicator loses at once. Also she loses if there are no available arrows for her to move along; but if Spoiler finds there are no available arrows for him to move along in either structure, then Duplicator wins. If the two players play this game with given starting states \(s\) in \(A\) and \(t\) in \(B\), and both structures have just finitely many states, then one can show that Duplicator has a winning strategy if and only if the same modal sentences are true at \(s\) in \(A\) as are true at \(t\) in \(B\).
There are many generalisations of this result, some of them involving the following notion. Let \(Z\) be a binary relation which relates states of \(A\) to states of \(B\). Then we call \(Z\) a bisimulation between \(A\) and \(B\) if Duplicator can use \(Z\) as a nondeterministic winning strategy in the back-and-forth game between \(A\) and \(B\) where the first pair of moves of the two players is to choose their starting states. In computer science the notion of a bisimulation is crucial for the understanding of \(A\) and \(B\) as systems; it expresses that the two systems interact with their environment in the same way as each other, step for step. But a little before the computer scientists introduced the notion, essentially the same concept appeared in Johan van Benthem’s PhD thesis on the semantics of modal logic (1976).
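For finite systems the largest bisimulation can be computed by a simple fixpoint: start from all pairs of states that agree on the predicates, then repeatedly discard any pair where one player has a move the other cannot match. The sketch below is our own illustration (the structures and names are invented for the example, not taken from the text).

```python
def largest_bisimulation(arrows_a, arrows_b, labels_a, labels_b):
    """Greatest bisimulation between two finite Kripke structures.
    arrows_x: dict mapping each state to its set of successor states;
    labels_x: dict mapping each state to the frozenset of predicates true there."""
    Z = {(s, t) for s in labels_a for t in labels_b if labels_a[s] == labels_b[t]}
    changed = True
    while changed:
        changed = False
        for s, t in list(Z):
            # every Spoiler move in A must be matchable in B, and vice versa
            fwd = all(any((s2, t2) in Z for t2 in arrows_b.get(t, ()))
                      for s2 in arrows_a.get(s, ()))
            back = all(any((s2, t2) in Z for s2 in arrows_a.get(s, ()))
                       for t2 in arrows_b.get(t, ()))
            if not (fwd and back):
                Z.discard((s, t))
                changed = True
    return Z

# A branches twice, B only once; all states satisfy the same predicates,
# so the roots are bisimilar even though the structures are not isomorphic.
arrows_a = {'s0': {'s1', 's2'}, 's1': set(), 's2': set()}
arrows_b = {'t0': {'t1'}, 't1': set()}
p = frozenset()
labels_a = {'s0': p, 's1': p, 's2': p}
labels_b = {'t0': p, 't1': p}
print(('s0', 't0') in largest_bisimulation(arrows_a, arrows_b, labels_a, labels_b))
```

The example illustrates the point in the text: bisimulation compares step-for-step interactive behaviour, so it ignores mere duplication of branches.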
7. Other Model-theoretic Games
The logical games in this section are mathematicians’ tools, but they have some conceptually interesting features.
7.1 Forcing games
Forcing games are also known to descriptive set theorists as Banach-Mazur games; see the references by Kechris and Oxtoby below for more details of the mathematical background. Model theorists use them as a way of building infinite structures with controlled properties. In the simplest case \(\forall\) and \(\exists\) play a so-called Model Existence Game, where \(\exists\) claims that a fixed sentence \(\phi\) has a model while \(\forall\) claims that he can derive a contradiction from \(\phi\). At the beginning a countably infinite set \(C\) of new constant symbols \(a_0, a_1, a_2\), etc. is fixed. \(\exists\) defends a disjunction by choosing one disjunct, and an existential statement by choosing a constant from \(C\) as a witness. \(\forall\) can challenge a conjunction by choosing either conjunct, and a universal statement by choosing an arbitrary witness from \(C\). \(\exists\) wins if no contradictory atomic sentences are played. \(\exists\) has a winning strategy (a Consistency Property is one way of describing a winning strategy) if and only if \(\phi\) has a model. On the other hand, if \(\forall\) has a winning strategy, the tree (which can be made finite) of all plays against his winning strategy is related to a Gentzen-style proof of the negation of \(\phi\). This method of analysing sentences is closely related to Beth’s method of semantic tableaux and the Dialogical Game (see Section 8).
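For the propositional part of the game the winning condition can be checked mechanically. The sketch below is our own illustration, restricted to propositional connectives in negation normal form: \(\exists\) chooses disjuncts, and \(\forall\)’s ability to return and challenge both conjuncts over the course of a play is modelled by simply adding both conjuncts at once.

```python
def eloise_wins(sentences):
    """Does Eloise win the (propositional) Model Existence Game on this
    frozenset of sentences?  Formulas in negation normal form are tuples:
    ('lit', p, polarity), ('and', f, g), ('or', f, g)."""
    lits = {(f[1], f[2]) for f in sentences if f[0] == 'lit'}
    if any((p, not pol) in lits for p, pol in lits):
        return False          # Abelard has exposed contradictory literals
    for f in sentences:
        if f[0] == 'and':     # Abelard can eventually challenge both conjuncts
            return eloise_wins((sentences - {f}) | {f[1], f[2]})
        if f[0] == 'or':      # Eloise defends by choosing one disjunct
            return (eloise_wins((sentences - {f}) | {f[1]})
                    or eloise_wins((sentences - {f}) | {f[2]}))
    return True               # only consistent literals remain: Eloise wins

p, not_p = ('lit', 'p', True), ('lit', 'p', False)
q = ('lit', 'q', True)
print(eloise_wins(frozenset({('and', ('or', p, q), not_p)})))  # satisfiable via q
print(eloise_wins(frozenset({('and', p, not_p)})))             # a contradiction
```

As the text states in general, Eloise has a winning strategy exactly when the sentence has a model; in the propositional case this is just satisfiability.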
To sketch the idea of the general Forcing Game, imagine that a countably infinite team of builders is building a house \(A\). Each builder has his or her own task to carry out: for example to install a bath or to wallpaper the entrance hall. Each builder has infinitely many chances to enter the site and add some finite amount of material to the house; these slots for the builders are interleaved so that the whole process takes place in a sequence of steps counted by the natural numbers.
To show that the house can be built to order, we need to show that each builder separately can carry out his or her appointed task, regardless of what the other builders do. So we imagine each builder as player \(\exists\) in a game where all the other players are lumped together as \(\forall\), and we aim to prove that \(\exists\) has a winning strategy for this game. When we have proved this for each builder separately, we can imagine them going to work, each with their own winning strategy. They all win their respective games, and the result is one beautiful house.
More technically, the elements of the structure \(A\) are fixed in advance, say as \(a_0, a_1, a_2\), etc., but the properties of these elements have to be settled by the play. Each player moves by throwing in a set of atomic or negated atomic statements about the elements, subject only to the condition that the set consisting of all the statements thrown in so far must be consistent with a fixed set of axioms written down before the game. (So throwing in a negated atomic sentence \(\neg\phi\) has the effect of preventing any player from adding \(\phi\) at a later stage.) At the end of the joint play, the set of atomic sentences thrown in has a canonical model, and this is the structure \(A\); there are ways of ensuring that it is a model of the fixed set of axioms. A possible property \(P\) of \(A\) is said to be enforceable if a builder who is given the task of making \(P\) true of \(A\) has a winning strategy. A central point (due essentially to Ehrenfeucht) is that the conjunction of a countably infinite set of enforceable properties is again enforceable.
Various Löwenheim-Skolem theorems of model theory can be proved using variants of the Forcing Game. In these variants we construct not a model but a submodel of a given model. We start with a big model \(M\) of a sentence (or a countable set of sentences) \(\phi\). Then we list the subformulas of \(\phi\), and each player has a subformula with a free variable to attend to. The player’s task is to make sure that as soon as the parameters of the subformula occur in the game, and there is a witness to the truth of the formula in the big model, one such witness is played. When the game is over, a countable submodel of \(M\) has been built in such a way that it satisfies \(\phi\).
The name ‘forcing’ comes from an application of related ideas by Paul Cohen to construct models of set theory in the early 1960s. Abraham Robinson adapted it to make a general method for building countable structures, and Martin Ziegler introduced the game setting. Later Robin Hirsch and Ian Hodkinson used related games to settle some old questions about relation algebras.
Forcing games are a healthy example to bear in mind when thinking about the Dawkins question. They remind us that in logical games it need not be helpful to think of the players as opposing each other.
7.2 Cut-and-choose games
In the traditional cut-and-choose game you take a piece of cake and cut it into two smaller pieces; then I choose one of the pieces and eat it, leaving the other one for you. This procedure is supposed to put pressure on you to cut the cake fairly. Mathematicians, not quite understanding the purpose of the exercise, insist on iterating it. Thus I make you cut the piece I chose into two, then I choose one of those two; then you cut this piece again, and so on indefinitely. Some even more unworldly mathematicians make you cut the cake into countably many pieces instead of two.
These games are important in the theory of definitions. Suppose we have a collection \(A\) of objects and a family \(S\) of properties; each property cuts \(A\) into the set of those objects that have the property and the set of those that don’t. Let \(\exists\) cut, starting with the whole set \(A\) and using a property in \(S\) as a knife; let \(\forall\) choose one of the pieces (which are subsets of \(A\)) and give it back to \(\exists\) to cut again, once more using a property in \(S\); and so on. Let \(\exists\) lose as soon as \(\forall\) chooses an empty piece. We say that \((A,S)\) has rank at most \(m\) if \(\forall\) has a strategy which ensures that \(\exists\) will lose before her \(m\)-th move. The rank of \((A,S)\) gives valuable information about the family of subsets of \(A\) definable by properties in \(S\).
Variations of this game, allowing a piece to be cut into infinitely many smaller pieces, are fundamental in the branch of model theory called stability theory. Broadly speaking, a theory is ‘good’ in the sense of stability theory if, whenever we take a model \(A\) of the theory and let \(S\) be the set of first-order formulas in one free variable with parameters from \(A\), the structure \((A,S)\) has ‘small’ rank. A different variation is to require that at each step, \(\exists\) divides into two each of the pieces that have survived from earlier steps, and again she loses as soon as one of the cut fragments is empty. (In this version \(\forall\) is redundant.) With this variation, the rank of \((A,S)\) is called its Vapnik-Chervonenkis dimension; this notion is used in computational learning theory.
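On finite examples this last notion can be computed directly, using the standard combinatorial characterisation of Vapnik-Chervonenkis dimension: the size of the largest set of points that the family ‘shatters’, i.e. cuts into all possible pieces. The sketch below is our own illustration, not from the text.

```python
from itertools import combinations

def vc_dimension(universe, family):
    """Largest n such that some n points of the universe are shattered by
    the family: every one of the 2^n subsets of the points is cut out by
    intersecting the points with some set in the family."""
    best = 0
    for n in range(1, len(universe) + 1):
        if any(len({frozenset(pts) & S for S in family}) == 2 ** n
               for pts in combinations(universe, n)):
            best = n
        else:
            break
    return best

# Intervals on a line shatter any 2 points but never 3: no interval
# picks out the two outer points while omitting the middle one.
points = range(5)
intervals = [frozenset(range(i, j)) for i in range(6) for j in range(i, 6)]
print(vc_dimension(points, intervals))  # 2
```

The cake-cutting picture survives in the code: each set in the family is a knife, and a set of points is shattered when the knives can separate it in every possible way.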
7.3 Games on the tree of two successor functions
Imagine a tree that has been built up in levels. At the bottom level there is a single root node, from which a left branch and a right branch come up. At the next level up there are two nodes, one on each branch, and from each of these nodes a left branch and a right branch grow up. So on the next level up there are four nodes, and again the tree branches into left and right at each of these nodes. Continued to infinity, this tree is called the tree of two successor functions (namely left successor and right successor). Taking the nodes as elements and introducing two function symbols for left and right successor, we have a structure. A powerful theorem of Michael Rabin states that there is an algorithm which will tell us, for every monadic second-order sentence \(\phi\) in the language appropriate for this structure, whether or not \(\phi\) is true in the structure. (‘Monadic second-order’ means that the logic is like first-order logic, except that we can also quantify over sets of elements, but not over binary relations on elements, for example.)
Rabin’s theorem has any number of useful consequences. For example Dov Gabbay used it to prove the decidability of some modal logics. But Rabin’s proof, using automata, was notoriously difficult to follow. Yuri Gurevich and Leo Harrington, and independently Andrei Muchnik, found much simpler proofs in which the automaton is a player in a game.
This result of Rabin’s is one of several influential results that connect games with automata. Another example is the parity games which are used for verifying properties of modal systems. See for example Stirling (2001) Chapter 6; Bradfield and Stirling (2006) discuss parity games for the modal \(\mu\)-calculus.
8. Games of dialogue, communication and proof
Several medieval texts describe a form of debate called obligationes. There were two disputants, Opponens and Respondens. At the beginning of a session, the disputants would agree on a ‘positum’, typically a false statement. The job of Respondens was to give rational answers to questions from Opponens, assuming the truth of the positum; above all he had to avoid contradicting himself unnecessarily. The job of Opponens was to try to force Respondens into contradictions. So we broadly know the answer to the Dawkins question, but we don’t know the game rules! The medieval textbooks do describe several rules that the disputants should follow. But these rules are not stipulated rules of the game; they are guidelines which the textbooks derive from principles of sound reasoning with the aid of examples. (Paul of Venice justifies one rule by the practice of ‘great logicians, philosophers, geometers and theologians’.) In particular it was open to a teacher of obligationes to discover new rules. This open-endedness implies that obligationes are not logical games in our sense.
Not everybody agrees with the previous sentence. For example Catarina Dutilh Novaes (2007, 6) makes a detailed defence of the view that obligationes present “a remarkable case of conceptual similarity between a medieval and a modern theoretical framework”. But whatever view we take on this question, these debates have inspired one important line of modern research in logical games.
Imagine \(\exists\) taking an oral examination in proof theory. The examiner gives her a sentence and invites her to start proving it. If the sentence has the form
\[\phi \vee \psi\] then she is entitled to choose one of the sentences and say ‘OK, I’ll prove this one’. (In fact if the examiner is an intuitionist, he may insist that she choose one of the sentences to prove.) On the other hand if the sentence is
\[\phi \wedge \psi\] then the examiner, being an examiner, might well choose one of the conjuncts himself and invite her to prove that one. If she knows how to prove the conjunction then she certainly knows how to prove the conjunct.
The case of \(\phi \rightarrow \psi\) is a little subtler. She will probably want to start by assuming \(\phi\) in order to deduce \(\psi\); but there is some risk of confusion, because the sentences that she has written down so far are all of them things to be proved, and \(\phi\) is not a thing to be proved. The examiner can help her by saying ‘I’ll assume \(\phi\), and let’s see if you can get to \(\psi\) from there’. At this point there is a chance that she sees a way of getting to \(\psi\) by deducing a contradiction from \(\phi\); so she may turn the tables on the examiner and invite him to show that his assumption is consistent, with a view to proving that it isn’t. The symmetry is not perfect: he was asking her to show that a sentence is true everywhere, while she is inviting him to show that a sentence is true somewhere. Nevertheless we can see a sort of duality.
Ideas of this kind lie behind the dialectical games of Paul Lorenzen. He showed that with a certain amount of pushing and shoving, one can write rules for the game which have the property that \(\exists\) has a winning strategy if and only if the sentence that she is presented with at the beginning is a theorem of intuitionistic logic. In a gesture towards medieval debates, he called \(\exists\) the Proponent and the other player the Opponent. Almost as in the medieval obligationes, the Opponent wins by driving the Proponent to a point where the only moves available to her are blatant self-contradictions.
Lorenzen claimed that his games provided justifications for both intuitionist and classical logic (or in his words, made them ‘gerechtfertigt’, Lorenzen 1961, 196). Unfortunately any ‘justification’ involves a convincing answer to the Dawkins question, and this Lorenzen never provided. For example he spoke of moves as ‘attacks’, even when (like the examiner’s choice at \(\phi \wedge \psi\) above) they look more like help than hostility.
The entry on dialogical logic gives a fuller account of Lorenzen’s games and a number of more recent variants. In its present form (January 2013) it sidesteps Lorenzen’s claims about justifying logics. Instead it describes the games as providing semantics for the logics (a point that Lorenzen would surely have agreed with), and adds that for understanding the differences between logics it can be helpful to compare their semantics.
From this point of view, Lorenzen’s games stand as an important paradigm of what recent proof theorists have called semantics of proofs. A semantics of proofs gives a ‘meaning’ not just to the notion of being provable, but to each separate step in a proof. It answers the question ‘What do we achieve by making this particular move in the proof?’ During the 1990s a number of workers at the logical end of computer science looked for games that would stand to linear logic and some other proof systems in the same way as Lorenzen’s games stood to intuitionist logic. Andreas Blass, and later Samson Abramsky and colleagues, gave games that corresponded to parts of linear logic, but at the time of writing we don’t yet have a perfect correspondence between game and logic. This example is particularly interesting because the answer to the Dawkins question should give an intuitive interpretation of the laws of linear logic, a thing that this logic has badly needed. The games of Abramsky et al. tell a story about two interacting systems. But while he began with games in which the players politely take turns, Abramsky later allowed the players to act ‘in a distributed, asynchronous fashion’, taking notice of each other only when they choose to. These games are no longer in the normal format of logical games, and their real-life interpretation raises a host of new questions.
Giorgi Japaridze has proposed a ‘computability logic’ for studying computation. Its syntax is first-order logic with some extra items reminiscent of linear logic. Its semantics is in terms of semantic games with some unusual features. For example it is not always determined which player makes the next move. The notion of strategy functions is no longer adequate for describing the players; instead Japaridze describes ways of reading the second player (player \(\exists\) in our notation) as a kind of computing machine. Further information is on his website.
Another group of games of the same general family as Lorenzen’s are the proof games of Pavel Pudlák 2000. Here the Opponent (called Prover) is in the role of an attorney in a court of law, who knows that the Proponent (called Adversary) is guilty of some offence. Proponent will insist he is innocent, and is prepared to tell lies to defend himself. Opponent’s aim is to force Proponent to contradict something that Proponent is on record as having said earlier; but Opponent keeps the record and (as in the pebble games above) he sometimes has to drop items from the record for lack of space or memory. The important question is not whether Opponent has a winning strategy (it’s assumed from the outset that he has one), but how much memory he needs for his record. These games are a useful device for showing upper and lower bounds on the lengths of proofs in various proof systems.
Another kind of logical game that allows lies is Ulam’s Game with Lies. Here one player thinks of a number in some given range. The second player’s aim is to find out what that number is, by asking the first player yes/no questions; but the first player is allowed to tell some fixed number of lies in his answers. As in Pudlák’s games, there is certainly a winning strategy for the second player, but the question is how hard this player has to work in order to win. The measure this time is not space or memory but time: how many questions does he have to ask? Cignoli et al. 2000, Chapter 5, relate this game to many-valued logic.
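A sphere-packing (‘volume’) argument gives a lower bound on the number of questions needed: with \(q\) questions and at most \(k\) lies, each candidate number is compatible with at most \(\sum_{i \le k} \binom{q}{i}\) patterns of lies, so \(2^q\) must be at least \(n\) times that sum. The sketch below (our own illustration; the function name is invented) computes the smallest \(q\) passing this bound. For one lie among a million numbers it gives 25 questions, which is the classic answer to Ulam’s problem.

```python
from math import comb

def min_questions(n, lies):
    """Smallest q with 2**q >= n * sum_{i<=lies} C(q, i): the volume
    lower bound for searching among n numbers with at most `lies` lies."""
    q = 0
    while 2 ** q < n * sum(comb(q, i) for i in range(lies + 1)):
        q += 1
    return q

print(min_questions(10 ** 6, 0))  # 20: plain binary search
print(min_questions(10 ** 6, 1))  # 25: Ulam's problem with one lie
```

The bound is in general only a lower bound on the second player’s work; showing that it can actually be met, as it can here, takes a cleverly designed questioning strategy.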
To return for a moment to Lorenzen: he failed to distinguish between different stances that a person might take in an argument: stating, assuming, conceding, querying, attacking, committing oneself. Whether it is really possible to define all these notions without presupposing some logic is a moot point. But never mind that; a refinement of Lorenzen’s games along these lines could serve as an approach to informal logic, and especially to the research that aims to systematise the possible structures of sound informal argument. On this front see Walton and Krabbe 1995. The papers in Bench-Capon and Dunne 2007 are also relevant.
Bibliography
Some of the seminal papers by Henkin and Lorenzen, and some of the papers cited below, appear in the collection Infinitistic Methods (Proceedings of the Symposium on Foundations of Mathematics, Warsaw, 2–9 September 1959), Oxford: Pergamon Press, 1961. The editors are unnamed.
Games in the History of Logic
- Dutilh Novaes, Catarina, 2007, Formalizing Medieval Logical Theories: Suppositio, Consequentiae and Obligationes, New York: Springer-Verlag.
- Hamblin, Charles, 1970, Fallacies, London: Methuen.
- Hilbert, David, 1967, “Die Grundlagen der Mathematik”, translated as “The foundations of mathematics,” in Jean van Heijenoort (ed.), From Frege to Gödel, Cambridge Mass.: Harvard University Press, pp. 464–479.
- Paul of Venice, Logica Magna II (8), Tractatus de Obligationibus, E. Jennifer Ashworth (ed.), New York: British Academy and Oxford University Press, 1988.
- Weyl, Hermann, 1925–7, “Die heutige Erkenntnislage in der Mathematik,” translated as “The current epistemological situation in mathematics,” in Paolo Mancosu, From Brouwer to Hilbert: The Debate on the Foundations of Mathematics in the 1920s, New York: Oxford University Press, 1988, pp. 123–142.
- Zermelo, Ernst, 1913, “Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels,” in E. W. Hobson and A. E. H. Love (eds.), Proceedings of the Fifth International Congress of Mathematicians, Volume II, Cambridge: Cambridge University Press.
Games for Teaching Logic
- Barwise, Jon and John Etchemendy, 1995, The Language of First-Order Logic, including Tarski’s World 3.0, Cambridge: Cambridge University Press.
- Carroll, Lewis, 1887, The Game of Logic, London: Macmillan.
- Dienes, Zoltan P. and E. W. Golding, 1966, Learning Logic, Logical Games, Harlow: Educational Supply Association.
- Havas, Katalin, 1999, “Learning to think: Logic for children,” in Proceedings of the Twentieth World Congress of Philosophy (Volume 3: Philosophy of Education), David M. Steiner (ed.), Bowling Green, Ohio: Bowling Green State University Philosophy, pp. 11–19.
- Nifo, Agostino, 1521, Dialectica Ludicra (Logic as a game), Florence: Bindonis.
- Weng, Jui-Feng, with Shian-Shyong Tseng and Tsung-Ju Lee, 2010, “Teaching Boolean logic through game rule tuning,” IEEE Transactions on Learning Technologies, 3(4): 319–328. [Uses Pac-Man games to teach Boolean logic to junior high school students.]
Logical Games
- Gale, David and F. M. Stewart, 1953, “Infinite games with perfect information,” in Contributions to the Theory of Games II (Annals of Mathematics Studies 28), H. W. Kuhn and A. W. Tucker (eds.), Princeton: Princeton University Press, pp. 245–266.
- Kechris, Alexander S., 1995, Classical Descriptive Set Theory, New York: Springer-Verlag.
- Marion, Mathieu, 2009, “Why play logical games?,” in Ondrej Majer, Ahti-Veikko Pietarinen, and Tero Tulenheimo (eds.), Games: Unifying Logic, Language and Philosophy, New York: Springer-Verlag, pp. 3–25.
- Osborne, Martin J. and Ariel Rubinstein, 1994, A Course in Game Theory, Cambridge: MIT Press.
- Väänänen, Jouko, 2011, Models and Games, Cambridge: Cambridge University Press.
- van Benthem, Johan, 2011, Logical Dynamics of Information and Interaction, Cambridge: Cambridge University Press.
- –––, 2014, Logic in Games, Cambridge, MA: MIT Press.
Semantic Games for Classical Logic
- Henkin, Leon, 1961, “Some remarks on infinitely long formulas,” in Infinitistic Methods, op. cit., pp. 167–183.
- Hintikka, Jaakko, 1973, Logic, Language-Games and Information: Kantian Themes in the Philosophy of Logic, Oxford: Clarendon Press.
- –––, 1996, The Principles of Mathematics Revisited, New York: Cambridge University Press. [See for example pages 40, 82 on the axiom of choice.]
- Hodges, Wilfrid, 2001, “Elementary Predicate Logic 25: Skolem Functions,” in Dov Gabbay and Franz Guenthner (eds.), Handbook of Philosophical Logic I, 2nd edition, Dordrecht: Kluwer, pp. 86–91. [Proof of equivalence of game and Tarski semantics.]
- Kolaitis, Ph. G., 1985, “Game quantification,” in J. Barwise and S. Feferman (eds.), Model-Theoretic Logics, New York: Springer-Verlag, pp. 365–421.
- Peirce, Charles Sanders, 1898, Reasoning and the Logic of Things: The Cambridge Conferences Lectures of 1898, Kenneth Laine Ketner (ed.), Cambridge Mass.: Harvard University Press, 1992.
Semantic Games with Imperfect Information
- Hintikka, Jaakko and Gabriel Sandu, 1997, “Game-theoretical semantics,” in Johan van Benthem and Alice ter Meulen (eds.), Handbook of Logic and Language, Amsterdam: Elsevier, pp. 361–410.
- Hodges, Wilfrid, 1997, “Compositional semantics for a language of imperfect information,” Logic Journal of the IGPL, 5: 539–563.
- Janssen, Theo M. V. and Francien Dechesne, 2006, “Signalling: a tricky business,” in J. van Benthem et al. (eds.), The Age of Alternative Logics: Assessing the Philosophy of Logic and Mathematics Today, Dordrecht: Kluwer, pp. 223–242.
- Mann, Allen L., Gabriel Sandu, and Merlin Sevenster, 2011, Independence-Friendly Logic: A Game-Theoretic Approach (London Mathematical Society Lecture Note Series 386), Cambridge: Cambridge University Press.
- von Neumann, John and Oskar Morgenstern, 1944, Theory of Games and Economic Behavior, Princeton: Princeton University Press.
- Väänänen, Jouko, 2007, Dependence Logic: A New Approach to Independence Friendly Logic, Cambridge: Cambridge University Press.
Semantic Games for Other Logics
- Bradfield, Julian and Colin Stirling, 2006, “Modal mu-calculi,” in P. Blackburn et al. (eds.), Handbook of Modal Logic, Amsterdam: Elsevier, pp. 721–756.
- Dekker, Paul and Marc Pauly (eds.), 2002, Journal of Logic, Language and Information, 11(3): 287–387. [Special issue on Logic and Games.]
- Hennessy, Matthew and Robin Milner, 1985, “Algebraic laws for indeterminism and concurrency,” Journal of the ACM, 32: 137–162.
- Parikh, Rohit, 1985, “The logic of games and its applications,” in Marek Karpinski and Jan van Leeuwen (eds.), “Topics in the Theory of Computation,” Annals of Discrete Mathematics, 24: 111–140.
- Pauly, Marc and Rohit Parikh (eds.), 2003, Studia Logica, 72(2): 163–256. [Special issue on Game Logic.]
- Stirling, Colin, 2001, Modal and Temporal Properties of Processes, New York: Springer-Verlag.
- van Benthem, Johan, 2006, “The epistemic logic of IF games,” in Randall Auxier and Lewis Hahn (eds.), The Philosophy of Jaakko Hintikka, Chicago: Open Court, pp. 481–513.
- van Benthem, Johan, with Amitabha Gupta and Rohit Parikh, 2011, Proof, Computation and Agency, Dordrecht: Springer-Verlag.
Back-and-Forth Games
- Blackburn, Patrick, with Maarten de Rijke and Yde Venema, 2001, Modal Logic, Cambridge: Cambridge University Press.
- Doets, Kees, 1996, Basic Model Theory, Stanford: CSLI Publications and FoLLI.
- Ebbinghaus, Heinz-Dieter and Jörg Flum, 1999, Finite Model Theory, 2nd edition, New York: Springer.
- Ehrenfeucht, Andrzej, 1961, “An application of games to the completeness problem for formalized theories,” Fundamenta Mathematicae, 49: 129–141.
- Grädel, Erich, with Phokion G. Kolaitis, Leonid Libkin, Maarten Marx, Joel Spencer, Moshe Y. Vardi, Yde Venema, and Scott Weinstein, 2007, Finite Model Theory, Berlin: Springer-Verlag.
- Libkin, Leonid, 2004, Elements of Finite Model Theory, Berlin: Springer-Verlag.
- Otto, Martin, 1997, Bounded Variable Logics and Counting—A Study in Finite Models (Lecture Notes in Logic 9), Berlin: Springer-Verlag.
- Peters, Stanley and Dag Westerståhl, 2006, Quantifiers in Language and Logic, Oxford: Clarendon Press.
- Tarski, Alfred, 1946, “Address at the Princeton University Bicentennial Conference on Problems of Mathematics (December 17–19, 1946),” Hourya Sinaceur (ed.), Bulletin of Symbolic Logic, 6 (2000): 1–44.
- van Benthem, Johan, 2001, “Correspondence Theory,” in Dov Gabbay and Franz Guenthner (eds.), Handbook of Philosophical Logic III, 2nd edition, Dordrecht: Kluwer.
Other Model-Theoretic Games
- Anthony, Martin, and Norman Biggs, 1992, Computational Learning Theory, Cambridge: Cambridge University Press. [For Vapnik-Chervonenkis dimension.]
- Gurevich, Yuri and Leo Harrington, 1984, “Trees, automata, and games,” in H. R. Lewis (ed.), Proceedings of the ACM Symposium on the Theory of Computing, San Francisco: ACM, pp. 171–182.
- Hirsch, Robin and Ian Hodkinson, 2002, Relation Algebras by Games, New York: North-Holland.
- Hodges, Wilfrid, 1985, Building Models by Games, Cambridge: Cambridge University Press.
- Hodges, Wilfrid, 1993, Model Theory, Cambridge: Cambridge University Press.
- Oxtoby, J. C., 1971, Measure and Category, New York: Springer-Verlag.
- Ziegler, Martin, 1980, “Algebraisch abgeschlossene Gruppen,” in S. I. Adian et al. (eds.), Word Problems II: The Oxford Book, Amsterdam: North-Holland, pp. 449–576.
Games of Dialogue, Communication and Proof
- Abramsky, Samson and Radha Jagadeesan, 1994, “Games and full completeness for multiplicative linear logic,” Journal of Symbolic Logic, 59: 543–574.
- Abramsky, Samson and Paul-André Melliès, 1999, “Concurrent games and full completeness,” in Proceedings of the Fourteenth International Symposium on Logic in Computer Science, Computer Science Press of the IEEE, pp. 431–442.
- Bench-Capon, T. J. M. and Paul E. Dunne, 2007, “Argumentation in artificial intelligence,” Artificial Intelligence, 171: 619–641. [The introduction to a rich collection of papers on the same theme on pages 642–937.]
- Blass, Andreas, 1992, “A game semantics for linear logic,” Annals of Pure and Applied Logic, 56: 183–220.
- Cignoli, Roberto L. O., Itala M. L. D’Ottaviano, and Daniele Mundici, 2000, Algebraic Foundations of Many-Valued Reasoning, Dordrecht: Kluwer.
- Felscher, Walter, 2001, “Dialogues as a foundation for intuitionistic logic,” in Dov Gabbay and Franz Guenthner (eds.), Handbook of Philosophical Logic V, 2nd edition, Dordrecht: Kluwer.
- Hodges, Wilfrid and Erik C. W. Krabbe, 2001, “Dialogue foundations,” Proceedings of the Aristotelian Society (Supplementary Volume), 75: 17–49.
- Japaridze, Giorgi, 2003, “Introduction to computability logic,” Annals of Pure and Applied Logic, 123: 1–99.
- Lorenzen, Paul, 1961, “Ein dialogisches Konstruktivitätskriterium,” in Infinitistic Methods, op. cit., 1961, pp. 193–200.
- Pudlak, Pavel, 2000, “Proofs as games,” American Mathematical Monthly, 107(6): 541–550.
- Walton, Douglas N. and Erik C. W. Krabbe, 1995, Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning, Albany: State University of New York Press.
Related Entries
game theory | generalized quantifiers | logic: classical | logic: dialogical | logic: epistemic | logic: for analyzing games | logic: independence friendly | logic: infinitary | logic: informal | logic: intuitionistic | logic: linear | logic: modal | model theory | set theory