 Question
 Subject: game theory
 Category: Business and Money > Economics
 Asked by: vitaminc-ga
 List Price: $10.00
 Posted: 03 Nov 2002 22:27 PST
 Expires: 03 Dec 2002 22:27 PST
 Question ID: 97923
 ```1) Consider the following two-person normal-form game:

   2,2  2,2  2,2  2,2
   2,2  2,2  2,2  2,2
   3,3  3,3  2,0  0,0
   3,3  3,3  0,0  3,3

Construct a nontrivial extensive-form game, i.e., one that is not just a
simultaneous-move game, that has the above game as its normal form.

2) The simultaneous-move game below is played twice, with the outcome of the
first stage observed before the second stage begins. There is no discounting.
Can the payoff (4,4) be achieved in the first stage in a pure-strategy
subgame-perfect Nash equilibrium? If so, give strategies that do so. If not,
prove why not.

        L     C     R
   T   3,1   0,0   5,0
   M   2,1   1,2   3,1
   B   1,2   0,1   4,4```

Request for Question Clarification by rbnn-ga on 04 Nov 2002 07:16 PST
```Is the comment sufficient answer?```
Clarification of Question by vitaminc-ga on 04 Nov 2002 15:29 PST
```Well, I am afraid that the comment is insufficient. Do you have any more suggestions?```
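The 3x3 stage game in part 2 is small enough to analyze by brute force. The sketch below (an illustrative helper of my own devising, not part of the original exchange; the name `pure_nash` is hypothetical) enumerates every cell and keeps those where neither player can gain by a unilateral deviation:

```python
# Payoffs are (player 1, player 2) for the 3x3 stage game in question 2.
payoffs = {
    ("T", "L"): (3, 1), ("T", "C"): (0, 0), ("T", "R"): (5, 0),
    ("M", "L"): (2, 1), ("M", "C"): (1, 2), ("M", "R"): (3, 1),
    ("B", "L"): (1, 2), ("B", "C"): (0, 1), ("B", "R"): (4, 4),
}
rows, cols = ("T", "M", "B"), ("L", "C", "R")

def pure_nash(payoffs, rows, cols):
    """Return all (row, col) cells where neither player can gain by deviating."""
    eq = []
    for r in rows:
        for c in cols:
            u1, u2 = payoffs[(r, c)]
            best1 = max(payoffs[(r2, c)][0] for r2 in rows)  # player 1's best reply
            best2 = max(payoffs[(r, c2)][1] for c2 in cols)  # player 2's best reply
            if u1 == best1 and u2 == best2:
                eq.append((r, c))
    return eq

print(pure_nash(payoffs, rows, cols))  # [('T', 'L'), ('M', 'C')]
```

Note that (B, R), the (4,4) cell, is not returned: player 1 would deviate to T for 5. This is why any equilibrium reaching (4,4) must use the repeated structure of the game.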
 Subject: Re: game theory
 Answered By: rbnn-ga on 09 Nov 2002 13:30 PST
 ```Thank you for the questions.

1) The key here is the notion of an "information set" in an extensive-form
game tree: a set of nodes that are indistinguishable to the player whose
turn it is to move. Information sets are conventionally drawn as dotted
lines connecting the nodes in the set.

Unfortunately it is hard to draw game trees in ASCII, so I hope you will
bear with me if I simply describe the tree. The tree has 13 nodes and three
levels. The moves for player 1 are a, b, c, and d; the moves for player 2
are A, B, C, and D. The nodes are labelled 0 through 12, and the root is
labelled 0.

Player 1's moves from the root are as follows:

  There is an edge labelled a from 0 to 1.
  There is an edge labelled b from 0 to 2.
  There is an edge labelled c from 0 to 3.
  There is an edge labelled d from 0 to 4.

There is a dotted line connecting nodes 3 and 4: these nodes are in the
same information set, so player 2 cannot tell whether player 1 chose c or d.

Player 2's moves are as follows:

  There is an edge labelled A from 3 to 5.
  There is an edge labelled B from 3 to 6.
  There is an edge labelled C from 3 to 7.
  There is an edge labelled D from 3 to 8.
  There is an edge labelled A from 4 to 9.
  There is an edge labelled B from 4 to 10.
  There is an edge labelled C from 4 to 11.
  There is an edge labelled D from 4 to 12.

To find the payoffs at the terminal nodes, we just read off the
corresponding entries of the matrix, with its rows labelled a through d and
its columns labelled A through D.

  Node 0 is the root and has no payoff.
  Node 1 has payoff (2,2).
  Node 2 has payoff (2,2).
  Nodes 3 and 4 are interior nodes and have no payoff.
  Node 5 is reached when player 1 chooses c and player 2 chooses A, so its payoff is (3,3).
  Node 6 is reached when player 1 chooses c and player 2 chooses B, so its payoff is (3,3).
  Node 7 is reached when player 1 chooses c and player 2 chooses C, so its payoff is (2,0).
  Node 8 is reached when player 1 chooses c and player 2 chooses D, so its payoff is (0,0).

Similarly, the payoffs for nodes 9, 10, 11, and 12 (reached from node 4,
after player 1 chooses d) are respectively:

  9:  (3,3)
  10: (3,3)
  11: (0,0)
  12: (3,3)

One way to think about how this game is played is as follows. Player 1
writes a letter a-d on a piece of paper. If the letter is a or b, he deals
the paper face up and each player gets 2 dollars. Otherwise, he deals the
paper face down. Player 2 then writes a letter A-D on a piece of paper.
After player 2 shows his paper to player 1, player 1 turns over his own
paper, and both players collect the payoff given by the row/column of the
payoff matrix associated with their plays.

2) Surprisingly, the answer to this question is yes: there is such a
subgame-perfect Nash equilibrium.

Consider the following strategy for player 1:

i. On the first round, play B. On the second round, if (B,R) was played in
the first round, play T; otherwise play M.

For player 2, use the following strategy:

ii. On the first round, play R. On the second round, if (B,R) was played in
the first round, play L; otherwise play C.

Call these strategies s1 and s2 respectively, so that the full strategy
profile is s = (s1, s2). Note that the payoff under this profile is
(4,4) + (3,1) = (7,5), since (B,R) is played in the first round and (T,L)
in the second. Furthermore, note that (T,L) and (M,C) are the only
pure-strategy Nash equilibria of a single round.

We need to prove two things:

A. (s1, s2) is a Nash equilibrium.
B. (s1, s2) is subgame perfect.

To show A, we must show that if player 1 unilaterally changes his strategy
his payoff will not increase: he cannot make more than 7. Whatever player 1
plays in the first round, player 2 will reply with L or C in the second.
Since (T,L) and (M,C) are Nash equilibria of a single round, neither player
can do better by varying his play unilaterally in round 2 alone. What if
player 1 varies in round 1?
If he plays T, then (T,R) is played in round 1, earning him a payoff of 5.
In round 2, player 2 will play C, so player 1 can make at most 1 in round 2.
(If he instead plays M in round 1, he earns at most 3 there, again followed
by at most 1.) So his maximum payoff if he varies is 6, which is worse than
the 7 he gets by staying. Hence player 1 cannot improve by varying.

Similarly, player 2 cannot improve by varying. As above, player 2 cannot
profitably vary in round 2. If player 2 chooses another move in round 1,
his maximum payoff in round 1 is 2, and since player 1 will subsequently
play M, his maximum payoff in round 2 is also 2, so his total payoff is at
most 4, less than the 5 he gets by staying. Hence (s1, s2) is indeed a Nash
equilibrium.

To show B, we must show that whatever happens in the first round, the
strategy (s1, s2) restricted to the second round is a Nash equilibrium of
that subgame. This is clear because (T,L) and (M,C) are each Nash
equilibria of the stage game, and whatever happens in the first round, one
of those two is played in the second.

Search Strategy
---------------

I continued to use the text Game Theory Evolving, by Herbert Gintis
(Princeton University Press, Princeton, 2000).

I also looked at a number of sites on subgame-perfect equilibrium, none of
which were very helpful. One reasonable one was the course notes at
http://www.nada.kth.se/~rjo/course/game/chap6/game_theory_ch6.ppt ;
however, this is in PowerPoint format and requires a plugin from Microsoft
to read. I can forward the URL of the plug-in if you like, but presumably
you already have access to good definitions of these terms.

I found both of these problems deceptively challenging. For problem 1, I
kept trying to find a game tree with full information and without
simultaneous moves, that is, with no nontrivial information sets. I now
doubt this is even possible, so I certainly came away with a much better
understanding of what an information set is. For problem 2 I had to play
around with a lot of different strategies.
I finally found a strategy I thought would work because it was a Nash
equilibrium, but it turned out not to be subgame perfect. The key
realization was that a player's move in the second round must depend not
only on his opponent's first-round move but on his *own* first-round move
as well; otherwise the subgame-perfectness condition fails.```
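The deviation arithmetic in the answer can be checked mechanically. The sketch below (my own illustration, not part of the original answer; the function names are hypothetical) encodes the stage game and the prescribed continuation play, then computes the equilibrium-path payoff and each player's best total payoff from a first-round deviation, allowing the deviator to best-respond in round 2:

```python
# Stage-game payoffs (player 1, player 2) for the 3x3 game in question 2.
stage = {
    ("T", "L"): (3, 1), ("T", "C"): (0, 0), ("T", "R"): (5, 0),
    ("M", "L"): (2, 1), ("M", "C"): (1, 2), ("M", "R"): (3, 1),
    ("B", "L"): (1, 2), ("B", "C"): (0, 1), ("B", "R"): (4, 4),
}
ROWS, COLS = ("T", "M", "B"), ("L", "C", "R")

def second_round(first):
    """Prescribed continuation: (T, L) after (B, R), otherwise (M, C)."""
    return ("T", "L") if first == ("B", "R") else ("M", "C")

def path_payoff():
    """Total payoffs when both players follow (s1, s2): (B,R) then (T,L)."""
    u1a, u2a = stage[("B", "R")]
    u1b, u2b = stage[second_round(("B", "R"))]
    return (u1a + u1b, u2a + u2b)

def best_dev_p1():
    """Player 1's best total payoff from deviating in round 1, with player 2
    following s2 (R in round 1, then the prescribed continuation move) and
    player 1 free to best-respond in round 2."""
    best = float("-inf")
    for r in ROWS:
        if r == "B":
            continue
        c2 = second_round((r, "R"))[1]                     # player 2 plays C
        best_round2 = max(stage[(r2, c2)][0] for r2 in ROWS)
        best = max(best, stage[(r, "R")][0] + best_round2)
    return best

def best_dev_p2():
    """Symmetric check: player 2 deviates in round 1 against s1."""
    best = float("-inf")
    for c in COLS:
        if c == "R":
            continue
        r2 = second_round(("B", c))[0]                     # player 1 plays M
        best_round2 = max(stage[(r2, c2)][1] for c2 in COLS)
        best = max(best, stage[("B", c)][1] + best_round2)
    return best

print(path_payoff())   # (7, 5)
print(best_dev_p1())   # 6, less than 7: deviating does not pay for player 1
print(best_dev_p2())   # 4, less than 5: deviating does not pay for player 2
```

The numbers match the argument above: player 1's best deviation (T in round 1) yields 5 + 1 = 6 < 7, and player 2's best deviation (L in round 1) yields 2 + 2 = 4 < 5.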
 vitaminc-ga rated this answer and gave an additional tip of $3.00: ```Thanks a lot! :)```

 ```According to dominant strategies, assuming both players are rational, all games should arrive at a 3,3 equall...extensive is a moot point. 2. The outcome 4,4 will not be reached in the first game with 100% probability, because player one can unilaterally change to 5,0; also, player two does not have a dominant strategy of choosing R. For the second game, if 4,4 (the cooperation game) is achieved, then player one should unilaterally change to strategy T.```
 ```Thanks :) I think I have a clue about the second question now. But could you explain more about the first part?```