Working Papers
Forward induction in a backward inductive manner (2025)
with Martin Meier
Abstract: We propose a new rationalizability concept for dynamic games with imperfect information, forward and backward rationalizability, that combines elements from forward and backward induction reasoning. It proceeds by applying the forward induction concept of strong rationalizability (also known as extensive-form rationalizability) in a backward inductive fashion. We argue that, compared to strong rationalizability, the new concept provides a more compelling theory for how players react to surprises. Moreover, we provide an epistemic characterization of the new concept, and show that (a) it always exists, (b) in terms of outcomes it is equivalent to strong rationalizability, (c) in terms of strategies it is a refinement of the pure backward induction concepts of backward dominance and backwards rationalizability, and (d) it satisfies expansion monotonicity: if a player learns that the game was actually preceded by some moves he was initially unaware of, then this new information will only refine, but never completely overthrow, his reasoning. Strong rationalizability violates this principle.
More reasoning, less outcomes: A monotonicity result for reasoning in dynamic games (2024)
EPICENTER Working Paper No. 32
Abstract: A focus function in a dynamic game describes, for every player and each of his information sets, the collection of opponents' information sets he reasons about. Every focus function induces a rationalizability procedure in which a player believes, whenever possible, that his opponents choose rationally at those information sets he reasons about. Under certain conditions, we show that if the players start reasoning about more information sets, then the set of outcomes induced by the associated rationalizability procedure becomes smaller or stays the same. This result does not hold at the level of strategies, unless the players only reason about present and future information sets. The monotonicity result enables us to derive existing theorems, such as the relation in terms of outcomes between forward and backward induction reasoning, but also paves the way for new results.
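To make the central object concrete, a minimal formal sketch (the notation is mine, not necessarily the paper's): write $H_i$ for the collection of player $i$'s information sets and $H_{-i}$ for the collection of the opponents' information sets. A focus function for player $i$ is then a mapping
\[ F_i : H_i \to 2^{H_{-i}}, \]
assigning to every information set $h \in H_i$ the collection $F_i(h) \subseteq H_{-i}$ of opponents' information sets that player $i$ reasons about at $h$. The induced rationalizability procedure requires player $i$, at $h$, to believe whenever possible that his opponents choose rationally at the information sets in $F_i(h)$; as I read the phrase "reasoning about more information sets", the monotonicity statement compares the outcome sets of procedures induced by focus functions ordered by set inclusion, $F_i(h) \subseteq F_i'(h)$ at every $h$.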
Consequentialism in Dynamic Games (2024)
EPICENTER Working Paper No. 30
Abstract: In philosophy and decision theory, consequentialism reflects the assumption that an act is evaluated solely on the basis of the consequences it may induce, and nothing else. In this paper we study the idea of consequentialism in dynamic games by considering two versions: a commonly used utility-based version stating that the player's preferences are governed by a utility function on consequences, and a preference-based version which faithfully translates the original idea of consequentialism to restrictions on the player's preferences. Utility-based consequentialism always implies preference-based consequentialism, but the other direction is not necessarily true, as we show by means of a counterexample. It turns out that utility-based consequentialism is equivalent to the assumption that the induced preference intensities on consequences are additive, whereas preference-based consequentialism only requires this property for every pair of strategies in isolation. Finally, we show that if the dynamic game either (i) has two strategies for the player we consider, or (ii) has observed past choices, or (iii) involves only two players and has perfect recall, then the two notions of consequentialism are equivalent in the absence of weakly dominated strategies.
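As a rough illustration of the utility-based version (the expected-utility form below is my reading of "preferences governed by a utility function on consequences", not a formula taken from the paper): fix a player $i$, let $Z$ be the set of consequences, and suppose a belief $b_i$ about the opponents' strategies together with an own strategy $s_i$ induces a distribution $\mathbb{P}_{s_i,b_i}$ on $Z$. Utility-based consequentialism then posits a utility function $u_i : Z \to \mathbb{R}$ such that
\[ s_i \;\succsim_i(b_i)\; s_i' \quad \Longleftrightarrow \quad \sum_{z \in Z} \mathbb{P}_{s_i,b_i}(z)\, u_i(z) \;\ge\; \sum_{z \in Z} \mathbb{P}_{s_i',b_i}(z)\, u_i(z), \]
so that strategies are compared solely through the distributions on consequences they induce. The preference-based version instead imposes consequentialist restrictions directly on the preference relation, without presupposing such a $u_i$.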
Dynamic Consistency in Games without Expected Utility (2024)
EPICENTER Working Paper No. 29
Abstract: Within dynamic games we are interested in conditions on the players' preferences that imply dynamic consistency and the existence of sequentially optimal strategies. The latter means that the strategy is optimal at each of the player's information sets, given his beliefs there. These two properties are needed to undertake a meaningful game-theoretic analysis in dynamic games. To explore this, we assume that every player holds a conditional preference relation -- a mapping that assigns to every probabilistic belief about the opponents' strategies a preference relation over his own strategies. We identify sets of very basic conditions on the conditional preference relations that guarantee dynamic consistency and the existence of sequentially optimal strategies, respectively. These conditions are implied by, but are much weaker than, assuming expected utility. That is, to undertake a meaningful game-theoretic analysis in dynamic games we can make do with much less than expected utility.
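In symbols (chosen by me for illustration): for player $i$ with strategy set $S_i$ and opponents' strategy combinations $S_{-i}$, a conditional preference relation is a mapping
\[ b_i \;\longmapsto\; \succsim_i(b_i), \qquad b_i \in \Delta(S_{-i}), \]
assigning to every probabilistic belief $b_i$ about the opponents' strategies a preference relation $\succsim_i(b_i)$ on $S_i$. A strategy $s_i$ is sequentially optimal if, at every information set $h$ of player $i$, it is optimal with respect to $\succsim_i(b_i(h))$, where $b_i(h)$ denotes the belief held at $h$. Expected utility is the special case where every $\succsim_i(b_i)$ is represented by $\sum_{s_{-i}} b_i(s_{-i})\, u_i(s_i, s_{-i})$; the conditions identified in the paper are much weaker than this.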
Reasoning about Your Own Future Mistakes (2024)
with Martin Meier
A previous version is available as EPICENTER Working Paper No. 21.
Abstract: We propose a model of reasoning in dynamic games in which a player, at each information set, holds a conditional belief about his own future choices and the opponents' future choices. These conditional beliefs are assumed to be cautious, that is, the player never completely rules out any feasible future choice by himself or the opponents. We impose the following key conditions: (a) a player always believes that he will choose rationally in the future, (b) a player always believes that his opponents will choose rationally in the future, and (c) a player deems his own mistakes infinitely less likely than the opponents' mistakes. Common belief in these conditions leads to the new concept of perfect backwards rationalizability. We show that perfect backwards rationalizable strategies exist in every finite dynamic game. We prove, moreover, that perfect backwards rationalizability constitutes a refinement of both perfect rationalizability (a rationalizability analogue to Selten's (1975) perfect equilibrium) and procedural quasi-perfect rationalizability (a rationalizability analogue to van Damme's (1984) quasi-perfect equilibrium). As a consequence, it avoids both weakly dominated strategies in the normal form and strategies containing weakly dominated actions in the agent normal form.
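A loose sketch of the cautiousness condition, with shorthand of my own (the paper's formal apparatus is richer): write $b_i(h)$ for the conditional belief that player $i$ holds at his information set $h$ about his own and the opponents' future choices. Cautiousness requires
\[ b_i(h)(c) \;>\; 0 \qquad \text{for every choice } c \text{ that is still feasible at some information set following } h, \]
where $b_i(h)(c)$ is shorthand for the probability that $b_i(h)$ attaches to $c$ being chosen, so that no feasible future choice, own or opponents', is ever completely ruled out. Condition (c), that own mistakes are deemed infinitely less likely than the opponents' mistakes, imposes a finer ordering on these small probabilities; its exact formalization is the paper's and is not reproduced here.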
Incomplete Information and Equilibrium (2017)
with Christian Bach
EPICENTER Working Paper No. 9
Abstract: In games with incomplete information, Bayesian equilibrium constitutes the prevailing solution concept. We show that Bayesian equilibrium generalizes correlated equilibrium from complete to incomplete information. In particular, we provide an epistemic characterization of Bayesian equilibrium as well as of correlated equilibrium in terms of common belief in rationality and a common prior. Bayesian equilibrium is thus not the incomplete information counterpart of Nash equilibrium. To fill the resulting gap, we introduce the solution concept of generalized Nash equilibrium as the incomplete information analogue to Nash equilibrium, and show that it is more restrictive than Bayesian equilibrium. In addition, we propose a simplified tool to compute Bayesian equilibria.
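For orientation, the complete-information benchmark that the abstract refers to, in its textbook form (standard material, not the paper's incomplete-information construction): a distribution $\mu$ on action profiles $A = A_1 \times \dots \times A_n$ is a correlated equilibrium if, for every player $i$ and all actions $a_i, a_i' \in A_i$,
\[ \sum_{a_{-i} \in A_{-i}} \mu(a_i, a_{-i}) \bigl[ u_i(a_i, a_{-i}) - u_i(a_i', a_{-i}) \bigr] \;\ge\; 0, \]
so that obeying the recommendation $a_i$ is optimal given the conditional belief it induces about the opponents. The paper's point, as stated in the abstract, is that Bayesian equilibrium extends precisely this correlated, common-prior logic to incomplete information, which is why generalized Nash equilibrium is introduced as the genuine incomplete-information counterpart of Nash equilibrium.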