Working Papers
Consequentialism in Dynamic Games (2024)
EPICENTER Working Paper No. 30
Abstract: In philosophy and decision theory, consequentialism reflects the assumption that an act is evaluated solely on the basis of the consequences it may induce, and nothing else. In this paper we study the idea of consequentialism in dynamic games by considering two versions: a commonly used utility-based version, stating that the player's preferences are governed by a utility function on consequences, and a preference-based version, which faithfully translates the original idea of consequentialism into restrictions on the player's preferences. Utility-based consequentialism always implies preference-based consequentialism, but the converse is not necessarily true, as we show by means of a counterexample. It turns out that utility-based consequentialism is equivalent to the assumption that the induced preference intensities on consequences are additive, whereas preference-based consequentialism only requires this property for every pair of strategies in isolation. Finally, we show that if the dynamic game either (i) has two strategies for the player under consideration, (ii) has observed past choices, or (iii) involves only two players and has perfect recall, then the two notions of consequentialism are equivalent in the absence of weakly dominated strategies.
Dynamic Consistency in Games without Expected Utility (2024)
EPICENTER Working Paper No. 29
Abstract: Within dynamic games we are interested in conditions on the players' preferences that imply dynamic consistency and the existence of sequentially optimal strategies. The latter means that a strategy is optimal at each of the player's information sets, given his beliefs there. These two properties are needed to undertake a meaningful game-theoretic analysis of dynamic games. To explore this, we assume that every player holds a conditional preference relation -- a mapping that assigns to every probabilistic belief about the opponents' strategies a preference relation over his own strategies. We identify sets of very basic conditions on the conditional preference relations that guarantee dynamic consistency and the existence of sequentially optimal strategies, respectively. These conditions are implied by, but are much weaker than, assuming expected utility. That is, to undertake a meaningful game-theoretic analysis of dynamic games we can make do with much less than expected utility.
Forward induction in a backward inductive manner (2023)
with Martin Meier
EPICENTER Working Paper version
Abstract: We propose a new rationalizability concept for dynamic games with imperfect information, forward and backward rationalizability, that combines elements from forward and backward induction reasoning. It proceeds by applying the forward induction concept of strong rationalizability (also known as extensive-form rationalizability) in a backward inductive fashion: It first applies strong rationalizability from the last period onwards, subsequently from the penultimate period onwards, keeping the restrictions from the last period, and so on, until we reach the beginning of the game. We argue that, compared to strong rationalizability, the new concept provides a more compelling theory for how players react to surprises. Moreover, we show that the new concept always exists. It turns out that in terms of outcomes, the concept is equivalent to the pure forward induction concept of strong rationalizability, but the two concepts may differ in terms of strategies. In terms of strategies, the new concept provides a refinement of the pure backward induction reasoning as embodied by backward dominance and backwards rationalizability. In fact, the new concept can be viewed as a backward-looking strengthening of the forward-looking concept of backwards rationalizability. Combining these results yields that every strongly rationalizable outcome is also backwards rationalizable. Finally, it is shown that the concept of forward and backward rationalizability satisfies the principle of supergame monotonicity: If a player learns that the game was actually preceded by some moves he was initially unaware of, then this new information will only refine, but never completely overthrow, his reasoning. Strong rationalizability violates this principle.
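The backward-inductive scheme described above can be sketched as a simple loop. This is a control-flow sketch only: the reduction operator is passed in as a parameter and merely stands in for strong rationalizability (which the paper defines formally), and the function names, the abstract strategy profiles, and the toy reduction below are all ours, not the paper's.

```python
def forward_backward_rationalizability(periods, full_profiles, reduce_from):
    """Control-flow sketch: apply a reduction operator from the last period
    onward, carrying the surviving restrictions backward. `reduce_from(t, ps)`
    stands in for strong rationalizability applied from period t onward,
    within the current restrictions ps."""
    surviving = full_profiles
    for t in reversed(periods):   # last period first, then the penultimate, ...
        surviving = reduce_from(t, surviving)
    return surviving

# Toy stand-in reduction on abstract strategy profiles: at period t, keep only
# the profiles whose period-t component is maximal among the survivors.
profiles = {(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)}
toy_reduce = lambda t, ps: {p for p in ps if p[t - 1] == max(q[t - 1] for q in ps)}
assert forward_backward_rationalizability([1, 2, 3], profiles, toy_reduce) == {(1, 1, 1)}
```

The point of the sketch is only the order of application: restrictions found from period t onward are kept in place when the reduction is next applied from period t-1 onward.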
Expected utility as an expression of linear preference intensity (2023)
My presentation at the One World Mathematical Game Theory Seminar is available online.
A previous version can be found here:
EPICENTER Working Paper No. 22
Abstract: In a decision problem or game we typically fix the person's utilities, but not his beliefs. What, then, do these utilities represent? To explore this question, we assume that the decision maker holds a conditional preference relation -- a mapping that assigns to every possible probabilistic belief a preference relation over his choices. We impose on such conditional preference relations a list of axioms that is both necessary and sufficient for admitting an expected utility representation. Most of these axioms express the idea that the decision maker's preference intensity between two choices changes linearly with the belief. Finally, we show that under certain conditions the relative utility differences are unique across the different expected utility representations.
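As a toy numerical illustration of linear preference intensity (the utilities and state names below are invented for this sketch, not taken from the paper): with two states, the expected-utility difference between two choices is an affine function of the belief.

```python
# Invented example: two choices a, b; two states s1, s2; p = probability of s1.
u = {('a', 's1'): 4, ('a', 's2'): 0,
     ('b', 's1'): 1, ('b', 's2'): 2}

def eu(choice, p):
    """Expected utility of a choice under the belief assigning p to state s1."""
    return p * u[(choice, 's1')] + (1 - p) * u[(choice, 's2')]

# The preference intensity eu('a', p) - eu('b', p) equals 5p - 2 here: it
# changes linearly (affinely) with the belief p, which is the property the
# axioms in the abstract express.
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs((eu('a', p) - eu('b', p)) - (5 * p - 2)) < 1e-12
```

Under an expected utility representation this linearity holds for every pair of choices, whatever the utility numbers; the example only makes the slope and intercept concrete.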
Reasoning about Your Own Future Mistakes (2024)
with Martin Meier
A previous version can be found here:
EPICENTER Working Paper No. 21
Abstract: We propose a model of reasoning in dynamic games in which a player, at each information set, holds a conditional belief about his own future choices and the opponents' future choices. These conditional beliefs are assumed to be cautious, that is, the player never completely rules out any feasible future choice by himself or the opponents. We impose the following key conditions: (a) a player always believes that he will choose rationally in the future, (b) a player always believes that his opponents will choose rationally in the future, and (c) a player deems his own mistakes infinitely less likely than the opponents' mistakes. Common belief in these conditions leads to the new concept of perfect backwards rationalizability. We show that perfect backwards rationalizable strategies exist in every finite dynamic game. We prove, moreover, that perfect backwards rationalizability constitutes a refinement of both perfect rationalizability (a rationalizability analogue to Selten's (1975) perfect equilibrium) and procedural quasi-perfect rationalizability (a rationalizability analogue to van Damme's (1984) quasi-perfect equilibrium). As a consequence, it avoids both weakly dominated strategies in the normal form and strategies containing weakly dominated actions in the agent normal form.
Order Independence in Dynamic Games (2018)
Previous version appeared as EPICENTER Working Paper No. 8
Abstract: In this paper we investigate the order independence of iterated reduction procedures in dynamic games. We distinguish between two types of order independence: with respect to strategies and with respect to outcomes. The first states that the specific order of elimination chosen should not affect the final set of strategy combinations, whereas the second states that it should not affect the final set of reachable outcomes in the game. We provide sufficient conditions for both types of order independence: monotonicity, and monotonicity on reachable histories, respectively.
We use these sufficient conditions to explore the order independence properties of various reduction procedures in dynamic games: the extensive-form rationalizability procedure (Pearce (1984), Battigalli (1997)), the backward dominance procedure (Perea (2014)) and Battigalli and Siniscalchi's (1999) procedure for jointly rational belief systems (Reny (1993)). We finally exploit these results to prove that every outcome that is reachable under the extensive-form rationalizability procedure is also reachable under the backward dominance procedure.
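To illustrate the role of monotonicity, here is a minimal, self-contained sketch of one monotone reduction procedure, iterated elimination of strictly dominated strategies, run under two different elimination orders; both orders reach the same surviving strategies. The 2x3 game, its payoffs, and the function names are invented for this sketch and are not taken from the paper.

```python
# Hypothetical 2x3 game (payoffs invented for illustration): row strategies
# T, B; column strategies L, M, R; U1 and U2 are the two players' payoffs.
U1 = {('T','L'): 2, ('T','M'): 0, ('T','R'): 1,
      ('B','L'): 1, ('B','M'): 2, ('B','R'): 0}   # row player
U2 = {('T','L'): 3, ('T','M'): 2, ('T','R'): 1,
      ('B','L'): 2, ('B','M'): 1, ('B','R'): 0}   # column player

def dominated(strats, opp_strats, payoff):
    """Strategies strictly dominated by another pure strategy, given the
    opponent's current strategy set."""
    return [s for s in strats
            if any(all(payoff(t, o) > payoff(s, o) for o in opp_strats)
                   for t in strats if t != s)]

def iesds(rows, cols, u1, u2, pick):
    """Iterated elimination of strictly dominated strategies, removing ONE
    dominated strategy per round; `pick` fixes the order of elimination."""
    rows, cols = list(rows), list(cols)
    while True:
        cands = ([('row', s) for s in dominated(rows, cols, lambda s, o: u1[(s, o)])]
                 + [('col', s) for s in dominated(cols, rows, lambda s, o: u2[(o, s)])])
        if not cands:
            return set(rows), set(cols)
        side, s = pick(cands)
        (rows if side == 'row' else cols).remove(s)

# Two different elimination orders reach the same surviving strategy sets.
first = iesds(['T','B'], ['L','M','R'], U1, U2, pick=lambda c: c[0])
last  = iesds(['T','B'], ['L','M','R'], U1, U2, pick=lambda c: c[-1])
assert first == last == ({'T'}, {'L'})
```

Strict dominance elimination is monotone in the sense the abstract describes, which is why the choice of `pick` cannot affect the final strategy sets; the procedures studied in the paper are analysed through the same lens.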
Incomplete Information and Equilibrium (2017)
with Christian Bach
EPICENTER Working Paper No. 9
Abstract: In games with incomplete information, Bayesian equilibrium constitutes the prevailing solution concept. We show that Bayesian equilibrium generalizes correlated equilibrium from complete to incomplete information. In particular, we provide an epistemic characterization of Bayesian equilibrium, as well as of correlated equilibrium, in terms of common belief in rationality and a common prior. Bayesian equilibrium is thus not the incomplete-information counterpart of Nash equilibrium. To fill the resulting gap, we introduce the solution concept of generalized Nash equilibrium as the incomplete-information analogue to Nash equilibrium, and show that it is more restrictive than Bayesian equilibrium. In addition, we propose a simplified tool for computing Bayesian equilibria.
Local Prior Expected Utility: A Basis for Utility Representations under Uncertainty (2015)
with Christian Nauerz
EPICENTER Working Paper No. 6
Abstract: Abstract models of decision-making under ambiguity are widely used in economics. One stream of such models results from weakening the independence axiom of Anscombe and Aumann (1963). We identify the assumptions on independence that are necessary to represent the decision maker's preferences such that he acts as if he maximizes expected utility with respect to a possibly local prior. We call the resulting representation Local Prior Expected Utility, and show that the prior used to evaluate a given act can be obtained by computing the gradient of an appropriately defined utility mapping. The numbers in the gradient, moreover, can naturally be interpreted as the subjective likelihoods the decision maker assigns to the various states. Building on this result, we provide a unified approach to the representation results of Maximin Expected Utility and Choquet Expected Utility and characterize the respective sets of priors.
When do Types Induce the Same Belief Hierarchy? The Case of Finitely Many Types (2014)
EPICENTER Working Paper No. 1
Abstract: Harsanyi (1967--1968) showed that belief hierarchies can be encoded by means of epistemic models with types. Indeed, for every type within an epistemic model we can derive the full belief hierarchy it induces. But a given belief hierarchy can in general be encoded within an epistemic model in many different ways. In this paper we give necessary and sufficient conditions for two types, from two possibly different epistemic models, to induce exactly the same belief hierarchy. The conditions are relatively easy to check, and seem relevant for both practical and theoretical purposes.