Mean-Field-Type Games and Control: Direct Method

Julian Barreiro-Gomez1

  • 1 New York University Abu Dhabi (NYUAD)

Details

16:30 - 18:30 | Tue 15 Oct | Andino | Tu4-2-1

Session: Workshop 5

Abstract

The mean-field term refers to a concept from physics that describes the aggregate effect of an infinite number of particles on the motion of a single particle.
Researchers began applying the concept to the social sciences in the early 1960s to study how an infinite number of factors affect individual decisions. In a game-theoretic context, however, the key ingredient is the influence of the distribution of states and/or control actions on the payoffs of the decision-makers; a large population of decision-makers is not required. A mean-field-type game is a game in which the payoffs and/or the coefficient functions of the state dynamics involve not only the state and action profiles but also the distribution of the state-action process (or its marginal distributions). Games whose quantities of interest, such as states and/or payoffs, depend on distributions are particularly attractive because they capture not only the mean but also the variance and higher-order terms. The incorporation of these mean and variance terms is associated with the mean-variance paradigm introduced by H. Markowitz, 1990 Nobel Laureate in Economics. This tutorial course is devoted to finding semi-explicit solutions for mean-field-type games and control problems by following the direct method. We study different approaches, namely non-cooperation, full cooperation, adversarial scenarios, and co-opetition, in both continuous- and discrete-time settings.
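As a minimal illustration of such a distribution-dependent cost of mean-variance type (the scalar setting, the horizon T, and the weights q, q-bar, r, r-bar below are generic placeholders, not quantities taken from the tutorial itself), one may consider a cost functional of the form

\[
J(u) \;=\; \mathbb{E}\int_0^T \Big[\, q\,\big(x_t - \mathbb{E}[x_t]\big)^2 \;+\; \bar{q}\,\big(\mathbb{E}[x_t]\big)^2 \;+\; r\,\big(u_t - \mathbb{E}[u_t]\big)^2 \;+\; \bar{r}\,\big(\mathbb{E}[u_t]\big)^2 \,\Big]\, dt .
\]

Here the first and third terms penalize the variances of the state and the control, while the second and fourth penalize their means, so the cost depends on the law of the state-action process and not only on its realized values.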