In my most recent work, I have designed behavioral experiments aimed at testing several hypotheses based on the theory of social norms that I developed in my book, The Grammar of Society: The Nature and Dynamics of Social Norms (Cambridge University Press, 2006). The experimental results show that most subjects have a conditional preference for following pro-social norms: manipulating their expectations causes major behavioral changes (e.g., from fair to unfair choices, or from cooperation to defection). One conclusion we may draw is that there are no such things as stable dispositions or unconditional preferences (to be fair, to reciprocate, to cooperate, and so on). Another is that policymakers who want to induce pro-social behavior have to work on changing people's expectations about how others behave in similar situations. These results have major consequences for our understanding of moral behavior and for the construction of better normative theories, grounded in what people can in fact do.
My research on the nature and dynamics of social norms asks how norms emerge and become stable, why an established norm may suddenly be abandoned, how inefficient or unpopular norms manage to survive, and what motivates people to obey norms. To answer some of these questions, I have combined evolutionary and game-theoretic tools with models of decision making drawn from cognitive and social psychology. For example, I use my theory of context-dependent preferences to build more realistic evolutionary models of the emergence of pro-social norms of fairness and reciprocity.
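To give a flavor of the evolutionary tools involved, here is a minimal replicator-dynamics sketch. The payoffs and strategies are hypothetical illustrations (a simple Nash demand game, not the actual models from my work): a "Fair" type demands 40 and a "Greedy" type demands 60, and demands summing over 100 leave both players with nothing.

```python
# Illustrative replicator dynamics for the spread of a fairness norm.
# Strategies and payoffs are hypothetical, chosen only to show the method.

def payoff(my_demand, other_demand):
    """Nash demand game: each gets their demand if compatible, else 0."""
    return my_demand if my_demand + other_demand <= 100 else 0

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator dynamic for the share x of Fair."""
    f_fair = x * payoff(40, 40) + (1 - x) * payoff(40, 60)
    f_greedy = x * payoff(60, 40) + (1 - x) * payoff(60, 60)
    avg = x * f_fair + (1 - x) * f_greedy
    return x + dt * x * (f_fair - avg)

x = 0.2                      # initial share of Fair players
for _ in range(20000):
    x = replicator_step(x)
print(round(x, 3))           # prints 0.667: a stable Fair/Greedy mix at 2/3
```

Even this toy model shows a characteristic result: the population settles into a stable polymorphism rather than full fairness, since Greedy does well precisely when Fair players are common.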
My earlier (but never completely abandoned) research focus was the epistemic foundations of game theory. I recently wrote about belief revision in games and the kinds of solutions our belief-revision model supports. In past work I analyzed the consequences of relaxing the 'common knowledge' assumption in several classes of games. My contributions include axiomatic models of players' theory of the game and a proof that, in a large class of games, a player's theory of the game is consistent only if the player's knowledge is limited. An important consequence of assuming bounded knowledge is that it allows for more intuitive solutions to familiar games such as the finitely repeated prisoner's dilemma or the chain-store paradox. I have also been interested in devising mechanical procedures (algorithms) that allow players to compute solutions for games of perfect and imperfect information. Devising such procedures is particularly important for Artificial Intelligence applications, since interacting software agents have to be programmed to play a variety of 'games'.
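For finite games of perfect information, the canonical such procedure is backward induction. The sketch below is illustrative only (the node representation and payoffs are hypothetical, not from my published work); it recursively solves a small centipede-style game and reproduces the familiar paradoxical prediction that the first mover takes immediately.

```python
# A minimal backward-induction sketch for finite games of perfect
# information. Node structure and payoffs are hypothetical.

def backward_induction(node):
    """Return the payoff vector reached under subgame-perfect play."""
    if "payoffs" in node:               # terminal node
        return node["payoffs"]
    player = node["player"]
    # The mover picks the child whose induced outcome maximizes their payoff.
    outcomes = [backward_induction(child) for child in node["children"]]
    return max(outcomes, key=lambda p: p[player])

# A two-stage centipede-style game: player 0 moves first.
game = {
    "player": 0,
    "children": [
        {"payoffs": (1, 0)},            # player 0 takes immediately
        {"player": 1,                   # player 0 passes; player 1 moves
         "children": [
             {"payoffs": (0, 3)},       # player 1 takes
             {"payoffs": (2, 2)},       # player 1 passes
         ]},
    ],
}
print(backward_induction(game))         # prints (1, 0): take at once
```

Note how the mutually better outcome (2, 2) is unreachable under common knowledge of rationality; it is exactly this sort of counterintuitive solution that relaxing the knowledge assumptions can avoid.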