Archie Chapman
Abstract: Many theories of mind emphasize humans' ability to construct mental models of situations and to reason based on these models. The LSG, in each of its four iterations, has shown both the benefits of such models and the costs of poor or incorrectly specified models. One key observation on the (early) success and (later) failure of our entries is that a model's predictive accuracy is more important than its realism or its precision, meaning that heuristics and rules need not be dominated by "realistic" opponent models. Reasoning over abstract rules is in some sense easier than reasoning over more fine-grained and/or hyper-rational models in complex settings, such as the generalised LSG games used in the 3rd and 4th competitions. I will argue that the later instantiations of the LSG competition thereby highlight the limits and scope of rationality in decision and game theory, drawing on examples from my own successes and failures in the LSG, with an emphasis on evolved "conventions of reasoning".