Research in the Niv lab focuses on the neural and computational processes underlying reinforcement learning and decision-making. We study the ongoing day-to-day processes by which we learn from trial and error, without explicit instructions, to predict future events and to act upon the environment so as to maximize reward and minimize punishment. In particular, we are interested in how attention and memory interact with reinforcement learning to create representations that allow us to learn to solve new tasks efficiently.
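The kind of trial-and-error prediction learning described above is commonly formalized with delta-rule models such as Rescorla-Wagner, in which a prediction error (the difference between received and expected reward) drives learning. A minimal sketch, purely illustrative and not the lab's own code (the function name, learning rate, and task are assumptions):

```python
def rescorla_wagner(rewards, alpha=0.1, v0=0.0):
    """Delta-rule value learning: V <- V + alpha * (reward - V).

    rewards: sequence of rewards received on successive trials
    alpha:   learning rate (how strongly prediction errors update V)
    v0:      initial predicted value of the cue
    """
    v = v0
    values = []
    for r in rewards:
        delta = r - v          # prediction error: outcome minus expectation
        v += alpha * delta     # expectation moves toward the outcome
        values.append(v)
    return values

# A cue consistently followed by reward: the predicted value
# climbs from 0 toward 1 over repeated trials.
values = rescorla_wagner([1.0] * 50, alpha=0.2)
```

The same update rule, applied to temporally extended predictions, underlies temporal-difference learning, whose prediction errors closely resemble the phasic firing of midbrain dopamine neurons.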
The data of interest come from decades of animal-conditioning literature and from the myriad more recent investigations into the neural underpinnings of conditioned behavior and human decision-making. Our approach is to use computational modeling techniques and analytical tools, specifically from reinforcement learning, Bayesian inference, and machine learning, in combination with experimental investigations using human functional imaging and (in collaboration with other labs) data from experiments in rodents. Our emphasis is on model-based experimentation: we use computational models to define precise hypotheses about data, to design experiments, and to analyze results. In particular, we are interested in normative explanations of behavior: models that offer a principled understanding of why our brain mechanisms use the computational algorithms that they do, and in what sense, if at all, these are optimal. In our hands, the main goal of computational models is not to simulate the system, but rather to understand what high-level computations that system is realizing and what functionality these computations fulfill.