Our lab studies how people and animals learn from trial and error -- and from rewards and punishments -- to make decisions, combining computational, neural, and behavioral perspectives. We focus on understanding how subjects cope with computationally demanding decision situations, such as choice under uncertainty or in tasks (such as spatial navigation or games like chess) that require many decisions to be made sequentially. These problems have long been studied by researchers in machine learning, and we draw on algorithms from that field for detailed, quantitative hypotheses about how the brain might approach them. Current projects include investigating how the brain controls its own decision-making computations -- in effect, making higher-level decisions about issues like how long to deliberate or when to simply act -- and how these processes might be implicated in self-control and in psychiatric disorders involving compulsion.
Doll, B.B., Duncan, K.D., Simon, D.A., Shohamy, D., and Daw, N.D. (2015) Model-based choices involve prospective neural activity. Nature Neuroscience 18:767-72.
Shohamy, D., and Daw, N.D. (2015) Integrating memories to guide decisions. Current Opinion in Behavioral Sciences 5:85-90.
Huys, Q.J., Daw, N.D., and Dayan, P. (2015) Depression: A decision-theoretic analysis. Annual Review of Neuroscience 38:1-23.
Fleming, S., Maloney, L., and Daw, N.D. (2013) The irrationality of categorical perception. Journal of Neuroscience 33:19060-70.
Otto, A.R., Gershman, S.J., Markman, A.B., and Daw, N.D. (2013) The curse of planning: Dissecting multiple reinforcement learning systems by taxing the central executive. Psychological Science 24:751-761.