Benchmarking decisions from visualizations and predictions
How well does a particular information display support decision-making? This question arises when studying human behavior under different strategies for presenting information (e.g., forecast displays, data visualizations, displays or explanations of model predictions), and in our own research when we must decide how to plot results or report effects. Evaluating how helpful a visualization or other presentation is for judgment and decision-making is difficult because observed performance in an experiment is confounded with aspects of the study design, such as how useful the provided information is for the task. Typical approaches to designing such studies make it hard to assess how well participants performed relative to the best attainable performance on the task, or to diagnose sources of error in the results. I will discuss how decision-theoretic frameworks that model the performance of a Bayesian rational agent can transform how we design and evaluate visualizations and other decision-support interfaces, such as explanations of model predictions.
Sponsorship of an event does not constitute institutional endorsement of external speakers or views presented.