Details
"Interpretability in Machine Learning: Perspective and applications in pediatric critical care"
Abstract:
The advent of high-dimensional datasets, coupled with advances in machine learning methodologies, has led to highly performant predictive models for many difficult tasks. Despite their potential value, such models are generally perceived as black boxes that elude human comprehension. Nevertheless, there is widespread interest in, and increasing real-world deployment of, such models across a variety of domains, often involving high-stakes decision making in areas such as finance, healthcare, and the judicial system. The criticality of such decisions, together with the opacity of complex models, has spurred a resurgence of research on model interpretability in the machine learning community. In this talk, we will explore conceptions of model interpretability and current efforts to establish a framework for its scientifically rigorous study. We will consider these in the context of clinicians' expectations of predictive model behavior and interpretability. We will also review state-of-the-art methods for identifying the input features most important to a model's predictions, a particularly valuable form of interpretability, and our application of these methods to sepsis recognition models developed for neonatal intensive care unit settings. Finally, we will discuss future directions for interpretability research, particularly as they relate to complex models for pediatric critical care.
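
As context for the feature-importance methods the abstract mentions, below is a minimal sketch of one such technique, permutation importance, using scikit-learn. The synthetic data, the clinically flavored feature names, and the random-forest model are illustrative assumptions for this sketch; they are not the sepsis recognition models or NICU data discussed in the talk.

```python
# Minimal sketch: permutation feature importance. Shuffle one feature at a
# time on held-out data and measure how much the model's score degrades;
# larger degradation suggests the model relies more on that feature.
# Data, feature names, and model are hypothetical stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for vital-sign features.
feature_names = ["heart_rate", "resp_rate", "spo2", "temp", "map", "wbc"]
X, y = make_classification(n_samples=1000, n_features=len(feature_names),
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Repeat the shuffling several times per feature to estimate variability.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, mean, std in sorted(zip(feature_names,
                                  result.importances_mean,
                                  result.importances_std),
                              key=lambda t: -t[1]):
    print(f"{name:12s} {mean:+.3f} ± {std:.3f}")
```

Permutation importance is model-agnostic, which is one reason attribution methods like it are attractive for auditing otherwise opaque clinical models; the talk's own methods may differ.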