WNCG Seminar Series: David Sontag: How Hard is Inference for Structured Prediction


Published:
November 16, 2015

Abstract: Many machine learning tasks can be posed as structured prediction, where the goal is to predict a labeling or structured object. For example, the input may be an image or a sentence, and the output is a labeling such as an assignment of each pixel in the image to foreground or background, or the parse tree for the sentence. Although marginal and MAP inference are NP-hard in the worst case for many of these models, approximate inference algorithms are remarkably successful in practice, and as a result structured prediction is widely used.

What makes these real-world instances different from worst-case instances? One key difference is that in all of these applications, there is an underlying "ground truth" which structured prediction is aiming to find. In this talk, I will introduce a new theoretical framework for analyzing structured prediction algorithms in terms of their ability to achieve small Hamming error. We study the computational and statistical trade-offs that arise in this setting, and illustrate a setting where polynomial-time algorithms can perform optimal prediction, despite the corresponding MAP inference task being NP-hard.
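As a concrete illustration of the error measure the abstract refers to, the following is a minimal sketch (not taken from the talk) of normalized Hamming error between a predicted labeling and the ground truth, using a hypothetical binary foreground/background pixel labeling:

```python
def hamming_error(predicted, ground_truth):
    """Fraction of positions where the predicted label
    disagrees with the ground-truth label."""
    if len(predicted) != len(ground_truth):
        raise ValueError("labelings must have the same length")
    disagreements = sum(p != g for p, g in zip(predicted, ground_truth))
    return disagreements / len(predicted)

# Hypothetical example: a 3x3 binary pixel labeling, flattened
# (1 = foreground, 0 = background).
truth = [1, 1, 0, 1, 0, 0, 1, 1, 0]
pred  = [1, 0, 0, 1, 0, 0, 1, 1, 1]
print(hamming_error(pred, truth))  # two of nine pixels disagree
```

The framework described in the talk asks when a polynomial-time algorithm can make this quantity small with high probability, even when exact MAP inference in the underlying model is NP-hard.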

Based on joint work with Amir Globerson, Ofer Meshi, Tim Roughgarden, and Cafer Yildirim.

Speaker Bio: David Sontag is an Assistant Professor of Computer Science and Data Science at NYU, where Computer Science is part of the Courant Institute of Mathematical Sciences. His research focuses on machine learning and probabilistic inference, with a particular emphasis on applications to clinical medicine. For example, he is developing algorithms that learn probabilistic models for medical diagnosis directly from unstructured clinical data, automatically discovering and predicting latent (hidden) variables. Prof. Sontag collaborates with the Emergency Medicine Informatics Research Lab at Beth Israel Deaconess Medical Center and with Independence Blue Cross.

Previously, he was a postdoctoral researcher at Microsoft Research New England. He received his Ph.D. in Computer Science from MIT, where he worked with Tommi Jaakkola on approximate inference and learning in probabilistic models, and his bachelor's degree in Computer Science from UC Berkeley, where he worked in Stuart Russell's First-Order Probabilistic Logic group.
