News

ML Seminar: The Role of Regularization in Overparameterized Neural Networks

Oct. 19, 2020
R. Srikant (University of Illinois at Urbana-Champaign). Date: Friday, October 16, 2020. Time: 11:00 AM – 12:00 PM (CDT; UTC -5). Title: The Role of Regularization in Overparameterized Neural Networks

Wireless ML Seminar - Learning to Learn to Communicate

Sept. 28, 2020
The application of supervised learning techniques for the design of the physical layer of a communication link is often impaired by the limited amount of pilot data available for each device, while the use of unsupervised learning is typically limited by the need to carry out a large number of training iterations. In this talk, meta-learning, or learning-to-learn, is introduced as a tool to alleviate these problems.

ML Seminar - On Heterogeneity in Federated Settings

Sept. 28, 2020
A defining characteristic of federated learning is the presence of heterogeneity, i.e., data and compute may differ significantly across the network. In this talk, I show that the challenge of heterogeneity pervades the machine learning process in federated settings, affecting issues such as optimization, modeling, and fairness. In terms of optimization, I discuss FedProx, a distributed optimization method that offers robustness to systems and statistical heterogeneity.
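
As a rough sketch of the proximal idea behind FedProx (an illustrative sketch under simplified assumptions, not the authors' implementation; the function names and defaults below are made up for the example):

    import numpy as np

    def fedprox_local_update(w_global, grad_fn, mu=0.1, lr=0.01, steps=10):
        # FedProx-style local step: minimize the client's own loss plus a
        # proximal term (mu/2) * ||w - w_global||^2 that keeps the local model
        # close to the current global model despite heterogeneous local data.
        w = w_global.copy()
        for _ in range(steps):
            w -= lr * (grad_fn(w) + mu * (w - w_global))
        return w

    def fedprox_round(w_global, client_grad_fns, mu=0.1):
        # One communication round: each client runs local steps, then the
        # server averages the returned models (unweighted average here).
        local_models = [fedprox_local_update(w_global, g, mu) for g in client_grad_fns]
        return np.mean(local_models, axis=0)

The proximal coefficient mu limits how far a client can drift from the global model, which is the mechanism behind FedProx's robustness to both statistical and systems heterogeneity.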

ML Seminar: On the convergence of gradient descent for wide two-layer neural networks

Sept. 21, 2020
Many supervised learning methods are naturally cast as optimization problems. For prediction models which are linear in their parameters, this often leads to convex problems for which many guarantees exist. Models which are non-linear in their parameters, such as neural networks, lead to non-convex optimization problems for which guarantees are harder to obtain. In this talk, I will consider two-layer neural networks with homogeneous activation functions where the number of hidden neurons tends to infinity, and show how qualitative convergence guarantees may be derived.
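
For concreteness, a standard way to write the model class in question (common mean-field notation; not necessarily the exact setup of the talk) is

    f_m(x) = \frac{1}{m} \sum_{j=1}^{m} a_j \, \sigma(w_j^\top x), \qquad \sigma(u) = \max(0, u),

where the ReLU activation \sigma is positively homogeneous; the question is then how gradient descent (or gradient flow) on the parameters (a_j, w_j) behaves as the width m tends to infinity.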

ML Seminar - Two Facets of Learning Robust Models (Hamed Hassani)

Sept. 14, 2020
Two Facets of Learning Robust Models: Fundamental Limits and Generalization to Natural Out-of-Distribution Inputs. Speaker: Prof. Hamed Hassani (Univ. of Pennsylvania)

ML Seminar - Modeling Uncertainty in Learning with Little Data

May 14, 2020
Few-shot classification, the task of adapting a classifier to unseen classes given a small labeled dataset, is an important step on the path toward human-like machine learning. I will present what I think are some of the key advances and open questions in this area. I will then focus on the fundamental issue of overfitting in the few-shot scenario. Bayesian methods are well-suited to tackling this issue because they allow practitioners to specify prior beliefs and update those beliefs in light of observed data.
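
As a toy illustration of that Bayesian update (a generic conjugate-Gaussian sketch, not the speaker's model; all names and defaults below are made up):

    import numpy as np

    def gaussian_posterior(x_support, prior_mean, prior_var, noise_var=1.0):
        # Treat a class prototype as a Gaussian prior and update it with the
        # few labeled support examples (conjugate Gaussian-Gaussian update).
        # With very little data the posterior stays close to the prior,
        # which is the mechanism that mitigates few-shot overfitting.
        x_support = np.asarray(x_support, dtype=float)
        n = x_support.shape[0]
        post_var = 1.0 / (1.0 / prior_var + n / noise_var)
        post_mean = post_var * (prior_mean / prior_var + x_support.sum(axis=0) / noise_var)
        return post_mean, post_var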

ML Seminar - Robust Distributed Training! But at What Cost?

May 14, 2020
Abstract: In this talk, we aim to quantify the robustness of distributed training against worst-case failures and adversarial nodes. We show that there is a gap between robustness guarantees, depending on whether adversarial nodes have full control of the hardware, the training data, or both. Using ideas from robust statistics and coding theory, we establish robust and scalable training methods for centralized, parameter server systems. Perhaps unsurprisingly, we prove that robustness is impossible when a central authority does not own the training data, e.g., in federated learning systems.
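
One standard robust-statistics ingredient, shown here purely as an illustration (a coordinate-wise trimmed mean; not the coding-theoretic construction from the talk), replaces the plain gradient average at the parameter server:

    import numpy as np

    def trimmed_mean_aggregate(worker_grads, trim_frac=0.1):
        # worker_grads: array of shape (num_workers, dim).
        # Sort each coordinate across workers and drop the largest and
        # smallest trim_frac fraction before averaging, so a bounded number
        # of adversarial workers cannot arbitrarily skew the update.
        grads = np.sort(np.asarray(worker_grads, dtype=float), axis=0)
        k = int(trim_frac * grads.shape[0])
        if k > 0:
            grads = grads[k:-k]
        return grads.mean(axis=0)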

Stefanie Jegelka: An introduction to Submodularity, Part 1

Nov. 16, 2015
Abstract: Submodular functions capture a wide spectrum of discrete problems in machine learning, signal processing and computer vision. They are characterized by intuitive notions of diminishing returns and economies of scale, and often lead to practical algorithms with theoretical guarantees. In the first part of this talk, I will give a general introduction to the concept of submodular functions, their optimization and example applications in machine learning.
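
As a minimal, textbook-style illustration of diminishing returns (not taken from the talk): a coverage function is submodular, meaning the marginal gain of adding a set can only shrink as the current selection grows.

    def coverage(sets, S):
        # Number of ground elements covered by the sets indexed by S.
        covered = set()
        for i in S:
            covered |= sets[i]
        return len(covered)

    def marginal_gain(sets, S, i):
        # Gain from adding set i to the current selection S.
        return coverage(sets, S | {i}) - coverage(sets, S)

    # Diminishing returns: for A a subset of B and i outside B, adding i to
    # the smaller selection A helps at least as much as adding it to B.
    sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
    A, B, i = {0}, {0, 1}, 2
    assert marginal_gain(sets, A, i) >= marginal_gain(sets, B, i)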

Stefanie Jegelka: An introduction to Submodularity, Part 2

Nov. 16, 2015
Abstract: Submodular functions capture a wide spectrum of discrete problems in machine learning, signal processing and computer vision. They are characterized by intuitive notions of diminishing returns and economies of scale, and often lead to practical algorithms with theoretical guarantees. In the first part of this talk, I will give a general introduction to the concept of submodular functions, their optimization and example applications in machine learning.

WNCG Seminar Series: David Sontag: How Hard is Inference for Structured Prediction?

Nov. 16, 2015
Abstract: Many machine learning tasks can be posed as structured prediction, where the goal is to predict a labeling or structured object. For example, the input may be an image or a sentence, and the output is a labeling such as an assignment of each pixel in the image to foreground or background, or the parse tree for the sentence. Despite marginal and MAP inference being NP-hard in the worst case for many of these models, approximate inference algorithms are remarkably successful, and as a result structured prediction is widely used.
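
To make the inference problem concrete (a toy sketch, not from the talk): MAP inference searches for the highest-scoring joint labeling under per-node and per-edge scores, and exhaustive search works only for tiny models, which is why both the worst-case hardness and the empirical success of approximate inference matter.

    import itertools
    import numpy as np

    def map_inference_bruteforce(unary, pairwise, edges):
        # Exhaustive MAP inference for a tiny pairwise model with binary labels.
        # unary: (n, 2) array of per-node scores; pairwise: (2, 2) array of
        # edge scores; edges: list of (i, j) index pairs. Runtime is 2^n,
        # which is why real structured predictors rely on approximate inference.
        n = unary.shape[0]
        best_score, best_labels = -np.inf, None
        for labels in itertools.product([0, 1], repeat=n):
            score = sum(unary[i, labels[i]] for i in range(n))
            score += sum(pairwise[labels[i], labels[j]] for i, j in edges)
            if score > best_score:
                best_score, best_labels = score, labels
        return best_labels, best_score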