Events

Upcoming Events

Hamed Hassani (UPenn)

Date: Friday, September 11, 2020
Time: 11:00 AM – 12:00 PM (CDT; UTC -5)
Location: Online (Zoom link will be provided)

Francis Bach (Inria)

Date: Friday, September 18, 2020
Time: 11:00 AM – 12:00 PM (CDT; UTC -5)
Location: Online (Zoom link will be provided)

Title: TBD

Virginia Smith (CMU)

Date: Friday, September 25, 2020
Time: 11:00 AM – 12:00 PM (CDT; UTC -5)
Location: Online (Zoom link will be provided)

Satyen Kale (Google Research) 

Date: Friday, October 2, 2020
Time: 1:30 PM – 2:30 PM (CDT; UTC -5)
Location: Online (Zoom link will be provided)

Title: TBD

Abstract: TBD

Rahul Jain (USC)

Date: Friday, October 9, 2020
Time: 11:00 AM – 12:00 PM (CDT; UTC -5)
Location: Online (Zoom link will be provided)

Title: TBD 

Abstract: TBD

Rayadurgam Srikant (UIUC)

Date: Friday, October 16, 2020
Time: 11:00 AM – 12:00 PM (CDT; UTC -5)
Location: Online (Zoom link will be provided)

Title: TBD 

Abstract: TBD

Maryam Fazel (University of Washington)

Date: Friday, November 13, 2020
Time: 11:00 AM – 12:00 PM (CST; UTC -6)
Location: Online (Zoom link will be provided)

Title: TBD

Abstract: TBD

Stefanie Jegelka (MIT)

Date: Friday, November 20, 2020
Time: 11:00 AM – 12:00 PM (CST; UTC -6)
Location: Online (Zoom link will be provided)

Title: TBD

Abstract: TBD

Recent Events

08 May 2020

Join us for a special virtual installment of the ML Seminar Series:

In this talk, we aim to quantify the robustness of distributed training against worst-case failures and adversarial nodes. We show that there is a gap in robustness guarantees, depending on whether adversarial nodes have full control of the hardware, the training data, or both. Using ideas from robust statistics and coding theory, we establish robust and scalable training methods for centralized parameter-server systems.
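
As background, the following is a minimal sketch of one classical robust-statistics aggregator for the parameter-server setting: the coordinate-wise median of worker gradients, which tolerates a minority of adversarial workers. It illustrates the general idea only; it is not the specific method presented in the talk, and the function and variable names are placeholders.

import numpy as np

def median_aggregate(worker_grads):
    # Coordinate-wise median of worker gradients: as long as fewer than
    # half of the workers are adversarial, each coordinate of the result
    # stays close to the honest gradients.
    stacked = np.stack(worker_grads)           # shape: (num_workers, dim)
    return np.median(stacked, axis=0)

# Toy example: 4 honest workers plus 1 adversarial worker.
honest = [np.array([1.0, 2.0]) + 0.1 * np.random.randn(2) for _ in range(4)]
adversarial = [np.array([1e6, -1e6])]          # worst-case corrupted update
print(median_aggregate(honest + adversarial))  # close to [1.0, 2.0]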

07 May 2020

Few-shot classification, the task of adapting a classifier to unseen classes given a small labeled dataset, is an important step on the path toward human-like machine learning. I will present what I think are some of the key advances and open questions in this area. I will then focus on the fundamental issue of overfitting in the few-shot scenario. Bayesian methods are well-suited to tackling this issue because they allow practitioners to specify prior beliefs and update those beliefs in light of observed data.
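
As a simplified illustration of this Bayesian viewpoint (not the speaker's method; the names and hyperparameter values below are assumptions), the sketch updates a Gaussian prior over a class-prototype mean using a handful of labeled embeddings. With only a few examples the prior dominates, which is one way Bayesian treatments curb overfitting in the few-shot regime.

import numpy as np

def posterior_prototype(support_embeddings, prior_mean, prior_var, obs_var):
    # Conjugate Gaussian update, applied per dimension:
    #   posterior_var  = 1 / (1/prior_var + n/obs_var)
    #   posterior_mean = posterior_var * (prior_mean/prior_var + sum(x)/obs_var)
    x = np.asarray(support_embeddings)          # shape: (n_shots, dim)
    n = x.shape[0]
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + x.sum(axis=0) / obs_var)
    return post_mean, post_var

# Toy example: a 5-shot update of a 3-dimensional class prototype.
shots = np.random.randn(5, 3) + 2.0
mean, var = posterior_prototype(shots, prior_mean=np.zeros(3),
                                prior_var=1.0, obs_var=0.5)
print(mean, var)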

06 Mar 2020

Large-scale machine learning training, in particular distributed stochastic gradient descent (SGD), needs to be robust to inherent system variability such as unpredictable computation and communication delays. This work considers a distributed SGD framework where each worker node is allowed to perform local model updates and the resulting models are averaged periodically. Our goal is to analyze and improve the true speed of error convergence with respect to wall-clock time (instead of the number of iterations).
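
To make the framework concrete, here is a minimal sketch of local SGD with periodic model averaging on a toy least-squares problem. The worker count, local-update period tau, and learning rate are illustrative placeholder values, and the code is a generic instance of the scheme rather than the implementation analyzed in the talk.

import numpy as np

def local_sgd(shards, num_rounds=50, tau=8, lr=0.05):
    # shards: list of (X_k, y_k) pairs, one per worker, for the loss
    #         0.5 * ||X_k w - y_k||^2. Each round, every worker takes
    #         tau local SGD steps from the current global model, then
    #         the local models are averaged (periodic averaging).
    dim = shards[0][0].shape[1]
    w_global = np.zeros(dim)
    for _ in range(num_rounds):
        local_models = []
        for X, y in shards:
            w = w_global.copy()
            for _ in range(tau):
                i = np.random.randint(len(y))
                grad = (X[i] @ w - y[i]) * X[i]   # per-sample gradient
                w -= lr * grad
            local_models.append(w)
        w_global = np.mean(local_models, axis=0)  # communication step
    return w_global

# Toy example: 4 workers whose data come from the same linear model.
w_true = np.array([1.0, -2.0, 0.5])
shards = [(X, X @ w_true + 0.01 * np.random.randn(100))
          for X in (np.random.randn(100, 3) for _ in range(4))]
print(local_sgd(shards))  # should land near w_true

Varying tau trades communication for accuracy: a larger tau means fewer averaging rounds per gradient step but more drift between workers, which relates to the wall-clock trade-off the abstract targets.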