Traditional modeling assumptions in statistical learning rarely hold in practice due to noisy inputs, shifts in environment, omitted variables, and even adversarial attacks. This course surveys a range of emerging topics on reliability and robustness in machine learning, which extend the standard learning paradigm that optimizes only for average-case performance. The first quarter of the class will cover foundational techniques such as generalization bounds, M-estimation theory, and information-theoretic lower bounds. We will then turn to selected topics in (distributionally) robust optimization, fairness, covariate shift, adversarial training, and causal inference. This is a doctoral-level class intended for students pursuing research in related areas. There are no formal prerequisites, but the class will be fast-paced and will assume a strong background in machine learning, statistics, and optimization.
Division: Decision, Risk and Operations

Spring 2023
B9145 - 001