Stochastics and Statistics Seminar Series


Stochastics and Statistics Seminar – Amit Daniely (Google)

MIT Ford Building (E18), Room 304, 50 Ames Street, Cambridge, MA, United States

Abstract: Can learning theory, as we know it today, form a theoretical basis for neural networks? I will try to discuss this question in light of two new results, one positive and one negative. Based on joint work with Roy Frostig, Vineet Gupta, and Yoram Singer, and with Vitaly Feldman. Biography: Amit Daniely is…

Unbiased Markov chain Monte Carlo with couplings

MIT Ford Building (E18), Room 304, 50 Ames Street, Cambridge, MA, United States

Abstract: Markov chain Monte Carlo methods provide consistent approximations of integrals as the number of iterations goes to infinity. However, these estimators are generally biased after any fixed number of iterations, which complicates parallel computation. In this talk I will explain how to remove this burn-in bias by using couplings of Markov chains and a…
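The coupling construction the abstract alludes to can be sketched on a toy problem. The snippet below is my own illustration under simple assumptions (a standard-normal target, random-walk Metropolis-Hastings, a maximal coupling of the proposals), not the speaker's exact construction: two chains are coupled so that they meet exactly in finite time, and a telescoping correction term removes the burn-in bias of the plain estimate.

```python
import numpy as np

def log_pi(x):
    return -0.5 * x * x                      # N(0,1) target, up to a constant

def normal_logpdf(x, mu, s):
    return -0.5 * ((x - mu) / s) ** 2 - np.log(s) - 0.5 * np.log(2 * np.pi)

def mh_step(x, s, rng):
    """One random-walk Metropolis-Hastings step for a single chain."""
    p = rng.normal(x, s)
    return p if np.log(rng.uniform()) <= log_pi(p) - log_pi(x) else x

def maximal_coupling(mu1, mu2, s, rng):
    """Proposals (p, q) with p ~ N(mu1, s), q ~ N(mu2, s), maximizing P(p == q)."""
    p = rng.normal(mu1, s)
    if np.log(rng.uniform()) + normal_logpdf(p, mu1, s) <= normal_logpdf(p, mu2, s):
        return p, p                          # proposals coincide exactly
    while True:
        q = rng.normal(mu2, s)
        if np.log(rng.uniform()) + normal_logpdf(q, mu2, s) > normal_logpdf(q, mu1, s):
            return p, q

def coupled_step(x, y, s, rng):
    """One coupled MH step; shared accept noise keeps met chains together."""
    p, q = maximal_coupling(x, y, s, rng)
    u = np.log(rng.uniform())
    x_new = p if u <= log_pi(p) - log_pi(x) else x
    y_new = q if u <= log_pi(q) - log_pi(y) else y
    return x_new, y_new

def unbiased_estimate(h, k=10, s=1.0, seed=1, max_t=10_000):
    """H_k = h(X_k) + sum_{t=k+1}^{tau-1} [h(X_t) - h(Y_{t-1})], tau = meeting time."""
    rng = np.random.default_rng(seed)
    X = [rng.normal()]                       # X_0
    Y = [rng.normal()]                       # Y_0; X is run one step ahead of Y
    X.append(mh_step(X[0], s, rng))          # X_1
    tau = max_t
    for t in range(1, max_t):
        if X[t] == Y[t - 1]:                 # exact meeting, thanks to the maximal coupling
            tau = t
            break
        x_new, y_new = coupled_step(X[t], Y[t - 1], s, rng)
        X.append(x_new)
        Y.append(y_new)
    while len(X) <= k:                       # if the chains met early, run X up to step k
        X.append(mh_step(X[-1], s, rng))
    return h(X[k]) + sum(h(X[t]) - h(Y[t - 1]) for t in range(k + 1, tau)), tau

est, tau = unbiased_estimate(lambda x: x)    # estimates E[X] under N(0,1), no burn-in bias
```

Averaging many independent copies of `est` (one per seed) gives an embarrassingly parallel, unbiased estimator, which is exactly why the bias removal matters for parallel computation.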

Statistics, Computation and Learning with Graph Neural Networks

Abstract: Deep learning, thanks mostly to convolutional architectures, has recently transformed computer vision and speech recognition. The ability of these architectures to encode geometric stability priors, while offering enough expressive power, is at the core of their success. In such settings, geometric stability is expressed in terms of local deformations, and it is enforced thanks to localized convolutional…
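To make the title's subject concrete, here is a minimal sketch of one graph-convolution (message-passing) step, my own illustration rather than the speaker's model: each node averages its neighbors' features through a symmetrically normalized adjacency, then applies a shared linear map and a nonlinearity. The graph, sizes, and weights below are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One propagation step: relu(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # degrees of the augmented graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy graph: 4 nodes on a path, 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))                  # node features
W = rng.normal(size=(3, 2))                  # shared weights
H_out = gcn_layer(A, H, W)                   # shape (4, 2), entries >= 0
```

A defining stability property of such layers is permutation equivariance: relabeling the nodes permutes the output rows the same way, which is the graph analogue of the geometric priors the abstract mentions.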

Generative Models and Compressed Sensing

MIT Ford Building (E18), Room 304, 50 Ames Street, Cambridge, MA, United States

Abstract: The goal of compressed sensing is to estimate a vector from an under-determined system of noisy linear measurements, by making use of prior knowledge in the relevant domain. For most results in the literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed…
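For contrast with the generative-model approach in the talk, the classical sparsity-based baseline the abstract describes can be sketched in a few lines: recover a sparse vector from underdetermined measurements y = Ax by iterative soft-thresholding (ISTA for the lasso). Dimensions, support, and the regularization weight below are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 100, 3                         # 50 measurements, 100 unknowns, 3 nonzeros
A = rng.normal(size=(n, d)) / np.sqrt(n)     # random sensing matrix
x_true = np.zeros(d)
x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]       # sparse signal
y = A @ x_true                               # noiseless measurements

lam = 0.01                                   # l1 regularization weight
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
x = np.zeros(d)
for _ in range(1000):
    g = x - (A.T @ (A @ x - y)) / L          # gradient step on 0.5 * ||Ax - y||^2
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft-threshold

err = np.linalg.norm(x - x_true)             # small: the sparse signal is recovered
```

With n well below d, recovery is only possible because of the sparsity prior; the talk's point is to replace that prior with a generative model while keeping similar guarantees.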

Challenges in Developing Learning Algorithms to Personalize Treatment in Real Time

MIT Ford Building (E18), Room 304, 50 Ames Street, Cambridge, MA, United States

Abstract: A formidable challenge in designing sequential treatments is to determine when, and in which context, it is best to deliver treatments. Consider treatment for individuals struggling with chronic health conditions. Operationally, designing the sequential treatments involves the construction of decision rules that take as input the current context of an individual and output a recommended treatment. That…

Connections between structured estimation and weak submodularity

MIT Ford Building (E18), Room 304, 50 Ames Street, Cambridge, MA, United States

Abstract: Many modern statistical estimation problems rely on imposing additional structure in order to reduce statistical complexity and provide interpretability. Unfortunately, these structures are often combinatorial in nature and result in computationally challenging problems. In parallel, the combinatorial optimization community has invested significant effort in developing algorithms that can approximately solve such optimization problems…
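A canonical example of the connection in the title is greedy forward selection for sparse regression: weak submodularity of the goodness-of-fit objective is what justifies approximation guarantees for exactly this kind of greedy scheme. The sketch below is my own toy instance (sizes, support, and signal strengths are assumptions), not the speaker's results.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 80, 30
A = rng.normal(size=(n, d))                  # design matrix
true_support = [4, 11, 27]
x_true = np.zeros(d)
x_true[true_support] = [3.0, -2.0, 1.5]
y = A @ x_true                               # noiseless response

def residual_norm(S, A, y):
    """Least-squares residual after projecting y onto the columns in S."""
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    return np.linalg.norm(y - A[:, S] @ coef)

S = []
for _ in range(3):                           # greedily pick 3 features
    best = min((j for j in range(d) if j not in S),
               key=lambda j: residual_norm(S + [j], A, y))
    S.append(best)
```

Each greedy step adds the single feature that most reduces the residual; the weak-submodularity ratio of the R^2 objective bounds how far this greedy solution can fall below the best subset of the same size.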

Variable selection using presence-only data with applications to biochemistry

MIT Ford Building (E18), Room 304, 50 Ames Street, Cambridge, MA, United States

Abstract: In a number of problems, we are presented with positive and unlabelled data, referred to as presence-only responses. The application I present today involves studying the relationship between protein sequence and function; presence-only data arises because, for many experiments, it is impossible to obtain a large set of negative (non-functional) sequences. Furthermore, if…

User-friendly guarantees for the Langevin Monte Carlo

MIT Ford Building (E18), Room 304, 50 Ames Street, Cambridge, MA, United States

Abstract: In this talk, I will revisit the recently established theoretical guarantees for the convergence of the Langevin Monte Carlo algorithm for sampling from a smooth and (strongly) log-concave density. I will discuss the existing results when the accuracy of sampling is measured in the Wasserstein distance, and provide further insights on relations between, on the one…
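The algorithm the abstract analyzes is short enough to sketch. This is the standard unadjusted Langevin update, theta ← theta − gamma · ∇U(theta) + sqrt(2·gamma) · xi, targeting a density proportional to exp(−U); the target, step size, and iteration counts below are my illustrative choices, not the talk's settings.

```python
import numpy as np

def grad_U(theta):
    return theta                             # U(theta) = theta^2 / 2, so the target is N(0, 1)

rng = np.random.default_rng(0)
gamma = 0.1                                  # step size
theta = 5.0                                  # deliberately bad starting point
samples = []
for t in range(6000):
    # Langevin update: gradient drift plus Gaussian noise of variance 2 * gamma
    theta = theta - gamma * grad_U(theta) + np.sqrt(2 * gamma) * rng.normal()
    if t >= 1000:                            # discard warm-up iterations
        samples.append(theta)
samples = np.array(samples)
```

The guarantees discussed in the talk quantify, as a function of gamma and the number of iterations, how close the law of such iterates is to the target, with the small bias introduced by discretization controlled by the step size.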

Optimization’s Implicit Gift to Learning: Understanding Optimization Bias as a Key to Generalization

MIT Ford Building (E18), Room 304, 50 Ames Street, Cambridge, MA, United States

Abstract: It is becoming increasingly clear that the implicit regularization afforded by optimization algorithms plays a central role in machine learning, especially when using large, deep neural networks. We have a good understanding of the implicit regularization afforded by stochastic approximation algorithms, such as SGD, and as I will review, we understand and…
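A concrete, textbook instance of the implicit bias the abstract describes (my illustration, not the talk's new results): on an underdetermined least-squares problem, plain gradient descent initialized at zero converges to the minimum-norm interpolating solution, even though no regularizer appears anywhere in the objective.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50                                # more unknowns than equations
A = rng.normal(size=(n, d))
y = rng.normal(size=n)

w = np.zeros(d)                              # initialization at zero is essential here
eta = 1.0 / np.linalg.norm(A, 2) ** 2        # safe step size (inverse squared spectral norm)
for _ in range(20_000):
    w = w - eta * A.T @ (A @ w - y)          # plain gradient descent on 0.5 * ||Aw - y||^2

w_min_norm = np.linalg.pinv(A) @ y           # the minimum-norm interpolant
```

The mechanism is that the iterates never leave the row space of A, so among the infinitely many zero-loss solutions, gradient descent can only reach the one of minimum Euclidean norm; this implicit norm control, rather than anything explicit in the loss, is what drives generalization in such overparameterized problems.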


© MIT Institute for Data, Systems, and Society | 77 Massachusetts Avenue | Cambridge, MA 02139-4307 | 617-253-1764