E18-304

Events at this venue
Frontiers of Efficient Neural-Network Learnability

E18-304, United States

Abstract: What are the most expressive classes of neural networks that can be learned, provably, in polynomial time in a distribution-free setting? In this talk we give the first efficient algorithm for learning neural networks with two nonlinear layers using tools for solving isotonic regression, a nonconvex (but tractable) optimization problem. If we further assume the…
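
As background for the subroutine the abstract leans on, here is a minimal isotonic-regression sketch in Python, using scikit-learn's implementation; the synthetic data and this usage are illustrative and not the talk's algorithm:

```python
# A minimal illustration of isotonic regression, the subroutine the abstract
# builds on; this is not the talk's full learning algorithm.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, size=200))          # 1-D inputs
y = np.log1p(x) + rng.normal(scale=0.2, size=200)  # noisy monotone target

# Fit the best monotone (non-decreasing) function in least squares;
# solved exactly by the pool-adjacent-violators algorithm.
iso = IsotonicRegression(increasing=True)
y_hat = iso.fit_transform(x, y)

print("training MSE:", np.mean((y_hat - y) ** 2))
```

The pool-adjacent-violators algorithm solves this problem exactly in near-linear time, which is what makes isotonic regression an attractive building block despite the lack of convexity in the larger learning problem.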

Power of Experimental Design and Active Learning

E18-304, United States

Classical supervised machine learning algorithms focus on the setting where the algorithm has access to a fixed labeled dataset obtained prior to any analysis. In most applications, however, we have control over the data collection process, such as which image labels to obtain, which drug-gene interactions to record, which network routes to probe, which movies…
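
To make the contrast with fixed-dataset learning concrete, here is a toy pool-based active-learning loop using uncertainty sampling; the synthetic dataset, model, and query budget are illustrative assumptions, not the talk's methods:

```python
# Toy pool-based active learning: repeatedly query the label of the pool
# point the current model is least certain about.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=10, replace=False))  # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):                      # query 20 labels, one at a time
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])[:, 1]
    i = pool[int(np.argmin(np.abs(probs - 0.5)))]  # most uncertain point
    labeled.append(i)
    pool.remove(i)

print("accuracy on full set:", model.score(X, y))
```

The point of the loop is that each label request depends on everything learned so far, which is exactly the control over data collection that the fixed-dataset setting assumes away.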

Some New Insights On Transfer Learning

E18-304, United States

Abstract: The problem of transfer and domain adaptation is ubiquitous in machine learning and concerns situations where predictive technologies, trained on a given source dataset, have to be transferred to a new target domain that is somewhat related. For example, transferring voice recognition trained on American English accents to apply to Scottish accents, with minimal…

Probabilistic Modeling meets Deep Learning using TensorFlow Probability

E18-304, United States

IDS.190 - Topics in Bayesian Modeling and Computation
Speaker: Brian Patton (Google AI)

Abstract: TensorFlow Probability provides a toolkit to enable researchers and practitioners to integrate uncertainty with gradient-based deep learning on modern accelerators. In this talk we'll walk through some practical problems addressed using TFP; discuss the high-level interfaces, goals, and principles of the…
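
For a flavor of the workflow described, here is a small TFP sketch that fits the parameters of a Normal distribution by gradient descent on the negative log-likelihood; the data and optimizer settings are made-up illustrations, not code from the talk:

```python
# A probability distribution whose parameters are trained by gradient
# descent: the basic pattern of combining TFP with TensorFlow autodiff.
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

data = tf.constant([2.1, 1.9, 2.4, 2.2, 1.8])  # observed samples
loc = tf.Variable(0.0)                         # trainable mean
log_scale = tf.Variable(0.0)                   # trainable log-std

opt = tf.keras.optimizers.Adam(learning_rate=0.1)
for _ in range(200):
    with tf.GradientTape() as tape:
        dist = tfd.Normal(loc=loc, scale=tf.exp(log_scale))
        nll = -tf.reduce_mean(dist.log_prob(data))  # negative log-likelihood
    grads = tape.gradient(nll, [loc, log_scale])
    opt.apply_gradients(zip(grads, [loc, log_scale]))

print("fitted mean:", loc.numpy(), "fitted std:", tf.exp(log_scale).numpy())
```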

Automated Data Summarization for Scalability in Bayesian Inference

E18-304, United States

IDS.190 - Topics in Bayesian Modeling and Computation

Abstract: Many algorithms take prohibitively long to run on modern, large datasets. But even in complex datasets, many data points may be at least partially redundant for some task of interest. So one might instead construct and use a weighted subset of the data (called a…
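
The simplest version of the weighted-subset idea is uniform subsampling with importance weights, chosen so that the subset's weighted log-likelihood is an unbiased estimate of the full-data log-likelihood. This is a baseline sketch only; the data, model, and subset size below are assumptions, and real coreset constructions, the talk's subject, are far more careful:

```python
# Uniform subsampling with weights: each retained point "stands in" for
# N/m points, making the weighted log-likelihood unbiased for the full one.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, size=100_000)   # large dataset
m = 500                                    # summary size
idx = rng.choice(len(data), size=m, replace=False)
weights = np.full(m, len(data) / m)

theta = 2.8                                # candidate parameter
full_ll = norm.logpdf(data, loc=theta).sum()
approx_ll = (weights * norm.logpdf(data[idx], loc=theta)).sum()
print("full:", full_ll, "approx:", approx_ll)  # should be close
```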

GANs, Optimal Transport, and Implicit Density Estimation

E18-304, United States

Abstract: We first study the rate of convergence for learning distributions with the adversarial framework and Generative Adversarial Networks (GANs), which subsumes Wasserstein, Sobolev, and MMD GANs as special cases. We study a wide range of parametric and nonparametric target distributions, under a collection of objective evaluation metrics. On the nonparametric end, we investigate the…
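
To ground the evaluation-metric discussion, here is a tiny sketch computing the empirical 1-D Wasserstein-1 distance between a "generated" sample and a target sample, one of the IPM-style metrics the adversarial framework subsumes; the two distributions here are illustrative assumptions:

```python
# Empirical Wasserstein-1 distance between generated and target samples,
# as a concrete instance of an objective evaluation metric for a generator.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
target = rng.normal(loc=0.0, scale=1.0, size=5000)
generated = rng.normal(loc=0.3, scale=1.1, size=5000)  # imperfect generator

print("empirical W1 distance:", wasserstein_distance(generated, target))
```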

Conference on Synthetic Controls and Related Methods

E18-304, United States

The organizers are Alberto Abadie (MIT), Victor Chernozhukov (MIT), and Guido Imbens (Stanford University). The program is posted online; participation is by invitation only.

Counting and sampling at low temperatures

E18-304, United States

Abstract: We consider the problem of efficient sampling from the hard-core and Potts models from statistical physics. On certain families of graphs, phase transitions in the underlying physics model are linked to changes in the performance of some sampling algorithms, including Markov chains. We develop new sampling and counting algorithms that exploit the phase transition…
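
As a concrete example of the kind of Markov chain whose behavior changes across the phase transition, here is Glauber dynamics for the hard-core model on a small grid; the grid size, fugacity, and step count are illustrative assumptions:

```python
# Glauber dynamics for the hard-core model: the stationary distribution
# weights each independent set S by lam ** len(S).
import random

n = 10                    # n x n grid
lam = 1.0                 # fugacity (hypothetical choice)
occupied = set()          # current independent set

def neighbors(v):
    x, y = v
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < n and 0 <= y + dy < n:
            yield (x + dx, y + dy)

random.seed(0)
for _ in range(100_000):  # single-site Glauber updates
    v = (random.randrange(n), random.randrange(n))
    if random.random() < lam / (1 + lam):
        # Try to occupy v; allowed only if the set stays independent.
        if all(u not in occupied for u in neighbors(v)):
            occupied.add(v)
    else:
        occupied.discard(v)

print("sampled independent set size:", len(occupied))
```

At low fugacity this chain mixes rapidly; past the phase transition it can get stuck in one "phase" for exponentially long, which is the breakdown in sampling performance the abstract alludes to.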

Design and Analysis of Two-Stage Randomized Experiments

E18-304, United States

Abstract: In many social science experiments, subjects often interact with each other, and as a result one unit's treatment can influence the outcome of another unit. Over the last decade, significant progress has been made towards causal inference in the presence of such interference between units. In this talk, we will discuss two-stage randomized…
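
A two-stage randomized design can be sketched in a few lines: stage one randomizes each cluster to a treatment saturation, and stage two randomizes units within each cluster at that saturation. The cluster counts and saturation levels below are illustrative assumptions:

```python
# Two-stage randomization: cluster-level saturations first, then
# unit-level treatment draws within each cluster.
import numpy as np

rng = np.random.default_rng(0)
n_clusters, units_per_cluster = 20, 50
saturations = rng.choice([0.3, 0.7], size=n_clusters)  # stage 1: clusters

assignment = {}
for c, s in enumerate(saturations):
    treat = rng.random(units_per_cluster) < s           # stage 2: units
    assignment[c] = treat

share = np.mean([t.mean() for t in assignment.values()])
print("overall treated share:", round(share, 3))
```

Varying the saturation across clusters is what lets such designs separate the direct effect of a unit's own treatment from spillover effects of its neighbors' treatments.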

