Stochastics and Statistics Seminar Series


Model-X knockoffs for controlled variable selection in high dimensional nonlinear regression

E18-304, United States

Abstract: Many contemporary large-scale applications, from genomics to advertising, involve linking a response of interest to a large set of potential explanatory variables in a nonlinear fashion, such as when the response is binary. Although this modeling problem has been extensively studied, it remains unclear how to effectively select important variables while controlling the fraction…
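The selection step of the knockoff filter alluded to here can be sketched in a few lines. This is a minimal illustration, assuming hypothetical feature statistics W (large positive when a variable looks important, symmetric about zero under the null); actually computing W requires constructing knockoff copies of the design, which is omitted, and `knockoff_select` is an illustrative name, not from the talk.

```python
def knockoff_select(W, q):
    """Indices selected by the knockoffs+ threshold at target FDR level q.

    Scans candidate thresholds t and picks the smallest one for which
    (1 + #{W_j <= -t}) / max(1, #{W_j >= t}) <= q.
    """
    candidates = sorted(abs(w) for w in W if w != 0)
    for t in candidates:
        neg = sum(1 for w in W if w <= -t)
        pos = sum(1 for w in W if w >= t)
        if (1 + neg) / max(pos, 1) <= q:
            return [j for j, w in enumerate(W) if w >= t]
    return []  # no threshold achieves the target level

# Toy statistics: a few strong positives, small noise around zero.
W = [5.0, 4.2, 3.7, 3.1, -0.4, 0.2, -0.1, 0.3, 2.8, -0.5]
print(knockoff_select(W, q=0.2))  # → [0, 1, 2, 3, 8]
```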

Bias Reduction and Asymptotic Efficiency in Estimation of Smooth Functionals of High-Dimensional Covariance

E18-304, United States

Abstract: We discuss a recent approach to bias reduction in a problem of estimation of smooth functionals of high-dimensional parameters of statistical models. In particular, this approach has been developed in the case of estimation of functionals of a covariance operator Σ : R^d → R^d of the form ⟨f(Σ), B⟩ based on n i.i.d. observations…

Reducibility and Computational Lower Bounds for Some High-dimensional Statistics Problems

E18-304, United States

Abstract: The prototypical high-dimensional statistics problem entails finding a structured signal in noise. Many of these problems exhibit an intriguing phenomenon: the amount of data needed by all known computationally efficient algorithms far exceeds what is needed for inefficient algorithms that search over all possible structures. A line of work initiated by Berthet and Rigollet…

Large girth approximate Steiner triple systems

E18-304, United States

Abstract: In 1973 Erdős asked whether there are n-vertex partial Steiner triple systems with arbitrarily high girth and quadratically many triples. (Here girth is defined as the smallest integer g ≥ 4 for which some g-element vertex-set contains at least g-2 triples.) We answer this question by showing existence of approximate Steiner triple systems with…
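The girth definition in parentheses can be checked by brute force on toy triple systems; a minimal sketch in which the function `girth` and the tiny examples are hypothetical, and the subset search is feasible only for very small n:

```python
from itertools import combinations

def girth(n, triples):
    """Smallest g >= 4 such that some g-vertex set spans >= g - 2 triples;
    returns None if no such g exists (i.e., girth is infinite)."""
    for g in range(4, n + 1):
        for S in combinations(range(n), g):
            vertex_set = set(S)
            inside = sum(1 for t in triples if set(t) <= vertex_set)
            if inside >= g - 2:
                return g
    return None

# Two triples sharing a pair of vertices give a 4-set with 2 >= 4 - 2 triples.
print(girth(5, [(0, 1, 2), (0, 1, 3)]))  # → 4
# A single triple alone never meets the condition, so the girth is infinite.
print(girth(5, [(0, 1, 2)]))  # → None
```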

Optimization of the Sherrington-Kirkpatrick Hamiltonian

E18-304, United States

Andrea Montanari, Professor, Department of Electrical Engineering and Department of Statistics, Stanford University. This lecture is in conjunction with the LIDS Student Conference. Abstract: Let A be an n × n symmetric random matrix with independent and identically distributed Gaussian entries above the diagonal. We consider the problem of maximizing xᵀAx over binary vectors with ±1 entries…
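The maximization problem stated in the abstract can be illustrated with naive single-spin-flip local search. This is a toy sketch, not the message-passing algorithm of the lecture; `local_search` and the problem size are illustrative choices.

```python
import random

def sk_objective(A, x):
    """Evaluate x^T A x for a spin configuration x in {-1, +1}^n."""
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def local_search(A, x):
    """Flip single spins (in place) while any flip increases x^T A x."""
    best = sk_objective(A, x)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] = -x[i]
            val = sk_objective(A, x)
            if val > best:
                best = val
                improved = True
            else:
                x[i] = -x[i]  # revert the non-improving flip
    return best

random.seed(0)
n = 20
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        A[i][j] = A[j][i] = random.gauss(0, 1)  # i.i.d. above the diagonal

x = [random.choice([-1, 1]) for _ in range(n)]
start = sk_objective(A, x)
end = local_search(A, x)
print(end >= start)  # local search never decreases the objective → True
```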

Medical Image Imputation

E18-304, United States

Abstract: We present an algorithm for creating high resolution anatomically plausible images that are consistent with acquired clinical brain MRI scans with large inter-slice spacing. Although large databases of clinical images contain a wealth of information, medical acquisition constraints result in sparse scans that miss much of the anatomy. These characteristics often render computational analysis…

TAP free energy, spin glasses, and variational inference

E18-304, United States

Abstract: We consider the Sherrington-Kirkpatrick model of spin glasses with ferromagnetically biased couplings. For a specific choice of the couplings' mean, the resulting Gibbs measure is equivalent to the Bayesian posterior for a high-dimensional estimation problem known as "Z2 synchronization". Statistical physics suggests computing the expectation with respect to this Gibbs measure (the posterior mean…

Capacity lower bound for the Ising perceptron

E18-304, United States

Abstract: The perceptron is a toy model of a simple neural network that stores a collection of given patterns. Its analysis reduces to a simple problem in high-dimensional geometry, namely, understanding the intersection of the cube (or sphere) with a collection of random half-spaces. Despite the simplicity of this model, its high-dimensional asymptotics are not…
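The geometric picture in the abstract (vertices of the cube intersected with random half-spaces) can be enumerated directly for tiny n; a brute-force sketch with hypothetical, made-up parameters:

```python
import random
from itertools import product

def count_solutions(n, m, rng):
    """Count vertices of {-1,+1}^n lying in all m random Gaussian
    half-spaces {x : <g_a, x> >= 0}."""
    G = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(m)]
    return sum(
        1
        for x in product((-1, 1), repeat=n)
        if all(sum(g[i] * x[i] for i in range(n)) >= 0 for g in G)
    )

# With a shared seed, the 2 constraints below are a prefix of the 8,
# so each added half-space can only shrink the solution set.
few = count_solutions(10, 2, random.Random(1))
many = count_solutions(10, 8, random.Random(1))
print(few, many)
```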

Why Aren’t Network Statistics Accompanied By Uncertainty Statements?

E18-304, United States

Abstract: Over 500K scientific articles have been published since 1999 with the word “network” in the title. And the vast majority of these report network summary statistics of one type or another. However, these numbers are rarely accompanied by any quantification of uncertainty. Yet any error inherent in the measurements underlying the construction of the…

Univariate total variation denoising, trend filtering and multivariate Hardy-Krause variation denoising

E18-304, United States

Abstract: Total variation denoising (TVD) is a popular technique for nonparametric function estimation. I will first present a theoretical optimality result for univariate TVD for estimating piecewise constant functions. I will then present related results for various extensions of univariate TVD including adaptive risk bounds for higher-order TVD (also known as trend filtering) as well…
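The univariate TVD objective referred to here, F(θ) = ½‖y − θ‖² + λ Σᵢ |θᵢ₊₁ − θᵢ|, can be written down directly. A minimal sketch, with a made-up signal and candidate fits for illustration only; no solver is implemented:

```python
def tvd_objective(y, theta, lam):
    """F(theta) = 0.5 * ||y - theta||^2 + lam * sum_i |theta_{i+1} - theta_i|."""
    fit = 0.5 * sum((yi - ti) ** 2 for yi, ti in zip(y, theta))
    tv = sum(abs(theta[i + 1] - theta[i]) for i in range(len(theta) - 1))
    return fit + lam * tv

y = [0.1, -0.2, 0.05, 1.1, 0.9, 1.05]       # noisy two-level signal
piecewise = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]  # one jump, small TV penalty

# A piecewise-constant fit beats interpolating the noise once the
# TV penalty is active.
print(tvd_objective(y, piecewise, lam=0.5) < tvd_objective(y, y, lam=0.5))  # → True
```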


© MIT Institute for Data, Systems, and Society | 77 Massachusetts Avenue | Cambridge, MA 02139-4307 | 617-253-1764