
Today
  • The Maps Inside Your Head

    MIT Building 32, Room 141 The Stata Center (32-141), 32 Vassar Street, Cambridge, MA, United States

    How do our brains make sense of a complex and unpredictable world? In this talk, I will discuss an information theory approach to the neural topography of information processing in the brain. First I will review the brain's architecture, and how neural circuits map out the sensory and cognitive worlds. Then I will describe how…

  • Regularized Nonlinear Acceleration

    MIT Building 32, Room 141 The Stata Center (32-141), 32 Vassar Street, Cambridge, MA, United States

    We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed via a simple linear system, whose solution can be updated online. This acceleration scheme runs in parallel to the… (A code sketch of this idea appears after the event list.)

  • Stochastics and Statistics Seminar – Amit Daniely (Google)

    MIT Building E18, Room 304 Ford Building (E18), 50 Ames Street, Cambridge, MA, United States

    Abstract: Can learning theory, as we know it today, form a theoretical basis for neural networks? I will try to discuss this question in light of two new results — one positive and one negative. Based on joint work with Roy Frostig, Vineet Gupta and Yoram Singer, and with Vitaly Feldman.

    Biography: Amit Daniely is…

  • Structure, Randomness and Universality

    32-G449 (Kiva), United States

    What is the minimum possible number of vertices of a graph that contains every k-vertex graph as an induced subgraph? What is the minimum possible number of edges in a graph that contains every k-vertex graph with maximum degree 3 as a subgraph? These questions and related ones were initiated by Rado in the 60s,… (A code sketch of the first question appears after the event list.)

  • Unbiased Markov chain Monte Carlo with couplings

    MIT Building E18, Room 304 Ford Building (E18), 50 Ames Street, Cambridge, MA, United States

    Abstract: Markov chain Monte Carlo methods provide consistent approximations of integrals as the number of iterations goes to infinity. However, these estimators are generally biased after any fixed number of iterations, which complicates parallel computation. In this talk I will explain how to remove this burn-in bias by using couplings of Markov chains and a… (A code sketch of this idea appears after the event list.)

  • Statistics, Computation and Learning with Graph Neural Networks

    Abstract: Deep learning, thanks mostly to convolutional architectures, has recently transformed computer vision and speech recognition. The ability of these architectures to encode geometric stability priors, while offering enough expressive power, is at the core of their success. In such settings, geometric stability is expressed in terms of local deformations, and it is enforced thanks to localized convolutional… (A code sketch of this idea appears after the event list.)
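
In the spirit of the "Regularized Nonlinear Acceleration" abstract above, the following is a minimal Python sketch of the idea, not the speakers' implementation: combine the last iterates of any optimizer with weights obtained from a small regularized linear system built on the successive residuals. The function name, the regularization constant, and the toy quadratic problem are illustrative assumptions.

    import numpy as np

    def rna_extrapolate(xs, lam=1e-8):
        """Nonlinear average of the iterates xs = [x_0, ..., x_k] (1-D arrays)."""
        X = np.stack(xs)                      # (k+1, d)
        R = X[1:] - X[:-1]                    # residuals r_i = x_{i+1} - x_i
        G = R @ R.T                           # k x k Gram matrix of the residuals
        reg = lam * np.linalg.norm(G, 2)      # regularization scaled to the problem
        z = np.linalg.solve(G + reg * np.eye(len(G)), np.ones(len(G)))
        c = z / z.sum()                       # weights summing to one
        return c @ X[1:]                      # extrapolated estimate of the optimum

    # Toy usage: accelerate plain gradient descent on a strongly convex quadratic.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 20)); A = A.T @ A + np.eye(20)
    b = rng.standard_normal(20)
    x_star = np.linalg.solve(A, b)

    x, step, history = np.zeros(20), 1.0 / np.linalg.norm(A, 2), []
    for _ in range(10):
        history.append(x.copy())
        x = x - step * (A @ x - b)            # gradient step for f(x) = 0.5 x'Ax - b'x
    history.append(x.copy())

    print(np.linalg.norm(x - x_star))                         # error of last plain iterate
    print(np.linalg.norm(rna_extrapolate(history) - x_star))  # error of extrapolated point

Because the extrapolation only reads the iterates, it can run alongside the base method, as the abstract notes.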

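A brute-force illustration of the first question in the "Structure, Randomness and Universality" abstract, feasible only for very small k: find the smallest n such that some n-vertex graph contains every k-vertex graph as an induced subgraph. The exhaustive search below is only meant to make the question concrete, and all function names are hypothetical.

    from itertools import combinations, permutations

    def all_graphs(n):
        """Yield every labeled graph on {0,...,n-1} as a frozenset of edges."""
        possible = list(combinations(range(n), 2))
        for mask in range(1 << len(possible)):
            yield frozenset(e for i, e in enumerate(possible) if mask >> i & 1)

    def relabel(edges, mapping):
        return frozenset((min(mapping[a], mapping[b]), max(mapping[a], mapping[b]))
                         for a, b in edges)

    def isomorphic(e1, e2, k):
        """Brute-force isomorphism test for two graphs on k (tiny) vertices."""
        return len(e1) == len(e2) and any(relabel(e1, p) == e2
                                          for p in permutations(range(k)))

    def nonisomorphic_graphs(k):
        """Representatives of all k-vertex graphs up to isomorphism."""
        reps = []
        for g in all_graphs(k):
            if not any(isomorphic(g, r, k) for r in reps):
                reps.append(g)
        return reps

    def contains_all_induced(host, n, k, targets):
        """Does the n-vertex host contain every target as an induced subgraph?"""
        for t in targets:
            for verts in combinations(range(n), k):
                idx = {v: i for i, v in enumerate(verts)}
                sub = relabel((e for e in host if e[0] in idx and e[1] in idx), idx)
                if isomorphic(sub, t, k):
                    break
            else:
                return False
        return True

    def min_induced_universal(k):
        """Smallest n for which some n-vertex graph is induced-universal for k-vertex graphs."""
        targets = nonisomorphic_graphs(k)
        n = k
        while True:
            if any(contains_all_induced(g, n, k, targets) for g in all_graphs(n)):
                return n
            n += 1

    print(min_induced_universal(3))   # the search should finish quickly for k = 3

The search grows far too fast for larger k; the talk concerns the asymptotic behaviour of these quantities, which this toy check does not address.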

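For the "Unbiased Markov chain Monte Carlo with couplings" abstract, here is a minimal sketch of the general construction the abstract points to: run two coupled Metropolis-Hastings chains, one started a step ahead of the other, until they meet, and add the telescoping differences accumulated before the meeting time to the usual estimator. The target, the test function, the proposal scale, and the use of a maximal coupling of Gaussian proposals are illustrative assumptions rather than the talk's specific construction.

    import numpy as np

    rng = np.random.default_rng(1)
    sigma = 1.0                               # random-walk proposal standard deviation

    def log_pi(x):                            # unnormalized log target (standard normal)
        return -0.5 * x ** 2

    def h(x):                                 # test function whose expectation we estimate
        return x ** 2

    def proposal_logpdf(x, mu):
        return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

    def maximal_coupling(mu_x, mu_y):
        """Draw a pair from a maximal coupling of N(mu_x, sigma^2) and N(mu_y, sigma^2)."""
        xp = rng.normal(mu_x, sigma)
        if np.log(rng.uniform()) + proposal_logpdf(xp, mu_x) <= proposal_logpdf(xp, mu_y):
            return xp, xp                     # the two proposals coincide
        while True:
            yp = rng.normal(mu_y, sigma)
            if np.log(rng.uniform()) + proposal_logpdf(yp, mu_y) > proposal_logpdf(yp, mu_x):
                return xp, yp

    def mh_step(x):
        """One ordinary Metropolis-Hastings step."""
        xp = rng.normal(x, sigma)
        return xp if np.log(rng.uniform()) <= log_pi(xp) - log_pi(x) else x

    def coupled_mh_step(x, y):
        """One coupled step; once the chains meet they move together forever."""
        xp, yp = maximal_coupling(x, y)
        log_u = np.log(rng.uniform())         # common uniform for both accept decisions
        x_new = xp if log_u <= log_pi(xp) - log_pi(x) else x
        y_new = yp if log_u <= log_pi(yp) - log_pi(y) else y
        return x_new, y_new

    def unbiased_estimate(max_iter=10_000):
        """H = h(X_0) + sum_{t=1}^{tau-1} (h(X_t) - h(Y_{t-1})), tau = first t with X_t = Y_{t-1}."""
        x = rng.normal(4.0, 1.0)              # X_0 from an over-dispersed initial distribution
        y = rng.normal(4.0, 1.0)              # Y_0 from the same initial distribution
        estimate = h(x)
        x = mh_step(x)                        # the X chain runs one step ahead of the Y chain
        for _ in range(max_iter):
            if x == y:                        # meeting time reached: no further correction
                break
            estimate += h(x) - h(y)           # telescoping bias-correction term
            x, y = coupled_mh_step(x, y)
        return estimate

    # Each replicate is unbiased, so their average approximates E_pi[h] = 1 here.
    print(np.mean([unbiased_estimate() for _ in range(500)]))

Because each replicate is independent and needs no burn-in, the replicates can be computed in parallel, which is the motivation mentioned in the abstract.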
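For the "Statistics, Computation and Learning with Graph Neural Networks" abstract, the sketch below shows one localized graph-convolution layer of the kind the abstract alludes to: each node's features are averaged over its neighbourhood with symmetric normalization and then passed through a shared linear map and a nonlinearity. The normalization scheme, the weights, and the toy graph are assumptions made for illustration.

    import numpy as np

    def graph_convolution(A, H, W):
        """One localized graph-convolution layer: neighbourhood averaging, shared
        linear map, ReLU. The filter is local and shared across nodes, which is
        what ties the layer to the geometric stability discussed in the abstract."""
        A_hat = A + np.eye(A.shape[0])                  # add self-loops
        d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
        return np.maximum(0.0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)

    # Toy usage on a 4-node path graph with 3 input features and 2 output features.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    rng = np.random.default_rng(0)
    H = rng.standard_normal((4, 3))                     # node features
    W = rng.standard_normal((3, 2))                     # shared weight matrix
    print(graph_convolution(A, H, W).shape)             # (4, 2): new node embeddings

Stacking several such layers lets information propagate over larger neighbourhoods while keeping every operation local to the graph.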