BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IDSS STAGE - ECPv6.15.11//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:IDSS STAGE
X-ORIGINAL-URL:https://idss-stage.mit.edu
X-WR-CALDESC:Events for IDSS STAGE
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20160313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20161106T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20170312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20171105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20180311T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20181104T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20190310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20191103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180406T080000
DTEND;TZID=America/New_York:20180408T170000
DTSTAMP:20260405T123412Z
CREATED:20180223T172129Z
LAST-MODIFIED:20180223T172514Z
UID:7426-1523001600-1523206800@idss-stage.mit.edu
SUMMARY:MIT Policy Hackathon: Data to Decisions
DESCRIPTION:
URL:https://idss-stage.mit.edu/calendar/mit-policy-hackathon-data-to-decisions/
CATEGORIES:Conferences and Workshops
ATTACH;FMTTYPE=image/png:https://idss-stage.mit.edu/wp-content/uploads/2018/02/Screen-Shot-2018-02-16-at-9.26.50-AM.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180403T160000
DTEND;TZID=America/New_York:20180403T170000
DTSTAMP:20260405T123412Z
CREATED:20171228T155630Z
LAST-MODIFIED:20180405T154714Z
UID:7191-1522771200-1522774800@idss-stage.mit.edu
SUMMARY:Computational Social Science: Exciting Progress and Future Challenges
DESCRIPTION:Abstract\nThe past 15 years have witnessed a remarkable increase in both the scale and scope of social and behavioral data available to researchers\, leading some to herald the emergence of a new field: “computational social science.” In this talk I highlight two areas of research that would not have been possible just a handful of years ago: first\, using “big data” to study social contagion on networks; and second\, using virtual labs to extend the scale\, duration\, and complexity of traditional lab experiments. Although these examples were all motivated by substantive problems of longstanding interest to social science\, they also illustrate how new classes of data can cast these problems in new light. At the same time\, they illustrate some important limitations faced by our existing data generating platforms. I then conclude with some thoughts on how computational social science might overcome some of these obstacles to progress. \nBio\nDuncan Watts is a principal researcher at Microsoft Research and a founding member of the MSR-NYC lab. He is also an A.D. White Professor-at-Large at Cornell University. Prior to joining MSR in 2012\, he was a professor of Sociology at Columbia University from 2000 to 2007\, and then a principal research scientist at Yahoo! Research\, where he directed the Human Social Dynamics group. His research on social networks and collective dynamics has appeared in a wide range of journals\, from Nature\, Science\, and Physical Review Letters to the American Journal of Sociology and Harvard Business Review\, and has been recognized by the 2009 German Physical Society Young Scientist Award for Socio- and Econophysics\, the 2013 Lagrange-CRT Foundation Prize for Complexity Science\, and the 2014 Everett M. Rogers Award. He is also the author of three books: Six Degrees: The Science of a Connected Age (W.W. Norton\, 2003)\, Small Worlds: The Dynamics of Networks between Order and Randomness (Princeton University Press\, 1999)\, and most recently Everything Is Obvious: Once You Know the Answer (Crown Business\, 2011). Watts holds a B.Sc. in Physics from the Australian Defence Force Academy\, from which he also received his officer’s commission in the Royal Australian Navy\, and a Ph.D. in Theoretical and Applied Mechanics from Cornell University.
URL:https://idss-stage.mit.edu/calendar/idss-distinguished-seminar-duncan-watts-microsoft-research-nyc/
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:IDSS Distinguished Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://idss-stage.mit.edu/wp-content/uploads/2017/10/IMG_1788.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180323T110000
DTEND;TZID=America/New_York:20180323T120000
DTSTAMP:20260405T123412Z
CREATED:20180205T145624Z
LAST-MODIFIED:20180205T145624Z
UID:7346-1521802800-1521806400@idss-stage.mit.edu
SUMMARY:Statistical theory for deep neural networks with ReLU activation function
DESCRIPTION:Abstract: The universal approximation theorem states that neural networks are capable of approximating any continuous function up to a small error that depends on the size of the network. The expressive power of a network does not\, however\, guarantee that deep networks perform well on data. For that\, control of the statistical estimation risk is needed. In the talk\, we derive statistical theory for fitting deep neural networks to data generated from the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with ReLU activation function and properly chosen network architecture achieve the minimax rates of convergence (up to logarithmic factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constraints such as (generalized) additive models. While there is a lot of flexibility in the network architecture\, the tuning parameter is the sparsity of the network. Specifically\, we consider large networks in which the number of potential parameters is much larger than the sample size. Interestingly\, the depth (number of layers) of the neural network architecture plays an important role\, and our theory suggests that scaling the network depth with the logarithm of the sample size is natural.
URL:https://idss-stage.mit.edu/calendar/statistical-theory-for-deep-neural-networks-with-relu-activation-function/
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180320T150000
DTEND;TZID=America/New_York:20180320T160000
DTSTAMP:20260405T123412Z
CREATED:20180223T172446Z
LAST-MODIFIED:20180223T172446Z
UID:7435-1521558000-1521561600@idss-stage.mit.edu
SUMMARY:LIDS Seminar Series - Lizhong Zheng
DESCRIPTION:
URL:https://idss-stage.mit.edu/calendar/lids-seminar-series-lizhong-zheng/
LOCATION:32-141\, United States
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180316T110000
DTEND;TZID=America/New_York:20180316T120000
DTSTAMP:20260405T123412Z
CREATED:20180302T201932Z
LAST-MODIFIED:20180302T201932Z
UID:7461-1521198000-1521201600@idss-stage.mit.edu
SUMMARY:When Inference is Tractable
DESCRIPTION:Abstract:\nA key capability of artificial intelligence will be the ability to reason about abstract concepts and draw inferences. Where data is limited\, probabilistic inference in graphical models provides a powerful framework for performing such reasoning\, and can even be used as modules within deep architectures. But when is probabilistic inference computationally tractable? I will present recent theoretical results that substantially broaden the class of provably tractable models by exploiting model stability (Lang\, Sontag\, Vijayaraghavan\, AISTATS ’18)\, structure in model parameters (Weller\, Rowland\, Sontag\, AISTATS ’16)\, and reinterpreting inference as ground truth recovery (Globerson\, Roughgarden\, Sontag\, Yildirim\, ICML ’15). \nBio:\nDavid Sontag is an Assistant Professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT\, and a member of the Institute for Medical Engineering and Science and the Computer Science and Artificial Intelligence Laboratory (CSAIL). Prior to joining MIT\, Dr. Sontag was an Assistant Professor in Computer Science and Data Science at New York University from 2011 to 2016\, and a postdoctoral researcher at Microsoft Research New England. Dr. Sontag received the Sprowls award for outstanding doctoral thesis in Computer Science at MIT in 2010; best paper awards at the conferences Empirical Methods in Natural Language Processing (EMNLP)\, Uncertainty in Artificial Intelligence (UAI)\, and Neural Information Processing Systems (NIPS); faculty awards from Google\, Facebook\, and Adobe; and a National Science Foundation Early Career Award. Dr. Sontag received a B.A. from the University of California\, Berkeley.
URL:https://idss-stage.mit.edu/calendar/when-inference-is-tractable/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180313T150000
DTEND;TZID=America/New_York:20180313T160000
DTSTAMP:20260405T123412Z
CREATED:20180223T172349Z
LAST-MODIFIED:20180223T172349Z
UID:7433-1520953200-1520956800@idss-stage.mit.edu
SUMMARY:The Power of Multiple Samples in Generative Adversarial Networks
DESCRIPTION:Abstract \nWe bring the tools from Blackwell’s seminal 1953 result on comparing two stochastic experiments to shine a new light on a modern application of great interest: Generative Adversarial Networks (GANs). Binary hypothesis testing is at the center of training GANs\, where a trained neural network (called a critic) determines whether a given sample is from the real data or the generated (fake) data. By jointly training the generator and the critic\, the hope is that eventually\, the trained generator will generate realistic samples. One of the major challenges in GANs is known as “mode collapse”: the lack of diversity in the samples generated by the trained generators. We propose a new training framework\, where the critic is fed with multiple samples jointly (which we call packing)\, as opposed to each sample separately as done in standard GAN training. With this simple but fundamental departure from existing GANs\, experimental results show that the diversity of the generated samples improves significantly. We analyze this practical gain by first providing a formal mathematical definition of mode collapse and making a fundamental connection between the idea of packing and the intensity of mode collapse. Precisely\, we show that the packed critic naturally penalizes mode collapse\, thus encouraging generators with less mode collapse. The analyses critically rely on operational interpretation of hypothesis testing and corresponding data processing inequalities\, which lead to sharp analyses with simple proofs. For this talk\, Prof. Sewoong Oh will assume no prior background on GANs. \nBiography \nSewoong Oh is an Assistant Professor of Industrial and Enterprise Systems Engineering at UIUC. He received his Ph.D. from the Department of Electrical Engineering at Stanford University. Following his Ph.D.\, he worked as a postdoctoral researcher at the Laboratory for Information and Decision Systems (LIDS) at MIT. His research interest is in theoretical machine learning\, including spectral methods\, ranking\, crowdsourcing\, estimation of information measures\, differential privacy\, and generative adversarial networks. He was co-awarded the best paper award at SIGMETRICS in 2015\, the NSF CAREER award in 2016\, and a Google Faculty Research Award.
URL:https://idss-stage.mit.edu/calendar/the-power-of-multiple-samples-in-generative-adversarial-networks/
LOCATION:32-141\, United States
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180309T110000
DTEND;TZID=America/New_York:20180309T120000
DTSTAMP:20260405T123412Z
CREATED:20171215T165643Z
LAST-MODIFIED:20180305T132412Z
UID:7157-1520593200-1520596800@idss-stage.mit.edu
SUMMARY:Statistical estimation under group actions: The Sample Complexity of Multi-Reference Alignment
DESCRIPTION:Abstract: \nMany problems in signal/image processing and computer vision amount to estimating a signal\, image\, or tri-dimensional structure/scene from corrupted measurements. A particularly challenging form of measurement corruption is latent transformations of the underlying signal to be recovered. Many such transformations can be described as a group acting on the object to be recovered. Examples include the Simultaneous Localization and Mapping (SLAM) problem in Robotics and Computer Vision\, where pictures of a scene are obtained from different positions and orientations; Cryo-Electron Microscopy (Cryo-EM) imaging\, where projections of a molecule density are taken from unknown rotations; and several others. \nOne fundamental example of this type of problem is Multi-Reference Alignment: given a group acting on a space\, the goal is to estimate an orbit of the group action from noisy samples. For example\, in one of its simplest forms\, one is tasked with estimating a signal from noisy cyclically shifted copies. We will show that the number of observations needed by any method has a surprising dependency on the signal-to-noise ratio (SNR) and on algebraic properties of the underlying group action. Remarkably\, in some important cases\, this sample complexity is achieved with computationally efficient methods based on computing invariants under the group of transformations.
URL:https://idss-stage.mit.edu/calendar/stochastics-and-statistics-seminar-6/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20180305
DTEND;VALUE=DATE:20180306
DTSTAMP:20260405T123412Z
CREATED:20171025T183342Z
LAST-MODIFIED:20180501T145941Z
UID:6750-1520208000-1520294399@idss-stage.mit.edu
SUMMARY:Women in Data Science (WiDS) - Cambridge\, MA
DESCRIPTION:The global Women in Data Science (WiDS) Conference aims to inspire and educate data scientists\, regardless of gender\, and support women in the field. This one-day technical conference provides an opportunity to hear about the latest data science-related research in a number of domains\, learn how leading-edge companies are leveraging data science for success\, and connect with potential mentors\, collaborators\, and others in the field. Free and open to the public.
URL:https://idss-stage.mit.edu/calendar/women-in-data-science-wids-cambridge-ma/
LOCATION:Microsoft NERD Center\, 1 Memorial Drive\, Suite 100\, Cambridge\, MA\, 02142\, United States
CATEGORIES:Conferences and Workshops
ATTACH;FMTTYPE=image/png:https://idss-stage.mit.edu/wp-content/uploads/2017/10/Screen-Shot-2018-01-10-at-1.28.44-PM.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180302T110000
DTEND;TZID=America/New_York:20180302T120000
DTSTAMP:20260405T123412Z
CREATED:20171215T165516Z
LAST-MODIFIED:20180214T152856Z
UID:7155-1519988400-1519992000@idss-stage.mit.edu
SUMMARY:One and two sided composite-composite tests in Gaussian mixture models
DESCRIPTION:Abstract: Finding an efficient test for a testing problem is often linked to the problem of estimating a given function of the data. When this function is not smooth\, it is necessary to approximate it cleverly in order to build good tests.\nIn this talk\, we will discuss two specific testing problems in Gaussian mixture models. In both\, the aim is to test the proportion of null means. The aforementioned link between sharp approximation rates of non-smooth objects and minimax testing rates is particularly well illustrated by these problems. \n(Based on joint works with Nicolas Verzelen\, Etienne Roquain\, and Sylvain Delattre.) \nBiography: Alexandra Carpentier has held\, since October 2017\, the chair of Mathematical Statistics and Machine Learning in the Institut für Mathematische Stochastik (IMST)\, Fakultät für Mathematik (FMA)\, at the Otto-von-Guericke-Universität Magdeburg. Prior to that\, she led the DFG Emmy Noether group MuSyAD on theoretical anomaly detection at the Universität Potsdam between 2015 and 2017\, and was a research associate at the StatsLab at the University of Cambridge between 2012 and 2015\, working with Richard Nickl. She finished her PhD in 2012 at INRIA Lille Nord-Europe under the supervision of Rémi Munos\, on the topic of bandit theory. Her research interests are in machine learning and mathematical statistics\, with an emphasis on composite testing problems\, adaptive inference in high and infinite dimension\, and sequential learning (e.g. bandit theory).
URL:https://idss-stage.mit.edu/calendar/stochastics-and-statistics-seminar-5/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180227T150000
DTEND;TZID=America/New_York:20180227T160000
DTSTAMP:20260405T123412Z
CREATED:20180223T172101Z
LAST-MODIFIED:20180223T172101Z
UID:7430-1519743600-1519747200@idss-stage.mit.edu
SUMMARY:Safe Learning in Robotics
DESCRIPTION:Abstract \nA great deal of research in recent years has focused on robot learning. In many applications\, guarantees that specifications are satisfied throughout the learning process are paramount. For the safety specification\, we present a controller synthesis technique based on the computation of reachable sets using optimal control. We show recent results in system decomposition to speed up this computation\, and how offline computation may be used in online applications. We then present a method combining reachability with machine learning\, which uses approximate knowledge of the dynamics to provide a least-restrictive\, safety-preserving control law that intervenes only when the computed safety guarantees require it\, or when confidence in the computed guarantee decays in light of new observations. We will illustrate these methods on a quadrotor UAV experimental platform which we have at Berkeley. \nBiography \nClaire Tomlin is the Charles A. Desoer Professor of Engineering in EECS at Berkeley. She was an Assistant\, Associate\, and Full Professor in Aeronautics and Astronautics at Stanford from 1998 to 2007\, and joined Berkeley in 2005. Claire works in the area of control theory and hybrid systems\, with applications to air traffic management\, UAV systems\, energy\, robotics\, and systems biology. She is a MacArthur Foundation Fellow (2006)\, an IEEE Fellow (2010)\, and in 2017 was awarded the IEEE Transportation Technologies Award.
URL:https://idss-stage.mit.edu/calendar/safe-learning-in-robotics/
LOCATION:32-141\, United States
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180226T160000
DTEND;TZID=America/New_York:20180226T170000
DTSTAMP:20260405T123412Z
CREATED:20180209T173751Z
LAST-MODIFIED:20180209T174117Z
UID:7358-1519660800-1519664400@idss-stage.mit.edu
SUMMARY:Provably Secure Machine Learning
DESCRIPTION:Abstract: The widespread use of machine learning systems creates a new class of computer security vulnerabilities where\, rather than attacking the integrity of the software itself\, malicious actors exploit the statistical nature of the learning algorithms. For instance\, attackers can add fake data (e.g. by creating fake user accounts)\, or strategically manipulate inputs to the system once it is deployed. \nSo far\, attempts to defend against these attacks have focused on empirical performance against known sets of attacks. I will argue that this is a fundamentally inadequate paradigm for achieving meaningful security guarantees. Instead\, we need algorithms that are provably secure by design\, in line with best practices for traditional computer security. \nTo achieve this goal\, we take inspiration from robust statistics and robust optimization\, but with an eye towards the security requirements of modern machine learning systems. Motivated by the trend towards models with thousands or millions of features\, we investigate the robustness of learning algorithms in high dimensions. We show that most algorithms are brittle to even small fractions of adversarial data\, and then develop new algorithms that are provably robust. Additionally\, to accommodate the increasing use of deep learning\, we develop an algorithm for certifiably robust optimization of non-convex models such as neural networks. \nBiography: Jacob Steinhardt is a graduate student in artificial intelligence at Stanford University working with Percy Liang. His main research interest is in designing machine learning algorithms with the reliability properties of good software. So far this has led to the study of provably secure machine learning systems\, as well as the design of learning algorithms that can detect their own failures and generalize predictably in new situations.
Outside of research\, Jacob is a technical advisor to the Open Philanthropy Project\, and mentors gifted high school students through the USACO and SPARC summer programs.
URL:https://idss-stage.mit.edu/calendar/provably-secure-machine-learning/
LOCATION:32-G449 (Kiva/Patel)
CATEGORIES:IDSS Special Seminars
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180223T110000
DTEND;TZID=America/New_York:20180223T120000
DTSTAMP:20260405T123412Z
CREATED:20171215T164243Z
LAST-MODIFIED:20180123T190050Z
UID:7152-1519383600-1519387200@idss-stage.mit.edu
SUMMARY:Optimization's Implicit Gift to Learning: Understanding Optimization Bias as a Key to Generalization
DESCRIPTION:Abstract: \nIt is becoming increasingly clear that the implicit regularization afforded by optimization algorithms plays a central role in machine learning\, especially when using large\, deep neural networks. We have a good understanding of the implicit regularization afforded by stochastic approximation algorithms\, such as SGD\, and as I will review\, we understand and can characterize the implicit bias of different algorithms\, and can design algorithms with specific biases. But in this talk I will focus on the implicit biases of deterministic algorithms on underdetermined problems. In an effort to uncover the implicit biases of gradient-based optimization of neural networks\, which holds the key to their empirical success\, I will discuss recent work on implicit regularization for matrix factorization and for linearly separable problems with monotone decreasing loss functions. \nBio: \nProfessor Nati Srebro obtained his PhD at the Massachusetts Institute of Technology (MIT) in 2004\, held a post-doctoral fellowship with the Machine Learning Group at the University of Toronto\, and was a Visiting Scientist at IBM Haifa Research Labs. Since January 2006\, he has been on the faculty of the Toyota Technological Institute at Chicago (TTIC) and the University of Chicago\, and has also served as the first Director of Graduate Studies at TTIC. From 2013 to 2014 he was an associate professor at the Technion – Israel Institute of Technology. Prof. Srebro’s research encompasses methodological\, statistical\, and computational aspects of Machine Learning\, as well as related problems in Optimization. Some of Prof. Srebro’s significant contributions include work on learning “wider” Markov networks\, including introducing the use of the nuclear norm for machine learning and matrix reconstruction\, work on fast optimization techniques for machine learning\, and work on the relationship between learning and optimization.
URL:https://idss-stage.mit.edu/calendar/stochastics-and-statistics-seminar-4/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180220T150000
DTEND;TZID=America/New_York:20180220T160000
DTSTAMP:20260405T123412Z
CREATED:20180223T171917Z
LAST-MODIFIED:20180223T171917Z
UID:7427-1519138800-1519142400@idss-stage.mit.edu
SUMMARY:Submodular Optimization: From Discrete to Continuous and Back
DESCRIPTION:Abstract \nMany procedures in statistics and artificial intelligence require solving non-convex problems. Historically\, the focus has been to convexify the non-convex objectives. In recent years\, however\, there has been significant progress to optimize non-convex functions directly. This direct approach has led to provably good guarantees for specific problem instances such as latent variable models\, non-negative matrix factorization\, robust PCA\, matrix completion\, etc. Unfortunately\, there is no free lunch and it is well known that in general finding the global optimum of a non-convex optimization problem is NP-hard. This computational barrier has mainly shifted the goal of non-convex optimization towards two directions: a) finding an approximate local minimum by avoiding saddle points or b) characterizing general conditions under which the underlying non-convex optimization is tractable. \nIn this talk\, I will consider a broad class of non-convex optimization problems that possess special combinatorial structures. More specifically\, I will focus on maximization of stochastic continuous submodular functions that demonstrate diminishing returns. Despite the apparent lack of convexity in such functions\, we will see that first order methods can indeed provide strong approximation guarantees. In particular\, for monotone and continuous submodular functions\, we will show that projected stochastic gradient methods achieve a ½ approximation ratio. We then see how we can reach the tight (1-1/e) approximation guarantee by developing a new class of stochastic projection-free gradient methods. A simple variant of these algorithms also achieves a (1/e) approximation ratio in the non-monotone case. Finally\, by using stochastic continuous optimization as an interface\, we will also provide tight approximation guarantees for maximizing a (monotone or non-monotone) stochastic submodular set function subject to a general matroid constraint. 
\nIn this talk\, I will not assume any particular background on submodularity or optimization and will try to motivate and define all the necessary concepts. \nBiography \nAmin Karbasi is an assistant professor in the School of Engineering and Applied Science (SEAS) at Yale University\, where he leads the Inference\, Information\, and Decision (I.I.D.) Systems Group. Prior to that he was a post-doctoral scholar at ETH Zurich\, Switzerland (2013-2014). He obtained his Ph.D. (2012) and M.Sc. (2007) in computer and communication sciences from EPFL\, Switzerland and his B.Sc. (2004) in electrical engineering from the same university.
URL:https://idss-stage.mit.edu/calendar/submodular-optimization-from-discrete-to-continuous-and-back/
LOCATION:34-101
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180216T110000
DTEND;TZID=America/New_York:20180216T120000
DTSTAMP:20260405T123412Z
CREATED:20171207T154519Z
LAST-MODIFIED:20180118T181839Z
UID:7109-1518778800-1518782400@idss-stage.mit.edu
SUMMARY:User-friendly guarantees for the Langevin Monte Carlo
DESCRIPTION:Abstract: \nIn this talk\, I will revisit the recently established theoretical guarantees for the convergence of the Langevin Monte Carlo algorithm for sampling from a smooth and (strongly) log-concave density. I will discuss the existing results when the accuracy of sampling is measured in the Wasserstein distance and provide further insights on relations between\, on the one hand\, the Langevin Monte Carlo for sampling and\, on the other hand\, the gradient descent for optimization. I will also present non-asymptotic guarantees for the accuracy of a version of the Langevin Monte Carlo algorithm that is based on inaccurate evaluations of the gradient. Finally\, I will propose a variable-step version of the Langevin Monte Carlo algorithm that has two advantages. First\, its step-sizes are independent of the target accuracy\, and second\, its rate provides a logarithmic improvement over the constant-step Langevin Monte Carlo algorithm.\nThis is joint work with A. Karagulyan.
URL:https://idss-stage.mit.edu/calendar/stochastics-and-statistics-seminar-arnak-dalalyan-enseacrest/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180213T150000
DTEND;TZID=America/New_York:20180213T160000
DTSTAMP:20260405T123412Z
CREATED:20180223T171528Z
LAST-MODIFIED:20180223T171528Z
UID:7424-1518534000-1518537600@idss-stage.mit.edu
SUMMARY:Supervisory Control of Discrete Event Systems: A Retrospective and Two Recent Results on Security and Privacy
DESCRIPTION:Abstract \nLafortune will begin with a brief retrospective of the theory of supervisory control of discrete event systems\, initiated in the seminal work of Ramadge & Wonham over 30 years ago\, and compare it with recent work in formal methods in control. He will then present results from his group on two problems: (i) sensor deception attacks in the supervisory control layer of a cyber-physical system; and (ii) obfuscation of system secrets by insertion of fictitious events in the output stream of the system. In each case\, he will describe the group’s solution procedure\, which is based on synthesizing a discrete game structure that embeds all valid solutions. \nBiography \nStéphane Lafortune is a professor in the Department of Electrical Engineering and Computer Science at the University of Michigan\, Ann Arbor\, USA. He obtained his degrees from École Polytechnique de Montréal (B.Eng)\, McGill University (M.Eng)\, and the University of California at Berkeley (PhD)\, all in electrical engineering. He is a Fellow of IEEE (1999) and of IFAC (2017). \nLafortune’s research interests are in discrete event systems and include multiple problem domains: modeling\, diagnosis\, control\, optimization\, and applications to computer and software systems. He co-authored\, with C. Cassandras\, the textbook Introduction to Discrete Event Systems (2nd Edition\, Springer\, 2008). He has served as Editor-in-Chief of the journal Discrete Event Dynamic Systems: Theory and Applications since 2015.
URL:https://idss-stage.mit.edu/calendar/supervisory-control-of-discrete-event-systems-a-retrospective-and-two-recent-results-on-security-and-privacy/
LOCATION:32-141\, United States
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180209T110000
DTEND;TZID=America/New_York:20180209T120000
DTSTAMP:20260405T123412Z
CREATED:20171207T154146Z
LAST-MODIFIED:20180119T204343Z
UID:7106-1518174000-1518177600@idss-stage.mit.edu
SUMMARY:Variable selection using presence-only data with applications to biochemistry
DESCRIPTION:Abstract: \nIn a number of problems\, we are presented with positive and unlabelled data\, referred to as presence-only responses. The application I present today involves studying the relationship between protein sequence and function\, and presence-only data arises since for many experiments it is impossible to obtain a large set of negative (non-functional) sequences. Furthermore\, if the number of variables is large and the goal is variable selection (as in this case)\, a number of statistical and computational challenges arise due to the non-convexity of the objective. In this talk\, I present an algorithm (PUlasso) with provable guarantees for doing variable selection and classification with presence-only data. Our algorithm uses the majorization-minimization (MM) framework\, which is a generalization of the well-known expectation-maximization (EM) algorithm. In particular\, to make our algorithm scalable\, it incorporates two computational speed-ups to the standard EM algorithm. I provide a theoretical guarantee where we first show that our algorithm is guaranteed to converge to a stationary point\, and then prove that any stationary point achieves the minimax optimal mean-squared error of s log p/n\, where s is the sparsity of the true parameter. I also demonstrate through simulations that our algorithm outperforms state-of-the-art algorithms in moderate p settings in terms of classification performance. Finally\, I demonstrate that our PUlasso algorithm performs well on a biochemistry example.
URL:https://idss-stage.mit.edu/calendar/stochastic-and-statistics-seminar-garvesh-raskutti-univ-of-wisconsin/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180206T160000
DTEND;TZID=America/New_York:20180206T170000
DTSTAMP:20260405T123412
CREATED:20171228T155151Z
LAST-MODIFIED:20180226T211720Z
UID:7189-1517932800-1517936400@idss-stage.mit.edu
SUMMARY:Machine Learning and Causal Inference
DESCRIPTION:Abstract: \nThis talk will review a series of recent papers that develop new methods based on machine learning to approach problems of causal inference\, including estimation of conditional average treatment effects and personalized treatment assignment policies. Approaches for randomized experiments\, environments with unconfoundedness\, instrumental variables\, and panel data will be considered. \nBio: \nSusan Athey is The Economics of Technology Professor at Stanford Graduate School of Business. She received her bachelor’s degree from Duke University and her Ph.D. from Stanford\, and she holds an honorary doctorate from Duke University. She previously taught in the economics departments at MIT\, Stanford and Harvard. In 2007\, Professor Athey received the John Bates Clark Medal\, awarded by the American Economic Association to “that American economist under the age of forty who is adjudged to have made the most significant contribution to economic thought and knowledge.” She was elected to the National Academy of Sciences in 2012 and to the American Academy of Arts and Sciences in 2008. Professor Athey’s research focuses on marketplace design and the intersection of computer science\, machine learning and economics.
URL:https://idss-stage.mit.edu/calendar/idss-distinguished-seminar-susan-athey-stanford-university/
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:IDSS Distinguished Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180205T080000
DTEND;TZID=America/New_York:20180205T080000
DTSTAMP:20260405T123412
CREATED:20180119T150243Z
LAST-MODIFIED:20180119T203527Z
UID:7280-1517817600-1517817600@idss-stage.mit.edu
SUMMARY:Data Science and Big Data Analytics:  Making Data-Driven Decisions
DESCRIPTION:
URL:https://idss-stage.mit.edu/calendar/data-science-and-big-data-analytics-making-data-driven-decisions/
LOCATION:online
CATEGORIES:Online events
ATTACH;FMTTYPE=image/png:https://idss-stage.mit.edu/wp-content/uploads/2017/12/Screen-Shot-2017-12-06-at-4.36.51-PM.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180202T110000
DTEND;TZID=America/New_York:20180202T120000
DTSTAMP:20260405T123412
CREATED:20171228T200551Z
LAST-MODIFIED:20180123T191117Z
UID:7195-1517569200-1517572800@idss-stage.mit.edu
SUMMARY:Connections between structured estimation and weak submodularity
DESCRIPTION:Abstract:  Many modern statistical estimation problems rely on imposing additional structure in order to reduce the statistical complexity and provide interpretability. Unfortunately\, these structures are often combinatorial in nature and result in computationally challenging problems. In parallel\, the combinatorial optimization community has placed significant effort in developing algorithms that can approximately solve such optimization problems in a computationally efficient manner. The focus of this talk is to expand upon ideas that arise in combinatorial optimization and connect those algorithms and ideas to statistical questions. We will discuss three main vignettes: cardinality-constrained optimization; low-rank matrix estimation problems; and greedy estimation of sparse Fourier components. \nBio:  Professor Negahban is currently an Assistant Professor in the Department of Statistics at Yale University.  Prior to that\, he worked with Professor Devavrat Shah at MIT as a postdoc and with Professor Martin J. Wainwright at UC Berkeley as a graduate student.
URL:https://idss-stage.mit.edu/calendar/stochastics-and-statistics-seminar-7/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171212T163000
DTEND;TZID=America/New_York:20171212T173000
DTSTAMP:20260405T123412
CREATED:20171010T165615Z
LAST-MODIFIED:20171227T201302Z
UID:6594-1513096200-1513099800@idss-stage.mit.edu
SUMMARY:IDSS Distinguished Seminar - Essential Concepts of Causal Inference:  A Remarkable History
DESCRIPTION: \nAbstract \nI believe that a deep understanding of cause and effect\, and how to estimate causal effects from data\, complete with the associated mathematical notation and expressions\, only evolved in the twentieth century. The crucial idea of randomized experiments was apparently first proposed in 1925 in the context of agricultural field trials but quickly moved to be applied also in studies of animal breeding and then in industrial manufacturing. The conceptual understanding\, to me at least\, was tied to ideas that were developing in quantum mechanics. The key ideas of randomized experiments evidently were not applied to studies of human beings until the 1950s\, when such experiments began to be used in controlled medical trials\, and then in social science\, in education and economics. Humans are more complex than plants and animals\, however\, and with such trials came the attendant complexities of non-compliance with assigned treatment and the occurrence of Hawthorne and placebo effects. The formal application of the insights from earlier\, simpler experimental settings to more complex ones dealing with people started in the 1970s and continues to this day\, and includes the bridging of classical mathematical ideas of experimentation\, including fractional replication and geometrical formulations from the early twentieth century\, with modern ideas that rely on powerful computing to implement many of the tedious aspects of design and analysis. \nBio \nDonald B. Rubin is John L. Loeb Professor of Statistics\, Harvard University\, where he has been professor since 1983\, and Department Chair for 13 of those years. 
He has been elected to be a Fellow/Member/Honorary Member of: the Woodrow Wilson Society\, Guggenheim Memorial Foundation\, Alexander von Humboldt Foundation\, American Statistical Association\, Institute of Mathematical Statistics\, International Statistical Institute\, American Association for the Advancement of Science\, American Academy of Arts and Sciences\, European Association of Methodology\, the British Academy\, and the U.S. National Academy of Sciences. As of 2017\, he has authored/coauthored over 400 publications (including ten books)\, has four joint patents\, and for many years has been one of the most highly cited authors in the world\, with currently over 200\,000 citations and nearly 20\,000 in 2016 alone (Google Scholar). He has received honorary doctorate degrees from Otto Friedrich University\, Bamberg\, Germany; the University of Ljubljana\, Slovenia; Universidad Santo Tomás\, Bogotá\, Colombia; Uppsala University\, Sweden; and Northwestern University\, Evanston\, Illinois. He has also received honorary professorships from the University of Utrecht\, The Netherlands; Shanghai Finance University\, China; Nanjing University of Science & Technology\, China; Xi’an University of Technology\, China; and University of the Free State\, Republic of South Africa.
URL:https://idss-stage.mit.edu/calendar/idss-distinguished-seminar-series-donald-rubin-harvard-university/
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:IDSS Distinguished Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171116T170000
DTEND;TZID=America/New_York:20171116T180000
DTSTAMP:20260405T123412
CREATED:20171031T202358Z
LAST-MODIFIED:20171120T192118Z
UID:6839-1510851600-1510855200@idss-stage.mit.edu
SUMMARY:SES PhD Admissions Webinar
DESCRIPTION:Wherever you are in the world\, learn about admissions to the Social and Engineering Systems Doctoral Program online\, from an IDSS faculty member. The presentation will be followed by a Q&A. \nRegister for the event here. \n(If you are having trouble hearing the webinar\, try the WebEx help page.)
URL:https://idss-stage.mit.edu/calendar/ses-admissions-webinar/
ORGANIZER;CN="SES":MAILTO:idss_academic_office@mit.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171205T160000
DTEND;TZID=America/New_York:20171205T170000
DTSTAMP:20260405T123412
CREATED:20171002T160334Z
LAST-MODIFIED:20190501T144332Z
UID:6543-1512489600-1512493200@idss-stage.mit.edu
SUMMARY:Regularized Nonlinear Acceleration
DESCRIPTION:We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed via a simple linear system\, whose solution can be updated online. This acceleration scheme runs in parallel to the base algorithm\, providing improved estimates of the solution on the fly\, while the original optimization method is running. Numerical experiments are detailed on classical classification problems. \nBio: After dual PhDs from Ecole Polytechnique and Stanford University in optimisation and finance\, followed by a postdoc at U.C. Berkeley\, Alexandre d’Aspremont joined the faculty at Princeton University as an assistant and then associate professor\, with joint appointments in the ORFE department and the Bendheim Center for Finance. He returned to Europe in 2011 thanks to a grant from the European Research Council and is now a research director at CNRS\, attached to Ecole Normale Supérieure in Paris. His research focuses on convex optimization and applications to machine learning\, statistics and finance. \n____________________________________ \nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing. 
URL:https://lids.mit.edu/news-and-events/events/regularized-nonlinear-acceleration
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171201T110000
DTEND;TZID=America/New_York:20171201T120000
DTSTAMP:20260405T123412
CREATED:20171120T201126Z
LAST-MODIFIED:20180801T185333Z
UID:7017-1512126000-1512129600@idss-stage.mit.edu
SUMMARY:Challenges in Developing Learning Algorithms to Personalize Treatment in Real Time
DESCRIPTION:Abstract: \nA formidable challenge in designing sequential treatments is to determine when and in which context it is best to deliver treatments. Consider treatment for individuals struggling with chronic health conditions. Operationally\, designing the sequential treatments involves the construction of decision rules that input the current context of an individual and output a recommended treatment. That is\, the treatment is adapted to the individual’s context; the context may include current health status\, current level of social support and current level of adherence\, for example. Data sets on individuals with records of time-varying context and treatment delivery can be used to inform the construction of the decision rules. There is much interest in personalizing the decision rules\, particularly in real time as the individual experiences sequences of treatment. Here we discuss our work in designing online “bandit” learning algorithms for use in personalizing mobile health interventions. \nBiography: \nSusan A. Murphy is Professor of Statistics\, Radcliffe Alumnae Professor at the Radcliffe Institute\, and Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences\, all at Harvard University. Her lab focuses on improving sequential\, individualized decision making in health\, in particular on clinical trial design and data analysis to inform the development of just-in-time adaptive interventions in mobile health. The lab’s work is funded by the National Institute on Drug Abuse\, the National Institute on Alcohol Abuse and Alcoholism\, the National Heart\, Lung\, and Blood Institute\, and the National Institute of Biomedical Imaging and Bioengineering.
Susan is a Fellow of the Institute of Mathematical Statistics\, a Fellow of the College on Problems in Drug Dependence\, a former editor of the Annals of Statistics\, a member of the US National Academy of Sciences\, a member of the US National Academy of Medicine and a 2013 MacArthur Fellow.
URL:https://idss-stage.mit.edu/calendar/challenges-in-developing-learning-algorithms-to-personalize-treatment-in-real-time/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171129T160000
DTEND;TZID=America/New_York:20171129T170000
DTSTAMP:20260405T123412
CREATED:20171002T155836Z
LAST-MODIFIED:20190501T144513Z
UID:6541-1511971200-1511974800@idss-stage.mit.edu
SUMMARY:Comparison Lemmas\, Non-Smooth Convex Optimization and Structured Signal Recovery
DESCRIPTION:In the past couple of decades\, non-smooth convex optimization has emerged as a powerful tool for the recovery of structured signals (sparse\, low rank\, finite constellation\, etc.) from possibly noisy measurements in a variety of applications in statistics\, signal processing and machine learning. While the algorithms (basis pursuit\, LASSO\, etc.) are often fairly well established\, rigorous frameworks for the exact analysis of the performance of such methods are only just emerging. The talk will introduce and describe a fairly general theory for how to determine the performance (minimum number of measurements\, mean-square-error\, probability-of-error\, etc.) of such methods for various measurement ensembles (Gaussian\, Haar\, etc.). The framework enables one to assess the performance of these methods before actual implementation and allows one to optimally choose parameters such as regularizer coefficients\, number of measurements\, etc. The theory subsumes earlier results as special cases. It builds on an inconspicuous 1962 lemma of Slepian (for comparing Gaussian processes)\, as well as on a non-trivial generalization due to Gordon in 1988\, and produces concepts from convex geometry (such as Gaussian widths and Moreau envelopes) in a very natural way. The talk will also consider extensions to certain non-Gaussian settings and their applications in massive MIMO\, one-bit compressed sensing\, graphical LASSO and phase retrieval. \n\n\nBio: Babak Hassibi is the inaugural Mose and Lillian S. Bohn Professor of Electrical Engineering at the California Institute of Technology\, where he has been since 2001. From 2011 to 2016 he was the Gordon M. Binder/Amgen Professor of Electrical Engineering\, and during 2008-2015 he was Executive Officer of Electrical Engineering as well as Associate Director of Information Science and Technology. 
Prior to Caltech\, he was a Member of the Technical Staff in the Mathematical Sciences Research Center at Bell Laboratories\, Murray Hill\, NJ. He obtained his PhD degree from Stanford University in 1996 and his BS degree from the University of Tehran in 1989. His research interests span various aspects of information theory\, communications\, signal processing\, control and machine learning. He is an ISI highly cited author in Computer Science and\, among other awards\, is the recipient of the US Presidential Early Career Award for Scientists and Engineers (PECASE) and the David and Lucille Packard Fellowship in Science and Engineering. \n\n____________________________________ \nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing. 
URL:https://lids.mit.edu/news-and-events/events/babak-hassibi-california-institute-technology
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171117T110000
DTEND;TZID=America/New_York:20171117T120000
DTSTAMP:20260405T123412
CREATED:20171120T205246Z
LAST-MODIFIED:20180801T184930Z
UID:7021-1510916400-1510920000@idss-stage.mit.edu
SUMMARY:Generative Models and Compressed Sensing
DESCRIPTION:Abstract:  \nThe goal of compressed sensing is to estimate a vector from an under-determined system of noisy linear measurements\, by making use of prior knowledge in the relevant domain. For most results in the literature\, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead\, we assume that the unknown vectors lie near the range of a generative model\, e.g. a GAN or a VAE. We show how the problems of image inpainting and super-resolution are special cases of our general framework.  \nWe show how to generalize the RIP condition for generative models\, and that random Gaussian measurement matrices have this property with high probability. A Lipschitz condition on the generative neural network is the key technical issue for our results.  \nTime permitting\, we will discuss follow-up work on how GANs can model causal structure in high-dimensional probability distributions.  (Based on joint works with Ashish Bora\, Ajil Jalal\, Murat Kocaoglu\, Christopher Snyder and Eric Price) \nCode: https://github.com/AshishBora/csgm \nHomepage: users.ece.utexas.edu/~dimakis \nBiography:   \nAlex Dimakis is an Associate Professor in the ECE department\, University of Texas at Austin. He received his Ph.D. in 2008 from UC Berkeley working with Martin Wainwright and Kannan Ramchandran. He received an NSF CAREER award\, a Google faculty research award and the Eli Jury dissertation award. He is the co-recipient of several best paper awards\, including the joint Information Theory and Communications Society Best Paper Award in 2012. He is currently serving as an associate editor for IEEE Transactions on Information Theory. His research interests include information theory\, coding theory and machine learning.
URL:https://idss-stage.mit.edu/calendar/generative-models-and-compressed-sensing/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171114T160000
DTEND;TZID=America/New_York:20171114T170000
DTSTAMP:20260405T123412
CREATED:20171002T155150Z
LAST-MODIFIED:20190501T144705Z
UID:6538-1510675200-1510678800@idss-stage.mit.edu
SUMMARY:Quantum Limits on the Information Carried by Electromagnetic Radiation
DESCRIPTION:In many practical applications information is conveyed by means of electromagnetic radiation and a natural question concerns the fundamental limits of this process. Identifying information with entropy\, one can ask about the maximum amount of entropy associated to the propagating wave. \nThe standard statistical physics approach to compute entropy is to take the logarithm of the number of possible energy states of a system. Since any continuum field can assume an uncountably infinite number of energy configurations\, the approach underlying any finite entropy calculation must also necessarily include some grouping of states together in a procedure known as coarse-graining or\, in information-theoretic parlance\, signal quantization. The problem then reduces to counting the eigenstates of the Hamiltonian of the quantum wave field. \nIn this talk\, we examine the relationship between entropy computations in a statistical physics and an information-theory context. In the latter context\, rather than attempting to directly count the number of energy eigenstates of the quantum wave field\, we constrain the geometry of the signal space and decompose the waveform into a minimum number of orthogonal basis modes. We then ask how many bits are required to represent any waveform in the space spanned by this optimal representation with a minimum quantized energy error. We show that for scalar quantization this entropy computation is completely analogous to the one for the number state channel of statistical physics\, and it has the attractive feature that the complexity of state counting is now replaced by the geometric problem of optimally covering the signal space by high-dimensional boxes\, whose size is lower bounded by quantum constraints. For bandlimited radiation in a three-dimensional space\, using this approach we can recover the Bekenstein entropy bound on the largest amount of information that can be radiated from a sphere of given radius. 
We also compare results with black body radiation occurring over an infinite spectrum of frequencies\, and along the way we provide some new results on the asymptotic dimensionality and ε-entropy of bandlimited\, square-integrable signals. \n\n\nBio: Massimo Franceschetti received the Laurea degree (with highest honors) in computer engineering from the University of Naples\, Naples\, Italy\, in 1997\, and the M.S. and Ph.D. degrees in electrical engineering from the California Institute of Technology\, Pasadena\, CA\, in 1999 and 2003\, respectively. He is Professor of Electrical and Computer Engineering at the University of California at San Diego (UCSD). Before joining UCSD\, he was a postdoctoral scholar at the University of California at Berkeley for two years. His research interests are in the physical and information-based foundations of communication and control systems. He was awarded the C. H. Wilts Prize in 2003 for the best doctoral thesis in electrical engineering at Caltech\, the S.A. Schelkunoff Award in 2005 for the best paper in the IEEE TRANSACTIONS ON ANTENNAS AND PROPAGATION\, a National Science Foundation (NSF) CAREER award in 2006\, an Office of Naval Research (ONR) Young Investigator Award in 2007\, the IEEE Communications Society Best Tutorial Paper Award in 2010\, and the IEEE Control Theory Society Ruberti Young Researcher Award in 2012. \n\n____________________________________ \nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing. 
URL:https://lids.mit.edu/news-and-events/events/quantum-limits-information-carried-electromagnetic-radiation
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171107T163000
DTEND;TZID=America/New_York:20171107T173000
DTSTAMP:20260405T123412
CREATED:20171010T145759Z
LAST-MODIFIED:20171207T153512Z
UID:6591-1510072200-1510075800@idss-stage.mit.edu
SUMMARY:Social Network Experiments - Nicholas Christakis (Yale University)
DESCRIPTION:Abstract \nHuman beings choose their friends\, and often their neighbors and co-workers\, and we inherit our relatives; and each of the people to whom we are connected also does the same\, such that\, in the end\, we humans assemble ourselves into face-to-face social networks with particular structures. Why do we do this? And how might an understanding of human social network structure and function be used to intervene in the world to make it better? Here\, I review recent research from our lab describing several classes of interventions involving both offline and online networks that can help make the world better\, including: (1) interventions that rewire the connections between people\, and (2) interventions that manipulate social contagion\, facilitating the flow of desirable properties within groups. I will illustrate what can be done using a variety of experiments in settings ranging from fostering cooperation in networked groups online\, to fostering health behavior change in developing-world villages\, to facilitating the diffusion of innovation or coordination in groups. I will also focus on our recent experiments with “heterogeneous systems” involving both humans and “dumb AI” bots interacting in small groups. By taking account of people’s structural embeddedness in social networks\, and by understanding social influence\, it is possible to intervene in social systems to enhance desirable population-level properties as diverse as health\, wealth\, cooperation\, coordination\, and learning. \n  \nBiography \nNicholas A. Christakis\, MD\, PhD\, MPH\, is a social scientist and physician who conducts research in the area of biosocial science\, investigating the biological predicates and consequences of social phenomena. 
He directs the Human Nature Lab at Yale University\, where he is appointed as the Sol Goldman Family Professor of Social and Natural Science\, with appointments in the Departments of Sociology\, Medicine\, Ecology and Evolutionary Biology\, and Biomedical Engineering. He is the Co-Director of the Yale Institute for Network Science. \nPrior to moving his lab to Yale in 2013\, Dr. Christakis had been Professor of Sociology and Professor of Medicine at Harvard University since 2001. Prior to that\, he served in the same capacities at the University of Chicago.
URL:https://idss-stage.mit.edu/calendar/idss-distinguished-series-seminar-nicholas-christalkis-yale-university/
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:IDSS Distinguished Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171106T160000
DTEND;TZID=America/New_York:20171106T180000
DTSTAMP:20260405T123412
CREATED:20171031T190817Z
LAST-MODIFIED:20171106T170223Z
UID:6833-1509984000-1509991200@idss-stage.mit.edu
SUMMARY:SES Admissions Info Session
DESCRIPTION:Join us for pizza and an Admissions Information Session on the Social and Engineering Systems Doctoral Program.
URL:https://idss-stage.mit.edu/calendar/ses-admissions-info-session/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
ATTACH;FMTTYPE=image/png:https://idss-stage.mit.edu/wp-content/uploads/2017/10/2017-Infinite-Display-Info-Session-Ad.png
ORGANIZER;CN="SES":MAILTO:idss_academic_office@mit.edu
END:VEVENT
END:VCALENDAR