BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IDSS STAGE - ECPv6.15.11//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:IDSS STAGE
X-ORIGINAL-URL:https://idss-stage.mit.edu
X-WR-CALDESC:Events for IDSS STAGE
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20160313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20161106T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20170312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20171105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20180311T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20181104T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20190310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20191103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20160101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180226T160000
DTEND;TZID=America/New_York:20180226T170000
DTSTAMP:20260407T180103Z
CREATED:20180209T173751Z
LAST-MODIFIED:20180209T174117Z
UID:7358-1519660800-1519664400@idss-stage.mit.edu
SUMMARY:Provably Secure Machine Learning
DESCRIPTION:Abstract:  The widespread use of machine learning systems creates a new class of computer security vulnerabilities where\, rather than attacking the integrity of the software itself\, malicious actors exploit the statistical nature of the learning algorithms. For instance\, attackers can add fake data (e.g. by creating fake user accounts)\, or strategically manipulate inputs to the system once it is deployed. \nSo far\, attempts to defend against these attacks have focused on empirical performance against known sets of attacks. I will argue that this is a fundamentally inadequate paradigm for achieving meaningful security guarantees. Instead\, we need algorithms that are provably secure by design\, in line with best practices for traditional computer security. \nTo achieve this goal\, we take inspiration from robust statistics and robust optimization\, but with an eye towards the security requirements of modern machine learning systems. Motivated by the trend towards models with thousands or millions of features\, we investigate the robustness of learning algorithms in high dimensions. We show that most algorithms are brittle to even small fractions of adversarial data\, and then develop new algorithms that are provably robust. Additionally\, to accommodate the increasing use of deep learning\, we develop an algorithm for certifiably robust optimization of non-convex models such as neural networks. \nBiography:   Jacob Steinhardt is a graduate student in artificial intelligence at Stanford University working with Percy Liang.   His main research interest is in designing machine learning algorithms with the reliability properties of good software. So far this has led to the study of provably secure machine learning systems\, as well as the design of learning algorithms that can detect their own failures and generalize predictably in new situations. 
Outside of research\, Jacob is a technical advisor to the Open Philanthropy Project\, and mentors gifted high school students through the USACO and SPARC summer programs.
URL:https://idss-stage.mit.edu/calendar/provably-secure-machine-learning/
LOCATION:32-G449 (Kiva/Patel)
CATEGORIES:IDSS Special Seminars
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180223T110000
DTEND;TZID=America/New_York:20180223T120000
DTSTAMP:20260407T180103Z
CREATED:20171215T164243Z
LAST-MODIFIED:20180123T190050Z
UID:7152-1519383600-1519387200@idss-stage.mit.edu
SUMMARY:Optimization's Implicit Gift to Learning: Understanding Optimization Bias as a Key to Generalization
DESCRIPTION:Abstract: \nIt is becoming increasingly clear that implicit regularization\nafforded by the optimization algorithms plays a central role in machine\nlearning\, and especially so when using large\, deep neural\nnetworks. We have a good understanding of the implicit regularization\nafforded by stochastic approximation algorithms\, such as SGD\, and as I\nwill review\, we understand and can characterize the implicit bias of\ndifferent algorithms\, and can design algorithms with specific\nbiases. But in this talk I will focus on implicit biases of\ndeterministic algorithms on underdetermined problems. In an effort to\nuncover the implicit biases of gradient-based optimization of neural\nnetworks\, which holds the key to their empirical success\, I will\ndiscuss recent work on implicit regularization for matrix\nfactorization and for linearly separable problems with monotone\ndecreasing loss functions. \nBio: \nProfessor Nati Srebro obtained his PhD at the Massachusetts Institute\nof Technology (MIT) in 2004\, held a post-doctoral fellowship with the\nMachine Learning Group at the University of Toronto\, and was a\nVisiting Scientist at IBM Haifa Research Labs. Since January 2006\, he\nhas been on the faculty of the Toyota Technological Institute at\nChicago (TTIC) and the University of Chicago\, and has also served as\nthe first Director of Graduate Studies at TTIC. From 2013 to 2014 he\nwas associate professor at the Technion-Israel Institute of\nTechnology. Prof. Srebro’s research encompasses methodological\,\nstatistical and computational aspects of Machine Learning\, as well as\nrelated problems in Optimization. Some of Prof. Srebro’s significant\ncontributions include work on learning “wider” Markov networks\,\nincluding introducing the use of the nuclear norm for machine learning\nand matrix reconstruction and work on fast optimization techniques for\nmachine learning\, and on the relationship between learning and\noptimization.
URL:https://idss-stage.mit.edu/calendar/stochastics-and-statistics-seminar-4/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180220T150000
DTEND;TZID=America/New_York:20180220T160000
DTSTAMP:20260407T180103Z
CREATED:20180223T171917Z
LAST-MODIFIED:20180223T171917Z
UID:7427-1519138800-1519142400@idss-stage.mit.edu
SUMMARY:Submodular Optimization: From Discrete to Continuous and Back
DESCRIPTION:Abstract \nMany procedures in statistics and artificial intelligence require solving non-convex problems. Historically\, the focus has been to convexify the non-convex objectives. In recent years\, however\, there has been significant progress to optimize non-convex functions directly. This direct approach has led to provably good guarantees for specific problem instances such as latent variable models\, non-negative matrix factorization\, robust PCA\, matrix completion\, etc. Unfortunately\, there is no free lunch and it is well known that in general finding the global optimum of a non-convex optimization problem is NP-hard. This computational barrier has mainly shifted the goal of non-convex optimization towards two directions: a) finding an approximate local minimum by avoiding saddle points or b) characterizing general conditions under which the underlying non-convex optimization is tractable. \nIn this talk\, I will consider a broad class of non-convex optimization problems that possess special combinatorial structures. More specifically\, I will focus on maximization of stochastic continuous submodular functions that demonstrate diminishing returns. Despite the apparent lack of convexity in such functions\, we will see that first order methods can indeed provide strong approximation guarantees. In particular\, for monotone and continuous submodular functions\, we will show that projected stochastic gradient methods achieve a ½ approximation ratio. We then see how we can reach the tight (1-1/e) approximation guarantee by developing a new class of stochastic projection-free gradient methods. A simple variant of these algorithms also achieves a (1/e) approximation ratio in the non-monotone case. Finally\, by using stochastic continuous optimization as an interface\, we will also provide tight approximation guarantees for maximizing a (monotone or non-monotone) stochastic submodular set function subject to a general matroid constraint. 
\nIn this talk\, I will not assume any particular background on submodularity or optimization and will try to motivate and define all the necessary concepts. \nBiography \nAmin Karbasi is an assistant professor in the School of Engineering and Applied Science (SEAS) at Yale University\, where he leads the Inference\, Information\, and Decision (I.I.D.) Systems Group. Prior to that he was a post-doctoral scholar at ETH Zurich\, Switzerland (2013-2014). He obtained his Ph.D. (2012) and M.Sc. (2007) in computer and communication sciences from EPFL\, Switzerland and his B.Sc. (2004) in electrical engineering from the same university.
URL:https://idss-stage.mit.edu/calendar/submodular-optimization-from-discrete-to-continuous-and-back/
LOCATION:34-101
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180216T110000
DTEND;TZID=America/New_York:20180216T120000
DTSTAMP:20260407T180103Z
CREATED:20171207T154519Z
LAST-MODIFIED:20180118T181839Z
UID:7109-1518778800-1518782400@idss-stage.mit.edu
SUMMARY:User-friendly guarantees for the Langevin Monte Carlo
DESCRIPTION:Abstract:  \nIn this talk\, I will revisit the recently established theoretical guarantees for the convergence of the Langevin Monte Carlo algorithm for sampling from a smooth and (strongly) log-concave density. I will discuss the existing results when the accuracy of sampling is measured in the Wasserstein distance and provide further insights on relations between\, on the one hand\, the Langevin Monte Carlo for sampling and\, on the other hand\, the gradient descent for optimization. I will also present non-asymptotic guarantees for the accuracy of a version of the Langevin Monte Carlo algorithm that is based on inaccurate evaluations of the gradient. Finally\, I will propose a variable-step version of the Langevin Monte Carlo algorithm that has two advantages. First\, its step-sizes are independent of the target accuracy and\, second\, its rate provides a logarithmic improvement over the constant-step Langevin Monte Carlo algorithm.\nThis is joint work with A. Karagulyan.
URL:https://idss-stage.mit.edu/calendar/stochastics-and-statistics-seminar-arnak-dalalyan-enseacrest/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180213T150000
DTEND;TZID=America/New_York:20180213T160000
DTSTAMP:20260407T180103Z
CREATED:20180223T171528Z
LAST-MODIFIED:20180223T171528Z
UID:7424-1518534000-1518537600@idss-stage.mit.edu
SUMMARY:Supervisory Control of Discrete Event Systems: A Retrospective and Two Recent Results on Security and Privacy
DESCRIPTION:Abstract \nLafortune will begin with a brief retrospective of the theory of supervisory control of discrete event systems\, initiated in the seminal work of Ramadge & Wonham over 30 years ago\, and compare it with recent work in formal methods in control. He will then present results from his group on two problems: (i) sensor deception attacks in the supervisory control layer of a cyber-physical system; and (ii) obfuscation of system secrets by insertion of fictitious events in the output stream of the system. In each case\, he will describe the group’s solution procedure\, which is based on synthesizing a discrete game structure that embeds all valid solutions. \nBiography \nStéphane Lafortune is a professor in the Department of Electrical Engineering and Computer Science at the University of Michigan\, Ann Arbor\, USA. He obtained his degrees from École Polytechnique de Montréal (B.Eng)\, McGill University (M.Eng)\, and the University of California at Berkeley (PhD)\, all in electrical engineering. He is a Fellow of IEEE (1999) and of IFAC (2017). \nLafortune’s research interests are in discrete event systems and include multiple problem domains: modeling\, diagnosis\, control\, optimization\, and applications to computer and software systems. He co-authored\, with C. Cassandras\, the textbook Introduction to Discrete Event Systems (2nd Edition\, Springer\, 2008). He has served as Editor-in-Chief of the journal Discrete Event Dynamic Systems: Theory and Applications since 2015.
URL:https://idss-stage.mit.edu/calendar/supervisory-control-of-discrete-event-systems-a-retrospective-and-two-recent-results-on-security-and-privacy/
LOCATION:32-141
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180209T110000
DTEND;TZID=America/New_York:20180209T120000
DTSTAMP:20260407T180103Z
CREATED:20171207T154146Z
LAST-MODIFIED:20180119T204343Z
UID:7106-1518174000-1518177600@idss-stage.mit.edu
SUMMARY:Variable selection using presence-only data with applications to biochemistry
DESCRIPTION:Abstract: \nIn a number of problems\, we are presented with positive and unlabelled data\, referred to as presence-only responses. The application I present today involves studying the relationship between protein sequence and function\, and presence-only data arises since for many experiments it is impossible to obtain a large set of negative (non-functional) sequences. Furthermore\, if the number of variables is large and the goal is variable selection (as in this case)\, a number of statistical and computational challenges arise due to the non-convexity of the objective. In this talk\, I present an algorithm (PUlasso) with provable guarantees for doing variable selection and classification with presence-only data. Our algorithm uses the majorization-minimization (MM) framework\, which is a generalization of the well-known expectation-maximization (EM) algorithm. In particular\, to make it scalable\, our algorithm incorporates two computational speed-ups over the standard EM algorithm. I provide a theoretical guarantee where we first show that our algorithm is guaranteed to converge to a stationary point\, and then prove that any stationary point achieves the minimax optimal mean-squared error of s log p/n\, where s is the sparsity of the true parameter. I also demonstrate through simulations that our algorithm outperforms state-of-the-art algorithms in moderate p settings in terms of classification performance. Finally\, I demonstrate that our PUlasso algorithm performs well on a biochemistry example.
URL:https://idss-stage.mit.edu/calendar/stochastic-and-statistics-seminar-garvesh-raskutti-univ-of-wisconsin/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180206T160000
DTEND;TZID=America/New_York:20180206T170000
DTSTAMP:20260407T180103Z
CREATED:20171228T155151Z
LAST-MODIFIED:20180226T211720Z
UID:7189-1517932800-1517936400@idss-stage.mit.edu
SUMMARY:Machine Learning and Causal Inference
DESCRIPTION:Abstract: \nThis talk will review a series of recent papers that develop new methods based on machine learning methods to approach problems of causal inference\, including estimation of conditional average treatment effects and personalized treatment assignment policies. Approaches for randomized experiments\, environments with unconfoundedness\, instrumental variables\, and panel data will be considered. \nBio: \nSusan Athey is The Economics of Technology Professor at Stanford Graduate School of Business. She received her bachelor’s degree from Duke University and her Ph.D. from Stanford\, and she holds an honorary doctorate from Duke University. She previously taught at the economics departments at MIT\, Stanford and Harvard. In 2007\, Professor Athey received the John Bates Clark Medal\, awarded by the American Economic Association to “that American economist under the age of forty who is adjudged to have made the most significant contribution to economic thought and knowledge.” She was elected to the National Academy of Science in 2012 and to the American Academy of Arts and Sciences in 2008. Professor Athey’s research focuses on marketplace design and the intersection of computer science\, machine learning and economics.
URL:https://idss-stage.mit.edu/calendar/idss-distinguished-seminar-susan-athey-stanford-university/
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:IDSS Distinguished Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180205T080000
DTEND;TZID=America/New_York:20180205T080000
DTSTAMP:20260407T180103Z
CREATED:20180119T150243Z
LAST-MODIFIED:20180119T203527Z
UID:7280-1517817600-1517817600@idss-stage.mit.edu
SUMMARY:Data Science and Big Data Analytics:  Making Data-Driven Decisions
DESCRIPTION:
URL:https://idss-stage.mit.edu/calendar/data-science-and-big-data-analytics-making-data-driven-decisions/
LOCATION:online
CATEGORIES:Online events
ATTACH;FMTTYPE=image/png:https://idss-stage.mit.edu/wp-content/uploads/2017/12/Screen-Shot-2017-12-06-at-4.36.51-PM.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180202T110000
DTEND;TZID=America/New_York:20180202T120000
DTSTAMP:20260407T180103Z
CREATED:20171228T200551Z
LAST-MODIFIED:20180123T191117Z
UID:7195-1517569200-1517572800@idss-stage.mit.edu
SUMMARY:Connections between structured estimation and weak submodularity
DESCRIPTION:Abstract:  Many modern statistical estimation problems rely on imposing additional structure in order to reduce the statistical complexity and provide interpretability. Unfortunately\, these structures are often combinatorial in nature and result in computationally challenging problems. In parallel\, the combinatorial optimization community has placed significant effort into developing algorithms that can approximately solve such optimization problems in a computationally efficient manner. The focus of this talk is to expand upon ideas that arise in combinatorial optimization and connect those algorithms and ideas to statistical questions. We will discuss three main vignettes: cardinality-constrained optimization; low-rank matrix estimation problems; and greedy estimation of sparse Fourier components. \nBio:  Professor Negahban is currently an Assistant Professor in the Department of Statistics at Yale University.  Prior to that he worked with Professor Devavrat Shah at MIT as a postdoc and Prof. Martin J. Wainwright at UC Berkeley as a graduate student.
URL:https://idss-stage.mit.edu/calendar/stochastics-and-statistics-seminar-7/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171212T163000
DTEND;TZID=America/New_York:20171212T173000
DTSTAMP:20260407T180103Z
CREATED:20171010T165615Z
LAST-MODIFIED:20171227T201302Z
UID:6594-1513096200-1513099800@idss-stage.mit.edu
SUMMARY:IDSS Distinguished Seminar - Essential Concepts of Causal Inference:  A Remarkable History
DESCRIPTION:  \nAbstract \nI believe that a deep understanding of cause and effect\, and how to estimate causal effects from data\, complete with the associated mathematical notation and expressions\, only evolved in the twentieth century. The crucial idea of randomized experiments was apparently first proposed in 1925 in the context of agricultural field trials but quickly moved to be applied also in studies of animal breeding and then in industrial manufacturing. The conceptual understanding\, to me at least\, was tied to ideas that were developing in quantum mechanics. The key ideas of randomized experiments evidently were not applied to studies of human beings until the 1950s\, when such experiments began to be used in controlled medical trials\, and then in social science\, in education and economics. Humans are more complex than plants and animals\, however\, and with such trials came the attendant complexities of non-compliance with assigned treatment and the occurrence of Hawthorne and placebo effects. The formal application of the insights from earlier simpler experimental settings to more complex ones dealing with people started in the 1970s and continues to this day\, and includes the bridging of classical mathematical ideas of experimentation\, including fractional replication and geometrical formulations from the early twentieth century\, with modern ideas that rely on powerful computing to implement many of the tedious aspects of design and analysis. \nBio \nDonald B. Rubin is John L. Loeb Professor of Statistics\, Harvard University\, where he has been professor since 1983\, and Department Chair for 13 of those years. 
He has been elected to be a Fellow/Member/Honorary Member of: the Woodrow Wilson Society\, Guggenheim Memorial Foundation\, Alexander von Humboldt Foundation\, American Statistical Association\, Institute of Mathematical Statistics\, International Statistical Institute\, American Association for the Advancement of Science\, American Academy of Arts and Sciences\, European Association of Methodology\, the British Academy\, and the U.S. National Academy of Sciences. As of 2017\, he has authored/coauthored over 400 publications (including ten books)\, has four joint patents\, and for many years has been one of the most highly cited authors in the world\, with currently over 200\,000 citations and nearly 20\,000 in 2016 alone (Google Scholar). He has received honorary doctorate degrees from Otto Friedrich University\, Bamberg\, Germany; the University of Ljubljana\, Slovenia; Universidad Santo Tomás\, Bogotá\, Colombia; Uppsala University\, Sweden; and Northwestern University\, Evanston\, Illinois. He has also received honorary professorships from the University of Utrecht\, The Netherlands; Shanghai Finance University\, China; Nanjing University of Science & Technology\, China; Xi’an University of Technology\, China; and University of the Free State\, Republic of South Africa.
URL:https://idss-stage.mit.edu/calendar/idss-distinguished-seminar-series-donald-rubin-harvard-university/
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:IDSS Distinguished Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171116T170000
DTEND;TZID=America/New_York:20171116T180000
DTSTAMP:20260407T180103Z
CREATED:20171031T202358Z
LAST-MODIFIED:20171120T192118Z
UID:6839-1510851600-1510855200@idss-stage.mit.edu
SUMMARY:SES PhD Admissions Webinar
DESCRIPTION:Wherever you are in the world\, learn about admissions to the Social and Engineering Systems Doctoral Program online\, from an IDSS faculty member. Presentation will be followed by a Q&A. \nRegister for the event here. \n(If you are having trouble hearing the webinar\, try the WebEx help page.)
URL:https://idss-stage.mit.edu/calendar/ses-admissions-webinar/
ORGANIZER;CN="SES":MAILTO:idss_academic_office@mit.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171205T160000
DTEND;TZID=America/New_York:20171205T170000
DTSTAMP:20260407T180103Z
CREATED:20171002T160334Z
LAST-MODIFIED:20190501T144332Z
UID:6543-1512489600-1512493200@idss-stage.mit.edu
SUMMARY:Regularized Nonlinear Acceleration
DESCRIPTION:We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed via a simple linear system\, whose solution can be updated online. This acceleration scheme runs in parallel to the base algorithm\, providing improved estimates of the solution on the fly\, while the original optimization method is running. Numerical experiments are detailed on classical classification problems. \nBio: After dual PhDs from Ecole Polytechnique and Stanford University in optimisation and finance\, followed by a postdoc at U.C. Berkeley\, Alexandre d’Aspremont joined the faculty at Princeton University as an assistant then associate professor with joint appointments at the ORFE department and the Bendheim Center for Finance. He returned to Europe in 2011 thanks to a grant from the European Research Council and is now a research director at CNRS\, attached to Ecole Normale Supérieure in Paris. His research focuses on convex optimization and applications to machine learning\, statistics and finance. \n____________________________________ \nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing. 
URL:https://lids.mit.edu/news-and-events/events/regularized-nonlinear-acceleration
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171201T110000
DTEND;TZID=America/New_York:20171201T120000
DTSTAMP:20260407T180103Z
CREATED:20171120T201126Z
LAST-MODIFIED:20180801T185333Z
UID:7017-1512126000-1512129600@idss-stage.mit.edu
SUMMARY:Challenges in Developing Learning Algorithms to Personalize Treatment in Real Time
DESCRIPTION:Abstract:  \nA formidable challenge in designing sequential treatments is to  determine when and in which context it is best to deliver treatments.  Consider treatment for individuals struggling with chronic health conditions.  Operationally designing the sequential treatments involves the construction of decision rules that input current context of an individual and output a recommended treatment.   That is\, the treatment is adapted to the individual’s context; the context may include  current health status\, current level of social support and current level of adherence for example.  Data sets on individuals with records of time-varying context and treatment delivery can be used to inform the construction of the decision rules.    There is much interest in personalizing the decision rules\, particularly in real time as the individual experiences sequences of treatment.   Here we discuss our work in designing  online “bandit” learning algorithms for use in personalizing mobile health interventions. \nBiography: \nSusan A. Murphy is Professor of Statistics\, Radcliffe Alumnae Professor at the Radcliffe Institute and Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences\, all at Harvard University. Her lab focuses on  improving sequential\, individualized\, decision making in health\, in particular on clinical trial design and data analysis to inform the development of just-in-time adaptive interventions in mobile health.  The lab’s work is funded by National Institute on Drug Abuse \, by National Institute on Alcohol Abuse and Alcoholism\, by National Heart\, Lung and Blood Institute and by National Institute of Biomedical Imaging and Bioengineering.   
Susan is a Fellow of the Institute of Mathematical Statistics\, a Fellow of the College on Problems in Drug Dependence\, a former editor of the Annals of Statistics\, a member of the US National Academy of Sciences\, a member of the US National Academy of Medicine and a 2013 MacArthur Fellow.
URL:https://idss-stage.mit.edu/calendar/challenges-in-developing-learning-algorithms-to-personalize-treatment-in-real-time/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171129T160000
DTEND;TZID=America/New_York:20171129T170000
DTSTAMP:20260407T180103Z
CREATED:20171002T155836Z
LAST-MODIFIED:20190501T144513Z
UID:6541-1511971200-1511974800@idss-stage.mit.edu
SUMMARY:Comparison Lemmas\, Non-Smooth Convex Optimization and Structured Signal Recovery
DESCRIPTION:In the past couple of decades\, non-smooth convex optimization has emerged as a powerful tool for the recovery of structured signals (sparse\, low rank\, finite constellation\, etc.) from possibly noisy measurements in a variety of applications in statistics\, signal processing and machine learning. While the algorithms (basis pursuit\, LASSO\, etc.) are often fairly well established\, rigorous frameworks for the exact analysis of the performance of such methods are only just emerging. The talk will introduce and describe a fairly general theory for how to determine the performance (minimum number of measurements\, mean-square-error\, probability-of-error\, etc.) of such methods for various measurement ensembles (Gaussian\, Haar\, etc.). The framework enables one to assess the performance of these methods before actual implementation and allows one to optimally choose parameters such as regularizer coefficients\, number of measurements\, etc. The theory subsumes earlier results as special cases. It builds on an inconspicuous 1962 lemma of Slepian (for comparing Gaussian processes)\, as well as on a non-trivial generalization due to Gordon in 1988\, and produces concepts from convex geometry (such as Gaussian widths and Moreau envelopes) in a very natural way. The talk will also consider extensions to certain non-Gaussian settings and their applications in massive MIMO\, one-bit compressed sensing\, graphical LASSO and phase retrieval. \n\n\nBio: Babak Hassibi is the inaugural Mose and Lillian S. Bohn Professor of Electrical Engineering at the California Institute of Technology\, where he has been since 2001. From 2011 to 2016 he was the Gordon M. Binder/Amgen Professor of Electrical Engineering and during 2008-2015 he was Executive Officer of Electrical Engineering\, as well as Associate Director of Information Science and Technology. 
Prior to Caltech\, he was a Member of the Technical Staff in the Mathematical Sciences Research Center at Bell Laboratories\, Murray Hill\, NJ. He obtained his PhD degree from Stanford University in 1996 and his BS degree from the University of Tehran in 1989. His research interests span various aspects of information theory\, communications\, signal processing\, control and machine learning. He is an ISI highly cited author in Computer Science and\, among other awards\, is the recipient of the US Presidential Early Career Award for Scientists and Engineers (PECASE) and the David and Lucille Packard Fellowship in Science and Engineering \n\n____________________________________ \nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing. 
URL:https://lids.mit.edu/news-and-events/events/babak-hassibi-california-institute-technology
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171116T170000
DTEND;TZID=America/New_York:20171116T180000
DTSTAMP:20260407T180103
CREATED:20171031T202358Z
LAST-MODIFIED:20171120T192118Z
UID:6839-1510851600-1510855200@idss-stage.mit.edu
SUMMARY:SES PhD Admissions Webinar
DESCRIPTION:Wherever you are in the world\, learn about admissions to the Social and Engineering Systems Doctoral Program online\, from an IDSS faculty member. Presentation will be followed by a Q&A. \nRegister for the event here. \n(If you are having trouble hearing the webinar\, try the WebEx help page.)
URL:https://idss-stage.mit.edu/calendar/ses-admissions-webinar/
ORGANIZER;CN="SES":MAILTO:idss_academic_office@mit.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171117T110000
DTEND;TZID=America/New_York:20171117T120000
DTSTAMP:20260407T180103
CREATED:20171120T205246Z
LAST-MODIFIED:20180801T184930Z
UID:7021-1510916400-1510920000@idss-stage.mit.edu
SUMMARY:Generative Models and Compressed Sensing
DESCRIPTION:Abstract:  \nThe goal of compressed sensing is to estimate a vector from an under-determined system of noisy linear measurements\, by making use of prior knowledge in the relevant domain. For most results in the literature\, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead\, we assume that the unknown vectors lie near the range of a generative model\, e.g. a GAN or a VAE. We show how the problems of image inpainting and super-resolution are special cases of our general framework.  \nWe show how to generalize the RIP condition to generative models\, and show that random Gaussian measurement matrices have this property with high probability. A Lipschitz condition for the generative neural network is the key technical issue for our results.  \nTime permitting\, we will discuss follow-up work on how GANs can model causal structure in high-dimensional probability distributions.  (Based on joint works with Ashish Bora\, Ajil Jalal\, Murat Kocaoglu\, Christopher Snyder and Eric Price) \nCode: https://github.com/AshishBora/csgm \nHomepage: users.ece.utexas.edu/~dimakis \nBiography:   \nAlex Dimakis is an Associate Professor at the ECE department\, University of Texas at Austin. He received his Ph.D. in 2008 from UC Berkeley working with Martin Wainwright and Kannan Ramchandran. He received an NSF Career award\, a Google faculty research award and the Eli Jury dissertation award. He is the co-recipient of several best paper awards including the joint Information Theory and Communications Society Best Paper Award in 2012. He is currently serving as an associate editor for IEEE Transactions on Information Theory. His research interests include information theory\, coding theory and machine learning.
URL:https://idss-stage.mit.edu/calendar/generative-models-and-compressed-sensing/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171114T160000
DTEND;TZID=America/New_York:20171114T170000
DTSTAMP:20260407T180103
CREATED:20171002T155150Z
LAST-MODIFIED:20190501T144705Z
UID:6538-1510675200-1510678800@idss-stage.mit.edu
SUMMARY:Quantum Limits on the Information Carried by Electromagnetic Radiation
DESCRIPTION:In many practical applications information is conveyed by means of electromagnetic radiation and a natural question concerns the fundamental limits of this process. Identifying information with entropy\, one can ask about the maximum amount of entropy associated to the propagating wave. \nThe standard statistical physics approach to compute entropy is to take the logarithm of the number of possible energy states of a system. Since any continuum field can assume an uncountably infinite number of energy configurations\, the approach underlying any finite entropy calculation must also necessarily include some grouping of states together in a procedure known as coarse-graining or\, in information-theoretic parlance\, signal quantization. The problem then reduces to counting the eigenstates of the Hamiltonian of the quantum wave field. \nIn this talk\, we examine the relationship between entropy computations in a statistical physics and an information-theory context. In the latter context\, rather than attempting to directly count the number of energy eigenstates of the quantum wave field\, we constrain the geometry of the signal space and decompose the waveform into a minimum number of orthogonal basis modes. We then ask how many bits are required to represent any waveform in the space spanned by this optimal representation with a minimum quantized energy error. We show that for scalar quantization this entropy computation is completely analogous to the one for the number state channel of statistical physics\, and it has the attractive feature that the complexity of state counting is now replaced by the geometric problem of optimally covering the signal space by high-dimensional boxes\, whose size is lower bounded by quantum constraints. For bandlimited radiation in a three-dimensional space\, using this approach we can recover the Bekenstein entropy bound on the largest amount of information that can be radiated from a sphere of given radius. 
We also compare results with black body radiation occurring over an infinite spectrum of frequencies and along the way we provide some new results on the asymptotic dimensionality and $\epsilon$-entropy of bandlimited\, square-integrable signals. \n\n\nBio: Massimo Franceschetti received the Laurea degree (with highest honors) in computer engineering from the University of Naples\, Naples\, Italy\, in 1997\, and the M.S. and Ph.D. degrees in electrical engineering from the California Institute of Technology\, Pasadena\, CA\, in 1999 and 2003\, respectively. He is Professor of Electrical and Computer Engineering at the University of California at San Diego (UCSD). Before joining UCSD\, he was a postdoctoral scholar at the University of California at Berkeley for two years. His research interests are in physical and information-based foundations of communication and control systems. He was awarded the C. H. Wilts Prize in 2003 for best doctoral thesis in electrical engineering at Caltech\, the S.A. Schelkunoff Award in 2005 for best paper in the IEEE TRANSACTIONS ON ANTENNAS AND PROPAGATION\, a National Science Foundation (NSF) CAREER award in 2006\, an Office of Naval Research (ONR) Young Investigator Award in 2007\, the IEEE Communications Society Best Tutorial Paper Award in 2010\, and the IEEE Control Systems Society Ruberti Young Researcher Award in 2012. \n\n____________________________________ \nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing. 
URL:https://lids.mit.edu/news-and-events/events/quantum-limits-information-carried-electromagnetic-radiation
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171107T163000
DTEND;TZID=America/New_York:20171107T173000
DTSTAMP:20260407T180103
CREATED:20171010T145759Z
LAST-MODIFIED:20171207T153512Z
UID:6591-1510072200-1510075800@idss-stage.mit.edu
SUMMARY:Social Network Experiments - Nicholas Christakis (Yale University)
DESCRIPTION:Abstract: \nHuman beings choose their friends\, and often their neighbors and co-workers\, and we inherit our relatives; and each of the people to whom we are connected also does the same\, such that\, in the end\, we humans assemble ourselves into face-to-face social networks with particular structures. Why do we do this? And how might an understanding of human social network structure and function be used to intervene in the world to make it better? Here\, I review recent research from our lab describing several classes of interventions involving both offline and online networks that can help make the world better\, including: (1) interventions that rewire the connections between people\, and (2) interventions that manipulate social contagion\, facilitating the flow of desirable properties within groups. I will illustrate what can be done using a variety of experiments in settings as diverse as fostering cooperation in networked groups online\, to fostering health behavior change in developing world villages\, to facilitating the diffusion of innovation or coordination in groups. I will also focus on our recent experiments with “heterogeneous systems” involving both humans and “dumb AI” bots\, interacting in small groups. By taking account of people’s structural embeddedness in social networks\, and by understanding social influence\, it is possible to intervene in social systems to enhance desirable population-level properties as diverse as health\, wealth\, cooperation\, coordination\, and learning. \n  \nBiography: \nNicholas A. Christakis\, MD\, PhD\, MPH\, is a social scientist and physician who conducts research in the area of biosocial science\, investigating the biological predicates and consequences of social phenomena. 
He directs the Human Nature Lab at Yale University\, where he is appointed as the Sol Goldman Family Professor of Social and Natural Science\, with appointments in the Departments of Sociology\, Medicine\, Ecology and Evolutionary Biology\, and Biomedical Engineering. He is the Co-Director of the Yale Institute for Network Science. \nPrior to moving his lab to Yale in 2013\, Dr. Christakis had been Professor of Sociology and Professor of Medicine at Harvard University since 2001. Before that\, he served in the same capacities at the University of Chicago.
URL:https://idss-stage.mit.edu/calendar/idss-distinguished-series-seminar-nicholas-christalkis-yale-university/
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:IDSS Distinguished Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171106T160000
DTEND;TZID=America/New_York:20171106T180000
DTSTAMP:20260407T180103
CREATED:20171031T190817Z
LAST-MODIFIED:20171106T170223Z
UID:6833-1509984000-1509991200@idss-stage.mit.edu
SUMMARY:SES Admissions Info Session
DESCRIPTION:Join us for pizza and an Admissions Information Session on the Social and Engineering Systems Doctoral Program.
URL:https://idss-stage.mit.edu/calendar/ses-admissions-info-session/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
ATTACH;FMTTYPE=image/png:https://idss-stage.mit.edu/wp-content/uploads/2017/10/2017-Infinite-Display-Info-Session-Ad.png
ORGANIZER;CN="SES":MAILTO:idss_academic_office@mit.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171103T110000
DTEND;TZID=America/New_York:20171103T120000
DTSTAMP:20260407T180103
CREATED:20171120T200525Z
LAST-MODIFIED:20171120T200525Z
UID:7014-1509706800-1509710400@idss-stage.mit.edu
SUMMARY:Statistics\, Computation and Learning with Graph Neural Networks
DESCRIPTION:Abstract: \nDeep Learning\, thanks mostly to Convolutional architectures\, has recently transformed computer vision and speech recognition. Their ability to encode geometric stability priors\, while offering enough expressive power\, is at the core of their success. In such settings\, geometric stability is expressed in terms of local deformations\, and it is enforced thanks to localized convolutional operators that separate the estimation into scales. \nMany problems across applied sciences\, from particle physics to recommender systems\, are formulated in terms of signals defined over non-Euclidean geometries\, and also come with strong geometric stability priors. In this talk\, I will present techniques that exploit geometric stability in general geometries with appropriate graph neural network architectures. We will show that these techniques can all be framed in terms of local graph generators such as the graph Laplacian. We will present some stability certificates\, as well as applications to computer graphics\, particle physics and graph estimation problems. In particular\, we will describe how graph neural networks can be used to reach statistical detection thresholds in community detection on random graph families\, and attack hard combinatorial optimization problems\, such as the Quadratic Assignment Problem. \nBiography: \nJoan Bruna graduated from Universitat Politecnica de Catalunya (Barcelona\, Spain) in both Mathematics and Electrical Engineering. He obtained an M.Sc. in applied mathematics from ENS Cachan (France). He then became a research engineer in an image processing startup\, developing real-time video processing algorithms. He obtained his PhD in Applied Mathematics at Ecole Polytechnique (France). He was a postdoctoral researcher at the Courant Institute\, NYU\, New York\, and a fellow at Facebook AI Research. 
In 2015\, he became Assistant Professor at UC Berkeley\, Statistics Department\, and starting Fall 2016 he joined the Courant Institute (NYU\, New York) as Assistant Professor in Computer Science\, Data Science and Mathematics (affiliated). His research interests include invariant signal representations\, high-dimensional statistics and stochastic processes\, deep learning and its applications to signal processing.
URL:https://idss-stage.mit.edu/calendar/statistics-computation-and-learning-with-graph-neural-networks/
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171101T110000
DTEND;TZID=America/New_York:20171101T120000
DTSTAMP:20260407T180103
CREATED:20171120T181051Z
LAST-MODIFIED:20171120T192008Z
UID:7007-1509534000-1509537600@idss-stage.mit.edu
SUMMARY:Unbiased Markov chain Monte Carlo with couplings
DESCRIPTION:Abstract: Markov chain Monte Carlo methods provide consistent approximations of integrals as the number of iterations goes to infinity. However\, these estimators are generally biased after any fixed number of iterations\, which complicates parallel computation. In this talk I will explain how to remove this burn-in bias by using couplings of Markov chains and a telescopic sum argument\, inspired by Glynn & Rhee (2014). The resulting unbiased estimators can be computed independently in parallel\, and averaged. I will present coupling constructions for Metropolis-Hastings\, Gibbs and Hamiltonian Monte Carlo. The proposed methodology will be illustrated on various examples. If time permits\, I will describe how the proposed estimators can approximate the “cut” distribution that arises in Bayesian inference for misspecified models made of sub-models. \nThis is joint work with John O’Leary\, Yves F. Atchade and Jeremy Heng\,\navailable at arxiv.org/abs/1708.03625 and arxiv.org/abs/1709.00404. \nBiography: Pierre Jacob has been an Assistant Professor of Statistics at Harvard University since 2015. Before that\, he was a postdoctoral research fellow at the University of Oxford and the National University of Singapore. He obtained his Ph.D. from Université Paris-Dauphine\, working on computational methods for Bayesian inference. His current research is on algorithms amenable to parallel computing for Bayesian inference and model comparison\, with a focus on time series models. \nPierre E. Jacob\nAssistant Professor of Statistics\, Harvard University\npersonal website: sites.google.com/site/pierrejacob/\nblog: statisfaction.wordpress.com/
URL:https://idss-stage.mit.edu/calendar/unbiased-markov-chain-monte-carlo-with-couplings/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171031T160000
DTEND;TZID=America/New_York:20171031T170000
DTSTAMP:20260407T180103
CREATED:20171002T154828Z
LAST-MODIFIED:20190501T144833Z
UID:6535-1509465600-1509469200@idss-stage.mit.edu
SUMMARY:Structure\, Randomness and Universality
DESCRIPTION:What is the minimum possible number of vertices of a graph that contains every k-vertex graph as an induced subgraph? What is the minimum possible number of edges in a graph that contains every k-vertex graph with maximum degree 3 as a subgraph? These questions and related ones were initiated by Rado in the 60s\, and received a considerable amount of attention over the years\, partly motivated by algorithmic applications. The study of the subject combines probabilistic arguments and explicit\, structured constructions. I will survey the topic focusing on a recent asymptotic solution of the first question\, where an asymptotic formula\, improving earlier estimates by several researchers\, is obtained by combining combinatorial and probabilistic arguments with group theoretic tools. \nBio: Noga Alon is the Baumritter Professor of Mathematics and Computer Science at Tel Aviv University\, Israel. He received his Ph.D. in Mathematics at the Hebrew University of Jerusalem in 1983 and has held visiting positions in various research institutes including MIT\, the Institute for Advanced Study in Princeton\, IBM Almaden Research Center\, Bell Laboratories\, Bellcore and Microsoft Research. He joined Tel Aviv University in 1985\, served as the head of the School of Mathematical Sciences in 1999-2000\, and has supervised about 20 PhD students. Since 2009 he has also been a member of Microsoft Research\, Israel. He serves on the editorial boards of more than a dozen international technical journals and has given invited lectures at many conferences\, including plenary addresses at the 1996 European Congress of Mathematics and the 2002 International Congress of Mathematicians. He has published more than five hundred research papers and one book. \n\n\nHis research interests are mainly in Combinatorics\, Graph Theory and their applications in Theoretical Computer Science. 
His main contributions include the study of expander graphs and their applications\, the investigation of derandomization techniques\, the foundation of streaming algorithms\, the development and applications of algebraic and probabilistic methods in Discrete Mathematics\, and the study of problems in Information Theory\, Combinatorial Geometry and Combinatorial Number Theory. \nHe is an ACM Fellow and an AMS Fellow\, a member of the Israel Academy of Sciences and Humanities since 1997 and of the Academia Europaea since 2008\, and received the Erdös Prize in 1989\, the Feher Prize in 1991\, the Polya Prize in 2000\, the Bruno Memorial Award in 2001\, the Landau Prize in 2005\, the Gödel Prize in 2005\, the Israel Prize in 2008\, the EMET Prize in 2011\, the Dijkstra Prize in 2016\, and an Honorary Doctorate from ETH Zurich in 2013 and from the University of Waterloo in 2015. \n\n____________________________________ \nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing. 
URL:https://lids.mit.edu/news-and-events/events/joint-seminar-csail-theory-computation-toc
LOCATION:32-G449 (Kiva)\, United States
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171027T110000
DTEND;TZID=America/New_York:20171027T120000
DTSTAMP:20260407T180103
CREATED:20171002T194208Z
LAST-MODIFIED:20171120T180403Z
UID:6559-1509102000-1509105600@idss-stage.mit.edu
SUMMARY:Stochastics and Statistics Seminar - Amit Daniely (Google)
DESCRIPTION:Abstract:  \nCan learning theory\, as we know it today\, form a theoretical basis for neural networks? I will try to discuss this question in light of two new results — one positive and one negative. \nBased on joint work with Roy Frostig\, Vineet Gupta and Yoram Singer\, and with Vitaly Feldman \nBiography: \nAmit Daniely is an Assistant Professor at the Hebrew University of Jerusalem\, and a research scientist at Google Research\, Tel-Aviv. Prior to that\, he was a research scientist at Google Research\, Mountain View. Even prior to that\, he was a Ph.D. student at the Hebrew University of Jerusalem\, Israel\, supervised by Nati Linial and Shai Shalev-Shwartz. His main research interest is Machine Learning Theory.
URL:https://idss-stage.mit.edu/calendar/stochastic-and-statistics-seminar-amit-daniely-google/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171024T160000
DTEND;TZID=America/New_York:20171024T170000
DTSTAMP:20260407T180103
CREATED:20171002T154138Z
LAST-MODIFIED:20190501T145009Z
UID:6530-1508860800-1508864400@idss-stage.mit.edu
SUMMARY:Regularized Nonlinear Acceleration
DESCRIPTION:We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed via a simple linear system\, whose solution can be updated online. This acceleration scheme runs in parallel to the base algorithm\, providing improved estimates of the solution on the fly\, while the original optimization method is running. Numerical experiments are detailed on classical classification problems. \nBio: After dual PhDs from Ecole Polytechnique and Stanford University in optimisation and finance\, followed by a postdoc at U.C. Berkeley\, Alexandre d’Aspremont joined the faculty at Princeton University as an assistant then associate professor with joint appointments at the ORFE department and the Bendheim Center for Finance. He returned to Europe in 2011 thanks to a grant from the European Research Council and is now a research director at CNRS\, attached to Ecole Normale Supérieure in Paris. His research focuses on convex optimization and applications to machine learning\, statistics and finance. \n\n____________________________________ \nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing. 
URL:https://lids.mit.edu/news-and-events/events/alexandre-tsybakov-ensae-paristech
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20171023
DTEND;VALUE=DATE:20171111
DTSTAMP:20260407T180103
CREATED:20171025T184757Z
LAST-MODIFIED:20171107T230144Z
UID:6765-1508716800-1510358399@idss-stage.mit.edu
SUMMARY:Data Science Course Launches - open registration extended to November 10
DESCRIPTION:Every day\, your organization generates new data on your customers\, your processes\, and your industry. But could you be using this data more effectively? Developed by over ten MIT faculty members at the MIT Institute for Data\, Systems\, and Society (IDSS)\, this course is specially designed for professionals looking to learn the latest theories and strategies to harness data.
URL:https://idss-stage.mit.edu/calendar/data-science-course-launches-open-registration-extended/
ATTACH;FMTTYPE=image/png:https://idss-stage.mit.edu/wp-content/uploads/2017/10/Screen-Shot-2017-10-25-at-2.50.28-PM.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171020T110000
DTEND;TZID=America/New_York:20171020T120000
DTSTAMP:20260407T180103
CREATED:20171002T193921Z
LAST-MODIFIED:20171006T202431Z
UID:6555-1508497200-1508500800@idss-stage.mit.edu
SUMMARY:Inference in dynamical systems and the geometry of learning group actions - Sayan Mukherjee (Duke)
DESCRIPTION:
URL:https://idss-stage.mit.edu/calendar/inference-in-dynamical-systems-and-the-geometry-of-learning-group-actions-sayan-mukherjee-duke/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20171019T163000
DTEND;TZID=UTC:20171019T173000
DTSTAMP:20260407T180103
CREATED:20170831T230110Z
LAST-MODIFIED:20171002T193958Z
UID:6078-1508430600-1508434200@idss-stage.mit.edu
SUMMARY:Special Stochastics and Statistics Seminar - John Cunningham (Columbia)
DESCRIPTION:
URL:https://idss-stage.mit.edu/calendar/special-stochastics-and-statistics-seminar-john-cunningham-columbia/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171017T160000
DTEND;TZID=America/New_York:20171017T170000
DTSTAMP:20260407T180103
CREATED:20171002T153935Z
LAST-MODIFIED:20190501T145140Z
UID:6528-1508256000-1508259600@idss-stage.mit.edu
SUMMARY:The Maps Inside Your Head
DESCRIPTION:How do our brains make sense of a complex and unpredictable world? In this talk\, I will discuss an information theory approach to the neural topography of information processing in the brain. First I will review the brain’s architecture\, and how neural circuits map out the sensory and cognitive worlds. Then I will describe how highly complex sensory and cognitive tasks are carried out by the cooperative action of many specialized neurons and circuits\, each of which has a simple function. I will illustrate my remarks with one sensory example and one cognitive example. For the sensory example\, I will consider the sense of smell (“olfaction”)\, whereby humans and other animals distinguish vast arrays of odor mixtures using very limited neural resources. For the cognitive example\, I will consider the “sense of place”\, that is\, how animals mentally represent their physical location. Both examples demonstrate that brains have evolved neural circuits that exploit sophisticated principles of mathematics and information processing – principles that scientists have only recently discovered. \nBio: Vijay Balasubramanian is the Cathy and Marc Lasry Professor in the Physics Department at the University of Pennsylvania\, where he is also Director of the Computational Neuroscience Initiative. He received B.Sc. degrees in Physics and Computer Science\, and an M.Sc. in Computer Science\, from MIT. He earned a Ph.D. in Theoretical Physics at Princeton University\, and was a Junior Fellow of the Harvard Society of Fellows. \n\n\n____________________________________ \n\nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing. 
URL:https://lids.mit.edu/news-and-events/events/maps-inside-your-head
LOCATION:MIT Building 32\, Room 141\, The Stata Center (32-141)\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171013T110000
DTEND;TZID=America/New_York:20171013T120000
DTSTAMP:20260407T180103
CREATED:20171002T182143Z
LAST-MODIFIED:20171006T201516Z
UID:6549-1507892400-1507896000@idss-stage.mit.edu
SUMMARY:Additivity of Information in Deep Generative Network:  The I-MMSE Transform Method - Galen Reeves (Duke University)
DESCRIPTION:
URL:https://idss-stage.mit.edu/calendar/additivity-of-information-in-deep-generative-network-the-i-mmse-transform-method-galen-reeves-duke-university/
LOCATION:MIT Building E18\, Room 304\, Ford Building (E18)\, 50 Ames Street\, Cambridge\, MA\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
END:VCALENDAR