BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IDSS STAGE - ECPv6.15.11//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:IDSS STAGE
X-ORIGINAL-URL:https://idss-stage.mit.edu
X-WR-CALDESC:Events for IDSS STAGE
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20180311T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20181104T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20190310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20191103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191016T160000
DTEND;TZID=America/New_York:20191016T170000
DTSTAMP:20260413T055033Z
CREATED:20191010T172901Z
LAST-MODIFIED:20191010T173013Z
UID:10964-1571241600-1571245200@idss-stage.mit.edu
SUMMARY:Markov Chain Monte Carlo Methods and Some Attempts at Parallelizing Them
DESCRIPTION:IDS.190 – Topics in Bayesian Modeling and Computation \nAbstract: \nMCMC methods yield approximations that converge to quantities of interest in the limit of the number of iterations. This iterative asymptotic justification is not ideal: it stands at odds with current trends in computing hardware. Namely\, it would often be computationally preferable to run many short chains in parallel\, but such an approach is flawed because of the so-called “burn-in” bias. This talk will first describe that issue and some known resolutions\, including regeneration techniques and sequential Monte Carlo samplers. Then I will describe a recent proposal\, joint work with John O’Leary\, Yves Atchadé\, and others\, that allows one to remove the burn-in bias completely. In a nutshell\, the proposed unbiased estimators are constructed from pairs of chains that are generated over a random\, finite number of iterations. Furthermore\, their variances and costs can be made arbitrarily close to those of standard MCMC estimators\, if desired. The proposed method is described in https://arxiv.org/abs/1708.03625\, and R code to reproduce the experiments is available at https://github.com/pierrejacob/unbiasedmcmc. \nBiography: \nPierre E. Jacob is an Associate Professor of Statistics at Harvard University. He develops methods for statistical inference\, e.g.\, to run Monte Carlo methods on parallel computers\, to compare models\, to estimate latent variables\, and to deal with intractable likelihood functions. \n– \nFor more information and an up-to-date schedule\, please see https://stellar.mit.edu/S/course/IDS/fa19/IDS.190/ \n**Taking IDS.190 satisfies the seminar requirement for students in MIT’s Interdisciplinary Doctoral Program in Statistics (IDPS)\, but formal registration is open to any graduate student who can register for MIT classes. And the meetings are open to any interested researcher. Talks will be followed by 30 minutes of tea/snacks and informal discussion.**
URL:https://stat.mit.edu/calendar/markov-chain-monte-carlo-methods-and-some-attempts-at-parallelizing-them/
LOCATION:E18-304\, United States
CATEGORIES:IDS.190 - Topics in Bayesian Modeling and Computation
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191016T090000
DTEND;TZID=America/New_York:20191016T100000
DTSTAMP:20260413T055033Z
CREATED:20190802T190945Z
LAST-MODIFIED:20190802T192632Z
UID:10462-1571216400-1571220000@idss-stage.mit.edu
SUMMARY:SES PhD Admissions Webinar
DESCRIPTION:Learn about admission to the Social and Engineering Systems Doctoral Program. Webinars are led by a member of the IDSS faculty who introduces the program and answers your questions. \nPlease register in advance.
URL:https://idss-stage.mit.edu/calendar/ses-phd-admissions-webinar-5/
CATEGORIES:Online events
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191011T110000
DTEND;TZID=America/New_York:20191011T120000
DTSTAMP:20260413T055033Z
CREATED:20190923T173105Z
LAST-MODIFIED:20190926T135551Z
UID:10860-1570791600-1570795200@idss-stage.mit.edu
SUMMARY:The Planted Matching Problem
DESCRIPTION:Abstract:\n\nWhat happens when an optimization problem has a good solution built into it\, one that is partly obscured by randomness? Here we revisit a classic polynomial-time problem\, the minimum perfect matching problem on bipartite graphs. If the edges have random weights in [0\,1]\, Mézard and Parisi — and then Aldous\, rigorously — showed that the minimum matching has expected weight zeta(2) = pi^2/6. We consider a “planted” version where a particular matching has weights drawn from an exponential distribution with mean mu/n. When mu < 1/4\, the minimum matching is almost identical to the planted one. When mu > 1/4\, the overlap between the two is given by a system of differential equations that result from a message-passing algorithm. This is joint work with Mehrdad Moharrami (Michigan) and Jiaming Xu (Duke).\n\nBiography:\n\nCristopher Moore received his B.A. in Physics\, Mathematics\, and Integrated Science from Northwestern University\, and his Ph.D. in Physics from Cornell. From 2000 to 2012 he was a professor at the University of New Mexico\, with joint appointments in Computer Science and Physics. Since 2012\, Moore has been a resident professor at the Santa Fe Institute; he has also held visiting positions at École Normale Supérieure\, École Polytechnique\, Université Paris 7\, the Niels Bohr Institute\, Northeastern University\, and the University of Michigan. He has published over 150 papers at the boundary between physics and computer science\, ranging from quantum computing\, to phase transitions in NP-complete problems\, to the theory of social networks and efficient algorithms for analyzing their structure. He is an elected Fellow of the American Physical Society\, the American Mathematical Society\, and the American Association for the Advancement of Science. With Stephan Mertens\, he is the author of The Nature of Computation from Oxford University Press.\n\n–\n\nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/moore/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191009T160000
DTEND;TZID=America/New_York:20191009T170000
DTSTAMP:20260413T055033Z
CREATED:20191007T141618Z
LAST-MODIFIED:20191007T141618Z
UID:10922-1570636800-1570640400@idss-stage.mit.edu
SUMMARY:Probabilistic Programming and Artificial Intelligence
DESCRIPTION:IDS.190 – Topics in Bayesian Modeling and Computation \nAbstract: \nProbabilistic programming is an emerging field at the intersection of programming languages\, probability theory\, and artificial intelligence. This talk will show how to use recently developed probabilistic programming languages to build systems for robust 3D computer vision\, without requiring any labeled training data; for automatic modeling of complex real-world time series; and for machine-assisted analysis of experimental data that is too small and/or messy for standard approaches from machine learning and statistics. \nThis talk will use these applications to illustrate recent technical innovations in probabilistic programming that formalize and unify modeling approaches from multiple eras of AI\, including generative models\, neural networks\, symbolic programs\, causal Bayesian networks\, and hierarchical Bayesian modeling. Specifically\, it will present languages in which models are represented using executable code\, and in which inference is programmable using novel constructs for Monte Carlo\, optimization-based\, and neural inference. It will also present techniques for Bayesian learning of probabilistic program structure and parameters from real-world data. Finally\, this talk will review challenges and research opportunities in the development and use of general-purpose probabilistic programming languages that are performant enough and flexible enough for real-world AI engineering. \nBiography: \nVikash Mansinghka is a Principal Research Scientist at MIT\, where he leads the MIT Probabilistic Computing Project. Vikash holds S.B. degrees in Mathematics and in Computer Science from MIT\, as well as an M.Eng. in Computer Science and a PhD in Computation. He also held graduate fellowships from the National Science Foundation and MIT’s Lincoln Laboratory. His PhD dissertation on natively probabilistic computation won the MIT George M. Sprowls dissertation award in computer science\, and his research on the Picture probabilistic programming language won an award at CVPR. He co-founded two VC-backed startups — Prior Knowledge (acquired by Salesforce in 2012) and Empirical Systems (acquired by Tableau in 2018) — and has consulted on probabilistic programming for leading companies in the semiconductor\, biopharma\, IT services\, and banking sectors. He served on DARPA’s Information Science and Technology advisory board from 2010 to 2012\, currently serves on the editorial boards for the Journal of Machine Learning Research and the journal Statistics and Computation\, and co-founded the International Conference on Probabilistic Programming. \n=========== \nFor more information and an up-to-date schedule\, please see https://stellar.mit.edu/S/course/IDS/fa19/IDS.190/ \n**Taking IDS.190 satisfies the seminar requirement for students in MIT’s Interdisciplinary Doctoral Program in Statistics (IDPS)\, but formal registration is open to any graduate student who can register for MIT classes. And the meetings are open to any interested researcher. Talks will be followed by 30 minutes of tea/snacks and informal discussion.**
URL:https://stat.mit.edu/calendar/probabilistic-programming-and-artificial-intelligence/
LOCATION:E18-304\, United States
CATEGORIES:IDS.190 - Topics in Bayesian Modeling and Computation
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191007T160000
DTEND;TZID=America/New_York:20191007T170000
DTSTAMP:20260413T055033Z
CREATED:20190722T170917Z
LAST-MODIFIED:20191009T193356Z
UID:10361-1570464000-1570467600@idss-stage.mit.edu
SUMMARY:Theoretical Foundations of Active Machine Learning
DESCRIPTION:Title:\nTheoretical Foundations of Active Machine Learning\nAbstract:\nThe field of Machine Learning (ML) has advanced considerably in recent years\, but mostly in well-defined domains using huge amounts of human-labeled training data. Machines can recognize objects in images and translate text\, but they must be trained with more images and text than a person can see in nearly a lifetime. The computational complexity of training has been offset by recent technological advances\, but the cost of training data is measured in terms of the human effort in labeling data. People are not getting faster or cheaper\, so generating labeled training datasets has become a major bottleneck in ML pipelines. Active ML aims to address this issue by designing learning algorithms that automatically and adaptively select the most informative examples for labeling so that human time is not wasted labeling irrelevant\, redundant\, or trivial examples. This talk explores the development of active ML theory and methods over the past decade\, including recently proposed approaches to active ML with nonparametric or overparameterized models such as neural networks. \nSpeaker: Rob Nowak\, University of Wisconsin\, Madison\nReception to follow.
URL:https://idss-stage.mit.edu/calendar/idss-distinguished-speaker-seminar-with-rob-nowak-university-of-wisconsin-madison/
LOCATION:E18-304\, United States
CATEGORIES:IDSS Distinguished Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191002T160000
DTEND;TZID=America/New_York:20191002T170000
DTSTAMP:20260413T055033Z
CREATED:20191001T173138Z
LAST-MODIFIED:20191001T173138Z
UID:10895-1570032000-1570035600@idss-stage.mit.edu
SUMMARY:Behavior of the Gibbs Sampler in the Imbalanced Case/Bias Correction from Daily Min and Max Temperature Measurements
DESCRIPTION:IDS.190 – Topics in Bayesian Modeling and Computation \n*Note: The speaker this week will give two shorter talks within the usual session \nTitle: \nBehavior of the Gibbs sampler in the imbalanced case \nAbstract: \nMany modern applications collect highly imbalanced categorical data\, with some categories relatively rare. Bayesian hierarchical models combat data sparsity by borrowing information\, while also quantifying uncertainty. However\, posterior computation presents a fundamental barrier to routine use; a single class of algorithms does not work well in all settings and practitioners waste time trying different types of MCMC approaches. This talk is motivated by an application to quantitative advertising in which we encountered extremely poor computational performance for common data augmentation MCMC algorithms but obtained excellent performance for adaptive Metropolis. To obtain a deeper understanding of this behavior\, we give strong theoretical results on computational complexity in an infinitely imbalanced asymptotic regime. Our results show why the data augmentation methods work poorly. \nTitle: \nBias correction from the daily min and max temperature measurements \nAbstract: \nThis will be a talk on an applied project\, which involves a mix of modeling and obtaining MCMC samplers for a data set from the climate sciences. \n=========== \nFor more information and an up-to-date schedule\, please see https://stellar.mit.edu/S/course/IDS/fa19/IDS.190/ \n**Taking IDS.190 satisfies the seminar requirement for students in MIT’s Interdisciplinary Doctoral Program in Statistics (IDPS)\, but formal registration is open to any graduate student who can register for MIT classes. And the meetings are open to any interested researcher. Talks will be followed by 30 minutes of tea/snacks and informal discussion.**
URL:https://stat.mit.edu/calendar/pillai/
LOCATION:E18-304\, United States
CATEGORIES:IDS.190 - Topics in Bayesian Modeling and Computation
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191001T160000
DTEND;TZID=America/New_York:20191001T170000
DTSTAMP:20260413T055033Z
CREATED:20190920T150647Z
LAST-MODIFIED:20190920T150647Z
UID:10842-1569945600-1569949200@idss-stage.mit.edu
SUMMARY:Data-driven Coordination of Distributed Energy Resources
DESCRIPTION:The integration of distributed energy resources (DERs)\, e.g.\, rooftop photovoltaics installations\, electric energy storage devices\, and flexible loads\, is becoming prevalent. This integration poses numerous operational challenges on the lower-voltage systems to which the DERs are connected\, but also creates new opportunities for the provision of grid services. In the first part of the talk\, we discuss one such operational challenge—ensuring proper voltage regulation in the distribution network to which DERs are connected. To address this problem\, we propose a Volt/VAR control architecture that relies on the proper coordination of conventional voltage regulation devices\, e.g.\, tap changing under load (TCUL) transformers and switched capacitors\, and DERs with reactive power provision capability. In the second part of the talk\, we discuss one such opportunity—utilizing DERs to provide regulation services to the bulk power grid. To leverage this opportunity\, we propose a scheme for coordinating the response of the DERs so that the power injected into the distribution network (to which the DERs are connected) follows some regulation signal provided by the bulk power system operator. Throughout the talk\, we assume limited knowledge of the particular power system models and develop data-driven methods to learn them. We then utilize these models to design appropriate controls for determining the set-points of DERs (and other assets\, e.g.\, TCULs) in an optimal or nearly-optimal fashion. \nBio: \nAlejandro Domínguez-García is a Professor in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign\, where he is affiliated with the Power and Energy Systems area. Also within ECE Illinois\, he is a Research Professor in the Coordinated Science Laboratory and in the Information Trust Institute and has been a Grainger Associate since 2011\, and a William L. Everitt Scholar since 2017. His research program aims at the development of technologies for providing a reliable and efficient supply of electricity. Specific activities within his program include work on: (i) control of distributed energy resources\, (ii) power system health monitoring and reliability analysis\, and (iii) quantifying and mitigating the impact of renewable-based generation.\n\nProfessor Domínguez-García received the degree of “Ingeniero Industrial” from the University of Oviedo in 2001\, and the Ph.D. degree in electrical engineering and computer science from MIT in 2007. He also spent time as a post-doctoral research associate at MIT before joining the Illinois faculty in 2008. He received the NSF CAREER Award in 2010\, and the Young Engineer Award from the IEEE Power and Energy Society in 2012. In 2014\, he was invited by the National Academy of Engineering to attend the US Frontiers of Engineering Symposium and was selected by the University of Illinois at Urbana-Champaign Provost to receive a Distinguished Promotion Award. In 2015\, he received the U of I College of Engineering Dean’s Award for Excellence in Research. He is currently an associate editor of the IEEE Transactions on Control of Network Systems. He also served as an editor of the IEEE Transactions on Power Systems and IEEE Power Engineering Letters from 2011 to 2017.\n____________________________________ \nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing.
URL:https://lids.mit.edu/news-and-events/events/data-driven-coordination-distributed-energy-resources
LOCATION:32-155
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190930T160000
DTEND;TZID=America/New_York:20190930T170000
DTSTAMP:20260413T055033Z
CREATED:20190619T144446Z
LAST-MODIFIED:20191001T175118Z
UID:9778-1569859200-1569862800@idss-stage.mit.edu
SUMMARY:Selection and Endogenous Bias in Studies of Health Behaviors
DESCRIPTION:Abstract:\nStudies of health behaviors using observational data are prone to bias from selection in behavior choices. How important are these biases? Are they dynamic – that is\, are they influenced by the recommendations we make? Are there formal assumptions under which we can use information we have about selection on observed variables to learn about the possible bias from unobserved selection? \nAbout the Speaker:\nEmily Oster is a professor of economics. Prior to coming to Brown she was an associate professor at the University of Chicago Booth School of Business. She is affiliated with the National Bureau of Economic Research. She earned her BA and her PhD from Harvard\, in 2002 and 2006\, respectively. \n  \nReception to follow.
URL:https://idss-stage.mit.edu/calendar/selection-and-endogenous-bias-in-studies-of-health-behaviors/
LOCATION:E18-304\, United States
CATEGORIES:IDSS Distinguished Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20190930
DTEND;VALUE=DATE:20191001
DTSTAMP:20260413T055033Z
CREATED:20190716T135001Z
LAST-MODIFIED:20191218T184639Z
UID:10286-1569801600-1569887999@idss-stage.mit.edu
SUMMARY:Data Science and Big Data Analytics: Making Data-Driven Decisions
DESCRIPTION:Developed by 11 MIT faculty members at IDSS\, this seven-week course is specially designed for data scientists\, business analysts\, engineers and technical managers looking to learn strategies to harness data. Offered by MIT xPRO. Course begins September 30\, 2019. \n 
URL:https://xpro.mit.edu/courses/course-v1:xPRO+DSx/?utm_medium=website&utm_source=idss&utm_campaign=dsx-3t-2019&utm_content=event-calendar
LOCATION:online
CATEGORIES:Online events
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190927T110000
DTEND;TZID=America/New_York:20190927T120000
DTSTAMP:20260413T055033Z
CREATED:20190923T172454Z
LAST-MODIFIED:20191016T163112Z
UID:10858-1569582000-1569585600@idss-stage.mit.edu
SUMMARY:Frontiers of Efficient Neural-Network Learnability
DESCRIPTION:Abstract:  \nWhat are the most expressive classes of neural networks that can be learned\, provably\, in polynomial-time in a distribution-free setting? In this talk we give the first efficient algorithm for learning neural networks with two nonlinear layers using tools for solving isotonic regression\, a nonconvex (but tractable) optimization problem. If we further assume the distribution is symmetric\, we obtain the first efficient algorithm for recovering the parameters of a one-layer convolutional network. These results implicitly make use of a convex surrogate loss for generalized linear models and go beyond the kernel-method/overparameterization regime used in recent works.\n\nBiography:  \nAdam Klivans is a professor of computer science at the University of Texas at Austin who works in theoretical computer science and machine learning. He completed his doctorate in mathematics from MIT\, where he was awarded the Charles W. and Jennifer C. Johnson Prize. \nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/frontiers/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190923T140000
DTEND;TZID=America/New_York:20190923T150000
DTSTAMP:20260413T055033Z
CREATED:20190920T150517Z
LAST-MODIFIED:20190920T150517Z
UID:10840-1569247200-1569250800@idss-stage.mit.edu
SUMMARY:Power of Experimental Design and Active Learning
DESCRIPTION:Classical supervised machine learning algorithms focus on the setting where the algorithm has access to a fixed labeled dataset obtained prior to any analysis. In most applications\, however\, we have control over the data collection process\, such as which image labels to obtain\, which drug-gene interactions to record\, which network routes to probe\, which movies to rate\, etc. Furthermore\, most applications face budget limitations on the number of labels that can be collected. Experimental design and active learning are two paradigms that involve careful selection of data points to label from a large unlabeled pool. This talk will discuss and contrast the power of experimental design and active learning\, starting with some recent advances in these paradigms and then posing open questions involving their integration and application to deep models. \nBio: Aarti Singh is an Associate Professor in the Machine Learning Department at Carnegie Mellon University. Her research lies at the intersection of machine learning\, statistics\, and signal processing\, and focuses on designing statistically and computationally efficient algorithms for learning from direct\, compressive\, and interactive queries. Her work is recognized by an NSF CAREER Award\, the United States Air Force Young Investigator Award\, the A. Nico Habermann Junior Faculty Chair Award\, the Harold A. Peterson Best Dissertation Award\, and three best student paper awards. Her service honors include serving as Program Chair for the International Conference on Machine Learning (ICML) 2020\, Program Chair for the Artificial Intelligence and Statistics (AISTATS) 2017 conference\, member of the National Academy of Sciences (NAS) Committee on Applied and Theoretical Statistics\, guest editor for the Electronic Journal of Statistics\, and Associate Editor of the IEEE Transactions on Information Theory and the IEEE Transactions on Signal and Information Processing over Networks.\n____________________________________ \nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing.
URL:https://lids.mit.edu/news-and-events/events/power-experimental-design-and-active-learning
LOCATION:E18-304\, United States
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190920T110000
DTEND;TZID=America/New_York:20190920T120000
DTSTAMP:20260413T055033Z
CREATED:20190910T191447Z
LAST-MODIFIED:20191016T163208Z
UID:10670-1568977200-1568980800@idss-stage.mit.edu
SUMMARY:Some New Insights On Transfer Learning
DESCRIPTION:Abstract:  \nThe problem of transfer and domain adaptation is ubiquitous in machine learning and concerns situations where predictive technologies\, trained on a given source dataset\, have to be transferred to a new target domain that is somewhat related. For example\, transferring voice recognition trained on American English accents to apply to Scottish accents\, with minimal retraining. A first challenge is to understand how to properly model the ‘distance’ between source and target domains\, viewed as probability distributions over a feature space.\n\nIn this talk we will argue that various existing notions of distance between distributions turn out to be pessimistic\, i.e.\, these distances might appear high in many situations where transfer is possible\, even at fast rates. Instead we show that some new notions of distance tightly capture a continuum from easy to hard transfer\, and furthermore can be adapted to\, i.e.\, do not need to be estimated in order to perform near-optimal transfer. Finally we will discuss near-optimal approaches to minimizing sampling of target data (e.g. sampling Scottish speech)\, when one already has access to a given amount of source data (e.g. American speech).\n\nThis talk is based on some joint work with G. Martinet\, and ongoing work with S. Hanneke.\n\nBiography:  \nSamory Kpotufe is an Associate Professor in Statistics at Columbia University. He works in machine learning\, with an emphasis on nonparametric methods and high dimensional statistics. Generally\, his interests are in understanding basic learning scenarios under practical constraints from modern application domains. He has previously held positions at the Max Planck Institute in Germany\, the Toyota Technological Institute at Chicago\, and Princeton University. \nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://idss-stage.mit.edu/calendar/some-new-insights-on-transfer-learning/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190918T160000
DTEND;TZID=America/New_York:20190918T170000
DTSTAMP:20260413T055033Z
CREATED:20190916T194901Z
LAST-MODIFIED:20190916T194901Z
UID:10702-1568822400-1568826000@idss-stage.mit.edu
SUMMARY:Probabilistic Modeling meets Deep Learning using TensorFlow Probability
DESCRIPTION:IDS.190 – Topics in Bayesian Modeling and Computation \nSpeaker: \nBrian Patton (Google AI) \nAbstract: \nTensorFlow Probability provides a toolkit to enable researchers and practitioners to integrate uncertainty with gradient-based deep learning on modern accelerators. In this talk we’ll walk through some practical problems addressed using TFP; discuss the high-level interfaces\, goals\, and principles of the library; and touch on some recent innovations in describing probabilistic graphical models. Time permitting\, we may touch on a couple of areas of research interest for the team.\n\n–\n\n**Taking IDS.190 satisfies the seminar requirement for students in MIT’s Interdisciplinary Doctoral Program in Statistics (IDPS)\, but formal registration is open to any graduate student who can register for MIT classes. For more information and an up-to-date schedule\, please see https://stellar.mit.edu/S/course/IDS/fa19/IDS.190/\n\n**Meetings are open to any interested researcher.
URL:https://stat.mit.edu/calendar/probabilistic-modeling-meets-deep-learning-using-tensorflow-probability/
LOCATION:E18-304\, United States
CATEGORIES:IDS.190 - Topics in Bayesian Modeling and Computation
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190916T160000
DTEND;TZID=America/New_York:20190916T170000
DTSTAMP:20260413T055033Z
CREATED:20190920T150317Z
LAST-MODIFIED:20190920T150317Z
UID:10838-1568649600-1568653200@idss-stage.mit.edu
SUMMARY:Dynamic Monitoring and Decision Systems (DyMonDS) Framework for Data-Enabled Integration in Complex Electric Energy Systems
DESCRIPTION:In this talk\, we introduce a unifying Dynamic Monitoring and Decision Systems (DyMonDS) framework that is based on multi-layered modeling for aggregation and minimal coordination of interactions between the layers of complex electric energy systems. Using this approach\, distributed control and optimization problems are formulated so that: (1) the low-level decision-makers optimize the cost of local interactions while accounting for their heterogeneous technologies\, as well as for their social and risk preferences; and (2) the higher-layer aggregators and coordinators optimize the cost of all interactions at their levels to enable cooperative control. The interactions of each layer are abstracted by using a unifying energy state space and the Lagrange coefficients associated with the general physical laws. This sets the basis both for nonlinear control of power-electronically-switched automation and for market design formulation. Potential benefits (such as enhanced reliability\, resiliency\, and efficiency) from integrating flexible technologies\, storage\, and control\, in particular\, are illustrated on simple IEEE test systems. \nBio: Marija Ilić is a Professor Emerita at Carnegie Mellon University. She is currently a Senior Staff member in the Energy Systems Group 73 at the MIT Lincoln Laboratory. She is also a Senior Research Scientist at MIT in LIDS and IDSS. She is an IEEE Life Fellow. She was the first recipient of the NSF Presidential Young Investigator Award for Power Systems. In addition to her academic work\, she has gained considerable industry experience as the founder of New Electricity Transmission Software Solutions\, Inc. (NETSS\, Inc.). She has co-authored several books on the subject of large-scale electric power systems and has co-organized an annual multidisciplinary Electricity Industry conference series at Carnegie Mellon with participants from academia\, government\, and industry. She was the founder and co-director of the Electric Energy Systems Group (EESG) at Carnegie Mellon University. Currently\, she is building EESG@MIT in the same spirit as EESG@CMU. \n____________________________________ \nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing.
URL:https://lids.mit.edu/news-and-events/events/dynamic-monitoring-and-decision-systems-dymonds-framework-data-enabled
LOCATION:32-155
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190911T160000
DTEND;TZID=America/New_York:20190911T170000
DTSTAMP:20260413T055033
CREATED:20190910T184518Z
LAST-MODIFIED:20190910T190807Z
UID:10666-1568217600-1568221200@idss-stage.mit.edu
SUMMARY:Automated Data Summarization for Scalability in Bayesian Inference
DESCRIPTION:IDS.190 – Topics in Bayesian Modeling and Computation \nAbstract: \nMany algorithms take prohibitively long to run on modern\, large datasets. But even in complex data sets\, many data points may be at least partially redundant for some task of interest. So one might instead construct and use a weighted subset of the data (called a “coreset”) that is much smaller than the original dataset. Typically running algorithms on a much smaller data set will take much less computing time\, but it remains to understand whether the output can be widely useful. (1) In particular\, can running an analysis on a smaller coreset yield answers close to those from running on the full data set? (2) And can useful coresets be constructed automatically for new analyses\, with minimal extra work from the user? We answer in the affirmative for a wide variety of problems in Bayesian inference. We demonstrate how to construct “Bayesian coresets” as an automatic\, practical pre-processing step. We prove that our method provides geometric decay in relevant approximation error as a function of coreset size. Empirical analysis shows that our method reduces approximation error by orders of magnitude relative to uniform random subsampling of data. Though we focus on Bayesian methods here\, we also show that our construction can be applied in other domains. \nBiography: \nTamara Broderick is an Associate Professor in EECS at MIT. \n**Meetings are open to any interested researcher.  \n**Taking IDS.190 satisfies the seminar requirement for students in MIT’s Interdisciplinary Doctoral Program in Statistics (IDPS)\, but formal registration is open to any graduate student who can register for MIT classes.  For more information and an up-to-date schedule\, please see https://stellar.mit.edu/S/course/IDS/fa19/IDS.190/ \n 
URL:https://stat.mit.edu/calendar/automated-data-summarization-for-scalability-in-bayesian-inference/
LOCATION:E18-304\, United States
CATEGORIES:IDS.190 - Topics in Bayesian Modeling and Computation
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190906T110000
DTEND;TZID=America/New_York:20190906T120000
DTSTAMP:20260413T055033
CREATED:20190903T150512Z
LAST-MODIFIED:20190903T152812Z
UID:10580-1567767600-1567771200@idss-stage.mit.edu
SUMMARY:GANs\, Optimal Transport\, and Implicit Density Estimation
DESCRIPTION:Abstract:  \nWe first study the rate of convergence for learning distributions with the adversarial framework and Generative Adversarial Networks (GANs)\, which subsumes Wasserstein\, Sobolev\, and MMD GANs as special cases. We study a wide range of parametric and nonparametric target distributions\, under a collection of objective evaluation metrics. On the nonparametric end\, we investigate the minimax optimal rates and fundamental difficulty of implicit density estimation under the adversarial framework. On the parametric end\, we establish a theory for general neural network classes that characterizes the interplay between the choice of generator and discriminator. We investigate how to obtain a good statistical guarantee for GANs through the lens of regularization. We discover and isolate a new notion of regularization\, called the generator/discriminator pair regularization\, that sheds light on the advantage of GANs compared to classical approaches for density estimation. We develop novel oracle inequalities as the main tools for analyzing GANs\, which are of independent theoretical interest. \nLater\, we proceed to discuss optimal transport\, estimation under the Wasserstein metric\, and how to use these tools for implicit density estimation. We will point out an interesting connection between pair regularization and optimal transport.\n\n\nBiography: \nDr. Liang is an assistant professor at Chicago Booth. He is also the George C. Tiao faculty fellow in data science research. His current research interests include computational and algorithmic aspects of statistical inference\, machine learning and statistical learning theory\, and stochastic methods in non-convex optimization. \nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/liang/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20190530
DTEND;VALUE=DATE:20190601
DTSTAMP:20260413T055033
CREATED:20190502T161055Z
LAST-MODIFIED:20190502T161155Z
UID:9592-1559174400-1559347199@idss-stage.mit.edu
SUMMARY:Learning for Dynamics and Control (L4DC)
DESCRIPTION:Over the next decade\, the biggest generator of data is expected to be devices which sense and control the physical world. This explosion of real-time data emerging from the physical world requires a rapprochement of areas such as machine learning\, control theory\, and optimization. While control theory has been firmly rooted in the tradition of model-based design\, the availability and scale of data (both temporal and spatial) will require a rethinking of the foundations of our discipline. From a machine learning perspective\, one of the main challenges going forward is to go beyond pattern recognition and address problems in data-driven control and optimization of dynamical processes. Our overall goal is to create a new community of people who think rigorously across the disciplines\, ask new questions\, and develop the foundations of this new scientific area.
URL:https://l4dc.mit.edu/
LOCATION:32-123\, United States
CATEGORIES:Conferences and Workshops
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20190520
DTEND;VALUE=DATE:20190522
DTSTAMP:20260413T055033
CREATED:20190417T144745Z
LAST-MODIFIED:20190417T145145Z
UID:9389-1558310400-1558483199@idss-stage.mit.edu
SUMMARY:Conference on Synthetic Controls and Related Methods
DESCRIPTION:Organizers are Alberto Abadie (MIT)\, Victor Chernozhukov (MIT)\, and Guido Imbens (Stanford University). The program is posted here. \nParticipation by invitation only.
URL:https://idss-stage.mit.edu/calendar/conference-on-synthetic-controls-and-related-methods/
LOCATION:E18-304\, United States
CATEGORIES:Conferences and Workshops
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190514T160000
DTEND;TZID=America/New_York:20190514T170000
DTSTAMP:20260413T055033
CREATED:20190301T172026Z
LAST-MODIFIED:20190501T142034Z
UID:8991-1557849600-1557853200@idss-stage.mit.edu
SUMMARY:Learning Engines for Healthcare: Using Machine Learning to Transform Clinical Practice and Discovery
DESCRIPTION:The overarching goal of my research is to develop cutting-edge machine learning\, AI and operations research theory\, methods\, algorithms\, and systems to understand the basis of health and disease; develop methodology to catalyze clinical research; support clinical decisions through individualized medicine; inform clinical pathways\, better utilize resources & reduce costs; and inform public health. \nTo do this\, Prof. van der Schaar is creating what she calls Learning Engines for Healthcare (LEH’s). An LEH is an integrated ecosystem that uses machine learning\, AI and operations research to provide clinical insights and healthcare intelligence to all the stakeholders (patients\, clinicians\, hospitals\, administrators). In contrast to an Electronic Health Record\, which provides a static\, passive\, isolated display of information\, an LEH provides a dynamic\, active\, holistic & individualized display of information including alerts. \nIn this talk Prof. van der Schaar will focus on 3 steps in the development of LEH’s: \n\nBuilding a comprehensive model that accommodates irregularly sampled\, temporally correlated\, informatively censored and non-stationary processes in order to understand and predict the longitudinal trajectories of diseases.\nEstablishing the theoretical limits of causal inference and using what has been established to create a new approach that makes it possible to better estimate individualized treatment effects.\nUsing Machine Learning itself to automate the design and construction of entire pipelines of Machine Learning algorithms for risk prediction\, screening\, diagnosis\, and prognosis.\n\nBio: Professor van der Schaar is John Humphrey Plummer Professor of Machine Learning\, Artificial Intelligence\, and Medicine at the University of Cambridge\, a Turing Faculty Fellow at The Alan Turing Institute in London\, where she leads the effort on data science and machine learning for personalized medicine. 
Prior to this\, she was a Chancellor’s Professor at UCLA and MAN Professor of Quantitative Finance at the University of Oxford. She is an IEEE Fellow (2009). She has received the Oon Prize on Preventative Medicine from the University of Cambridge (2018).  She has also been the recipient of an NSF Career Award\, 3 IBM Faculty Awards\, the IBM Exploratory Stream Analytics Innovation Award\, the Philips Make a Difference Award and several best paper awards\, including the IEEE Darlington Award. She holds 35 granted USA patents. Her current research focus is on data science\, machine learning\, AI and operations research for medicine. \n____________________________________ \nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing. 
URL:https://lids.mit.edu/news-and-events/events/lids-seminar-series-mihaela-van-der-schaar
LOCATION:32-155
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20190513
DTEND;VALUE=DATE:20190514
DTSTAMP:20260413T055033
CREATED:20190328T163321Z
LAST-MODIFIED:20190328T163321Z
UID:9178-1557705600-1557791999@idss-stage.mit.edu
SUMMARY:Data Science and Big Data Analytics: Making Data-Driven Decisions
DESCRIPTION:Developed by 11 MIT faculty members at IDSS\, this seven-week course is specially designed for data scientists\, business analysts\, engineers\, and technical managers looking to learn strategies to harness data. Offered by MIT xPRO. Course begins May 13\, 2019.
URL:https://mitxpro.mit.edu/courses/course-v1:MITxPRO+DSx+2T2019/about?utm_medium=website&#038;utm_source=idss&#038;utm_campaign=ds-su19&#038;utm_content=event-calendar
CATEGORIES:Online events
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190510T080000
DTEND;TZID=America/New_York:20190510T170000
DTSTAMP:20260413T055033
CREATED:20190204T204606Z
LAST-MODIFIED:20190307T163046Z
UID:8832-1557475200-1557507600@idss-stage.mit.edu
SUMMARY:Counting and sampling at low temperatures
DESCRIPTION:Abstract: \nWe consider the problem of efficient sampling from the hard-core and Potts models from statistical physics. On certain families of graphs\, phase transitions in the underlying physics model are linked to changes in the performance of some sampling algorithms\, including Markov chains. We develop new sampling and counting algorithms that exploit the phase transition phenomenon and work efficiently on lattices (and bipartite expander graphs) at sufficiently low temperatures in the phase coexistence regime. Our algorithms are based on Pirogov-Sinai theory and the cluster expansion\, classical tools from statistical physics. Joint work with Tyler Helmuth and Guus Regts. \nBiography: \nWill Perkins is an assistant professor in the Department of Mathematics\, Statistics\, and Computer Science at the University of Illinois at Chicago. His research interests are in probability\, combinatorics\, and algorithms. He received his PhD in 2011 from New York University\, then was a postdoc at Georgia Tech and faculty at the University of Birmingham before moving to UIC in 2018. \nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/tbd-willperkins/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190508T150000
DTEND;TZID=America/New_York:20190508T160000
DTSTAMP:20260413T055033
CREATED:20190426T181812Z
LAST-MODIFIED:20190501T142054Z
UID:9525-1557327600-1557331200@idss-stage.mit.edu
SUMMARY:Representing Short-Term Uncertainties in Capacity Expansion Planning Using a Rolling-Horizon Operation Model
DESCRIPTION:Flexible resources such as batteries and demand-side management technologies are needed to handle future large shares of variable renewable power. Wind and solar power introduce more short-term uncertainty that has to be considered when making investment decisions\, as it significantly impacts the value of flexible resources. \nIn this work we present a method for using duals from a rolling-horizon operational model\, with wind power uncertainty and market representations\, to represent power system operation in an investment problem. The method is based on Benders decomposition\, and special considerations are made due to the nature of the rolling-horizon operational framework. \nBio: Espen Flo Bødal is a PhD student from the Norwegian University of Science and Technology (NTNU) in Trondheim\, Norway. He has a master's degree in Electric Power Engineering and is currently starting the third year of his PhD on the topic of ”Large-Scale Hydrogen Production for Wind and Hydro Power in Constrained Transmission Grids”. From September 2018 to May 2019 he is visiting LIDS\, working with Audun Botterud. \n____________________________________ \nTea talks are 20-minute-long informal chalk-talks for the purpose of sharing ideas and making others aware about some of the topics that may be of interest to the LIDS and Stats audience. If you are interested in presenting in the upcoming tea talks\, please email lids_stats_tea@mit.edu.
URL:https://lids.mit.edu/news-and-events/events/representing-short-term-uncertainties-capacity-expansion-planning-using
LOCATION:32 – LIDS Lounge\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:LIDS & Stats Tea Talks
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190507T160000
DTEND;TZID=America/New_York:20190507T170000
DTSTAMP:20260413T055033
CREATED:20190129T150047Z
LAST-MODIFIED:20190514T131508Z
UID:8800-1557244800-1557248400@idss-stage.mit.edu
SUMMARY:Design and Analysis of Two-Stage Randomized Experiments
DESCRIPTION:Abstract:\nIn many social science experiments\, subjects often interact with each other and as a result\, one unit’s treatment can influence the outcome of another unit. Over the last decade\, significant progress has been made towards causal inference in the presence of such interference between units. In this talk\, we will discuss two-stage randomized experiments\, which enable the identification of the average spillover effects as well as the average direct effect of one’s own treatment. In particular\, we consider the setting with noncompliance\, in which some units in the treatment group do not receive the treatment while others in the control group may take it up. This implies that there may exist a spillover effect of the treatment assignment on the treatment receipt as well as a spillover effect of the treatment receipt on the outcome. To address this complication\, we generalize the instrumental variables method by allowing for interference between units and show how to identify the average complier direct effect. We also establish the connections between our nonparametric randomization-inference approach and the two-stage least squares regression. The proposed methodology is motivated by and applied to an ongoing randomized evaluation of India’s National Health Insurance Program (RSBY). Joint work with Zhichao Jiang and Anup Malani. \nAbout the Speaker:\nKosuke Imai is Professor in the Department of Government and the Department of Statistics at Harvard University. He is also an affiliate of the Institute for Quantitative Social Science\, where his primary office is located. Before moving to Harvard in 2018\, Imai taught at Princeton University for 15 years\, where he was the founding director of the Program in Statistics and Machine Learning. 
He specializes in the development of statistical methods and their applications to social science research and is the author of Quantitative Social Science: An Introduction (Princeton University Press\, 2017). Outside of Harvard\, Imai is currently serving as the President of the Society for Political Methodology. He is also Professor of Visiting Status in the Graduate Schools of Law and Politics at The University of Tokyo.
URL:https://idss-stage.mit.edu/calendar/idss-distinguished-speaker-seminar-may/
LOCATION:E18-304\, United States
CATEGORIES:IDSS Distinguished Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190503T110000
DTEND;TZID=America/New_York:20190503T120000
DTSTAMP:20260413T055033
CREATED:20190204T203624Z
LAST-MODIFIED:20190206T173354Z
UID:8827-1556881200-1556884800@idss-stage.mit.edu
SUMMARY:Stochastics and Statistics Seminar Series
DESCRIPTION:
URL:https://stat.mit.edu/calendar/tbd-tracyke/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190501T150000
DTEND;TZID=America/New_York:20190501T160000
DTSTAMP:20260413T055033
CREATED:20190424T142934Z
LAST-MODIFIED:20190501T142123Z
UID:9457-1556722800-1556726400@idss-stage.mit.edu
SUMMARY:Generalization and Learning under Dobrushin's Condition
DESCRIPTION:Statistical learning theory has largely focused on learning and generalization given independent and identically distributed (i.i.d.) samples. Motivated by applications involving time-series data\, there has been a growing literature on learning and generalization in settings where data is sampled from an ergodic process. This work has also developed complexity measures\, which appropriately extend Rademacher complexity to bound the generalization error and learning rates of hypothesis classes in this setting. Our work\, in contrast\, is motivated by settings where data is sampled on a network or a spatial domain and thus does not fit well into the framework of prior work. We provide learning and generalization bounds for data that are complexly dependent yet whose distribution satisfies the standard Dobrushin condition. Indeed\, we show that the standard complexity measures of (Gaussian) Rademacher complexity and VC dimension are sufficient measures of complexity for the purposes of bounding the generalization error and learning rates of hypothesis classes in our setting. Moreover\, our generalization bounds only degrade by constant factors compared to their i.i.d. analogs\, and our learnability bounds degrade by log factors in the size of the training set. \nJoint work with Constantinos Daskalakis\, Nishanth Dikkala\, and Siddhartha Jayanti. \nBio: Yuval Dagan is a PhD student in the EECS department at MIT. He received his Bachelor’s and Master’s degrees from the Technion – Israel Institute of Technology. \n____________________________________ \nTea talks are 20-minute-long informal chalk-talks for the purpose of sharing ideas and making others aware about some of the topics that may be of interest to the LIDS and Stats audience. If you are interested in presenting in the upcoming tea talks\, please email lids_stats_tea@mit.edu.
URL:https://lids.mit.edu/news-and-events/events/generalization-and-learning-under-dobrushins-condition
LOCATION:32 – LIDS Lounge\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:LIDS & Stats Tea Talks
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190430T160000
DTEND;TZID=America/New_York:20190430T170000
DTSTAMP:20260413T055033
CREATED:20190301T171816Z
LAST-MODIFIED:20190501T142201Z
UID:8989-1556640000-1556643600@idss-stage.mit.edu
SUMMARY:On Coupling Methods for Nonlinear Filtering and Smoothing
DESCRIPTION:Bayesian inference for non-Gaussian state-space models is a ubiquitous problem with applications ranging from geophysical data assimilation to mathematical finance. We will discuss how deterministic couplings between probability distributions enable new solutions to this problem. \nWe first consider filtering in high-dimensional models with nonlinear (potentially chaotic) dynamics and sparse observations in space and time. While the ensemble Kalman filter (EnKF) yields robust ensemble approximations of the filtering distribution in this setting\, it is limited by linear forecast-to-analysis transformations. To generalize the EnKF\, we propose a methodology that transforms the non-Gaussian forecast ensemble at each assimilation step into samples from the current filtering distribution via a sequence of local nonlinear couplings. These couplings are based on transport maps that can be computed quickly using convex optimization\, and that can be enriched in complexity to reduce the intrinsic bias of the EnKF. We discuss the low-dimensional structure inherited by the transport maps from the filtering problem\, including decay of correlations\, conditional independence\, and local likelihoods. We then exploit this structure to regularize the estimation of the maps in high dimensions and with a limited ensemble size. \nWe also present variational methods—again based on transport—for smoothing and sequential parameter estimation in non-Gaussian state-space models. These methods rely on results linking the Markov properties of a target measure to the existence of low-dimensional couplings\, induced by transport maps that are decomposable. The resulting algorithms can be understood as a generalization\, to the non-Gaussian case\, of the square-root Rauch–Tung–Striebel Gaussian smoother. \nThis is joint work with Ricardo Baptista\, Daniele Bigoni\, and Alessio Spantini. 
\nBio: Youssef Marzouk is an associate professor in the Department of Aeronautics and Astronautics at MIT and co-director of the MIT Center for Computational Engineering. He is also director of MIT’s Aerospace Computational Design Laboratory and a member of MIT’s Statistics and Data Science Center. \nHis research interests lie at the intersection of physical modeling with statistical inference and computation. In particular\, he develops methodologies for uncertainty quantification\, inverse problems\, large-scale Bayesian computation\, and optimal experimental design in complex physical systems. His methodological work is motivated by a wide variety of engineering\, environmental\, and geophysics applications. \nHe received his SB\, SM\, and PhD degrees from MIT and spent several years at Sandia National Laboratories before joining the MIT faculty in 2009. He is a recipient of the Hertz Foundation Doctoral Thesis Prize (2004)\, the Sandia Laboratories Truman Fellowship (2004-2007)\, the US Department of Energy Early Career Research Award (2010)\, and the Junior Bose Award for Teaching Excellence from the MIT School of Engineering (2012). He is an Associate Fellow of the AIAA and currently serves on the editorial boards of the SIAM Journal on Scientific Computing\, Advances in Computational Mathematics\, and the SIAM/ASA Journal on Uncertainty Quantification. He is an avid coffee drinker and occasional classical pianist. \n____________________________________ \nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing. 
URL:https://lids.mit.edu/news-and-events/events/lids-seminar-series-youssef-marzouk
LOCATION:32-155
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190426T110000
DTEND;TZID=America/New_York:20190426T120000
DTSTAMP:20260413T055033
CREATED:20190401T154526Z
LAST-MODIFIED:20190423T144817Z
UID:9202-1556276400-1556280000@idss-stage.mit.edu
SUMMARY:Robust Estimation: Optimal Rates\, Computation and Adaptation
DESCRIPTION:Abstract: Chao Gao will discuss the problem of statistical estimation with contaminated data. In the first part of the talk\, he will discuss depth-based approaches that achieve minimax rates in various problems. In general\, the minimax rate of a given problem with contamination consists of two terms: the statistical complexity without contamination\, and the contamination effect in the form of a modulus of continuity. In the second part of the talk\, he will discuss computational challenges of these depth-based estimators. An interesting relation between statistical depth functions and a general f-learning framework will be discussed\, which leads to a computation strategy via minimax optimization in the framework of generative adversarial nets (GANs). Finally\, he will address the problem of adaptive estimation under the contamination model. It turns out adaptive estimation becomes a much harder task with contamination. Besides the classical logarithmic cost of adaptive estimation in some cases\, it can be shown that in certain situations\, adaptation can be completely impossible at any rate. \nBiography: Chao Gao is an assistant professor in statistics at the University of Chicago. He graduated from Yale University\, where his advisor was Harry Zhou. His research lies in nonparametric and high-dimensional statistics\, network analysis\, Bayes theory\, and robust statistics. \nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/chaogao/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190424T150000
DTEND;TZID=America/New_York:20190424T160000
DTSTAMP:20260413T055033
CREATED:20190423T172233Z
LAST-MODIFIED:20190430T195016Z
UID:9425-1556118000-1556121600@idss-stage.mit.edu
SUMMARY:Hierarchical Bayesian Network Model for Probabilistic Estimation of EV Battery Life
DESCRIPTION:Bayesian models are applied to the probabilistic analysis of phenomena that involve multiple external stochastic factors and unmeasurable variables. Considering the large amount of available data on EV driving\, recharging\, and grid services such as solar charging\, which contain uncertainties and measurement errors and have a hierarchical effect on battery life\, Bayesian models can be useful for probabilistic aging analysis. Causality is of utmost importance for batteries\, as their aging is affected by a large number of hierarchical variables that depend upon factors external to the battery. Acknowledging the advantages of Bayesian models\, we propose a hierarchical Bayesian model for probabilistic battery degradation evaluation. Prior distributions are defined based on expert knowledge\, and Markov Chain Monte Carlo (MCMC) sampling is used to draw the posteriors. This modeling approach reflects the uncertainties of measurements and process\, provides more informative results\, and is applicable to any type of input data with proper training. \nBio: Mehdi Jafari (Ph.D. Michigan Technological University\, 2018; M.Sc. University of Tabriz\, 2011; B.Sc. University of Tabriz\, 2008; all in Electrical Engineering) is a postdoctoral associate in the Laboratory for Information and Decision Systems (LIDS) at MIT. He is working on energy storage solutions for power system applications and renewables integration. He has also worked on probabilistic analysis of battery energy storage aging behavior\, especially in electrified transportation and vehicle-to-grid applications. He has authored more than 30 journal and conference papers in the energy storage\, electric vehicle\, renewable energy\, and power system fields. His current research interests include the role of energy storage in renewables integration\, and battery energy storage performance and degradation in power system and transportation electrification applications. 
\n____________________________________ \nTea talks are 20-minute-long informal chalk-talks for the purpose of sharing ideas and making others aware about some of the topics that may be of interest to the LIDS and Stats audience. If you are interested in presenting in the upcoming tea talks\, please email lids_stats_tea@mit.edu.
URL:https://lids.mit.edu/news-and-events/events/hierarchical-bayesian-network-model-probabilistic-estimation-ev-battery-life
LOCATION:32 – LIDS Lounge\, 32 Vassar Street\, Cambridge\, MA\, 02139\, United States
CATEGORIES:LIDS & Stats Tea Talks
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190423T160000
DTEND;TZID=America/New_York:20190423T170000
DTSTAMP:20260413T055033
CREATED:20190301T171624Z
LAST-MODIFIED:20190501T142253Z
UID:8987-1556035200-1556038800@idss-stage.mit.edu
SUMMARY:Memory-Efficient Adaptive Optimization for Humungous-Scale Learning
DESCRIPTION:Adaptive gradient-based optimizers such as AdaGrad and Adam are among the methods of choice in modern machine learning. These methods maintain second-order statistics of each model parameter\, thus doubling the memory footprint of the optimizer. In behemoth-size applications\, this memory overhead restricts the size of the model being used as well as the number of examples in a mini-batch. We describe a novel\, simple\, and flexible adaptive optimization method with sublinear memory cost that retains the benefits of per-parameter adaptivity while allowing for larger models and mini-batches. We give convergence guarantees for our method and demonstrate its effectiveness in training some of the largest deep models used at Google. \nBio: Yoram Singer is the head of Principles Of Effective Machine learning (POEM) research group in Google Brain and a professor of Computer Science at Princeton.  He was a member of the technical staff at AT&T Research from 1995 through 1999 and an associate professor at the Hebrew University from 1999 through 2007. He is a fellow of AAAI. His research on machine learning algorithms received several awards. \n____________________________________ \nThe LIDS Seminar Series features distinguished speakers who provide an overview of a research area\, as well as exciting recent progress in that area. Intended for a broad audience\, seminar topics span the areas of communications\, computation\, control\, learning\, networks\, probability and statistics\, optimization\, and signal processing. 
URL:https://lids.mit.edu/news-and-events/events/lids-seminar-series-yoram-singer
LOCATION:32-G449 (KIva/Patel)
CATEGORIES:LIDS Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190419T110000
DTEND;TZID=America/New_York:20190419T120000
DTSTAMP:20260413T055033
CREATED:20190204T202923Z
LAST-MODIFIED:20190430T195704Z
UID:8822-1555671600-1555675200@idss-stage.mit.edu
SUMMARY:Stochastics and Statistics Seminar Series
DESCRIPTION:Logistic regression is a fundamental task in machine learning and statistics. For the simple case of linear models\, Hazan et al. (2014) showed that any logistic regression algorithm that estimates model weights from samples must exhibit exponential dependence on the weight magnitude. As an alternative\, we explore a counterintuitive technique called improper learning\, whereby one estimates a linear model by fitting a non-linear model. Past success stories for improper learning have focused on cases where it can improve computational complexity. Surprisingly\, we show that for sample complexity (the number of examples needed to achieve a desired accuracy level)\, improper learning leads to a doubly-exponential improvement in dependence on weight magnitude over estimation of model weights\, and more broadly over any so-called “proper” learning algorithm. This provides a positive resolution to a COLT 2012 open problem of McMahan and Streeter. As a consequence of this improvement\, we also resolve two open problems on the sample complexity of boosting and bandit multi-class classification. \nDylan Foster is a postdoctoral researcher at the MIT Institute for Foundations of Data Science. In 2018 he received his PhD in computer science at Cornell University\, advised by Karthik Sridharan. His research focuses on theory for machine learning in real-world settings. He is particularly interested in all aspects of generalization theory\, especially as it applies to deep learning\, non-convex optimization\, and interactive learning problems including online and bandit learning. Dylan previously received his BS and MS in Electrical Engineering from USC in 2014. He has received awards including the NDSEG PhD fellowship\, Facebook PhD fellowship\, and a best student paper award at COLT. \nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/dylanfoster/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
END:VCALENDAR