BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IDSS STAGE - ECPv6.15.11//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:IDSS STAGE
X-ORIGINAL-URL:https://idss-stage.mit.edu
X-WR-CALDESC:Events for IDSS STAGE
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20180311T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20181104T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20190310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20191103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200508T110000
DTEND;TZID=America/New_York:20200508T120000
DTSTAMP:20260517T025907Z
CREATED:20200108T203459Z
LAST-MODIFIED:20200108T205206Z
UID:11560-1588935600-1588939200@idss-stage.mit.edu
SUMMARY:TBD
DESCRIPTION:TBD
URL:https://stat.mit.edu/calendar/ben-arous2020/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200501T110000
DTEND;TZID=America/New_York:20200501T120000
DTSTAMP:20260517T025907Z
CREATED:20200108T203919Z
LAST-MODIFIED:20200121T195044Z
UID:11563-1588330800-1588334400@idss-stage.mit.edu
SUMMARY:TBD
DESCRIPTION:TBD
URL:https://stat.mit.edu/calendar/tbd-10/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200424T110000
DTEND;TZID=America/New_York:20200424T120000
DTSTAMP:20260517T025907Z
CREATED:20200108T202325Z
LAST-MODIFIED:20200109T142402Z
UID:11557-1587726000-1587729600@idss-stage.mit.edu
SUMMARY:TBD
DESCRIPTION:TBD
URL:https://stat.mit.edu/calendar/wellner2020/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200417T110000
DTEND;TZID=America/New_York:20200417T120000
DTSTAMP:20260517T025907Z
CREATED:20200108T201326Z
LAST-MODIFIED:20200108T201422Z
UID:11555-1587121200-1587124800@idss-stage.mit.edu
SUMMARY:TBD
DESCRIPTION:TBD
URL:https://stat.mit.edu/calendar/arias-castro2020/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200410T110000
DTEND;TZID=America/New_York:20200410T120000
DTSTAMP:20260517T025907Z
CREATED:20200108T200821Z
LAST-MODIFIED:20200108T204923Z
UID:11553-1586516400-1586520000@idss-stage.mit.edu
SUMMARY:TBD
DESCRIPTION:TBD
URL:https://stat.mit.edu/calendar/rinaldo2020/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200320T110000
DTEND;TZID=America/New_York:20200320T120000
DTSTAMP:20260517T025907Z
CREATED:20200108T192919Z
LAST-MODIFIED:20200108T192919Z
UID:11551-1584702000-1584705600@idss-stage.mit.edu
SUMMARY:TBD
DESCRIPTION:TBD
URL:https://stat.mit.edu/calendar/finucane2020/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200313T110000
DTEND;TZID=America/New_York:20200313T120000
DTSTAMP:20260517T025907Z
CREATED:20200108T192614Z
LAST-MODIFIED:20200108T192614Z
UID:11549-1584097200-1584100800@idss-stage.mit.edu
SUMMARY:TBD
DESCRIPTION:TBD
URL:https://stat.mit.edu/calendar/spielman2020/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200306T110000
DTEND;TZID=America/New_York:20200306T120000
DTSTAMP:20260517T025907Z
CREATED:20200108T190607Z
LAST-MODIFIED:20200108T204744Z
UID:11547-1583492400-1583496000@idss-stage.mit.edu
SUMMARY:TBD
DESCRIPTION:TBD
URL:https://stat.mit.edu/calendar/gunasekar2020/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200228T110000
DTEND;TZID=America/New_York:20200228T120000
DTSTAMP:20260517T025907Z
CREATED:20200108T185358Z
LAST-MODIFIED:20200108T203703Z
UID:11545-1582887600-1582891200@idss-stage.mit.edu
SUMMARY:TBD
DESCRIPTION:TBD
URL:https://stat.mit.edu/calendar/ramanan2020/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200221T110000
DTEND;TZID=America/New_York:20200221T120000
DTSTAMP:20260517T025907Z
CREATED:20200108T155803Z
LAST-MODIFIED:20200108T155803Z
UID:11536-1582282800-1582286400@idss-stage.mit.edu
SUMMARY:TBD
DESCRIPTION:TBD
URL:https://stat.mit.edu/calendar/barber2020/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200214T110000
DTEND;TZID=America/New_York:20200214T120000
DTSTAMP:20260517T025907Z
CREATED:20200108T154414Z
LAST-MODIFIED:20200127T154235Z
UID:11534-1581678000-1581681600@idss-stage.mit.edu
SUMMARY:Diffusion K-means Clustering on Manifolds: provable exact recovery via semidefinite relaxations
DESCRIPTION:Abstract: We introduce the diffusion K-means clustering method on Riemannian submanifolds\, which maximizes the within-cluster connectedness based on the diffusion distance. The diffusion K-means constructs a random walk on the similarity graph with vertices as data points randomly sampled on the manifolds and edges as similarities given by a kernel that captures the local geometry of manifolds. Thus the diffusion K-means is a multi-scale clustering tool that is suitable for data with non-linear and non-Euclidean geometric features in mixed dimensions. Given the number of clusters\, we propose a polynomial-time convex relaxation algorithm via semidefinite programming (SDP) to solve the diffusion K-means. In addition\, we propose a nuclear norm (i.e.\, trace norm) regularized SDP that is adaptive to the number of clusters. In both cases\, we show that exact recovery of the SDPs for diffusion K-means can be achieved under suitable between-cluster separability and within-cluster connectedness of the submanifolds\, which together quantify the hardness of the manifold clustering problem. We further propose the localized diffusion K-means by using the local adaptive bandwidth estimated from the nearest neighbors. We show that exact recovery of the localized diffusion K-means is fully adaptive to the local probability density and geometric structures of the underlying submanifolds. \nBio: Xiaohui Chen received a Ph.D. in Electrical and Computer Engineering in 2013 from the University of British Columbia (UBC)\, Vancouver\, Canada. He was a post-doctoral fellow at the Toyota Technological Institute at Chicago (TTIC)\, a philanthropically endowed academic computer science institute located on the University of Chicago campus. In 2013 he joined the University of Illinois at Urbana-Champaign (UIUC) as an Assistant Professor of Statistics\, and since 2019 he has been an Associate Professor of Statistics at UIUC.
In 2019-2020 he is visiting the Institute for Data\, Systems\, and Society (IDSS) at the Massachusetts Institute of Technology (MIT). He has received numerous notable awards\, including an NSF CAREER Award in 2018\, an Arnold O. Beckman Award at UIUC in 2018\, an ICSA Outstanding Young Researcher Award in 2019\, an Associate appointment in the Center for Advanced Study at UIUC in 2020-2021\, and a Simons Fellowship in Mathematics from the Simons Foundation in 2020-2021.
URL:https://stat.mit.edu/calendar/chen2020
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200207T110000
DTEND;TZID=America/New_York:20200207T120000
DTSTAMP:20260517T025907Z
CREATED:20200108T153346Z
LAST-MODIFIED:20200113T195921Z
UID:11531-1581073200-1581076800@idss-stage.mit.edu
SUMMARY:Gaussian Differential Privacy\, with Applications to Deep Learning
DESCRIPTION:Abstract: \nPrivacy-preserving data analysis has been put on a firm mathematical foundation since the introduction of differential privacy (DP) in 2006. This privacy definition\, however\, has some well-known weaknesses: notably\, it does not tightly handle composition. This weakness has inspired several recent relaxations of differential privacy based on the Renyi divergences. We propose an alternative relaxation we term “f-DP”\, which has a number of nice properties and avoids some of the difficulties associated with divergence based relaxations. First\, f-DP preserves the hypothesis testing interpretation of differential privacy\, which makes its guarantees easily interpretable. It allows for lossless reasoning about composition and post-processing\, and notably\, a direct way to analyze privacy amplification by subsampling. We define a canonical single-parameter family of definitions within our class that is termed “Gaussian Differential Privacy”\, based on hypothesis testing of two shifted normal distributions. We prove that this family is focal to f-DP by introducing a central limit theorem\, which shows that the privacy guarantees of any hypothesis-testing based definition of privacy (including differential privacy) converge to Gaussian differential privacy in the limit under composition. This central limit theorem also gives a tractable analysis tool. We demonstrate the use of the tools we develop by giving an improved analysis of the privacy guarantees of noisy stochastic gradient descent. This is joint work with Jinshuo Dong and Aaron Roth. \nBiography: \nWeijie Su is an Assistant Professor of Statistics at the Wharton School\, University of Pennsylvania. He is an associated faculty of the Applied Mathematics and Computational Science program at the University of Pennsylvania and a co-director of Penn Research in Machine Learning. Prior to joining Penn\, he received his Ph.D. in Statistics from Stanford University in 2016. 
His research interests span machine learning\, mathematical statistics\, private data analysis\, large-scale optimization\, and multiple hypothesis testing. He is a recipient of the Theodore Anderson Dissertation Award in Theoretical Statistics in 2016 and the NSF CAREER Award in 2019.
URL:https://stat.mit.edu/calendar/su2020
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191206T110000
DTEND;TZID=America/New_York:20191206T120000
DTSTAMP:20260517T025907Z
CREATED:20191017T134413Z
LAST-MODIFIED:20191112T204103Z
UID:10988-1575630000-1575633600@idss-stage.mit.edu
SUMMARY:Inferring the Evolutionary History of Tumors
DESCRIPTION:Abstract: \nBulk sequencing of tumor DNA is a popular strategy for uncovering information about the spectrum of mutations arising in the tumor\, and is often supplemented by multi-region sequencing\, which provides a view of tumor heterogeneity. The statistical issues arise from the fact that bulk sequencing makes the determination of sub-clonal frequencies\, and other quantities of interest\, difficult. In this talk I will discuss this problem\, beginning with its setting in population genetics. The data provide an estimate of the site frequency spectrum (SFS) of the mutations in the tumor\, which is used as the basis for inference. I will describe how Approximate Bayesian Computation can be used for inference in problems like this one in which likelihoods are intractable. I will also describe a model for selective clonal sweeps that estimates the number of subclones that have arisen in the tumor; here the inference is based on a method of moments using the SFS. Time permitting\, I will describe some novel experimental methods we are developing to understand the 3D structure of tumors\, paving the way for some challenging inferential problems that will require engagement from data scientists and others. \nBiography: \nSimon Tavaré joined Columbia University in 2018 as the Herbert and Florence Irving Director of the Irving Institute for Cancer Dynamics and a professor in the Departments of Statistics and Biological Sciences. From 1978 to 2003\, Simon worked in the USA and from 2003\, he was a professor in the Department of Applied Mathematics and Theoretical Physics and the Department of Oncology at the University of Cambridge\, England. From February 2013 to January 2018\, he was director of the Cancer Research UK Cambridge Institute\, which had become a department of the University of Cambridge in January 2013. 
His research focuses on statistical bioinformatics and computational biology\, particularly evolutionary approaches to understanding cancer biology. Dr. Tavaré is an elected fellow of the Academy of Medical Sciences and of the Royal Society\, and a member of the European Molecular Biology Organization. He was president of the London Mathematical Society from 2015 to 2017 and was elected a fellow of the American Mathematical Society and a foreign associate of the U.S. National Academy of Sciences in 2018.
URL:https://stat.mit.edu/calendar/tavare/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191122T110000
DTEND;TZID=America/New_York:20191122T120000
DTSTAMP:20260517T025907Z
CREATED:20191017T134223Z
LAST-MODIFIED:20191115T204645Z
UID:10986-1574420400-1574424000@idss-stage.mit.edu
SUMMARY:Automated Data Summarization for Scalability in Bayesian Inference
DESCRIPTION:Abstract: \nMany algorithms take prohibitively long to run on modern\, large data sets. But even in complex data sets\, many data points may be at least partially redundant for some task of interest. So one might instead construct and use a weighted subset of the data (called a “coreset”) that is much smaller than the original dataset. Typically\, running algorithms on a much smaller data set will take much less computing time\, but it remains to understand whether the output can be widely useful. (1) In particular\, can running an analysis on a smaller coreset yield answers close to those from running on the full data set? (2) And can useful coresets be constructed automatically for new analyses\, with minimal extra work from the user? We answer in the affirmative for a wide variety of problems in Bayesian inference. We demonstrate how to construct “Bayesian coresets” as an automatic\, practical pre-processing step. We prove that our method provides geometric decay in relevant approximation error as a function of coreset size. Empirical analysis shows that our method reduces approximation error by orders of magnitude relative to uniform random subsampling of data. Though we focus on Bayesian methods here\, we also show that our construction can be applied in other domains. \nBiography: \nTamara Broderick is an Associate Professor in the Department of Electrical Engineering and Computer Science at MIT. She is a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL)\, the MIT Statistics and Data Science Center\, and the Institute for Data\, Systems\, and Society (IDSS). She completed her Ph.D. in Statistics at the University of California\, Berkeley in 2014.
Previously\, she received an AB in Mathematics from Princeton University (2007)\, a Master of Advanced Study for completion of Part III of the Mathematical Tripos from the University of Cambridge (2008)\, an MPhil by research in Physics from the University of Cambridge (2009)\, and an MS in Computer Science from the University of California\, Berkeley (2013). Her recent research has focused on developing and analyzing models for scalable Bayesian machine learning. She has been awarded an AISTATS Notable Paper Award (2019)\, an NSF CAREER Award (2018)\, a Sloan Research Fellowship (2018)\, an Army Research Office Young Investigator Program award (2017)\, Google Faculty Research Awards\, an Amazon Research Award\, the ISBA Lifetime Members Junior Researcher Award\, the Savage Award (for an outstanding doctoral dissertation in Bayesian theory and methods)\, the Evelyn Fix Memorial Medal and Citation (for the Ph.D. student on the Berkeley campus showing the greatest promise in statistical research)\, the Berkeley Fellowship\, an NSF Graduate Research Fellowship\, a Marshall Scholarship\, and the Phi Beta Kappa Prize (for the graduating Princeton senior with the highest academic average). \n–\nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/broderick/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191115T110000
DTEND;TZID=America/New_York:20191115T120000
DTSTAMP:20260517T025907Z
CREATED:20191017T134056Z
LAST-MODIFIED:20191108T190943Z
UID:10984-1573815600-1573819200@idss-stage.mit.edu
SUMMARY:Understanding machine learning with statistical physics
DESCRIPTION:Abstract: \nThe affinity between statistical physics and machine learning has a long history; this is reflected even in the machine learning terminology that is in part adopted from physics. Current theoretical challenges and open questions about deep learning and statistical learning call for a unified account of the following three ingredients: (a) the dynamics of the learning algorithm\, (b) the architecture of the neural networks\, and (c) the structure of the data. Most existing theories do not take all three of these aspects into account in a satisfactory manner. In this talk I will describe some of the results stemming from statistical physics applied to machine learning and how it does include the three ingredients\, although in a very simplified manner. Then I will focus on the current results improving our modelling in each of the three aspects\, covering recent articles [1-4]. \n[1] Aubin\, B.\, Maillard\, A.\, Krzakala\, F.\, Macris\, N.\, & Zdeborová\, L.; The committee machine: Computational to statistical gaps in learning a two-layers neural network. NeurIPS’18.\n[2] Sarao Mannelli\, S.\, Biroli\, G.\, Cammarota\, C.\, Krzakala\, F.\, & Zdeborová\, L.; Who is Afraid of Big Bad Minima? Analysis of Gradient-Flow in a Spiked Matrix-Tensor Model. NeurIPS’19.\n[3] Aubin\, B.\, Loureiro\, B.\, Maillard\, A.\, Krzakala\, F.\, & Zdeborová\, L.; The spiked matrix model with generative priors. NeurIPS’19.\n[4] Goldt\, S.\, Mézard\, M.\, Krzakala\, F.\, & Zdeborová\, L.; Modelling the influence of data structure on learning in neural networks. Preprint arXiv:1909.11500. \nBiography: \nLenka Zdeborová is a researcher at CNRS working in the Institute of Theoretical Physics in CEA Saclay\, France. She received a PhD in physics from University Paris-Sud and from Charles University in Prague in 2008. She spent two years at the Los Alamos National Laboratory as the Director’s Postdoctoral Fellow.
In 2014\, she was awarded the CNRS bronze medal\, in 2016 the Philippe Meyer Prize in theoretical physics and an ERC Starting Grant\, and in 2018 the Irène Joliot-Curie Prize. She is an editorial board member for Journal of Physics A\, Physical Review E\, and Physical Review X. Lenka’s expertise is in applications of methods developed in statistical physics\, such as advanced mean-field methods\, the replica method\, and related message-passing algorithms\, to problems in machine learning\, signal processing\, inference\, and optimization.
URL:https://stat.mit.edu/calendar/zdeborova/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191108T110000
DTEND;TZID=America/New_York:20191108T120000
DTSTAMP:20260517T025907Z
CREATED:20191017T133140Z
LAST-MODIFIED:20191104T140845Z
UID:10982-1573210800-1573214400@idss-stage.mit.edu
SUMMARY:SDP Relaxation for Learning Discrete Structures: Optimal Rates\, Hidden Integrality\, and Semirandom Robustness
DESCRIPTION:Abstract:\n\nWe consider the problems of learning discrete structures from network data under statistical settings. Popular examples include various block models\, Z2 synchronization\, and mixture models. Semidefinite programming (SDP) relaxation has emerged as a versatile and robust approach to these problems. We show that despite being a relaxation\, SDP achieves the optimal Bayes error rate in terms of distance to the target solution. Moreover\, SDP relaxation is provably robust under the so-called semirandom model\, which frustrates many existing algorithms. Our proof involves a novel primal-dual analysis that establishes what we call the hidden integrality property: the SDP relaxation tightly approximates the optimal (yet unimplementable) integer programs with oracle information.\n\nJoint work with Yingjie Fei (Cornell Ph.D.)\, who won 2nd place in the INFORMS Nicholson Student Paper Competition.\n\nBio: Yudong Chen is an assistant professor at the School of Operations Research and Information Engineering (ORIE)\, Cornell University. Before joining Cornell\, he was a postdoctoral scholar in the Department of Electrical Engineering and Computer Sciences at the University of California\, Berkeley. He obtained his Ph.D. in Electrical and Computer Engineering from the University of Texas at Austin\, and his M.S. and B.S. from Tsinghua University. His research interests include machine learning\, high-dimensional and robust statistics\, convex and non-convex optimization\, and applications in communication and computer networks.
URL:https://stat.mit.edu/calendar/chen/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191029T110000
DTEND;TZID=America/New_York:20191029T120000
DTSTAMP:20260517T025907Z
CREATED:20191022T130053Z
LAST-MODIFIED:20191028T165950Z
UID:11020-1572346800-1572350400@idss-stage.mit.edu
SUMMARY:Communicating uncertainty about facts\, numbers and science
DESCRIPTION:The claim of a ‘post-truth’ society\, in which emotional responses trump balanced consideration of evidence\, presents a strong challenge to those who value quantitative and scientific evidence: how can we communicate risks and unavoidable scientific uncertainty in a transparent and trustworthy way? \nCommunication of quantifiable risks has been well-studied\, leading to recommendations for using an expected frequency format. But deeper uncertainty about facts\, numbers\, or scientific hypotheses needs to be communicated without losing trust and credibility. This is an empirically researchable issue\, and I shall describe some current randomised experiments concerning the impact on audiences of alternative verbal\, numerical and graphical means of communicating uncertainty. \nAvailable evidence may often not permit a quantitative assessment of uncertainty\, and I will also examine scales being used to summarise degrees of ‘confidence’ in conclusions\, in terms of the quality of the research underlying the whole assessment. \nAbout the speaker: Professor Sir David Spiegelhalter is Chair of the Winton Centre for Risk and Evidence Communication in the University of Cambridge\, which aims to improve the way that statistical evidence is used by health professionals\, patients\, lawyers and judges\, media and policy-makers. He advises organisations and government agencies on risk communication and is a regular media commentator on statistical issues\, with a particular focus on communicating uncertainty. His background is in medical statistics\, and he has over 200 refereed publications and is co-author of 6 textbooks\, as well as The Norm Chronicles (with Michael Blastland)\, and Sex by Numbers. He works extensively with the media\, and presented the BBC4 documentaries “Tails you Win: the Science of Chance”\, the award-winning “Climate Change by Numbers”\, and in 2011 came 7th in an episode of BBC1’s Winter Wipeout.
He was elected Fellow of the Royal Society in 2005\, and knighted in 2014 for services to medical statistics. He was President of the Royal Statistical Society for 2017-2018. His bestselling book\, The Art of Statistics\, was published in March 2019. He is @d_spiegel on Twitter\, and his homepage is http://www.statslab.cam.ac.uk/~david/.
URL:https://idss-stage.mit.edu/calendar/communicating-uncertainty-about-facts-numbers-and-science/
LOCATION:32-D643
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191025T110000
DTEND;TZID=America/New_York:20191025T120000
DTSTAMP:20260517T025907Z
CREATED:20191017T132846Z
LAST-MODIFIED:20191021T134432Z
UID:10980-1572001200-1572004800@idss-stage.mit.edu
SUMMARY:Accurate Simulation-Based Parametric Inference in High Dimensional Settings
DESCRIPTION:Abstract: \nAccurate estimation and inference in finite samples are important for decision making in many experimental and social fields\, especially when the available data are complex: for example\, when they include mixed types of measurements\, are dependent in several ways\, or contain missing data\, outliers\, etc. Indeed\, the more complex the data (and hence the models)\, the less accurate asymptotic theory results are in finite samples. This is in particular the case\, for example\, with logistic regression\, possibly with random effects to account for the dependence structure between the outcomes\, or more generally\, when the likelihood function or the estimating equations have no closed-form expression. Moreover\, resampling techniques such as the bootstrap can also be quite inaccurate in these settings\, unless (complex) corrections are provided. We propose instead a simulation-based method\, the Iterative Bootstrap (IB)\, that can be used\, very generally\, to obtain a) unbiased estimators in high dimensional settings\, and b) finite sample distributions for inference\, with\, under suitable conditions\, the exact probability coverage property. The method is based on an initial estimator that does not need to be consistent and can hence be chosen for numerical convenience\, and/or can have some desirable properties such as robustness. We present the main theoretical results and the relationships with well-established methods\, as well as simulation studies involving complex models and different estimators. \nAbout the Speaker: \nMaria-Pia Victoria-Feser is currently professor of statistics at the Geneva School of Economics and Management\, University of Geneva\, Switzerland. She received her Ph.D. in econometrics and statistics from the University of Geneva\, and started her career as a lecturer at the London School of Economics. She was awarded the Latzis International Prize for her Ph.D.
thesis\, as well as doctoral and professorial fellowships from the Swiss National Science Foundation. \nMaria-Pia Victoria-Feser’s research interests are in statistical methodology (robust statistics\, model selection and simulation based inference in high dimensions for complex models) with applications in economics (welfare economics\, extremes)\, psychology and social sciences (generalized linear latent variable models)\, and engineering (time series for geo-localization). She has published in leading journals in statistics as well as in related fields.
URL:https://stat.mit.edu/calendar/victoria-feser/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191018T110000
DTEND;TZID=America/New_York:20191018T120000
DTSTAMP:20260517T025907Z
CREATED:20191015T180210Z
LAST-MODIFIED:20191015T180614Z
UID:10971-1571396400-1571400000@idss-stage.mit.edu
SUMMARY:Towards Robust Statistical Learning Theory
DESCRIPTION:Abstract: \nReal-world data typically do not fit statistical models or satisfy assumptions underlying the theory exactly\, hence reducing the number and strictness of these assumptions helps to lessen the gap between the “mathematical” world and the “real” world. The concept of robustness\, in particular\, robustness to outliers\, plays the central role in understanding this gap. The goal of the talk is to introduce the principles and robust algorithms based on these principles that can be applied in the general framework of statistical learning theory. These algorithms avoid explicit (and often bias-producing) outlier detection and removal\, instead taking advantage of induced symmetries in the distribution of the data. \nI will discuss uniform deviation bounds for the mean estimators of heavy-tailed distributions and applications of these bounds to robust empirical risk minimization. \nImplications of proposed techniques for logistic regression and regression with quadratic loss will be highlighted. \nThis talk is partially based on a joint work with Timothée Mathieu. \nBiography: \nStanislav Minsker is currently an Assistant Professor in the Department of Mathematics at the University of Southern California. He received B.Sc. in Mathematics from the Novosibirsk State University in 2007 and Ph.D. in Mathematics from the Georgia Institute of Technology in 2012. Prior to joining USC\, he was a Visiting Assistant Professor at Duke University and worked in Quantitative Analytics at Wells Fargo Securities. His main research interests are in the areas of statistical learning theory\, robust statistics\, and concentration of measure inequalities.
URL:https://stat.mit.edu/calendar/minsker/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191011T110000
DTEND;TZID=America/New_York:20191011T120000
DTSTAMP:20260517T025907Z
CREATED:20190923T173105Z
LAST-MODIFIED:20190926T135551Z
UID:10860-1570791600-1570795200@idss-stage.mit.edu
SUMMARY:The Planted Matching Problem
DESCRIPTION:Abstract:\n\nWhat happens when an optimization problem has a good solution built into it that is partly obscured by randomness? Here we revisit a classic polynomial-time problem\, the minimum perfect matching problem on bipartite graphs. If the edges have random weights in [0\,1]\, Mézard and Parisi — and then Aldous\, rigorously — showed that the minimum matching has expected weight zeta(2) = pi^2/6. We consider a “planted” version where a particular matching has weights drawn from an exponential distribution with mean mu/n. When mu < 1/4\, the minimum matching is almost identical to the planted one. When mu > 1/4\, the overlap between the two is given by a system of differential equations that result from a message-passing algorithm. This is joint work with Mehrdad Moharrami (Michigan) and Jiaming Xu (Duke).\n\nBiography:\n\nCristopher Moore received his B.A. in Physics\, Mathematics\, and Integrated Science from Northwestern University\, and his Ph.D. in Physics from Cornell. From 2000 to 2012 he was a professor at the University of New Mexico\, with joint appointments in Computer Science and Physics. Since 2012\, Moore has been a resident professor at the Santa Fe Institute; he has also held visiting positions at École Normale Supérieure\, École Polytechnique\, Université Paris 7\, the Niels Bohr Institute\, Northeastern University\, and the University of Michigan. He has published over 150 papers at the boundary between physics and computer science\, ranging from quantum computing\, to phase transitions in NP-complete problems\, to the theory of social networks and efficient algorithms for analyzing their structure. He is an elected Fellow of the American Physical Society\, the American Mathematical Society\, and the American Association for the Advancement of Science.
With Stephan Mertens\, he is the author of The Nature of Computation from Oxford University Press.\n\n\n\n–\n\n\n\nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/moore/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190927T110000
DTEND;TZID=America/New_York:20190927T120000
DTSTAMP:20260517T025907
CREATED:20190923T172454Z
LAST-MODIFIED:20191016T163112Z
UID:10858-1569582000-1569585600@idss-stage.mit.edu
SUMMARY:Frontiers of Efficient Neural-Network Learnability
DESCRIPTION:Abstract:  \nWhat are the most expressive classes of neural networks that can be learned\, provably\, in polynomial time in a distribution-free setting? In this talk we give the first efficient algorithm for learning neural networks with two nonlinear layers using tools for solving isotonic regression\, a nonconvex (but tractable) optimization problem. If we further assume the distribution is symmetric\, we obtain the first efficient algorithm for recovering the parameters of a one-layer convolutional network. These results implicitly make use of a convex surrogate loss for generalized linear models and go beyond the kernel-method/overparameterization regime used in recent works.\n\nBiography:  \nAdam Klivans is a professor of computer science at the University of Texas at Austin who works in theoretical computer science and machine learning. He completed his doctorate in mathematics from MIT\, where he was awarded the Charles W. and Jennifer C. Johnson Prize. \nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/frontiers/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190920T110000
DTEND;TZID=America/New_York:20190920T120000
DTSTAMP:20260517T025907
CREATED:20190910T191447Z
LAST-MODIFIED:20191016T163208Z
UID:10670-1568977200-1568980800@idss-stage.mit.edu
SUMMARY:Some New Insights On Transfer Learning
DESCRIPTION:Abstract:  \nThe problem of transfer and domain adaptation is ubiquitous in machine learning and concerns situations where predictive technologies\, trained on a given source dataset\, have to be transferred to a new target domain that is somewhat related. For example\, one might transfer a voice-recognition system trained on American English accents to Scottish accents with minimal retraining. A first challenge is to understand how to properly model the ‘distance’ between source and target domains\, viewed as probability distributions over a feature space.\n\nIn this talk we will argue that various existing notions of distance between distributions turn out to be pessimistic\, i.e.\, these distances might appear high in many situations where transfer is possible\, even at fast rates. Instead we show that some new notions of distance tightly capture a continuum from easy to hard transfer\, and furthermore can be adapted to\, i.e.\, they do not need to be estimated in order to achieve near-optimal transfer. Finally we will discuss near-optimal approaches to minimizing sampling of target data (e.g. sampling Scottish speech)\, when one already has access to a given amount of source data (e.g. American speech).\n\nThis talk is based on joint work with G. Martinet\, and ongoing work with S. Hanneke.\n\nBiography:  \nSamory Kpotufe is an Associate Professor in Statistics at Columbia University. He works in machine learning\, with an emphasis on nonparametric methods and high-dimensional statistics. Generally\, his interests are in understanding basic learning scenarios under practical constraints from modern application domains. He has previously held positions at the Max Planck Institute in Germany\, the Toyota Technological Institute at Chicago\, and Princeton University. \nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://idss-stage.mit.edu/calendar/some-new-insights-on-transfer-learning/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190906T110000
DTEND;TZID=America/New_York:20190906T120000
DTSTAMP:20260517T025907
CREATED:20190903T150512Z
LAST-MODIFIED:20190903T152812Z
UID:10580-1567767600-1567771200@idss-stage.mit.edu
SUMMARY:GANs\, Optimal Transport\, and Implicit Density Estimation
DESCRIPTION:Abstract:  \nWe first study the rate of convergence for learning distributions with the adversarial framework and Generative Adversarial Networks (GANs)\, which subsumes Wasserstein\, Sobolev\, and MMD GANs as special cases. We study a wide range of parametric and nonparametric target distributions\, under a collection of objective evaluation metrics. On the nonparametric end\, we investigate the minimax optimal rates and fundamental difficulty of implicit density estimation under the adversarial framework. On the parametric end\, we establish a theory for general neural network classes that characterizes the interplay between the choices of generator and discriminator. We investigate how to obtain a good statistical guarantee for GANs through the lens of regularization. We discover and isolate a new notion of regularization\, called generator/discriminator pair regularization\, that sheds light on the advantage of GANs compared to classical approaches to density estimation. We develop novel oracle inequalities as the main tools for analyzing GANs\, which are of independent theoretical interest. \nLater\, we proceed to discuss optimal transport\, estimation under the Wasserstein metric\, and how to use these tools for implicit density estimation. We will point out an interesting connection between pair regularization and optimal transport.\n\nBiography: \nDr. Liang is an assistant professor at Chicago Booth. He is also the George C. Tiao faculty fellow in data science research. His current research interests include computational and algorithmic aspects of statistical inference\, machine learning and statistical learning theory\, and stochastic methods in non-convex optimization. \nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/liang/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190510T080000
DTEND;TZID=America/New_York:20190510T170000
DTSTAMP:20260517T025907
CREATED:20190204T204606Z
LAST-MODIFIED:20190307T163046Z
UID:8832-1557475200-1557507600@idss-stage.mit.edu
SUMMARY:Counting and sampling at low temperatures
DESCRIPTION:Abstract: \nWe consider the problem of efficient sampling from the hard-core and Potts models from statistical physics. On certain families of graphs\, phase transitions in the underlying physics model are linked to changes in the performance of some sampling algorithms\, including Markov chains. We develop new sampling and counting algorithms that exploit the phase transition phenomenon and work efficiently on lattices (and bipartite expander graphs) at sufficiently low temperatures in the phase coexistence regime. Our algorithms are based on Pirogov-Sinai theory and the cluster expansion\, classical tools from statistical physics. Joint work with Tyler Helmuth and Guus Regts. \n Biography: \nWill Perkins is an assistant professor in the Department of Mathematics\, Statistics\, and Computer Science at the University of Illinois at Chicago. His research interests are in probability\, combinatorics\, and algorithms. He received his PhD in 2011 from New York University\, then was a postdoc at Georgia Tech and faculty at the University of Birmingham before moving to UIC in 2018. \nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/tbd-willperkins/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190503T110000
DTEND;TZID=America/New_York:20190503T120000
DTSTAMP:20260517T025907
CREATED:20190204T203624Z
LAST-MODIFIED:20190206T173354Z
UID:8827-1556881200-1556884800@idss-stage.mit.edu
SUMMARY:Stochastics and Statistics Seminar Series
DESCRIPTION:
URL:https://stat.mit.edu/calendar/tbd-tracyke/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190426T110000
DTEND;TZID=America/New_York:20190426T120000
DTSTAMP:20260517T025907
CREATED:20190401T154526Z
LAST-MODIFIED:20190423T144817Z
UID:9202-1556276400-1556280000@idss-stage.mit.edu
SUMMARY:Robust Estimation: Optimal Rates\, Computation and Adaptation
DESCRIPTION:Abstract: Chao Gao will discuss the problem of statistical estimation with contaminated data. In the first part of the talk\, he will discuss depth-based approaches that achieve minimax rates in various problems. In general\, the minimax rate of a given problem with contamination consists of two terms: the statistical complexity without contamination\, and the contamination effect in the form of a modulus of continuity. In the second part of the talk\, he will discuss the computational challenges of these depth-based estimators. An interesting relation between statistical depth functions and a general f-learning framework will be discussed\, which leads to a computation strategy via minimax optimization in the framework of generative adversarial nets (GANs). Finally\, he will address the problem of adaptive estimation under the contamination model. It turns out that adaptive estimation becomes a much harder task with contamination. Besides the classical logarithmic cost of adaptive estimation in some cases\, it can be shown that in certain situations\, adaptation can be completely impossible at any rate. \nBiography: Chao Gao is an assistant professor in statistics at the University of Chicago. He graduated from Yale University\, where he was advised by Harry Zhou. His research lies in nonparametric and high-dimensional statistics\, network analysis\, Bayes theory\, and robust statistics. \nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/chaogao/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190419T110000
DTEND;TZID=America/New_York:20190419T120000
DTSTAMP:20260517T025907
CREATED:20190204T202923Z
LAST-MODIFIED:20190430T195704Z
UID:8822-1555671600-1555675200@idss-stage.mit.edu
SUMMARY:Stochastics and Statistics Seminar Series
DESCRIPTION:Logistic regression is a fundamental task in machine learning and statistics. For the simple case of linear models\, Hazan et al. (2014) showed that any logistic regression algorithm that estimates model weights from samples must exhibit exponential dependence on the weight magnitude. As an alternative\, we explore a counterintuitive technique called improper learning\, whereby one estimates a linear model by fitting a non-linear model. Past success stories for improper learning have focused on cases where it can improve computational complexity. Surprisingly\, we show that for sample complexity (the number of examples needed to achieve a desired accuracy level)\, improper learning leads to a doubly-exponential improvement in dependence on weight magnitude over estimation of model weights\, and more broadly over any so-called “proper” learning algorithm. This provides a positive resolution to a COLT 2012 open problem of McMahan and Streeter. As a consequence of this improvement\, we also resolve two open problems on the sample complexity of boosting and bandit multi-class classification. \nDylan Foster is a postdoctoral researcher at the MIT Institute for Foundations of Data Science. In 2018 he received his PhD in computer science at Cornell University\, advised by Karthik Sridharan. His research focuses on theory for machine learning in real-world settings. He is particularly interested in all aspects of generalization theory\, especially as it applies to deep learning\, non-convex optimization\, and interactive learning problems including online and bandit learning. Dylan previously received his BS and MS in Electrical Engineering from USC in 2014. He has received awards including the NDSEG PhD fellowship\, the Facebook PhD fellowship\, and the best student paper award at COLT. \nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/dylanfoster/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190412T110000
DTEND;TZID=America/New_York:20190412T120000
DTSTAMP:20260517T025907
CREATED:20190204T202500Z
LAST-MODIFIED:20190206T173126Z
UID:8820-1555066800-1555070400@idss-stage.mit.edu
SUMMARY:Exponential line-crossing inequalities
DESCRIPTION:Abstract: \nThis talk will present a class of exponential bounds for the probability that a martingale sequence crosses a time-dependent linear threshold. Our key insight is that it is both natural and fruitful to formulate exponential concentration inequalities in this way. We will illustrate this point by presenting a single assumption and a single theorem that together strengthen many tail bounds for martingales\, including classical inequalities (1960-80) by Bernstein\, Bennett\, Hoeffding\, and Freedman; contemporary inequalities (1980-2000) by Shorack and Wellner\, Pinelis\, Blackwell\, van de Geer\, and de la Peña; and several modern inequalities (post-2000) by Khan\, Tropp\, Bercu and Touati\, Delyon\, and others. In each of these cases\, we give the strongest and most general statements to date\, quantifying the time-uniform concentration of scalar\, matrix\, and Banach-space-valued martingales\, under a variety of nonparametric assumptions in discrete and continuous time. In doing so\, we bridge the gap between existing line-crossing inequalities\, the sequential probability ratio test\, the Cramér-Chernoff method\, self-normalized processes\, and other parts of the literature. Time permitting\, I will briefly discuss applications to sequential covariance matrix estimation\, adaptive clinical trials\, and multi-armed bandits via the notion of “confidence sequences”. \n(joint work with Steve Howard\, Jas Sekhon and Jon McAuliffe\, preprint https://arxiv.org/abs/1808.03204) \n Biography: \nAaditya Ramdas is an assistant professor in the Department of Statistics and Data Science and the Machine Learning Department at Carnegie Mellon University. Previously\, he was a postdoctoral researcher in Statistics and EECS at UC Berkeley from 2015-18\, mentored by Michael Jordan and Martin Wainwright. He finished his PhD at CMU in Statistics and Machine Learning\, advised by Larry Wasserman and Aarti Singh\, winning the Best Thesis Award. His undergraduate degree was in Computer Science from IIT Bombay. Much of his research focuses on modern aspects of reproducibility in science and technology\, involving statistical testing and false discovery rate control in static and dynamic settings. He also works on problems in sequential decision-making and online uncertainty quantification. \nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/tbd-aadityaramdas/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190322T110000
DTEND;TZID=America/New_York:20190322T120000
DTSTAMP:20260517T025907
CREATED:20190204T195726Z
LAST-MODIFIED:20190319T124452Z
UID:8818-1553252400-1553256000@idss-stage.mit.edu
SUMMARY:Optimization of random polynomials on the sphere in the full-RSB regime
DESCRIPTION:Abstract: \nThe talk will focus on optimization on the high-dimensional sphere when the objective function is a linear combination of homogeneous polynomials with standard Gaussian coefficients. Such random processes are called spherical spin glasses in physics\, and have been extensively studied since the 80s. I will describe certain geometric properties of spherical spin glasses unique to the full-RSB case\, and explain how they can be used to design a polynomial-time algorithm that finds points within a small multiplicative error of the global minimum. \nBiography: \nEliran Subag is a Junior Fellow in the Simons Society of Fellows at the Courant Institute\, NYU.\nThe MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.
URL:https://stat.mit.edu/calendar/tbd-eliransubag/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190315T110000
DTEND;TZID=America/New_York:20190315T120000
DTSTAMP:20260517T025907
CREATED:20190219T155710Z
LAST-MODIFIED:20190219T163823Z
UID:8905-1552647600-1552651200@idss-stage.mit.edu
SUMMARY:Subvector Inference in Partially Identified Models with Many Moment Inequalities
DESCRIPTION:Abstract: \nIn this work we consider bootstrap-based inference methods for functions of the parameter vector in the presence of many moment inequalities\, where the number of moment inequalities\, denoted by p\, is possibly much larger than the sample size n. In particular\, this covers the case of subvector inference\, such as inference on a single component associated with a treatment/policy variable of interest. We consider a min-max of (centered and non-centered) Studentized statistics and study the properties of the associated critical values. In order to establish these\, we provide a new finite-sample analysis that does not rely on Donsker’s properties\, and we establish new central limit theorems for the min-max of the components of random matrices. Furthermore\, we consider the anti-concentration properties of the min-max of the components of a Gaussian matrix and propose bootstrap-based methods to estimate them. In turn\, this provides a valid data-driven way to set the tuning parameters of the bootstrap-based inference methods. Importantly\, the tuning parameters generalize choices from the literature for Donsker’s classes (and we show why those would not be appropriate in our setting)\, which might better characterize finite-sample behavior. This work is co-authored with Federico Bugni and Victor Chernozhukov. \nLink to paper: https://arxiv.org/abs/1806.11466 \nBiography: \nAlexandre Belloni is a Professor at Duke University. He received his Ph.D. in Operations Research from the Massachusetts Institute of Technology (2006) and an M.Sc. in Mathematical Economics from IMPA (2002). He deferred the offer to join the faculty at Duke University to accept the IBM Herman Goldstein Postdoctoral Fellowship (2006-2007). Professor Belloni’s research interests are in econometrics\, statistics\, and optimization. He received the 2007 Young Researchers Competition in Continuous Optimization Award. His research papers have appeared in journals such as Econometrica\, Review of Economic Studies\, Annals of Statistics\, Marketing Science\, Management Science\, and Operations Research. He serves as associate editor for several journals and is currently the Area Editor for Machine Learning and Data Science at Operations Research.
URL:https://stat.mit.edu/calendar/tbd-alexbelloni/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar Series
END:VEVENT
END:VCALENDAR