I am the Barnum-Simons Chair in Mathematics and Statistics, and Professor of Electrical Engineering (by courtesy) at Stanford University. Until 2009, I was the Ronald and Maxine Linde Professor of Applied and Computational Mathematics at the California Institute of Technology. I graduated from the Ecole Polytechnique in 1993 with a degree in science and engineering, and received my Ph.D. in Statistics from Stanford University in 1998.
I am very grateful for the awards I have received over the years. These include the 2006 Alan T. Waterman Award from the NSF, which recognizes the achievements of early-career scientists; the George Pólya Prize from the Society for Industrial and Applied Mathematics (SIAM) (2010); the Collatz Prize from the International Council for Industrial and Applied Mathematics (2011); the Lagrange Prize in Continuous Optimization from the Mathematical Optimization Society (MOS) and SIAM (2012); the Dannie Heineman Prize presented by the Academy of Sciences at Göttingen (2013); the AMS-SIAM George David Birkhoff Prize in Applied Mathematics (2015); the Prix Pierre Simon de Laplace from the Société Française de Statistique (2016); and the Ralph E. Kleinman Prize from SIAM (2017). I was selected as the Wald Memorial Lecturer by the Institute of Mathematical Statistics (2017). I am a member of the National Academy of Sciences (elected in 2014) and the American Academy of Arts and Sciences (elected in 2014). In 2017, I received a MacArthur Fellowship, popularly known as the ‘genius award’. I received the 2020 Princess of Asturias Award for Technical and Scientific Research. The IEEE Board of Directors selected me, along with Terence Tao and Justin Romberg, to receive the 2021 IEEE Jack S. Kilby Signal Processing Medal.
A copy of my CV is available here.
My work lies at the interface of mathematics, statistics, information theory, signal processing, and scientific computing, and is about finding new ways of representing information and extracting information from complex data. For example, I helped launch the field known as compressed sensing, which has led to advances in the efficiency and accuracy of data collection and analysis, and can be used to significantly speed up MRI scanning times. More broadly, I am interested in theoretical and applied problems characterized by incomplete information. My work combines ideas from probability theory, statistics, and mathematical optimization to answer questions such as whether it is possible to recover the phase of a light field from intensity measurements only, as in X-ray crystallography; users' preferences for items from just a few samples, as in recommender systems; or fine details of an object from low-frequency data, as in microscopy.
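To give a flavor of how such incomplete-information problems can be attacked, here is a minimal sketch of compressed sensing-style recovery: a sparse signal is reconstructed from far fewer random measurements than unknowns by approximately solving an l1-regularized least-squares problem. The dimensions, the random sensing matrix, the penalty level, and the choice of solver (FISTA, an accelerated proximal-gradient method) are illustrative assumptions, not a description of any particular paper or result.

```python
import numpy as np

rng = np.random.default_rng(0)

# A sparse signal observed through far fewer random measurements than unknowns.
n, m, k = 200, 60, 5                              # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)      # random sensing matrix
y = A @ x_true                                    # incomplete (m < n) measurements

# Recover the signal by approximately solving the l1-regularized least-squares problem
#   minimize 0.5 * ||A x - y||^2 + lam * ||x||_1
# with FISTA; the l1 penalty favors sparse solutions.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the gradient
x = np.zeros(n)
z, t = x.copy(), 1.0
for _ in range(3000):
    grad = A.T @ (A @ z - y)
    w = z - grad / L
    x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)   # soft-thresholding
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    z = x_new + ((t - 1.0) / t_new) * (x_new - x)
    x, t = x_new, t_new

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Although the measurement system is badly underdetermined, the combination of randomness in the measurements and sparsity in the signal makes accurate recovery possible, which is the basic phenomenon behind compressed sensing.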
In the last ten years or so, my work has mostly been of a statistical nature. The ongoing data science revolution has been driven by impressive technological advances in the capture, storage, and processing of data across a wide range of domains. Of particular interest is the recent progress in machine learning, which provides us with many potentially effective tools to learn from datasets of ever-increasing size and make useful predictions. Some of these tools have proven to be powerful and extremely complex at the same time. While many scientists and engineers are, for good reasons, slowly getting comfortable with the idea of using models that are extremely difficult to interpret (black boxes, if you will), two things cannot be compromised. The first is the reproducibility of scientific results. If I use a black box to determine which genomic regions influence a trait, e.g., susceptibility to autism, how do I make sure that my findings can be reproduced in follow-up studies? How do I make sure they are robust and will not be rapidly dismissed? The second concerns the validity of predictions. As we increasingly turn to machine learning systems to support human decisions, how do we determine their validity? If a learning algorithm predicts the GPA of a prospective college applicant, what guarantees do I have concerning the accuracy of this prediction? Our recent work has addressed these concerns. We have developed broad methodologies that can be wrapped around any black box so as to produce results that can be trusted.
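As one concrete example of a wrapper that can sit around an arbitrary black box, the sketch below implements split conformal prediction: residuals on a held-out calibration set are used to turn point predictions into prediction intervals with roughly 90% coverage. The synthetic data, the polynomial model standing in for the black box, and the coverage level are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data; the "black box" below could be any fitted predictor.
n = 2000
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(n)

# Split the data: one half to fit the black box, one half to calibrate.
idx = rng.permutation(n)
train, calib = idx[: n // 2], idx[n // 2 :]

# Stand-in black box: polynomial least squares (any model could be plugged in here).
coef = np.polyfit(X[train, 0], y[train], deg=5)

def predict(x):
    return np.polyval(coef, x[:, 0])

# Split conformal calibration: use holdout residuals to size prediction intervals
# so they cover the truth with probability about 1 - alpha, whatever the model is.
alpha = 0.1
residuals = np.abs(y[calib] - predict(X[calib]))
level = np.ceil((1 - alpha) * (len(calib) + 1)) / len(calib)   # finite-sample correction
q = np.quantile(residuals, min(level, 1.0), method="higher")

# Prediction interval for a new point.
x_new = np.array([[1.5]])
lo, hi = predict(x_new) - q, predict(x_new) + q
print(f"90% prediction interval at x = 1.5: [{lo[0]:.2f}, {hi[0]:.2f}]")
```

The appeal of this kind of wrapper is that the guarantee is distribution-free: as long as the calibration and test points are exchangeable, the interval covers the true response with probability at least 1 - alpha, no matter how good or bad the underlying black box happens to be.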