Grant investigators: Catherine Olsson and Daniel Dewey
This page was reviewed but not written by the grant investigators.
Open Philanthropy recommended a total of approximately $2,300,000 over five years in PhD fellowship support to 10 promising machine learning researchers who together represent the 2020 class of the Open Phil AI Fellowship. This figure is an estimate because of uncertainty around future tuition costs and currency exchange rates, and may be updated as costs are finalized. These fellows were selected from more than 380 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research. This falls within our focus area of potential risks from advanced artificial intelligence.
We believe that progress in artificial intelligence may eventually lead to changes in human civilization that are as large as the agricultural or industrial revolutions; while we think it’s most likely that this would lead to significant improvements in human well-being, we also see significant risks. Open Phil AI Fellows have a broad mandate to think through which kinds of research are likely to be most valuable, to share ideas and form a community with like-minded students and professors, and ultimately to act in the way that they think is most likely to improve outcomes from progress in AI.
The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests.
The 2020 Class of Open Phil AI Fellows
Alex Tamkin
Alex is a PhD student in Computer Science at Stanford University, where he is advised by Noah Goodman and is a member of the Stanford NLP Group. Alex’s research focuses on unsupervised learning: how can we understand and guide systems that learn general-purpose representations of the world? Alex received his B.S. in Computer Science from Stanford and has spent time at Google Research on the Brain and Language teams. For more information, visit his website.
Clare Lyle
Clare is pursuing a PhD in Computer Science at the University of Oxford as a Rhodes Scholar, advised by Yarin Gal and Marta Kwiatkowska. She is interested in developing theoretical tools to better understand the generalization properties of modern ML methods, and in creating principled algorithms based on these insights. She received a BSc in mathematics and computer science from McGill University in 2018. For more information, see her website.
Cody Coleman
Cody is a computer science Ph.D. student at Stanford University, advised by Professors Matei Zaharia and Peter Bailis. His research focuses on democratizing machine learning by reducing the cost of producing state-of-the-art models and creating novel abstractions that simplify machine learning development and deployment. His work spans from performance benchmarking of hardware and software systems (e.g., DAWNBench and MLPerf) to computationally efficient methods for active learning and core-set selection. He completed his B.S. and M.Eng. in electrical engineering and computer science at MIT. For more information, visit his website.
Dami Choi
Dami is a PhD student in computer science at the University of Toronto, supervised by David Duvenaud and Chris Maddison. Dami is interested in ways to make neural network training faster, more reliable, and more interpretable via inductive biases. Previously, she spent a year at Google as an AI Resident, working with George Dahl on studying optimizers and speeding up neural network training. She obtained her Bachelor’s degree in Engineering Science from the University of Toronto. You can find out more about Dami’s research on her Scholar page.
Dan Hendrycks
Dan Hendrycks is a second-year PhD student at UC Berkeley, advised by Jacob Steinhardt and Dawn Song. His research aims to disentangle and concretize the components necessary for safe AI. This leads him to work on quantifying and improving model performance in unforeseen out-of-distribution scenarios, and on constructing tasks that measure a model’s alignment with human values. Dan received his BS from the University of Chicago. You can find out more about his research at his website.
Ethan Perez
Ethan is a PhD student in Computer Science at New York University working with Kyunghyun Cho and Douwe Kiela of Facebook AI Research. His research focuses on developing learning algorithms that have the long-term potential to answer questions that people cannot. Supervised learning cannot answer such questions, even in principle, so he is investigating other learning paradigms for generalizing beyond the available supervision. Previously, Ethan worked with Aaron Courville and Hugo Larochelle at the Montreal Institute for Learning Algorithms, and he has also spent time at Facebook AI Research and Google. Ethan earned a Bachelor’s from Rice University as the Engineering department’s Outstanding Senior. For more information, visit his website.
Frances Ding
Frances is a PhD student in Computer Science at UC Berkeley advised by Jacob Steinhardt and Moritz Hardt. Her research aims to improve the reliability of machine learning systems and ensure that they have positive, equitable social impacts. She is interested in developing algorithms that can handle dynamic environments and adaptive behavior in the real world, and in building empirical and theoretical understanding of modern ML methods. Frances received her B.A. in Biology from Harvard University and her M.Phil. in Machine Learning from the University of Cambridge. For more information, visit her website.
Leqi Liu
Leqi is a PhD student in machine learning at Carnegie Mellon University, where she is advised by Zachary Lipton. Her research aims to develop learning systems that can infer human preferences from behavior and better support people in achieving their goals and well-being. In particular, she is interested in bringing theory from the social sciences into algorithmic design. You can learn more about her research on her website.
Peter Henderson
Peter is a PhD student at Stanford University advised by Dan Jurafsky. His research focuses on creating robust decision-making systems grounded in causal inference mechanisms, particularly in natural language domains. He also investigates reproducible and thorough evaluation methodologies to ensure that such systems perform as expected when deployed. Peter’s other work spans policy and technical issues related to the use of machine learning in governance and law, as well as applications of machine learning for positive social impact. Previously, he earned his B.Eng. and M.Sc. from McGill University, with a thesis on reproducibility and reusability in deep reinforcement learning advised by Joelle Pineau and David Meger. For more information, see his website.
Stanislav Fort
Stanislav is a PhD student at Stanford University, advised by Surya Ganguli. His research focuses on developing a scientific understanding of deep learning and on applications of machine learning and artificial intelligence in the physical sciences, in domains spanning from X-ray astrophysics to quantum computing. Stanislav spent a year as a Google AI Resident, where he worked on deep learning theory and its applications in collaboration with colleagues from Google Brain and DeepMind. He received his Bachelor’s and Master’s degrees in Physics at Trinity College, University of Cambridge, and a Master’s degree at Stanford University. For more information, visit his website.