This page was reviewed but not written by the grant investigator. Ought staff also reviewed this page prior to publication.
The Open Philanthropy Project recommended a grant of $525,000 to Ought for general support. Ought is a new organization with a mission to “leverage machine learning to help people think.” Ought plans to conduct research on deliberation and amplification, a concept we consider relevant to AI alignment.1 Our funding, combined with another grant from Open Philanthropy Project technical advisor Paul Christiano, is intended to allow Ought to hire up to three new staff members and provide one to three years of support for Ought’s work, depending on how quickly they hire.
Background
This grant falls within our work on potential risks from advanced artificial intelligence, one of our focus areas within global catastrophic risks. Ought is a new 501(c)(3) organization founded by Andreas Stuhlmüller, a former researcher at Stanford’s Computation and Cognition Lab.2 Ought’s goal is to conduct research and build tools that leverage machine learning for deliberation, and to do so in a scalable way.
About the grant
Proposed activities
Ought will conduct research on deliberation and amplification, aiming to organize the cognitive work of ML algorithms and humans so that the combined system remains aligned with human interests even as algorithms take on a much more significant role than they do today.
Andreas believes that AI will ultimately be used to help people deliberate and make wise decisions. Ought will focus on this application, conducting theoretical and empirical work informed by real-world problems and data. Andreas thinks that it is helpful to pursue a concrete vision for how transformative AI might benefit and empower people, because such a vision can be criticized and improved, and can guide more theoretical research.
Early on, Ought plans to focus on conceptual research and implementation of prototypes for decomposing and automating deliberation. Depending on research outcomes, Ought expects to move towards a more empirical and application-driven approach over time.
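The decompose-and-recombine idea behind this kind of amplification can be illustrated with a toy sketch. Everything below (the function names, the hard-coded decomposition table, and the combination rule) is our illustrative assumption for exposition, not Ought’s actual design or code:

```python
# Toy sketch of decomposing a question into subquestions, answering
# them (recursively, within a budget), and combining the results.
# The decomposition and answer tables are hand-coded for illustration.

def decompose(question):
    """Split a question into simpler subquestions (hand-coded here)."""
    if question == "What is 3 * (4 + 5)?":
        return ["What is 4 + 5?", "What is 3 * 9?"]
    return []  # base case: simple enough to answer directly

def base_answer(question):
    """Stand-in for a human (or simple model) answering an easy question."""
    table = {"What is 4 + 5?": 9, "What is 3 * 9?": 27}
    return table.get(question)

def amplified_answer(question, budget=2):
    """Answer a question by recursively answering its subquestions."""
    subs = decompose(question)
    if not subs or budget == 0:
        return base_answer(question)
    # Combine: in this toy example, the last subquestion's answer
    # is taken as the answer to the original question.
    return [amplified_answer(q, budget - 1) for q in subs][-1]

print(amplified_answer("What is 3 * (4 + 5)?"))  # 27
```

The point of the sketch is only the shape of the system: cognitive work is broken into pieces small enough for a human or simple model to handle, and the overall answer is assembled from those pieces.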
Ought plans to publish its results, thoughts, code, and progress in online posts for the benefit of other researchers, and will publish in academic outlets if the additional effort is clearly justified. We do not expect a significant number of academic publications to result from this grant, and would consider such publications a bonus instead of part of the basic case for the funding.
For more information on Ought’s vision, see this page by Andreas.
Case for the grant
The basic case for the grant is as follows:
- We consider research on deliberation and amplification, as an approach to AI safety, to be both important and neglected.
- Paul Christiano is excited by Ought’s plan and work, and we trust his judgment.
- Ought’s plan appears flexible, and we think Andreas is prepared to notice and respond to problems by adjusting his plans.
- We have seen some minor indications that Ought is well-run and has a reasonable chance at success, such as: an affiliation with Stanford’s Noah Goodman3, which we believe will help with attracting talent and funding; acceptance into the Stanford-Startx4 accelerator; and that Andreas has already done some research, application prototyping, testing, basic organizational set-up, and public talks at Stanford and USC.
Budget
Our funding is for general support. Ought intends to use it for hiring and supporting up to four additional employees between now and 2020. The hires will likely include a web developer, a research engineer, an operations manager, and another researcher.
Plans for follow-up
We plan to check in annually with Ought through a phone call with Andreas as well as a review of new published results, such as online writeups, published code, and academic papers, if any. Our Program Officer and investigator for this grant, Daniel Dewey, will conduct these check-ins, accompanied by another technical advisor.
Key questions for follow-up
We plan to consider the following questions when following up with Ought:
- Has there been any progress on hires?
- How has research progressed?
- How has implementation progressed?
- How has testing progressed?
- Have there been any leads on other researchers who are noticing and/or building on Ought’s work?
- Have any significant plans changed?
Additionally, there are two situations where we might consider a renewal or expansion of funds:
- Ought wants to make additional hires while maintaining a reasonable level of funding reserves.
- After 2-2.5 years, Ought would like to extend its runway while maintaining a four-person team.
In either situation, Daniel believes he would lean strongly toward renewal or increased support, provided Ought is making research progress that looks impressive to us and our technical advisors (we consider other metrics of success less important at this time).
Sources
| DOCUMENT | SOURCE |
| --- | --- |
| Ought, Our Approach, 2018 | Source (archive) |
| Paul Christiano, Directions and desiderata for AI alignment [archive only] | Source |
| Stanford Computation and Cognition Lab, Homepage, December 2017 [archive only] | Source |
| Stanford Computation and Cognition Lab, Noah Goodman, December 2017 [archive only] | Source |
| Stanford-Startx, Homepage, December 2017 [archive only] | Source |