We just launched the 2023 Open Philanthropy AI Worldviews Contest.
We’re looking for essays that influence our views on one of two questions:
- What is the probability that AGI[1] is developed by January 1, 2043?
- Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?
We plan to distribute $225,000 in prizes across six winning entries.
This is the same contest we preannounced late last year, which is itself the spiritual successor to the now-defunct Future Fund competition. Part of our hope is that our (much smaller) prizes might encourage people who already started work for the Future Fund competition to share it publicly.
The contest deadline is May 31, 2023. All work posted for the first time after September 23, 2022 is eligible.
For more details, and to submit your entries, visit our contest page.
Footnotes

[1] By “AGI” we mean something like “AI that can quickly and affordably be trained to perform nearly all economically and strategically valuable tasks at roughly human cost or less.” AGI is a notoriously thorny concept to define precisely. What we’re actually interested in is the potential existential threat posed by advanced AI systems. To that end, we welcome submissions that are oriented around related concepts, such as transformative AI, human-level AI, or PASTA.