Discussion about this post

Cody Rushing:

One case against some of the verifiable prizes described above is that AIs will keep getting much better at the kinds of research where outcomes can be verified (https://blog.redwoodresearch.org/p/ais-can-now-often-do-massive-easy), and AI labs will be in an excellent position to direct their powerful AI labor at such tractable research problems. So most human research effort should arguably focus on areas where success isn't currently verifiable, either working to make those areas more verifiable or making direct progress in them.

George Ingebretsen:

Relatedly, an AI alignment prize was once considered as an alternative to the OpenAI superalignment team.

From: https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted

> He added that he was thinking of committing a billion dollars to the issue, which many A.I. experts considered the most important unsolved problem in the world, potentially by endowing a prize to incentivize researchers around the world to study it. Although the graduate student had “heard vague rumors about Sam being slippery,” he told us, Altman’s show of commitment won him over. He took an academic leave to join OpenAI. But, in the course of several meetings in the spring of 2023, Altman seemed to waver. He stopped talking about endowing a prize. Instead, he advocated for establishing an in-house “superalignment team.”
