
AI safety community successfully advocates for a global AI development slowdown by December 2027
19% chance
This market resolves to YES if, by December 31, 2027, the AI safety community successfully advocates for and achieves a significant global slowdown in frontier AI development.
A "significant global slowdown" requires at least two of the following:
- A formal international agreement, signed by at least 3 of the top 5 AI-producing countries, to limit or pause certain types of AI development
- At least 3 of the top 5 major AI labs publicly committing to and implementing significant voluntary slowdowns
- Implementation of substantial regulatory barriers to rapid AI development in at least 3 major AI-producing countries
The slowdown must be explicitly connected to AI safety concerns and must represent a material change from the previous development pace.
This question is managed and resolved by Manifold.
Related questions
Will someone commit violence in the name of AI safety by 2030? (60% chance)
Will there be a global "pause" on cutting-edge AI research due to government regulation by 2025? (6% chance)
Will the US government enact legislation before 2026 that substantially slows US AI progress? (18% chance)
Will any world leader call for a global AI pause by EOY 2027? (88% chance)
Will any AI researchers be killed by someone explicitly trying to slow AI capabilities by end of 2028? (27% chance)
Will any developed country establish a limit on compute for AI training by 2026? (21% chance)
Will the US regulate AI development by end of 2025? (32% chance)
I make a contribution to AI safety that is endorsed by at least one high-profile AI alignment researcher by the end of 2026 (59% chance)
AI Safety Clock at 16 minutes to midnight by October 2025? (49% chance)
Will someone commit terrorism against an AI lab by the end of 2025 for AI-safety-related reasons? (19% chance)