If Artificial General Intelligence has an okay outcome, what will be the reason?
27%
Humanity coordinates to prevent the creation of potentially-unsafe AIs.
13%
Eliezer finally listens to Krantz.
7%
Other
6%
Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking and fortunately all of his mistakes have failed to cancel out
6%
Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values.
4%
Someone solves agent foundations
2%
Sheer Dumb Luck. The aligned AI agrees that alignment is hard, any Everett branches in our neighborhood with slightly different AI models or different random seeds are mostly dead.
2%
Because of quantum immortality we will observe only the worlds where AI does not kill us (assuming the chance of s-risks is even smaller, this is equivalent to an okay outcome).
2%
Ethics turns out to be a precondition of superintelligence
1.8%
Orthogonality Thesis is false.
1.7%
Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves and won't create successor AIs, but will instead try to prevent existence of smarter AIs, just as smart humans do.
1.1%
Humans become transhuman through other means before AGI happens
1%
A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees
1%
Alignment is unsolvable. AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing run-away.
1%
Aliens invade and stop bad AI from appearing
1%
AGI is never built (indefinite global moratorium)
1%
AI systems good at finding alignment solutions for capable systems (via some solution in the space of alignment solutions, supposing it is non-empty and we don't have a clear trajectory to reach it ourselves) find some solution to alignment.
1%
AI control gets us helpful enough systems without being deadly

Duplicate of https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence with user-submitted answers. An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.


@Krantz This was too long to fit.

Enough people understand that we can control a decentralized GOFAI by using a decentralized constitution that is embedded into a free and open market, one that sovereign individuals can earn a living by aligning. Peace and sanity are achieved game-theoretically by making the decentralized process that interpretably advances alignment the same process we use to create new decentralized money. We create an economy that properly rewards the production of valuable alignment data, and it feels a lot like a school that pays people to check each other's homework. It is a mechanism that empowers people to earn a living by doing alignment work decentrally in the public domain. This enables us to learn the second bitter lesson: "We needed to be collecting a particular class of data, specifically confidence and attention intervals for propositions (and logical connections of propositions) within a constitution."

If we radically accelerated the collection of this data by incentivizing its growth monetarily in a way that empowers poor people to become deeply educated, we might just survive this.
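For concreteness, a minimal sketch of what that data might look like, with the caveat that the structure and field names here are assumptions for illustration, not anything Krantz has actually specified: a "constitution" as a set of propositions carrying confidence and attention values, plus logical connections between them.

```python
# Hypothetical sketch only: field names and structure are assumptions,
# not an actual Krantz or Manifold schema.
from dataclasses import dataclass, field


@dataclass
class Proposition:
    text: str
    confidence: tuple[float, float]  # assumed lower/upper credence bounds in [0, 1]
    attention: float                 # assumed weight: how much this claim matters to the holder


@dataclass
class Connection:
    premise_ids: list[int]  # indices into Constitution.propositions
    conclusion_id: int
    kind: str                # e.g. "implies", "contradicts"


@dataclass
class Constitution:
    propositions: list[Proposition] = field(default_factory=list)
    connections: list[Connection] = field(default_factory=list)

    def add(self, text: str, low: float, high: float, attention: float) -> int:
        """Record a proposition with a confidence interval; return its index."""
        self.propositions.append(Proposition(text, (low, high), attention))
        return len(self.propositions) - 1


# Example: one user's tiny constitution.
mine = Constitution()
a = mine.add("Unaligned AGI is an existential risk.", 0.7, 0.95, attention=0.9)
b = mine.add("People should be paid to verify alignment claims.", 0.5, 0.8, attention=0.6)
mine.connections.append(Connection(premise_ids=[a], conclusion_id=b, kind="implies"))
```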

The fact that the Krantz stuff is #2 and #3 here and not something like "one of OpenAI/Anthropic/DeepMind solves the alignment problem" indicates a complete market failure.

bought Ṁ50 YES

@LoganZoellner Maybe you should correct the market. I've got plenty of limit orders to be filled.

bought Ṁ50 YES

A friend made a two-video series about it; he is pretty smart and convinced me that AI fear is kind of misguided

https://youtu.be/RbMWIzJEeaQ?si=asqn6uadLXPpeDjJ

bought Ṁ400 YES

@AlexeiTurchin That and trade-offs. Like if AI A is really good at task x, it will suck shit at task y. That's why AlphaGo kills LLMs every time at Go

@Krantz If anyone is willing to cheerfully and charitably explain their position on this, I'd like to pay you here:

https://manifold.markets/Krantz/who-will-successfully-convince-kran?r=S3JhbnR6

Humanity coordinates to prevent the creation of potentially-unsafe AIs.

This is really hard, but it's boundedly hard. There are plenty of times we Did the Thing, Whoops (leaded gasoline, WW2, social media), but there's also some precedent for the top tiny percent of humans coming together to Not Do the Thing or Only Slightly Do the Thing (nuclear war, engineered smallpox, human-animal hybrids, Project Sundial).

It's easy to underestimate the impact of individuals deciding not to push capabilities, but consider voting: rationally completely impotent, and yet practically it completely decides the outcome.

This market is an interesting demonstration of a fail state for Manifold (one user actively dumping money into a market for marketing and no strong incentive to bet against that without a definitive end in sight).

There was a LessWrong article listing different sampling assumptions in anthropics, one of which was the Super-Strong Self-Sampling Assumption (SSSSA): I am randomly selected from all observer-moments relative to their intelligence/size. This would explain why I'm a human rather than an ant. However, since I don’t find myself as a superintelligence, this may be evidence that conscious superintelligence is rare. Alternatively, it could imply that "I" will inevitably become a superintelligence, which could be considered an okay outcome.

@Phi I think this lines up well with a segment in

https://knightcolumbia.org/content/ai-as-normal-technology

"Market success has been strongly correlated with safety... Poorly controlled AI will be too error prone to make business sense"

opened a Ṁ50 YES at 7% order

Recent bets do not represent my true probabilities. But I want to move Krantz's stupid answers down the list, and it's much cheaper to buy others up than to buy those down.

How about this: AI is the collective intelligence of Earth's civilization, with both humans and machines being vital parts of it. As the AI evolves, humans will become more heavily augmented and genetically modified, the definition of "human" will stretch, and humanity will evolve as a subsystem of the AI.

P.S. And maybe it already exists and is busy reorganizing our world ;)

they will be relatively humanlike

they will be "computerish"

[they will be] generally weakly motivated compared to humans

These are three very different possibilities; why are they all listed in the same option as if they're all true at once?

bought Ṁ50 NO

@nsokolsky unverifiable -> 0% chance of resolving yes

Slightly off topic but I find very amusing the mental image of blankspace737 / stardust / Tsar Nicholas coming across this market, seeing the "god is real" option at 0% and repeatedly trying, with growing desperation, to bet it up for infinite winnings, only to be met each time with the server error message

bought Ṁ40 NO

Even in good cases, 20% of the max attainable CEV seems unlikely. I expect that outcomes are extremely heavy-tailed, such that even if alignment is basically solved, we rarely get anything close to 20% of the maximum. There's a lot of room at the top. It may also be that the maximum is unbounded!
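As a toy illustration of that intuition (the log-normal model and its parameters here are assumptions, not a claim about real outcome distributions): a sufficiently heavy-tailed value distribution puts almost no mass within 20% of its own sample maximum.

```python
# Toy model: if attainable "cosmopolitan value" were log-normal with a large sigma,
# a typical outcome would be a tiny fraction of the best outcome in the sample,
# so clearing 20% of the maximum would be rare even conditional on success.
import numpy as np

rng = np.random.default_rng(0)
sigma = 3.0  # assumed spread; larger sigma means a heavier tail
outcomes = rng.lognormal(mean=0.0, sigma=sigma, size=1_000_000)

best = outcomes.max()  # stand-in for "maximum attainable" value in this sample
share_above_20pct = np.mean(outcomes >= 0.2 * best)
print(f"fraction of outcomes reaching 20% of the best: {share_above_20pct:.6f}")
# With these settings the fraction is on the order of 1e-5:
# almost all the probability mass sits far below the top.
```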

not to pathologize, and I could very well be projecting, but seeing krantz's writing for the first time took me back Ratatouille-style to the types of ideas my hypomanic episodes used to revolve around before I knew how to ground myself. it's not that I think any of it is delusional, but from what I've seen of the general frantic vibe of his writing and of his responses to criticism, it seems like he has such deeply held idealistic notions of what the future holds that it starts to get (in my experience, at least) difficult and almost painful to directly engage with and account for others' thoughts on your ideas at anything more than a superficial level if they threaten to disrupt said notions. maybe that's normal, idk

Here are the steps.

Step 1. Eliezer (or anyone with actual influence in the community) listens to Krantz.

https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6

https://manifold.markets/Krantz/this-is-a-solution-to-alignment?r=S3JhbnR6

Step 2. Krantz shows Eliezer how simple proposition constitutions can be used to interpretably align other constitutions and earn points in the process, thus providing an option to pivot away from machine learning and back into GOFAI.

Step 3. We create a decentralized mechanism where people maintain their own constitutions privately, with the intention that they will be used to compete over aligning a general constitution to earn 'points' (turning 'alignment work' into something everyone can earn crypto for doing).

Step 4. Every person understands that the primary mechanism for getting an education, hearing the important news, and voting on the social contract is maintaining a detailed constitution of the truth.

Step 5. Our economy is transformed into a competition for who can have the most extensive, accepted and beneficial constitution for aligning the truth.

https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6

bought Ṁ10 YES

anyone know what's going on with the unbettable 0% options returning NaN?

It seems this market is heavily suffering from being linked when many of the options are not mutually exclusive

yeah

big benefit of all possibilities being written by the same person is there's less of that

bought Ṁ10 NO

@CalebW In your opinion, what would be the right problem, methods, world model, and thinking? The vagueness of this option seems to turn it into a grab bag akin to "because of a reason"

@TheAllMemeingEye when AGI goes right, there will be many reasons for that and there likely won't be a consensus opinion on which one was the most important. This market can be resolved to only one option. This unifying option is simply "Eliezer is wrong in many ways". Also it's a meme, which makes me want to buy it up.
