
Duplicate of https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence with user-submitted answers. An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.
@Krantz This was too long to fit.
Enough people understand that we can control a decentralized GOFAI by using a decentralized constitution that is embedded in a free and open market, one that sovereign individuals can earn a living by aligning. Peace and sanity are achieved game-theoretically by making the decentralized process that interpretably advances alignment the same process we use to create new decentralized money. We create an economy that properly rewards the production of valuable alignment data, and it feels a lot like a school that pays people to check each other's homework. It is a mechanism that empowers people to earn a living by doing alignment work, in a decentralized way, in the public domain. This enables us to learn the second bitter lesson: "We needed to be collecting a particular class of data, specifically confidence and attention intervals for propositions (and logical connections of propositions) within a constitution."
If we radically accelerated the collection of this data by incentivizing its growth monetarily, in a way that empowers poor people to become deeply educated, we might just survive this.
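To make the proposed class of data concrete, here is a minimal sketch in Python of what "confidence and attention intervals for propositions (and logical connections of propositions) within a constitution" could look like. The names `Proposition`, `Connection`, and `Constitution` are my hypothetical illustrations, not an existing system's API:

```python
# Hypothetical sketch only: "Proposition", "Connection", and "Constitution"
# are illustrative names, not an existing system.
from dataclasses import dataclass, field

@dataclass
class Proposition:
    text: str
    confidence: tuple[float, float]  # (low, high) subjective probability interval
    attention: float                 # how much scrutiny the claim has received

@dataclass
class Connection:
    premise_id: str
    conclusion_id: str
    confidence: tuple[float, float]  # endorsement that the premise supports the conclusion

@dataclass
class Constitution:
    propositions: dict[str, Proposition] = field(default_factory=dict)
    connections: list[Connection] = field(default_factory=list)

    def endorse(self, pid: str, text: str, low: float, high: float, attention: float) -> None:
        """Record one user's confidence interval and attention for a proposition."""
        self.propositions[pid] = Proposition(text, (low, high), attention)

# One user's constitution endorsing two propositions and a logical connection:
c = Constitution()
c.endorse("p1", "Alignment data should be produced in the open.", 0.7, 0.9, attention=0.5)
c.endorse("p2", "Contributors should be paid for verified alignment data.", 0.6, 0.95, attention=0.3)
c.connections.append(Connection("p1", "p2", (0.6, 0.8)))
```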
@LoganZoellner Maybe you should correct the market. I've got plenty of limit orders to be filled.
A friend made a two-video series about it. He is pretty smart and convinced me that AI fear is kind of misguided.
@AlexeiTurchin That and trade-offs. Like if AI A is really good at task x, it will suck at task y. That's why AlphaGo kills LLMs every time at Go.
@Krantz If anyone is willing to cheerfully and charitably explain their position on this, I'd like to pay you here:
https://manifold.markets/Krantz/who-will-successfully-convince-kran?r=S3JhbnR6
Humanity coordinates to prevent the creation of potentially-unsafe AIs.
This is really hard, but it's boundedly hard. There are plenty of times we Did the Thing, Whoops (leaded gasoline, WW2, social media), but there's also some precedent for the top tiny percent of humans coming together to Not Do the Thing or Only Slightly Do the Thing (nuclear war, engineered smallpox, human-animal hybrids, Project Sundial).
It's easy to underestimate the impact of individuals deciding not to push capabilities, but consider voting: rationally it's completely impotent, and yet in practice it completely decides the outcome.
There was a LessWrong article listing different sampling assumptions in anthropics, one of which was the Super-Strong Self-Sampling Assumption (SSSSA): I am randomly selected from all observer-moments, weighted by their intelligence/size. This would explain why I'm a human rather than an ant. However, since I don't find myself as a superintelligence, this may be evidence that conscious superintelligence is rare. Alternatively, it could imply that "I" will inevitably become a superintelligence, which could be considered an okay outcome.
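(A toy illustration of that update, with assumed numbers: under SSSSA, the probability of being a given observer-moment is proportional to its weight. If a world contains $N_h$ humans of weight $1$ and $N_s$ superintelligences of weight $W \gg 1$, then $P(\text{I am human}) = \frac{N_h}{N_h + W N_s}$, which is small unless $N_s \approx 0$; so finding yourself human shifts credence toward worlds where conscious superintelligent observer-moments are rare.)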
@Phi I think this lines up well with a segment in
https://knightcolumbia.org/content/ai-as-normal-technology
"Market success has been strongly correlated with safety... Poorly controlled AI will be too error prone to make business sense"
How about this: AI is the collective intelligence of Earth's civilization, with both humans and machines being vital parts of it. As the AI evolves, humans will become more heavily augmented and genetically modified, the definition of "human" will stretch, and humanity will evolve as a subsystem of the AI.
P.S. And maybe it already exists and is busy reorganizing our world ;)
they will be relatively humanlike
they will be "computerish"
[they will be] generally weakly motivated compared to humans
These are 3 very different possibilities, why are they all listed in the same option as if they're all true at once?
Slightly off topic but I find very amusing the mental image of blankspace737 / stardust / Tsar Nicholas coming across this market, seeing the "god is real" option at 0% and repeatedly trying, with growing desperation, to bet it up for infinite winnings, only to be met each time with the server error message
not to pathologize, and I could very well be projecting, but seeing krantz's writing for the first time took me back Ratatouille-style to the types of ideas my hypomanic episodes used to revolve around before I knew how to ground myself. it's not that I think any of it is delusional, but from what I've seen of the general frantic vibe of his writing and of his responses to criticism, it seems like he has such deeply held idealistic notions of what the future holds that it starts to get (in my experience, at least) difficult and almost painful to directly engage with and account for others' thoughts on your ideas at anything more than a superficial level if they threaten to disrupt said notions. maybe that's normal, idk
Here are the steps.
Step 1. Eliezer (or anyone with actual influence in the community) listens to Krantz.
https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6
https://manifold.markets/Krantz/this-is-a-solution-to-alignment?r=S3JhbnR6
Step 2. Krantz shows Eliezer how simple proposition constitutions can be used to interpretably align other constitutions and earn points in the process, thus providing an option to pivot away from machine learning and back to GOFAI.
Step 3. We create a decentralized mechanism where people maintain their own constitutions privately, with the intention that they will be used to compete over aligning a general constitution to earn 'points' (turning 'alignment work' into something everyone can earn crypto for doing; see the sketch after these steps).
Step 4. Every person understands that the primary mechanism for getting an education, hearing the important news, and voting on the social contract is maintaining a detailed constitution of the truth.
Step 5. Our economy is transformed into a competition over who can have the most extensive, accepted, and beneficial constitution for aligning the truth.
https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6
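Here is a hypothetical Python sketch of the point mechanism in step 3. The scoring rule (`interval_overlap`, `score_alignment`) is my own illustration of "competing to align a general constitution," not Krantz's actual spec; it simply pays out more when a user's private confidence intervals agree with the general constitution's:

```python
# Hypothetical sketch only: the scoring rule below is illustrative,
# not a specification from Krantz.

def interval_overlap(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Width of the overlap between two confidence intervals (0 if disjoint)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def score_alignment(private: dict[str, tuple[float, float]],
                    general: dict[str, tuple[float, float]]) -> float:
    """Award points for each proposition both constitutions endorse,
    weighted by how closely the confidence intervals agree."""
    return sum(interval_overlap(interval, general[pid])
               for pid, interval in private.items() if pid in general)

# Two users compete to align the general constitution:
general = {"p1": (0.7, 0.9), "p2": (0.2, 0.4)}
alice   = {"p1": (0.75, 0.85), "p2": (0.1, 0.3)}
bob     = {"p1": (0.0, 0.2)}
print(score_alignment(alice, general))  # ~0.2: 0.1 of overlap on each proposition
print(score_alignment(bob, general))    # 0.0: disjoint intervals earn nothing
```

Under a rule like this, points flow to whoever endorses propositions that the shared constitution comes to accept, which is one way "checking each other's homework" could be made payable.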
@CalebW In your opinion, what would be the right problem, methods, world model, and thinking? The vagueness of this option seems to turn it into a grab bag akin to "because of a reason"
@TheAllMemeingEye when AGI goes right, there will be many reasons for that and there likely won't be a consensus opinion on which one was the most important. This market can be resolved to only one option. This unifying option is simply "Eliezer is wrong in many ways". Also it's a meme, which makes me want to buy it up.