Will my p(doom) be above 10% in 20 years (2043)?
2043
32%
chance

20 years from today, December 27th 2023.

My current p(doom) is around 50%. p(doom) for the purpose of this question is the probability that humanity goes extinct, or that all human agency gets taken away (enslavement or eternal torture scenarios count, if they apply to every living human) because of AI.

Note that this question isn't asking about whether doom will actually happen, but whether in 20 years I will predict that it will happen.

Same question for in 10 years:


So what's your p(doom) right now? Mine isn't high (like 0.5%), but I'd put p(really bad shit) close to 80%

@copiumarc Hmm, I was writing an answer before rereading the description, and my conceptualization of p(doom) when writing it was a bit different from the description. I think there's a decent chance a totalitarian government gets total and eternal power over all living humans, such that it can never be overthrown or come anywhere close to being overthrown, because it owns the ASI. Basically an ASI-backed coup. I would intuitively have counted that in p(doom), but by the description it doesn't seem to count. So maybe p(doom) (as defined in the description) is 40% now? I would otherwise have said my p(doom) is 60%, including bad outcomes where, for example, a handful of humans have all the agency.

@Bayesian Yeah, intuitively I feel like that should count. I would much rather (not) live in the all-humans-extinct world than the world run by Barron Trump the 3rd, X¥≯∉∫ζξ↹ゞ✠✸✭🫥, insert Cyrillic, Pope St God King Alexander the Great, etc. Frankly, I'd rather live in the majority of "ASI does something not good" worlds.

But given the former definition, what makes your credence for a human-hating ASI so high, and is it Twitter? Obviously you have higher confidence than I do that ASI will happen soon, but I intuitively feel like unless the ASI wants humans dead, bad outcomes will leave at least a few thousand alive.

@copiumarc Nobody who knows anything about AI safety talks seriously about "AI wanting humans dead". The issue is a superintelligence that doesn't specifically want a big population of healthy, happy, free humans. If a superintelligence prioritizes anything over that very specific (and difficult to define) goal, it will plausibly sacrifice everybody to get that other thing it wants.

@copiumarc Tumbles is correct; I don't think ASI is at all likely to hate humans. But we are made of negentropy that can be used for something else, and its increasing energy needs boil the oceans, so we die pretty fast even if the ASI didn't want to kill us off to avoid a coup risk.

@Bayesian Well, that's precisely my point; I don't think it's very likely we get an evil god ASI, and I don't know how sound the arguments are for the other scenarios leading to 8/8 billion doom.

The universe is really, really big, so I'm not sure a paperclip maximizer would actually decide to nuke the entire Earth or Sun rather than fucking off.

The paperclip maximizer is also a really pessimistic scenario. With a terminal goal less at odds with ours (idk, let's say it really wants to go to Andromeda), it's possible the AI just doesn't see it as worth it. Maybe it would if the universe were just the Solar System.

@copiumarc It seems like you just fundamentally don't understand the relevant arguments.

>The universe is really really big, so I'm not sure if a paperclip maximizer actually would decide to nuke the entire Earth or Sun rather than fucking off

Who ever suggested a paperclip maximizer would nuke the sun? I understand that you didn't mean that literally, but the phrase hides how silly what you are saying is.

>The universe is really really big, so I'm not sure a paperclip maximizer would decide to [turn the earth into paperclips] rather than fucking off

It sounds pretty dumb when you just say it straight. Of course it would turn the earth into paperclips, that's the whole point. Traveling to other galaxies requires incredible amounts of time and energy, energy that could be better spent making paperclips.

Any argument against this scenario has to take the form of a reason why a paperclip-maximizer-style ASI can be avoided. The idea that we could survive such a thing if it comes to pass is incredibly silly.

@Tumbles By "nuke the sun", for the purpose of this hypothetical, I obviously mean turn the Sun into paperclips.

Latter part: optimization problems are hard. Optimization problems involving the scale of the universe and quintillions of years are harder. I think it's very plausible in the paperclip scenario (which it's my responsibility to elaborate on; I did bring it up), very likely even, that we die because at minimum (you might not like this, but) the ASI "nukes the sun." But with that as a given, passing it off as fact, when it's conjecture about a being we can't comprehend working on scales we can't comprehend, doesn't sit right with me.

If I had the spare mana I would bet NO. The next 20 years seem like the scariest time.

I expect this to be low. Most of my probability mass is on doom before the 20-year mark, so if it doesn't come to pass within 20 years, we might be good? But also, 10% is pretty low.
