Basically: will Isaac King end up finding an arrangement where the experiment actually takes place and $1000 is actually wagered? I’ll accept variations like the total wagered being $1000 rather than $1000 on each side, or one side risking $1000 while the other doesn’t, or the like.
If a scam is involved where someone finds a way to not pay up, it still counts, as long as an arrangement was made and the experiment was run.
The Twitter thread in question:
See also FAQ 2 of my AGI-when market, which discusses more ways in which it’s hard to conduct a fair Turing test.
I'm game to help with this. My first questions would be about cheap ways to win. Like I don't think ChatGPT or Claude have any way to send two separate messages in a row. So asking a bot to do that would unmask it instantly, right?
Would it work for the human foil to have a list of purely technical limitations of chatbots and just artificially (ha) adhere to those same limitations?
Then there’s the problem of unmasking the bot because of ways in which it’s superhuman, like composing a rhyming poem impromptu on a novel topic. A possible fix for that could be making the human foil a cyborg: the human is able, at their own discretion, to paste in a chatbot reply if it’s strictly better/smarter than anything they could come up with.
Plus something about artificial delays to remove response time as a factor?
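To make the delay/formatting idea concrete, here’s a rough sketch of the kind of leveling relay I’m imagining: a moderator script sits between the judge and each respondent, human or bot, and strips out the superficial tells. Everything here (function names, typing speed, jitter) is made up for illustration, not an actual spec.

```python
import random
import time

def level_reply(messages, typing_speed_cps=6.0, jitter_s=3.0):
    """Collapse consecutive messages into a single reply and compute a
    normalized delay based on reply length, not on how fast the
    respondent actually produced it. Constants are hypothetical."""
    reply = "\n".join(messages)  # no double-messaging for either side
    delay = len(reply) / typing_speed_cps + random.uniform(0, jitter_s)
    return reply, delay

def deliver(messages, send_to_judge):
    """Hold the reply for the normalized delay, then pass it along."""
    reply, delay = level_reply(messages)
    time.sleep(delay)  # response time no longer leaks identity
    send_to_judge(reply)

# Example: both the human foil's and the bot's replies go through the
# same function before the judge sees them.
deliver(["Sure, I can try.", "Give me a topic."], print)
```

The point is just that both sides’ replies pass through the same pipe, so message count and response time stop being usable tells.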
(Pulling this off will be so much more work than one would predict. Or I guess that's why this market is at 8%!)