Discussion about this post

Kevin

I notice that Dean's article doesn't mention the word 'disempower' at all. Alon's rebuttal doesn't use the word either, but alludes to a superintelligence using coercion to get humans to do what it wants, and humans handing over information without coercion.

Superintelligence doesn't have to "take over the world" all at once. Disempowerment can happen with a gradual, voluntary surrender of agency.

Humans today are handing over their agency by typing or copy-pasting everything about themselves into chatbots to get their work done faster (not to mention the amount of surveillance video being captured), which the AI companies use for future training. This is a great vector for tacit knowledge to become understood by a superintelligence.

Your point on the word "doomer" resonates strongly with me. The doomer label is absolutely misappropriated and is a category error. One counterpoint is that there are some who believe we can decide the future, yet call themselves "doomers" out of a desire to scare people, as Liron Shapira says of himself. I think that's totally fine, as long as it's the person applying the label to themselves.

Overall a great article that really frames the disagreements well, and is a good primer for many who don't yet understand the risks. Thank you for writing this!

Nathan Metzger

The more complex the domain, the more intelligence (efficient optimization) confers an advantage. The complexity of reality at large, even its irreducible complexity, is strong evidence for the advantage that would be conferred to a superintelligent AI. Or heck, even an AI that is only as cognitively capable as the smartest human who has ever lived, but which copies itself indefinitely, coordinates perfectly, never slacks off, and communicates at extremely high bandwidth.
