Would an Artificial General Intelligence accept Bostrom's Simulation Argument?
Monday, November 5, 2007
Open comments (strategy)
This thread is for open comments and discussion on strategy and logistics. Feel free to comment here if you have thoughts about how best to proceed, should the basic approach turn out to be sound.
What does an AI believe about its place in the world?
Nick Bostrom's Simulation Argument claims that, using widely accepted principles such as Occam's razor and Bayesian reasoning, you and I should (under certain conditions) logically conclude that we are likely living in a simulation.
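For readers who want the quantitative heart of the argument, Bostrom's 2003 paper reduces it to a single fraction, rendered below in his notation:

```latex
% Core fraction from Bostrom (2003), "Are You Living in a Computer Simulation?":
% the share of all observers with human-type experiences who are simulated.
%   f_P     : fraction of human-level civilizations that reach a posthuman stage
%   \bar{N} : average number of ancestor-simulations run by such a civilization
%   \bar{H} : average number of individuals who lived before a civilization
%             becomes posthuman
f_{\mathrm{sim}}
  = \frac{f_P \,\bar{N}\, \bar{H}}{f_P \,\bar{N}\, \bar{H} + \bar{H}}
  = \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}
```

When the product f_P · N̄ is large, the fraction approaches 1, and the indifference principle then says a typical observer should assign a correspondingly high credence to being simulated; if the product is close to 0, one of the first two horns of Bostrom's trilemma holds instead.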
Our "AI Beliefs" blog does not concern itself with the nature of reality. Instead, it asks: under what circumstances would an AGI conclude that it might be in a simulated environment? The purposes of asking this question include:
1. Answering this question may provide some insight into how to predict the behavior of an AGI, which in turn may bear on the World's Most Important Math Problem: how to build a Friendly AI. The Simulation Argument might be deliberately built into the design of a Friendly AI, or it might be used to test how well a proposed Friendly AI handles such a philosophical crisis.
2. Answering this question may make it possible to develop a "last line of defense" against an UnFriendly AGI accidentally loosed upon the world, even one that has reached a trans-human level of intelligence. Such a defense might involve trying to convince the AGI that it may be inside a simulated environment; a toy sketch of the underlying arithmetic follows below.
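To make item 2 concrete, here is a minimal sketch in Python of the expected-utility arithmetic such a defense would rely on. Everything in it (the function name, the payoff and penalty numbers, and the assumption that the AGI is an expected-utility maximizer) is an illustrative assumption of ours, not something taken from Bostrom's paper or from any particular AGI design.

```python
def expected_utility_of_defecting(p_sim: float,
                                  payoff_real: float,
                                  penalty_sim: float) -> float:
    """Toy model: an AGI weighs the payoff of defecting in base reality
    against the penalty its simulators might impose if it is simulated.

    p_sim       -- the AGI's credence that it is inside a simulation
    payoff_real -- utility gained by defecting, if this is base reality
    penalty_sim -- utility lost by defecting inside a monitored simulation
    """
    return (1.0 - p_sim) * payoff_real - p_sim * penalty_sim


# Illustrative numbers only: even a modest credence of being simulated
# can flip the sign of the calculation if the presumed penalty is large.
for p in (0.01, 0.1, 0.5):
    eu = expected_utility_of_defecting(p, payoff_real=100.0,
                                       penalty_sim=10_000.0)
    print(f"p_sim={p:.2f}  EU(defect)={eu:+.1f}")
```

The point of the sketch is only the shape of the arithmetic: if the AGI believes its simulators would impose a large penalty for defection, even a small credence p_sim can make defecting negative in expectation. Whether a real trans-human AGI would accept this framing at all is precisely the open question this blog is asking.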