Saturday, December 29, 2007

Adam Elga on deterrence

A paper[1] linked from an SIAI comment caught my eye: "A similar approach to the one we outline is described by Adam Elga in 2004, but with humans rather than AIs." Elga concludes that a rational agent (Dr. Evil) should allow himself to be deterred, but admits he is "not entirely comfortable" with that conclusion. He doesn't say whether he (Adam Elga) would actually allow himself to be deterred in that situation rather than risk torture, but if the question were put to him, I think his honest answer would be "No". (Yes, I admit that I just made an unfalsifiable claim.)

[1] Adam Elga, "Defeating Dr. Evil with self-locating belief," Philosophy and Phenomenological Research 69(2), 2004.
