In his article, Istvan recounts Isaac Asimov's Three Laws of Robotics from the fictional story collection I, Robot, which center on the idea that human life must be sustained above any robot's survival. Asimov created the three laws to show how mankind could be valued and protected above its own technology, even if that technology were capable of feats that could destroy its makers (think The Terminator). But Istvan is unsatisfied with these and instead offers three laws of his own. He writes:
In general, a human will is defined by its genes, the environment, and the psychological make-up of its brain. However, a sophisticated artificial intelligence will be able to upgrade its "will." Its plasticity will know no bounds, as our brains do. In my philosophical novel The Transhumanist Wager, I put forth the idea that all humans desire to reach a state of perfect personal power—to be omnipotent in the universe. I call this a Will to Evolution. The idea is built into my Three Laws of Transhumanism, which form the essence of the book's philosophy, Teleological Egocentric Functionalism (TEF). Here are the three laws:
1) A transhumanist must safeguard one's own existence above all else.
2) A transhumanist must strive to achieve omnipotence as expediently as possible—so long as one's actions do not conflict with the First Law.
3) A transhumanist must safeguard value in the universe—so long as one's actions do not conflict with the First and Second Laws.
So Istvan holds that a transhumanist must be self-centered and self-advancing. That shouldn't be too much of a surprise, given that the transhuman movement is all about becoming a superman (perhaps a god?) in comparison to humanity today. Still, the fact that Istvan doesn't seem to see the unworkable moral implications gives me great pause.
An Old Lie with a Shiny New Finish
In reading Istvan's three laws, I quickly saw that they were not new. In fact, they are eerily similar to a moral principle put forth in the early 20th century by a man who others also claimed was a visionary. The principle "Do what thou wilt shall be the whole of the Law"2 was channeled by occultist Aleister Crowley as he wrote The Book of the Law (Liber AL vel Legis sub figura CCXX),3 the foundational book of his new religious philosophy of Thelema. Not only does this equate to Istvan's first law, but Istvan also echoes Crowley's dictum "Love is the law, love under will" in his other two laws. So here we have a modern transhumanist recapitulating the moral philosophy of an occultist who said he received it from a spirit voice! This isn't something new; it's a lie that's very, very old. In fact, it's pretty much as old as mankind being tempted to transcend his current state of being and become like God, knowing good from evil. That offer didn't work out very well for us, either.
The scary thing about all this is that Istvan cannot see how self-serving and dangerous such a moral system would actually be. Who defines what "safeguarding value in the universe" means? If omnipotence is my goal, then my existence becomes more valuable than anyone else's. Is this not the fundamental principle behind every single act of genocidal terror that humanity has witnessed in the last 100 years?
Ultimately, Istvan's view of the world is terrifying, not because I fear technology, but because I fear the evil in the human heart. By seeking to elevate himself above his limitations with nothing but his own desires to restrain him, he sends the message that humanity is worthy of being destroyed. Such beliefs don't elevate humanity; they debase it. Self-interest above all else is animalistic. Culture and civilization arise where one looks to the interests of others above one's own. This is pretty fundamental; most parents teach it to their children from the earliest ages. Accepting selfishness as a moral philosophy can only bode ill for the future of humanity.