Should AI Be Shut Down?
Recently, a number of prominent tech executives, including Elon Musk, signed an open letter urging a six-month pause on the training of the most powerful AI systems. That was not enough for AI theorist Eliezer Yudkowsky. In an opinion piece for TIME magazine, he argued that “We Need to Shut it All Down,” and he didn’t mince words:
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI … is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’
In a tone dripping with panic, Yudkowsky even suggested that countries like the U.S. should be willing to risk nuclear war “if that’s what it takes to reduce the risk of large AI training runs.”
The Potential Power of AI
Many experts suggest that the current state of artificial intelligence is more akin to harnessing the power of the atom for the first time than upgrading to the latest iPhone. Whereas computers of yesteryear simply categorized data, the latest versions of AI can understand the context of words as millions of people use them, and thus are able to solve problems, predict future outcomes, expand knowledge, and potentially even take action.
The possibilities, these experts suggest, are not limited to AI somehow “waking up” and achieving consciousness. A well-known thought experiment, the so-called “Paper Clip Maximizer,” describes a scenario in which a powerful AI is given the simple task to “create as many paper clips as possible” without any ethical guardrails. The AI could decide, in order to proceed in the most efficient way, to lock us out of the internet, assume control of entire industries, and dedicate Earth’s resources toward that singular goal. If it didn’t immediately know how to do these things, it could learn how, executing its goal of paper clip maximization to the detriment of all life on Earth. It’s a scenario that seems both frightening and possible in an age in which the internet is everywhere, entire industries are automated, and companies are racing to develop ever more powerful artificial intelligence.
The Real Danger of AI
The real danger posed by AI is not its potential. It is the lack of ethics. When our science and technologies are guided by an “if we can do something we should” kind of moral reasoning, bigger and faster is not better. Years ago, the philosopher Peter Kreeft pointed out the reality of technology outpacing our ethics: “Exactly when our toys have grown up with us from bows and arrows to thermonuclear bombs, we have become moral infants.”
Questions of right and wrong and what it means to be human are integral to the ethics of AI. When a technology is designed with malice or carelessness, its destructive capacity is evidence not of its fallenness, but of ours. As Dr. Kreeft wrote, technologies like thermonuclear weapons achieve something “all the moralists, preachers, prophets, saints, and sages in history could not do: they have made the practice of virtue a necessity for survival.”
Christians Know How the Story Ends
At the same time, Christians should never fall into fatalism. For atheists like Eliezer Yudkowsky, the threat of extinction by a superior race of sentient beings is somewhere between possible and inevitable. If the story of reality is the survival of the strongest and fittest, as atheistic Naturalism declares, then AI seems perfectly cast to take humanity’s place at the top of the heap. Absent a better definition of “humanity” than brute intelligence and ability, AI is a new and potentially violent Übermensch, destined to replace us.
Christians know that this is not how the story ends. Though we are capable of great evil, Someone greater than us is at the helm of history. Christianity can ground a vision of technology both ethically and teleologically. AI is neither an aberration to be abandoned nor a utopian dream to be pursued at all costs. Rather, like all technology, it is a powerful tool that must be governed by a shared ethical framework and an accurate vision of human value, dignity, and exceptionalism.
Of course, that vision is only true if we were created by Someone with superior intelligence, love, and wisdom.
John Stonestreet serves as president of the Colson Center for Christian Worldview. He’s a sought-after author and speaker on faith and culture, theology, worldview, education, and apologetics.
Kasey Leander is a Breakpoint Contributor at the Colson Center for Christian Worldview. Kasey’s passion is applying the answers of Jesus to modern life.
Originally published on Breakpoint.org. Republished with permission of the Colson Center for Christian Worldview.