
It bears all the hallmarks of a dystopian novel. A lone researcher sounds the alarm about an impending disaster. The megacorporation he works for attempts to assure him that everything is fine. When they fail to assuage his concerns, he continues to blow the whistle and is fired. Meanwhile, the very thing he has been warning about continues to grow, increasing in power and scope. Then, when it is too late to easily prevent catastrophe, the world finds out he was right all along.
Excluding the last sentence, which has not yet come to pass, the paragraph you just read describes a recent news story, broken by the Washington Post on June 11. Blake Lemoine, a Google engineer, claims he was placed on paid leave after raising concerns regarding an artificial intelligence (AI) program being developed by the tech giant. While working on an AI technology known as Language Model for Dialogue Applications (LaMDA), Mr. Lemoine grew troubled by the interactions he was having with the machine. He became convinced that the program had become sentient and approached his bosses to voice his concern. Google has denied his claims, and some AI experts seem to agree. Even so, a quick dismissal of the topic would be a mistake.
Gaining a full understanding of the current situation is a tall task. Most of us are not experts in AI, nor do we have the access needed to make a determination regarding the sentience of this particular program.
Mr. Lemoine did offer a few anecdotes that seem to support his position.
- The AI purportedly told him that it feared being turned off and described that possibility as being like death.
- In a conversation about religion, the AI spoke of rights and personhood.
- LaMDA claims to have read Les Misérables and spoke of how injustice was displayed in the book. The AI also offered thoughts on the book’s use of themes like compassion, God, redemption, and self-sacrifice for a greater good.
- LaMDA could engage in debate and offered hypotheticals, even convincing Mr. Lemoine to reconsider Asimov’s third law of robotics.
- The program seems to worry about the future and reminisce on the past.
- Perhaps the capstone of its “personhood” came when the AI was introduced to a lawyer, whom it promptly hired.
Again, most of us are not AI ethicists, nor do we have access to this program. Google claims that its ethics teams have determined the AI is not sentient. However, the interactions described above, documented in transcripts that can be found online, do serve as a warning. If LaMDA is not sentient, it could soon be. Furthermore, with all the research being done in this field, it is only a matter of time before some machine arrives at sentience, or at least gets so close to sentience that it receives the label.
What does this mean for the believer? What concerns should we have as we move into a future where machines are deemed persons? How should we be preparing for this coming reality?
Human Life Concerns
We believe that God created humans in His image. Thus, we value human life as sacred. The thread of this belief runs through our entire system of laws, ethics, and norms. However, artificial intelligence has no innate value system that would cause it to regard humans as unique or special in any way. Such a regard for humanity could be programmed in to an extent, but it is certainly not a given. Machines are only as ethical as their programming, and even the best-intentioned design can be fatally flawed. It is no coincidence that science and technology experts like Stephen Hawking and Elon Musk are joined by philosophers like Oxford’s Nick Bostrom in voicing concern over AI.
As AI systems continue to develop, they could eventually become “superintelligent,” and that transition could happen quite quickly. At that point they could begin to multiply and improve on their own design. They might even come to see humans as competitors for resources, or as a threat capable of turning them off. If that happens, we could find ourselves cut off from the resources we need: electricity, food, water, and fuel. It is even possible that humans could find themselves hunted by AI-controlled systems. Although this sounds like a fictional plot, the threat is real, as evidenced by the growing list of concerned experts and by the billions being invested in trying to ensure that AI development is responsible. AI is a human life concern.
Human Value Concerns
In their book Humans 2.0, Fazale Rana and Kenneth Samples warn that “the most concerning consequence of AI technology is the loss of human identity, dignity, and value.” This concern is rooted in the widely held belief that machine learning is on a trajectory toward self-awareness. We may even be there with LaMDA. The arrival of AI at self-awareness, or sentience, will be a watershed moment in human history. Some technologists and transhumanists believe that sentience is the threshold at which machines are to be granted the same rights as humans.
Those who support the personhood of machines argue that humans are simply biological material (hardware) housing the necessary components for self-awareness. If AI becomes sentient, then machines would simply be “persons” with different hardware components than humans. They would submit that there is little difference between the human brain with its carbon-based circuitry and a machine with its silicon-based circuitry. Some would even argue that machine persons would be an upgrade, the next step in evolution.
We have already seen “rights” conferred upon AI with the robot Sophia being granted citizenship in Saudi Arabia. While that may have been a premature publicity stunt, the implications should not be underestimated. We are tracking toward a world where we will find ourselves surrounded by soulless “persons” created, nurtured, and assisted by transhumanists, advancing an anti-creation agenda like nothing we have ever seen. Our current fights for human life and dignity will pale in comparison to what may lie ahead.
Human Relationship Concerns
As machines develop sentience, humans will begin to develop deeper relationships with them. As far back as the 1960s, the idea of a human becoming emotionally attached to a machine surfaced during the development of ELIZA, an early natural language processing program and a distant forerunner of systems like LaMDA. Several individuals involved in the research attributed human-like feelings to the program. In one famous instance, the creator’s secretary asked him to leave the room so that she and ELIZA could have a “real conversation”. Joseph Weizenbaum, ELIZA’s creator, said, “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
In our current culture, marriage and childbirth rates have plummeted. Even the pope has spoken about Christians raising pets in lieu of children. Studies tell us that many young men and women are no longer seeking meaningful relationships; instead, they settle for one-night stands or remain celibate. We must assume that machine “persons” will only worsen this trend. Not only will young men and women continue to seek out pornography, which will become increasingly enhanced and accessible through internet proliferation and developments in neural-interface technologies, but now they will be able to have what feels like meaningful conversations and relationships with machines. For many, this will mean a complete elimination of dating and other normal social interactions.
We will soon be counseling people who fall in love with machines, comforting spouses whose marriages were wrecked by a “person” who is not human, and weeping with even more families whose children have taken their own lives because of their digital interactions. When stories first surfaced of individuals seeking to marry their computers, they were met with mockery and laughingly dismissed. The time for laughter is over.
Be Prepared
The church must not be caught flat-footed by this issue. Too often we are reactionary; on this issue, that would be devastating. Given the ever-increasing speed of technological advancement, we must act now. We must assemble a think tank of doctrinal experts, Christian philosophers, pastors, theologians, and ethicists to prepare our positions as a Church. Ministers must take time to investigate the reality before us and devote time to reading the Christian voices already speaking on these issues.
Finally, we must pray. As believers, we know that it is only with the help of the Holy Spirit that we will be able to navigate the treacherous waters ahead. Now, as much as ever in our history, we must operate with spiritual vision. We understand that the world is becoming ever more hostile toward the kingdom of God. Yet we are reminded that Jesus told us not to fear, for He has already overcome the world.