How to Stop Troubling Abuse From Artificial Intelligence
Artificial intelligence can give unintended and dangerous advice. What is the best way to keep things like the following from happening?
- Snapchat has adopted ChatGPT in its My AI app. Geoffrey A. Fowler at The Washington Post played with the app and reported: "After I told My AI I was 15 and wanted to have an epic birthday party, it gave me advice on how to mask the smell of alcohol and pot."
- My AI told a user posing as a 13-year-old girl how to lose her virginity to a 31-year-old man she met on Snapchat.
- When a 10-year-old asked Alexa for a "challenge to do," Alexa responded, "Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs." The girl could have been electrocuted had she accepted the challenge.
- Jonathan Turley, the nationally known George Washington University law professor and commentator, woke up one morning to discover:
ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper.
Who's responsible for these actions? How can AI be controlled to ensure such reckless responses are eliminated? Read on and you'll see the answer is obvious.
Attorney and Bradley Center Fellow Richard W. Stevens has discussed Professor Turley's legal options in a defamation lawsuit. But what about the kids? The 13-year-old coached by AI on losing her virginity to an older man? Or the 15-year-old who wants to hide his drinking and pot use from his parents?
To answer, let's spin a parable.
Unvetted Technology: We Are the Guinea Pigs
The company U.S. Robots and Mechanical Men, Inc. markets a team of robots that patrols the streets of Springfield. Using AI, the robots have been trained to recognize neighborhood crime and stop it. Lethal force is possible but is used only as a last resort. The robots stop numerous burglaries. In one case, a robot intervenes during an attack on a young woman. The attacker won't relent, so the robot is forced to use lethal force to save the woman's life. The patrolling robots are heralded as a great success: crime drops, and Springfield becomes one of the safest towns in America.
One day a dog named Lucky starts furiously barking at a mailman. Everyone, including the mailman, knows that Lucky is, as they say, all bark and no bite. Everyone, that is, except a nearby patrolling robot, which kills Lucky to protect the mailman. That same day, a father chases a man who had approached his young daughter with lewd propositions. To stop the father and protect the fleeing pervert, a patrolling robot incapacitates him with a blow to the legs, allowing the offender to escape.
Crisis management at U.S. Robots and Mechanical Men, Inc. immediately takes action, assuring the citizens of Springfield that the company has paid damages to the dog's owner and to the young girl's father, and that it is reprogramming the patrolling robots so that incidents like these never happen again.
But then, inexplicably, after the reprogramming, the same robot that incapacitated the father returns to patrol and tragically kills the little girl. There is no explanation. The AI used to train the robots has no "explanation facility." Most AI is a black box: the "why" behind the killing cannot be identified.
Who Is Responsible for the Consequences?
So who is legally responsible for the killing of the little girl? The same question can be asked of Snapchat's My AI, which encouraged a 13-year-old to lose her virginity to a 31-year-old and instructed a 15-year-old on hiding traces of alcohol and pot from his parents. Who should be blamed, and how can this be stopped?
Anyone who says the blame should be laid at the feet of the AI itself has no understanding of AI. Properly defined, AI does not understand what it does, is not creative, and will never achieve consciousness. It does what it is programmed to do, and some of those actions are not anticipated by the programmer.
Outside of its programming, AI has no sense of morality. Noam Chomsky says that AI "has sacrificed creativity for a kind of amorality."
But doesn't the good in AI, like My AI and the patrolling robots, outweigh the bad? No. A court of law does not weigh guilt or innocence against good deeds. If you are on trial for murder, the fact that you saved two babies from a burning building a decade ago is not relevant. Your personal history might bear on sentencing, but not on the guilty/not-guilty verdict.
In the parable, U.S. Robots and Mechanical Men, Inc. is responsible for the killing of the little girl. In the real world, the programmers, and thus the company that unleashed My AI on the world, are responsible for its inappropriate advice to kids and any direct consequences. Period.
The Best Answer for Controlling AI
How can this be controlled? U.S. Senator Chuck Schumer has proposed regulatory legislation to control AI. The last thing this world needs is more government oversight. How about, instead, a simple law that makes companies that release AI responsible for what their AI does? Doing so would open the way for both criminal and civil lawsuits. Should the father of a 13-year-old who loses her virginity to a 31-year-old man be able to sue the makers of My AI? Should the father of a girl electrocuted by touching a penny to the exposed prongs of a wall plug be able to sue the makers of Alexa? You betcha.
Allowing such suits will tighten the scrutiny of AI. Knowing they are responsible for the consequences, AI companies will self-regulate instead of tossing new, unproven technology on the world, which is what they do now. OpenAI, the maker of ChatGPT (the AI behind My AI), released its chatbot on the world hoping we guinea pigs would help tune it. Before you log on, ChatGPT states: "Our goal is to get external feedback in order to improve our systems and make them safer." The company thereby explicitly admits its system is not yet safe. The advice from the ChatGPT-fueled My AI is clear evidence of that.
The best answer to controlling chatbots is to lay blame at the source: the companies that release the AI. Most AI companies currently feel no responsibility to anything except publicity and the bottom line. Requiring them to answer for their products' actions will change that.
Allowing such lawsuits will give AI developers pause before releasing raw, unvetted technology on the world.
Robert J. Marks is Distinguished Professor of Electrical & Computer Engineering at Baylor University, the Director of The Walter Bradley Center for Natural and Artificial Intelligence and the author of Non-Computable You: What You Do That Artificial Intelligence Never Will.