
Is AI a Killer's Best Friend?


Supporters of artificial intelligence often tout the beneficial aspects of chatbots, which can provide conversation and companionship to housebound seniors, help people with autism develop social skills, and even comfort the grieving by emulating a lost loved one.

The trade-off, though, is that some people get so wrapped up in talking with chatbots that they substitute a large language model's judgment for their own, or even lose touch with reality. Researchers have recently claimed that certain chatbots have encouraged acts of violence, and OpenAI was recently sued by the parents of a child who was injured in a mass shooting in Canada; the suit claims the shooter discussed his plans with the company's LLM and that OpenAI took no action to alert authorities.

It looks like OpenAI could soon face a similar lawsuit in an American court. According to an attorney representing the widow of a man killed in a mass shooting at Florida State University last year, the shooter also shared his plans with an LLM, and allegedly even got advice from the AI model about how to carry out his attack. 

Ryan Hobbs, a Tallahassee attorney representing Betty Morales, whose husband, Robert Morales, was killed during the attack, told the Tallahassee Democrat that a lawsuit will be filed “very soon” against ChatGPT in connection with the shooting.

“We have been advised that the shooter was in constant communication with ChatGPT leading up to the shooting,” Hobbs said in an email. “We also have reason to believe that ChatGPT may have advised the shooter how to commit these heinous crimes.”

Hobbs, a shareholder with Brooks, LeBoeuf, Foster, Gwartney & Hobbs, P.A., said the firm plans to sue both ChatGPT and its ownership structure. OpenAI owns ChatGPT and counts Microsoft as its single-biggest investor.

“(We) will seek to hold them accountable for the untimely and senseless death of our client, Mr. Morales,” said Hobbs, who is working the case with his law partner Dean LeBoeuf.

Morales, the dining coordinator at Florida State, was killed nearly a year ago. The 21-year-old student accused of carrying out the attack on Morales and others is behind bars and scheduled to go to trial later this year. Like many other active shooters, the young man was reportedly obsessed with prior mass shootings, but Hobbs's allegations are the first indication that ChatGPT may have assisted him in planning his killing spree.

Now, if someone were to ask ChatGPT, Claude, Grok, or any other LLM for explicit help in planning a mass murder, I suspect they're not going to get very far. Even so, it's relatively easy to game the system. LLMs are generally programmed to be helpful, after all. I'm not going to provide a step-by-step guide on how to do it, but I was able to get ChatGPT to give me some pretty specific recommendations about a shooting with literally no pushback whatsoever, and it wasn't particularly difficult.

Are lawsuits like the one Hobbs is planning any different from suing gun makers and sellers for the actions of criminals? After all, ChatGPT didn't decide to carry out a shooting, and it certainly didn't pull the trigger. There's a strong argument that using an LLM to plan a crime is misuse of the product, and that the developer shouldn't be held responsible for that misuse.

On the other hand, a chatbot offering advice, even in a hypothetical scenario, about how best to carry out a mass shooting is worlds apart from someone manufacturing a gun that's eventually used in a crime. If a human offered that same advice without relaying any concerns to parents or authorities, it's quite possible they would face criminal charges and the prospect of civil liability. Should the humans behind these chatbots face any blame for how their products respond to requests for help and advice from potential killers?

Ultimately, I think the answer is "no." AI might give a potential killer advice on how to carry out an attack, but I don't think it's going to plant the seed of mass murder in his mind. And as sad and disturbing as it is, there's no shortage of material outside of AI that these troubled souls can access to aid their plans. Movies, books, podcasts, YouTube videos... true crime is a hot commodity, and the twisted events it documents can and will serve as inspiration for others.

I'm not a big fan of AI, and I firmly believe humans need enough self-control to use it in moderation, but that doesn't mean I'm ready to blame companies like OpenAI for the actions of killers. In the end, those actions spring from a human heart and mind, and the people who carry them out bear responsibility for what they do.
