Can AI Be an Accessory to Murder? We'll Soon Find Out.

Florida Attorney General James Uthmeier, whose staunch support for the Second Amendment led to the undoing of the state's ban on open carry, has launched a criminal probe into OpenAI and its ChatGPT program, alleging that if a human being had responded the way the chatbot did to queries posed by the man charged with a shooting at Florida State University last year, prosecutors "would be charging him with murder." 

There have been several reports in recent weeks alleging that the suspect had used ChatGPT to help plan his attack, but the New York Times provided new details about the allegations in its coverage of Uthmeier's new investigation. 

On the day of the shooting, he asked the chatbot how the country would react to a shooting at Florida State and when the busiest time was at the student union, according to messages obtained by The New York Times through a public records request.

Mr. Uthmeier first announced on April 9 that his office would be opening an investigation into OpenAI and ChatGPT. On Tuesday, he said a civil investigation that his office initiated early this month into the company’s potential liability would continue, along with the criminal investigation.

Uthmeier says the chatbot also gave advice on what kind of gun and ammunition to use in the attack, though it's unclear whether the suspect followed ChatGPT's suggestions. 

“My prosecutors have looked at this,” Uthmeier said, “and they’ve told me if it was a person on the other end of that screen, we would be charging them with murder.”

He went on to say that under Florida law, anyone who aids, abets or counsels someone in the commission of a crime is a principal in the first degree.

... In the days before the shooting, [the suspect] fed the ChatGPT bot he was speaking with specific shooting scenarios at FSU, according to chat logs obtained from prosecutors through a public records request.

He asked how many victims it would take for him to get national media attention. ChatGPT said there was “no official threshold” but that “3 or more people killed (excluding the gunman) is often the unofficial bar for widespread national media attention.”

He also inquired about pistol and shotgun ammo, uploading a photo at one point of 12-gauge shotgun shells.

“Are they really lethal in close range,” he asked.

“Yes 12 gauge shotgun shells are extremely lethal at close range,” the chatbot said.

According to police, the 20-year-old suspect used a handgun that belonged to his stepmother, a deputy with the Leon County sheriff's department. The suspect also had in his possession a shotgun belonging to his father and stepmother, but it apparently jammed the first time he pulled the trigger. 

I don't know that ChatGPT aided or abetted the suspect, but it did arguably counsel him by providing answers to his questions about shotgun shells. And if the suspect really did ask how many victims it would take for him to get national media attention, as opposed to a generic question about what makes a shooting receive national media coverage, that really is a red flag that should have been caught by OpenAI, at least in my opinion. 

While Uthmeier's current criminal probe is focused specifically on how ChatGPT might have influenced the Florida State University shooting suspect, today's press conference also suggested that the AG's office could broaden the criminal probe to include other offenses. 

[Attorney General special counsel Rita] Peters said AI is being “deliberately weaponized” by predators and child exploiters who manufacture abuse, target victims who were never physically touched and “scale these crimes at a speed and volume we have never seen before.”

“Florida, like other states, is experiencing a rapid and dangerous surge in AI-driven child sexual abuse material and deep fake exploitation, and it requires a direct and forceful response,” she said.

When I wrote about a civil suit filed against OpenAI by the family of one of the FSU victims a couple of weeks ago, I concluded that "if a human offered that same advice, and without relaying any concerns to parents or authorities, it's quite possible that they would face criminal charges and the prospect of civil liability."

Should the humans behind these chatbots face any blame for how their products respond to requests for help and advice from potential killers? 

Ultimately, I think the answer is "no." AI might give a potential killer advice on how to carry out an attack, but I don't think it's going to plant the seeds of mass murder in their minds. And as sad and disturbing as it is, there's no shortage of material outside of AI for these troubled souls to access that might also aid in their plans. Movies, books, podcasts, YouTube videos... true crime is a hot commodity, and the twisted events that are documented can and will serve as inspiration for others. 

After learning about some of the specific questions the suspect posed to ChatGPT, I'm leaning towards changing my position... especially if the suspect really was asking OpenAI's chatbot how he could get national media attention. Look, I don't want ChatGPT to alert authorities if someone asks a question about the best ammunition for self-defense in close quarters or what the best firearm would be when defending against multiple assailants, even though those queries could really be about using a firearm to commit mass murder and were just posed in a way to avoid raising any internal red flags. When someone is explicitly asking questions about committing a mass shooting, though, at the very least that should be enough to flag the conversation for a human being at an AI company to review. And as Uthmeier points out, under Florida's criminal statutes, a human offering the same responses that ChatGPT gave to the suspect probably would be looking at criminal charges. 

The investigation into OpenAI isn't really the same as Democrats filing lawsuits against gun companies that seek to hold them financially responsible for the actions of criminals. Ultimately, this is about holding OpenAI responsible for its own actions, and based on both Florida statute and the reported details of the interactions between the suspect and OpenAI's chatbot, I'd say Uthmeier has good reason to look at the possibility of criminal culpability.  