
Everytown Enlists AI to Attack Right to Keep and Bear Arms


What we call artificial intelligence, or AI, isn't quite the stuff of science fiction, but it's close. Scary close. The only upside is that, for the most part, it's still kind of stupid on a lot of things. We've likely all had AI tell us stuff that simply wasn't true.

And Everytown, which also tells people stuff that isn't true, is now going to use AI in its anti-gun efforts.

Yeah, really.

Not only have they "developed" an AI system to help their cause, but massive warning flags have already appeared.

On Monday, Everytown announced they had created the Everytown Evidence Engine, or E3, an AI system they claimed would help them “harness AI policy to identify gun safety solutions.”

They made the move to AI because “efficient systems for analysis can lead to new questions and new answers in the field of gun violence prevention research.”

How reliable is Everytown’s new AI?

You can judge for yourself.

Claude

Everytown admitted its new E3 system was built using Claude, an AI system designed by the firm Anthropic. Both Claude and Anthropic have had significant problems.

Just four days ago, in a story titled “Anthropic Admitted Claude Code Broke. We Were Right,” a writer on Medium announced he had found issues with the system.

The reporter’s hard work forced Anthropic to admit that Claude had major problems.

In a story titled “An update on recent Claude Code quality reports,” Anthropic claimed they had fixed everything.

“Over the past month, we’ve been looking into reports that Claude’s responses have worsened for some users. We’ve traced these reports to three separate changes that affected Claude Code, the Claude Agent SDK, and Claude Cowork. The API was not impacted,” Anthropic claimed.

The firm also promised they would “do things differently to avoid these issues,” and that more of their staff would use the public version of the software.

It's kind of strange they'd do this amid at least one lawsuit alleging that AI helped plan a mass killing, among many other troubling incidents of AI encouraging criminal acts or acts of self-harm.

My arguing with Grok over a bit of 19th-century woodworking text seems pretty milquetoast in comparison. Grok wasn't telling me I should murder my family with my chisels, after all.

Now that Everytown has thrown in with the AI bots, though, what will we see this particular product churn out? Anthropic's Claude has problems, to say the least. It's actually encouraged people to believe they have a divine purpose, when they don't, beyond that which every person has as a child of God. It's called "AI psychosis," and now Everytown's followers are going to be slurping it up like a stockbroker hitting the Colombian Bam-Bam at a party in the 1980s.

How many pathetic and wrong beliefs are going to be perpetuated with this?

Even Everytown knows that there are "limitations" in what their new system can do, but the problem is that far too often, AI doesn't acknowledge its limitations. It speaks authoritatively about topics, dismisses information as irrelevant when it's not, and ultimately "thinks" it knows what it's saying when it doesn't.

So, kind of like your average Everytown supporter or staffer, really.

Will this change much in the gun rights space? Probably not. I don't know about you, but there's no way in hell I'm taking the word of Everytown's AI chatbot over everything I've seen with my own two eyes. We all know how "garbage in, garbage out" works, too. People aren't going to turn to this E3 thing to find real answers, only to get their anti-gun beliefs reinforced.

At least, I hope that's the case. People can be pretty dumb at times, though, and I'd hate to find out that I'm wrong. If so, we've got a big problem, because we've got a nation of sheep who really have been waiting for their AI overlords to tell them what to think.
