What we call AI isn't what we science fiction nerds have long expected. It's close, impressively so, but it's not really thinking. It has no qualms and no morals. Like any tool, though, it can do almost anything a determined user wants it to do.
This has led to a lot of confusion throughout our society. It's getting to the point where I can't automatically tell whether a video I'm watching was created by AI. It's easy to make people think they're seeing something they're not.
And as Lee Williams notes over at his Substack, it's likely to be turned against us.
Someday soon you will receive a video call, or perhaps a text or voice mail video from a local politician well known to you. They may be a sheriff or chief of police, or even a state official. This person will begrudgingly tell you there was a bad vote in the statehouse, and something you use daily, and value highly is now illegal and must be destroyed. It may be a standard-capacity magazine or a brace or a specific type of weapon or ammunition. As a result, thousands of the items will likely be eliminated.
You’re smiling right now. After all, you would never do something like that, right? The truth is you’re not the target. The Artificial Intelligence (AI) message was not meant for you, someone who values their Constitutional rights enough to keep informed by visiting websites like this. The false AI message was meant for those who are far less informed, and it will produce exactly what its creators intended.
It is happening right now.
Last week, the State Department sent every U.S. Embassy a warning that a recent video of Secretary of State Marco Rubio was a sham. The fake messages were sent via text, voicemail and Signal to a group that included governors, U.S. Senators and even foreign ministers.
“The State Department is aware of this incident and is currently monitoring and addressing the matter,” U.S. State Department spokeswoman Tammy Bruce told the media. “The department takes seriously its responsibility to safeguard its information and continuously take steps to improve the department's cybersecurity posture to prevent future incidents.”
Williams argues that this kind of technology will be used against gun owners sooner or later.
I'd go a step beyond what he said, though, and point out that it already has.
In February 2024, I wrote about how Parkland father Manuel Oliver's anti-gun organization used AI to recreate the voices of kids killed in shootings, calling for gun control.
It didn't seem to be particularly effective, but it was still an assault on our rights using AI. Admittedly, it wasn't the most nefarious use possible; it's nothing like the scenario Williams describes in his piece. But it most definitely was an opening salvo, and others will likely follow.
And what he suggests might happen could have some nasty ramifications.
Think about the people who are ready to go to war over their gun rights but don't necessarily follow politics. Most do follow gun policy, but there are always some who don't. Now imagine them being told they have to turn in their guns next week. What do you think those folks will do? It won't be cleaning their guns so the officers confiscating them don't get dirty, that's for sure.
Then there are the things we're not even considering, the uses beyond our own imagining, simply because we're not broken enough to ponder such things.
Honestly, while I find AI to be useful for some things, the ramifications of this version of AI are also kind of terrifying.