The United States government made a bold but quiet move this January by revoking Executive Order 14110, which had provided federal direction on the safe and responsible development of artificial intelligence. Signed in October 2023, the order was the federal government's most comprehensive formal stance on the ethical use of AI. It outlined a rulebook for keeping this rapidly evolving technology from going off the rails. Now that it has been rolled back, a giant question mark hovers over what comes next.
This change comes at a time when AI is growing faster than ever and popping up everywhere from healthcare to finance to TikTok filters. But with that growth come deepfakes, data scraping, and facial recognition systems that don't always know where the ethical lines are drawn. When AI can mimic your voice, forge a convincing video, or harvest your personal information, all without raising a red flag, we're not just dealing with smart tools; we're dealing with serious security risks.
What's at Stake?
Right now, fake content made by AI is becoming alarmingly realistic. We’ve seen videos showing people saying things they’ve never said, photos that look real but aren’t, and audio clips that are entirely fabricated. These fakes can spark chaos, from damaging someone’s reputation to manipulating entire audiences, putting some of our most important fields at risk.
- Legal system: Imagine a courtroom where audio or video evidence can no longer be trusted. The justice system takes a significant hit if deepfakes can be slipped into proceedings.
- Media: Journalists now need to vet content not just for bias or accuracy, but for whether it's real at all. In the age of AI-generated news clips, public trust becomes fragile.
- Law enforcement: There’s a fine line between using AI to solve crimes and using it in ways that invade privacy. Without proper rules, the line blurs fast.
- Finance: There have already been real cases where AI has been used to impersonate executives and pull off high-dollar scams. AI tools also influence trading decisions, and without proper guardrails, they could mess with the stability of entire markets.
So, What Can Be Done?
While the revocation of EO 14110 is a setback, it also gives companies and developers a moment to regroup and think smarter. The keyword here is authenticity. Even if we can't always control what AI can do, we can get better at verifying what's real.
That means investing in ways to authenticate digital content. Think digital watermarks, secure storage systems, and better training for anyone working with AI tools. It also means having clear internal rules for how AI gets used — what’s allowed, what’s not, and what to watch out for.
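To make "authenticating digital content" concrete, here is a minimal sketch in Python using only the standard library: it fingerprints a media file with SHA-256 when the file is created and attaches an HMAC tag, so any later edit to the file fails verification. This is an illustrative toy under stated assumptions, not SWEAR's product or a full provenance standard like C2PA; the file name, key, and function names are hypothetical, and a production system would use asymmetric signatures and signed manifests rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical shared secret held by the capture device or publisher.
# Real provenance systems use asymmetric keys (e.g., Ed25519), not this.
SECRET_KEY = b"replace-with-a-real-key"


def fingerprint(path: str) -> bytes:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.digest()


def tag_at_capture(path: str) -> bytes:
    """Compute an HMAC tag over the file's digest at creation time."""
    return hmac.new(SECRET_KEY, fingerprint(path), hashlib.sha256).digest()


def verify_later(path: str, stored_tag: bytes) -> bool:
    """Re-derive the tag and compare in constant time.

    Any edit to the file -- a swapped frame, altered audio -- changes
    the digest, so the check fails.
    """
    expected = hmac.new(SECRET_KEY, fingerprint(path), hashlib.sha256).digest()
    return hmac.compare_digest(expected, stored_tag)


# Hypothetical usage:
#   tag = tag_at_capture("clip.mp4")   # at capture time
#   verify_later("clip.mp4", tag)      # later: True only if unmodified
```

The point of the sketch is the workflow, not the cryptography: content is fingerprinted at the moment of creation, and anyone downstream can check that what they received is what was originally recorded.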
AI is going to keep growing. That’s a given. But in a world where seeing no longer means believing, the ability to prove what’s real matters more than ever.
The takeaway? Just because EO 14110 is gone doesn't mean the ethics surrounding AI usage should go with it. Responsibility is shifting from the government to developers, businesses, and anyone who interacts with AI. And while the lines may be blurred, one thing is clear: proving truth in the digital world isn't optional; it's a must.
This blog is based on a contributed article from SWEAR CEO Jason Crawforth for Forbes.