
The robocall sounded authentic enough. Last year, New Hampshire voters received what appeared to be President Biden discouraging them from participating in the primary election. Only it wasn't Biden at all; it was a deepfake so convincing that election officials immediately launched an investigation. The incident reveals the fundamental inadequacy of our current approach to the deepfake crisis.
While cybersecurity experts debate detection algorithms, they’re missing a critical point: the detection arms race is already lost. Every improvement in identification technology is quickly matched by advances in generation capabilities.
Meanwhile, as Stanford researchers have documented, even well-intentioned platforms with robust content moderation can become vectors for state-aligned disinformation campaigns.
Brazil's Cybersecurity Competency Center (CISSA) has been pursuing a fundamentally different approach. Rather than trying to identify fake media after it's created, we're developing systems that establish authentic media provenance at the moment of creation using zero-knowledge proofs, a cryptographic technique that could transform how we establish trust in our increasingly synthetic media landscape.
The Detection Delusion
The current paradigm treats deepfakes as a detection problem: build better algorithms to identify manipulated content, deploy them across platforms and hope they keep pace with generation technology. This approach isn’t just inadequate—it’s backwards.
Detection systems must achieve near-perfect accuracy to be useful, while generation systems only need to fool detectors some of the time to spread misinformation effectively.
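A back-of-envelope calculation shows why this asymmetry is so punishing. Every number below is an illustrative assumption, not a measured platform statistic, but the shape of the result holds for any plausible values:

```python
# Back-of-envelope illustration of the detection asymmetry.
# All figures are hypothetical, chosen only to show the shape of the
# problem, not measured platform statistics.

uploads_per_day = 1_000_000_000   # assumed daily upload volume
synthetic_share = 0.001           # assume 0.1% of uploads are deepfakes
true_positive_rate = 0.99         # assume a very strong detector
false_positive_rate = 0.01        # 1% of real content wrongly flagged

fakes = uploads_per_day * synthetic_share
reals = uploads_per_day - fakes

missed_fakes = fakes * (1 - true_positive_rate)  # fakes that slip through
false_alarms = reals * false_positive_rate       # real posts wrongly flagged

print(f"Fakes that evade detection per day: {missed_fakes:,.0f}")
print(f"Legitimate posts wrongly flagged:   {false_alarms:,.0f}")
# Even at 99% accuracy, roughly 10,000 fakes spread daily while nearly
# 10 million legitimate posts are falsely flagged. A disinformation
# campaign needs only a handful of survivors to go viral.
```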
The Synthesia case illustrates these limitations well. Despite the platform's extensive content moderation efforts, bad actors still managed to create convincing fake news broadcasts for a pro-China disinformation campaign.
The Core Challenge: Authentication vs. Modification
The fundamental issue isn’t technical—it’s conceptual.
We’ve been treating media authentication like document signing, where any modification invalidates the signature. But legitimate media workflows require constant modification: cropping for social media, compression for bandwidth, format conversion for different platforms.
Traditional digital signatures fail because they can't distinguish legitimate modifications (resizing an image) from deceptive ones (replacing someone's face): any change to the underlying bytes, however benign, causes verification to fail. This forces a false choice between authentication and usability, and in practice usability wins, so authentication gets abandoned entirely.
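A minimal sketch makes the failure mode concrete. It uses Ed25519 signatures from the pyca/cryptography library; the media bytes are placeholders:

```python
# A minimal sketch of why whole-file signatures break under benign edits.
# Uses the pyca/cryptography library; the media bytes are stand-ins.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

camera_key = ed25519.Ed25519PrivateKey.generate()

original = b"...raw JPEG bytes as captured..."
signature = camera_key.sign(original)

# Verification succeeds on the untouched file.
camera_key.public_key().verify(signature, original)  # no exception raised

# Re-compressing for the web changes the bytes, even though the scene
# depicted is identical. The signature is now useless.
recompressed = original + b"\x00"  # stand-in for any benign re-encoding
try:
    camera_key.public_key().verify(signature, recompressed)
except InvalidSignature:
    print("Invalid: the verifier cannot tell a crop from a face swap.")
```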
Proving Truth Without Revealing Everything
Zero-knowledge proofs (ZKPs) allow someone to prove they know something without revealing the information itself. Instead of signing the raw media file—which breaks with any change—we can sign claims about the media: “This video was recorded by camera X at time Y” or “This image has only been cropped and compressed.”
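In code, the shift is from signing bytes to signing statements about bytes. The sketch below signs a plain JSON claim so the idea is visible; in a real ZKP system the claim would be proven inside a circuit, letting a verifier check it without ever seeing the device key or the original file. All field names here are illustrative:

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric import ed25519

device_key = ed25519.Ed25519PrivateKey.generate()  # held in the camera's secure enclave
video_bytes = b"...captured video stream..."       # placeholder for real media

# Sign a claim about the media rather than the raw file itself.
claim = {
    "content_hash": hashlib.sha256(video_bytes).hexdigest(),
    "device_id": "camera-X",                # illustrative identifier
    "captured_at": "2025-01-15T09:30:00Z",  # illustrative timestamp
}
claim_bytes = json.dumps(claim, sort_keys=True).encode()
claim_signature = device_key.sign(claim_bytes)

# The claim survives legitimate re-encoding: a verifier checks the claim's
# signature plus a proof that the published file was derived from the
# hashed original by allowed operations, instead of re-hashing raw bytes.
```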
CISSA’s research focuses on creating ZKP systems that establish a “chain of custody” for digital media. When a photo is taken, the camera’s secure enclave generates cryptographic proof of capture time, location and device identity. When the image is edited, each modification step adds its own proof, creating an auditable trail from creation to publication.
Crucially, this approach allows legitimate modifications while detecting deceptive ones. Cropping a photo updates the proof chain to reflect new dimensions. But replacing someone’s face would require breaking the cryptographic connection to the original capture—something computationally infeasible with proper implementation.
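The chain-of-custody structure can be sketched with ordinary hash links standing in for the ZK proofs. Every class and field name below is illustrative, and in a real deployment the bare hashes would be replaced by proofs bound to trusted hardware:

```python
import hashlib
import json
from dataclasses import dataclass, field

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

@dataclass
class ProvenanceChain:
    """Append-only edit history anchored to the original capture."""
    records: list = field(default_factory=list)

    def append(self, operation: str, media: bytes) -> None:
        prev = self.records[-1]["record_hash"] if self.records else None
        record = {
            "operation": operation,       # e.g. "capture", "crop", "compress"
            "media_hash": digest(media),  # file state after this step
            "prev_record": prev,          # link back to the previous step
        }
        record["record_hash"] = digest(json.dumps(record, sort_keys=True).encode())
        self.records.append(record)

chain = ProvenanceChain()
chain.append("capture", b"...original sensor data...")
chain.append("crop", b"...cropped bytes...")

# Hash links alone only give the chain its shape; in the real design each
# record also carries a proof tied to the capture device, so a face swap
# cannot mint a fresh, internally consistent chain of its own.
```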
The Green Padlock Vision
The success of HTTPS offers a template for deploying universal media authentication.
Twenty years ago, secure web connections were complex, expensive and rarely used. Today, browsers display security indicators for every website, and users instinctively look for the “green padlock” before entering sensitive information.
We envision a similar system for authenticated media. News articles, social media posts and digital communications would display verification indicators showing the source and modification history of embedded images and videos. Just as users learned to check for HTTPS, they could learn to verify media provenance before sharing or believing content.
The user experience must be simple. Most people don’t understand TLS certificates or cryptographic protocols, but they recognize the padlock icon. Similarly, media authentication needs clear visual indicators that work across cultures and education levels.
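What might such an indicator check? Here is a sketch of the client-side decision, with verdict categories loosely modeled on how browsers grade TLS connections. The categories and the allowed-operations policy are assumptions, not a standard:

```python
from enum import Enum

class Verdict(Enum):
    VERIFIED = "Verified capture"
    EDITED = "Verified, with allowed edits"
    UNVERIFIED = "No provenance data"
    TAMPERED = "Provenance check failed"

ALLOWED_OPS = {"capture", "crop", "compress", "resize"}  # assumed policy

def badge(manifest: dict | None) -> Verdict:
    """Map a provenance manifest to a padlock-style user-facing verdict."""
    if not manifest:
        return Verdict.UNVERIFIED
    if not manifest.get("proofs_valid"):  # outcome of the cryptographic checks
        return Verdict.TAMPERED
    ops = set(manifest.get("operations", []))
    if ops == {"capture"}:
        return Verdict.VERIFIED
    return Verdict.EDITED if ops <= ALLOWED_OPS else Verdict.TAMPERED

# Example: a photo that was captured, then cropped and compressed.
print(badge({"proofs_valid": True,
             "operations": ["capture", "crop", "compress"]}).value)
```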
Implementation Challenges
Deploying universal media authentication faces significant hurdles. Current prototypes work acceptably for photos, but video remains challenging: proof generation costs grow with the amount of data being attested, and video carries orders of magnitude more data than a still image.
Industry adoption presents another challenge. Unlike HTTPS, which was driven by e-commerce needs and regulatory requirements, media authentication lacks clear commercial incentives. Social media platforms profit from engagement, not truth, and authenticated content might actually reduce viral sharing if users become more skeptical of unverified media.
Regulatory intervention may be necessary. Legislation like the EU’s Digital Services Act could mandate authentication features for platforms above certain size thresholds, creating the regulatory framework needed for widespread adoption.
From Research to Reality
CISSA’s implementation emphasizes practicality over perfection.
Rather than trying to authenticate all media immediately, CISSA is focused on high-risk categories: political content during elections, financial information and public safety announcements. This targeted approach allows us to refine the technology while demonstrating clear value.
Mobile optimization is critical. Partnerships with hardware manufacturers could embed authentication capabilities at the system level, much as fingerprint and face recognition are built into devices today.
Rebuilding Information Trust
Media authentication alone won’t solve the misinformation crisis, but it could fundamentally change how we evaluate digital content. By making provenance visible and verifiable, authentication systems shift the burden of proof from skeptical consumers to content creators. This reversal—from “prove it’s fake” to “prove it’s real”—could transform information ecosystems.
The implications extend beyond individual verification decisions. Researchers could analyze provenance data to understand how authentic vs. synthetic content propagates differently. Platforms could adjust algorithms to favor verified content without engaging in subjective fact-checking. Regulators could require authentication for political ads while preserving free speech principles.
Most importantly, authentication systems could restore agency to information consumers. Instead of relying on platform algorithms or fact-checkers to determine credibility, users could make their own informed decisions based on verifiable provenance data.
A Global Imperative
The deepfake crisis demands global, institutional responses, not just technological ones.
As synthetic media capabilities spread to malicious actors worldwide, the window for implementing preventive measures narrows rapidly. The choice isn’t between perfect detection and imperfect authentication—it’s between proactive verification systems and continued reliance on reactive detection that becomes less effective daily.
Zero-knowledge proofs offer a path forward that preserves privacy while enabling verification, allows legitimate use while preventing abuse, and works globally while respecting local sovereignty. The technology exists; the challenge now is building the institutional partnerships and regulatory frameworks necessary for implementation.
As traditional media economics collapse and information warfare intensifies, the ability to distinguish authentic from synthetic content becomes essential for democratic governance and social cohesion. We can continue playing defense in an arms race we're destined to lose, or we can build authentication systems that make that arms race irrelevant. The choice is ours, but time is running out.