The Trust Inversion
There's a steady flow of hyper-realistic content in my digital bloodstream lately – the type that evokes both wonder and terror. The type that makes me think: Was that made by AI?
I have a discerning eye. It’s kind of my job to know what’s happening in AI tech and policy, so I’m always on guard. And yet, every once in a while, I'm stumped. That video of a giraffe fighting a lion – amazing! Did Sora make that? That podcast with the Chinese philosopher – fascinating! Was that NotebookLM? She looks great in that selfie! What filter is that?
When I look at such content now, I'm basically asking three things:
- Did a human make that?
- Did an AI create or modify it in some way?
- Did a human and AI actively work together to make it?
Three Types of Authentication
Human Created
First, we need a system to verify whether a digital artifact was created by a human without any AI assistance. This type of authentication is useful for three reasons:
- There will be a market for human-created digital artifacts because they will be increasingly rare. As someone who has recently come to appreciate the value of a hand-crafted Japanese watch, I anticipate a similar premium being attached to digital artifacts that are human-created, which will naturally stand out in a world of AI slop. And it will be easier to discover, buy and enjoy such things with a positive filter than to try to eliminate all the slop.
- For legal reasons, we may need to authenticate certain media as being AI-free (contracts, copyright claims, CCTV footage). This kind of human authentication will also be useful for solving cyber crimes and even for grading exams.
- A verification system for human creation is an easy way for honest people to signal their value. They created it. They want you to know it. They will authenticate it.
Synthetic Content
Generative AI is helping us recreate, remix and reimagine the world in spectacular ways. It’s also the weapon of choice in fraud, child porn and cyber warfare today.
So when something looks realistic but feels sketchy, I immediately want to know if AI was involved. In a future scenario, AI agents may create content by themselves, without any prompts, some of it potentially harmful. That’s why an authentication system for ‘synthetic content’ is important. Whether it's a parody video of a public figure, a voice note with some artificial bounce, or a morphed image used in a news story, I want to know explicitly if it was AI-generated or modified.
Mixed Media
Most digital content will be a blend of human and AI expression. Such hybrid content will require its own system of content authentication and provenance.
This matters especially in professional contexts. When a doctor signs off on an AI-assisted diagnosis, when a lawyer submits a brief with AI-generated citations, when an artist sells creative work that incorporates AI tools – we need a verifiable record of that collaboration.
How was it originally created? Who modified it? What was changed, and by what means? This kind of transparency is crucial in resolving copyright disputes, fraud investigations and negligence claims involving both human and AI systems.
We certainly don't need to track this at a granular level (not every keystroke, please). That would be overkill. But I do think there's value in knowing, at a reasonable level of detail, when and how humans and AI worked together to create something. The challenge lies in striking the right balance – enough detail to be useful, but not so much that it becomes invasive.
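To make that concrete, here is a rough sketch in Python of the kind of coarse-grained record I have in mind. The field names and structure are my own invention for illustration, not any existing standard: just enough to answer who (or what) created an asset, with which tool, and what changed afterwards.

```python
# A sketch of a coarse-grained provenance record for "mixed media".
# Field names are illustrative only, not drawn from any existing standard.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class EditEvent:
    actor: str       # "human", "ai", or "human+ai"
    tool: str        # e.g. a named editor or model
    action: str      # a coarse description, not every keystroke
    timestamp: str   # ISO 8601


@dataclass
class ProvenanceRecord:
    asset_id: str                  # e.g. a content hash of the final file
    created_by: str                # "human", "ai", or "human+ai"
    creation_tool: str
    edits: list[EditEvent] = field(default_factory=list)


# A hypothetical record for an image that a person shot and an AI tool retouched.
record = ProvenanceRecord(
    asset_id="sha256:<content-hash>",
    created_by="human",
    creation_tool="smartphone camera",
    edits=[EditEvent("ai", "hypothetical retouching model", "background cleanup",
                     "2025-01-15T10:30:00Z")],
)

print(json.dumps(asdict(record), indent=2))
```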
Technical Measures
Technology to authenticate these types of content already exists. Watermarks are being used to tell users when something is ‘AI-generated’. On the market now is a camera that claims to capture, prove and verify that an image is ‘real’. Meanwhile, industry groups like C2PA have developed open standards that attach cryptographically secure metadata to digital media, creating a tamper-evident record of a file's origin and edit history.
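To illustrate the principle behind that tamper-evident metadata, here is a toy sketch in Python, assuming the widely used `cryptography` package is installed. It is not the actual C2PA format or API, only the underlying idea: sign a hash of the file together with its manifest, so that any later change to either one breaks the signature.

```python
# Toy illustration of tamper-evident provenance: hash the media bytes together
# with a small manifest, sign the digest, and let anyone holding the creator's
# public key confirm that neither has changed since signing. Real standards
# like C2PA define far richer manifests and trust chains than this.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

creator_key = Ed25519PrivateKey.generate()

media_bytes = b"raw image bytes go here"   # stand-in for the actual file
manifest = {"creator": "alice", "tool": "camera", "ai_generated": False}


def digest(media: bytes, manifest: dict) -> bytes:
    # Bind the manifest to the exact bytes of the file.
    return hashlib.sha256(media + json.dumps(manifest, sort_keys=True).encode()).digest()


signature = creator_key.sign(digest(media_bytes, manifest))


def verify(media: bytes, manifest: dict, sig: bytes, public_key) -> bool:
    # Recompute the digest and check the signature; any edit to the file or
    # the manifest makes this fail.
    try:
        public_key.verify(sig, digest(media, manifest))
        return True
    except InvalidSignature:
        return False


public_key = creator_key.public_key()
print(verify(media_bytes, manifest, signature, public_key))              # True
print(verify(media_bytes + b"edited", manifest, signature, public_key))  # False
```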
But these systems are not foolproof. They can be easily bypassed (a screenshot being the most trivial of methods). Moreover, these authentication systems will be used largely by good actors – honest users, responsible companies, and transparent governments. Bad actors will strip off the labels and abuse the system in other ways. That's fine. If the content seems off and it doesn't have a credential, I'll assume it's been altered. This inversion of trust is already happening, and it will only accelerate with time.
Don’t Trust, Just Verify
The vast majority of digital content today is still being created by humans, though some of it with the help of AI. Therefore, for the time being, it is reasonable to assume that most content is ‘real’ and we should implement authentication systems to signal when it is not.
Over time, we may need to flip this idea on its head. Given the current trajectory of generative AI, the majority of digital content may soon be ‘synthetic’. Our instincts might shift. From “trust but verify” to “don’t trust, just verify”.
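To spell out what that flip means in practice, here is a deliberately simple sketch of the default-deny logic I have in mind. The check_credential function is a hypothetical stand-in for whatever real verification a platform or browser would perform, such as validating a signed manifest like the one sketched above.

```python
# "Don't trust, just verify" as default-deny logic. check_credential() is a
# hypothetical stand-in for real verification (e.g. validating a signed manifest).
def check_credential(credential: dict | None) -> bool:
    # Stand-in rule: treat any credential carrying a 'signature' field as valid.
    return credential is not None and "signature" in credential


def assess(credential: dict | None) -> str:
    # Old default: assume content is real unless flagged.
    # New default: assume it is synthetic or altered unless a valid credential says otherwise.
    if not check_credential(credential):
        return "no valid credential: treat as synthetic or altered"
    return f"verified: {credential.get('label', 'unlabelled')}"


print(assess(None))                                            # no valid credential: ...
print(assess({"label": "human-created", "signature": "sig"}))  # verified: human-created
```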
In a recent interview, Sam Altman, the CEO of OpenAI, suggested that we're heading toward a world where the distinction between real and AI-generated content simply becomes irrelevant. He argued that we'll gradually adjust what we consider "real enough" – much like we've already accepted that iPhone photos are AI-processed without worrying too much about it.
The pushback in the comments was immediate and striking. "If we can't agree on what's real, we can't function as a society," one person wrote.
Altman is probably right about the direction of travel. The trust inversion is coming – that much seems inevitable. But his suggestion that we'll simply move on feels dangerously passive. The question isn't whether we'll adjust to this reality, but how we'll navigate it. That's precisely why we need robust content authentication and provenance now – not to prevent the future Altman describes, but to ensure we don't just sleepwalk into it.
The Role of Regulation
Some governments are preparing for this future by requiring platforms to monitor, label and verify all synthetic content. The Indian government has also proposed such rules.
I strongly believe that voluntary measures should do most of this heavy lifting. I want Apple to introduce an “OG mode” where my camera takes a raw photo without any AI processing and embeds an invisible marker into the image to establish that fact. And I think more companies should support, adopt and improve on efforts like those initiated by C2PA.
That said, I think there are situations where new legal measures are warranted: where there is a regulatory gap, for example, or in contexts of low societal trust, poor digital literacy or lax enforcement. I would only caution that governments should adopt regulations that are flexible and agile by design, rather than rigid and prescriptive, because I am certain that as time goes on, we will have to adapt these systems to reflect a new reality, a new social contract for trust.
Ready or Not
Very soon – probably within minutes of publishing this – I'll be browsing the internet, asking myself again: was that created by AI?
I want to be able to answer that question with some degree of certainty. A few clicks and I should see a label or some metadata. And I’ll know what I'm looking at.
That's the world I'm hoping we build: one where credentials become the norm, where honest actors make verification easy, where the burden of proof shifts to those who want to be trusted. It won’t be perfect. It won’t be easy. But it's better than trusting nothing at all.
This is both a hope and a plea. If we don't build these systems now – voluntarily, thoughtfully, collaboratively – we'll be forced into crude solutions later.
The trust inversion is coming. The question is, are we ready for it?
(Thanks to Rahul Matthan, Kailash Nadh and Balaraman Ravindran for helping sharpen my thoughts on this topic. The views expressed here are my own).