
Synthetic Auth Report - Issue #026


Greetings!

This week: AI agents are accumulating real-world authority faster than anyone has defined who's responsible for their actions; quantum computing poses a genuine threat to the encryption underpinning every digital credential you own, and is already being used as a buzzword to sell snake oil before it delivers on either promise; and the tools built to prove that a piece of content is real are failing, or being turned against real content. If the systems verifying who we are are working exactly as designed, what does it mean that the designs are no longer good enough?


IDENTITY CRISIS

Autonomous AI Agents Have an Ethics Problem — The question of who's responsible when an AI agent causes harm is no longer hypothetical. Undark reports that AI agents can now make phone calls, file work orders, create cryptocurrency wallets, and act across applications at machine speed — the kind of stuff that used to require a human with fingers. They are public actors with real-world reach, and nobody agreed on the accountability model before we let them loose. Kant would ask: is an agent that cannot be held morally responsible even an agent at all? We apparently don't care.

AGI Has Not Arrived — But the Definition Keeps Moving — A paper recently published in Nature claimed that AGI (AI as broadly capable as a human) is already here. Gary Marcus and two colleagues wrote a rebuttal, and their argument is sharp: the goalposts didn't get crossed; they got quietly moved. The original concept of AGI required AI that could adapt reliably to genuinely novel, unpredictable situations. What today's best systems do is pattern-match extremely well on problems similar to what they've trained on — impressive, but not the same thing. Put them somewhere unfamiliar, and they break in ways humans don't. The economic numbers add up to the same conclusion: AI-driven automation is projected to improve overall productivity by less than 1% over a decade, and most companies report no meaningful return on AI investment. The authors also identify a subtler issue they call epistemia: AI models give confident answers even when the evidence is thin or contradictory, while humans in the same situation naturally express doubt. The output looks identical; what's underneath is entirely different. Calling that general intelligence, the authors argue, isn't a scientific conclusion — it's a press release.

The Agentic Identity Problem Isn't New — We're Just Ignoring It Again — Fast Company argues this week that the identity panic around agentic AI is somewhat overblown: enterprises already navigated the same stretch when containers, microservices, and robotic process automation arrived, and the same principles — least privilege, lifecycle management, auditability — still apply. Agent authentication is just machine identity, extended. The frameworks exist. The playbook is written. Which raises the more uncomfortable question: if we already know what good looks like, why does identity dark matter keep proliferating — AI agents accumulating access invisibly, operating across cloud boundaries ungoverned, untraceable when something goes wrong? Knowing the answer and applying it are, it turns out, entirely different cognitive acts.
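The three principles above are concrete enough to sketch in code. Here's a minimal, hypothetical illustration of what "machine identity, extended" looks like for an agent: a short-lived, narrowly scoped credential whose every use can be checked after the fact. All names (scopes, TTL, the issuer key) are invented for the sketch; real deployments would lean on a standard like OAuth 2.0 or SPIFFE, with proper asymmetric signatures rather than the HMAC stand-in used here.

```python
import hashlib
import hmac
import json
import time

ISSUER_KEY = b"issuer-secret"  # stand-in for a real signing key


def mint_agent_token(agent_id: str, scopes: list[str], ttl_s: int = 300) -> dict:
    """Issue a token naming the agent, its allowed actions, and an expiry."""
    claims = {
        "sub": agent_id,
        "scopes": sorted(scopes),         # least privilege: only what's needed
        "exp": int(time.time()) + ttl_s,  # lifecycle: short-lived by default
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}


def authorize(token: dict, action: str) -> bool:
    """Auditability: every decision is checkable against the signed claims."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # tampered or forged token
    if time.time() >= token["claims"]["exp"]:
        return False  # expired: stale agent access dies on its own
    return action in token["claims"]["scopes"]


token = mint_agent_token("billing-agent-7", ["invoices:read"])
print(authorize(token, "invoices:read"))   # True
print(authorize(token, "wallets:create"))  # False: never granted
```

An agent governed this way can't accumulate access invisibly: anything it wasn't explicitly granted fails, and anything it was granted expires on its own. The "identity dark matter" problem is what you get when neither property holds.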


QUANTUM CORNER

We Need to Disentangle Hype from AI and Quantum Computing — A sharp piece in TechPolicy Press this week flags the next great hype cycle before it fully arrives. The authors — attorneys at DLA Piper writing for a policy audience — argue that quantum computing is on the same trajectory AI was a decade ago: enormous genuine promise, a technology that feels just out of reach, and an ecosystem already primed for inflated claims. The portmanteau "Quantum AI" is already circulating as a buzzword, and the fraud is already real: last year's global Quantum AI investment scam, complete with celebrity deepfakes, defrauded people worldwide by trading on the mystique of a technology most people find too abstract to interrogate. The FTC has its own paper trail of "quantum" snake oil — a 2020 case against the Quantum Wellness Botanical Institute over an "age-reversing formula," and a 1998 infomercial marketer claiming his addiction cures came from a doctor's "revolutionary breakthrough discovered while studying quantum physics." The word has been laundering nonsense for decades. The underlying science, meanwhile, is real and consequential: quantum computing does threaten current encryption, and a sufficiently capable machine could crack RSA — the standard protecting your bank login, medical records, and private communications — in hours rather than the billions of years classical computers would require. As Michio Kaku is quoted in the article: "of all the theories proposed in this century, the silliest is quantum theory" — and the only thing it has going for it is that it is unquestionably correct. That paradox is precisely what makes it so exploitable. The challenge, the authors argue, is keeping the enthusiasm while cutting the hype — before "quantum-washing" becomes as endemic as AI-washing already is.
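The "hours rather than billions of years" claim comes down to one fact: an RSA private key is trivially derivable from the factors of the public modulus. Here's a toy sketch with deliberately tiny textbook primes, purely to show the mechanism; real keys use primes hundreds of digits long, which is exactly what makes the brute-force step below infeasible classically and what Shor's algorithm on a large enough quantum computer would make fast.

```python
# Toy RSA: the secret is nothing more than the factorization of n.
p, q = 61, 53
n = p * q                # public modulus (3233)
phi = (p - 1) * (q - 1)  # only computable if you can factor n
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent, derived from the factorization

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == msg  # decrypt with the private key


def factor(m):
    """Brute-force trial division: fine at toy sizes, hopeless at real ones."""
    for f in range(2, int(m**0.5) + 1):
        if m % f == 0:
            return f, m // f


# An attacker who factors n recovers the private key immediately.
fp, fq = factor(n)
d_stolen = pow(e, -1, (fp - 1) * (fq - 1))
print(pow(cipher, d_stolen, n))  # 42: the secret falls out of the factors
```

At n = 3233 the attack takes microseconds; at a 2048-bit n, trial division (and every known classical improvement on it) runs out of universe. Quantum computing's threat to encryption is precisely that it changes the cost of that one step.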


ARTIFICIAL AUTHENTICITY

No Single Tool Can Verify What's Real — Microsoft's New Report — A major study from Microsoft maps the current state of media authentication and arrives at a sobering conclusion: no single method, not watermarking, not provenance metadata, not AI detection, can reliably authenticate content on its own. To understand why, it helps to know what these tools actually are. Provenance metadata is information embedded in a file that records its origin: who made it, with what tool, and when. The industry standard for this is C2PA (Coalition for Content Provenance and Authenticity), a specification co-founded by Microsoft, Adobe, the BBC, and others that functions like a nutrition label for digital content: it attaches a cryptographically signed record to a file, so that anyone reading it can see the chain of custody from creation to publication. Watermarking is different. It embeds a signal directly into the pixels or audio of the content itself, invisible to the human eye, that persists even if the file is re-exported or the metadata stripped. AI detection tools try to identify statistical patterns left behind by generative models. Each approach has specific failure modes: metadata can be stripped by platforms that don't support C2PA; watermarks can be removed or degraded; and AI detectors are increasingly fooled as generation models improve. The report also flags a counterintuitive attack: a small, insignificant edit to an authentic photo can cause a provenance validator to flag it as AI-generated — meaning the tools meant to establish authenticity can be weaponized to discredit real content. The report's authors propose "high-confidence authentication" as the way forward: layering C2PA provenance with watermarking so that defeating one layer doesn't defeat the other. It's a roadmap, but also an honest acknowledgment of how wide the gap is between what authentication tools promise and what they currently deliver.
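Both the promise and the failure mode of provenance metadata fall out of one design decision: the signed manifest binds metadata to a hash of the exact bytes of the content. Here's a minimal sketch of that idea; it is not the C2PA spec (which uses certificate-based signatures and a far richer manifest format), and all field names and keys are invented, with HMAC standing in for real signatures.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"camera-or-tool-key"  # stand-in for a real private key


def make_manifest(content: bytes, creator: str, tool: str) -> dict:
    """Bind origin metadata to a hash of the content, then sign the record."""
    record = {
        "creator": creator,
        "tool": tool,
        "content_hash": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify(content: bytes, manifest: dict) -> bool:
    """Both checks must pass: the signature is genuine AND the bytes match."""
    record = {k: v for k, v in manifest.items() if k != "sig"}
    payload = json.dumps(record, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
        manifest["sig"],
    )
    return sig_ok and record["content_hash"] == hashlib.sha256(content).hexdigest()


photo = b"\x89PNG...original pixels..."
manifest = make_manifest(photo, creator="staff photographer", tool="camera")
print(verify(photo, manifest))            # True
print(verify(photo + b"\x00", manifest))  # False: a one-byte edit breaks it
```

That last line is the counterintuitive attack in miniature: a trivial, benign edit invalidates the manifest, and a missing or invalid manifest is easy to misread as "this content is fake." It also shows why watermarking is the complementary layer: a signal living inside the pixels survives edits that the byte-exact hash cannot.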

Stop Chasing the Fake. Fingerprint the Real. — TechRadar covers a proposal from Instagram CEO Adam Mosseri that quietly inverts the whole detection problem: instead of trying to identify what's AI-generated — a losing game as models improve — verify what's human at the moment of creation. Mosseri's specific suggestion is that camera manufacturers cryptographically sign images at the moment of capture, creating a chain of custody from shutter click to post. Your phone or camera would essentially issue a digital birth certificate for every photo, linking it to the device, time, and location it was taken — provable origin rather than merely claimed origin. This is, notably, exactly what C2PA was designed to enable: the standard already supports capture-time signing, and some cameras have begun implementing it. TechRadar's critique is practical: the approach only works at meaningful scale if every major platform agrees to read and surface those credentials — a cross-industry coordination problem that remains unsolved. Mosseri himself acknowledged that platforms will get worse at detecting AI-generated content over time as the technology improves, which is precisely why he argues the industry needs to shift from chasing fakes to certifying the real.


CARBON-BASED PARADOX

Here is the thread running through this week's three sections: we have always used external systems to anchor identity, and the assumptions those systems were built on are now being outpaced.

Every system this week was designed with a human at the center. Agent accountability frameworks assumed a person would be traceable at the end of the chain. RSA encryption assumed the math would stay hard longer than the hardware would improve. Content authentication assumed that certifying origin was a problem of standards, not of adversarial intent. None of those assumptions were wrong when they were made. All of them are now insufficient.

The generational dimension makes this concrete. Older users treat digital identity as a layer on top of a self that exists independently: if the system breaks, the self persists. For younger people who built identity online, there is no meaningful "underneath." The cryptographic certificate, the authenticated account, the verified post: these aren't proxies for identity, they are identity. The design being challenged isn't just technical infrastructure; for one generation, it's the medium through which selfhood was constructed.

That's the actual paradox. It's not that we trusted broken systems. It's that we built systems good enough to become load-bearing, and then kept adding weight.



Subscribe to Synthetic Auth