
Synthetic Auth Report - Issue #025


Greetings!

This week, Europe makes its boldest moves yet toward digital sovereignty — replacing American platforms, building its own payment rails, and asserting that a continent's identity shouldn't depend on someone else's infrastructure. Meanwhile, AI barrels ahead: Nature declares machines generally intelligent, AI-generated code overwhelms open source maintainers, Google lets you walk through synthetic worlds, and Waymo's "autonomous" cars turn out to have a human in the Philippines picking their lane. Can you build a border around a technology that learns from everything, everywhere, all at once?


IDENTITY CRISIS

France Says "Au Revoir" to Zoom — Digital Sovereignty Gets Real. In one of the more concrete acts of digital self-determination we've seen, France announced it will move 2.5 million civil servants off Zoom, Microsoft Teams, and other U.S. video platforms by 2027, replacing them with a homegrown system called Visio. Austria's military has already dropped Microsoft Office. The European Commission's tech sovereignty commissioner warned that reliance on foreign platforms "can be weaponized against us." And it's not just communications — Europe is moving to break up with Visa and Mastercard too. On February 2, the European Payments Initiative and the EuroPA Alliance signed a landmark agreement to build a pan-European payment network around the digital wallet Wero, already serving 47 million users across Belgium, France, and Germany. The goal: let Europeans pay and transfer money across borders without touching a single American network, reducing dependence on the two companies that currently process roughly $24 trillion in transactions per year. ECB President Christine Lagarde called the need for Europe's own digital payment system "urgent."

Nature Says We Already Have AGI — And Most Experts Don't Want to Hear It. A team of philosophers, linguists, and machine learning researchers published a commentary in Nature making a provocative claim: artificial general intelligence already exists. Their case rests on Turing's original 1950 framework — not perfection, not superintelligence, but broad, flexible cognitive competence across multiple domains. The evidence they cite is hard to dismiss: GPT-4.5 was judged to be human 73% of the time in a Turing test (more often than actual humans were), LLMs have reached gold-medal performance at the International Mathematical Olympiad, and they collaborate with professional mathematicians on real proofs. Yet 76% of leading AI researchers surveyed in 2025 said scaling current approaches would be "unlikely" to yield AGI. The authors argue this disconnect stems from shifting definitions, emotional resistance to the implications, and commercial interests that benefit from keeping AGI perpetually just around the corner.

Humanity's Rite of Passage — According to the CEO Building It. In a January essay titled "The Adolescence of Technology", Anthropic CEO Dario Amodei borrows a scene from Carl Sagan's Contact — where humanity's representative asks aliens how they survived their own technological adolescence — and argues we're now living that question ourselves. His central thesis: the capability question is largely settled, but the maturity question is wide open. AI is advancing faster than the social, political, and institutional systems meant to govern it, and the window for getting governance right is narrowing. Amodei is careful to avoid both doomerism and hype, cautioning against the "quasi-religious" thinking that dominated 2023–2024 risk discourse and the pendulum swing toward uncritical optimism in 2025–2026. The essay is a companion piece to his earlier Machines of Loving Grace, which painted the upside — this one maps the gauntlet. For a field that oscillates between salvation and apocalypse, it's a rare attempt to hold both in view at once.


QUANTUM CORNER

Google Issues a Post-Quantum Call to Arms. On February 6, Google published a pointed blog post co-authored by its President of Global Affairs and the founder of Google Quantum AI, urging the industry and policymakers to accelerate the transition to post-quantum cryptography. Their message is blunt: adversaries are already running "store now, decrypt later" campaigns — harvesting encrypted data today to crack it open once quantum computers mature. Google has been preparing since 2016, rolling out post-quantum capabilities across its products and building "crypto agility" — the ability to swap cryptographic algorithms without disrupting services. The post calls on governments to mandate quantum-safe standards, fund migration efforts, and treat this not as a future concern but as a present-tense infrastructure emergency.
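
What "crypto agility" looks like in practice is easier to show than to tell. Below is a minimal sketch of the pattern (ours, not Google's implementation): services resolve their key-encapsulation algorithm through a registry, so migrating to a post-quantum scheme becomes a configuration change rather than a rewrite of every caller. The names and stub "algorithms" here are illustrative placeholders, not real cryptography.

```python
# Minimal sketch of "crypto agility": callers depend on an abstract
# key-encapsulation interface, never a concrete algorithm, so a
# post-quantum migration is a registry/config change, not a rewrite.
# Names and stub "algorithms" are illustrative placeholders only.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple
import hashlib
import os

@dataclass
class KEM:
    name: str
    # pubkey -> (ciphertext, shared_secret)
    encapsulate: Callable[[bytes], Tuple[bytes, bytes]]

_REGISTRY: Dict[str, KEM] = {}

def register_kem(kem: KEM) -> None:
    _REGISTRY[kem.name] = kem

def get_kem(name: str) -> KEM:
    return _REGISTRY[name]

# Stand-ins for a classical and a post-quantum KEM (not real crypto).
def _stub_encapsulate(pubkey: bytes) -> Tuple[bytes, bytes]:
    shared_secret = os.urandom(32)
    ciphertext = hashlib.sha256(pubkey + shared_secret).digest()
    return ciphertext, shared_secret

register_kem(KEM("classical-x25519", _stub_encapsulate))
register_kem(KEM("pq-ml-kem-768", _stub_encapsulate))

# The migration is one config value; no calling code changes.
ACTIVE_KEM = "pq-ml-kem-768"
ciphertext, shared_secret = get_kem(ACTIVE_KEM).encapsulate(b"peer-public-key")
```

The point of the indirection is that callers never name a concrete algorithm, which is exactly the property that makes a future swap painless.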

Stanford's Light Trap Could Unlock Million-Qubit Machines. One reason quantum computers aren't yet cracking your encryption is that they're hard to scale up. The basic units of a quantum computer — qubits — store information in individual atoms, but reading that information out has been painfully slow: they typically had to be checked sequentially. On February 2, Stanford researchers published a breakthrough in Nature that changes this. They built tiny light-based chambers (optical cavities) on standard silicon chips that sit around individual atoms and efficiently collect the photons those atoms emit. The key advance: these cavities can be mass-produced in arrays, allowing researchers to read information from qubits in parallel rather than one by one. The team demonstrated this with 40 working cavities and built a larger prototype with over 500, suggesting a realistic path toward the million-qubit machines that would make quantum computers truly powerful.
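
To get a feel for why parallel readout matters at that scale, here's a back-of-the-envelope sketch. The per-qubit readout time is our illustrative assumption, not a figure from the Stanford paper.

```python
# Back-of-the-envelope scaling of qubit readout, sequential vs. parallel.
# T_READ is our illustrative assumption, not a number from the paper.
T_READ = 1e-3          # assumed seconds to read out one qubit
N_QUBITS = 1_000_000   # the scale the article points toward

sequential_s = N_QUBITS * T_READ   # one qubit at a time
parallel_s = T_READ                # idealized: all cavities read at once

print(f"sequential: {sequential_s:,.0f} s (~{sequential_s / 60:.0f} minutes)")
print(f"parallel (idealized): {parallel_s * 1e3:.0f} ms")
# sequential: 1,000 s (~17 minutes); parallel: 1 ms
```

Minutes per measurement round versus milliseconds is the difference between a lab curiosity and a machine that can run real error-corrected algorithms.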


ARTIFICIAL AUTHENTICITY

AI Slop Is Killing Open Source From the Inside. In what may be the defining identity crisis of non-human code, a RedMonk analysis documents how AI-generated garbage contributions are overwhelming open source maintainers. Ghostty now permanently bans anyone who submits low-quality AI-generated code. curl killed its $86,000 bug bounty after six years — the AI slop made it untenable. The implicit social contract of open source — effort for mentorship, quality for community — assumed contributions were expensive to create. AI broke that filter. The question now isn't just "who wrote this code?" but "did anyone write this code?" Qodo, an AI code review platform, is among the tools emerging to help teams detect what's real — essentially building an immune system for codebases flooded with synthetic contributions.
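
For a sense of what that immune system might look like, here's a toy triage heuristic. To be clear, this is not Qodo's method or any real project's policy; the signals and weights are entirely made up for illustration.

```python
# Toy triage heuristic for incoming pull requests. Illustrative only:
# NOT Qodo's method or any project's actual policy. Signals and
# weights are invented to show the shape of such a filter.
from dataclasses import dataclass

@dataclass
class PullRequest:
    author_prior_merges: int   # track record in this repo
    references_issue: bool     # tied to a real, open issue?
    touched_tests: bool        # did the change add or update tests?
    description_words: int

def slop_score(pr: PullRequest) -> int:
    """Higher score = less maintainer attention before the author earns it."""
    score = 0
    if pr.author_prior_merges == 0:
        score += 2   # no track record in this project
    if not pr.references_issue:
        score += 2   # drive-by change with no agreed-upon context
    if not pr.touched_tests:
        score += 1   # claims a fix but proves nothing
    if pr.description_words > 500:
        score += 1   # verbose boilerplate prose is a common tell
    return score

pr = PullRequest(author_prior_merges=0, references_issue=False,
                 touched_tests=False, description_words=800)
print(slop_score(pr))  # 6 -> route to the low-priority queue
```

The deeper point stands regardless of the weights: when contributions become cheap to generate, projects survive by making them cheap to reject.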

Anthropic Builds an AI Civil Servant for the UK. Anthropic partnered with the UK government to build a Claude-powered AI assistant for GOV.UK, initially helping job seekers navigate employment services with persistent, personalized context. Users control their data and can opt out anytime. Separately, Anthropic's own research found that developers using AI assistance scored 17% lower on mastery tests than those who coded by hand — suggesting AI can be a shortcut to output but a detour from understanding.

Google's Project Genie: Build a World, Lose Yourself In It. Project Genie, now available to Google AI Ultra subscribers, lets users create and explore AI-generated interactive worlds in real time. You define a character, prompt a world, and walk through it as Genie 3 generates the path ahead. It's an identity sandbox — or, as Baudrillard might note, one more step toward a simulacrum we prefer to the real thing.

Our Collective AI Fantasies Are Shaping Who We Become. A new study in Computers in Human Behavior introduces the "AI Imaginary Model", a framework built on social identity theory that examines how shared cultural visions of AI — the utopian dreams, the dystopian fears, the sci-fi tropes — aren't just reflections of the technology. They actively shape it. Think of AI imaginaries as something like modern mythology: just as ancient societies used stories of Prometheus or Icarus to encode their relationship with fire and flight — setting the boundaries of what was sacred, dangerous, or possible — our collective narratives about AI encode how we relate to this technology before we ever touch it. The researchers argue that these imaginaries influence how people adopt AI, what policies governments write, and ultimately how individuals construct their own "technological identity" — the sense of self that emerges from how you relate to the tools you use. Whether you see AI as a collaborator or a replacement isn't just an opinion; it's a filter that determines what AI becomes for you. A Medium essay from Megasis Network traces the same thread through culture — from AI-generated art to media portrayals — arguing that AI is now reshaping not just what we do, but who we think we are. Kant held that we never perceive the world directly, only through the categories our minds impose. Our AI imaginaries may be the new categories.

Waymo's "Self-Driving" Taxis Are Secretly Phoning the Philippines. During a Congressional hearing last week, Waymo's chief safety officer admitted that when the company's robotaxis get stumped, they receive guidance from human operators — some of whom are based in the Philippines. Waymo insists these workers don't remotely drive the vehicles, but they do determine what lane the car should pick and propose navigation paths. Senator Ed Markey called it a safety and cybersecurity risk, warning that overseas remote operations "may be more susceptible to physical takeover by hostile actors." Tesla's VP of vehicle engineering confirmed at the same hearing that Tesla uses similar remote operators. The autonomous vehicle that phones a friend overseas when it's confused is, perhaps, the most honest metaphor for our entire digital identity moment: we keep calling things autonomous that are secretly tethered to someone, somewhere, making the real decisions.


CARBON-BASED PARADOX

Two forces dominated this week. The first is Europe's accelerating bid to carve out its own digital identity — ditching Zoom for Visio, building Wero to replace Visa and Mastercard, asserting that a continent's communications and payments should not run on someone else's rails. This is sovereignty as identity: Europe deciding who it is by deciding what infrastructure it controls.

The second force is AI's relentless expansion — machines declared generally intelligent in Nature, AI slop overwhelming open source faster than humans can filter it, "autonomous" cars that secretly phone Manila for directions, AI-generated worlds you can walk through, and AI assistance that quietly erodes the mastery of the developers who lean on it. Each story this week pushed in the same direction: AI capabilities are growing faster than any single institution's ability to contain them.

Europe can absolutely build sovereign AI. France backed Mistral. Germany is investing in sovereign cloud. If you control the rails and the models, the sovereignty story holds. But AI is uniquely difficult to contain within borders. A payment network can be geofenced. A video platform can be swapped out. AI can't be treated the same way — not because borders are pointless, but because of how AI actually works. Sovereign LLMs still train on the global internet. Open source models cross every border the moment they're published. The AI slop choking open source codebases originates from models built everywhere and lands in repositories maintained anywhere. The developers building Europe's AI tools learned their craft on American platforms — and as Anthropic's own research showed this week, the tools don't just assist the worker, they shape the worker's skills. Even a fully European AI stack inherits the knowledge, biases, and architectural assumptions of a fundamentally borderless technology. Swapping Zoom for Visio is a policy decision. Achieving genuine AI sovereignty means contending with a technology whose defining property is that it learns from everything, everywhere, all at once. Hume argued that reason is, and ought only to be, the slave of the passions. Europe's passion for sovereignty is rational and more necessary than ever — but AI may be the first technology where drawing a border and enforcing one are two entirely different problems.



Subscribe to Synthetic Auth