
Synthetic Auth Report - Issue #023


Greetings!

Happy New Year! This year I am committed to working even harder to bring you more and even better content than last year, diving deeper into the contradictions and paradoxes of digital identity. If you find value in these dispatches from the frontier of the synthetic, please help me spread the word about this newsletter.

This week: the hidden humans training our AI overlords for poverty wages, the economic death spiral when AI inserts itself between your product and your customers, quantum updates, AI agents that surveil while they assist, and Ukraine's literal point system for drone kills. The Tailwind story crystallizes the central question we'll be grappling with in the years to come as AI agents become our primary interface to the digital world: what happens when a new layer of intermediaries sits between everything—between workers and employers, products and customers, and most importantly between humans and information? If AI becomes the lens through which we access reality, who controls that lens, and what gets lost in translation?


IDENTITY CRISIS

The Ghost Workers Training Our Digital Twins. Remember Amazon's Mechanical Turk? That fantasy of distributed human computation was always more Victorian stage magic than actual automation—a person hidden inside the machine, making it appear intelligent. Now researchers at DiPLab have documented the latest act: Egyptian data workers earning $1.22 per hour to label images, transcribe audio, and evaluate content that trains the AI systems we're told are revolutionizing work. These workers—60% holding bachelor's degrees in technical fields—earn less than half Egypt's minimum wage while tech companies profit enormously from their labor. The illusion persists across industries: your AI-generated image required humans to tag millions of training images, your "smart" clothing line required humans in factories to make each piece, your autonomous system required humans to clean the data. These workers face a peculiar identity crisis: their digital labor often conflicts with their principles, forcing them to "continually question and reshape their digital identities." The magic is real; the magician is underpaid.

Your Job, But Make It AI-Flavored. The question every worker now faces, according to CNBC's analysis of the evolving job market, is no longer "Can you do this job?" but "Can you do it in a way that adds unique value beyond what AI can do alone?" The shift is fundamental: your identity as a worker once depended on your skills and experience; now it depends on your ability to justify why a company should pay you instead of a subscription to Claude or ChatGPT. Companies are being explicit about it—Shopify's CEO told employees to prove AI can't do their jobs before asking for more headcount, Accenture plans to exit staff who can't be reskilled on AI, and Fiverr laid off 250 employees to become an "AI-first company" while telling remaining workers to "deepen their AI skills." AMD's CEO captures the new reality: "We're hiring different people... people who are AI forward." The casualties are real and immediate: when a developer proposed adding LLM-optimized documentation to Tailwind—basically a text file making it easier for AI to read their docs—creator Adam Wathan rejected it with a stark explanation: "75% of the people on our engineering team lost their jobs here yesterday because of the brutal impact AI has had on our business." Here's the economic death spiral: developers ask ChatGPT or Claude how to use Tailwind instead of visiting the documentation website. Traffic to Tailwind's docs is down 40% since early 2023 even though Tailwind usage is at an all-time high. Those docs are where people discover Tailwind's paid products—the only revenue source keeping the framework maintained. Making the docs easier for AI to consume would accelerate the problem: more people would get their Tailwind knowledge through AI, fewer would visit the site, revenue would drop further. Result: revenue down 80% while the product grows, and three-quarters of the team laid off. It's not that AI replaced their jobs directly—it's that AI inserted itself between their product and their customers, destroying the business model that paid for development. The stakes are clear when you look at history: programmers and airline pilots saw 11x and 8x job growth respectively from 1970 to 2020 because demand was elastic—productivity gains created more opportunities. Agriculture saw the opposite: productivity gains with inelastic demand meant jobs fell from 40% of the US workforce in 1900 to 2% today. Your professional identity now hinges on which category your industry falls into, whether you can convince management you're augmenting the machines rather than being replaced by them, and whether AI can destroy your entire business model even when your product is thriving.

Platonic Representations and the End of Distinctiveness. Here's an unsettling discovery: different AI models are converging on similar ways to understand reality. A language model trained on text and a vision model trained on images develop surprisingly similar internal representations of concepts like "dog"—even though they've never seen the same training data. MIT researchers call this the "Platonic representation hypothesis," arguing that as models grow more capable, they're converging toward a singular, optimal way to encode the world. Think of it like this: you learn about dogs from reading stories, your friend learns from watching videos, but you both end up with similar concepts of "dogness." Now imagine that happening with AI systems understanding not just dogs, but justice, identity, beauty, and you. The philosophical weight is heavy—Plato argued we all perceive mere shadows of ideal forms that exist beyond our senses. These researchers suggest AI models might be finding those ideal forms, a shared representation of reality that emerges regardless of how they learned. If true, it means identity itself could become standardized: every advanced AI system would encode "you" in fundamentally the same way, a convergent vector representation that captures some objective essence.
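
For the quantitatively inclined, here is a toy sketch of how this kind of convergence can be measured: take embeddings of the same items from two unrelated models and count how often their nearest-neighbor structures agree. The mutual_knn_alignment helper and the random data below are illustrative stand-ins, in the spirit of (but not identical to) the alignment metrics used in the convergence research.

    # Toy sketch of measuring representational convergence between two models.
    # emb_a and emb_b are embeddings of the SAME items (e.g., captions vs. their
    # images) from two different models; the score is mutual nearest-neighbor overlap.
    import numpy as np

    def knn_indices(emb: np.ndarray, k: int) -> np.ndarray:
        """Indices of the k nearest neighbors (cosine) for every row, self excluded."""
        normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        sims = normed @ normed.T
        np.fill_diagonal(sims, -np.inf)   # never count an item as its own neighbor
        return np.argsort(-sims, axis=1)[:, :k]

    def mutual_knn_alignment(emb_a: np.ndarray, emb_b: np.ndarray, k: int = 10) -> float:
        """Fraction of k-NN pairs shared by the two representation spaces (0..1)."""
        a, b = knn_indices(emb_a, k), knn_indices(emb_b, k)
        overlap = [len(set(a[i]) & set(b[i])) / k for i in range(len(a))]
        return float(np.mean(overlap))

    # Unrelated random embeddings score near zero; converging models score higher.
    rng = np.random.default_rng(0)
    print(mutual_knn_alignment(rng.normal(size=(500, 64)), rng.normal(size=(500, 32))))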


QUANTUM CORNER

Post-Quantum Cryptography Gets Real. NIST announced updates to its key establishment recommendations, and here's why it matters for digital identity: everything that proves you are who you say you are online—from banking to email to government services—relies on mathematical locks that quantum computers could theoretically pick. Think of current encryption like a combination lock with a trillion possible combinations. Classical computers would need centuries to try them all, so we're safe. Quantum computers could solve certain mathematical problems exponentially faster than classical computers, potentially breaking encryption that currently seems uncrackable. NIST's update is about building new locks that even quantum computers can't easily break. Specifically, they're updating how cryptographic systems establish shared secrets—the foundation of secure communication. The changes allow systems to incorporate quantum-resistant key-encapsulation mechanisms (think of these as quantum-proof lockboxes for sharing secrets) and approve new methods for turning those secrets into encryption keys. This isn't a rip-and-replace situation requiring the internet to shut down and rebuild. Instead, it's creating a bridge: existing systems can gradually absorb quantum-resistant components while continuing to work normally. You're reinforcing the foundation while living in the house. The agencies implementing these standards understand that digital identity exists in a superposition: simultaneously secure against today's threats and vulnerable to tomorrow's quantum capabilities. These updates are collapsing that uncertainty, one technical standard at a time.
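
To make the "bridge" concrete, here is a minimal sketch of hybrid key establishment. It assumes a classical exchange and an ML-KEM decapsulation have each already produced a shared secret (both are stubbed with random bytes here; a real deployment would use a vetted ECDH and ML-KEM library), and combines them through an HKDF built from Python's standard library. An illustration of the pattern, not a production implementation.

    # Hybrid key establishment sketch: combine a classical shared secret with a
    # quantum-resistant KEM secret, then derive the working key with a KDF.
    import hashlib, hmac, os

    def hkdf_sha256(secret: bytes, info: bytes, length: int = 32) -> bytes:
        """HKDF (RFC 5869) extract-then-expand using SHA-256."""
        prk = hmac.new(b"\x00" * 32, secret, hashlib.sha256).digest()  # extract, zero salt
        okm, block, counter = b"", b"", 1
        while len(okm) < length:
            block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
            okm += block
            counter += 1
        return okm[:length]

    # Stand-ins for the two secrets a real handshake would produce:
    classical_secret = os.urandom(32)   # e.g., from an ECDH exchange
    pq_secret = os.urandom(32)          # e.g., from an ML-KEM decapsulation

    # Concatenating both contributions means the session key stays secure as long
    # as EITHER component resists attack; then derive the traffic key.
    session_key = hkdf_sha256(classical_secret + pq_secret, info=b"hybrid-session-v1")
    print(session_key.hex())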


ARTIFICIAL AUTHENTICITY

Agentic AI: When Your Assistant Becomes a Surveillance Nightmare. The Signal Foundation's leadership delivered a devastating critique of agentic AI implementations at 39C3, and it should terrify anyone building or using these systems. Microsoft's Recall feature takes screenshots every few seconds, OCRs them, and creates a "forensic dossier" of everything you do—stored in a single database vulnerable to malware and prompt injection attacks. The math is worse than you think: even if an AI agent could execute each step with 95% accuracy (currently impossible), a 30-step task would succeed only 21.4% of the time. At a more realistic 90% accuracy per step, that success rate plummets to 4.2%. Yet we're rushing to deploy these systems at the OS level, creating what Signal's Meredith Whittaker calls a "surveillance nightmare" with no real solution—only triage. The question isn't whether agentic AI will transform work; it's whether we'll notice when our digital assistants become our most comprehensive spies.
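
The arithmetic behind those numbers is just compounding probability: if each step succeeds independently with probability p, an n-step task succeeds with probability p raised to the n. A few lines make the cliff visible.

    # Compounding-error arithmetic behind the agentic-AI reliability critique.
    for p in (0.95, 0.90):
        for n in (10, 30, 50):
            print(f"per-step {p:.0%}, {n:2d} steps -> end-to-end {p**n:6.1%}")
    # 0.95**30 and 0.90**30 give roughly the 21.4% and 4.2% figures cited above.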

The Linux Foundation's Bet on Open Agentic Infrastructure. In response to these concerns, the Linux Foundation announced the Agentic AI Foundation with founding projects including Anthropic's Model Context Protocol (MCP), Block's goose framework, and OpenAI's AGENTS.md standard. The AAIF aims to provide a "neutral, open foundation" for agentic AI development, ensuring these systems "evolve transparently and collaboratively." MCP has already been adopted by Claude, Cursor, Microsoft Copilot, Gemini, and ChatGPT—making it the de facto standard for connecting AI to tools and data. But standards don't solve the fundamental problem: when you give an AI agent access to your entire digital life to be "helpful," you're creating the perfect attack surface. As one founding member put it, they're building "infrastructure for an AI future that benefits everyone." The question is whether "everyone" includes the malware that will inevitably exploit these systems.
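
For a sense of what MCP adoption looks like in practice, here is a minimal server sketch. It assumes the FastMCP helper from the official Python SDK; the lookup_customer tool is hypothetical and exists only to show that the tool boundary is also the exposure boundary.

    # Minimal MCP server sketch (assumes the official Python SDK's FastMCP helper).
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("crm-lookup")

    @mcp.tool()
    def lookup_customer(customer_id: str) -> dict:
        """Return a deliberately narrow view of one customer record."""
        # A real server would query a datastore; returning only what the agent
        # needs is the point -- scope the tool, scope the exposure.
        return {"id": customer_id, "status": "active"}

    if __name__ == "__main__":
        mcp.run()   # serves MCP so any compliant client (agent) can connect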

Authorization Before Retrieval: Finally, Someone Gets It. Phil Windley's detailed exploration of authorization in RAG systems offers a rare glimpse of sanity in the rush to connect AI to everything. The core insight: relevance is not authorization. Vector databases excel at finding semantically similar content but have no concept of who should see what. Windley proposes using Cedar's type-aware partial evaluation to generate policy residuals—logical expressions describing which resources may be accessed—before retrieval happens. The language model never sees unauthorized data because the database filter happens first. This is "authorization by construction," not after-the-fact prompt instructions that don't work anyway. It's a technically elegant solution to a philosophically fraught problem: in a world where AI assistants need context to be useful, how do we ensure they only see what they should? Ibn Sina would appreciate the precision—defining identity through what can and cannot be accessed, before the question is even asked.
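
The ordering is easier to see in a sketch. The policy_residual helper and the in-memory "vector store" below are hypothetical placeholders standing in for Cedar's partial evaluation and a real database; what matters is the flow: derive the allowed-resource constraints first, run the similarity search second.

    # Schematic authorization-before-retrieval: authorize, filter, then rank.
    from dataclasses import dataclass

    @dataclass
    class Doc:
        doc_id: str
        department: str
        classification: str
        embedding: list   # vector for the document

    def policy_residual(user: dict) -> dict:
        """Stand-in for partial policy evaluation: constraints on what this
        user may read, expressed as metadata filters."""
        return {"department": user["department"], "classification": "internal"}

    def retrieve(query_vec, docs, user, k=5):
        residual = policy_residual(user)               # 1. authorize first
        allowed = [d for d in docs                     # 2. filter the corpus
                   if d.department == residual["department"]
                   and d.classification == residual["classification"]]
        scored = sorted(allowed,                       # 3. only then rank by similarity
                        key=lambda d: -sum(a * b for a, b in zip(query_vec, d.embedding)))
        return scored[:k]   # the LLM never sees documents the user couldn't read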

Chatbots Off the Rails. Security researchers found massive vulnerabilities in Eurostar's AI chatbot, demonstrating that most organizations are deploying Non-Human Identities with the security rigor of a post-it note. The chatbot accepted unvalidated conversation IDs, allowed arbitrary HTML injection, and trusted guardrail decisions made on the client side. Translation: users could inject malicious code, manipulate the conversation history, and potentially compromise other users—all through a customer service bot. The fix isn't exotic: treat AI interfaces like any other API, validate inputs, sanitize outputs, and make security decisions on the server. But companies keep treating "AI" as a magical exception to basic security principles. The virtual is real, the digital has consequences, and your chatbot can become an attack vector. Welcome to artificial authenticity, where the fake thing can cause very real damage.
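
The unglamorous fix looks like ordinary web hygiene. The handler below is a hypothetical sketch (field names, session store, and guard checks are all placeholders): server-issued conversation IDs only, bounded input, guard decisions made on the server, and output sanitized before rendering.

    # Treat a chatbot endpoint like any other API.
    import html
    import re

    CONVERSATION_ID = re.compile(r"^[0-9a-f]{32}$")   # accept server-issued IDs only

    def handle_chat(request: dict, session_store: dict) -> dict:
        conv_id = request.get("conversation_id", "")
        if not CONVERSATION_ID.match(conv_id) or conv_id not in session_store:
            return {"error": "invalid conversation"}        # reject unvalidated IDs

        user_msg = str(request.get("message", ""))[:2000]    # bound input size
        session_store[conv_id].append({"role": "user", "content": user_msg})

        # Guard decisions happen HERE, on the server -- never trusted from the client.
        if looks_like_injection(user_msg):
            return {"reply": "Sorry, I can't help with that."}

        reply = run_model(session_store[conv_id])             # hypothetical model call
        return {"reply": html.escape(reply)}                  # sanitize before rendering

    def looks_like_injection(text: str) -> bool:
        """Crude server-side guard; real deployments layer several checks."""
        return "<script" in text.lower() or "ignore previous instructions" in text.lower()

    def run_model(history: list) -> str:
        """Placeholder for the actual LLM call."""
        return "Thanks for your message."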

Claude Gets HIPAA-Ready. Anthropic announced Claude for Healthcare with HIPAA-compliant infrastructure and connectors to CMS databases, ICD-10 codes, and provider registries. Claude can now review prior authorization requests, support claims appeals, and coordinate patient care—all while accessing protected health information. The irony is profound: we're giving AI systems unprecedented access to our most sensitive data (medical records, health history, treatment plans) to make healthcare more "efficient," while simultaneously creating single points of failure that malware and prompt injection attacks can exploit. Claude Opus 4.5 shows impressive performance on medical benchmarks, but as Signal's critique reminds us, capability doesn't equal security. Your medical identity—the most intimate vector representation of your physical reality—is now being processed by systems we barely understand and can't fully secure. Trust us, it's for your health.

Gamifying Death: Ukraine's Point System for Drone Kills. Ukraine's Army of Drones Bonus program reveals what happens when you reduce human identity to point values in a literal life-and-death game. Soldiers upload video proof of drone strikes to earn points redeemable for equipment; a destroyed tank is worth more than a killed soldier, while capturing an enemy alive yields the most points. Civilians solve puzzles in "Play for Ukraine" that generate DDoS attacks against Russian websites. War becomes platform, combat becomes content, and participation becomes indistinguishable from play. The gamification isn't decoration—it's infrastructure linking proof of violence to material resources and social recognition. As researchers note, once success is quantified and attached to an interface, it becomes impossible to separate tactical judgment from score-chasing. The most disturbing part? Allied militaries are already studying this model for adoption. Non-human identities include drones; their operators are discovering that their own identities are now measured in point systems designed by algorithms. Baudrillard called it simulation replacing reality; Ukraine calls it innovation.


CARBON-BASED PARADOX

The internet was built for humans, by humans. Every website, every API, every interface was designed with a fundamental assumption: there's a person on the other end—browsing, clicking, reading, deciding. We built identity systems around human authentication, business models around human attention, and entire platforms optimized for human behavior. Now we're discovering what happens when that assumption breaks.

AI agents are becoming the primary consumers of the web, and they don't consume it the way we do. They don't browse Tailwind's beautifully designed documentation that also serves as a means of promoting paid services—they ingest a text file and move on. They don't click through carefully crafted user journeys, don't view ads, don't discover adjacent products, don't convert through the funnels we spent decades optimizing. The entire economic and experiential architecture of the internet was built for human interaction, and we're replacing humans with intermediaries that bypass all of it.

This creates an identity crisis at every level. For businesses: if AI agents answer questions about your product without sending users to your site, what is your relationship with your customers? Your entire business identity—your brand presence, your customer relationships, your revenue model—was built around direct human interaction. For workers: if your professional value is increasingly measured by how well you work with AI rather than what you can do independently, your identity shifts from practitioner to orchestrator. For individuals: when AI agents handle your routine digital interactions—filtering emails, scheduling meetings, even making purchases—you're building two parallel identities: the person you are in direct human interactions, and the persona your AI agent projects in automated ones. The question becomes which version is more "you" when the AI handles 80% of your digital presence.

We're witnessing a fundamental shift from First-Person Identity to Third-Person Identity. For decades, digital identity meant "I act, therefore I am"—you clicked, you posted, you purchased, and those actions constituted your digital self. Now we're moving toward "My agent acts, therefore I am represented"—your AI proxy handles the interactions while you become increasingly abstracted from the actual engagement. The crisis isn't just that this is happening; it's that the digital world will inevitably stop caring about the First-Person entirely. The Third-Person proxy is faster and more efficient. Systems optimize for what they can measure and transact with, and AI agents are far better transaction partners than humans. We risk becoming ghosts in a machine world that was originally built to amplify our voice—present in theory, but absent from the actual mechanisms of digital existence.

This is the crisis Tailwind illuminates. The company built its entire business model around humans visiting documentation, discovering paid products through that interaction, and converting based on that experience. When AI agents intermediate that relationship, the economic model collapses even as product usage soars. This pattern will repeat across countless businesses in the coming years: every company whose revenue depends on human attention, every platform whose value comes from human traffic patterns, every service whose business model assumes direct human engagement.

We're not in a transition period. We're in an architectural crisis where the foundational layer—built for human identity, human consumption, human authentication, human monetization—is colliding with a new reality where AI agents are the primary interface. The question isn't whether this requires new thinking. It's whether identity itself—personal, professional, organizational—can survive a transition from systems designed for First-Person action to systems optimized for Third-Person representation, and whether we'll even notice when we've become the ghosts haunting our own digital infrastructure.

Every digital identity we've constructed assumed we'd be the ones presenting it. We're about to find out what those identities are worth when we're not.



Subscribe to Synthetic Auth