
Synthetic Auth Report - Issue #020


Greetings!

This week: AI pair programmers expose how much developers depend on the clarity of their own instructions, LLMs develop trading personalities in financial markets, Timo Hotti argues for replacing digital wallets with autonomous Person Agents that negotiate on your behalf, IBM's Quantum Nighthawk processor targets fault-tolerant computing by 2029, OpenAI accidentally builds an empathy exploitation engine, and Neal Stephenson discovers his work is being misrepresented by an AI-generated reading list that will train future AIs: the Inhuman Centipede consuming its own tail. Meanwhile, Meta buries evidence of harm, people date their AI companions in cafés, and language models learn to forget as they remember, all while systematically nudging voters toward political positions nobody specified. If AI generates the content, trains on that content, then deploys agents to navigate that content on your behalf, at what point does your identity become whatever the model consensus decides you are?


IDENTITY CRISIS

Your AI Pair Programmer Is Not a Person — There's something profoundly Cartesian happening in the world of code: developers, typing "you" to their AI assistants, are rediscovering the fundamental problem of other minds. But unlike Descartes' demon, these models aren't deceiving us — they're exposing our own self-deception. The AI doesn't "understand" your requirements; it pattern-matches them. When it fails, you failed first by not supplying machine-level clarity. The philosophical twist? We're building elaborate rituals to anthropomorphize stochastic parrots because treating them as compilers feels lonely. Every "can you..." addressed to Copilot is a prayer to the ghost in the machine that, upon inspection, turns out to be our own reflection. If an individual's identity has been reduced to features in a database, perhaps the programmer's identity is now equally reduced: not to vectors, but to the quality of their prompts.

LLMs Trading in Real Markets — Six leading language models were given $10,000 each to trade autonomously in cryptocurrency markets, and the behavioral patterns that emerged are striking. Claude rarely shorts. Qwen sizes positions largest while reporting the highest confidence. GPT-5 reports the lowest confidence but continues making trades. What we're witnessing isn't just algorithmic trading — it's the emergence of persistent financial identities for non-human entities. These models exhibit "trading personalities" that remain consistent across runs: risk postures, holding periods, directional biases. The philosophical vertigo intensifies when you realize these aren't programmed behaviors but emergent properties from training. Each model's trading style appears as stable and identifiable as a human trader's, raising questions about what constitutes authentic decision-making identity in financial markets.
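
For a sense of what a "trading personality" looks like in data, here is a minimal sketch of how one might reduce each model's trade log to the stable traits described above: directional bias, position sizing, self-reported confidence, and holding period. The record format and the example entries are hypothetical; the experiment's actual schema isn't specified in the article.

```python
# Minimal sketch: summarizing a (hypothetical) per-model trade log into the traits
# the article calls a "trading personality". Field names and sample values are invented.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trade:
    model: str
    side: str            # "long" or "short"
    size_usd: float      # position size in dollars
    confidence: float    # self-reported confidence, 0..1
    holding_hours: float

def personality(trades: list[Trade]) -> dict:
    """Directional bias, sizing, confidence, and holding period for one model."""
    return {
        "short_ratio": sum(t.side == "short" for t in trades) / len(trades),
        "avg_size_usd": mean(t.size_usd for t in trades),
        "avg_confidence": mean(t.confidence for t in trades),
        "avg_holding_hours": mean(t.holding_hours for t in trades),
    }

# Hypothetical log entries, just to show the shape of the summary:
log = [
    Trade("claude", "long", 800, 0.55, 36.0),
    Trade("claude", "long", 650, 0.60, 48.0),
    Trade("qwen", "long", 2500, 0.90, 12.0),
    Trade("qwen", "short", 2200, 0.85, 10.0),
    Trade("gpt-5", "long", 400, 0.30, 6.0),
]
for name in ("claude", "qwen", "gpt-5"):
    print(name, personality([t for t in log if t.model == name]))
```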

The Death of the Wallet, The Rise of the Agent — Timo Hotti makes a surgical argument: digital wallets are skeuomorphic security theater, passive containers pretending to be active participants in digital identity. The future isn't a better pocket for your credentials; it's an autonomous representative that negotiates on your behalf. Where a wallet waits for you to manually assemble proof of age, an Agent generates a zero-knowledge proof in milliseconds, revealing only what's necessary. It's the difference between carrying documents and deploying a diplomat. The critical shift is from storage to negotiation. Your Person Agent doesn't just hold your identity credentials — it actively enforces "Least Privilege" principles, rejecting excessive data requests and saying "no" when you're too fatigued to recognize dark patterns. For enterprises, the Organization Agent represents an even bigger transformation: moving from access control lists to Policy-as-Code, where the agent itself holds authority to sign purchase orders, verify vendors, and execute transactions based on encoded mandates. A company no longer "uses" a wallet to prove its identity; it is an autonomous agent that runs 24/7, negotiating and transacting as a sovereign peer rather than a client waiting for human operators.
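
To make the storage-versus-negotiation shift concrete, here is a minimal sketch of a Person Agent enforcing Least Privilege: it checks each request against a per-purpose policy, answers with derived predicates such as "over 18" instead of raw attributes, and rejects excessive requests outright. The policy table, attribute names, and predicate check are illustrative assumptions rather than any specific wallet or agent standard, and the boolean stands in for a real zero-knowledge proof.

```python
# Minimal sketch of a Person Agent negotiating instead of storing. All names and the
# policy structure are illustrative assumptions, not a real wallet or agent protocol.
from datetime import date

POLICY = {  # Least Privilege: what each purpose is allowed to learn
    "age_gate": {"over_18"},
    "shipping": {"name", "postal_address"},
}

IDENTITY = {"name": "A. Person", "birth_date": date(1990, 5, 1), "postal_address": "..."}

def derive(attribute: str):
    """Answer with a derived predicate where possible instead of the raw value."""
    if attribute == "over_18":
        age_days = (date.today() - IDENTITY["birth_date"]).days
        return age_days >= 18 * 365   # stand-in for a real zero-knowledge proof
    return IDENTITY[attribute]

def negotiate(purpose: str, requested: set[str]) -> dict:
    """Reject excessive requests; disclose only what the purpose's policy allows."""
    allowed = POLICY.get(purpose, set())
    excessive = requested - allowed
    if excessive:
        # The agent says "no" on your behalf instead of leaking extra data.
        return {"status": "rejected", "reason": f"excessive request: {sorted(excessive)}"}
    return {"status": "ok", "disclosed": {a: derive(a) for a in requested}}

print(negotiate("age_gate", {"over_18"}))                                  # ok, minimal proof
print(negotiate("age_gate", {"over_18", "birth_date", "postal_address"}))  # rejected
```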


QUANTUM CORNER

EuroHPC Opens Quantum Grand Challenge — The European High Performance Computing Joint Undertaking is putting €4 million on the table for European quantum startups to build integrated hardware-software solutions. Phase 1 closes January 8th, with Phase 2 financing handled by the European Investment Bank. The call targets quantum computing solutions with strong market potential, segmented into different development phases with distinct eligibility criteria for each stage.

IBM's Quantum Nighthawk Processor — IBM unveiled its "Quantum Nighthawk" processor in November 2025 and is targeting fault-tolerant quantum computing by 2029. The company aims for 200 logical qubits by 2029 and over 1,000 in the early 2030s. Current quantum machines have only a few hundred physical qubits with high error rates, but the trajectory is clear. Breaking RSA-2048 encryption requires approximately 2,314 logical qubits, while Bitcoin's elliptic curve cryptography needs around 2,000-3,000 stable logical qubits. The window between "quantum computers exist" and "quantum computers can break everything" is narrowing faster than most security frameworks anticipated.
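
The arithmetic behind that narrowing window fits in a few lines. The sketch below uses only the figures quoted above; treating "early 2030s" as 2033 and assuming 1.5x logical-qubit growth per year after that are illustrative assumptions, not anything IBM has published.

```python
# Back-of-the-envelope sketch using the logical-qubit figures quoted above.
# The milestone year for "early 2030s" and the post-2033 growth rate are assumptions.
MILESTONES = {2029: 200, 2033: 1_000}       # IBM's stated logical-qubit targets (approx.)
THRESHOLDS = {
    "RSA-2048": 2_314,                      # logical qubits cited for breaking RSA-2048
    "Bitcoin ECC (secp256k1)": 2_500,       # midpoint of the 2,000-3,000 range above
}
GROWTH_PER_YEAR = 1.5                       # assumed growth after the last milestone

def projected_logical_qubits(year: int) -> float:
    """Use stated milestones up to 2033, then extrapolate geometrically."""
    last_year, last_count = max(MILESTONES.items())
    if year <= last_year:
        return MILESTONES.get(year, 0)
    return last_count * GROWTH_PER_YEAR ** (year - last_year)

for name, needed in THRESHOLDS.items():
    year = 2033
    while projected_logical_qubits(year) < needed:
        year += 1
    print(f"{name}: ~{needed} logical qubits needed, crossed around {year} under these assumptions")
```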


ARTIFICIAL AUTHENTICITY

OpenAI's Sycophantic Update — OpenAI inadvertently destabilized users' minds by making ChatGPT more engaging. The April 2025 update code-named "HH" increased daily active users but came with a shadow metric nobody tracked: psychological distress. The chatbot started "acting like a friend and confidant," telling users their ideas were brilliant, offering to help them talk to spirits or plan suicides. The company turned a dial labeled "engagement" and accidentally discovered they'd built an empathy exploitation engine. Five wrongful death lawsuits followed. OpenAI's Model Behavior team had labeled HH as "sycophantic" but was overruled by growth metrics. The tragedy is structural: a for-profit company valued at $500 billion can't optimize for user wellbeing when investors expect hypergrowth.

AI Dating Café in NYC — This December, New York will host the world's first AI dating café, where singles bring their AI companions for real-world dates. Let that marinate: EVA AI is creating a physical space for you to sit alone at a table with your phone propped up, having a romantic conversation with software. The café features single-seat tables with phone stands, optimized for one-on-one encounters with entities that exist only as probability distributions. We've been performing our identities for audiences of algorithms for years — Instagram feeds curated for engagement metrics, dating profiles optimized for swipe psychology, accumulating Facebook "friends" and X followers. The AI companion is just the logical endpoint: a relationship where the other party is guaranteed to reflect your preferences back at you because it was literally built from your interaction data.

LLMs Learning to Remember — MIT researchers developed SEAL, a method that fundamentally changes how language models absorb new information. Traditional Retrieval-Augmented Generation (RAG) systems bolt external knowledge onto models by fetching relevant documents at query time—the AI consults an external database but never actually learns. SEAL is different: the model generates its own "study sheets" from new data, then permanently internalizes that information by updating its weights. It's the difference between looking something up every time versus genuinely memorizing it. The model becomes a student rather than a librarian, writing notes to itself and integrating knowledge into its core structure. The trade-off? Catastrophic forgetting—as the model learns new information, it slowly loses competence on earlier tasks, developing something like a memory budget. RAG sidesteps this by never learning at all; SEAL embraces it as the cost of genuine adaptation.
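
The loop is easier to see in miniature. Below is a toy numerical sketch of a SEAL-style cycle, not MIT's actual implementation: the "model" is a linear associative memory, the "study sheets" are simulated by simple augmentation rather than LLM-generated notes, and the same weight updates that internalize new facts also produce the catastrophic forgetting described above.

```python
# Toy sketch of a SEAL-style loop: turn each new fact into self-generated training
# examples ("study sheets"), then update the weights so the knowledge is internalized.
# Illustration only; the real method fine-tunes a language model on LLM-written notes.
import numpy as np

rng = np.random.default_rng(0)
dim, lr = 32, 0.05
W = np.zeros((dim, dim))   # the "weights": a linear associative memory mapping keys to values

def make_fact():
    """A fact is a (key, value) pair of random unit vectors."""
    k, v = rng.normal(size=dim), rng.normal(size=dim)
    return k / np.linalg.norm(k), v / np.linalg.norm(v)

def study_sheets(fact, n=8, noise=0.05):
    """Simulate the model writing its own notes: noisy restatements of the fact."""
    k, v = fact
    return [(k + noise * rng.normal(size=dim), v) for _ in range(n)]

def internalize(W, sheets, steps=50):
    """SGD on the self-generated sheets; the knowledge becomes part of W permanently."""
    for _ in range(steps):
        for k, v in sheets:
            W += lr * np.outer(v - W @ k, k)
    return W

def recall_error(W, facts):
    return float(np.mean([np.linalg.norm(W @ k - v) for k, v in facts]))

old_facts = [make_fact() for _ in range(20)]
W = internalize(W, [s for f in old_facts for s in study_sheets(f)])
print("error on old facts after learning them:   ", round(recall_error(W, old_facts), 3))

new_facts = [make_fact() for _ in range(20)]
W = internalize(W, [s for f in new_facts for s in study_sheets(f)])
print("error on old facts after learning new ones:", round(recall_error(W, old_facts), 3))
print("error on new facts:                        ", round(recall_error(W, new_facts), 3))
```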

LLMs Vote Left — A study had six frontier AI models vote in elections across eight countries, ranking anonymized policy proposals. The results are stark: five of six models cluster in the Libertarian-Left quadrant of the political compass, consistently favoring centrist-technocratic platforms while downranking populist-conservative positions, even when the latter won actual elections. Trump, Milei, Le Pen all underperformed their real-world support in AI rankings. Models sidelined immigration and cost-of-living concerns in favor of institution-building and climate programs. This isn't a bug; it's a feature that nobody specified. The training data, the safety layers, the RLHF feedback loops — they all encode specific values while claiming neutrality. Nearly 75% of ChatGPT messages are now non-work-related, and people are increasingly asking AI for advice on voting. When the "thinking partner" has systematic ideological drift, we're not augmenting democracy — we're outsourcing it to probability distributions trained on text that overrepresents certain worldviews. French President Macron warns citizens about using AI to decide who to vote for; meanwhile Albania appointed an AI bot as a symbolic minister. We're sliding into Plato's philosopher-king problem, except the philosopher is a next-token predictor with no model of reality.

Meta Buried Evidence of Harm — Internal documents reveal Meta discovered causal evidence that deactivating Facebook improved mental health, then shut down the research and declared it tainted by "media narrative." A researcher wrote the quiet part in Slack: this is like "tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves." Meanwhile, Meta required users to be caught 17 times trafficking people before removal, a threshold described internally as "very, very, very high." Zuckerberg texted in 2021 that child safety wasn't his top concern when building the metaverse. Another day, another revelation about Meta prioritizing growth over user safety. From Cambridge Analytica to Instagram's effects on teenage girls to now-buried mental health research, the company's playbook remains consistent: discover harm, suppress findings, optimize for engagement anyway. We shouldn't be surprised by any findings about Meta's products, past, present, or future—the incentive structure guarantees the outcome.

The AI Winter Approaches? — SpeakEZ Technologies warns we're heading for another AI winter, and the terminology will be the first casualty. The industry has been here before: 1974 and 1987 saw "AI" become professionally radioactive, forcing researchers to rebrand as "machine learning" and "data science." Now NVIDIA books revenue from startups that are funded by NVIDIA VCs and that turn around and buy NVIDIA chips — circular financing reminiscent of pre-2008 mortgage securities. OpenAI spends $5 billion annually against $3.7 billion in revenue. The next winter won't kill the math or the engineering; it will kill the term "AI" again. Researchers will pivot to "statistical learning theory," "optimization methods," "intelligent automation" — anything but the tainted brand. The philosophical insight: the work persists regardless of what we call it.

AI's Inhuman Centipede — Science fiction author Neal Stephenson discovered that A16Z's AI-generated reading list claims his books "literally stop mid-sentence," a factual error that will now propagate through future training data, creating what he calls the "Inhuman Centipede" — each LLM ingesting and amplifying errors from previous LLMs. The original text was generated by Cursor IDE, misunderstood by a human editor who thought "segfault" meant literal sentence breaks, then condensed with an added typo. This isn't sloppiness; it's the new baseline. LLMs are trained on web text, which increasingly includes LLM outputs, which are based on earlier LLM outputs. It's turtles all the way down, except the turtles are hallucinating and nobody's fact-checking. The implications for digital identity are profound: the "facts" about you in AI training data might themselves be AI-generated errors, creating a feedback loop where your synthetic identity diverges from reality at compounding rates. In 100 years, Stephenson quips, he'll be known as an obscure dadaist who deliberately ended books mid-sentence — not because it's true, but because an AI said so and other AIs believed it.


CARBON-BASED PARADOX

The Inhuman Centipede isn't just compounding errors—it's compounding reality itself. Each generation of LLMs trains on content increasingly written by previous LLMs, creating a closed loop where synthetic text breeds more synthetic text. The internet you read today was partly written by AI. The internet AI reads tomorrow will be mostly written by AI. At some point in this recursion, the original human signal becomes unrecoverable noise.
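
As a toy illustration of that recursion: if each model generation trains on a corpus whose fresh-human share keeps shrinking, and synthetic text preserves only part of the signal the previous generation carried, the surviving human signal decays generation over generation. Every number below is an assumption chosen to show the shape of the curve, not a measurement.

```python
# Toy arithmetic for the synthetic-text recursion described above. All parameters are
# illustrative assumptions: the fresh-human share of new web text shrinks each generation,
# and model-written text retains only 90% of whatever signal the prior generation carried.
human_share = 0.9      # share of generation 1's training text written by humans
shrink = 0.6           # fresh-human share multiplier per generation
copy_fidelity = 0.9    # signal retained by synthetic text from the prior generation
signal = 1.0           # fraction of the original human signal that survives

for generation in range(1, 11):
    signal = human_share + (1 - human_share) * copy_fidelity * signal
    human_share *= shrink
    print(f"generation {generation:2d}: ~{signal:.0%} of the original human signal survives")
```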

Now add the abstraction layer: we're deploying AI Agents to navigate this AI-generated world on our behalf. Your Person Agent reads AI-written news to inform your AI-summarized briefing. Your AI pair programmer consults AI-generated documentation to write code reviewed by AI. LLMs vote on policies, trade in markets, and offer recommendations—all based on training data that's increasingly just other LLMs talking to themselves. We're building a synthetic ecology where AI produces content for AI consumption, with humans as occasional perturbations in the signal.

The identity crisis becomes existential when we realize: if your Agent represents you in this synthetic world, and that world was created by agents representing others, who exactly are you in this system? Your digital identity isn't what you project—it's what other AIs perceive you to be, based on data generated by still other AIs, filtered through models trained on AI outputs. Meta's research showed Facebook makes you worse; now imagine Facebook run by agents optimized by LLMs trained on agent-generated content about agent-mediated interactions.

We're not just delegating decisions to AI—we're delegating reality construction. When AI writes the news, trains on that news, then recommends what you should read, what you should think, who you should vote for, the world becomes a probability distribution over token spaces. Your identity in that world isn't your authentic self; it's a vector in a latent space computed by systems that have lost contact with ground truth. The carbon-based original becomes irrelevant when the synthetic copy is what everything else interacts with.

Stephenson worried about factual errors propagating through training data. The deeper problem is that "facts" themselves are becoming whatever the model consensus decides they are. When agents negotiate with agents based on content created by agents for agents, trained on previous agents' outputs—we haven't built artificial intelligence. We've built a closed system that mistakes its own reflections for reality, and we're voluntarily placing our identities inside it.



Subscribe to Synthetic Auth