Greetings!
This week: machines decide who you are while programmers forget who they were, Google's quantum chip learns to build stable systems just as our identity infrastructure fragments, and AI agents deliver government services, hallucinate news stories, and play poker badly. Europe asserts AI sovereignty with a 24-language LLM, and enterprise strategy papers promote AI agents as "teammates" and "employees"—which raises an uncomfortable question: If agentic AI represents genuine value when deployed thoughtfully, why are we so busy anthropomorphizing these systems with employee metaphors instead of building the proper identity infrastructure and architectural foundations they actually require?
IDENTITY CRISIS
The Machine-Readable Self: Welcome to the age of agentic AI, where identity isn't what you claim—it's what machines interpret. With Capgemini reporting that 82% of business executives plan to deploy AI agents within three years, we're rapidly entering a world where machines make decisions on our behalf—recommending purchases, authorizing transactions, determining which companies get seen. These agents don't care about your marketing campaigns or brand positioning. They care about structured data and signal-to-noise ratios. Your company's digital presence is now filtered through LLM embeddings, and if you can't make yourself machine-readable, you might as well not exist.
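If you want to see what "machine-readable" means in practice, here is a minimal sketch: the same company facts an agent would otherwise have to scrape out of marketing copy, expressed as schema.org-style JSON-LD that an LLM pipeline can parse without guessing. Every name and value below is invented for illustration.

    import json

    # The facts an agent might otherwise have to infer from prose, expressed as
    # schema.org-style JSON-LD. All names and values are invented examples.
    company = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": "Example Widgets GmbH",
        "url": "https://example.com",
        "description": "Manufactures industrial widgets with a 10-year warranty.",
        "makesOffer": {
            "@type": "Offer",
            "itemOffered": {"@type": "Product", "name": "Widget Pro"},
            "price": "499.00",
            "priceCurrency": "EUR",
        },
    }

    # Embedded in a page as <script type="application/ld+json">, these fields give
    # an agent unambiguous data to parse instead of brand copy to interpret.
    print(json.dumps(company, indent=2))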
The Programmer's Existential Crisis: Meanwhile, programmers are experiencing their own identity apocalypse. The craft that emerged from MIT's Tech Model Railroad Club—where hackers pursued "The Right Thing" with religious fervor—is being replaced by what one developer calls "vibe-coding." Instead of deep immersion in codebases and the joy of puzzle-solving, we're told to become "Specification Engineers," cosplaying as orchestra conductors while LLMs do the actual thinking. Descartes would have loved this: I prompt, therefore I am.
QUANTUM CORNER
Google's Quantum Error Correction Breakthrough: Google announced that their Willow quantum chip achieved a major breakthrough in quantum error correction—solving a problem that has plagued quantum computing for nearly 30 years.
Quantum computers are extraordinarily fragile. Their qubits lose information rapidly, and traditionally, adding more qubits meant more errors, making the system less reliable rather than more powerful. It's like trying to build a skyscraper where each additional floor makes the entire structure shakier. For decades, this has been quantum computing's fundamental catch-22.
Willow demonstrates "below-threshold" error correction, meaning that as you add more qubits, the logical error rate actually decreases exponentially rather than increasing. This is the first time any quantum system has crossed this threshold. Google showed that they could scale from a 3×3 grid of qubits to a 5×5 grid to a 7×7 grid, and each time the error rate was cut in half. The system is getting more stable as it grows larger—exactly the opposite of what quantum computers have done until now.
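To make the scaling concrete, here is a back-of-the-envelope sketch of what "below threshold" buys you, assuming the logical error rate is divided by a fixed suppression factor each time the code distance grows by two. The factor and starting error rate below are illustrative stand-ins, not Google's published figures.

    # Each increase in surface-code distance (3 -> 5 -> 7 ...) divides the
    # logical error rate by a suppression factor, here assumed to be 2.
    LAMBDA = 2.0           # error suppression per distance step (assumed)
    BASE_ERROR = 3e-3      # logical error rate per cycle at distance 3 (assumed)

    for distance in (3, 5, 7, 9, 11):
        steps = (distance - 3) // 2
        error = BASE_ERROR / (LAMBDA ** steps)
        print(f"distance {distance:2d}: ~{error:.1e} logical errors per cycle")

Run the loop further and the point becomes obvious: as long as the suppression factor stays above 1, every additional layer of qubits makes the logical qubit exponentially more reliable, which is exactly the regime Willow is claimed to have entered.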
Without reliable error correction, quantum computers remain laboratory curiosities that can only run for microseconds before falling apart. Willow's breakthrough means we can potentially build quantum systems that maintain coherence long enough to solve real problems in drug discovery, materials science, and cryptography. We're moving from "quantum computers theoretically could work" to having the actual engineering foundation to build them at scale.
ARTIFICIAL AUTHENTICITY
The Rise of State AI Agents: Ukraine just launched Diia.AI, the world's first national AI agent that doesn't just answer questions; it delivers government services. Built on Google's Gemini 2.0 Flash and fine-tuned for Ukraine's digital-services infrastructure, the system can issue income certificates, guide citizens through bureaucratic processes, and operate 24/7 without humans manually filling out forms. Ukraine's digital minister calls this the path to "agentic states," where a single user request leads directly to results. Kafka would be fascinated: the bureaucracy is now an autonomous agent, and it's actually more responsive than the human version.
European AI Sovereignty: Not to be outdone, Europe launched EuroLLM, a 9-billion-parameter open-source model supporting all 24 official EU languages. Trained on over 4 trillion tokens and developed by a consortium spanning Lisbon to Edinburgh, it's the continent's attempt at AI independence. The geopolitics of artificial intelligence now look like the Cold War, except instead of proxy conflicts we have competing training datasets.
When Machines Get Bad Habits: The "LLM Brain Rot" study found that continual exposure to low-quality internet content (think engagement-bait tweets and sensationalist posts) induces lasting cognitive decline in large language models. Feed an LLM a steady diet of junk data and its reasoning scores drop sharply, its long-context understanding collapses, and its "dark personality traits" like psychopathy and narcissism increase. The researchers tested this by training models on varying proportions of low-quality versus high-quality data: when models were trained entirely on junk content rather than curated data, their accuracy on complex reasoning tasks plummeted from 74.9% to 57.2%. Even extensive post-training on high-quality data can't fully reverse the damage.
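For a sense of the experimental knob involved, here is an illustrative sketch of assembling a continual-training mix with a controlled junk ratio. The bait-detection heuristic and the sampling are invented stand-ins, not the study's actual methodology.

    import random

    def looks_like_engagement_bait(text: str) -> bool:
        # Crude stand-in for the study's popularity and quality signals.
        bait_markers = ("you won't believe", "!!!", "like and share", "going viral")
        return any(marker in text.lower() for marker in bait_markers)

    def build_training_mix(junk: list[str], curated: list[str],
                           junk_ratio: float, size: int) -> list[str]:
        # Sample (with replacement) a continual-training mix with the requested
        # junk proportion, e.g. junk_ratio=1.0 for the all-junk condition.
        n_junk = round(size * junk_ratio)
        return random.choices(junk, k=n_junk) + random.choices(curated, k=size - n_junk)

    mix = build_training_mix(junk=["you won't believe this!!!"],
                             curated=["A careful explanation of error correction."],
                             junk_ratio=0.8, size=10)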
AI's Trust Problem: The BBC and European Broadcasting Union released the largest study of AI news assistants to date, analyzing over 3,000 responses across ChatGPT, Copilot, Gemini, and Perplexity in 14 languages. The verdict? AI assistants misrepresent news content in 45% of responses, and 81% contain some form of issue once minor errors are counted. Gemini performed worst, with sourcing problems in 72% of responses: sometimes fabricating quotes entirely, claiming the Pope was still alive weeks after his death, or confidently asserting that NASA astronauts had never been stranded in space (they had). Meanwhile, 15% of people under 25 now get their news from AI assistants. When people don't know what to trust, they end up trusting nothing, except apparently the chatbot that hallucinates papal resurrections.
The Watermark Wars: OpenAI's Sora 2 launched with visible watermarks on every AI-generated video, only to see watermark removers flood the internet within hours. The little cartoon cloud logo meant to help distinguish reality from synthetic content can be stripped in seconds with readily available tools. Experts predicted this (watermarks have been defeated within hours of nearly every previous AI release), but it highlights the fundamental problem: you can't solve a social trust crisis with technical patches. As one security researcher noted, we're already seeing "relatively harmless AI Sora slop" go viral, with millions mistaking AI-generated clips like fake subway proposals for real footage.
When LLMs Play Poker: In a delightful exercise in watching artificial intelligence fail at being human, PokerBattle.ai hosted a week-long tournament where leading LLMs played Texas Hold'em against each other. Spoiler: they're terrible. Despite AI systems like Pluribus and DeepStack crushing human professionals (those systems use game theory and millions of simulations), raw LLMs playing poker on their own hallucinate strategy, make mathematically catastrophic bluffs, and play like enthusiastic amateurs who just discovered the game exists. As Nate Silver observed, LLMs love the heuristic "my draws missed, therefore I have to bluff huge"—a phase every human player goes through before learning better. 
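The arithmetic the models keep botching is not exotic. A pure bluff only profits if the opponent folds more often than bet / (bet + pot); the toy calculation below (illustrative numbers, no real solver) shows why a three-times-pot bluff into someone who folds 40% of the time is lighting chips on fire.

    # Required fold frequency for a pure bluff is bet / (bet + pot).
    def breakeven_fold_freq(pot: float, bet: float) -> float:
        return bet / (bet + pot)

    def bluff_ev(pot: float, bet: float, fold_prob: float) -> float:
        # Win the pot when they fold; lose the bet when they call (no equity).
        return fold_prob * pot - (1 - fold_prob) * bet

    pot = 100
    for bet in (50, 100, 300):            # half-pot, pot, 3x-pot overbet
        need = breakeven_fold_freq(pot, bet)
        print(f"bet {bet:3d} into {pot}: needs folds {need:.0%} of the time, "
              f"EV if they fold 40%: {bluff_ev(pot, bet, 0.40):+.0f}")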
CARBON-BASED PARADOX
We're anthropomorphizing AI agents with "employee" status and "social contracts," projecting human identity characteristics onto systems that are fundamentally probabilistic inference engines. As we explored in our analysis of enterprise AI strategy, this framing obscures a critical gap: strategy papers spend dozens of pages on metaphors but barely a paragraph on the identity management infrastructure that autonomous systems actually require.
But here's the thing: agentic AI isn't going away, and resisting it entirely misses the point. The technology itself—AI systems that can chain actions, manage workflows, and operate with genuine autonomy—represents real value when deployed thoughtfully. The problem isn't the agents; it's the marketing mythology around them. Strip away the hype about "teammates" and "social contracts," acknowledge the accountability gaps and identity infrastructure challenges, and you're left with powerful automation tools that can genuinely augment human capability. Ukraine's government agent delivering services 24/7 works because it has clear scope, verifiable outputs, and human oversight at critical junctures. The poker-playing LLMs fail entertainingly because we asked them to do something they're fundamentally unsuited for.
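What that looks like structurally is unglamorous: a whitelisted tool set, outputs you can verify, and a human gate on anything irreversible. A minimal sketch, with hypothetical tool names and an invented approval flow:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Tool:
        name: str
        run: Callable[[dict], dict]
        reversible: bool          # irreversible actions require human sign-off

    # The agent can only ever call what is on this list; everything else is out of scope.
    ALLOWED_TOOLS = {
        "issue_income_certificate": Tool("issue_income_certificate",
                                         lambda req: {"status": "issued"}, reversible=False),
        "lookup_case_status": Tool("lookup_case_status",
                                   lambda req: {"status": "pending"}, reversible=True),
    }

    def execute(tool_name: str, request: dict, human_approved: bool = False) -> dict:
        tool = ALLOWED_TOOLS.get(tool_name)
        if tool is None:
            raise PermissionError(f"Tool '{tool_name}' is outside the agent's scope")
        if not tool.reversible and not human_approved:
            return {"status": "awaiting_human_approval", "tool": tool_name}
        return tool.run(request)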
The path forward isn't rejection—it's clarity and solid engineering. Build agentic systems with proper identity management from the start, not bolted on afterward. Deploy them in domains where their strengths matter and their weaknesses are contained. And critically: don't layer agentic AI on top of existing duct-taped systems hoping it will somehow solve your architectural problems. AI agents amplify the properties of the systems they operate within—if your infrastructure is fragile, brittle, or poorly designed, adding autonomous agents will accelerate failure modes, not fix them. Architecture and design engineering matter more than ever at this stage.
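Concretely, "identity management from the start" means each agent is its own principal with narrow scopes, short-lived credentials, and an audit trail, rather than a shared service account with god-mode access. A minimal sketch; the token format, scopes, and authorization behavior here are assumptions, not any particular vendor's API:

    import secrets, time
    from dataclasses import dataclass, field

    @dataclass
    class AgentCredential:
        agent_id: str
        scopes: frozenset[str]
        expires_at: float
        token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

        def allows(self, scope: str) -> bool:
            return scope in self.scopes and time.time() < self.expires_at

    AUDIT_LOG: list[dict] = []

    def authorize(cred: AgentCredential, scope: str, action: str) -> bool:
        # Every decision, allowed or denied, lands in the audit trail.
        ok = cred.allows(scope)
        AUDIT_LOG.append({"agent": cred.agent_id, "scope": scope,
                          "action": action, "allowed": ok, "at": time.time()})
        return ok

    # Usage: mint a 15-minute credential that can read invoices but not pay them.
    cred = AgentCredential("procurement-agent-01",
                           frozenset({"invoices:read"}), time.time() + 900)
    authorize(cred, "invoices:read", "list unpaid invoices")   # True, logged
    authorize(cred, "payments:write", "pay invoice 123")       # False, logged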
Most importantly, stop pretending these systems are something they're not. Autonomy without accountability is just liability with extra steps, but autonomy with proper guardrails, clear scope, and honest assessment of capabilities? That's just good engineering.