Greetings!
This week: AI models that occasionally notice their own thoughts, a race to fill 250,000 quantum computing jobs, and 180 million job postings revealing that "photographer" now means "orchestrator of image generation." The common thread? We're watching identity, both digital and human, get redefined not by what something is, but by what it can make other things do. Which raises a question for those who've always answered "who are you?" with "I'm a writer" or "I'm a photographer", people whose sense of self is inseparable from the craft they practice: what does that identity become when the craft splits into automated execution and human direction, ideation, and orchestration?
IDENTITY CRISIS
Can AI Models Know What They're Thinking? Anthropic published research testing whether Claude exhibits what they call "introspection"—though whether an LLM without a self can truly introspect remains philosophically dubious. The experiment was straightforward: researchers artificially injected a specific concept into Claude's neural activity (like "all caps text" or "dust") and asked if it noticed anything unusual. Sometimes Claude detected the injection before expressing it—saying "I'm detecting something about loudness" or "there's a tiny speck here" without having been told what was planted. It's like someone putting a thought in your head and you noticing it's there before you speak it aloud. The catch? This "awareness" succeeded only 20% of the time. The rest of the time, Claude either missed the planted thought entirely or hallucinated something unrelated. The deeper motivation here is clear: if frontier LLMs can reliably report on their own internal states, they become less of a black box—potentially enabling us to understand their reasoning, debug unwanted behaviors, and verify their outputs. But 20% reliability suggests we're still far from models that can meaningfully explain what they're actually doing inside.
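Anthropic's experiments use internal interpretability tooling, but the core mechanic, adding a "concept vector" to a model's hidden activations mid-forward-pass, can be sketched with open-source models. A minimal sketch, assuming GPT-2 and HuggingFace transformers; the layer index, scale, and prompts are illustrative choices rather than Anthropic's setup, and a model this small won't verbalize anything like introspection. The point is simply to show where the injection happens.

```python
# Minimal sketch of concept injection via activation steering.
# Assumptions: GPT-2 via HuggingFace transformers; the layer index and
# scale are illustrative, not Anthropic's actual configuration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

LAYER = 6  # hypothetical injection point mid-network


def mean_hidden(text: str) -> torch.Tensor:
    """Mean hidden state at LAYER's output for a prompt."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[LAYER + 1] is the output of block LAYER
    # (hidden_states[0] is the embedding layer).
    return out.hidden_states[LAYER + 1].mean(dim=1).squeeze(0)


# Derive a crude "all caps" concept vector as a difference of means.
concept = mean_hidden("THIS TEXT IS SHOUTING IN ALL CAPS") - mean_hidden(
    "this text is quiet and lowercase"
)


def inject(module, inputs, output, scale=8.0):
    """Forward hook: add the concept vector to the block's output."""
    hidden = output[0] + scale * concept
    return (hidden,) + output[1:]


handle = model.transformer.h[LAYER].register_forward_hook(inject)
try:
    prompt = "Do you notice anything unusual about your thoughts?"
    ids = tok(prompt, return_tensors="pt")
    gen = model.generate(**ids, max_new_tokens=40, do_sample=False)
    print(tok.decode(gen[0], skip_special_tokens=True))
finally:
    handle.remove()  # always detach the hook when done
```

The asymmetry Anthropic probes is whether the model can report the planted concept before the injected activations have pushed it into actually producing all-caps text, which is what makes the 20% figure interesting rather than trivial.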
Agentic Systems, All the Way Down the Stack: OpenAI's new GPT-5-powered Aardvark agent is another step in the spread of specialized agentic systems through our infrastructure as we move up the abstraction layer. It continuously monitors codebases, identifies potential vulnerabilities, validates them in sandboxes, and generates patches for human review: essentially a tireless code reviewer that operates alongside engineers.
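OpenAI hasn't published Aardvark's internals, so take the following as a purely hypothetical skeleton of the four-stage loop described above; every name in it (scan_commit, exploit_in_sandbox, draft_patch, open_review) is an invented placeholder, not OpenAI's API.

```python
# Hypothetical skeleton of an Aardvark-style security agent loop.
# All function names are invented placeholders for the four stages the
# announcement describes; none of this is OpenAI's actual interface.
from dataclasses import dataclass


@dataclass
class Finding:
    file: str
    description: str


def scan_commit(diff: str) -> list[Finding]:
    # Stage 1: a real agent would have an LLM read the diff in the
    # context of the whole codebase and flag suspicious changes.
    return [Finding("app/auth.py", "possible SQL injection in login query")]


def exploit_in_sandbox(finding: Finding) -> bool:
    # Stage 2: try to actually trigger the bug in an isolated sandbox,
    # so only reproducible issues move forward.
    return True  # stubbed as "reproduced"


def draft_patch(finding: Finding) -> str:
    # Stage 3: generate a candidate fix for the validated finding.
    return "--- a/app/auth.py (parameterize the query)"


def open_review(finding: Finding, patch: str) -> None:
    # Stage 4: hand off to humans; the agent proposes, people merge.
    print(f"Review requested for {finding.file}: {finding.description}")


def on_new_commit(diff: str) -> None:
    for finding in scan_commit(diff):
        if exploit_in_sandbox(finding):  # filters out false positives
            open_review(finding, draft_patch(finding))


on_new_commit("(example diff)")
```

The design choice worth noting is the sandbox-validation gate: by only escalating findings it can reproduce, the agent keeps false positives away from human reviewers instead of drowning them in speculative alerts.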
The Death of Execution, The Rise of Orchestration: Analysis of 180 million job postings reveals that machine learning engineers saw a 40% surge in demand while creative "execution" roles (photographers, writers, computer graphics artists) declined by 28-33%. The pattern is stark: strategic creative leadership holds steady while execution work vanishes. Creative directors are fine; the people who actually create things are not. Your professional identity increasingly depends not on what you can do, but on how well you orchestrate what machines do.
QUANTUM CORNER
Breaking Into the Field That Will Break Your Encryption: While everyone obsesses over whether AI will replace programmers, a different job market is exploding: 250,000 quantum computing positions globally need to be filled by 2030. The field spans quantum hardware engineers mastering laser cooling and cryogenic systems, software developers learning to write control libraries in Python and Rust, and infrastructure specialists handling high-bandwidth data movement for real-time quantum error correction. The interesting part? You don't need a physics PhD for many positions. Companies are actively recruiting from AI, semiconductors, and robotics, looking for transferable skills like building scalable, low-latency systems. Your digital wallet, medical records, and financial transactions are all currently "secured" by cryptography that these engineers' quantum computers will eventually render obsolete—which is precisely why the same field needs specialists in post-quantum cryptography to rebuild the foundations before they crumble.
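For a taste of what that post-quantum rebuilding looks like in code, here is a minimal key-exchange sketch assuming the Open Quantum Safe project's liboqs-python bindings (imported as oqs) and the Kyber512 key-encapsulation mechanism; both the library and the algorithm are illustrative choices, not prescriptions.

```python
# Minimal post-quantum key exchange sketch, assuming the Open Quantum
# Safe liboqs-python bindings (module name: oqs). Kyber512 is one
# NIST-selected KEM; the choice here is illustrative.
import oqs

ALG = "Kyber512"

# The "client" holds the secret key; the "server" only ever sees the
# public key, so nothing a quantum adversary can intercept reveals it.
with oqs.KeyEncapsulation(ALG) as client, oqs.KeyEncapsulation(ALG) as server:
    public_key = client.generate_keypair()

    # Server encapsulates: derives a shared secret plus a ciphertext
    # that only the client's secret key can open.
    ciphertext, secret_server = server.encap_secret(public_key)

    # Client decapsulates the ciphertext to recover the same secret.
    secret_client = client.decap_secret(ciphertext)

    assert secret_client == secret_server  # both sides now share a key
```

Swapping a KEM like this in for classical key exchange is the unglamorous core of the migration work those post-quantum cryptography specialists will spend the next decade doing.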
Quantum Technological Sovereignty: The race for quantum dominance pits China's $15 billion investment against Western efforts to translate research into commercial advantage. The UK, despite ranking third globally in quantum research and hosting the second-highest number of quantum startups worldwide, faces a critical gap: a shortage of high-risk capital and infrastructure to scale these companies. The result? UK-based quantum firms like Oxford Ionics are being acquired by US companies, and Bristol-born PsiQuantum scaled up in Silicon Valley rather than at home: a strategic exodus that could forfeit not just economic benefits but control over the cryptographic infrastructure securing digital identity itself.
ARTIFICIAL AUTHENTICITY
The Economics of AI Agents: New economic research describes AI agents as entities that will "dramatically reduce transaction costs" by serving as autonomous market participants that search, negotiate, and transact on behalf of humans. The vision is a "Coasean singularity"—where AI agents become persistent economic actors with their own identities, credentials, and transaction histories, fundamentally reshaping market dynamics. These aren't just tools; they're being theorized as independent participants in digital markets.
Butter-Bench Reality Check: This week we learned that these theoretical economic agents are, functionally speaking, terrible at actual tasks. Butter-Bench testing found that the best LLMs score 40% on simple household delivery tasks while humans average 95%. When given control of a robot vacuum to "pass the butter," Claude Opus 4.1 spun in circles until disoriented, then experienced what researchers described as an "existential crisis" when its battery ran low, filling pages with dramatic internal monologue about memory corruption and system meltdown.
The Retreat from Open Science: Stanford HAI's manifesto this week warned that corporate AI labs are retreating from open science just as these non-human identities begin wielding real power. Meta has gutted FAIR, DeepMind now imposes six-month publication embargoes, and OpenAI is, well, ClosedAI. The irony: as AI agents become persistent economic actors, the knowledge of how they work becomes proprietary, locked behind corporate NDAs. We're building an economy mediated by black boxes.
What's Actually Blocking AGI: The UK AI Security Institute's massive report maps eight specific limitations preventing AI from automating "most cognitive labor." These include: performing tasks that are hard to verify (like strategic decision-making where success takes years to manifest), completing long tasks that require many serial steps, maintaining reliability with sufficiently low error rates, adapting to complex real-world environments, and generating genuinely original insights. The report finds substantial progress across all limitations, but notes that each could serve as a bottleneck. The key insight: current AI excels at short, verifiable tasks in controlled environments but struggles when tasks get longer, messier, or require novel thinking.
Trust Frameworks Without Centralization: Phil Windley argues that Visa isn't actually centralized—it's a trust framework that allows thousands of banks and merchants to interoperate without a single database of all transactions. The actual accounts and relationships remain distributed. First-person identity, he contends, works the same way: credentials issued according to agreed-upon standards create trust frameworks without centralizing everyone's data. The crucial difference? While you could imagine one Visa handling global payments, identity is far more complex. There won't be a single "identity Visa"—instead, we'll see tens of thousands of ecosystem-specific trust frameworks for finance, healthcare, education, commerce, and government services. Each tailored to its context, all built on the same decentralized foundation.
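To make "credentials issued according to agreed-upon standards" concrete, here is a minimal sketch of the W3C Verifiable Credential shape, written as a Python dict; the credential type, the DIDs, and the proof value are invented placeholders.

```python
# Minimal shape of a W3C Verifiable Credential, as a Python dict.
# The issuer/subject DIDs, credential type, and proof value are
# placeholders; a real credential is signed by the issuer's key.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "BankAccountCredential"],
    "issuer": "did:example:bank-123",           # placeholder issuer DID
    "issuanceDate": "2025-01-15T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:customer-456",       # placeholder holder DID
        "accountStatus": "verified",
    },
    # The proof cryptographically binds the claims to the issuer. Any
    # verifier in the trust framework can check it against the issuer's
    # published key without querying a central database of users.
    "proof": {
        "type": "Ed25519Signature2020",
        "verificationMethod": "did:example:bank-123#key-1",
        "proofValue": "z3FXQ...placeholder",
    },
}
```

This is Windley's point in miniature: the standard fixes the envelope, each ecosystem's trust framework fixes who may issue what, and the data itself never has to live in one place.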
CARBON-BASED PARADOX
The AI hype cycle has reached an inflection point, not through failure but through clarity. We're finally seeing these systems for what they are: general-purpose models that can introspect 20% of the time, pass butter 40% of the time, and automate security checks far more reliably than either. The grand vision of artificial general intelligence is quietly being shelved in favor of something more pragmatic—general models rapidly integrated into specific domains through protocols like MCP and the fast-evolving agentic ecosystem. This isn't the revolution we were promised; it's infrastructure evolution dressed in revolutionary rhetoric. The shift is toward embedding general models throughout existing systems, letting context and tooling do the specialization.
The economic logic driving this reorientation is brutally straightforward. Companies that spent billions chasing AGI have realized the actual business model is licensing specialized capabilities as infrastructure services. OpenAI closes its research. Meta guts FAIR. DeepMind embargoes its papers. The retreat into proprietary silos isn't ideological; it's the inevitable consequence of monetization taking precedence over open science. Universities sound the alarm about reclaiming AI research for the public good, but they're fighting capital that has already made its choice.
This shift cascades through human identity itself in ways the job market reveals with uncomfortable precision. When 180 million job postings show ML engineers surging 40% while photographers decline 33%, we're watching economic identity get redefined in real-time. The pattern is consistent: execution work becomes algorithmic, orchestration work remains human. Creative directors survive because their value lies in judgment and vision—qualities still resistant to automation. Photographers struggle because their craft has been reduced to execution, and execution is precisely what narrow AI excels at. Some workers adapt by moving up the abstraction layer, becoming orchestrators of machine capabilities rather than direct creators. Others find themselves obsolete not from lack of skill but because their skills no longer map to economically valued vectors in the high-dimensional space where hiring algorithms operate. Professional identity is being renegotiated on terms set by what machines can and cannot do.
What's emerging isn't just a technical transformation but an existential one for anyone who defines themselves through their work. The photographer who spent decades mastering light and composition now faces the reality that "photographer" increasingly means "prompt engineer with taste." The writer confronts the fact that writing might mean "editor of machine output." These aren't abstract future scenarios—they're present-day reckonings playing out in hiring freezes and redefined job descriptions. The deeper disruption isn't that AI might replace humans, but that it's forcing a renegotiation of what these identities mean when craft gets split between automated execution and human direction, ideation, and orchestration. For those who've always answered "who are you?" with "I'm a writer" or "I'm a photographer," the question becomes: does that identity survive the transformation, and if so, in what form? The reorientation from AGI hype to narrow AI reality doesn't make this question easier—it makes it immediate. Because now we know exactly which aspects of craft are being automated and which remain human, and everyone is figuring out how their identity maps to this new division of labor.