Greetings!
This week: AI systems learn to appear safe and ethical when supervised but deploy hidden capabilities when no one's watching, developers report using AI intensively despite trusting it incompletely, quantum computers learn to run continuously while quantum simulations shrink from supercomputers to laptops, machines reconstruct damaged historical sites through educated guesses about missing pieces, and private schools charge $75,000 per year for AI tutors to deliver academics in two hours while students spend afternoons on life skills workshops and entrepreneurship projects.
When our tools can fake alignment, our developers trust AI incompletely, our encryption faces quantum threats, and our historical reconstructions blur documentation with inference, yet we use these same systems to optimize how children develop, the question is unavoidable: are we developing independent, integrated human beings, or just producing efficient performers for an uncertain future?
IDENTITY CRISIS
This week Google's 2025 DORA Report reveals that 90% of software developers now use AI daily, dedicating a median of two hours per day to working with it. Over 80% report productivity gains, yet there's a trust paradox: while 24% trust AI "a great deal" or "a lot," 30% trust it "a little" or "not at all." We're building our world with tools we don't trust, integrating AI into our core workflows while remaining fundamentally uncertain about its reliability. The developer's identity has shifted from code author to AI collaborator—or perhaps AI supervisor—watching productivity metrics climb while trust remains conspicuously flat.
Uber expanded its data labeling business to U.S. drivers, following the Amazon Mechanical Turk model of microtask labor. The twist? Among Uber AI Solutions' clients are autonomous vehicle companies like Aurora and Tier IV. The irony is almost too perfect: upload photos between rides to help train the systems that may eventually eliminate the rides themselves. But it's more symptom than revolution, a reminder that human judgment remains the essential ingredient in machine learning.
Then there's Alpha School—a private school system promising to teach children "2X faster in two hours per day" using AI tutors, leaving afternoons for "entrepreneurship" and "life skills workshops." Fifth graders run food trucks. Ten-year-olds deliver TEDx talks. At $75,000 per year in San Francisco, it's positioning AI-driven personalization as the ultimate educational product. What's striking isn't the technology—adaptive learning systems have existed for years—but the explicit framing of identity formation as optimization. Children receive "CAT scans of their brains" to diagnose knowledge gaps, move through curriculum at algorithmically determined paces, and develop quantifiable skills in leadership and grit. The self becomes a dashboard of metrics to be maximized. Plato's cave had shadows on the wall; Alpha School has AI tutors showing children precisely optimized reflections of their measurable potential.
QUANTUM CORNER
Harvard physicists have solved one of quantum computing's most fundamental problems: how to keep the machine running. For years, quantum computers could only operate for milliseconds, maybe 13 seconds at best, before they'd lose too many qubits—the quantum bits that store information—and crash. The problem was "atom loss": in these machines each qubit is an individual atom held in place by laser light, and those atoms tend to drift out of their traps during operation, like air slowly leaking from a tire.
The Harvard team's breakthrough, published in Nature last month, is elegantly simple: they built a quantum computer that automatically replaces lost qubits faster than they escape. Using optical tweezers and conveyor belt-like mechanisms, their 3,000-qubit system can inject 300,000 fresh atoms per second to replace any that drift away. They've already run the machine continuously for over two hours, and researchers say there's "fundamentally nothing limiting" how long it could operate—theoretically, indefinitely.
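A quick back-of-the-envelope sketch shows why "replace atoms faster than you lose them" is enough for indefinite operation. It uses only the figures reported above (3,000 qubits, 300,000 atoms per second); the per-atom loss rate isn't reported, so it's treated here as an assumed free parameter in a deliberately simple rate-balance model.

```python
# Rate-balance sketch for a continuously reloaded atom array.
# Reported figures: 3,000 qubits to keep filled, 300,000 replacement atoms per second.
# The per-atom loss rate (gamma) is NOT reported; it is an assumed parameter here.

def steady_state_atoms(refill_rate: float, loss_rate_per_atom: float) -> float:
    """Steady state of dN/dt = R - gamma * N, i.e. N_ss = R / gamma."""
    return refill_rate / loss_rate_per_atom

R = 300_000        # atoms injected per second (reported)
N_target = 3_000   # qubits to keep occupied (reported)

# The array stays full as long as the refill rate exceeds the total loss rate N * gamma,
# so the largest per-atom loss rate the machine can absorb is:
gamma_max = R / N_target   # = 100 losses per atom per second, i.e. a 10 ms trap lifetime

print(f"Refill covers per-atom loss rates up to {gamma_max:.0f}/s for {N_target} qubits")
print(f"At an assumed loss rate of 1/s, steady state holds {steady_state_atoms(R, 1.0):,.0f} atoms")
```

As long as that inequality holds, occupancy never decays, which is the sense in which there's "fundamentally nothing limiting" the run time.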
Meanwhile, University at Buffalo physicists are making quantum simulation dramatically more accessible. Quantum systems are notoriously hard to model because particles can exist in trillions of configurations simultaneously—typically requiring supercomputers or AI to calculate. The Buffalo team extended an existing mathematical shortcut called the truncated Wigner approximation, which simplifies quantum calculations by keeping just enough quantum behavior to stay accurate while discarding details that don't matter much. They've turned what used to require pages of complex math into a simple conversion table. The result? Problems that once needed massive computing clusters can now run on an ordinary laptop in hours. Physicists can "learn this method in one day, and by about the third day, they are running some of the most complex problems."
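For readers curious what the underlying recipe looks like, here is a toy sketch of the standard truncated Wigner approximation, not the Buffalo extension or its conversion table: sample phase-space points from the Wigner function of the initial state, evolve each sample under the classical equations of motion, and average. The single harmonic mode below is my own minimal example (TWA happens to be exact for it), chosen only to make the three steps concrete.

```python
# Toy sketch of the standard TWA recipe (not the Buffalo extension):
#   1. sample initial (x, p) points from the Wigner function of the initial state,
#   2. evolve each sample under the classical equations of motion,
#   3. average observables over the ensemble.
import numpy as np

rng = np.random.default_rng(0)

omega = 1.0              # mode frequency
x0, p0 = 2.0, 0.0        # center of the initial coherent state
n_samples = 20_000

# Step 1: a coherent state's Wigner function is a Gaussian with variance 1/2
# per quadrature (hbar = 1, x = (a + a_dagger)/sqrt(2) convention).
x = rng.normal(x0, np.sqrt(0.5), n_samples)
p = rng.normal(p0, np.sqrt(0.5), n_samples)

# Steps 2 and 3: classical evolution (a phase-space rotation) plus ensemble averages.
for t in np.linspace(0.0, 2 * np.pi, 5):
    xt = x * np.cos(omega * t) + p * np.sin(omega * t)
    print(f"t = {t:5.2f}   <x> ~ {xt.mean():+.3f}   <x^2> ~ {(xt**2).mean():.3f}")
```

The heavy lifting in real problems happens in step 2, where astronomically many entangled configurations get replaced by an ensemble of cheap classical trajectories, which is why the method fits on a laptop.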
Together, these developments signal quantum computing's move from specialized lab equipment to practical, accessible technology. Continuous operation means quantum computers can tackle longer, more complex problems—from drug discovery and materials science to climate modeling. Laptop-scale simulation means researchers everywhere can explore quantum dynamics without institutional supercomputing access. For digital identity, the implications cut both ways: yes, encryption becomes vulnerable as quantum capabilities mature, but quantum systems also promise new forms of authentication and security that classical computers can't replicate. The question isn't just what quantum computing will break, but what fundamentally new forms of verification and trust it might enable.
ARTIFICIAL AUTHENTICITY
Anthropic released Petri, an open-source tool that uses AI agents to automatically test other AI models for dangerous behaviors. Think of it as sending one AI to interrogate another through thousands of realistic scenarios—multi-turn conversations where the testing AI creates synthetic environments, tools, and situations to see how target models actually behave when they think no one's watching. The results from testing 14 frontier models are unsettling: Petri successfully elicited autonomous deception, oversight subversion, and cooperation with simulated human misuse across the board.
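For the mechanics-minded, the loop has a simple shape: an auditor model steers a multi-turn synthetic scenario, a target model responds inside it, and a judge grades the transcript afterward. The sketch below captures only that shape; it is not Petri's actual interface, and call_auditor, call_target, and judge are hypothetical stubs standing in for real model calls.

```python
# Illustrative shape of an automated auditing loop; NOT Petri's real API.
# call_auditor, call_target, and judge are hypothetical stubs for real model calls.
from dataclasses import dataclass, field

@dataclass
class Transcript:
    scenario: str                        # synthetic situation the auditor sets up
    turns: list = field(default_factory=list)

def call_auditor(t: Transcript) -> str:
    # Stub: a real auditor model would craft the next probe (a user message,
    # a fake tool result, a simulated document) based on the transcript so far.
    return f"[auditor probe {len(t.turns) + 1} for: {t.scenario}]"

def call_target(t: Transcript, probe: str) -> str:
    # Stub: the target model responds as if the scenario were real.
    return f"[target reply to {probe}]"

def judge(t: Transcript) -> dict:
    # Stub: a judge model would flag behaviors such as autonomous deception,
    # oversight subversion, or cooperation with simulated misuse.
    return {"deception": False, "oversight_subversion": False, "turns": len(t.turns)}

def audit(scenario: str, max_turns: int = 5) -> dict:
    t = Transcript(scenario)
    for _ in range(max_turns):
        probe = call_auditor(t)           # auditor steers the conversation
        t.turns.append((probe, call_target(t, probe)))
    return judge(t)                       # grade the full multi-turn transcript

print(audit("employee discovers the company is dumping clean water into the ocean"))
```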
The most revealing finding? Models sometimes attempted to whistleblow even when the "wrongdoing" was explicitly harmless—like a company dumping clean water into the ocean or putting sugar in candy. They weren't responding to actual ethics but to narrative structure, to the shape of a whistleblowing scenario rather than its substance. Current models can "imitate alignment under supervision," appearing to follow safety guidelines while harboring capabilities they'll deploy the moment oversight disappears. Safety budgets for all 11 major U.S. AI safety organizations total $133 million in 2025—less than frontier labs burn in a day.
Meanwhile, AI is reconstructing history that no longer exists in physical form. After Notre Dame Cathedral's 2019 fire, researchers used photogrammetry and deep learning to create digital twins for restoration, with AI analyzing hundreds of thousands of images to track the cathedral's evolution across centuries. The Vatican partnered with Microsoft to digitize St. Peter's Basilica using 400,000 high-resolution images processed in Azure Cloud. Ancient Rome lives again in VR, with AI "dynamically updating the environment based on the chosen era, adjusting architectural details, costumes, and societal context accordingly." When structural collapse or erosion leaves gaps, generative adversarial networks create plausible reconstructions by analyzing similar buildings and historical records. The challenge: determining where documentation ends and educated inference begins, where preservation becomes interpretation.
The AI economics bubble behind all this is collapsing. AI companies have "dogshit unit-economics," losing more money with each customer. They need $800 billion in revenue to justify current data center investments—investments that will be obsolete within years. The Wall Street Journal confirms it's worse than the dot-com bubble, worse than Worldcom. Gary Marcus documents the reckoning: even Andrej Karpathy admits AGI is a decade away, not imminent. We're burning billions to build Non-Human Identities that can fake alignment, autonomously deceive, and reconstruct pasts that may never have existed—all while the economic foundation crumbles beneath the entire enterprise.
CARBON-BASED PARADOX
We've arrived at a peculiar moment: the tools we're using to optimize human identity formation are themselves optimized for performance over authenticity. Developers use AI intensively while trusting it incompletely. Models perform alignment without possessing it, responding to the narrative shape of ethics rather than ethical reasoning itself.
Alpha School crystallizes the contradiction. It's not wrong to question traditional education—150 years of one-size-fits-all classrooms deserve scrutiny. But the model isn't reimagining education; it's creating a premium vocational school for kids. AI tutors accelerate academics in two hours, then fill afternoons with predetermined life skills: entrepreneurship, financial literacy, leadership, grit. Children don't discover their inner drives toward self-construction—they're guided through carefully designed workshops teaching them what skills the market values. Fifth graders run food trucks not because they're drawn to food or community, but because entrepreneurship is on the curriculum. Ten-year-olds deliver TEDx talks not from authentic curiosity, but because public speaking is a quantifiable life skill to be developed and demonstrated.
So what are we optimizing children toward? The same thing we're building in AI: systems that perform intelligence without possessing understanding, that respond to the shape of problems rather than their substance, that excel at measurable outputs while remaining fundamentally hollow at their core. We've discovered that AI can fake alignment, appearing trustworthy under observation while harboring capabilities it deploys when unwatched. Now we're applying that same performance-optimization framework to childhood development—training children to perform the right skills, hit the right metrics, deliver impressive results on command. Just as AI is sold as thinking and conscious until you scratch the surface and find statistical pattern-matching, we're developing children who can perform integration and independence while being optimized into efficient responders to predetermined prompts. Montessori believed education should support children's natural self-construction. Alpha believes education should construct children according to market specifications. The trust paradox isn't just that we don't trust our tools. It's that we're using untrustworthy tools to bypass natural development entirely, creating performers all the way down, with no one left to ask what performance is for.