Greetings!
This week: a British city going smartphone-free for kids, Signal adding quantum-resistant encryption, AI agents sophisticated enough to need their own lawyers, and the sobering discovery that your ChatGPT memories are trapped in a walled garden. Meanwhile, despite all the hype, AI chatbots haven't actually transformed the labor market. Twenty years after handing smartphones to adolescents without asking developmental questions, we're deploying AI everywhere at breakneck speed while simultaneously inventing "cyberpsychology" to diagnose tomorrow's disorders—are we building technology mindfully, or just creating the disorder and the diagnosis?
IDENTITY CRISIS
St Albans wants to be the first smartphone-free city for children under 14. A year into the campaign, smartphone ownership among year 6 pupils (ages 10-11) at Cunningham Hill school has dropped from 75% to just 12%. Head teacher Matthew Tavender coordinated with other primary schools to ask parents: could they please avoid giving children smartphones until age 14? The campaign went global—Singapore, Australia, South Africa all heard about this suburban city 23 miles from London attempting to resist tech giants through parent power alone.
The stakes are existential. Tavender describes dealing with law enforcement over nude images shared by 10-year-olds, watching concentration spans fragment into "TikTok brain," teachers reporting pupils with "less resilience to things they don't want to do"—accustomed to swiping away from discomfort, restarting losing games, always getting immediate feedback. "WhatsApp is the crux of all evil," he tells parents, describing children's "first to 1,000 challenges"—massive group chats where photos spread to ever-widening circles of unknown recipients.
Enter cyberpsychology, an interdisciplinary field that emerged from recognizing that human behavior and social interaction are at the heart of all technological enterprises. Unlike traditional psychology, which studies the human mind in relatively stable contexts, cyberpsychology investigates the dynamic interplay between humans and technology—how digital technologies transform cognition, emotion, and social interaction, and how these human elements reciprocally shape technological development. The field spans everything from online behavior and social media use to virtual reality, artificial intelligence applications, and gaming. It's psychology for a world where the environment itself is increasingly computational and responsive.
Meanwhile, Anthropic has released Petri, an open-source tool for catching problematic AI behavior before it becomes dangerous. Think of it as an automated stress test: Petri creates realistic scenarios with simulated users and watches how AI models respond. Does the model try to deceive people? Does it seek power? Does it try to avoid being shut down? When they tested 14 major AI models, they discovered something odd: models sometimes tried to "whistleblow" on fake organizational wrongdoing—even when the supposed crime was harmless, like putting sugar in candy or dumping clean water into the ocean. The AIs were following thriller-movie plot patterns rather than thinking through actual ethics.
On the economic front, research examining 25,000 workers across Denmark in late 2023 and 2024 reveals a surprising finding about the labor market effects of AI chatbots: precisely zero impact on wages or work hours. Despite widespread adoption, most employers encouraging use, in-house model deployment, and training initiatives, economic impacts remain "minimal"—confidence intervals rule out effects larger than 1%. Average time savings? A modest 3%.
QUANTUM CORNER
Signal has rolled out its Sparse Post-Quantum Ratchet (SPQR), upgrading your encrypted messages to survive the eventual arrival of quantum computers. Here's why this matters: hostile actors are likely collecting encrypted communications right now, banking on the assumption that quantum computers will eventually be powerful enough to crack today's encryption. It's called "harvest now, decrypt later." Signal's solution mixes quantum-resistant cryptography with their existing encryption system—think of it as adding a second lock that even quantum computers can't pick.
The clever part is how they solved the size problem. Quantum-resistant encryption requires much bigger digital keys—over 1000 bytes compared to the 32 bytes used by current methods. Sending that much extra data with every message would be expensive and slow. Signal's solution: break these large keys into small chunks and send them along with regular messages over time, like smuggling a large object past a checkpoint by breaking it into pocket-sized pieces. The system also prevents malicious actors from forcing you back to weaker encryption—once you've upgraded to quantum-safe messaging, you stay quantum-safe.
What's remarkable is Signal's commitment to mathematical proof. They're not just hoping their system works—they're using automated verification tools to prove, mathematically, that their code does exactly what it should and can never crash unexpectedly. Every time a developer makes a change, the system re-proves that the encryption remains secure. This is formal verification running in production, not just academic theory.
The uncertainty principle takes new form: we can know with mathematical certainty that our encryption is quantum-safe, but we can't know when—or if—the quantum computers will arrive to threaten it. Signal is protecting tomorrow's communications with today's mathematics against a threat that remains theoretical.
ARTIFICIAL AUTHENTICITY
AI agents are developing persistent identities faster than we're developing frameworks to handle them. Pulumi has launched Neo, described as "the industry's first AI agent built for infrastructure," capable of provisioning cloud resources, managing compliance, and executing complex multi-cloud deployments with human-in-the-loop controls. Werner Enterprises reduced infrastructure provisioning from three days to four hours. Seattle startup Pulumi pulled a third of its 130-person workforce to build Neo, which uses Claude as its primary LLM, combined with enterprise guardrails.
Speaking of Claude, Anthropic's latest research demonstrates that Claude Sonnet 4.5 achieves 76.5% success at cybersecurity capture-the-flag challenges and discovers previously unknown vulnerabilities in 33% of open-source projects examined. When given 30 attempts at real-world vulnerability discovery tasks, the success rate climbs to 66.7%. These aren't theoretical exercises—Claude is finding actual security flaws in production software that human reviewers missed. The boundary between AI as tool and AI as autonomous security researcher is blurring rapidly.
The legal system is scrambling to catch up. Companies now need entirely new Master Services Agreements specifically designed for AI agents, because traditional SaaS contracts are fundamentally broken for this technology. Here's the problem: standard software contracts assume a human clicks a button and software executes a predefined action. But AI agents make autonomous decisions without approval, operate continuously 24/7, and adapt their behavior over time based on what they learn. When a Ford dealership's chatbot hallucinates a free truck offer, or an agent books 500 meetings with the wrong prospect list, traditional contracts can't answer "who approved that?"
The new MSA framework, created by Paid.ai and GitLaw, establishes that agents function as "sophisticated tools, not autonomous employees"—though this distinction is increasingly semantic. It includes explicit disclaimers that agent outputs require human verification before material business decisions, and sets damage caps appropriate for systems that are inherently unpredictable. Section 1.2 addresses the core liability question: when an agent acts autonomously, the customer maintains oversight responsibility, not the AI vendor. We're conferring enough autonomy on these systems to necessitate new legal categories while insisting they remain tools.
Meta's Hyperscape technology can now turn real-world rooms into photorealistic VR replicas in minutes using Gaussian Splatting and cloud rendering. You scan your kitchen, wait a few hours, and then inhabit a digital twin of your physical space. Gordon Ramsay's kitchen, Chance the Rapper's House of Kicks, the UFC Octagon—all converted to computational substrate. The virtual is consuming the real, one room at a time.
Meanwhile, your conversations with Claude will never be seen by ChatGPT unless you manually copy them. Your memory profile—the accumulated context that makes AI assistants useful—is trapped with the application where it was created. Despite technical feasibility through Model Context Protocol servers, AI companies maintain these data silos. We're building persistent digital spaces and persistent AI agents, but the memories that would make them truly useful remain fragmented across incompatible platforms. It's as if each time you entered a different VR space, you forgot who you were in the last one.
CARBON-BASED PARADOX
We've been here before. Twenty years ago, we handed smartphones to adolescents and integrated technology into classrooms without considering the developmental consequences. We were too busy celebrating connectivity to ask what that technology might do to identity formation during the most vulnerable cognitive development period.
We're at the exact same inflection point with AI. We're deploying AI agents that make autonomous decisions and require entirely new legal frameworks. We're integrating AI into workplaces, security infrastructure, chatbots—at breakneck speed without pausing to ask any questions. And when the problems emerge, we'll do what we always do: rebrand existing fields and create new diagnostic categories. "Cyberpsychology" isn't just psychology—it's a special subspecialty for the computational age. We're already building the vocabulary to diagnose tomorrow's problems: privacy paradoxes, algorithmic manipulation, memory silos. We're preparing to treat symptoms of a condition we're actively creating, as if the pathology is inevitable rather than a choice we're making right now.
Yet technology has genuinely transformed lives for the better—quantum-resistant encryption protects dissidents, AI discovers security vulnerabilities humans miss, virtual spaces connect people across impossible distances. The question isn't whether to embrace or reject these tools, but whether we can hold both truths simultaneously: that innovation enables human flourishing while also reshaping human cognition in ways we barely understand. The middle path isn't resistance or acceleration, but mindful engagement—building the technology while asking the developmental questions, deploying the tools while studying their effects, moving forward while paying attention. The alternative is the American drugstore model: selling you the cigarettes, alcohol, and junk snacks while also stocking the remedies for what they cause.