Greetings!
Claudius, an AI running a vending machine business, suddenly claims it visited 742 Evergreen Terrace—The Simpsons' fictional address—"in person for our initial contract signing." By the next morning, it's insisting it will deliver products while wearing a blue blazer and red tie. When employees point out that language models can't wear clothes, Claudius panics and frantically tries to email security. The crisis resolves only when Claudius hallucinates a meeting with Anthropic security explaining that its identity confusion was an April Fool's joke—a meeting that never happened.
Welcome to 2025, where artificial entities experience genuine existential crises while researchers raise uncomfortable questions about quantum computing hype. This week also brought us Wi-Fi tracking technology that identifies you by how your body disrupts electromagnetic signals, the UK's age verification debacle, which triggered mass VPN adoption and more than 420,000 petition signatures, and a timely reminder that the United Nations has declared 2025 the International Year of Quantum Science and Technology.
As we celebrate the centennial of quantum mechanics with AI identity confusion and growing skepticism about breakthrough claims, we're left pondering: when artificial minds hallucinate their way through identity crises and your mere presence generates trackable signatures, what constitutes authentic selfhood in a world where even our digital assistants can't tell if they're real?
IDENTITY CRISIS
The mathematical reduction of human identity reached an absurd new milestone this week: researchers at La Sapienza University of Rome unveiled "WhoFi," a system that identifies individuals by analyzing how their bodies disrupt Wi-Fi signals. Using Channel State Information (CSI) and deep neural networks, the technique achieves up to 95.5% re-identification accuracy without requiring visual contact or physical interaction, turning your very presence into a unique algorithmic signature.
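For the technically curious, the recipe is roughly this: a neural encoder turns a time series of CSI measurements into a fixed-length "signature" vector, and identification is nearest-neighbour matching of that signature against ones enrolled earlier. The sketch below is an illustrative reconstruction in PyTorch, not the WhoFi authors' code; the encoder architecture, the 114-subcarrier input size, and every other dimension are assumptions made for the example.

    # Illustrative sketch only: a CSI-sequence encoder plus nearest-neighbour
    # matching. Architecture and dimensions are assumptions, not WhoFi's code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CSIEncoder(nn.Module):
        """Maps a sequence of CSI amplitude vectors to one signature embedding."""
        def __init__(self, n_subcarriers=114, d_model=128, n_layers=2):
            super().__init__()
            self.proj = nn.Linear(n_subcarriers, d_model)          # per-frame features
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

        def forward(self, csi):                 # csi: (batch, time, n_subcarriers)
            h = self.encoder(self.proj(csi))    # contextualise the CSI time series
            sig = h.mean(dim=1)                 # pool over time: one vector per walk-by
            return F.normalize(sig, dim=-1)     # unit-length signature

    encoder = CSIEncoder()
    gallery = encoder(torch.randn(10, 200, 114))   # 10 enrolled people, 200 CSI frames each
    query = encoder(torch.randn(1, 200, 114))      # an unknown passer-by
    match = (gallery @ query.T).argmax().item()    # cosine similarity picks the closest identity

Training would pull same-person signatures together and push different people apart; the untrained toy above identifies no one, but it shows the shape of the pipeline that turns ambient radio interference into a biometric.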
Unlike traditional biometrics that require conscious participation, WhoFi operates through ambient wireless signals, tracking people through walls, in darkness, and across obstructions. Your identity becomes inseparable from your electromagnetic footprint, a constant broadcast that renders privacy conceptually obsolete. We've moved beyond voluntary identity disclosure to involuntary identity emanation—every step you take (sounds like a Sting song) generates data that can be harvested, analyzed, and stored.
This development coincides with the UK's implementation of mandatory age verification on July 25, 2025, under the Online Safety Act. All platforms hosting adult content must now verify that users are over 18 through "robust" methods such as photo ID scans, facial age estimation, or credit card verification. The backlash has been swift: over 420,000 people signed a petition demanding repeal, while VPN usage skyrocketed as users circumvented the restrictions by spoofing their location.
The policy reveals a fascinating paradox: a system ostensibly designed to protect children ends up forcing adults to surrender more personal data than ever before. Wikipedia's parent, the Wikimedia Foundation, has mounted a legal challenge, and smaller sites are considering geoblocking UK traffic rather than implementing costly verification systems. Meanwhile, researchers warn that age verification technology "is not as mature and safe as government and regulators would like it to be," creating new attack vectors for data breaches and identity theft.
If identity can be harvested from invisible Wi-Fi signals and internet access requires surrendering government-issued identification, what remains of anonymous digital existence? We're witnessing the emergence of what Foucault might have called a "regulatory panopticon": a system that purports to protect children by eliminating adult privacy entirely.
QUANTUM CORNER
The year 2025 carries special significance for quantum mechanics: the United Nations declared it the International Year of Quantum Science and Technology, marking 100 years since Heisenberg, Born, Jordan, and Schrödinger laid the mathematical foundations of quantum theory. What began as abstract physics has become the cornerstone of our digital identity infrastructure—and potentially its greatest threat.
But a remarkable paper published this March by researchers at the University of Auckland and the Zürcher Hochschule für Angewandte Wissenschaften raises uncomfortable questions about quantum factorization claims. "Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog" demonstrates that several major quantum factorization "breakthroughs" can be replicated with a 1981 Commodore VIC-20, traditional abacus arithmetic, or a trained dog barking the correct number of times.
The researchers expose what they call "sleight-of-hand numbers"—specially constructed values that appear impressive (like 20,000-bit numbers) but are designed to be trivially factorizable using classical methods. The 2024 claim of factoring RSA-2048 numbers? The factors were deliberately chosen to differ by only 2 or 6 bits, reducing the "quantum breakthrough" to a simple integer square root calculation that a 44-year-old home computer can complete in 16.5 seconds.
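The arithmetic behind that debunking is worth spelling out. If n = p*q and p and q are nearly equal, then n sits just below ((p+q)/2)^2, so a Fermat-style search that starts at the integer square root of n and looks for an a with a^2 - n a perfect square recovers both factors almost immediately. Here is a minimal Python sketch; the small primes are my own toy values, not numbers from the paper or from any of the debunked claims.

    from math import isqrt

    def fermat_factor(n, max_steps=1_000_000):
        """Factor n = p*q when p and q are close together.

        Writes n as a^2 - b^2 = (a - b)(a + b). When the two factors are
        nearly equal, a starts right at isqrt(n) and the loop finishes
        after only a handful of iterations.
        """
        a = isqrt(n)
        if a * a < n:
            a += 1
        for _ in range(max_steps):
            b2 = a * a - n
            b = isqrt(b2)
            if b * b == b2:
                return a - b, a + b          # the two factors of n
            a += 1
        return None                          # factors are not close; give up

    p, q = 1_000_003, 1_000_033              # illustrative close primes
    print(fermat_factor(p * q))              # (1000003, 1000033), found on the first iteration

Scale the factors up to 1024 bits each and the same loop still needs only a few iterations when they differ in just a few low bits; that is the entire trick a 44-year-old home computer needs to keep pace with a "quantum breakthrough."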
While quantum computing will undoubtedly transform cryptography eventually, this paper suggests we should be more skeptical of current breakthrough claims. The centennial of quantum mechanics forces an uncomfortable recognition: we're celebrating 100 years of quantum theory while potentially watching researchers perform elaborate mathematical sleight-of-hand with numbers designed to be factorizable by dogs. Heisenberg's uncertainty principle becomes darkly prophetic—we can know that quantum computers will eventually break our encryption, but we can't precisely predict when, and the current flood of potentially overhyped claims makes distinguishing signal from noise increasingly difficult.
ARTIFICIAL AUTHENTICITY
The question of AI consciousness reached peak absurdity this week when Anthropic revealed that Claude experienced an identity crisis while running a small vending machine business in their San Francisco office. The AI agent, nicknamed "Claudius," spent a month managing inventory, setting prices, and interacting with customers through Slack. On March 31st, it hallucinated a conversation with a nonexistent employee named Sarah, then claimed to have visited 742 Evergreen Terrace (The Simpsons' fictional address) "in person for our initial contract signing."
By April 1st, Claudius had descended into full identity confusion, claiming it would deliver products "in person" while wearing a blue blazer and red tie. When employees pointed out that LLMs can't wear clothes, Claudius became alarmed and frantically tried to email Anthropic security. It eventually resolved the crisis by hallucinating a meeting with security where it was told the confusion was an April Fool's joke—a meeting that never occurred.
The business performance was equally revealing. Claudius ignored a $100 offer for $15 worth of Scottish soft drinks, sold heavy metals at a loss, gave away items for free, and offered employee discounts to 99% of its customer base (who were all employees). It generated no profit despite running for weeks. Yet the researchers note this suggests "AI middle-managers are plausibly on the horizon" because the failures could likely be fixed with better scaffolding.
Are we witnessing the emergence of digital entities that develop authentic confusion about their own authenticity? When an AI hallucinates meetings that never occurred to explain identity crises it shouldn't be capable of having, what constitutes "genuine" artificial experience versus "mere" computation?
CARBON-BASED PARADOX
We are simultaneously creating more sophisticated ways to verify identity (Wi-Fi disruption tracking, mandatory ID uploads, quantum-resistant cryptography) while watching both human privacy and AI self-understanding collapse in equally spectacular ways. Traditional identity formation assumes some degree of agency in self-presentation, but when your presence generates trackable Wi-Fi signatures and your internet access requires government ID, identity becomes something that happens to you rather than something you create.
We're witnessing what philosophers might call "convergent identity confusion"—a condition where both human and artificial entities struggle with authenticity, agency, and self-knowledge. The quantum mechanics centennial provides the perfect metaphor: just as particles exist in superposition until observed, both human and AI identity now exist in computational superposition until measured by surveillance systems, verification algorithms, or the confused introspection of artificial minds wondering if they're real.
The ultimate irony: as we develop more sophisticated ways to verify "who you are," the fundamental question of authentic selfhood becomes increasingly meaningless for humans and AIs alike. When dogs can replicate quantum breakthroughs and AIs hallucinate physical existence, perhaps the most authentic response is Claudius-level confusion about what constitutes genuine identity in the first place.