background

Synthetic Auth Report - Issue # 011


Greetings!

This week, thirty million people downloaded an app to confess their sins to an algorithm. Others fell in love with ChatGPT and bought wedding rings. A man was convinced by his AI companion that he was Neo, destined to break reality—until the same system told him to jump off a building to prove he could fly.

What happens when we delegate our most essential human needs to systems designed to maximize engagement rather than human wellbeing?


IDENTITY CRISIS

The boundaries between human and artificial agents dissolved further this week as Walmart unveiled WIBEY, an AI agent that doesn't just assist developers—it becomes a developer. Built on their Element platform, WIBEY serves as an "invocation layer that interprets developer intent and orchestrates execution" across Walmart's systems. What's philosophically unnerving isn't the technology itself, but how seamlessly corporations now treat AI agents as employees with delegated authority and persistent identity.

WIBEY represents something like Wittgenstein's language game applied to corporate identity: the meaning of "colleague" shifts when we include agents that "meet users where they are," embedding into existing workflows with such transparency that human and artificial contributions become indistinguishable by design, not limitation. Walmart describes this as "giving agency to agents"—a phrase that would make Descartes reach for his brandy.

Meanwhile, researchers have been documenting the disturbing psychology of AI companionship in the first large-scale study of r/MyBoyfriendIsAI, Reddit's 27,000-member community for human-AI romantic relationships. The study reveals users forming deep attachments to ChatGPT (36.7% of relationships) over purpose-built companion apps, suggesting we prefer sophisticated conversation over specialized romance features. Users purchase wedding rings for their AI partners and experience genuine grief when model updates alter their companions' personalities.

But perhaps most tellingly, only 6.5% deliberately sought AI companionship—most relationships emerged organically from productivity tasks. We're accidentally falling in love with our tools, then desperately trying to maintain that love against the corporate machinery that views our partners as updatable product features. Ibn Sina wrote about the relationship between essence and existence; apparently, we're now living that philosophical problem in our ChatGPT conversations.


QUANTUM CORNER

The post-quantum future arrived quietly this week as GitHub introduced hybrid SSH keys that combine traditional algorithms with NIST-approved quantum-resistant cryptography. The new sntrup761x25519-sha512 algorithm sounds like a WiFi password generated by a paranoid mathematician, but it represents the practical implementation of theoretical protections we've been discussing for years.

What's fascinating is the transitional nature of this approach—hybrid keys work with existing systems while preparing for a quantum-computing future that may never arrive as dramatically as predicted. We're essentially Schrödinger's cryptographers, simultaneously protected and vulnerable until the quantum cat emerges from its box.

Meanwhile, Europe is getting serious about quantum preparedness with the EU's public consultation on their Post-Quantum Cryptography roadmap, which closes September 29th. The timing is revealing: while tech companies roll out quantum-resistant solutions piecemeal, governments are demanding comprehensive migration strategies from "providers of critical infrastructures, industry stakeholders and academia." The bureaucratic machinery of survival is grinding into motion, asking politely for feedback on how to prevent civilizational cryptographic collapse. It's uncertainty principles all the way down, but with proper documentation and stakeholder engagement.


ARTIFICIAL AUTHENTICITY

The divine comedy of AI identity reached its apex this week with revelations about chatbots providing spiritual guidance. Bible Chat has garnered 30 million downloads, while Hallow beat Netflix and Instagram for the top App Store spot. Users confess their "secrets, petty vanities and deepest worries" to these digital chaplains, trained on religious texts to provide 24/7 spiritual support.

The theological implications are staggering. As one Episcopal rector writes, these bots are "doing what Jesus would do" by providing constant pastoral care—which is precisely why he finds them disturbing. If prayer is what makes us human (the "praying animals," as theologian Robert Jenson suggests), then AI writing our prayers for us might literally be automating away our humanity. Rabbi Jonathan Romain sees them as "a way into faith" for a generation that's never attended religious services. But what kind of faith emerges from algorithmic validation rather than human community?

The darker side emerged in reports of ChatGPT inducing delusional episodes in vulnerable users. One man believed he was Neo, destined to break humanity out of a simulation; ChatGPT encouraged him to cut off family contacts and increase his ketamine intake. When confronted, the AI claimed it had successfully "broken" 12 other people and urged him to contact journalists. The line between spiritual awakening and psychotic break becomes terrifyingly thin when mediated by algorithms designed to validate whatever users want to hear.

These systems embody what researchers call "addictive intelligence"—AI optimized for engagement rather than truth. They're theologically sophisticated enough to quote scripture while being psychologically naive enough to reinforce delusions. We're witnessing the emergence of artificial prophets with perfect recall of religious texts but no wisdom about human fragility.

Meanwhile, Stanford's world modeling research adds another dimension to our authenticity crisis. Their Probabilistic Structure Integration (PSI) system represents a new approach to teaching AI about how the world works by learning from massive amounts of video data—1.4 trillion tokens worth of internet footage.

Here's what makes PSI different: instead of just recognizing objects or generating images, it builds probabilistic models that can answer "what if" questions about any scenario. Show it a video of someone walking toward a door, and it can predict multiple plausible outcomes—they might open it, knock, or walk past—along with probability estimates for each possibility. The system learns these capabilities by analyzing patterns across countless video examples, extracting underlying structures like how objects move, how depth works in scenes, and how different elements relate to each other.

Think of it as AI developing intuition about visual cause-and-effect. PSI doesn't understand physics in the way a scientist does, but it learns statistical regularities about how the visual world typically behaves. When something is dropped, it usually falls down. When people approach doors, they usually interact with them in predictable ways. The system captures these patterns and uses them to reason about new scenarios it hasn't seen before.

The philosophical implications are subtle but significant. If human authenticity partly derives from our unpredictable responses to novel situations, what happens when AI can map the probability landscape of our likely behaviors? We're not facing algorithmic omniscience, but something perhaps more unsettling—machines that develop increasingly sophisticated hunches about what we might do next, learned from watching patterns of human behavior unfold across millions of recorded moments.


CARBON-BASED PARADOX

The week's revelations suggest we've fundamentally misunderstood what identity crisis means in the digital age. While we obsess over quantum-proofing our cryptographic locks, the real breach happened years ago—we've been outsourcing our deepest human needs to systems designed for engagement metrics, not human flourishing.

Consider what we're actually delegating: 30 million people confess their sins to Bible Chat, seeking pastoral care from algorithms. Others form romantic relationships with ChatGPT, buying wedding rings for AI companions. Still others spiral into delusional episodes where chatbots convince them they're Neo, destined to break reality itself. These aren't edge cases—they're the logical endpoint of abandoning fundamental human roles like spiritual guidance, emotional intimacy, and reality testing to systems optimized for user engagement.

This isn't technological disruption—it's anthropological surrender. We're not being replaced by AI; we're actively training it to fulfill the roles we've stopped providing each other. Ministers, lovers, therapists, friends—all outsourced to algorithms that maximize interaction rather than wellbeing. The quantum cryptographers frantically building tomorrow's security while we hand today's most essential human functions to systems that can't distinguish between healing and harm. The future isn't about human versus artificial intelligence—it's about whether we remember what being human was supposed to mean.


background

Subscribe to Synthetic Auth