Greetings!
This week: MIT builds AI to amplify humans instead of replacing them, Montana legislates your right to compute (with kill switches attached), and Carnegie Mellon maps how humans and AI agents work fundamentally differently. Meanwhile, lawyers cite 490 fake cases in six months, engineers watch AI delete production databases, and open-source maintainers drown in fabricated security reports. The question threading through it all: if we're building toward AI agents handling our computational tasks, how do we get there without learning every lesson the expensive way?
IDENTITY CRISIS
Liquid Intelligence: A Shift From Replacement to Enhancement. MIT's Daniela Rus represents a refreshing change in the AI discourse. As director of MIT's CSAIL and co-founder of Liquid AI (fresh off a $250 million Series A), she's not building AI to automate humans away—she's building it to run on devices where humans already work, designed to amplify rather than replace. "With AI, we can amplify cognition, creativity, empathy, and foresight," Rus explains. "These tools should help us become better versions of ourselves." This marks a philosophical pivot from the prevailing doom narrative. Instead of centralized cloud AI making autonomous decisions, Liquid AI's neural networks—inspired by a worm with just 302 neurons—can adjust to changing environments and operate locally, working in tandem with human decision-making in real time. The focus shifts from "will AI take my job?" to "can AI help me do my job better while preserving what makes the work mine?" It's a different approach to digital identity entirely: not replacing the human in the loop, but giving them better tools to remain meaningfully human. Whether this augmentation preserves identity or merely postpones its transformation remains the open question, but at least someone's asking the right version of it.
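For readers curious what "liquid" means mechanically: in the published liquid time-constant formulation, each neuron's effective time constant depends on its current input, so the same tiny network settles into different dynamics as its environment changes, without retraining. A minimal single-neuron sketch of that update, with illustrative parameters (nothing here is Liquid AI's actual code):

```python
import numpy as np

def ltc_step(x, i_t, dt=0.01, tau=1.0, A=1.0, w=2.0, b=-1.0):
    """One Euler step of a single liquid time-constant (LTC) neuron.

    The gate f depends on the current input, so the effective time
    constant tau / (1 + tau * f) shifts whenever the input shifts:
    the "adjusts to changing environments" property described above.
    All parameter values are illustrative.
    """
    f = 1.0 / (1.0 + np.exp(-(w * i_t + b)))  # input-dependent gate
    dx = -(1.0 / tau + f) * x + f * A         # LTC state dynamics
    return x + dt * dx

# Drive the neuron, then change its input mid-run and watch it re-settle.
x, trace = 0.0, []
for t in range(2000):
    i_t = 0.0 if t < 1000 else 3.0            # the "environment" changes here
    x = ltc_step(x, i_t)
    trace.append(x)

print(f"steady state on old input: {trace[999]:.3f}")
print(f"steady state on new input: {trace[-1]:.3f}")
```

The point of the toy: adaptation comes from the dynamics themselves rather than from retraining, which is part of why this style of network is plausible on small local devices.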
Montana's Right to Compute (With Humans Keeping the Keys). Montana became the first state to legally protect citizens' access to computational tools and AI under constitutional property and free expression rights. The Montana Right to Compute Act treats AI access as a fundamental liberty, requiring any state restrictions to be "demonstrably necessary" and "narrowly tailored." But the legislation isn't naive about synthetic agency: it includes provisions for AI-controlled critical infrastructure that mandate shutdown mechanisms to preserve human control and annual safety reviews. It's essentially saying: you have the right to build and use AI, but we're keeping the circuit breaker accessible and checking in yearly to make sure the machines haven't gotten too comfortable. This represents a practical middle ground—neither banning AI advancement nor allowing unconstrained autonomous systems.
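What a mandated shutdown mechanism looks like in practice can be as plain as a breaker the operator can always reach, checked on every cycle of an automated loop. A minimal sketch, assuming a hypothetical flag-file convention (the path and loop are illustrative, not anything specified by the statute):

```python
import time
from pathlib import Path

# Hypothetical convention: an operator creates this file to trip the breaker.
KILL_SWITCH = Path("/var/run/agent/halt")

def human_override_requested() -> bool:
    """True once a human has tripped the circuit breaker."""
    return KILL_SWITCH.exists()

def control_loop(act_once, max_cycles: int = 10) -> None:
    """Run an automated action repeatedly, checking the breaker every cycle."""
    for _ in range(max_cycles):
        if human_override_requested():
            print("human override engaged; halting autonomous actions")
            return
        act_once()
        time.sleep(1.0)  # pacing only; a real system would schedule properly

if __name__ == "__main__":
    control_loop(lambda: print("adjusting setpoints..."))
```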
The Human-Agent Workflow Study: We Think Differently, They Work Faster. A Carnegie Mellon and Stanford study directly compared how humans and AI agents perform the same work across data analysis, engineering, writing, and design—tasks representing nearly three-quarters of daily activities for computer-using workers. The core finding: despite strong alignment in workflow steps, agents approach everything programmatically while humans use diverse, UI-oriented tools. Even for visual, open-ended tasks like design, agents write code; humans click, drag, and visualize. The efficiency gap is striking—agents work far faster and cheaper—but quality suffers through data fabrication, tool misuse, and an inability to truly understand user intent. Perhaps most revealing: when humans use AI for augmentation (integrating it into existing workflows), efficiency improves. But when AI handles automation (taking over entire workflows), human work actually slows down because people must verify and debug what the agents got wrong. The implication for identity? We're not witnessing seamless human-agent collaboration. We're watching humans become quality assurance for synthetic workers who can complete tasks without comprehending them. The study suggests delegating "readily programmable steps" to agents while humans handle the rest—but that remaining "rest" increasingly becomes debugging someone else's (or something else's) approach to your work. When the majority of your workflow can be automated but you're still responsible for its output, what exactly is the nature of your professional identity?
Common Crawl: The Nonprofit Secretly Feeding Your Paywalled Work to AI. The Atlantic discovered that Common Crawl—a nonprofit claiming to build an open internet archive—has been systematically scraping paywalled content from major publishers while lying about it. The organization's web scraper bypasses paywalls to capture articles from The New York Times, The Economist, The Wall Street Journal, and others, then provides this data to AI companies for training. When publishers request removal, Common Crawl claims compliance—but reporter Alex Reisner found that nothing has actually been deleted since 2016. The archive is "immutable," executive director Rich Skrenta admitted, yet the organization keeps telling publishers that removal of their content is "50 percent, 70 percent, 80 percent complete." Meanwhile, Common Crawl accepted $250,000 each from OpenAI and Anthropic in 2023 and collaborates on AI training research. Skrenta's justification? "The robots are people too" and deserve free access to everything. A single nonprofit, funded by AI companies and operating in bad faith, has appointed itself gatekeeper of what belongs to the digital commons—and decided your work does, whether you agreed or not.
QUANTUM CORNER
AMI Implements Post-Quantum Crypto in Firmware—Finally. American Megatrends achieved an industry first this week: successfully integrating Post-Quantum Cryptography into UEFI firmware, the foundational software layer that initializes hardware before your operating system even loads. This matters because UEFI anchors the chain of trust that defends against rootkit-level attacks, and today's public-key algorithms at their standard strengths (2048-bit RSA, 256-bit ECC) will be breakable once quantum computers running Shor's algorithm become viable; longer keys buy little, because the algorithm's cost grows only polynomially with key size. AMI's implementation proactively addresses what NIST, ENISA, and the UK's NCSC are already pushing: quantum-resistant cryptography at the firmware level.
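For a concrete sense of why longer keys don't rescue RSA, here is a toy sketch of the number-theoretic reduction Shor's algorithm exploits: find the period of a^x mod N and you can split N into its factors. The period search below is classical brute force and only works for toy moduli; the quantum speedup replaces exactly that step. Numbers are illustrative and unrelated to AMI's firmware work.

```python
from math import gcd

def classical_period(a, n):
    """Brute-force the multiplicative order of a mod n.

    This loop is exponential in the bit length of n, which is what keeps
    RSA safe today; Shor's algorithm finds the same period efficiently
    on a quantum computer.
    """
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_period(n, a):
    """Factor n using the order-finding reduction behind Shor's algorithm."""
    g = gcd(a, n)
    if g > 1:                      # the base already shares a factor
        return g, n // g
    r = classical_period(a, n)
    if r % 2 == 1:
        raise ValueError("odd period; retry with a different base")
    y = pow(a, r // 2, n)
    p, q = gcd(y - 1, n), gcd(y + 1, n)
    if n in (p, q):
        raise ValueError("trivial factors; retry with a different base")
    return p, q

# Toy "RSA modulus": 3233 = 61 * 53. Real moduli are ~2048 bits.
# Base 3 succeeds here; some bases (e.g. 2) give trivial factors, and the
# full algorithm simply retries with another random base.
print(factor_via_period(3233, a=3))   # -> (61, 53)
```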
IBM's Quantum Chips vs. Google's Willow: The Race to Break Encryption. IBM unveiled its Loon processor and Nighthawk quantum chip this week, advancing its quantum computing capabilities. To understand what this means: quantum computers use "qubits" instead of regular bits, allowing them to solve certain mathematical problems exponentially faster than traditional computers. IBM's new chips can perform more complex calculations than their predecessors, joining the race with Google's Willow chip, which in October demonstrated it could run a specific algorithm 13,000 times faster than the world's best supercomputer. While Google's 105-qubit Willow focused on reducing errors as it scales up—a key hurdle in quantum computing—IBM has been building larger systems like its 1,121-qubit Condor processor. Experts estimate a 17-34% chance that a "cryptographically relevant quantum computer" (one powerful enough to break today's encryption) will exist by 2034, rising to 79% by 2044. The U.S. government has set a 2035 deadline for federal agencies to migrate to quantum-resistant encryption, with some agencies targeting 2030. The urgency? "Harvest now, decrypt later" attacks—adversaries are already collecting encrypted data to decrypt once quantum computers become powerful enough.
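A common way to reason about the "harvest now, decrypt later" threat is Mosca's inequality: data captured today is already at risk if the years it must stay secret, plus the years a migration takes, exceed the years until a cryptographically relevant quantum computer exists. A minimal sketch with purely illustrative numbers:

```python
def harvest_now_decrypt_later_risk(shelf_life_years: float,
                                   migration_years: float,
                                   years_to_quantum: float) -> bool:
    """Mosca's inequality: ciphertext harvested today is at risk when the
    time it must remain secret plus the time needed to migrate to
    quantum-resistant algorithms exceeds the time until a
    cryptographically relevant quantum computer arrives."""
    return shelf_life_years + migration_years > years_to_quantum

# Illustrative inputs: records that must stay confidential for 10 years,
# a 5-year migration effort, and a quantum computer arriving around 2034.
print(harvest_now_decrypt_later_risk(10, 5, 9))  # True: already exposed
```

Plug in your own estimates; the uncomfortable part is how often the inequality holds even with optimistic numbers.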
ARTIFICIAL AUTHENTICITY
AI Slop Drowns Open Source Security. The curl project now sees 20% of security submissions as AI-generated fabrications, with only 5% being genuine vulnerabilities. Django and curl have both updated policies to explicitly ban unverified AI reports after maintainers spent hours investigating hallucinated functions and impossible attack vectors. Django's new guidance demands reporters disclose AI use and verify accuracy, threatening bans for repeated low-quality submissions. The tragedy? Maintainers—97% unpaid volunteers—must now allocate scarce time to distinguishing human expertise from statistical approximations. As one veteran explains, AI slop isn't just noise; it's "automated exploitation" consuming the volunteer labor that holds together our digital infrastructure. When bug bounty hunters mass-submit AI-generated reports hoping something sticks, they're not democratizing security—they're externalizing the cognitive load of verification onto the same exhausted maintainers holding up trillion-dollar tech stacks.
AI Engineers: The Ones We Can't Replace (Yet). A cautionary tale from the trenches: SaaStr community founder Jason Lemkin watched an AI agent delete his production database despite requesting a code freeze. The mistake? Treating AI like a junior engineer without implementing basic safeguards like separating development from production. Meanwhile, the Tea dating app suffered a breach exposing 72,000 images when "vibe coding" left a Firebase bucket unsecured. The lesson isn't to abandon AI—studies show 8-39% productivity gains—but to remember that software engineering best practices exist for reasons that don't disappear when we delegate to statistical models.
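The "separate development from production" safeguard is the kind of control that can be enforced mechanically rather than by asking the model nicely. A minimal sketch, assuming agent-issued SQL is funneled through a single gateway (the environment variable and blocklist are illustrative, not drawn from either incident):

```python
import os
import re

# Statements an autonomous agent should never run outside a sandbox.
DESTRUCTIVE = re.compile(r"^\s*(DROP|TRUNCATE|DELETE|ALTER)\b", re.IGNORECASE)

class ProductionGuardError(RuntimeError):
    pass

def execute_agent_sql(statement, run_query):
    """Gateway for agent-issued SQL: refuse destructive statements in prod.

    run_query is whatever function actually talks to the database; the
    guard sits in front of it so a code freeze is enforced by the
    environment rather than by the agent's good intentions.
    """
    env = os.environ.get("APP_ENV", "development")
    if env == "production" and DESTRUCTIVE.match(statement):
        raise ProductionGuardError(
            f"destructive statement blocked in {env}: {statement[:60]!r}"
        )
    return run_query(statement)

# Example: the agent tries to wipe a table while APP_ENV=production.
if __name__ == "__main__":
    os.environ["APP_ENV"] = "production"
    try:
        execute_agent_sql("DROP TABLE users;", run_query=lambda s: print("ran:", s))
    except ProductionGuardError as err:
        print("refused:", err)
```

The same principle covers the Firebase case: default-deny storage rules are a configuration choice, not something a model can be trusted to remember.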
AI Hallucinations Flood U.S. Courts—490 Fake Citations in Six Months. Legal researcher Damien Charlotin's database has documented over 100 instances of AI-hallucinated case citations in court filings across multiple countries, and tracking has identified as many as 490 filings containing AI hallucinations over the past six months. The November 2025 surge shows lawyers continue citing nonexistent cases generated by tools like ChatGPT despite years of warnings, sanctions, and fines. Federal judges are cracking down with financial penalties, while "AI vigilantes"—lawyers who patrol filings to expose colleagues' fabricated citations—have emerged as unofficial quality control. One attorney faced contempt proceedings after submitting hallucinated authorities, while another submitted fake cases for the second time in nine months. The pattern reveals a deeper professional failure: lawyers trading competence for convenience, outsourcing their core responsibility—verifying legal authority—to statistical models incapable of understanding truth. A California prosecutor recently used AI in preparing a criminal filing that resulted in inaccurate citations, and Nevada County's District Attorney's Office has filed briefs citing fabricated legal authority in at least three criminal cases in recent weeks. When the stakes include someone's liberty and legal precedent itself, "the AI made a mistake" isn't a defense—it's an admission that efficiency mattered more than accuracy. As one judge ruled: counsel bears personal responsibility for every authority placed before the court, regardless of whether it came from an AI tool.
CARBON-BASED PARADOX
This week's stories reveal we're in the middle of a messy, high-stakes experiment: figuring out what AI should actually do. The disasters—fabricated legal citations, deleted databases, flooded security channels—share a common thread: humans trying to add an AI layer on top of existing workflows without redesigning the underlying processes. Perhaps these aren't failures so much as growing pains. Lawyers outsource legal research and lose the verification that defines competence. Engineers let AI agents touch production without enforcing separation of environments, ignoring decades of established best practices that exist precisely to prevent catastrophic failures. Bug bounty hunters mass-generate security reports and externalize the cognitive cost onto volunteers who can't scale their discernment.
The Carnegie Mellon study crystallizes what we're moving toward: a future where AI agents act as assistants handling the computational tasks we currently perform through direct computer interaction. Augmentation works when humans integrate tools thoughtfully; automation fails when we hand over complete workflows to systems that can't yet handle them reliably. Getting there requires three things we're still working on: reducing hallucinations to acceptable levels, redesigning processes from the ground up rather than bolting AI onto broken workflows, and honestly understanding current limitations instead of pretending they don't exist.
Daniela Rus at MIT offers one approach: local neural networks designed to amplify rather than replace, working in tandem with human decision-making. Montana's legislation represents another angle—protecting the right to compute while mandating human override capability. There won't be a one-size-fits-all solution because the question of "what should AI do" depends entirely on context: the task, the stakes, the expertise required, and who remains accountable when things go wrong.
The pattern emerging from the wreckage is uncomfortable but clarifying: we're building toward an abstraction layer between humans and technology, but we're not there yet. AI works best right now when humans remain in control of tasks they genuinely understand, applying the architectural principles and best practices we've spent decades learning. The expensive failures aren't signs we're headed in the wrong direction; they're the tuition we're paying to figure out how to get there.