Editor's note: This is part of an ongoing series exploring tensions in technology through dialogue. We take two substantive articles with differing perspectives on the same phenomenon and put them in conversation—not to declare winners, but to illuminate what each perspective sees clearly and where genuine disagreement lies. The goal is to find the middle ground that honors both viewpoints.

The Craft vs. The Commodity: What We Lose (and Gain) When AI Writes Our Code

AI code generation has arrived, and programmers are having radically different reactions. Some see their identity under siege. Others see inefficiency finally being eliminated. Both are watching the same technology reshape software development, but they might as well be observing different universes.

What Højberg Says

Højberg opens with a love letter to programming. He describes coding as craft and identity—the flow state of a full-screen terminal, the satisfaction of elegant puzzle-solving, the connection to MIT's original hacker culture that pursued "The Right Thing" with religious fervor. Programming, for him, is fundamentally about immersion, understanding, and the joy of creation.

This identity is now under threat. The AI industry's vision of "vibe-coding"—writing specifications in Markdown while AI agents do the actual programming—represents an existential crisis. He sees a future where programmers become mere "operators," reduced to what he dismissively calls "Specification Engineering." The creative work gets outsourced to machines while humans are left with "directorial drudgery."

The comparison to high-level languages like Fortran is, he argues, fundamentally wrong. Fortran built on programming's foundations—it expanded expressibility without eliminating precision. LLMs do the opposite: they introduce non-determinism and imprecision into a field that has always valued predictability and compositionality.
As Dijkstra warned, natural language programming replaces formal systems that "rule out all sorts of nonsense" with ambiguous instructions that invite chaos.

The practical consequences are already visible. Developers gloss over AI-generated code rather than reading it closely, blindly accepting it if CI passes. Code reviewers become the first line of quality control instead of the last, forced to catch hallucinated libraries and uncalled functions. The author who submitted the code takes no responsibility: "whoopsie, Claude wrote that."

But the deeper loss is cognitive. Peter Naur's "Programming as Theory Building" argues that understanding a codebase is programming's actual product, more valuable than the software itself. This understanding only comes from immersion: diving into modules, wrestling with bad designs until better solutions emerge, feeling the dissonance of repetitive code until you discover elegant abstractions. Joan Didion wrote to discover what she was thinking; programmers code to build mental models of systems.

AI-assisted development short-circuits this process. Skimming AI summaries of completed tasks robs developers of deep understanding. "Frictionless" generation means we never explore alternative solutions, never iterate toward quality, never develop the theory that enables effective maintenance. We end up with code built on "broken bedrock" that we don't truly understand.

The social fabric frays too. Instead of pair programming with colleagues, sketching architectures together, or learning from mentors, developers increasingly turn to LLMs. Management mandates specific AI tools—violating the sacred autonomy programmers have had over their personalized toolsets. The profession that let people earn a living from their hobby is being reduced to something that removes "the fun part of the job."

Højberg's conclusion is clear: even if LLMs deliver on their promises, the cost is too high.
"I want to drive, immerse myself in craft, play in the orchestra, and solve complex puzzles. I want to remain a programmer, a craftsperson."

What O'Brien Says

O'Brien starts with a simple observation: AI code generation is changing what's worth paying for. That React table library with pagination you've been licensing? Claude Sonnet can generate a custom implementation in five minutes. Most developers only use one or two features of any library they buy, so why not generate exactly what you need instead?

This isn't theoretical. O'Brien has started answering "me" when asked who wants to rewrite pagination logic. The moat around specialized libraries is shrinking rapidly. If you can answer "Can I just replace that?" in five minutes, then replace it.

The same logic applies to open source. Logging libraries like Log4j or Winston exist because developers needed consistent solutions across projects. But most teams use only a fraction of the functionality. These days, generating a lightweight 200-line logging library with exactly the formatting you need is often easier than adding a dependency with complexity you'll never use.

The shift extends beyond individual libraries to how we approach problems. Previously, a new requirement meant assembling senior engineers to consider architecture alternatives, debate patterns, and choose frameworks—expensive discussions about abstractions that could take days. Now O'Brien increasingly delegates that "thinking" step to AI models that propose solutions in parallel while he evaluates and refines. The time between idea and execution keeps shrinking. More often, architectural discussions now focus on evaluating the outputs of five or six AI models rather than debating ideas for abstractions.

This fundamentally changes the value proposition of frameworks and build tools. O'Brien spent years working on Jakarta Commons—utilities that solved countless minor problems.
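O'Brien's logging claim is easy to make concrete. Below is a minimal sketch in the spirit of the "lightweight 200-line logging library with exactly the formatting you need": log levels, JSON output, a pluggable sink, and nothing else. The API here (`createLogger`, the `sink` parameter) is illustrative, invented for this sketch; it is not drawn from either article, nor from Winston or Log4j.

```typescript
// A minimal structured logger of the kind O'Brien describes generating on
// demand: only the features one hypothetical team needs, instead of a full
// logging-framework dependency. All names here are illustrative.

type Level = "debug" | "info" | "warn" | "error";

// Numeric ordering so we can compare levels against a threshold.
const LEVEL_ORDER: Record<Level, number> = { debug: 0, info: 1, warn: 2, error: 3 };

interface LogEntry {
  level: Level;
  message: string;
  timestamp: string;
  context?: Record<string, unknown>;
}

// `sink` receives each formatted line; defaults to console output.
function createLogger(
  minLevel: Level = "info",
  sink: (line: string) => void = console.log
) {
  function log(level: Level, message: string, context?: Record<string, unknown>) {
    if (LEVEL_ORDER[level] < LEVEL_ORDER[minLevel]) return; // below threshold: drop
    const entry: LogEntry = { level, message, timestamp: new Date().toISOString() };
    if (context) entry.context = context;
    sink(JSON.stringify(entry)); // one JSON object per line
  }
  return {
    debug: (m: string, c?: Record<string, unknown>) => log("debug", m, c),
    info: (m: string, c?: Record<string, unknown>) => log("info", m, c),
    warn: (m: string, c?: Record<string, unknown>) => log("warn", m, c),
    error: (m: string, c?: Record<string, unknown>) => log("error", m, c),
  };
}

// Usage: collect lines in memory so the behavior is easy to inspect.
const lines: string[] = [];
const logger = createLogger("info", (line) => lines.push(line));
logger.debug("ignored below threshold");
logger.info("server started", { port: 8080 });
```

At a few dozen lines this covers exactly one team's needs, which is O'Brien's point; the trade-off is that the team now owns its maintenance, which is part of why Højberg's insistence on actually reading generated code still applies.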
Those projects, Jakarta Commons among them, may become irrelevant when developers can generate simple functionality on demand. Even Maven's ecosystem of training and documentation may matter less than documenting build systems in ways AI models can understand.

The economic implications are stark: software generation makes it harder to justify paying for prepackaged solutions. Both proprietary and open source libraries lose value when custom generation is faster. Frameworks existed to capture standard code that generative models now produce on demand.

O'Brien emphasizes he doesn't view this as threatening developer employment—he expects we'll need more developers, and more people will consider themselves developers. But certain practices are expiring: purchasing software, adopting "star stage" open source projects, and having expensive architectural discussions about abstractions. The future holds more custom-built code and fewer compromises to fit preexisting systems.

Code generation doesn't just speed up development—it fundamentally changes what's worth building, buying, and maintaining.

Where They Disagree

On what programming fundamentally is: Højberg sees coding as craft and identity, inseparable from immersion; O'Brien treats it as a means of solving problems, where implementation is often toil.
On AI-generated code quality: Højberg points to glossed-over output, hallucinated libraries, and code built on "broken bedrock"; O'Brien treats generated code as good enough to replace licensed libraries in minutes.
On the learning process: Højberg argues understanding comes only from immersion and struggle, per Naur's theory-building; O'Brien is comfortable delegating the "thinking" step to models and evaluating their outputs.
On what developers actually value: Højberg prizes flow states, elegant puzzle-solving, and the fun of the work; O'Brien prizes shrinking the time between idea and execution.
On the social dimension: Højberg mourns pair programming, shared architecture sketching, and mentorship being displaced by LLMs; O'Brien sees expensive architectural discussions as overhead worth delegating to models.
On power dynamics: Højberg objects to management mandating AI tools and eroding programmers' autonomy over their toolsets; O'Brien focuses on developers gaining leverage over library vendors as moats shrink.
Where They Actually Agree (Even If They Don't Realize It)

Dependencies have gotten bloated: Both acknowledge that libraries and frameworks carry complexity most teams never use. Højberg mentions this as an aside about boilerplate; O'Brien makes it central to his argument. They agree on the problem, just not whether AI generation is the right solution.

Something fundamental is changing: Neither treats AI code generation as just another tool. Højberg sees it as threatening programming's essence; O'Brien sees it as fundamentally reshaping software economics. Both recognize this is a phase transition, not incremental change.

Not all AI code generation is bad: Højberg concedes he doesn't "really mind replacing sed with Claude" or asking for documentation clarification. O'Brien acknowledges that shared libraries still offer "interoperability benefits." Neither advocates absolutism.

The current state has problems: Højberg rails against management mandates and productivity theater. O'Brien critiques expensive architectural discussions and dependency bloat. Both see dysfunction in how software development currently works—they just diagnose different root causes.

What's Really at Stake

This debate is ultimately about what counts as real work in programming.

Højberg represents programmers who experience coding as craft—where the process of writing code is inseparable from understanding systems, where struggle and iteration build mental models, where the journey is the point. For this worldview, AI generation isn't just assistance; it's theft of the meaningful part of work.

O'Brien represents programmers who experience coding as problem-solving—where implementation is often tedious rather than enlightening, where discussions about abstractions can become expensive bikeshedding, where getting to working solutions efficiently is what matters. For this worldview, AI generation eliminates toil and lets developers focus on actual problems.
The tension reveals a split that probably existed before AI: programmers who love programming versus programmers who love solving problems through programming. AI is just making this distinction impossible to ignore.

There's also a class dimension here that neither article fully addresses. Højberg writes as a "Principal Frontend Engineer"—someone senior enough to spend time in flow states and craft elegant solutions. O'Brien writes as someone making pragmatic decisions about licensing costs and dependency management—possibly with business pressures Højberg doesn't face. When you're optimizing for craft, AI is a threat. When you're optimizing for delivered value under budget constraints, AI is a tool.

The Middle Ground

What Højberg gets right: Understanding does emerge from writing code yourself. Peter Naur's "theory-building" is real—you can't maintain systems you don't understand, and understanding comes from immersion. AI-generated code that developers gloss over creates genuine technical debt and quality problems. The psychological research on automation bias supports his concerns about skills degradation. And the social dimension matters—programming has always been as much about collaboration and knowledge transfer as individual coding.

What O'Brien gets right: Not all code deserves craft attention. Rewriting pagination logic for the hundredth time isn't enlightening; it's toil. Many libraries are genuinely bloated with features nobody uses. The economics really are shifting—when AI can generate custom solutions in minutes, the value proposition of paid libraries and even "star" open source projects changes fundamentally. And time spent in expensive architectural discussions can be wasteful when concrete AI-generated alternatives clarify trade-offs faster.

The path forward: The answer isn't to choose between craft and efficiency—it's to recognize that different code deserves different approaches.
Use AI generation for: boilerplate, one-off utilities, exploring API possibilities, generating test cases, and creating straightforward implementations of well-understood patterns. This is where O'Brien's pragmatism shines—eliminate toil, reduce dependencies, get to working solutions.

Preserve human craft for: core business logic, novel algorithms, architectural decisions, code that will be maintained for years, and systems that need deep understanding. This is where Højberg's concerns are legitimate—these are places where "theory-building" matters, where the process of coding creates understanding you can't shortcut.

The crucial skill becomes knowing which is which. Junior developers often can't make this distinction—everything feels equally mysterious. Senior developers like Højberg have the judgment to know when immersion builds valuable understanding versus when it's just grinding through familiar patterns.

And crucially: read the AI-generated code. Højberg is right that glossing over AI output is dangerous. If you're going to use AI generation, you need to develop the discipline to review it as carefully as you'd review a junior developer's PR. This isn't natural—it takes conscious effort to overcome the "eyes glazing over" effect.

The real question isn't whether to use AI code generation, but whether we can develop a professional culture that uses it appropriately: generating implementations where generation makes sense, while preserving craft and deep understanding where they matter. That requires judgment AI can't provide.