
The Making of Digital Identity - 02 - The Cryptographic Solution


Part 1 of this series left off with a question: can we verify identity without storing the proof in recoverable form?
Part 2 examines whether, and how, that question was answered.

The Cryptographic Solution


At the end of the 1960s, digital identity had a problem: to verify identity, the system must know the secret; but if the system knows the secret, the secret can be stolen.

Allan Scherr had proven this by stealing the CTSS password file in 1962. A software bug had proven it again in 1966 by accidentally displaying everyone’s passwords as a login greeting. The pattern was clear: plaintext passwords were a disaster waiting to happen. But what was the alternative?

For years this looked like an unsolvable trade-off: verification or security, but not both.

Then, in the early 1970s, researchers found two ways out. One: you could verify knowledge of a secret without storing the secret itself. The other, even more remarkable: secrets that could be shared publicly without compromising security.

These weren’t just technical improvements—they were philosophical breakthroughs that changed what “identity” and “authentication” could mean. This is the story of how cryptography saved digital identity, and how two divergent visions of secure computing emerged from the ashes of CTSS’s trust-based model.


The Paradox of Verification

The Question Nobody Knew How to Answer

It’s worth remembering what was actually at stake here. Nobody in 1962 was worried about “identity theft” in the modern sense—the concern wasn’t that someone would steal your sense of self. The concern was practical: someone could consume your CPU allocation, access your files, get you billed for their work. Authentication was about access control to resources, not verification of personhood.

But the technical problem was the same regardless of the stakes: to verify access rights, the system must know the secret; but if the system knows the secret, the secret can be stolen.

This is more than a technical challenge—it’s a logical paradox. Authentication requires comparison: you must check what the user provides against what you’ve stored. But storage means vulnerability. The very information that proves identity can be used to forge identity.

For years, this seemed like an unsolvable trade-off. You could have verification or you could have security, but not both. The secret had to be accessible to be useful, but accessibility meant exposure.

Then, in the early 1970s, researchers discovered something remarkable: you could verify knowledge of a secret without storing the secret itself.

The One-Way Mirror: Hashing and Salt

The breakthrough came from thinking about the problem differently. What if the system didn’t need to know your password—only needed to recognize evidence that you knew it?

The computing world had fractured since CTSS’s heyday. MIT’s successor project, Multics (Multiplexed Information and Computing Service), launched in 1965 as an ambitious reimagining of time-sharing with security built into its core architecture. Meanwhile, at Bell Labs—initially a Multics partner—Ken Thompson and Dennis Ritchie grew frustrated with Multics’s complexity and began work on something simpler in 1969: Unix.

Unix inherited CTSS’s basic authentication model (username and password) but needed its own implementation. Robert Morris, working on Unix at Bell Labs, found the answer in one-way functions—mathematical transformations that are easy to compute forward but practically impossible to reverse. Take a password, run it through this transformation, store only the result. When someone logs in, transform what they typed and compare it to the stored value. If they match, they knew the password. But even with access to the stored value, you can’t work backwards to discover the original.

Here’s a simplified example of how it works (a short code sketch follows the list):

  • User creates password: hello123
  • System runs it through the hash function: a8f5f167f44f4964e6c998dee827110c
  • System stores only the hash and discards the original password
  • When the user logs in with hello123, the system hashes it again and compares it to the stored value
  • If they match → access granted; if not → denied
  • An attacker stealing the database gets a8f5f167f44f4964e6c998dee827110c but can’t reverse it to get hello123
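In modern terms, the flow is a few lines of code. Here is a minimal Python sketch, with SHA-256 standing in for whatever one-way function a real system would use (the register and login functions and the users table are invented for illustration, not taken from any actual login code):

```python
import hashlib

# Stored at account creation: only the hash, never the password itself.
users = {}

def register(username: str, password: str) -> None:
    users[username] = hashlib.sha256(password.encode()).hexdigest()

def login(username: str, password: str) -> bool:
    # Hash what the user typed and compare it to the stored value.
    attempt = hashlib.sha256(password.encode()).hexdigest()
    return users.get(username) == attempt

register("alice", "hello123")
print(login("alice", "hello123"))  # True: hashes match, access granted
print(login("alice", "guess"))     # False: denied
# Stealing `users` yields only hashes; recovering "hello123" would mean
# inverting SHA-256.
```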

Morris adapted the DES encryption algorithm: use the password as a key to encrypt an all-zero block 25 times. The multiple iterations made brute-force attacks expensive—not impossible, but costly enough to matter. Store this transformed value instead of the password itself.
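The DES-based scheme itself is long obsolete, but the effect of iteration is easy to sketch. The snippet below is a rough stand-in that repeats SHA-256 rather than running the actual crypt(3) algorithm; it only illustrates why doing the work 25 times (or, in modern schemes, many thousands of times) multiplies an attacker’s cost per guess:

```python
import hashlib

def slow_hash(password: str, rounds: int = 25) -> str:
    # crypt(3) ran DES 25 times; modern schemes use thousands or millions of rounds.
    digest = password.encode()
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

# A legitimate user pays this cost once per login;
# an attacker pays it again for every single guess.
stored = slow_hash("hello123")
print(stored == slow_hash("hello123"))  # True: deterministic, so comparison still works
```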

But Morris went further. He added salt—random data mixed into each password before hashing. This meant two users with identical passwords would have completely different stored hashes:

  • User A: hello123 + salt xyz789 → hash 7c6a180b36896a0a8c02787eeafb0e4c
  • User B: hello123 + salt abc456 → hash e99a18c428cb38d5f260853678922e03

Suddenly, precomputed attacks became useless. An attacker couldn’t build a dictionary of common passwords and their hashes—each password would hash differently on every system, for every user.
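In code, the idea looks roughly like this, again with SHA-256 standing in for the real algorithm; note that the salt is stored alongside the hash and is not itself a secret:

```python
import hashlib, os

def hash_password(password: str) -> tuple[bytes, str]:
    salt = os.urandom(8)  # fresh random salt for every user
    return salt, hashlib.sha256(salt + password.encode()).hexdigest()

def verify(password: str, salt: bytes, stored: str) -> bool:
    return hashlib.sha256(salt + password.encode()).hexdigest() == stored

salt_a, hash_a = hash_password("hello123")
salt_b, hash_b = hash_password("hello123")
print(hash_a == hash_b)                    # False: same password, different hashes
print(verify("hello123", salt_a, hash_a))  # True: salt is stored, so checks still work
# A precomputed table of common-password hashes is now useless:
# every user, on every system, hashes differently.
```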

This was genuinely elegant: the system could verify you knew the secret without the system knowing the secret. The test proved knowledge without requiring storage of that knowledge in recoverable form.

The philosophy here cuts deep. Ibn Sina, the medieval Persian polymath, wrote about negative properties—essences defined not by what they are, but by what they lack. Cold, he argued, isn’t a substance itself but the absence of heat. Darkness isn’t a thing but the absence of light. These “negative properties” have real effects despite being defined by absence.

A hashed password works the same way. It’s defined by what it’s not: not reversible, not the original, not recoverable. You can’t extract the password from the hash, you can’t reconstruct what was there, you can’t undo the transformation. The hash’s value—its ability to verify identity—comes precisely from what it refuses to reveal. It proves you knew something without preserving what you knew. Security through strategic absence.

Yet even this elegant solution didn’t solve the deeper problem. It made stealing stored passwords less useful, but passwords could still be stolen before hashing—through keyloggers, phishing, shoulder surfing, or simply by tricking users into revealing them. The one-way function protected the database, but it couldn’t protect the moment of authentication itself.

The paradox remained, just displaced: identity still reduced to information, and information could still be copied.

The Evolutionary Split

By the late 1960s, CTSS had proven that time-sharing worked. The question became: what’s next?

MIT, Bell Labs, and General Electric partnered on Multics (1965-2000), envisioning a “computer utility”—as reliable as electricity, as secure as a bank vault. Security wasn’t an afterthought; it was the foundation. Every design decision asked: how could this be exploited, and how do we prevent it?

Bell Labs initially participated but withdrew in 1969, frustrated by Multics’s complexity and slow progress. Ken Thompson, having worked on Multics, wanted something simpler. Unix (1969-present) took Multics’s best ideas—hierarchical file systems, process model, text as universal interface—and stripped away the elaborate security architecture.

The philosophical split was profound:

  • Multics: Security through complexity. Multiple protection rings, mandatory access controls, hardware-enforced privilege separation. “Do it right, even if it’s hard.”
  • Unix: Security through simplicity. Basic file permissions, simple process model, portable C code. “Make it work, make it simple, make it portable.”

For digital identity specifically, both systems inherited CTSS’s core insight: you need authentication (who are you?), authorization (what can you do?), and accounting (what did you do?). But they diverged sharply on how to implement these principles.

Unix bet on simplicity and won the market. Multics bet on security and won the argument. Decades later, we’re still trying to bolt Multics-level security onto Unix-style systems—trying to retrofit defense in depth into architectures designed for elegance and portability.


When Security Became Architecture

The Limitation of Clever Tricks

Password hashing was elegant, but it was also a patch—a clever mathematical trick applied to a fundamentally flawed model. You were still proving identity by revealing information to a system that might not be trustworthy. The hash protected stored passwords, but it couldn’t address deeper questions: What if the authentication system itself is compromised? What if the system lies about who you are after you’ve authenticated? What if someone with system privileges abuses them?

Remember, what was being protected here wasn’t abstract “identity”—it was concrete resources. CPU time that cost real money. Files containing research that could make or break academic careers. Computing privileges that determined whether you could get your work done. The question wasn’t philosophical (“who am I?”) but practical (“how do we ensure only authorized people access authorized resources?”).

These weren’t hypothetical concerns. They were questions that emerged directly from trying to build systems where strangers with varying levels of privilege shared expensive resources and sensitive data.

Morris’s password hashing solved one piece of the puzzle—protecting stored credentials. But Multics, the ambitious system that had inspired Unix’s creation, was simultaneously tackling something more fundamental: could you architect security into a system from the ground up rather than bolting it on afterward?

This matters for identity because authentication (proving who you are) is only half the problem. The other half is authorization (what you’re allowed to do once authenticated) and accountability (tracking what you actually did). Multics understood this in ways CTSS never had to.

The Computer Utility Vision

As one Multics developer noted:

“At that time in the mid-1960s, all then-existing computer systems could be cracked: that is, their file access controls could be defeated, and any user who could run a program on the machine could take the machine over.”

Multics aimed to build a system whose access controls couldn’t be bypassed. This required rethinking everything.

Ring Structure: Eight privilege levels (0-7) implemented in hardware, not just software. Ring 0 was the kernel—the innermost circle where privileged operations occurred. The outer rings were userland, where ordinary programs ran with minimal privileges. A program in an outer ring that tried to reach inner-ring resources didn’t just get denied—it triggered a hardware trap, caught by the machine itself before any software policy could be fooled. Plato’s cave implemented in silicon: users in outer rings could only see shadows of inner rings, and this was the entire point.
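A toy model of the rule, nothing like the real hardware and deliberately ignoring the gate mechanism Multics used for legitimate inward calls, just to show the shape of the check:

```python
class RingViolation(Exception):
    """Stand-in for the hardware trap raised on a cross-ring violation."""

def access(caller_ring: int, target_ring: int) -> None:
    # Lower numbers are more privileged; an outer ring cannot reach inward.
    # (Real Multics allowed controlled inward calls, but only through defined gates.)
    if caller_ring > target_ring:
        raise RingViolation(f"ring {caller_ring} may not touch ring {target_ring}")
    print(f"ring {caller_ring} -> ring {target_ring}: allowed")

access(0, 4)      # an inner ring touching an outer ring: allowed
try:
    access(7, 0)  # userland reaching into the kernel: trapped
except RingViolation as e:
    print(e)
```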

Access Control Lists: Each file (“segment” in Multics terminology) had fine-grained permissions. Not just “read/write/execute” but nuanced controls. Mailboxes had permissions like “add,” “delete,” “read,” “own,” “status,” “wakeup,” “urgent.” The system could express complex trust relationships.
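What such a check amounts to can be sketched in a few lines; the permission names echo the mailbox example above, and the principals and data structure are invented for illustration:

```python
# Per-segment ACL: each principal maps to the set of operations it may perform.
acl = {
    "alice": {"read", "add", "delete", "own", "status", "wakeup", "urgent"},
    "bob":   {"read", "add"},
}

def permitted(principal: str, operation: str) -> bool:
    return operation in acl.get(principal, set())

print(permitted("bob", "add"))     # True: bob may add messages to the mailbox
print(permitted("bob", "delete"))  # False: but he may not delete them
```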

Mandatory Access Control: Beyond discretionary control (owners decide access), Multics implemented classification-based security for military applications. The Access Isolation Mechanism (AIM) enforced rules like no read-up (Secret users can’t read Top Secret data) and no write-down (Top Secret programs can’t write to Secret files). You couldn’t accidentally leak classified information even if you actively tried.
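Those two rules are easy to state precisely (they are essentially the Bell-LaPadula properties); here is a minimal sketch with invented level names:

```python
LEVELS = {"Unclassified": 0, "Secret": 1, "Top Secret": 2}

def can_read(subject: str, obj: str) -> bool:
    # No read-up: a subject may only read objects at or below its own level.
    return LEVELS[subject] >= LEVELS[obj]

def can_write(subject: str, obj: str) -> bool:
    # No write-down: a subject may only write to objects at or above its own level.
    return LEVELS[subject] <= LEVELS[obj]

print(can_read("Secret", "Top Secret"))   # False: no read-up
print(can_write("Top Secret", "Secret"))  # False: no write-down
```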

Hierarchical File System: True nested subdirectories, symbolic links, sophisticated naming—essential for managing complexity in large multi-user systems.

This was defense in depth: multiple layers of protection, each assuming the others might fail.

The Tiger Team Reality Check

And attackers came. The US Air Force funded “tiger teams” to break into Multics. They succeeded, repeatedly. In 1973, Project ZARF (declassified in 1997) systematically exploited bugs in supervisor entry points and memory management, along with timing vulnerabilities.

Security isn’t just about elegant design—it’s about implementation. And implementations, being written by humans, have bugs. The gap between specification and reality is where vulnerabilities live.

Eventually, with second-generation hardware fully implementing ring protection in silicon and extensive code hardening, break-ins became rare. In 1985, Multics achieved B2-level security certification from the NSA—one of the highest civilian ratings possible, and one that many modern systems still can’t reach.

Multics ran in production until October 2000—a 31-year run. It proved you could build secure multi-user systems. But it was complex, expensive, and required specialized hardware.

Unix won the market by making different tradeoffs: simplicity over security, portability over protection, elegance over completeness. Unix inherited Multics’s hierarchical file system, its process model, its text-as-universal-interface philosophy—but stripped away the complexity and the elaborate security.

For digital identity specifically, Multics established principles that survived its own obsolescence: authentication through passwords, authorization through access controls, protection through privilege levels, accountability through logging. These aren’t just features—they’re necessities for any system where strangers share resources.


The Inheritance We Can’t Escape

By the mid-1970s, the foundations were in place:

  • Authentication: Passwords (flawed but ubiquitous)
  • Cryptography: Hashing with salt (essential but incomplete)
  • Access Control: ACLs (practical), capabilities (theoretical)
  • Public Key Crypto: Invented but not yet deployed at scale
  • File Systems: Hierarchical, with sophisticated permission models

What the pioneers got right:

  • Security as a design goal, not an afterthought
  • Defense in depth through multiple layers
  • Principle of least privilege
  • Separation of authentication, authorization, and accounting
  • Audit trails for accountability

What they couldn’t foresee:

  • Networks connecting millions of strangers
  • Identity as a commercial product
  • Password reuse across hundreds of services
  • Social engineering attacks that bypass all the clever crypto
  • Phishing, keystroke logging, credential stuffing
  • The entire economy of identity theft
  • That the password would still be with us, zombie-like, sixty years later

The Paradoxes That Persist

The Shared Secret Problem: Authentication requires both user and system to know the secret. But if the system knows it, the system can leak it. Even with hashing, someone with database access can attempt offline cracking. The secret must be shared to be verified, but sharing creates vulnerability. We’ve made the math harder, but we haven’t solved the fundamental problem.
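Concretely, “offline cracking” is nothing more exotic than the following sketch, which assumes the attacker has already stolen a user’s salt and hash and reuses the illustrative SHA-256 scheme from earlier:

```python
import hashlib

def crack(stolen_salt: bytes, stolen_hash: str, wordlist: list[str]):
    # With the database in hand, the attacker tests guesses as fast as their
    # hardware allows; salt and iteration only raise the price of each guess.
    for guess in wordlist:
        if hashlib.sha256(stolen_salt + guess.encode()).hexdigest() == stolen_hash:
            return guess
    return None

salt = b"xyz789"
stolen = hashlib.sha256(salt + b"hello123").hexdigest()
print(crack(salt, stolen, ["password", "letmein", "hello123"]))  # hello123
```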

The Convenience-Security Tradeoff: Stronger authentication makes systems harder to use, encouraging workarounds. Corbató noted that users got “habituated to instant response”—even brief delays felt “exasperatingly long.” Security is friction. Humans route around friction like water around stone. We can make security more sophisticated, but we can’t make friction appealing.

The Performative Identity Problem: Digital identity is performative, not ontological. You prove you’re you by successfully acting like you. But if someone else can perform your identity (by knowing your password), they are you to the system. Being and seeming collapse into the same thing. Wittgenstein would appreciate this: the meaning of “user” is inseparable from the language game of authentication.

The Trust Regression Problem: Who authenticates the authenticator? The system verifies your password, but how do you verify the system? The Air Force repeatedly compromised Multics by exploiting the gap between what the system claimed to do and what it actually did. Trust requires trust all the way down, and eventually you hit hardware, physics, or human institutions—none of which are perfectly trustworthy.

The Measurement Problem: Schrödinger taught us that observation changes the observed system. Authentication doesn’t discover your pre-existing identity—it creates your identity in that moment. Before you authenticate, you exist in superposition, potentially any user. The password collapses that wave function into a specific state: authenticated or denied, Alice or not-Alice. The quantum mechanics of identity: you don’t have an identity until someone measures it.

What We Built and What It Cost

The engineers of the 1960s and 70s solved real problems: how to share expensive computers fairly, how to prevent chaos in multi-user systems, how to bill accurately, how to enable collaboration without sacrificing privacy. Their solutions were imperfect but profound.

They invented digital identity not because it was philosophically interesting (though it is), but because it was necessary. Time-sharing created abundance from scarcity. Authentication made that abundance orderly. Passwords made authentication practical. Cryptography made passwords safer.

Sixty years later, we’re still using their inventions, still fighting their battles, still discovering new implications of their choices.

Fernando Corbató died in 2019 at age 93. In later interviews, he expressed ambivalence about passwords. They had “become kind of a nightmare,” he told the Wall Street Journal in 2014. Too many services, too many requirements, too many forgotten credentials locking people out of their digital lives.

He’d solved the problem he was asked to solve: verify users on a shared system with limited resources. He couldn’t have known we’d build a world where you need dozens of passwords just to function, where identity theft is an industry, where authentication failures can lock you out of your own life.

The password is dying now, slowly. Biometrics, hardware tokens, behavioral analysis, risk-based authentication—we’re desperately trying to move beyond the shared secret model. But the fundamental problem persists: How do you prove to a machine that you are who you say you are?

And maybe that’s the wrong question. Maybe the problem isn’t technical at all. Identity isn’t just information—it’s trust, context, relationship, history. Things that don’t reduce to bits, that don’t scale algorithmically, that resist verification.

We taught systems to demand proof of identity. We never taught them to understand what identity actually means.

The machine still asks: Who are you?

After six decades of increasingly sophisticated answers, that question remains as hard as ever. Perhaps because we’re still not sure what kind of answer we’re looking for.


Previous: Part 1 - The Birth Of Digital Authentication

Next: Part 3 - The Network Era (1980s-1990s), or "When Identity Became Portable and Everything Got Worse"

