The Birth of Digital Authentication
Every time you log into a website, unlock your phone, or prove you're not a robot, you're participating in a system that's become dizzyingly complex. Behind those simple moments—typing a password, getting a text code, scanning your face—sits an entire industry of standards and protocols: NHI (non-human identities), IAM (Identity and Access Management), SSO (Single Sign-On), MFA (Multi-Factor Authentication), RBAC (Role-Based Access Control), OAuth (Open Authorization), OIDC (OpenID Connect), SAML (Security Assertion Markup Language), and dozens more acronyms that all relate to proving you are who you say you are and determining what you're allowed to access. How did things become this complicated?
It didn't start this way. In 1961, when MIT researchers first shared a computer, proving your identity meant typing your name and a problem number. The computer believed you. No passwords, no authentication apps, no backup codes. Just trust.
This is the first in a series tracing how we got from there to here—not as a story of failure, but as an evolution driven by context. Each article will examine the problems people actually faced at the time, the constraints they were working under, and why their solutions made sense given what they knew and what they had available. Because here's the thing: every layer of today's complexity exists because it solved a real problem.
We'll follow how digital identity adapted as computers went from scarce shared resources to ubiquitous networked devices, as users went from dozens to billions, as "identity" expanded from human researchers to services, APIs, devices, and now AI agents. Each new challenge prompted new solutions. Each solution eventually revealed new problems. Each problem required new abstractions.
This series will cover sixty-plus years: from time-sharing systems in the 1960s, through the networking explosion of the 80s and 90s, into the web era and cloud computing, up to our current landscape where identity touches everything and the acronyms keep multiplying.
This first article covers the genesis: the 1950s through 1960s, when the foundational concepts were invented. Authentication, authorization, access control, passwords, encryption—ideas so fundamental we forget they had to be invented at all. They emerged from practical necessity: engineers solving the immediate problem of how to enable human-computer interaction at scale and let strangers share expensive computers without chaos.
We'll see how they did it, what problems they encountered, and how their solutions became the bedrock for everything that followed. Each decision seemed reasonable at the time. Together, they built the foundation we're still building on top of today—one seemingly reasonable decision at a time.
From Monopoly to Multiplicity
When Computing Meant Waiting
Picture 1955. An IBM 704 costs several million dollars—roughly the price of a decent office building. It fills an entire room with vacuum tubes that generate enough heat to warm a small house, and requires a full-time staff of operators just to keep it running. Most universities could afford exactly one.
These machines weren't just expensive to buy—they were expensive to run. Every idle second was money evaporating into the ether. Yet under the reigning paradigm of batch processing, computers spent most of their time doing precisely that: nothing.
Here's how it worked: You'd write your program on punch cards—hundreds of them if you were unlucky. You'd submit the deck to the computer operators. You'd wait. Hours later, sometimes days, you'd receive your output. If there was an error (and there usually was), you'd fix it, resubmit, and wait again. Christopher Strachey captured the absurdity in 1959: debugging with batch processing often meant "a day from submitting changed code to getting results."
A single typo—one misplaced semicolon, one flipped bit—could cost you a week of calendar time, even though the actual computation took seconds.
The wastefulness was subtle but pervasive. While one program waited for a punch card to be read or a tape to rewind, the CPU sat idle—unable to switch to another user's work. Jobs ran strictly one at a time, each fully occupying the machine even while it did nothing but wait on slow mechanical devices. The system optimized for machine throughput and completely ignored human productivity.
The Time-Sharing Gambit
The solution, when someone finally articulated it, seemed almost too obvious: What if instead of one person using the computer at a time, dozens of people used it simultaneously?
Not literally—the computer could still only execute one instruction at a time—but apparently simultaneously. Give each person a slice of time—200 milliseconds, say—and rotate through them fast enough that nobody notices the gaps. The computer would be doing one thing at a time, but so fast that to each user it would feel like exclusive access.
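To see the trick in miniature, here's a sketch of round-robin time slicing in Python. The user names, job lengths, and the bare queue are invented for illustration (CTSS's real scheduler was a more elaborate multilevel feedback queue), but the core move is the same: run each job for one quantum, then put it at the back of the line.

```python
from collections import deque

QUANTUM_MS = 200  # illustrative time slice, matching the 200 ms figure above

def run_round_robin(jobs):
    """Rotate through jobs, giving each a short slice of 'CPU time'.

    `jobs` maps a user to the milliseconds of work they still need.
    Nobody waits more than (number of users - 1) quanta before getting
    the processor back, which is what makes sharing feel invisible.
    """
    queue = deque(jobs.items())
    clock_ms = 0
    while queue:
        user, remaining = queue.popleft()
        slice_ms = min(QUANTUM_MS, remaining)
        clock_ms += slice_ms
        remaining -= slice_ms
        print(f"t={clock_ms:>5} ms  ran {user} for {slice_ms} ms")
        if remaining > 0:
            queue.append((user, remaining))  # back of the line

run_round_robin({"alice": 500, "bob": 300, "carol": 450})
```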
John Backus described the vision in 1954:
"By time sharing, a big computer could be used as several small ones."
By 1959, Strachey had formalized it in his paper "Time Sharing in Large Fast Computers," imagining programmers debugging interactively while the computer ran other programs. At MIT, John McCarthy evangelized the concept obsessively, convinced that interactive computing would fundamentally transform human thought.
In November 1961, Fernando Corbató demonstrated the Compatible Time-Sharing System (CTSS) at MIT. Three users at three Flexowriter terminals, working simultaneously, each experiencing what felt like exclusive access to an IBM 709. When you typed a command, the response came back in seconds. You could write code, run it, see an error, fix it immediately, try again. The feedback loop that had taken days in batch processing now took seconds.
This wasn't just faster computing—it was a fundamentally different mode of human-machine interaction. For the first time, you could have what felt like a conversation with a computer. The machine responded to you in something approaching real-time. You could think with it rather than merely submitting jobs to it.
Behind this magic was the supervisor program—what we'd now call the operating system kernel. The production CTSS ran on a modified IBM 7090 with two 32K-word memory banks instead of one. Bank A held the Supervisor in protected memory; Bank B ran user programs. The Supervisor orchestrated everything: scheduling which user's program ran when, protecting users' memory from each other, managing file access, enforcing the 200-millisecond time slices that made sharing invisible.
The Supervisor could only be called through software interrupts—user programs couldn't directly access it. This separation between privileged system code (the Supervisor) and unprivileged user code was itself revolutionary, a precursor to the ring structures that would come later. The Supervisor had to be trusted absolutely because it had absolute power over the system.
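As a rough mental model (a toy sketch, not CTSS code), picture the boundary like this: user programs hold no reference to the Supervisor's internals and can only raise a request through a single entry point, which stands in here for the software interrupt.

```python
class Supervisor:
    """Toy model of privileged code reachable only through 'traps'.

    The real CTSS Supervisor sat in its own protected memory bank and was
    entered via interrupts; a dispatch table stands in for that hardware
    boundary in this sketch.
    """

    def __init__(self):
        self._cpu_seconds = {"alice": 12.5}  # bookkeeping only the Supervisor sees
        self._handlers = {
            "read_file": self._read_file,
            "cpu_time_used": self._cpu_time_used,
        }

    def trap(self, request, *args):
        """The only doorway user programs get into privileged code."""
        handler = self._handlers.get(request)
        if handler is None:
            raise PermissionError(f"unknown supervisor request: {request}")
        return handler(*args)

    def _read_file(self, user, name):
        # The Supervisor, not the user program, decides whether to allow this.
        print(f"supervisor: {user} asked to read {name}")
        return f"<contents of {name}>"

    def _cpu_time_used(self, user):
        return self._cpu_seconds.get(user, 0.0)

# User code can only ask; it never calls the privileged routines directly.
supervisor = Supervisor()
print(supervisor.trap("cpu_time_used", "alice"))
```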
Corbató noted that users got "habituated to instant response" so completely that even brief delays became "exasperatingly long." Once you'd tasted interactive computing, batch processing felt like communicating by carrier pigeon. The threshold had shifted permanently.
By 1963, CTSS was in routine service. Remote users from California, South America, Edinburgh, and Oxford connected via dial-up modems, experiencing the future McCarthy had promised. The system worked beautifully. Users shared the machine efficiently. Everyone got their work done.
But success created scale, and scale created problems nobody had anticipated. When three users at three terminals in the same MIT building shared a computer, trust was simple—everyone knew everyone. When dozens of users across continents dialed in via modems, many of whom had never met, the trust assumption began to strain.
When billing mattered and CPU time cost real money, the question of "who's using what" stopped being academic. When researchers started having sensitive data and competitive advantages, the question of "who can access whose files" became urgent.
The system had been designed for sharing. Nobody had thought carefully about what sharing with strangers actually required.
The Identity Problem That Nobody Expected
When the System Trusted Everyone
Here's the thing that catches people: CTSS had user accounts from day one. Each user had a problem number (identifying their project or research group) and a programmer number (their individual identifier). They had their own directory, their own files, their own CPU quota. The system tracked who owned what, who could access what, who had consumed how much time.
The system understood identity perfectly. It had a Master File Directory pointing to User File Directories. It had common directories for shared files. It had PERMIT commands for granting access and ATTACH commands for switching between directories. It had file permissions (temporary, permanent, read-only class 1, read-only class 2). It even had symbolic links between directories.
This wasn't primitive—this was sophisticated. The concept of "your files" versus "my files" versus "our files" existed from the beginning.
But here's what it didn't have: a way to verify that the person claiming an identity actually had the right to use it.
When you sat down at a terminal and typed LOGIN, the system would ask: "What's your problem number? What's your programmer number?"
You'd answer. The system would believe you.
No verification. No proof. Just... trust.
The 1963 CTSS Programmer's Guide acknowledged this openly:
"In the future an additional parameter may be required in order to afford greater security for the user. This will probably be in the form of a private code given separately; explicit instructions will be given by the login command if necessary."
Translation: "Yeah, we know anyone can pretend to be anyone else. We'll probably fix that eventually."
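In modern pseudocode, the original LOGIN amounted to something like this. The account table, numbers, and names are invented for illustration; the real records lived in the Master File Directory. The shape is what matters: identity is looked up, never verified.

```python
# Identity without authentication: a sketch of the pre-password LOGIN,
# not the actual CTSS source. All numbers and names are invented.
ACCOUNTS = {
    # (problem number, programmer number) -> account record
    ("1234", "101"): {"directory": "ALICE", "cpu_quota_hours": 4},
    ("1234", "102"): {"directory": "BOB", "cpu_quota_hours": 10},
}

def login(problem_number, programmer_number):
    account = ACCOUNTS.get((problem_number, programmer_number))
    if account is None:
        return "UNKNOWN ACCOUNT"
    # No further questions asked: claiming the identity is enough.
    return f"LOGGED IN, directory {account['directory']}"

print(login("1234", "101"))  # anyone who knows these two numbers is "Alice"
```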
The Trust Assumption and Its Inevitable Collapse
For a while, in the small village that was early MIT computing, pure trust worked fine. Everyone was a colleague. Researchers didn't steal from each other. Social norms governed behavior. The system was self-policing through reputation and proximity.
But several pressures were building:
- The money problem: The system tracked CPU time for billing and fairness. If I could claim to be you, I could consume your allocation and stick you with the bill. As one researcher later noted, using CTSS cost "$300 to $600 per hour"—about $2,500 to $5,000 in today's money. These weren't abstract resources; this was someone's budget.
- The privacy problem: Researchers had unpublished data, preliminary results, competitive advantages. The file system had sophisticated permissions—you could ATTACH to another user's directory if they'd given you permission—but the system couldn't verify you were actually the person they'd intended to grant access to. Richard Stallman later recalled that in the early days "everybody was reading everyone else's emails." Whether that's liberation or chaos depends on whether it's your email being read.
- The remote access problem: As CTSS grew, users connected from thousands of miles away via dial-up modems. You weren't even in the same room anymore. The assumption of mutual recognition collapsed entirely. How do you trust someone you've never met when they're accessing your system from another continent?
- The desperation problem: Even well-meaning researchers would bend rules when deadlines loomed. Scarcity creates pressure. Pressure creates workarounds. Workarounds become exploits.
Corbató described the problem years later:
"The key problem was that we were setting up multiple terminals, which were to be used by multiple persons but with each person having his own private set of files. Putting a password on for each individual user as a lock seemed like a very straightforward solution."
The system already tracked identity—problem numbers, file ownership, resource quotas. What it lacked was authentication: a mechanism to verify that the physical person at the terminal possessed the right to claim a particular logical identity.
Human identity is embodied, contextual, relational. Digital identity, as it turned out, was much simpler: whoever knows the secret is you. The reduction was almost absurd in its brutality—millennia of philosophy about the nature of self, answered with "type the magic word."
Password: When Identity Became a Test
The Authentication Gateway
Probably around May 1964, passwords arrived in CTSS. The implementation was straightforward: When you typed LOGIN and provided your problem number and programmer number, the system now prompted "PASSWORD?"
It would turn off the typewriter's print head—a clever hardware manipulation—so your keystrokes wouldn't appear on paper. You'd type your secret. The system would compare it against the entry recorded for your account in a file called UACCNT.SECRET, stored under a system account. If they matched, you were in. If not, denied.
The passwords were stored in plaintext. They were set by operations staff, not users. The comparison was a simple string match. No encryption, no hashing, no salting—concepts that wouldn't be invented for another decade.
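Sketched the same way, the password-era LOGIN adds exactly one step, a plaintext string comparison. The table below stands in for UACCNT.SECRET and its contents are invented; only the comparison mirrors what CTSS actually did.

```python
# Authentication by shared secret, stored in the clear: a sketch, not
# the actual CTSS source. PASSWORDS stands in for UACCNT.SECRET.
PASSWORDS = {
    ("1234", "101"): "TIGER",
    ("1234", "102"): "ROSEBUD",
}

def login(problem_number, programmer_number, typed_password):
    stored = PASSWORDS.get((problem_number, programmer_number))
    if stored is None:
        return "UNKNOWN ACCOUNT"
    # A straight string match against a secret kept in plaintext:
    # whoever can read this table can impersonate anyone in it.
    if typed_password == stored:
        return "LOGGED IN"
    return "INCORRECT PASSWORD"

print(login("1234", "101", "TIGER"))  # verified, for whatever that's worth
print(login("1234", "101", "GUESS"))  # denied
```

Everything that happens next in this story flows from that comment in the middle: the table itself is the secret, and whoever can read the table can be anyone in it.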
Yet this primitive system established something philosophically profound: the authenticity threshold.
Before passwords: "I am user 123" → the system accepts your declaration.
After passwords: "I am user 123" → the system demands "prove it."
The password created a distinction between declared identity and verified identity. You weren't just announcing who you were—you were demonstrating it by possessing information that, in theory, only the authentic user would know.
Wittgenstein argued that meaning is use—the meaning of a word is how it functions in language games. The password changed the meaning of "user" in CTSS. You weren't a user by virtue of having an account; you were a user by successfully performing authentication. Identity became something you did, not just something you were.
But this created a new paradox: If identity is constituted by knowing the secret, then anyone who knows the secret has that identity. Identity became information. And information, as we were about to learn, can be copied.
The First Credential Theft (Spring 1962)
Allan Scherr was a PhD student with a resource problem. His research involved detailed performance simulations of CTSS itself—measuring, modeling, simulating the system's behavior. The simulations were computationally expensive. His allocation: four hours of CPU time per week. His needs: far more than that.
Initially, he'd had special privileges to embed measurement code directly into the operating system. Then the CTSS staff needed that memory space and revoked his access. Dissertation deadline approaching, simulations incomplete, Scherr got creative.
He discovered that users could request offline file printouts by submitting a punched card with an account number and filename. The system would print the file and place it in a filing cabinet for pickup.
Late one Friday night, Scherr submitted a request to print UACCNT.SECRET—the password file.
Saturday morning, he retrieved the printout.
With plaintext passwords in hand, he could log in as other users, consume their time allocations, access their privileges. To cover his tracks, he distributed some passwords to other students, diffusing suspicion. The first password leak led immediately to the first password sharing scheme.
The system couldn't distinguish between Scherr-being-Scherr and Scherr-being-someone-else. Whoever knew the password was that person, as far as the machine was concerned. The impersonation was perfect.
This is the fundamental problem with authentication by shared secret: The system must remember the secret to verify it, but if the system remembers it, the system can reveal it. The secret must be accessible to be useful, but accessibility means vulnerability.
The First Database Leak (1966)
If Scherr's exploit was surgical—targeted, intentional, contained—the 1966 incident was pure farce.
One Friday at 5 PM, a bug in the system text editor, which kept its scratch copy under a single fixed temporary filename, caused two files being edited at the same moment to swap places: the system's welcome message and the master password file. Every user who logged in that evening was greeted not with "Welcome to CTSS" but with a complete list of everyone's passwords, displayed as casually as today's weather forecast.
Tom Van Vleck, a system programmer, later recalled with the dark humor of those who've worked too many unexpected Fridays: "Naturally this happened at 5 PM on a Friday, and I had to spend several unplanned hours changing people's passwords."
First credential theft: spring 1962, through social engineering (sort of) and a straightforward exploit (definitely). First database leak: 1966, through pure accident.
The pattern was set. Security through obscurity failed. Security through plaintext storage failed. The question became:
Can you verify identity without storing the proof in recoverable form?
Next: Part 2 - The Cryptographic Solution (1970s), or "How We Learned to Stop Worrying and Love the One-Way Function"
A special thanks to Rupert Lane for the fantastic work he has done at Time Reshared. It's a great resource, and I highly encourage you to check it out if you're interested in exploring time-sharing operating systems.
Further Reading: