background

The Making of Digital Identity - 04 - The Web Identity Crisis


Part 1 of this series left off with the question of whether we can verify identity without storing the proof in recoverable form.
Part 2 of this series left us with authentication working—passwords hashed, systems hardened, privileges separated. Users could log in from different terminals. Trust was local and centralized: one system, one administrator, one password file.
Part 3 of this series covered how we spent the 1980s and 90s trying to recreate centuries of social technology in mathematics, and discovered that bits aren't wax, trust doesn't scale, and humans will always route around friction like water around stone.
Part 4 is the story of how the web gave everyone the power to create themselves from scratch, a hundred times over, on a hundred different sites—and how that liberation curdled into a crisis when advertisers, hackers, and the sheer weight of forgotten passwords revealed that a self with no center can't hold.

The Web Identity Crisis


By the mid-1990s, the cryptographic foundations of digital identity were in place. SSL/TLS could prove who owned a website and encrypt the connection. Public Key Infrastructure offered an elegant solution for verifying identity through certificates. Kerberos demonstrated that single sign-on could work in controlled environments.

And then the World Wide Web exploded in popularity, and almost none of it mattered for everyday users.

The problem wasn't that these cryptographic tools didn't work. In fact, they worked brilliantly. The problem was that they solved identity as a cryptographic problem, when the web's real challenge was identity as a scale and usability problem.

How do you prove who you are to a website you've never visited before? How do you maintain that identity as you move from page to page? And most critically: how do you do this for millions of non-technical users without requiring them to understand certificates, key pairs, or cryptographic protocols?

Tim Berners-Lee addressed this tension directly in his 1999 book Weaving the Web. The web, he explained, was designed as a universal information space—not as a system for managing identity. The question of "who are you?" was deliberately left out of the architecture. The web was built to link documents, not to authenticate people.

As a side note, I briefly met Sir Tim Berners-Lee when I first started working at MIT and was attending a Semantic Web meetup in building 32. When he first walked in, I didn't put two and two together as it was totally unexpected, but then I realized who he was. Perks of being at MIT.

The web solved this problem, but not the way anyone expected. Instead of building on the sophisticated cryptographic foundations from Article 3, web developers created something far simpler: the username, password, and cookie.

But before this simpler path won by default, something more fundamental was happening. For the first time in human history, the concept of "who you are" was splitting into two fundamentally different digital paths.

On one side, governments and large organizations were beginning to digitize official identity. The same institutions that issued passports, driver's licenses, and birth certificates were asking: how do we make these work electronically? This path was about translating real-world, institutional identity into digital form.

On the other side, a flood of new web services—shopping sites, email providers, forums, news portals—needed to recognize their users. Not in a legal sense. Not tied to a government-issued credential. They just needed a way to say: "you're the same person who was here yesterday, and here's your shopping cart." This path was about creating new digital presences that had no real-world equivalent.

These two paths would define the next decade of digital life online. One was top-down, institutional, and rooted in legal frameworks. The other was bottom-up, informal, and driven by the practical needs of web developers who just needed people to sign up and log in.

The story of how these paths diverged—and the crisis that emerged when the informal path won—is the story of how digital identity became the fragmented mess we're still living with today.


Path One: Digitizing Official Identity

The 1990s saw governments and institutions around the world recognizing that identity needed to go digital.

The catalyst was electronic commerce. As businesses began transacting online, a fundamental legal question arose: how do you sign a contract electronically? A handwritten signature on paper had centuries of legal precedent behind it. But clicking "I agree" on a website? That was legally murky territory.

Digital Signatures Get Legal Standing

The response came through legislation. In the United States, Utah passed the Utah Digital Signature Act in 1995—the first law in the world to give digital signatures legal standing. The European Union followed with the Electronic Signatures Directive in 1999. By 2000, the US federal government enacted ESIGN (Electronic Signatures in Global and National Commerce Act), giving electronic signatures the same legal weight as handwritten ones across the country.

These laws created a framework where digital identity could carry legal authority. A digital signature, backed by a certificate issued by a trusted authority, could prove who you were in a way that courts would recognize.

The technology backing this was Public Key Infrastructure (PKI), and the vision was compelling:

  1. A trusted authority (government agency, licensed certificate authority) verifies your real-world identity
  2. They issue you a digital certificate binding your identity to a cryptographic key pair
  3. You use your private key to digitally sign documents, transactions, and communications
  4. Anyone can verify your signature using your public key and the certificate chain
  5. The signature carries legal weight equivalent to a handwritten signature
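
Steps 3 and 4 can be illustrated with textbook RSA using tiny primes. This is deliberately insecure and omits the certificate chain entirely; the parameters and function names are my own illustration of the sign-then-verify mechanic, not any real implementation:

```python
import hashlib

# Toy RSA parameters (tiny textbook primes -- insecure, for illustration only)
p, q = 61, 53
n = p * q              # public modulus: 3233
e = 17                 # public exponent
d = 2753               # private exponent: e * d = 1 mod (p-1)(q-1)

def digest(message: bytes) -> int:
    # Reduce a real hash into the toy modulus range
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Step 3: the holder of the private key signs the digest
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Step 4: anyone with the public key (n, e) checks the signature
    return pow(signature, e, n) == digest(message)

contract = b"I agree to the terms."
sig = sign(contract)
print(verify(contract, sig))  # -> True
# A tampered message fails verification (with overwhelming
# probability at real key sizes; here the modulus is tiny)
print(verify(b"I agree to nothing.", sig))
```

At real key sizes the private exponent is infeasible to recover from the public key, which is what lets the signature carry legal weight: only the certified keyholder could have produced it.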

Some countries pursued this vision aggressively. Estonia launched its national digital identity program in 2002, issuing every citizen a smart card with cryptographic certificates. Finland, Belgium, and several other European nations developed similar programs. These weren't just authentication tools—they were legally binding digital identities issued by the state.

Toomas Hendrik Ilves, who championed Estonia's digital transformation before becoming the country's president, was a vocal proponent of this approach. Throughout his career, he consistently argued that if governments issue physical identity documents, they have an obligation to issue digital equivalents with equal legal standing—that digital identity is a right of citizenship, not a technical convenience.

The Enterprise Identity Push

Meanwhile, inside organizations, digital identity was becoming a management challenge. As enterprises adopted email, intranets, databases, and web-based tools throughout the 1990s, the number of systems requiring authentication multiplied.

Article 3 covered how Kerberos solved this for MIT's Project Athena. But most organizations weren't MIT. They had a mix of systems from different vendors, acquired at different times, each with its own user directory and authentication mechanism.

The enterprise response was directory services—centralized databases of identity information that multiple systems could share. Novell’s NDS (1993), Sun Microsystems’ NIS/iPlanet, and Microsoft’s Active Directory (1999) became the dominant solutions, each providing a single authoritative source of identity within an organization.

The vision was straightforward: one identity per employee, managed centrally by IT, used across all organizational systems. Whether on a NetWare file server, a Sun Solaris workstation, or a Windows desktop, your identity was issued by your employer, stored in a directory, and authenticated through standardized protocols—most notably LDAP (Lightweight Directory Access Protocol), which Sun and the University of Michigan championed as the universal language of identity.

This was digital identity as an institutional function—identity issued, managed, and controlled by organizations with authority over their members.

The Limitations of Institutional Digital Identity

Both government and enterprise digital identity shared a common characteristic: they were top-down systems requiring coordination, infrastructure, and institutional authority.

Government digital identity required:

  • Legislative frameworks defining legal standing
  • Trusted certificate authorities to issue credentials
  • Physical infrastructure (smart card readers, enrollment centers)
  • User education and adoption campaigns

Enterprise digital identity required:

  • Centralized IT management
  • Directory infrastructure
  • Organizational authority over users
  • Controlled network environments

These weren't problems for institutions with resources and authority. But they were completely impractical for the chaotic, decentralized, explosive growth happening on the open web.

A small online bookstore in 1997 couldn't wait for PKI infrastructure to mature. A web forum couldn't require government-issued digital certificates. An email provider serving millions couldn't verify real-world identities at enrollment.

The web needed to recognize its users now, and it couldn't wait for institutions to figure it out.


Path Two: Accounts From Scratch

While governments were legislating digital signatures and enterprises were deploying directory services, the World Wide Web was solving an entirely different problem. Nobody on the early web was thinking about "identity." That wasn't the word anyone used. Web developers were thinking about accounts—how to let users sign up, log in, and have a personalized experience.

The distinction matters. Governments were asking, "How do we verify who someone really is in the digital world?" Web developers were asking, "How do I remember this visitor so I can show them their shopping cart?"

These sound like related questions. They aren't. And the gap between them would haunt the internet for decades.

To understand how web accounts evolved, you have to understand the web's core protocol: HTTP, the Hypertext Transfer Protocol. HTTP is the language browsers and web servers use to communicate. When you visit a website, your browser sends an HTTP request—"give me this page"—and the server sends back a response—"here's the page." Tim Berners-Lee designed it in 1991 for sharing and linking documents. You request a document, the server delivers it, and the transaction is complete.

The key characteristic of HTTP is that it's stateless. The server retains no memory of previous interactions. Every request arrives as if from a stranger. There's no built-in concept of "you were here before" or "you're the same person who loaded the previous page."

This was a brilliant design choice for a document-sharing system. Servers that don't need to track millions of individual visitors can handle enormous traffic. They can be duplicated, replaced, and restarted without losing track of anyone—because they weren't tracking anyone to begin with.

But statelessness meant the web had no built-in way to recognize returning visitors. And by the mid-1990s, as the web transformed from a library into a marketplace, that absence was becoming a serious problem.

From Documents to Applications

The transformation happened fast. By 1995-1996, sites like Amazon (founded 1994), eBay (1995), and Hotmail (1996) weren't serving documents—they were running applications. And applications needed to recognize their users.

Amazon needed to associate a shopping cart with a specific shopper. eBay needed to know who was placing bids. Hotmail needed to show you your inbox, not someone else's. Web forums needed to connect posts to the people who wrote them. News sites wanted to remember your preferences.

All of these required something HTTP simply didn't provide: the ability to recognize a returning visitor across multiple page loads.

Lou Montulli and the Cookie

The breakthrough came from Lou Montulli, a programmer at Netscape Communications. In 1994, while his colleagues were building SSL to secure web communications, Montulli was tackling a different problem: how to give HTTP a memory.

His solution was deceptively simple. What if the server could hand the browser a small piece of data—a token—and the browser would hand it back with every subsequent request? The server could then read the token and say, "Ah, I recognize this visitor."

Montulli called these tokens cookies, and the mechanism worked like this:

  1. You visit shop.example.com for the first time
  2. The server generates a unique token and sends it along with the page: Set-Cookie: session_id=abc123
  3. Your browser tucks this cookie away, associated with that particular site
  4. On every future request to shop.example.com, your browser automatically includes: Cookie: session_id=abc123
  5. The server reads the cookie and says, "I know this visitor—they're the one with items in cart #abc123"
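
The five steps can be sketched as a minimal simulation; `server_response`, `sessions`, and `browser_cookies` are hypothetical stand-ins for the real server and browser machinery, not an actual HTTP implementation:

```python
import secrets

sessions = {}         # server side: token -> per-visitor state
browser_cookies = {}  # browser side: domain -> stored cookie value

def server_response(domain, cookie):
    # Step 2: no valid cookie yet -- mint a token, ask the browser to keep it
    if cookie is None or cookie not in sessions:
        token = secrets.token_hex(8)
        sessions[token] = {"cart": []}
        browser_cookies[domain] = token   # Step 3: browser tucks it away
        return f"Set-Cookie: session_id={token}"
    # Step 5: cookie recognized -- same visitor, same cart
    return f"Welcome back, cart has {len(sessions[cookie]['cart'])} items"

# Step 1: first visit -- the browser has no cookie for this site yet
first = server_response("shop.example.com", browser_cookies.get("shop.example.com"))

# Step 4: on the next request, the browser automatically replays the cookie
token = browser_cookies["shop.example.com"]
sessions[token]["cart"].append("book")
second = server_response("shop.example.com", token)
print(first)   # -> Set-Cookie: session_id=...
print(second)  # -> Welcome back, cart has 1 items
```

The key point is that the server keeps the state; the browser only carries an opaque token back and forth.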

Netscape shipped cookies in an early beta of Navigator in late 1994 without waiting for anyone's permission. No standards body approved them. No committee debated the implications. Netscape just did it—and it worked so well that every other browser adopted the mechanism within months.

Cookies gave the stateless web a form of memory. But they only solved half the problem. They let a site recognize a returning browser. They didn't verify that the person behind the browser was who they claimed to be.

For that, web developers cobbled together something new. Or rather, something very old, dressed up for the web.

The Account: Something New Under the Sun

What emerged on the early web was a concept that would have seemed strange to the security engineers of previous decades: the user account as a self-service creation.

On Unix systems (Article 1), a system administrator created your account and assigned your credentials. In Kerberos environments (Article 3), your organization issued your principal and managed your authentication. In PKI (also Article 3), a certificate authority verified your real-world identity before issuing a certificate.

On the web, you created your own account. You picked your own username—or "handle," or "screen name," depending on the site. You chose your own password. You filled in whatever profile information the site asked for, and much of it could be fictional.

The requirements were minimal and varied wildly from site to site. Some sites asked for just a username and password—nothing else. Others wanted an email address. Some asked for a name, age, and location, but rarely verified any of it. There was no standard for what an "account" required because there was no standard for accounts at all. Each site made it up as it went along.

When you created an Amazon account in 1998, Amazon needed a way to contact you, a password so they could let you back in, and eventually a shipping address and credit card so they could complete transactions. That was it.

Your Amazon account wasn't you. It was a container—a profile that held your relationship with Amazon's service. Your purchase history, your wish lists, your product reviews, your personalized recommendations—these accumulated over time and became your Amazon presence. But it started as nothing more than a couple of fields in a database.

Jeff Bezos grasped the value of these accumulating profiles early on. In his 1998 letter to Amazon shareholders, he emphasized that Amazon's real competitive advantage wasn't in selling products—it was in understanding customer preferences well enough to help them make better purchase decisions. The more Amazon knew about what a customer browsed, bought, and reviewed, the more valuable that customer's profile became—not as an identity, but as a commercial relationship.

On forums and message boards, the concept was even more fluid. Your handle was your presence. "DragonSlayer99" or "BookwormMom" wasn't just a username—it was a persona. You could have different handles on different forums, showing different facets of yourself to different communities. A software engineer by day might be "MetalHead666" on a music forum and "CarefulDad" on a parenting board. Each handle carried its own reputation, its own post history, its own relationships with other members.

Nobody thought of these accounts and handles as "identity" in any formal sense. They were just... accounts. Profiles. Screen names.


A Personal Observation: The Beginning of the Wave Collapse

Looking back, I think this moment—the late 1990s explosion of web accounts and online personas—was the beginning of something we're still struggling to understand.

When "DragonSlayer99" created profiles on a gaming forum, a music site, and a tech community, each profile captured a genuine facet of a real person. Each was authentic. Each was incomplete. Together, they formed a richer picture of a human being than any single profile could contain. A person is simultaneously a parent, a professional, a hobbyist, a political thinker, a music fan—all at once, all the time. The early web, almost accidentally, allowed people to express these different facets separately.

Danah Zohar, in her 1990 book The Quantum Self, proposed that human consciousness and identity share properties with quantum physics—that the self exists in a kind of superposition, a fluid interplay of many simultaneous states rather than a single fixed point. I'm borrowing her metaphor here, though perhaps applying it differently than she intended, because I think it captures something essential about what was happening online.

Each web profile was an instantiation of one state of this quantum self. Not fake. Not the whole person. Just one facet, observed in one context. "DragonSlayer99" was real. "CarefulDad" was real. Neither was the complete person. Both were genuine expressions of someone who contained multitudes.

Sherry Turkle explored this phenomenon extensively in her 1995 book Life on the Screen, studying how people constructed and inhabited multiple online personas. Her research found that many users didn't experience their different online selves as deceptive or fragmented—they experienced them as liberating. Different contexts brought out different authentic facets of who they were. The early internet, Turkle argued, was creating a space where the multiplicity of self could be expressed and explored in ways that the physical world often constrained.

But the web—and the industries being built on top of it—couldn't tolerate this ambiguity forever. Over the coming years, platform after platform would try to collapse these separate instantiations into a single, unified, "real" identity. Facebook would demand real names. Google would try to link all your accounts. Data brokers would correlate profiles across sites to build comprehensive dossiers.

This wave collapse—the drive to flatten the quantum self into a single observable identity—would become one of the defining tensions of digital life. But in the late 1990s, that collapse was just beginning. And it started not with governments or identity theorists, but with advertisers.


The Pattern That Took Over the Web

Before we get to that collapse, it's worth understanding how web accounts actually worked under the hood, because the technical pattern shaped everything that followed.

Signing Up: You filled out a form—at minimum a username and password, sometimes an email address or other details depending on the site. The server stored what you gave it, assigned you a unique user ID, and that was that. A new account existed. No verification. No authority. Just a row in a database.

And that password you chose? In the late 1990s, most sites simply stored it exactly as you typed it—in plaintext, sitting in a database column right next to your username. The concept of password hashing existed and was well understood in the security community, but the majority of web developers building these early account systems weren't security specialists. They were people building stores, forums, and services who treated the account system as a means to an end. Storing the password as-is was the obvious, easy thing to do.

Logging In: You typed your username and password into a form. The server compared what you submitted to what it had stored. If they matched, the server generated a session token, stored it linked to your account, and set it as a cookie in your browser. From that point on, the cookie was your proof of login.

Staying Logged In: Every subsequent request included your session cookie. The server looked up the session, retrieved your account, and recognized which account was active. You remained "logged in" until the session expired or you explicitly logged out.
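
A minimal sketch of that three-step pattern in Python. The function names are my own, and the use of salted PBKDF2 hashing is an assumption of good practice—as noted above, most late-1990s sites simply stored the password as typed:

```python
import hashlib
import os
import secrets

users = {}     # username -> (salt, password_hash): the "row in a database"
sessions = {}  # session token -> username: state behind the cookie

def sign_up(username, password):
    # Signing up: store a salted hash, not the plaintext the user typed
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    users[username] = (salt, pw_hash)

def log_in(username, password):
    # Logging in: compare the submitted password against what is stored
    if username not in users:
        return None
    salt, stored = users[username]
    if hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) != stored:
        return None
    token = secrets.token_urlsafe(16)  # this token becomes the session cookie
    sessions[token] = username
    return token

def whoami(token):
    # Staying logged in: every later request looks up the cookie's session
    return sessions.get(token)

sign_up("bookwormmom", "hunter2")
cookie = log_in("bookwormmom", "hunter2")
print(whoami(cookie))                  # -> bookwormmom
print(log_in("bookwormmom", "wrong"))  # -> None
```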

This pattern wasn't designed by a committee. No standard specified it. It emerged independently on thousands of websites because it was the obvious thing to do with the tools available: HTML forms to collect credentials, server-side scripts to check them, databases to store them, and cookies to remember the result.

By 2000, virtually every website that needed user accounts used some variation of this pattern.

Why Accounts Won Over Certificates

The institutional path—PKI certificates, government-issued credentials, enterprise directories—was technically superior by almost every measure. It was more secure, more verifiable, and more trustworthy. So why did the humble web account dominate?

Because nobody was thinking about identity. They were thinking about getting things done.

Creating an account required choosing a username and a password. That was it. No certificate enrollment. No visits to government offices. No smart card readers. No software installation. Any person with a web browser could do it in thirty seconds.

Building an account system required a web server and a database. Any developer with basic programming skills could do it in an afternoon. No certificate authorities. No directory servers. No cryptographic expertise.

Users already understood the concept. Usernames and passwords were familiar from desktop computers, dial-up accounts, and ATM PIN codes. The mental model was pre-existing.

And sites had complete control over the experience. They could brand their signup pages, customize their login flows, add "remember me" checkboxes, and make the whole thing feel seamless. Compare this to the browser's built-in certificate selection dialog—ugly, confusing, and impossible to customize.

Philip Hallam-Baker, who worked on early web security at CERN alongside Berners-Lee and later wrote about the consequences in The dotCrime Manifesto (2008), reflected on this period with visible frustration. The technology for proper client authentication through certificates existed. The knowledge was there. But certificates required infrastructure and user education, and the web was growing far too fast for either. The path of least resistance was usernames and passwords, and that's the path the entire web took—not because it was the right choice, but because it was the easy one.

The web account won because it solved the immediate problem—"let users sign up and come back"—with minimum friction. It wasn't designed as an identity system. It was designed as an account system. The fact that it would later be treated as identity was a conflation that hadn't happened yet.


The Two Paths Diverge, and a Third Quietly Emerges

By the early 2000s, two separate worlds existed side by side, barely acknowledging each other:

The institutional world was building verified, authoritative digital identity. Government-issued certificates. Enterprise directories. Legal frameworks for digital signatures. This was slow, careful, expensive, and grounded in the assumption that digital identity should mirror real-world identity.

The web world was churning out accounts by the millions. Self-service signup. Passwords in databases. Cookies in browsers. This was fast, cheap, universal, and grounded in the assumption that sites just needed to recognize returning users.

Most people's daily digital experience was entirely in the second world. Their "online presence" wasn't a government certificate—it was a constellation of accounts, profiles, and handles scattered across dozens of websites.

But while users were busy signing up for accounts and choosing screen names, something else was happening to them. Something most of them didn't know about. A third path was emerging—and it would redefine what "who you are online" really meant.

The Shadow Profile: DoubleClick and the Birth of Behavioral Tracking

Remember Montulli's cookies? They were designed to let a website recognize its own returning visitors. But the cookie mechanism had no restriction on who could use it.

When you loaded a web page, your browser didn't just request content from the site you were visiting. It also loaded images, scripts, and advertisements from other domains whose content was embedded in that page. And each of those external domains could set and read their own cookies in your browser.

In 1996, a company called DoubleClick recognized what this meant—and it changed the web forever.

DoubleClick was an advertising network. It placed banner ads on thousands of websites. And because DoubleClick's ad server delivered those ads, its code was present on every site in its network. Every time you visited any of those sites, DoubleClick's server could set a cookie in your browser—a third-party cookie, set by a domain you hadn't actually visited.

The first time you encountered a site running DoubleClick ads, their server dropped a cookie with a unique ID. From that moment on, every site in DoubleClick's network recognized that same ID.

You visit a cooking blog. DoubleClick notes it. You browse a travel site. Same cookie, same profile. You read a parenting forum. Same cookie. You research cars. Same cookie.
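
That cross-site accumulation can be simulated in a few lines; the site names and the `load_page` helper are hypothetical, standing in for the embedded ad request that carries the tracker's own cookie:

```python
# One third-party cookie, many first-party sites: the tracker's profile
# accumulates what no single site could see on its own.
tracker_profiles = {}  # tracker cookie id -> list of (site, topic) observations

def load_page(site, topic, tracker_cookie):
    # The embedded ad request replays the tracker's cookie regardless of
    # which site the user thinks they are visiting
    tracker_profiles.setdefault(tracker_cookie, []).append((site, topic))

cookie_id = "48291847"  # the article's example browser ID
load_page("cooking-blog.example", "recipes", cookie_id)
load_page("travel-site.example", "flights", cookie_id)
load_page("parenting-forum.example", "toddlers", cookie_id)
load_page("car-reviews.example", "minivans", cookie_id)

# No single site knows this whole history -- the ad network does
print(len(tracker_profiles[cookie_id]))  # -> 4
```

Each first-party site sees one visit; only the party whose cookie appears on every page sees the pattern.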

DoubleClick was building something no individual website could build: a profile of your behavior across the entire web.

No one asked you to sign up. You didn't choose a username. You didn't set a password. You didn't agree to anything. There was no "account" in any sense a user would recognize. And yet DoubleClick was assembling a detailed record of your interests, habits, and intentions—a shadow profile that existed entirely without your participation.

When challenged on this, DoubleClick and its leadership consistently drew a distinction between "personal information" and "usage patterns." In testimony before the FTC and in public statements, the company's position was that they weren't collecting identity—they were collecting behavior. They didn't know who the user was. They just knew what browser number 48291847 was interested in.

The distinction between "personal information" and "usage patterns" would become one of the great definitional battles of the internet age. Because as DoubleClick was about to demonstrate, "anonymous" usage patterns have a way of becoming very personal very quickly.

The User Becomes the Product

This was a new kind of digital profile, fundamentally different from anything in previous articles.

Unix accounts were created for you by administrators so you could use a system. Kerberos principals were issued to you so you could access network resources. Web accounts were created by you so you could use a service.

DoubleClick's tracking profile was created from you—extracted from your behavior, assembled without your involvement, and maintained for someone else's benefit.

And the profile was remarkably intimate. By correlating browsing patterns across thousands of sites, DoubleClick could infer:

  • Your hobbies and interests
  • Your approximate income and education level
  • Your health concerns and medical research
  • Your political leanings
  • What you were thinking about buying
  • Your life stage—single, married, new parent, empty nester

Your Amazon account knew what you purchased. Your DoubleClick shadow profile knew what you thought about, what you worried about, what you aspired to. In many ways, it painted a more intimate portrait than any account you'd consciously created.

Then, in 1999, DoubleClick acquired Abacus Direct—a company that maintained a database of names, addresses, and purchasing habits collected from catalog retailers. This was offline data tied to real names and physical addresses.

DoubleClick announced plans to merge these databases. The anonymous browsing profile they'd been building—cookie ID #48291847, who reads cooking blogs and researches minivans and has been looking at symptoms of anxiety—would be linked to "Jennifer Martinez, 847 Oak Street, Denver, Colorado."

The shadow profile would get a name.

Public outcry was immediate. Privacy advocates sounded alarms. An FTC investigation forced DoubleClick to abandon the merger plan in 2000. But the underlying machinery remained in place, and the business model it represented—building detailed profiles from user behavior and selling access to those profiles to advertisers—was just getting started.

This was the moment the user became the product. Not in a metaphorical sense. In a literal business-model sense. The service was free. The ads paid the bills. And what the advertisers were buying was access to the profile—the shadow version of you that the tracking infrastructure had assembled.

The early web's instinct for separate handles and context-specific personas was already being undermined. Users thought they were different people on different sites. The tracking layer was quietly connecting the dots, collapsing those separate personas into a single, unified, marketable profile.

The Fracture Lines

By the early 2000s, three fundamentally different concepts of "who you are online" existed simultaneously, and almost nobody was talking about how they related to each other:

Institutional credentials: What governments and enterprises were building. Verified, authoritative, legally meaningful. Lived in certificates, directories, and government databases. Most people never encountered this on the consumer web.

Accounts and profiles: What users created and managed. Self-asserted, unverified, site-specific. Lived in the databases of millions of individual websites. This was the "online presence" people consciously maintained—a scattering of accounts, each with its own handle, profile, and history.

Behavioral shadow profiles: What advertisers and data brokers were assembling. Inferred from behavior, built without consent, maintained for commercial purposes. Lived in tracking cookies, ad network databases, and data broker files. This was happening to people, not something they chose.

The first was too complex for everyday use. The second was fragmenting into an unmanageable sprawl. The third was growing silently in the background, assembling the most detailed portrait of individual behavior that had ever existed—while the people it described had almost no visibility into it.

And the three concepts were on a collision course.

The institutional world wanted to bring order and verification to digital identity. The web's account world was drowning in its own fragmentation—users juggling dozens of accounts, reusing passwords out of cognitive necessity, losing track of where they'd signed up and what they'd shared. And the behavioral tracking world was building a version of identity that threatened to make the other two irrelevant by knowing more about people than they knew about themselves.

Lawrence Lessig saw this collision coming. In Code and Other Laws of Cyberspace (1999), he argued that the architecture of the internet was itself a form of regulation—that the code underlying digital systems would shape behavior and identity as powerfully as any law. He warned that the internet was developing multiple, competing systems for establishing who someone is: one built by governments, one built by users, and one built by commerce. These systems had fundamentally different goals and values, and the conflicts between them would only intensify as the internet became more central to daily life.

Something had to give. The web couldn't sustain millions of isolated account systems, each storing credentials with varying levels of competence, while users drowned in passwords and advertisers built parallel profiles in the shadows. The model was fracturing under its own weight.

The question was: what comes next? And who gets to answer that question—governments, technologists, or the companies already sitting on the most comprehensive profiles ever assembled?


What Comes Next

The web account crisis of the early 2000s made one thing clear: the pattern of every site running its own account system, every user drowning in passwords, and every breach cascading through reused credentials could not continue.

But the solution couldn't be "make everyone use government certificates" or "require PKI for web login." Those approaches had already failed to reach everyday users. Whatever came next had to work within the web people actually used—browsers, forms, cookies—while somehow solving the fragmentation problem.

The answer would involve a concept that barely existed in the 1990s: federation. Instead of creating a new account for every site, what if you could prove you were a legitimate user through a provider you already trusted? What if one of your existing accounts—your email, your operating system login—could vouch for you on other sites?

This idea—that an account on one site could serve as proof of who you were on another—would drive the next major evolution in how the web handled users.

In the next article in this series, we'll follow the industry's attempts to solve account fragmentation—from Microsoft's ambitious (and controversial) Passport system, through the open-standards idealism of OpenID, to the OAuth protocol and the rise of "Log in with Google" and "Log in with Facebook."

The cryptographic tools—digital signatures, certificates, public key verification—would finally find their place in the everyday web. Not by asking users to manage key pairs, but by working invisibly behind the scenes, letting providers vouch for users through signed tokens and verified assertions.

And the three concepts of "who you are online"—institutional credentials, user-created accounts, and behavioral shadow profiles—would begin colliding in ways that are still reshaping the web today.

The wave collapse accelerates.


Next: Part 5 - The Enterprise Gets Serious


