
The Making of Digital Identity - 05 - The Federation Wars


Part 1 of this series left off with the question of whether we can verify identity without storing the proof in recoverable form.
Part 2 left us with authentication working—passwords hashed, systems hardened, privileges separated. Users could log in from different terminals. Trust was local and centralized: one system, one administrator, one password file.
Part 3 covered how we spent the 1980s and 90s trying to recreate centuries of social technology in mathematics, and discovered that bits aren't wax, trust doesn't scale, and humans will always route around friction like water around stone.
Part 4 tracked how the web gave everyone the power to create themselves from scratch, a hundred times over, on a hundred different sites—and how that liberation curdled into a crisis when advertisers, hackers, and the sheer weight of forgotten passwords revealed that a self with no center can't hold.
Part 5 is the story of who tried to become that center and what happened when corporations, open-source idealists, and billion-user platforms each claimed the right to vouch for who you are.

The Federation Wars


By the early 2000s, the web's identity model was fracturing along three lines that Part 4 traced in detail.

Institutional credentials—the digital certificates, government-issued smart cards, and enterprise directories—were verified, authoritative, and legally meaningful. They were also too complex, too expensive, and too dependent on infrastructure that ordinary users would never touch.

User-created accounts—the usernames, passwords, and profiles people created for themselves on every site they visited—were simple, universal, and multiplying out of control. The average user was accumulating dozens of accounts, reusing passwords out of cognitive necessity, and losing track of where they'd signed up and what they'd shared.

Behavioral shadow profiles—the tracking dossiers assembled by advertising networks like DoubleClick—were growing silently in the background, building intimate portraits of individual behavior without anyone's conscious participation.

Something had to give. The account-per-site model was collapsing under its own weight. Every new breach cascaded through reused passwords. Every forgotten password meant another reset flow, another support ticket, another frustrated user. Enterprises managing thousands of employees across dozens of applications were drowning in provisioning and de-provisioning. The web needed a better answer.

"People use the same password on different systems, they write them down and they just don't meet the challenge for anything you really want to secure." — Bill Gates, RSA Conference, 2004

The answer that emerged was federation.

The concept wasn't new. Kerberos had demonstrated the core idea in the 1980s: authenticate once with a trusted authority, receive a ticket, present that ticket to other services. Single sign-on. One login, many doors. But Kerberos worked within a single realm. One campus, one organization, one administrative domain. The web's problem was authentication across thousands of independent domains with no shared administration at all.

Federation promised to extend the Kerberos model to the open web. Instead of creating a new account for every site, you'd authenticate with a provider you already trusted—your employer, your email service, your government—and that provider would vouch for you elsewhere. The site you were visiting wouldn't need your password. It would accept a signed assertion from a provider it recognized, the same way a hotel accepts a passport issued by a government it recognizes.

The analogy runs deeper than it might seem. Consider how identification works across political boundaries.

Within a single country, identity is relatively straightforward. The government issues credentials—passports, national IDs, driver's licenses—and institutions within that country accept them. One authority, one set of documents, one trust framework. This is the Kerberos model: one realm, one KDC, one source of truth.

But between countries, identity gets complicated. The United States doesn't issue identity documents for French citizens, and France doesn't issue them for Americans. Yet Americans travel to France and French citizens travel to America, and somehow both countries manage to verify each other's travelers. They do this through federation—bilateral agreements, shared standards (the ICAO passport format), mutual recognition of each other's credential-issuing authority.

Each country maintains sovereignty over its own citizens' identities. No supranational body issues a "world passport." Instead, countries agree on formats, trust each other's issuance processes, and accept each other's documents.

This is exactly what web federation aspired to. Each identity provider maintains authority over its own users. No single provider controls everyone's identity. Instead, providers agree on protocols, trust each other's authentication, and accept each other's assertions.

Or think of it as the relationship between the federal government and the states. Each state issues its own driver's licenses, maintains its own DMV, sets its own requirements. But a license issued in Massachusetts works in California, because the states have agreed on mutual recognition. The federal government sets some standards (REAL ID requirements, for instance), but doesn't issue the credentials itself. Each state retains authority over its own residents.

Federation on the web was supposed to work the same way. Your identity provider—your "home state"—would authenticate you. Other sites—"other states"—would recognize that authentication through agreed-upon standards, without needing to issue you their own credentials.

The concept was elegant. The politics were explosive.

Because federation immediately raised a question that the passport analogy makes visceral: who gets to be a country? Who gets to issue the credentials that everyone else accepts? In the physical world, this is settled by sovereignty, treaties, and centuries of precedent. On the web in 2000, there was no sovereignty, no treaties, and no precedent.

The next fifteen years were a war over that question. Three very different factions—a corporate monopolist, an industry consortium, and a grassroots movement of open-source idealists—each proposed a different answer. The technical protocols they built—Passport, SAML, OpenID, OAuth, OpenID Connect—matter, but they're weapons in a larger battle about power, trust, and who owns your digital self.

As Kim Cameron, Microsoft's chief identity architect, wrote in his landmark 2005 paper The Laws of Identity: "The Internet was built without a way to know who and what you are connecting to. This limits what we can do with it and exposes us to growing dangers."


The Corporate Play

Microsoft Passport: One Account to Rule Them All

Microsoft saw the problem clearly—and moved first.

In 1999, Microsoft launched Passport (later called .NET Passport), built on a simple, compelling premise: you already had a Microsoft account. Hotmail, launched in 1996 and acquired by Microsoft in 1997, was one of the most popular email services in the world. MSN Messenger was becoming ubiquitous. What if that one account—your Hotmail login—could get you into any website on the internet?

The pitch to users was frictionless: one username, one password, everywhere. No more creating a new account for every site. No more remembering dozens of credentials. Just click "Sign in with Passport" and you're done.

The pitch to websites was equally attractive: stop building your own account systems. Let Microsoft handle authentication, and you get a verified user without the cost and liability of storing their password yourself. Easier for you, easier for your users, everyone wins.

Microsoft integrated Passport deeply into its ecosystem. Windows XP, released in October 2001, prompted users to create a Passport account during setup. MSN services required it. The Passport SDK was freely available for any website to implement.

It was, in essence, the Kerberos model scaled to the entire internet—with Microsoft as the sole Key Distribution Center.

And that was precisely the problem.

The Problem of the Universal KDC

Return to the federation analogy. Imagine one country approaching every other nation on Earth and saying: "We'll handle all the passports. Every traveler, every country, one issuer. We'll verify everyone's identity. You just accept our documents."

The practical benefits might be real—standardization, simplicity, reduced duplication. But no sovereign nation would accept this arrangement, for reasons that have nothing to do with technical capability. It's a question of authority, autonomy, and what happens when the single issuer's interests conflict with yours.

The web industry reacted to Passport exactly the way sovereign nations would react to that proposal.

Competitors refused. Amazon, eBay, Yahoo—the major web companies of the era—had no interest in ceding their customer relationships to Microsoft. If your users authenticated through Passport, Microsoft sat between you and your customers. Microsoft would know which of its Passport holders visited your site, when, and how often. Your customer data would flow through Microsoft's infrastructure. In an era when customer relationships were the core asset of every web business, this was a non-starter.

Privacy advocates sounded alarms. A single provider authenticating users across the entire web would have an unprecedented view of online behavior. Every Passport authentication event was a data point: this user visited this site at this time. This wasn't a shadow profile assembled through third-party cookies—it was a direct, authenticated record of a user's movements across the web, tied to their real email address. The Electronic Privacy Information Center (EPIC) and other advocacy groups raised immediate concerns.

The antitrust context was toxic. Passport launched during the peak of the United States v. Microsoft antitrust case. The Department of Justice had filed suit in 1998, the district court ruled against Microsoft in 2000, and the settlement came in 2001. In this environment, a Microsoft product that aimed to become the mandatory identity layer of the internet looked less like innovation and more like the next attempt at monopolistic control. The European Commission was pursuing its own case. Regulators were watching.

Security incidents eroded trust. In 2001, a vulnerability allowed attackers to access any Passport account using a simple exploit. Additional security flaws were discovered in 2002 and 2003. For a system whose entire value proposition was "trust us with your identity," repeated security failures were devastating.

By 2004, the grand vision was dead. Major e-commerce sites had refused to adopt Passport. The user base remained essentially limited to Microsoft's own services—Hotmail, MSN, Xbox Live. The product was quietly rebranded, its ambitions scaled back, and it eventually evolved into what's now known as Microsoft Account, serving Microsoft's own ecosystem rather than the entire web.

The Lesson Passport Taught

Passport failed, but it failed instructively. It proved three things simultaneously:

The problem was real. Users genuinely wanted to stop creating new accounts everywhere. The pitch resonated. People wanted federation—they just didn't want this federation.

Single-provider federation was politically unacceptable. No matter how good the technology, a system where one corporation controls the identity layer of the web will be rejected by competitors, regulators, and (eventually) users. Identity is too important—too much like sovereignty—to cede to a single authority.

The experience set the pattern. Despite its failure as a universal system, Passport demonstrated the user experience that would eventually win: click a button, authenticate with a provider you already know, arrive at the destination logged in. Every "Log in with..." button on the modern web descends from this interaction pattern.

Passport asked the right question—"what if you didn't need a new account everywhere?"—and gave the wrong answer: "trust Microsoft." The industry needed a different answer. One where no single company controlled the system. One built on open standards that anyone could implement.

The industry was about to build exactly that. But it would take the explicit threat of Passport's monopolistic model to galvanize the effort.

Liberty Alliance: The Counter-Movement (2001–2009)

In September 2001, Sun Microsystems did something unusual for a technology company: it organized a political coalition.

The Liberty Alliance Project launched with 33 founding members, including Sun, Nokia, RSA Security, and American Express. Within two years, it would grow to over 150 member organizations across technology, finance, telecommunications, and government. Its stated mission was to build open, interoperable standards for federated identity.

Its actual motivation was to stop Microsoft.

This wasn't kept quiet. Sun's executives were explicit in press interviews: Passport represented Microsoft's attempt to own the identity layer of the internet, and the industry needed an open alternative. Liberty Alliance was the diplomatic response to an attempted annexation—a coalition of nations agreeing that no single power should control the passports.

The approach was fundamentally different from Passport's. Where Passport assumed a single identity provider (Microsoft), Liberty Alliance designed for multiple, interoperable providers. Any organization could be an identity provider. Any service could accept assertions from any provider. The system was decentralized by design, with no single authority at the center.

Liberty Alliance developed a series of specifications—the Identity Federation Framework (ID-FF), the Identity Web Services Framework (ID-WSF)—that described how identity providers and service providers could communicate. These specifications addressed the hard questions: How does a service provider discover which identity provider a user belongs to? How are authentication assertions formatted and signed? How are sessions established and terminated? How is user consent managed?

The work was thorough, committee-driven, and slow. Standards bodies don't move at startup speed. But by 2003–2004, Liberty Alliance had produced specifications that major vendors—Sun, IBM, Nokia, Oracle—could implement.

And then something pragmatic happened. Rather than competing with the Security Assertion Markup Language (SAML) standard already under development at OASIS (the Organization for the Advancement of Structured Information Standards, a standards body where many of the same companies were members), Liberty Alliance contributed its identity federation specifications to OASIS. The two efforts merged. Liberty Alliance's ID-FF became a core component of SAML 2.0, ratified in March 2005.

Liberty Alliance itself gradually wound down after this, its mission accomplished. It had served its purpose: mobilizing the industry against single-vendor identity control and channeling that energy into an open standard. By 2009, it merged into the Kantara Initiative, which continues to work on digital identity standards today.

The coalition politics mattered as much as the technology. Liberty Alliance established a precedent: web identity would be built on open standards, not proprietary platforms. No single company would own the protocol. The specifications would be public, implementable by anyone, and governed by industry consensus.

This was the diplomatic framework. SAML 2.0 was the treaty that came out of it.

SAML 2.0: The Enterprise Treaty (2005)

SAML (Security Assertion Markup Language) had existed before Liberty Alliance. SAML 1.0, developed at OASIS, was ratified in 2002. It was functional but limited. It defined a format for authentication assertions but left many practical federation scenarios unaddressed. SAML 1.1 (2003) improved things incrementally.

SAML 2.0, ratified in March 2005, was a much more ambitious specification. It merged three distinct bodies of work: OASIS's original SAML specifications, Liberty Alliance's Identity Federation Framework, and the academic federation profiles developed by the Shibboleth project.


Shibboleth deserves a brief pause, because it was solving the federation problem years before the enterprise world caught up.

Launched in 2000 as an initiative of Internet2, the consortium of research universities that had been building high-speed networking infrastructure since the mid-1990s, Shibboleth grew out of a specific and deeply familiar frustration in academia: a student or researcher at MIT needed to access a licensed journal through JSTOR, a dataset hosted at Stanford, or a preprint repository managed by another institution entirely. Each of those resources had its own authentication system. Each demanded separate credentials. A researcher collaborating across three universities might manage a half-dozen login accounts just to do their work — all at institutions that were, nominally, partners in the same intellectual enterprise.

Shibboleth's solution was federation for higher education. A student's home university — MIT, say — became their identity provider. Publishers and repositories — JSTOR, Nature, institutional data archives — became service providers. When a student navigated to JSTOR off-campus, they'd be redirected to their university's login page, authenticate with the credentials they already had, and arrive back at JSTOR with full access — no JSTOR account required, no separate password to manage. The university vouched for them; JSTOR trusted the voucher.

The name itself is telling. In the Book of Judges, "shibboleth" was a word used to distinguish insiders from outsiders — those who could pronounce it correctly from those who couldn't. The Shibboleth project was building exactly that: a technical mechanism for institutions to recognize each other's members, to say "this person belongs to us" across organizational boundaries.

By the time SAML 2.0 was being finalized, Shibboleth was already deployed at hundreds of universities worldwide and had worked through the real-world complexities that enterprise federation specifications sometimes glossed over: how do you handle users who belong to multiple institutions? How do you let a university assert that someone is a "current student" without revealing their name or student ID to the publisher? How do you manage federation at scale when thousands of institutions and content providers need to trust each other dynamically? Shibboleth's academic federation profiles — the hard-won answers to these questions — fed directly into SAML 2.0, making the final specification far more practically grounded than it would otherwise have been.
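The attribute-release idea Shibboleth worked out can be sketched in a few lines. This is an illustration of the concept, not Shibboleth's actual configuration syntax: all names, URLs, and the policy structure below are hypothetical.

```python
# Sketch of per-SP attribute release: the IdP decides which attributes cross
# the organizational boundary for each service provider. A publisher can learn
# "this is a current student" without ever seeing a name or student ID.
user_record = {
    "name": "Jane Smith",
    "student_id": "902144",
    "affiliation": "student",
    "department": "Physics",
}

# Release policy, keyed by service provider (illustrative URLs):
# the journal publisher only learns the affiliation.
release_policy = {
    "https://jstor.example.org": ["affiliation"],
    "https://data-archive.example.edu": ["affiliation", "department"],
}

def build_assertion(sp: str) -> dict:
    """Return only the attributes this SP is allowed to receive."""
    allowed = release_policy.get(sp, [])
    return {k: v for k, v in user_record.items() if k in allowed}

print(build_assertion("https://jstor.example.org"))  # {'affiliation': 'student'}
```

The design choice is that minimization happens at the IdP, before anything leaves the institution, rather than trusting the publisher to discard data it shouldn't have.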

The result was a comprehensive standard for exchanging authentication and authorization data between organizations.


To understand what SAML actually does—and why it matters for the story of digital identity—it helps to walk through how it works. The concepts here will recur in every subsequent protocol, so it's worth getting them clear.

The Three Roles

SAML defines three roles in any authentication transaction:

The Principal — the user trying to access something. A Cisco employee wanting to log into Salesforce, a student accessing a journal through their university, or an employee at a consulting firm accessing a client's project management tool.

The Identity Provider (IdP) — the system that knows who the user is and can verify their identity. This is the user's "home" organization—their employer's Active Directory, their university's authentication system, their government's credential service. The IdP is the authority that issues the assertion: "Yes, this person is who they claim to be."

The Service Provider (SP) — the system the user is trying to access. Salesforce, the journal publisher, the project management tool. The SP doesn't know the user's password and doesn't want to. It wants a trustworthy answer to one question: "Is this person authenticated, and what do I need to know about them?"

These three roles map directly to the identity patterns from earlier articles. The IdP is the KDC from Kerberos (Part 3)—the trusted authority that verifies identity. The SP is the service that accepts tickets. The principal is the user navigating between them. The difference is scope: Kerberos operated within a single realm. SAML operates across organizational boundaries.

The Flow

Here's what happens when a Cisco employee opens their browser and navigates to Salesforce, assuming both organizations have established a SAML federation:

Step 1: The user arrives unauthenticated. The Cisco employee navigates to Salesforce. Salesforce doesn't recognize them—there's no active session, no cookie, no credentials. But Salesforce has been configured to trust Cisco's identity provider.

Step 2: Redirect to the Identity Provider. Salesforce redirects the user's browser to Cisco's IdP, along with a SAML authentication request. This request essentially says: "Someone is trying to access our service and claims to be one of your users. Can you verify them?"

Step 3: The user authenticates at home. The user's browser arrives at Cisco's IdP—a login page they recognize, run by their own employer. They enter their corporate credentials (or, if they're already logged in to their corporate environment, this step happens automatically—true single sign-on). The IdP verifies their identity against its own directory (Active Directory, LDAP, whatever Cisco uses internally).

Step 4: The IdP issues an assertion. Having verified the user, Cisco's IdP constructs a SAML assertion—an XML document containing specific claims about the user. At minimum: "This person is authenticated, their identity is jane.smith@cisco.com, and we verified them at this timestamp." The assertion can also contain attributes: their role, their department, their group memberships—whatever Salesforce needs to determine what they can access. Critically, the IdP digitally signs this assertion with its private key.

Step 5: The assertion travels to the Service Provider. The IdP redirects the user's browser back to Salesforce, carrying the signed SAML assertion. (In practice, this is typically a POST request containing the base64-encoded assertion.)

Step 6: The Service Provider validates. Salesforce receives the assertion and verifies the digital signature using the IdP's public key (which Salesforce obtained when the federation relationship was first established). If the signature is valid, the assertion hasn't been tampered with, and it hasn't expired, Salesforce trusts it. It creates a local session for the user and grants access based on the attributes in the assertion.

The user never entered a Salesforce password. Salesforce never saw their Cisco credentials. The authentication happened at the user's home organization, and only a signed attestation of the result crossed the organizational boundary.
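The SP-side checks in Step 6 can be sketched as code. This is a deliberately simplified illustration: real SAML uses XML-Signature (RSA or ECDSA over canonicalized XML, typically via a dedicated library), whereas here an HMAC stands in for the IdP's seal so the sketch stays self-contained, and the assertion format and field names are abbreviated stand-ins for the real schema.

```python
# Minimal sketch of SP-side assertion validation: verify the seal, check the
# validity window, check the audience, then build a local session.
import hmac
import hashlib
from datetime import datetime, timezone
from xml.etree import ElementTree as ET

SHARED_DEMO_KEY = b"idp-demo-key"  # stand-in for the IdP's signing key

def sign(assertion_xml: str) -> str:
    # Stand-in for the IdP's XML-Signature step.
    return hmac.new(SHARED_DEMO_KEY, assertion_xml.encode(), hashlib.sha256).hexdigest()

def validate(assertion_xml: str, signature: str, expected_audience: str) -> dict:
    # 1. Verify the seal: the assertion must not have been altered in transit.
    if not hmac.compare_digest(sign(assertion_xml), signature):
        raise ValueError("signature invalid")
    root = ET.fromstring(assertion_xml)
    # 2. Check expiry: assertions carry a short validity window.
    not_after = datetime.fromisoformat(root.get("NotOnOrAfter"))
    if datetime.now(timezone.utc) >= not_after:
        raise ValueError("assertion expired")
    # 3. Check audience: the assertion must be intended for *this* SP.
    if root.findtext("Audience") != expected_audience:
        raise ValueError("wrong audience")
    # Accept: create a local session from the asserted subject and attributes.
    return {
        "subject": root.findtext("Subject"),
        "attributes": {a.get("Name"): a.text for a in root.findall("Attribute")},
    }

assertion = (
    '<Assertion NotOnOrAfter="2099-01-01T00:00:00+00:00">'
    "<Subject>jane.smith@cisco.com</Subject>"
    "<Audience>https://sp.example.com</Audience>"
    '<Attribute Name="department">Engineering</Attribute>'
    "</Assertion>"
)
session = validate(assertion, sign(assertion), "https://sp.example.com")
print(session["subject"])  # jane.smith@cisco.com
```

Note what the SP never touches: the user's password. Everything it needs arrives inside the signed document.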

This is the letters-of-introduction pattern implemented in XML and HTTP redirects. The IdP is the lord who writes the letter. The digital signature is the wax seal. The assertion is the letter itself, stating who the bearer is and what the lord vouches for. The SP is the merchant in a distant city who trusts the lord's seal.

The Trust Establishment Problem

One critical detail: this entire flow assumes that Salesforce and Cisco have already established a trust relationship. Salesforce must have Cisco's IdP public key. Cisco's IdP must know Salesforce's endpoint URLs. Both sides must agree on what attributes will be exchanged, what name formats to use, and how to handle edge cases like logout and session expiration.

This trust establishment happens out of band, meaning administrators from both organizations exchange metadata files, configure their systems, and test the integration. It's the equivalent of two countries negotiating a visa agreement before their citizens can travel freely.
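What gets exchanged in that out-of-band step is a metadata file describing each party's endpoints and keys. Abridged sketch below, following the SAML 2.0 metadata schema; the entity ID, URLs, and certificate are placeholders:

```xml
<!-- IdP metadata the SP imports once, out of band (abridged; placeholder values). -->
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                  entityID="https://idp.cisco.example.com/saml">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <!-- The public key the SP will use to verify assertion signatures -->
    <KeyDescriptor use="signing">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate>MIIC...placeholder...</ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </KeyDescriptor>
    <!-- Where the SP should redirect users to authenticate -->
    <SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://idp.cisco.example.com/sso"/>
  </IDPSSODescriptor>
</EntityDescriptor>
```

The SP publishes a mirror-image file of its own. Swapping these two documents, and agreeing on attribute names, is the "visa negotiation."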

This works well for planned business partnerships—Cisco and Salesforce, a university and a journal publisher, a corporation and its consulting firm. It works poorly when the relationships are ad hoc, numerous, or unplanned. You can federate with ten partners. You can maybe federate with a hundred. You cannot federate with a million websites you've never heard of.

This is why SAML became the backbone of enterprise federation but never touched the consumer web. Enterprise relationships are planned, contractual, and relatively few. Consumer web interactions are spontaneous, countless, and between strangers. SAML was a treaty framework for formal alliances, not a protocol for casual encounters.

What SAML Got Right

For the problem it was designed to solve, SAML was remarkably successful. It gave enterprises federated single sign-on across organizational boundaries. It separated authentication from application logic—applications no longer needed to manage their own credential stores. It provided a standard that multiple vendors could implement, ending the era of proprietary federation solutions.

By the late 2000s, SAML 2.0 was the dominant protocol for enterprise federation. It remains so today in many B2B contexts. When a large organization connects to Salesforce, Workday, Box, or dozens of other SaaS applications, SAML is often the protocol carrying the authentication assertions.

A side note from practice: in my current work, I integrate institutional IdPs from around the world with our SP using SAML.

What SAML Couldn't Do

But SAML had clear limitations that would matter enormously as the web evolved:

XML overhead. SAML assertions are XML documents—verbose, complex, and expensive to parse. In an era of desktop browsers on broadband connections, this was tolerable. As the web moved toward mobile devices and API-driven architectures, XML became a liability.

Browser-centric flow. SAML's redirect-based flow assumes a web browser. It doesn't work well for native mobile apps, single-page JavaScript applications, or API-to-API communication. The web of 2005 was browsers talking to servers. The web of 2010 was increasingly apps talking to APIs.

Pre-established trust only. Every federation relationship requires advance configuration. This is fine for enterprise, disqualifying for consumer use cases where a user might want to log into a site they discovered five seconds ago.

No user-centric model. SAML was designed for organizations, not individuals. The identity provider is your employer, not you. You don't choose your IdP—your IT department does. This works when the goal is "let employees access SaaS tools." It doesn't work when the goal is "let individuals control their own identity across the web."

SAML proved that federated authentication across organizations could work at scale, securely, with open standards. But it addressed only one of the three fracture lines—the institutional credential path. The other two—user-created accounts drowning in passwords, and behavioral shadow profiles growing in the dark—remained untouched.

For those, the web needed something different. Not a treaty between organizations, but a system designed for individuals. Something lightweight, decentralized, user-controlled. Something that felt less like international diplomacy and more like showing up and saying "I'm me."

The idealists were about to try.


The People's Identity

OpenID: The Idealist's Answer

While enterprises were negotiating SAML federation agreements through formal channels, a very different community was asking a very different question.

The enterprise world asked: "How do organizations trust each other's authentication?" The answer was SAML—comprehensive, secure, and designed for planned relationships between institutions.

But a growing community of bloggers, open-source developers, and web enthusiasts was asking something simpler: "Why do I need a new account on every website I visit? I already have a blog. I already have a URL. Why can't that be my identity?"

In 2005, Brad Fitzpatrick—the creator of LiveJournal, one of the earliest and most popular blogging platforms—built a prototype to solve a specific, mundane problem. LiveJournal users wanted to leave comments on other blogs without creating accounts on each one. Fitzpatrick wanted to let them prove they were legitimate LiveJournal users without sharing their LiveJournal password with a third-party site.

The solution he designed became OpenID, and its core concept was radical in its simplicity: your identity is a URL.

Not a username assigned by a corporation. Not a certificate issued by a government. Not a row in someone else's database. A URL—a web address that you controlled, that pointed to something you owned, that served as your identifier across the entire web.

Here's how it worked:

Step 1: You arrive at a blog where you want to leave a comment. Instead of creating yet another account, you type your OpenID URL into a login field—something like brad.livejournal.com or, if you ran your own website, bradfitzpatrick.com.

Step 2: The blog (the "relying party") contacts the OpenID provider associated with that URL—LiveJournal, in Fitzpatrick's case—and asks: "Someone claims to be this identity. Can you verify them?"

Step 3: Your browser is redirected to your OpenID provider's login page. You authenticate there, with your provider, using credentials only your provider knows.

Step 4: Your provider redirects you back to the blog with a signed assertion: "Yes, this person controls this URL. They're legitimate."

Step 5: The blog accepts the assertion and lets you comment. No new account created. No new password stored. The blog knows you as your URL.
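The discovery in Step 2 is worth a sketch, because it's where "your identity is a URL" becomes mechanical: in OpenID 1.x, the relying party fetched the claimed URL and read `<link rel="openid.server">` (and optionally `openid.delegate`) from the HTML to learn which provider vouches for it. The sketch below parses a static page rather than fetching over HTTP, and the LiveJournal endpoint URL is illustrative.

```python
# Sketch of OpenID 1.x discovery: scan the claimed page's <link> tags for
# openid.* rels to find the provider endpoint.
from html.parser import HTMLParser

class OpenIDLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.endpoints = {}

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            a = dict(attrs)
            if a.get("rel", "").startswith("openid."):
                self.endpoints[a["rel"]] = a.get("href")

# What a user's claimed URL might serve (illustrative endpoint URLs):
claimed_page = """
<html><head>
  <link rel="openid.server" href="https://www.livejournal.com/openid/server.bml">
  <link rel="openid.delegate" href="https://brad.livejournal.com/">
</head><body>Brad's blog</body></html>
"""

finder = OpenIDLinkFinder()
finder.feed(claimed_page)
print(finder.endpoints["openid.server"])
# The RP now redirects the browser to this endpoint to request authentication.
```

No registration, no metadata exchange, no administrator: the trust anchor is simply whatever the page at your URL declares. That openness was the protocol's whole philosophy, and, as the next section shows, also its weakness.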

The parallels to SAML are obvious—redirect to an identity provider, authenticate there, return with a signed assertion. But the philosophy couldn't have been more different.

SAML was designed for administrators establishing relationships between organizations. OpenID was designed for individuals choosing their own identity. SAML required out-of-band trust establishment—metadata exchange, certificate configuration, administrative coordination. OpenID required nothing—any relying party could accept any OpenID provider, dynamically, with no prior arrangement.

And crucially: you could be your own identity provider. If you had a website and some technical knowledge, you could run your own OpenID server. Your identity wasn't controlled by your employer, your government, or a technology company. It was controlled by you, hosted at a URL you owned.

This was, in spirit, the return of PGP's philosophy. Decentralized. User-controlled. No corporate gatekeeper. No central authority deciding who gets to have an identity. The web of trust, reimagined for web login.

The Movement Builds

OpenID caught fire in the technology community. The protocol went through rapid iteration—OpenID 1.0 in 2005, OpenID 1.1 in 2006, OpenID 2.0 in 2007—with an active open-source community building libraries, hosting providers, and evangelizing the concept.

Major companies took notice. By 2008–2009, an impressive roster of providers had adopted OpenID:

  • Yahoo announced OpenID support in January 2008, reportedly tripling the number of OpenID-enabled accounts to some 368 million
  • Google became an OpenID provider later that year
  • AOL had already enabled OpenID for its users
  • Microsoft announced support, bringing its vast user base into the ecosystem
  • MySpace joined, adding another massive user base
  • Even the U.S. government began exploring OpenID for citizen-facing services

By some estimates, over one billion user accounts were OpenID-enabled by 2009. The protocol had achieved something remarkable: near-universal support from major identity providers.

And almost nobody used it.

The Usability Catastrophe

OpenID's failure wasn't technical. The protocol worked. The cryptography was sound. The specification was well-designed. The open-source implementations were solid.

The failure was human.

Wired's Scott Gilbertson captured the postmortem perfectly when he called OpenID "The Web's Most Successful Failure" — a system that achieved near-universal provider support and near-zero user adoption simultaneously.

"Log in with a URL" baffled ordinary people. The concept that your identity was a web address—that you'd type brad.livejournal.com into a login field—made perfect sense to developers who thought in terms of URIs and namespaces. It made no sense to someone who just wanted to leave a comment on a recipe blog. Users were accustomed to typing a username and a password. A URL looked like an address bar input, not a login credential.

Users didn't understand identity providers. SAML federation was invisible to end users—their IT department handled everything. OpenID put the choice of identity provider in front of the user and expected them to make an informed decision. "Which OpenID provider do you want to use?" was a question most people couldn't even parse, let alone answer.

The redirect flow was disorienting. Being bounced from a blog to LiveJournal's login page and back again felt strange and potentially suspicious to users unfamiliar with the pattern. "Why is this cooking blog sending me to Yahoo?" It felt like phishing, even when it wasn't.

And it actually enabled phishing. A malicious site could display a fake OpenID login field, redirect you to a convincing replica of your provider's login page, and harvest your credentials. The redirect-based flow—the same one SAML used—was more dangerous in an open, uncontrolled consumer context where anyone could be a relying party. With SAML, the relying parties were vetted organizations with pre-established trust. With OpenID, the relying party could be anyone with a website.

Relying party adoption was lukewarm. While many big companies became OpenID providers (letting their users use their accounts elsewhere), far fewer became OpenID relying parties (accepting OpenID logins on their sites). The incentive was asymmetric. Being a provider cost little and generated goodwill. Being a relying party meant sending your users to someone else's login page—where they might get distracted, where you lost control of the experience, and where you didn't capture the user's email address for your own marketing.

The result was an ecosystem with billions of potential identities and almost nowhere to use them.

The Deeper Problem

OpenID's struggles revealed something important about the difference between identity in principle and identity in practice.

In principle, decentralized identity is appealing. You control your own identifier. No corporation sits between you and the sites you visit. No single point of failure. No surveillance chokepoint. It's the architecture of freedom—and it maps to deeply held values about autonomy and self-determination.

In practice, people don't want to think about identity infrastructure any more than they want to think about plumbing. They want to turn the tap and have water come out. They want to click a button and be logged in. The moment you ask someone to understand identity providers, select among options, and type a URL into a login field, you've asked them to become the system administrator of their own digital life. Most people—reasonably—refuse.

This is the tension that runs through the entire history of digital identity: security and autonomy demand user engagement, but usability demands user invisibility. The systems that win are the ones that hide their complexity, even when the hidden complexity comes with hidden costs.

OpenID asked users to see the machinery and make informed choices about it. The system that killed OpenID's mainstream ambitions would take exactly the opposite approach: hide everything, make it effortless, and collect the data behind the curtain.

Facebook Connect: The Wave Collapse

In May 2008, Facebook announced Facebook Connect, and Mark Zuckerberg demonstrated it at the f8 developer conference that July. The pitch was disarmingly simple: any website could add a "Log in with Facebook" button. Users would click it, see a familiar Facebook dialog asking them to authorize the connection, and arrive at the destination site already identified—with their real name, their profile photo, and potentially their email address, friend list, and interests.


No URL to type. No identity provider to understand. No redirect to an unfamiliar login page. Just a blue button with a logo everyone recognized.

Facebook Connect launched publicly in December 2008. Within months, it was everywhere. Where OpenID had spent three years trying to reach mainstream adoption and failed, Facebook Connect achieved it almost overnight. By 2010, over two million websites had integrated it.

The difference wasn't primarily technical. The underlying pattern—redirect to a provider, authenticate, return with an assertion—was essentially the same. The difference was in what Facebook understood that OpenID's creators didn't: people don't want to choose an identity provider. They want to use an identity they already have.

Facebook didn't ask users to understand federation. It didn't present options. It didn't require users to think about who they were in a philosophical or architectural sense. It offered a button. The button said "Log in with Facebook." Everyone had Facebook. Everyone understood the button.

And the button did something OpenID never could: it brought data with it.

When you logged into a site with OpenID, the site learned your URL. Maybe an email address if you configured your provider to share one. That was about it—the protocol was deliberately minimal about what information crossed the boundary, because privacy was a core value for OpenID's designers.

When you logged into a site with Facebook Connect, the site could request—and often received—your name, your email, your profile photo, your birthday, your location, your list of friends, your interests, and your "likes." For a website, this was transformative. Instead of a blank registration form that users might abandon, they got a pre-populated profile of a real person, complete with a social network they could leverage.

The incentive was overwhelming. A site implementing OpenID got an anonymous identifier. A site implementing Facebook Connect got a rich user profile and the ability to post back to the user's Facebook feed—free viral marketing. The choice wasn't even close.

The Collision

Article 4 traced three fracture lines in digital identity: institutional credentials, user-created accounts, and behavioral shadow profiles. It predicted these lines were on a collision course.

Facebook Connect was the collision.

User-created account: Your Facebook profile was the most invested-in, most carefully maintained account most people had online. Real name (Facebook's terms of service required it), real photo, real "friends", real interests. Years of accumulated social history. It was, for many users, the closest thing to a canonical digital self.

Behavioral shadow profile: Article 4 described how DoubleClick built cross-site behavioral profiles through third-party cookies—tracking which sites users visited without their conscious participation. Facebook Connect did the same thing, but better. Every time you clicked "Log in with Facebook" on a third-party site, Facebook knew you were there. Not through an anonymous cookie—through your authenticated, real-name Facebook session. And Facebook's "Like" buttons, embedded on millions of sites, tracked your browsing even when you didn't click them, even when you didn't log in through Facebook—as long as you had an active Facebook session in your browser.

DoubleClick had built anonymous behavioral profiles and needed to acquire a separate company (Abacus Direct) to attach real names to them—a plan that public outrage killed in 2000. Facebook Connect made that merger automatic. The behavioral profile and the real-name identity were the same system from day one.

Institutional credential: This was the subtler collision. Facebook wasn't a government. It wasn't an employer. It had no institutional mandate to verify identity. Yet "Log in with Facebook" was increasingly being treated as proof of identity—by websites, by services, and by users themselves. A Facebook profile became a de facto credential, not because of any formal authority, but because of sheer ubiquity.

The identity provider and the surveillance infrastructure had merged. The same system that authenticated you to third-party sites was the system that tracked your behavior across those sites, assembled a comprehensive profile of your interests and activities, and sold access to that profile to advertisers. Users traded the inconvenience of creating new accounts for the invisible cost of having their cross-site behavior tracked by a single entity with an advertising business model.

This was not a hidden agenda. It was the business model, operating as designed.

The Wave Collapse

Article 4 introduced a metaphor from Danah Zohar's work on quantum selfhood: the idea that identity exists in superposition—multiple simultaneous states, each authentic, each incomplete. DragonSlayer99 on a gaming forum, CarefulDad on a parenting board, a professional persona on LinkedIn—each a genuine facet of a single human being, expressed differently in different contexts.

The early web, almost accidentally, allowed this multiplicity to flourish. Different sites, different handles, different personas. No central registry connecting them. Each context got the version of you that was appropriate to that context. This wasn't deception—it was the natural way humans have always operated. You are a different version of yourself at work than at a bar with friends than at Thanksgiving dinner with extended family. Context-appropriate self-presentation is a fundamental social skill, not a character flaw.

Facebook Connect collapsed the superposition.

When "Log in with Facebook" became the dominant way to access new services, your Facebook identity—your real name, your single profile, your unified social graph—became your identity everywhere. The gaming forum, the parenting board, the professional network, the dating app, the political discussion group—all connected through one profile, one name, one identity.

Mark Zuckerberg was explicit about his philosophy on this point. In an interview quoted in David Kirkpatrick's The Facebook Effect, he stated:

"You have one identity. The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly. Having two identities for yourself is an example of a lack of integrity."

This wasn't a throwaway comment—it reflected a product philosophy that Facebook enforced through its real-name policy, which required users to register under the name "they go by in everyday life."

This was a philosophical position masquerading as a product decision. And it directly contradicted what researchers like Sherry Turkle had documented in the 1990s—that multiple online personas weren't evidence of dishonesty but of the natural multiplicity of the self.

The consequences played out over years. Drag performers who went by stage names had their accounts suspended. Political dissidents in countries with authoritarian governments found their real names exposed. Domestic violence survivors who had carefully separated their online presence from their legal name were forced to choose between using Facebook under their real name—visible to their abusers—or not using the platform that had become a prerequisite for social participation.

Google attempted its own version of this wave collapse. When Google+ launched in 2011, it enforced a strict real-name policy—even harsher than Facebook's. Users who registered under pseudonyms or handles were suspended. The resulting backlash—known as the "nymwars" (a portmanteau of "pseudonym" and "wars")—was fierce. Activists, security researchers, members of marginalized communities, and privacy advocates argued that pseudonymity wasn't just a preference but a safety requirement. Google eventually relented in 2014, dropping the real-name requirement—but by then, Google+ was already failing for unrelated reasons.

The nymwars were small in scale but significant in what they revealed: the wave collapse wasn't just an abstract concern about identity philosophy. For vulnerable people, the forced flattening of multiple contextual identities into a single "real" identity was a safety issue with physical-world consequences.

Twitter offered a quiet counter-example. "Sign in with Twitter," launched in 2009, provided the same federation convenience—click a button, authenticate through an existing account, arrive logged in. But Twitter's identity model was pseudonymous by default. Your Twitter handle was whatever you chose. No real-name policy. No requirement that your online persona match your legal identity. Twitter demonstrated that social login didn't require the wave collapse—that federation and pseudonymity could coexist. The industry largely ignored this lesson.

Google's Parallel Path

Google's entry into social login deserves its own note, because Google's approach would eventually prove more durable than Facebook's—though for different reasons than anyone expected.

Google launched its identity platform incrementally. It served as an OpenID provider starting in 2008, implemented OAuth 2.0 for authorization, and—critically—would go on to develop OpenID Connect, the protocol that finally unified authentication and authorization into a clean standard.

"Log in with Google" offered many of the same benefits as Facebook Connect: a recognized brand, an account nearly everyone had (thanks to Gmail and Android), a pre-populated profile. But Google's approach was less aggressive about social graph data. When you logged into a site with Google, the site typically received your name, email, and profile photo—not your entire social network, not your browsing history, not your interest graph.

This wasn't because Google was less interested in data—Google's entire business was data. But Google's data advantage came from search, not from social login. Google already knew what you were looking for, what you clicked on, where you went on your phone. It didn't need social login to build your profile. Facebook, by contrast, was specifically using Connect to extend its data collection beyond facebook.com and into the broader web.

The distinction would matter later, when the surveillance reckoning arrived and "Log in with Facebook" became politically toxic while "Log in with Google" survived relatively unscathed. But in 2008-2010, both were part of the same trend: the concentration of web identity into a handful of mega-providers.

What Federation Became

By 2010, the federation landscape had resolved into something nobody had planned:

Enterprises used SAML. Formal, secure, administrator-managed. Your employer's IT department handled everything. You logged in once and accessed your SaaS tools without thinking about the plumbing.

Consumers used social login. Informal, convenient, user-facing. You clicked a Facebook or Google button. The convenience was real. The costs were hidden.

Idealists had lost. OpenID's decentralized vision—user-controlled, provider-agnostic, privacy-preserving—had been steamrolled by the sheer usability advantage of social login backed by billion-user platforms. The technology worked. The philosophy was right. The user experience was wrong.

The question from the opening of this article—"who gets to be a country?"—had been answered, at least for the consumer web. Not by governments. Not by open protocols. Not by user choice. By the platforms where people already spent their time.

Microsoft had tried to claim that role by corporate fiat and been rejected. The open-standards community had tried to make it available to everyone and been ignored. Facebook claimed it by showing up with a blue button and two hundred million users and making the whole thing effortless.

The federation wars weren't over—the enterprise track was still evolving, and the protocols that would unify consumer and enterprise identity were still being written. But the political outcome was clear: identity on the consumer web had concentrated into an oligopoly.

The institutional world, meanwhile, had its own battles to fight—and its own answers to build.


The Enterprise Fights Back

While Facebook and Google were claiming the consumer web's identity layer, enterprises were grappling with a different set of pressures—and building a parallel stack that would eventually reshape how identity worked for everyone.

The Compliance Hammer

Before 2002, enterprise identity management was an IT convenience. Single sign-on saved employees time. Centralized directories simplified administration. These were good things, worth investing in, but they were efficiency plays. Nobody went to jail over a misconfigured LDAP server.

Then the laws arrived.

Sarbanes-Oxley (SOX, 2002) was a direct response to the Enron and WorldCom accounting scandals. It required public companies to maintain internal controls over financial reporting—including controls over who could access financial systems and what they could do there. If an unauthorized person modified financial records and the company couldn't demonstrate how that happened, executives faced personal criminal liability.

HIPAA (1996, with the Security Rule enforced from 2005) required healthcare organizations to protect patient data. Access to medical records had to be logged, controlled, and auditable. Who accessed what, when, and why—all of it had to be traceable.

PCI-DSS (2004) mandated security standards for any organization handling credit card data. Unique user IDs for every person with computer access. Authentication for access to cardholder data. Logging of all access events.

FERPA (the Family Educational Rights and Privacy Act, 1974) had been on the books for decades before the others, but its identity management implications sharpened considerably as universities moved their records online. Originally designed to give students control over their paper files, FERPA required educational institutions to restrict access to academic records — grades, enrollment status, disciplinary history — to authorized parties only. As student information systems migrated to web-based portals in the late 1990s and early 2000s, "authorized parties only" stopped being an administrative policy enforced by a filing cabinet and became a technical requirement enforced by access controls. Who could see a student's transcript? Who could pull enrollment data? The answers had to be demonstrable, not just assumed. For universities, this reinforced the same pressure HIPAA and SOX were creating in healthcare and finance: identity infrastructure wasn't a convenience — it was a legal obligation with auditable teeth.

The common thread: audit trails. Every one of these regulations required organizations to know, provably and retroactively, who accessed sensitive systems. Not "we think it was probably someone in accounting." Not "the shared admin account was used." Specific individuals, specific actions, specific timestamps.

This transformed identity management from an IT convenience into a legal obligation. Companies that had been content with shared admin passwords and informal access suddenly needed to demonstrate, to auditors and regulators, that they controlled who could do what. Identity infrastructure wasn't optional anymore. It was a compliance requirement, and non-compliance had teeth—fines, sanctions, personal liability for executives.

The money followed. Enterprise identity management, which had been a modest IT budget line, became a major spending category. Companies that had deferred investments in directory services, access controls, and authentication infrastructure now had regulatory deadlines forcing their hand.

This funding wave created the conditions for a new category of company—one that would reshape enterprise identity.

ADFS: Microsoft's Second Attempt (2005–2012)

Microsoft had learned from Passport's failure. The consumer web had rejected a single-provider model. But the enterprise world was different. Most corporations already ran Active Directory—by the mid-2000s, AD was the dominant directory service for Windows environments, which meant it was the dominant directory service, period. Enterprises didn't need to be convinced to trust Microsoft with identity. They already did.

Active Directory Federation Services (ADFS) first shipped as a component of Windows Server 2003 R2 in 2005, with significant updates alongside Windows Server 2008 and 2012. Where Passport had tried to make Microsoft the identity provider for the entire internet, ADFS did something more modest and more sustainable: it let organizations extend their existing Active Directory identities to external applications and partner organizations.

The model was pure SAML federation (and its Microsoft-specific cousin, WS-Federation). Your organization's AD became the identity provider. SaaS applications—Salesforce, Workday, Box, ServiceNow—became service providers. Employees logged into their corporate desktop once and accessed cloud applications without additional passwords. Partners could be granted federated access without being added to your internal directory.

ADFS solved a real problem: the cloud migration was accelerating, and organizations needed their on-premises identity infrastructure to work with off-premises applications. ADFS bridged that gap.

But ADFS reflected Microsoft's enterprise DNA—powerful, feature-rich, and complex. Configuration required deep expertise. Troubleshooting federation issues meant reading XML traces and debugging cryptographic assertions. Misconfiguration was common and could mean either locked-out users (frustrating) or security holes (dangerous). Organizations needed dedicated identity engineers just to keep ADFS running.

This complexity was, paradoxically, part of what created the opening for the next wave of identity companies. ADFS proved that enterprise federation worked. It also proved that most organizations needed help doing it.

The Rise of IDaaS: Okta and the Cloud Identity Layer

By the late 2000s, a contradiction had emerged in enterprise IT. Companies were moving their applications to the cloud—Salesforce, Google Apps, Workday, Box, Dropbox—to reduce the burden of managing on-premises infrastructure. But they were still running their identity infrastructure on-premises, in Active Directory and ADFS, managed by internal IT teams.

Identity was becoming the last piece of on-premises infrastructure that everything else depended on.

Okta, founded in 2009 by Todd McKinnon and Frederic Kerrest (both former Salesforce executives), saw the opportunity. What if identity itself moved to the cloud? What if, instead of running your own ADFS server, you used a cloud service that handled federation, single sign-on, multi-factor authentication, and user lifecycle management?

The core of Okta's value proposition was its application catalog — a library of pre-built integrations with thousands of SaaS applications. To understand what "pre-built" actually meant, it helps to appreciate what building the integration yourself involved. Every time an enterprise wanted to federate with a new SaaS vendor, someone had to sit down and do the plumbing: exchange SAML metadata files with the vendor, configure the correct endpoint URLs on both sides, map the right user attributes across the two systems, set up certificate rotation, test edge cases like single logout, and then debug whatever broke. Do that once with Salesforce, and you've spent days. Do it with thirty applications across your organization, and you've employed a small team. Do it with a hundred, and the project never ends.
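One of those plumbing steps, attribute mapping, is easy to make concrete. A hypothetical Python sketch: the application names are real, but the attribute names and mappings below are invented for illustration, not any vendor's actual contract.

```python
# Each service provider expects its own attribute names for the same
# directory fields. These mappings are illustrative only.
ATTRIBUTE_MAPS = {
    "salesforce": {"mail": "User.Email", "givenName": "User.FirstName"},
    "box":        {"mail": "email",      "givenName": "first_name"},
}

def map_attributes(app, directory_record):
    """Translate one directory record into the attribute names a given
    service provider expects to see in its SAML assertion."""
    mapping = ATTRIBUTE_MAPS[app]
    return {sp_name: directory_record[dir_name]
            for dir_name, sp_name in mapping.items()}
```

Multiply this by certificate rotation, endpoint configuration, and logout edge cases, per application, and the scale of the integration burden becomes clear: the same directory record has to be re-translated for every vendor.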

Okta had already done that work for you. Its engineers had built and tested the integration with Salesforce, and Workday, and Box, and ServiceNow, and thousands of others — figuring out each vendor's quirks, their preferred attribute names, their certificate requirements. When your organization connected to Okta, you inherited all of that. Adding a new application wasn't a project; it was selecting it from a catalog and mapping your user attributes to it. Instead of your employees having to navigate a dozen separate login screens, they got a single portal — a dashboard of tiles, one per application — and one click sent the right SAML assertion or OAuth token flowing invisibly in the background.

But here's the distinction that mattered enormously, and that separated IDaaS from what Facebook and Google were doing on the consumer web at the same time: you brought your own users.

With Facebook Connect or Google's social login, the identity provider owned the users. Facebook knew who they were, held their data, and decided the terms on which that identity could be used. If Facebook suspended your account, you lost access everywhere. The platform was the authority.

IDaaS worked the other way around. Okta — and Ping Identity, and OneLogin, and eventually Azure AD — provided the infrastructure, the software, and the integrations. But the users were yours. Your employees existed in your Active Directory or HR system first; Okta synchronized with that, or you provisioned into it, but the canonical record of who your people were remained under your control. You decided the access policies. You controlled the lifecycle — when an employee joined, what they could access, and critically, what happened the moment they left. Okta enforced your rules at scale across hundreds of applications simultaneously, but the rules were yours.

It was, in a sense, the enterprise equivalent of "bring your own device" — except applied to identity. Bring your own users. The IDaaS provider gave you the platform to manage them; it didn't presume to own them.

This distinction had real consequences. An enterprise whose employees authenticated through Okta wasn't handing a surveillance company a map of its workforce's daily activity. The data about who logged into what, when, stayed within the enterprise's own administrative domain. The vendor relationship was a service contract, governed by enterprise agreements with audit rights and data processing terms — not a terms-of-service checkbox that quietly granted the platform broad rights to the underlying data.

Ping Identity (founded 2002) had been working in this space earlier, with more of a hybrid focus—bridging on-premises enterprise infrastructure with cloud applications. OneLogin (2009) entered the same market. Microsoft itself would eventually respond with Azure Active Directory (later renamed Entra ID), moving its own identity platform to the cloud.

This was the emergence of Identity as a Service (IDaaS) — a category that barely existed in 2008 and would become a multi-billion-dollar market within a decade. What IDaaS changed wasn't the protocols — SAML assertions still flowed, OAuth tokens still passed. What changed was the operational model. Identity management shifted from a capital expense (buy servers, install software, hire specialists) to an operating expense (pay per user per month). And with that shift, sophisticated identity management became accessible to organizations that could never have afforded to build and maintain it themselves.

As a side note, the IDaaS market that Okta helped create has since expanded well beyond workforce SSO — spanning customer identity, privileged access management, machine identity, and increasingly, AI agent authentication, which will be covered in later articles as part of this series. If you're trying to navigate that landscape today, the IAM Benchmark is a useful starting point: a structured, regularly updated directory of IAM vendors — from full platform suites like Okta and Ping Identity to focused point solutions — with clear summaries of what each one actually does and who it's built for. Worth bookmarking if the vendor choices in this space have ever felt opaque.

MFA Goes Mainstream—and Gets Humbled

While federation solved the problem of too many passwords across too many sites, it didn't solve the problem of passwords themselves. A single federated password was still a password—still guessable, still phishable, still vulnerable to breach and reuse. Federation reduced the number of passwords but didn't improve the fundamental weakness of the mechanism.

The enterprise answer was multi-factor authentication: require something beyond a password to prove identity. The concept—"something you know plus something you have"—dated back decades. ATM cards required both a physical card and a PIN. High-security facilities required badges and codes. The principle was well established.

RSA SecurID had been the enterprise standard for hardware-based authentication since its launch in 1986. The system was straightforward: every employee carried a small hardware token that displayed a six-digit number that changed every 60 seconds. To log in, you entered your password (something you know) plus the current number from the token (something you have). The number was generated by a proprietary algorithm seeded with a unique value stored both in the token and on RSA's authentication server. Without the physical token, an attacker who stole your password still couldn't log in.

By the late 2000s, RSA SecurID tokens were ubiquitous in enterprise environments. Banks, government agencies, defense contractors, Fortune 500 companies—tens of millions of tokens were in circulation. RSA had become synonymous with two-factor authentication.

Then, in March 2011, RSA disclosed that it had been breached.

Attackers—later attributed to a nation-state actor—had penetrated RSA's own network and stolen information related to the SecurID system, including data about the seed values used to generate token codes. The precise scope of the theft was debated, but the implication was devastating: the secret at the heart of every SecurID token might be compromised. An attacker with the seed data could potentially predict the codes a token would display, rendering the "something you have" factor meaningless.

RSA initially downplayed the severity, with chairman Art Coviello stating, "We believe and still believe that the customers are protected." As the scope became clearer—and after Lockheed Martin reported an intrusion attempt that appeared to leverage compromised SecurID data—RSA offered to replace tokens for customers who requested it. The breach cost RSA's parent company EMC $66.3 million, covering investigation, system hardening, and token replacements for over 30,000 customers.

The irony was sharp. The company that had built its entire business on trust—"trust our tokens, trust our algorithms, trust our infrastructure"—had itself been compromised. The guardian of enterprise authentication had failed to guard itself.

But the breach didn't discredit multi-factor authentication. It had the opposite effect. The RSA incident demonstrated that any authentication factor could be compromised, which strengthened the argument for defense in depth—multiple independent factors, open standards rather than proprietary secrets, and the assumption that any single component might fail.

The breach accelerated interest in open MFA standards. The TOTP specification (Time-Based One-Time Password, RFC 6238) was published in 2011, providing an open alternative to RSA's proprietary algorithm. TOTP used the same concept—a time-based code generated from a shared secret—but the algorithm was public, implementable by anyone, and not dependent on a single vendor's infrastructure. Google Authenticator, launched in 2010, was an early and influential TOTP implementation.
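The open algorithm that displaced proprietary token schemes is compact enough to show in full. Below is a sketch of RFC 6238's HMAC-SHA1 variant, the same construction Google Authenticator uses; the secret is the RFC's published test key, base32-encoded.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the count of 30-second intervals
    since the Unix epoch, dynamically truncated to a short code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((now if now is not None else time.time()) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

A real verifier also accepts codes from a small window of adjacent timesteps to tolerate clock drift and rate-limits guesses; none of that is shown here. The essential contrast with SecurID is that every line above is public: the security rests entirely on the per-user secret, not on a vendor's proprietary algorithm.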

And a more fundamental rethinking was beginning. In 2012, a group of technology companies—including PayPal, Lenovo, and Nok Nok Labs—founded the FIDO Alliance (Fast IDentity Online). FIDO's premise was radical for the identity industry: passwords were not just inconvenient but fundamentally broken, and no amount of layering additional factors on top of passwords would fix the underlying problem. The goal was passwordless authentication—using public key cryptography and hardware security to eliminate passwords entirely.

FIDO's work was just beginning in 2012, and its impact would unfold over the next decade. But its founding marked a philosophical turning point: the enterprise identity community was beginning to ask whether the password—the oldest authentication mechanism in computing, dating back to Compatible Time-Sharing System (CTSS)—could finally be replaced rather than merely supplemented.

OAuth: The Authorization Protocol Everyone Used for Authentication

While enterprises were deploying SAML and MFA, a different problem was festering on the consumer web—and it would produce the protocol that ultimately unified consumer and enterprise identity.

The problem was the password anti-pattern.

In the mid-2000s, a new category of web application was emerging: services that wanted to access your data on other services. A photo printing service that needed access to your Flickr photos. A social aggregator that pulled in your Twitter posts. A productivity tool that read your Google Calendar.

The only way these applications could access your data was to ask for your password. You literally typed your Twitter password into a third-party app, and that app logged in as you. If the app was poorly built, your password leaked. If the app was malicious, your account was compromised. If you changed your password, every connected app broke. And you couldn't give an app limited access—"read my tweets but don't post"—because the app had your full credentials.

This was the digital equivalent of handing your house keys to every contractor, delivery person, and dog walker who needed temporary access to your home. Obviously dangerous. Obviously unsustainable. But without an alternative, everyone did it.

OAuth was built to provide that alternative.

The protocol emerged in 2007 from a community effort involving Blaine Cook (Twitter), Chris Messina, Larry Halff (Ma.gnolia), and Eran Hammer (then at Yahoo). The specific catalyst was practical: Twitter and the social bookmarking service Ma.gnolia both needed delegated authorization, and no existing standard addressed the problem.

OAuth 1.0 (community specification December 2007, IETF RFC 5849 in April 2010) solved the core problem. Instead of sharing your password with a third-party app, you authorized the app to access specific data on your behalf. The app received a token—a limited, revocable credential—instead of your password. You could revoke the app's access without changing your password. The app could be limited to specific permissions ("read tweets" but not "send tweets").

The analogy: instead of giving your house keys to a contractor, you give them a time-limited access badge that only opens the specific rooms they need, and you can deactivate the badge from your phone at any time.

OAuth 1.0 worked, but it was complex to implement correctly. To understand why, it helps to first understand what OAuth was actually doing at each step.

How OAuth works

The core problem OAuth solved was delegation: how do you let a third-party app access your data on another service, without giving that app your password? The answer was a structured handshake between three parties — you, the app you're using, and the service that holds your data.

Take a concrete example. You're using a new photo editing app, and you want it to pull in your photos from Flickr. Without OAuth, the app would ask for your Flickr password, log in as you, and do whatever it wanted. With OAuth, the flow goes like this:

  1. The app requests permission. Your photo editor contacts Flickr and says: "I'd like to access a user's photos. Here are my credentials as a registered app."
  2. You get asked. Flickr redirects you to its own login page — not the app's — and asks: "This app wants to read your photos. Do you approve?" You're authenticating with Flickr directly, so the app never sees your password.
  3. Flickr issues a token. If you approve, Flickr sends the app a token — a limited, revocable credential that says "this app is allowed to read this user's photos." Not write. Not delete. Just read.
  4. The app uses the token. From then on, the app presents that token with every request to Flickr's API. Flickr checks the token, confirms it's valid and covers the requested action, and returns the data.
  5. You can revoke it anytime. Because the token is separate from your password, you can cancel the app's access — from Flickr's settings — without changing your password or affecting any other app.

That's the fundamental model. Your password never left Flickr. The app got exactly the access you approved, nothing more. And you retained control.
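The five steps above can be modeled with a toy in-memory provider. Everything here is invented for illustration (the class, the scope names, the response strings), but it captures the three properties that matter: scoped access, a token check on every call, and revocation that never touches a password:

```python
import secrets

class ToyProvider:
    """Toy stand-in for the service holding the data (Flickr in the example)."""

    def __init__(self):
        self._tokens = {}  # token string -> set of granted scopes

    def issue_token(self, approved_scopes):
        # Step 3: the user approved, so mint a limited, revocable credential
        token = secrets.token_urlsafe(16)
        self._tokens[token] = set(approved_scopes)
        return token

    def api_call(self, token, required_scope):
        # Step 4: every request carries the token; check validity and scope
        scopes = self._tokens.get(token)
        if scopes is None:
            return "401 unknown or revoked token"
        if required_scope not in scopes:
            return "403 insufficient scope"
        return "200 ok"

    def revoke(self, token):
        # Step 5: revocation kills the token without touching any password
        self._tokens.pop(token, None)
```

A token scoped to "photos:read" opens exactly one door: reads succeed, writes are refused, and once revoked the token opens nothing at all.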

The OAuth 1.0 implementation burden

Where OAuth 1.0 became painful was in step 4 — actually using the token. Every single API request had to be cryptographically signed. The app had to assemble a precise string from the HTTP method, the URL, the query parameters, and a timestamp, run it through an HMAC-SHA1 algorithm using its secret key, and include the resulting signature in the request header. Get any part of that string assembly wrong — a parameter out of order, a character that should have been percent-encoded differently — and the server would reject the request with a cryptic error. Every programming language needed its own library, and those libraries were often buggy or inconsistent with each other. OAuth 1.0 was secure precisely because it was strict, but that strictness made it a significant burden — especially for smaller developers without dedicated security engineers.
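That signing ritual can be sketched concretely. The Python below follows the general shape of the RFC 5849 HMAC-SHA1 scheme; it is a simplified sketch, not a compliant implementation (real libraries must also handle duplicate keys, request-body parameters, and the oauth_* protocol fields):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def pct(s: str) -> str:
    # RFC 5849 percent-encoding: only unreserved characters stay bare
    return quote(s, safe="-._~")

def sign_request(method, url, params, consumer_secret, token_secret=""):
    # 1. Normalize: encode each key and value, then sort the pairs
    pairs = sorted((pct(k), pct(v)) for k, v in params.items())
    param_string = "&".join(f"{k}={v}" for k, v in pairs)
    # 2. Signature base string: METHOD & encoded URL & encoded params
    base = "&".join([method.upper(), pct(url), pct(param_string)])
    # 3. Signing key: both secrets, each encoded, joined with "&"
    key = f"{pct(consumer_secret)}&{pct(token_secret)}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

One misplaced character anywhere in that base string yields a different signature, which is exactly why getting independent implementations to interoperate was so hard.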

What OAuth 2.0 changed

OAuth 2.0 kept the same fundamental model — three parties, delegated access, tokens instead of passwords — but rethought the implementation entirely.

Its most significant change was dropping the per-request cryptographic signatures. Rather than requiring every API call to be individually signed, OAuth 2.0 simply relied on HTTPS to secure the connection. If the transport layer was encrypted, the argument went, you didn't need an additional layer of cryptographic proof on every individual request. This made implementations dramatically simpler: instead of a signing library, you just needed a valid HTTPS client and a token you included in the Authorization header.
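The contrast with OAuth 1.0 is visible in code: presenting a bearer token is a single header, with no signing step at all. The token and endpoint below are hypothetical, for illustration only:

```python
from urllib.request import Request

# Hypothetical token and endpoint, for illustration only
access_token = "ya29.a0AfH6SMBx7k2Qz9mN3pL8wR1vY4eC6dF0hJ2iK5"
req = Request(
    "https://api.example.com/v1/photos",
    headers={"Authorization": f"Bearer {access_token}"},
)
# No base string, no HMAC, no percent-encoding rules: TLS protects the
# channel, and the token simply rides along in one header.
```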

OAuth 2.0 also introduced multiple "grant types" — distinct flows tailored to different situations, rather than one protocol that everyone had to bend to their use case:

  • Authorization Code — the standard flow for server-side web apps, where a backend can securely hold secrets. This is the flow the Flickr example above describes.
  • Client Credentials — for server-to-server communication where no human user is involved at all. A backend service authenticating with an API, not on anyone's behalf.
  • Device Code — for smart TVs, CLI tools, or anything without a browser to redirect through. You see a code on screen, go authenticate on your phone, and the device polls until access is granted.
  • Implicit — a shortcut originally designed for browser-based JavaScript apps, later deprecated once its security weaknesses became clear.

The flexibility was real and valuable. But as Eran Hammer would argue, it was also where things started to go wrong.

The Road to Hell

Eran Hammer had been the lead author and editor of the OAuth 2.0 specification throughout its development. In July 2012—three months before the RFC was published—he resigned from the working group and published a blistering essay titled "OAuth 2.0 and the Road to Hell."

"OAuth 2.0 is a bad protocol. WS-* bad. It is bad enough that I no longer want to be associated with it. It is the biggest professional disappointment of my career."

His critique was specific and technical. The OAuth 1.0 specification was a complete protocol—follow it and you had a secure implementation. OAuth 2.0, Hammer argued, was a "framework"—a collection of options, extension points, and implementation choices that left critical security decisions to individual developers. Two OAuth 2.0 implementations could be fully compliant with the specification and completely incompatible with each other. Worse, the specification made it easy to build insecure implementations that technically followed the rules.

Hammer pointed to the influence of enterprise vendors—Microsoft, Google, and others—who had pushed the specification toward flexibility at the expense of security and simplicity. What had started as a focused, opinionated protocol had become, in his view, a design-by-committee compromise that served everyone's needs and no one's security.

He was, by many accounts, right about the problems. OAuth 2.0's flexibility did lead to inconsistent implementations. Security vulnerabilities in OAuth 2.0 deployments became common—not because the specification was broken, but because it left too many security-critical decisions to implementers who didn't always understand the implications.

And it didn't matter. OAuth 2.0 became the universal standard anyway.

It became the standard because flexibility, for all its risks, was what the industry needed. SAML's rigidity made it reliable but confined it to enterprise. OAuth 1.0's strictness made it secure but difficult to implement. OAuth 2.0's looseness made it adaptable to web apps, mobile apps, single-page apps, server-to-server communication, IoT devices—every new context the evolving internet threw at it.

The pattern from the entire series reasserted itself. HTTP beat better protocols. Cookies beat certificates. Plaintext passwords in databases beat PKI. The solution that ships—messy, imperfect, "good enough"—beats the perfect solution that doesn't. OAuth 2.0 was the latest instance of a dynamic as old as digital identity itself: pragmatism defeats purity, and the market accepts "workable" over "correct."

But Hammer had identified a real gap, and that gap had a specific shape: OAuth 2.0 was an authorization protocol. It answered the question "what is this app allowed to do?" It did not answer the question "who is this person?" Everyone was using OAuth for login—Facebook, Google, Twitter—but each had implemented the authentication layer differently, with proprietary extensions, incompatible claims, and inconsistent security properties.

Authorization without standardized authentication was like a system of travel visas with no standardized passport. You could authorize access to specific resources, but you had no standard way to verify who was requesting that access.

The missing piece was about to arrive.

OpenID Connect: The Resolution (2014)

OpenID Connect (OIDC) was built by people who understood both OAuth's strengths and OpenID's failure.

The key figures—Mike Jones (Microsoft), John Bradley (independent consultant and veteran of multiple identity standards), and Nat Sakimura (Nomura Research Institute, Japan)—were steeped in the history. They'd watched OpenID fail at the consumer level despite technical soundness. They'd watched OAuth 2.0 succeed as an authorization framework while leaving authentication as an ad hoc mess. They'd watched Facebook and Google build proprietary authentication layers on top of OAuth and create a fragmented landscape where every "Log in with..." button worked differently under the hood.

OpenID Connect, finalized in February 2014, took a pragmatic approach: don't replace OAuth 2.0. Build on top of it. Add a standardized authentication layer that does for identity what OAuth did for authorization.

The core innovation was the ID Token—a JSON Web Token (JWT) issued by the identity provider alongside the OAuth access token. To understand what made this significant, it helps to see what each token actually looks like.

The OAuth access token is deliberately opaque. From the application's perspective, it's just a string—something like:

ya29.a0AfH6SMBx7k2Qz9mN3pL8wR1vY4eC6dF0hJ2iK5

The application doesn't know what's inside it. It doesn't need to. It presents the token to an API, and the API decides whether it's valid and what access it grants. The token is a key—useful for opening a specific door, but carrying no information about who's holding it.

The ID token is the opposite. It's a JWT, and JWTs have a defined structure: three base64-encoded sections separated by dots. The first is a header describing the token type and signing algorithm. The second is the payload—the claims. The third is the cryptographic signature. Decoded, the payload of an ID token looks something like this:

{
  "iss": "https://accounts.google.com",
  "sub": "110169484474386276334",
  "aud": "812741506391.apps.googleusercontent.com",
  "email": "jane.smith@gmail.com",
  "email_verified": true,
  "name": "Jane Smith",
  "picture": "https://lh3.googleusercontent.com/photo.jpg",
  "iat": 1516239022,
  "exp": 1516242622
}

Each field is a claim — a specific, signed assertion about the user or the authentication event.

The entire payload is signed with Google's private key. Any application that receives this token can verify that signature using Google's public key — published at a well-known URL — without calling Google at all. If the signature checks out, the claims are trustworthy. If someone tampered with the payload — changed the email, extended the expiration — the signature would no longer match and the token would be rejected.
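The mechanics can be sketched with nothing but the standard library. Real ID tokens are typically signed asymmetrically (RS256, verified against the provider's published public key); the sketch below uses a shared-secret HMAC (HS256) purely to make the three-part structure and the tamper check concrete:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(text: str) -> bytes:
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

def make_jwt(claims: dict, key: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    # header.payload, each section a base64url-encoded JSON object
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, claims)
    )
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

def verify_jwt(token: str, key: bytes) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("signature mismatch: token was tampered with")
    # Only after the signature checks out do we trust the claims
    return json.loads(b64url_decode(signing_input.split(".")[1]))
```

Change a single claim in the payload and verification fails, which is the entire point: the claims are only as trustworthy as the signature over them.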

Where the access token said "this app is authorized to access these resources," the ID token said "this person is Jane Smith, here is her email, here is when she authenticated, and here is the cryptographic proof that we — the identity provider — are making these claims."

This distinction matters. An application receiving only an access token knows it has permission to do something, but doesn't know who gave that permission. An application receiving an ID token knows exactly who authenticated, when, and through which provider — all verifiable without a round-trip to the identity provider. The ID token was a signed, structured, machine-readable identity assertion that could travel across systems and be verified by anyone who trusted the issuer's public key. That's the entire PKI vision from the 1990s, finally working invisibly at consumer scale — one button click, one signed token, one cryptographic handshake the user never sees.

If this sounds familiar, it should. It's the same concept as a SAML assertion—a signed statement from a trusted authority about a user's identity. But where SAML assertions were verbose XML documents designed for browser-based enterprise federation, ID tokens were compact JSON objects designed for the modern web—mobile apps, single-page applications, APIs, microservices.

The format mattered. A SAML assertion might be kilobytes of XML requiring dedicated parsing libraries. A JWT was a few hundred bytes of base64-encoded JSON that any programming language could decode in a few lines of code. In an era of mobile apps on cellular networks, that difference in overhead was the difference between practical and impractical.

OpenID Connect succeeded where OpenID had failed for a simple reason: it didn't ask users to do anything differently. There was no URL to type, no provider to choose, no new concept to understand. Users still clicked "Log in with Google." The experience was identical. What changed was the plumbing underneath—standardized, interoperable, secure.

And it succeeded where SAML couldn't expand because it didn't require pre-established trust relationships. Any developer could register their application with Google's identity platform, implement the OIDC flow, and accept Google-issued ID tokens—in an afternoon, without exchanging metadata files or configuring XML trust stores.


The Stack Crystallizes—and the Cracks Show

By 2014, digital identity had settled into a recognizable architecture. The federation wars had produced winners, losers, and a landscape that nobody had fully planned but everyone had to live with.

The Consumer Web Stack

For ordinary people using the internet, identity had consolidated around a small number of mega-providers:

Authentication: "Log in with Google," "Log in with Facebook," or create a site-specific password. OAuth 2.0 and OpenID Connect handled the plumbing. Users saw buttons.

Authorization: OAuth 2.0 tokens governed what third-party apps could access. "Allow this app to read your contacts?" was a dialog most users had encountered, even if they didn't understand the protocol underneath.

Identity storage: Concentrated in a handful of providers. Google, Facebook, and (increasingly) Apple held the canonical versions of hundreds of millions of people's digital identities. The dream of distributed identity had collapsed into oligopoly.

Behavioral tracking: Deeply integrated with the identity layer. The companies providing "Log in with..." were the same companies whose business models depended on knowing as much as possible about their users' behavior across the web.

The Enterprise Stack

For organizations managing employee access, a parallel but distinct architecture had matured:

Directory: LDAP and Active Directory remained the source of truth for who existed within an organization and what groups they belonged to. The data model from Article 3's directory services discussion—hierarchical trees, distinguished names, group memberships—was still the foundation, now often synchronized to cloud directories.

Federation: SAML 2.0 for established B2B relationships. Increasingly, OpenID Connect for newer integrations. The two protocols coexisted, with SAML dominant in legacy enterprise contexts and OIDC gaining ground in cloud-native environments.

Single Sign-On: IDaaS platforms—Okta, Ping Identity, Azure AD—served as federation brokers, maintaining trust relationships with hundreds of SaaS applications and presenting employees with a unified portal. The SSO dream that Kerberos had demonstrated at MIT in the 1980s was now a commodity service available to any organization with a subscription.

Multi-Factor Authentication: TOTP apps (Google Authenticator, Authy), SMS codes, push notifications, and the beginning of FIDO-based hardware authentication. The RSA breach had accelerated the move toward open standards and away from single-vendor dependence. MFA was transitioning from "high-security option" to "baseline requirement."

Authorization: Role-Based Access Control (RBAC)—assigning permissions based on group membership and job function—remained the workhorse. But its limitations were becoming apparent in complex environments. A user's access needs might depend not just on their role but on their location, the device they were using, the time of day, and the sensitivity of the data they were requesting. Attribute-Based Access Control (ABAC) and policy-based approaches were gaining traction, though RBAC remained dominant by sheer inertia.
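A toy policy check makes the difference concrete. The roles, attributes, and the policy rule below are invented for illustration: RBAC answers from role membership alone, while ABAC evaluates a policy over contextual attributes of the user, device, and request:

```python
# Toy contrast between RBAC and ABAC; roles, attributes, and the policy
# itself are invented for illustration.
ROLE_PERMISSIONS = {
    "engineer": {"repo:read", "repo:write"},
    "auditor": {"logs:read"},
}

def rbac_allows(role: str, permission: str) -> bool:
    # RBAC: the decision depends only on role membership
    return permission in ROLE_PERMISSIONS.get(role, set())

def abac_allows(permission: str, attrs: dict) -> bool:
    # ABAC: the decision is a policy over attributes of the user,
    # the device, and the request context
    if permission == "repo:write":
        return bool(
            attrs.get("role") == "engineer"
            and attrs.get("device_managed")
            and attrs.get("network") == "corporate"
        )
    return rbac_allows(attrs.get("role"), permission)
```

The same engineer who can write to the repository from a managed corporate laptop is refused from an unmanaged device, a distinction RBAC simply cannot express.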

Compliance: Audit trails, access reviews, lifecycle management—the compliance requirements that had funded the enterprise identity industry in the first place were now deeply embedded in the stack. Every authentication event logged. Every access decision recorded. Every permission change tracked. The identity system wasn't just about letting people in—it was about proving, to auditors and regulators, exactly who was let in, when, and why.

The Uncomfortable Truth

The federation era had solved the problem it set out to solve. Users had fewer passwords to remember. Enterprises could manage access across hundreds of cloud applications. The cryptographic tools from the 1990s had finally reached everyday users, working invisibly behind friendly buttons.

But the three fracture lines hadn't been resolved. They'd merged in ways that created new problems as serious as the ones they replaced.

The concentration problem. Federation was supposed to distribute trust. The original vision—Liberty Alliance, OpenID—imagined a world of many identity providers, with users choosing among them, and no single entity dominating. Instead, the market concentrated identity into a handful of providers. Google and Facebook became the de facto identity layer of the consumer web. For enterprises, Microsoft (Azure AD) and a few IDaaS providers played the same role.

The surveillance problem. The behavioral shadow profile had been fused with identity. Every "Log in with Facebook" gave Facebook cross-site behavioral data. Every "Log in with Google" fed data into Google's advertising profile. The identity provider and the behavioral tracker were the same entity.

The single-point-of-failure problem. Concentrated identity created concentrated risk. When Facebook experienced outages—as it did repeatedly—millions of users couldn't log into third-party sites that depended on Facebook login. When a major identity provider was breached, the blast radius was enormous—a compromised Google account could mean compromised access to every service the user had connected to Google.

The identity-ownership problem. The most fundamental tension was philosophical. Users didn't own their federated identities. Facebook could disable your account—for violating community standards, for using a pseudonym that violated the real-name policy, for reasons that were opaque and unappealable—and you would lose access to every third-party site you'd connected through Facebook login. Your digital identity existed at the pleasure of a corporation whose interests might not align with yours.

This was a new form of an ancient problem. Article 3 described how identity documents have always been issued by authorities—governments, institutions, organizations—and the holder has always been dependent on the issuer. A government can revoke your passport. An employer can disable your corporate account. The issuer retains ultimate control.

But physical identity documents have legal frameworks governing revocation. Due process. Appeals. Rights. Digital identity on the federated web had none of these protections. Terms of service—thousands of words of legalese that nobody read—were the entire legal framework. The identity provider was simultaneously the issuer, the judge, and the executioner, accountable only to its own business interests.

The web had, in fifteen years, recapitulated the full arc of identity politics that took the physical world centuries to navigate. A chaotic period of no central authority (the early web's account free-for-all). An attempted monopoly (Passport). A coalition rejecting the monopoly (Liberty Alliance). Open standards as treaty frameworks (SAML, OAuth). A few powerful entities emerging as de facto authorities anyway (Google, Facebook). And now, the growing realization that concentrated authority over identity—even when it arrives through market dynamics rather than government decree—creates power imbalances that demand accountability.

The federation wars hadn't ended. They'd just revealed the deeper war underneath: not which protocol wins, but who controls identity itself.


What Comes Next

The period from 1999 to 2014 answered the question we previously posed: how do you solve the web's account fragmentation crisis? The answer was federation—authenticate once, access many services, let a trusted provider vouch for you.

But federation, as it actually played out, raised harder questions than it answered.

The cryptographic tools—digital signatures, public key verification, signed assertions—finally reached everyday users, working invisibly behind "Log in with..." buttons. The enterprise identity stack matured into a recognizable architecture: directories, federation protocols, single sign-on portals, multi-factor authentication, compliance-driven audit trails. These were genuine achievements, decades in the making.

But the identity oligopoly that emerged was not what anyone had planned. The open-standards idealists who built OpenID wanted a web where anyone could be an identity provider and users chose freely. Instead, the market delivered a web where three or four mega-platforms controlled the identity layer—and used that control to deepen their surveillance of user behavior. The wave collapse that we described—the flattening of multiple, contextual, authentic selves into a single observable identity—had accelerated through social login. And the concentration of identity in a few providers created new risks: systemic single points of failure, unaccountable power over digital existence, and the fusion of authentication with behavioral tracking.

The next chapter of digital identity would be shaped by two forces.

The first was technological. The smartphone—always with you, equipped with biometric sensors and hardware security modules, connected to everything—was about to transform authentication from something you did at a keyboard to something your device did continuously. Mobile would demand new protocols (SAML's XML was too heavy, its browser-based flows too rigid), new form factors for credentials (hardware tokens wouldn't survive the transition to touch screens), and new thinking about what "logging in" even meant when your device already knew who you were.

The second was political. The surveillance economy that social login had enabled was heading toward a reckoning. The fusion of identity and tracking—invisible to most users throughout the federation era—was about to become very, very visible. And when it did, the question of who controls digital identity would stop being a technical debate and become a public crisis.

The federation wars built the protocols. The next era would test whether those protocols could survive contact with billions of mobile devices, a global privacy awakening, and the long-deferred question of whether passwords—the oldest authentication mechanism in computing—could finally be eliminated.


Next: Part 6 - The Mobile Revolution and the Surveillance Machine

Note: If you would like to see a specific IAM product/vendor that is not listed in the IAM Benchmark, please contact me and I'd be glad to add it.


Further Reading:

Books

› Recordon, David and Reed, Drummond. "OpenID 2.0: A Platform for User-Centric Identity Management". ACM Workshop on Digital Identity Management, 2006.

› Richer, Justin and Sanso, Antonio. "OAuth 2 in Action". Manning Publications, 2017.

› Siriwardena, Prabath. "Advanced API Security: OAuth 2.0 and Beyond". Apress, 2020.

› Angwin, Julia. "Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance". Times Books, 2014.

› Schneier, Bruce. "Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World". W.W. Norton & Company, 2015.

› Lyon, David. "Surveillance Studies: An Overview". Polity Press, 2007.

› Solove, Daniel J. "The Digital Person: Technology and Privacy in the Information Age". NYU Press, 2004.


Standards and RFCs

› Cantor, S., et al. "Assertions and Protocols for the OASIS Security Assertion Markup Language (SAML) V2.0". OASIS Standard, March 2005.

› Recordon, D. and Reed, D. "OpenID Authentication 2.0 - Final". OpenID Foundation, December 2007.

› Hammer-Lahav, E. (ed.) "RFC 5849: The OAuth 1.0 Protocol". Internet Engineering Task Force, April 2010.

› Hardt, D. (ed.) "RFC 6749: The OAuth 2.0 Authorization Framework". Internet Engineering Task Force, October 2012.

› Sakimura, N., Bradley, J., Jones, M., de Medeiros, B., Mortimore, C. "OpenID Connect Core 1.0". OpenID Foundation, February 2014.

› Jones, M., Bradley, J., Sakimura, N. "RFC 7519: JSON Web Token (JWT)". Internet Engineering Task Force, May 2015.

› Jones, M., Bradley, J., Sakimura, N. "RFC 7517: JSON Web Key (JWK)". Internet Engineering Task Force, May 2015.

› M'Raihi, D., et al. "RFC 6238: TOTP: Time-Based One-Time Password Algorithm". Internet Engineering Task Force, May 2011.

› M'Raihi, D., et al. "RFC 4226: HOTP: An HMAC-Based One-Time Password Algorithm". Internet Engineering Task Force, December 2005.


Legislation and Compliance

"Sarbanes-Oxley Act of 2002". 107th United States Congress, Public Law 107-204.

"Health Insurance Portability and Accountability Act (HIPAA) Security Rule". U.S. Department of Health and Human Services, effective 2005.

"Payment Card Industry Data Security Standard (PCI-DSS)". PCI Security Standards Council, Version 1.0, December 2004.

"California Online Privacy Protection Act (CalOPPA)". California State Legislature, 2003.


Key Articles and Reports

› Hammer, Eran. "OAuth 2.0 and the Road to Hell". hueniverse.com, July 2012.

› Felt, Adrienne Porter and Evans, David. "Privacy Protection for Social Networking APIs". Web 2.0 Security and Privacy Workshop, 2008.

› Sun, San-Tsai and Beznosov, Konstantin. "The Devil is in the (Implementation) Details: An Empirical Analysis of OAuth SSO Systems". ACM Conference on Computer and Communications Security, 2012.

› Armando, Alessandro, et al. "Formal Analysis of SAML 2.0 Web Browser Single Sign-On: Breaking the SAML-based Single Sign-On for Google Apps". ACM Workshop on Formal Methods in Security Engineering, 2008.

› Gross, Ralph and Acquisti, Alessandro. "Information Revelation and Privacy in Online Social Networks". ACM Workshop on Privacy in the Electronic Society, 2005.

› Krishnamurthy, Balachander and Wills, Craig E. "Privacy Diffusion on the Web: A Longitudinal Perspective". World Wide Web Conference (WWW), 2009.

› Roesner, Franziska, Kohno, Tadayoshi, and Wetherall, David. "Detecting and Defending Against Third-Party Tracking on the Web". USENIX Symposium on Networked Systems Design and Implementation, 2012.

› Bonneau, Joseph and Preibusch, Sören. "The Password Thicket: Technical and Market Failures in Human Authentication on the Web". Workshop on the Economics of Information Security (WEIS), 2010.

› Fett, Daniel, Küsters, Ralf, and Schmitz, Guido. "A Comprehensive Formal Security Analysis of OAuth 2.0". ACM Conference on Computer and Communications Security, 2016.


Historical Documents

"Liberty Alliance Project". Original specifications and documentation.

"Microsoft Passport". Microsoft Developer Network Archive.

"Facebook Platform Launch". F8 Conference announcement, May 2007.

"RSA Security Breach Disclosure". RSA SecurID Public Statement, March 2011.



Reference Resources

"SAML 2.0". Wikipedia.

"OpenID". Wikipedia.

"OAuth". Wikipedia.

"Liberty Alliance". Wikipedia.

"Microsoft account (formerly Passport)". Wikipedia.

"Nymwars". Wikipedia.

"FIDO Alliance". Official website.

"OpenID Foundation". Official website.

"OAuth Community Site".

"OASIS SAML Resources". OASIS.
