This series began with a
simple problem: strangers sharing an expensive computer needed a way to prove who they were. The answer—a password—worked, until someone stole the file it was stored in.
Article 2 asked whether we could verify identity without storing the proof in recoverable form. Cryptography said yes—hashing, salting, one-way functions. Authentication worked. Trust was local and centralized: one system, one administrator, one password file.
Then we
connected the systems together. The 1980s and 90s were spent trying to recreate centuries of social technology in mathematics—and discovering that bits aren't wax, trust doesn't scale, and humans will always route around friction like water around stone.
The web gave everyone the power to create themselves from scratch, a hundred times over, on a hundred different sites. That liberation curdled into a crisis when advertisers, hackers, and the sheer weight of forgotten passwords revealed that a self with no center can't hold.
Article 5 followed those who tried to become that center—and what happened when corporations, open-source idealists, and billion-user platforms each claimed the right to vouch for who you are.
Part 6 is the story of what happened when the computer moved into your pocket, learned your face, and never let go—and when those invisible costs finally became visible.
Two forces were converging: the smartphone would transform authentication from something you did at a keyboard into something your device did continuously, and the surveillance economy that social login had enabled was heading toward a reckoning.
These forces weren't independent. They fed each other. The same device that made authentication more convenient also made surveillance more comprehensive. The same biometric sensor that freed you from remembering passwords also generated data that someone, somewhere, wanted to collect.
This article traces both forces—and the collision between them.
On January 9, 2007, Steve Jobs walked onto a stage in San Francisco and introduced a device he called the iPhone.
Apple had been using the "i" prefix since 1998, when the iMac launched as an accessible gateway to the internet. The "i," Jobs said at the time, stood for "internet." And the iPhone was certainly that. A pocket-sized portal to the web, to email, to the entire networked world.
But looking back, the "i" might just as well have stood for individual. Or identity.
Because the iPhone—and the smartphone revolution it ignited—didn't just give people a new way to access the internet. It fundamentally changed the relationship between a person and their digital self. For the first time, a computing device was personal in the deepest sense of the word. Not personal like a "personal computer" that sat on a desk and was occasionally shared by family members. Personal like a wallet. Personal like a set of keys. Personal like something you reach for the moment you wake up and set down only when you sleep.
Every previous era of digital identity had assumed separation between the person and the machine. You walked up to a terminal and logged in. You sat down at a desktop and entered your password. You opened a laptop and authenticated. The act of identification was a threshold—a moment where you crossed from the physical world into the digital one, proved who you were, and eventually crossed back out by logging off or walking away.
The smartphone eliminated that threshold.
You didn't log into your phone the way you logged into a workstation. You unlocked it—a gesture that felt more like opening your front door than like presenting credentials to a system. And once unlocked, you stayed authenticated. Apps remembered you. Sessions persisted. The device was always on, always connected, always you in a way that no previous computing device had been.
This wasn't a minor UX improvement. It was a philosophical rupture with everything that had come before.
On CTSS in 1961, you logged in and logged out. Your session had a clear beginning and end. The system tracked your time because time was expensive and shared. On the web in 1998, cookies gave sites a memory of your visits, but sessions expired. Close the browser, clear the cookies, and the digital you evaporated. You could walk away.
On a smartphone in 2007, there was no walking away. The device went where you went. It knew where you were. It held your email, your messages, your photos, your calendar, your contacts—the digital residue of your entire life. Logging out wasn't just inconvenient; it was almost conceptually incoherent. Log out of what?
Fernando Corbató's password was designed for a world where people approached machines. The smartphone inverted that relationship. Now the machine approached you. It was with you, always. And that inversion changed what "authentication" needed to mean.
The Platform Duopoly
In November 2007, ten months after Jobs's announcement, Google unveiled the Open Handset Alliance and the Android operating system. The first Android phone, the HTC Dream, shipped in October 2008. Within a few years, Android would surpass the iPhone in global market share by a wide margin, bringing smartphones to billions of people who couldn't afford Apple's hardware.
The two platforms took different paths to market dominance, but both made the same move with respect to identity: they required an account.
To use an iPhone, you needed an Apple ID. To use an Android phone, you needed a Google account. Not technically, not in every case, not without workarounds—but practically, functionally, the device was crippled without one. No app store. No cloud backup. No synced contacts. No email configured out of the box. The setup wizard on both platforms gently—or not so gently—guided you toward creating or signing into an account before you could do anything else.
This mirrored what Microsoft had attempted with Passport and Windows XP—bundling an identity system with an operating system. But where Passport had been rejected by the industry and treated with suspicion by users, Apple and Google succeeded without resistance. The difference was context. Microsoft Passport asked you to create an account for an abstract future benefit—"sign in everywhere on the web." Apple and Google asked you to create an account to use the device you'd just purchased. The value exchange was immediate and tangible. You wanted your apps, your photos, your messages. The account was the key to all of it.
The consequence, barely noticed at the time, was that two companies had quietly become the identity providers for the majority of the world's mobile users. Not through federation protocols. Not through standards bodies. Not through legislative frameworks. Through hardware and the app stores that made the hardware useful.
The federation wars had concentrated web identity into an oligopoly of social login providers—Facebook, Google, and a few others. The smartphone era didn't disrupt that concentration. It deepened it. Google was now both the dominant social login provider and the operating system identity provider for the majority of the world's phones. Apple, which had largely sat out the federation wars, entered the identity game not through the web but through the device itself.
The identity layer of digital life had a new foundation and it wasn't a protocol or a standard. It was a piece of hardware.
The App Model Breaks the Web's Assumptions
The smartphone didn't just change who controlled identity. It changed how identity worked technically.
SAML, the enterprise federation protocol, was designed for web browsers. Its authentication flow depended on HTTP redirects—the user's browser being bounced from a service provider to an identity provider and back, carrying XML assertions in POST requests. This worked fine when "the client" was a desktop browser with a full HTTP stack, a visible address bar, and a user who understood (at least vaguely) that they were moving between websites.
Native mobile apps weren't browsers. They didn't have address bars. They didn't naturally handle HTTP redirects between domains. They didn't render XML. The entire interaction model was different: instead of navigating between websites, users tapped icons that launched self-contained applications. Each app was its own world, its own full-screen experience. The concept of "redirecting to another domain" didn't map to anything in the user's mental model.
Recall from Article 5: OAuth was the protocol the industry built to solve a specific problem—how to let one service access your data on another without handing over your password. Instead of giving a photo printing app your Flickr password, OAuth let you grant it a limited, revocable token: permission to read your photos, nothing more. OAuth 2.0, which became the universal standard, was designed to be flexible enough to work across many different kinds of applications—though its lead author, Eran Hammer, quit the project over that very flexibility, arguing it came at the cost of security and consistency.
That flexibility, it turned out, was the right call. OAuth 2.0's multiple grant types weren't overengineering—they were anticipating a world where "the client" requesting access might be a server-side web app, a single-page JavaScript application, a native mobile app, a smart TV, a command-line tool, or a device with no browser at all.
The Authorization Code flow with PKCE (Proof Key for Code Exchange) became the standard for mobile apps. To see why, imagine you're building a mobile app that connects to a user's Spotify account. Your app needs to prove to Spotify that it's a legitimate app—not a malicious one trying to hijack user accounts. Normally, you'd do this with a client secret: a kind of password that identifies your app. But unlike a web server you control, your mobile app lives on millions of devices. Anyone can download it and use freely available tools to pick apart the code. Any secret hidden inside isn't really hidden.
PKCE gets around this by never storing a secret in the first place. Instead, each time a user logs in, the app generates a unique, throwaway proof that's only valid for that single session. Once the login is complete, it's gone. There's nothing permanent baked into the app for an attacker to find.
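To make that concrete, here is a minimal sketch of the client's half of PKCE as specified in RFC 7636: generate a random verifier, send only its hash with the authorization request, and reveal the verifier at token exchange. Everything around it (endpoints, the other OAuth parameters) is elided; `code_challenge` and `code_verifier` are the standard parameter names.

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a one-time PKCE verifier and its S256 challenge."""
    # A high-entropy, throwaway secret created fresh for each login attempt.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # Only this hash travels with the initial authorization request.
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# 1. The authorization request carries code_challenge=<challenge> (method S256).
# 2. When exchanging the returned authorization code for tokens, the app sends
#    code_verifier=<verifier>; the server hashes it and checks for a match.
# An attacker who intercepts the authorization code lacks the verifier,
# so the stolen code is useless to them.
```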
OpenID Connect was built on top of OAuth 2.0 by authors who saw this shift coming. Its lightweight, JSON-based ID tokens travel easily across mobile networks. The underlying JWT format is compact, self-contained, and verifiable without a round trip to the identity provider, making it well-suited to the realities of mobile: intermittent connectivity, limited bandwidth, and users who need to authenticate quickly and get on with their lives.
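For a sense of what self-contained verification means in practice, here is a minimal sketch of checking a compact JWT locally. It uses HMAC-SHA256 (JWT's `HS256` algorithm) because Python's standard library has no RSA; real OpenID Connect ID tokens are typically signed with RS256 and verified against the provider's published public keys, but the structure is the same: header, payload, signature, checked offline.

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(part: str) -> bytes:
    # JWT segments are base64url without padding; restore it before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def verify_jwt_hs256(token: str, key: bytes) -> dict:
    """Verify a compact JWT locally, with no round trip to the issuer."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signed_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(key, signed_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("signature mismatch")
    # A real verifier also checks the header's "alg" and the standard
    # claims inside the payload (exp, aud, iss) before trusting it.
    return json.loads(b64url_decode(payload_b64))
```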
The enterprise world didn't abandon SAML—it still dominated B2B federation and legacy integrations. But for anything new, anything mobile-first, anything consumer-facing, OpenID Connect and OAuth 2.0 became the default. The technical center of gravity had shifted.
The Phone Number as Identity
While Apple and Google were tying identity to platform accounts, a parallel development was unfolding—one that would reach more people worldwide than any Silicon Valley product. And to understand it, you have to look beyond the markets where this series has largely been set.
The story so far has mostly been a story of the developed world—and I should be precise about what I mean by that here. I'm not using "developed" and "developing" as broad economic or cultural judgments. I'm using them in a narrow, specific sense: regions where computing infrastructure—mainframes, personal computers, landline data networks, broadband internet—was deployed at scale and adopted by a significant portion of the population during the decades this series has covered. The distinction is about technology access and infrastructure sequencing.
With that framing: the story so far has been set almost entirely in regions that followed a specific technological sequence. CTSS at MIT. Unix at Bell Labs. Kerberos and LDAP at research universities and corporations. The web identity crisis playing out across Amazon, eBay, and Hotmail. The federation wars fought between Microsoft, Google, Facebook, and standards bodies. All of this assumed a user who had progressed through that same sequence: mainframes, then personal computers, then dial-up internet, then broadband, then the web. Each era's identity problems built on the previous era's solutions.
But for billions of people across sub-Saharan Africa, South Asia, Southeast Asia, and Latin America, that sequence never happened. There were no mainframes to share. No personal computers on desks. No dial-up modems screeching their way onto the early web. The infrastructure that this technological sequence was built on—landline telephone networks capable of carrying data, reliable electricity, affordable computing hardware—simply didn't exist at the scale needed for mass participation.
These regions didn't slowly catch up. They skipped ahead.
The technology they leapfrogged to wasn't the personal computer. It was the cell phone. By the mid-2000s, mobile phone adoption in the developing world was growing explosively—far faster than PC adoption had ever grown anywhere. The economics were fundamentally different: a basic handset cost a fraction of a computer, cellular networks could be built without running copper or fiber to every home, and prepaid SIM cards eliminated the need for a bank account or a billing relationship. The cell tower, not the telephone pole or the cable modem, became the infrastructure that connected billions of people to each other.
When the smartphone arrived, it didn't bring these populations onto an internet they'd been missing. For many, it was the internet—their first and often only point of access. They hadn't experienced the desktop web's identity crisis: the accumulation of dozens of accounts, the password fatigue, the fragmentation that defined the late 1990s and early 2000s. They came to digital life with a clean slate and a single, pre-existing credential: a phone number.
And the phone number had already begun functioning as identity long before smartphones formalized the relationship. In markets across the developing world—and in parts of the developed world too—vanity phone numbers became status symbols. Numbers with repeating digits, sequential patterns, or culturally significant combinations commanded premiums. In China, numbers containing the digit 8 (associated with prosperity) sold for hundreds or even thousands of dollars. In parts of the Middle East, numbers with memorable patterns were traded and resold. In the United States, businesses had long paid premiums for memorable 1-800 numbers that spelled words—a recognition that a phone number wasn't just a routing address but a public identity, something people associated with you, remembered you by, and judged you on.
This was identity behavior applied to infrastructure. Phone numbers were designed to route calls through a switching network—a technical function no different in principle from an IP address routing packets across the internet. But people identified with them. They memorized each other's numbers. They printed them on business cards. They chose them carefully. They paid extra for good ones. The number stopped being what the network used to find you and started being how other people knew you. By the time the smartphone arrived and apps formalized the phone number as a digital identity credential, the social transformation had already happened. The protocol was just catching up to the behavior.
In much of the world, the smartphone's killer identity feature wasn't the app store account. It was that phone number.
WhatsApp, founded in 2009 and launched on iPhone that year, made a radical simplification: your account was your phone number. No username to choose. No password to remember. No email to verify. Insert a SIM card, receive an SMS verification code, and you existed on the platform. Your contacts—identified by their phone numbers in your address book—appeared automatically. No friend requests, no search, no manual connection. The social graph was the telephone network.
By 2012, WhatsApp had 100 million active users. By 2014, when Facebook acquired it for $19 billion, it had 450 million. As of this writing, it's north of two billion.
WeChat took the same approach in China, launching in 2011 and growing to over a billion users. It went further than WhatsApp—becoming not just a messaging app but an identity platform. WeChat Pay, mini-programs, government services ... all tied to a phone number, and all contained within a single app. In China, your WeChat identity is your digital identity in a way that no Western equivalent fully captures.
Across sub-Saharan Africa, M-Pesa—launched by Safaricom in Kenya in 2007, the same year as the iPhone—had already demonstrated that a phone number could be a financial identity. Send money, receive money, pay bills, save—all through SMS on basic feature phones, no smartphone required. Your M-Pesa account was your phone number. Your balance was stored by the carrier. In a region where hundreds of millions of people had no bank account and no government-issued ID, the SIM card became the most important identity credential they possessed.
This was a fundamentally different path than the one the developed world had taken. The web identity crisis was about too many usernames and passwords. The phone-number-as-identity model bypassed that problem entirely. No usernames. No passwords. Just a number tied to a physical SIM card in a physical device.
It was also, in important ways, a step backward in terms of who controlled your identity. A phone number is controlled by a carrier, not by you. Lose your phone, lose your SIM, fail to pay your bill, or switch carriers—and your identity is at risk. SIM swapping attacks, where an attacker convinces a carrier to transfer your number to their device, became a devastating attack vector precisely because so much identity now depended on phone number control. The "something you have" factor that phone-based authentication relied on turned out to be something you rented from a telecommunications company.
And phone numbers were never designed to be identity credentials. They were designed to route calls. Repurposing them as authentication factors—using SMS one-time passwords as a second factor, using phone number as a primary account identifier—was a case of the same pattern that runs through this entire series: the expedient solution, the thing that works right now with infrastructure that already exists, winning over the architecturally sound solution that would take years to deploy.
SMS one-time passwords were the starkest example. By the early 2010s, SMS OTP had become the most widely deployed second authentication factor in the world—used by banks, email providers, social networks, and government services. It was universal (everyone had SMS), familiar (everyone understood text messages), and required no additional software or hardware.
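The server-side pattern was as simple as its ubiquity suggests. A minimal sketch, with invented names; a real deployment would persist state and rate-limit attempts:

```python
import hmac
import secrets
import time

# In-memory store of pending codes: phone -> (code, expiry).
pending: dict[str, tuple[str, float]] = {}

def send_otp(phone: str) -> None:
    code = f"{secrets.randbelow(1_000_000):06d}"   # e.g. "042517"
    pending[phone] = (code, time.time() + 300)     # valid for five minutes
    # ...hand `code` to an SMS gateway for delivery over the carrier network.

def check_otp(phone: str, attempt: str) -> bool:
    code, expiry = pending.pop(phone, ("", 0.0))   # single-use: pop, don't get
    return time.time() < expiry and hmac.compare_digest(code.encode(), attempt.encode())
```

Note that nothing in this logic is the weak point. The weakness is the delivery channel in the middle.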
SMS OTP was also deeply insecure. SS7 (Signaling System No. 7), the protocol suite carriers used to route SMS messages, was designed in the 1970s for a closed network of trusted telephone companies. It had no authentication, no encryption, and no mechanism to prevent interception.
The Shadow Profile Moves to Mobile
While phone numbers were becoming identity credentials and app stores were reshaping authentication, the behavioral tracking machinery was making its own migration to mobile—and in doing so, it created a new form of identity that was more persistent, more intimate, and more consequential than anything the desktop web had produced.
On the desktop web, DoubleClick had built cross-site behavioral profiles through third-party cookies—an inferred identity, assembled from browsing patterns, that existed parallel to the accounts users consciously created. Facebook Connect fused the identity provider with the surveillance infrastructure, merging authenticated identity with behavioral tracking. But even Facebook's tracking was limited to the web and its own platform. It couldn't see what you did in other apps. It couldn't see where you physically went.
The smartphone removed those limitations—and in doing so, collapsed the remaining distance between the three forms of digital identity: institutional credentials, user-created accounts, and behavioral shadow profiles.
In the early days of iOS, Apple assigned every device a Unique Device Identifier—the UDID, a permanent, hardware-level string that could not be changed or reset. It was the device's serial number, essentially, and it was accessible to any app that asked for it. Advertising networks seized on this immediately. The UDID became the mobile equivalent of DoubleClick's tracking cookie—except it was better in almost every way that mattered for building an identity. It couldn't be cleared like a browser cookie. It persisted across app reinstalls. It was the same identifier in every app on the device. And unlike a third-party cookie, which tracked you across websites, the UDID tracked you across apps. A fundamentally more intimate window into behavior, because people used apps for everything: banking, dating, health tracking, messaging, navigation.
To understand why this mattered for identity specifically, consider what each app revealed. A fitness app knew your exercise habits and health goals. A food delivery app knew your dietary preferences and home address. A dating app knew your relationship status and who you were attracted to. A navigation app knew where you went and when. A news app knew your political interests. Individually, each app saw one facet. But the UDID was the same in all of them. An ad network embedded across dozens of apps could do what no individual app could: collapse those separate facets into a single, unified identity profile. Not a profile the user chose to create—like an Amazon account or a Facebook page—but one assembled without their knowledge, from the residue of their daily life.
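A toy illustration of the join an embedded ad SDK could perform. The app names, fields, and identifier are all invented; the mechanism is nothing more than a dictionary keyed by a shared device ID:

```python
from collections import defaultdict

# Each app's embedded SDK reports events keyed by the same device identifier
# (here a hypothetical UDID-style string). No single app sees much; the
# network embedded in all of them sees everything.
profiles: dict[str, dict] = defaultdict(dict)

def ingest(device_id: str, app: str, facet: dict) -> None:
    """Merge one app's facet into the device's unified profile."""
    profiles[device_id].setdefault(app, {}).update(facet)

ingest("a1b2c3d4", "fitness_app", {"runs_per_week": 4})
ingest("a1b2c3d4", "food_delivery", {"home_zip": "94103"})
ingest("a1b2c3d4", "news_app", {"topics": ["politics", "tech"]})
# profiles["a1b2c3d4"] now unifies facets no individual app could assemble.
```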
This was the wave collapse happening at a deeper level than social login had achieved. Facebook Connect collapsed your identity by tying your real name to your activity across websites. The UDID collapsed your identity by tying your behavior—your movement through physical and digital space—into a single permanent record, whether you used Facebook or not.
Google's Android had a parallel story. In its early years, Android made device identifiers readily available to apps. Advertising and analytics SDKs harvested these identifiers to build cross-app behavioral profiles, just as their counterparts did on iOS. The identity implications were the same: a permanent, device-level identifier that unified a person's activity across every app they used into a single trackable profile.
The industry recognized, belatedly, that permanent hardware identifiers were a problem—both for privacy and for identity. A hardware identifier that could never be reset meant that a person's behavioral identity was irrevocable. Unlike a web account you could abandon, unlike a social login you could disconnect, a UDID was permanently fused to the device. Your behavioral identity followed you not because you chose it, but because the hardware made it inescapable.
Apple deprecated UDID access in 2012, replacing it with the IDFA—the Identifier for Advertisers—introduced in iOS 6. Google followed in 2013 with the Google Advertising ID (GAID, sometimes called AAID). Both were designed as a compromise: a consistent identifier that ad networks could use for tracking and attribution, but one that the user could reset or, in theory, opt out of. The shift was subtle but significant in identity terms: it moved the advertising identifier from something the device was (hardware identity) to something the device carried (a resettable credential). In theory, this gave users a measure of control over their behavioral identity—the ability to shed an old profile and start fresh.
In theory.
In practice, the advertising industry treated IDFA and GAID as permanent identifiers. Few users knew these identifiers existed, let alone that they could reset them. The "Limit Ad Tracking" toggle buried in iOS settings, or the equivalent on Android, was an opt-out mechanism in a world where attention and defaults determine everything. The vast majority of users never touched it. And even when a user did reset their advertising identifier, sophisticated ad networks could use fingerprinting—combining screen resolution, installed fonts, device model, OS version, carrier, IP address, and dozens of other signals—to re-identify the device anyway. The behavioral identity could be rebuilt from the digital equivalent of a person's gait, their accent, the unique pattern of how they moved through the world. You could throw away the name badge, but your walk gave you away.
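The mechanics are unsettlingly simple. A toy sketch, where the signals shown are a small invented subset of the dozens real networks combine:

```python
import hashlib
import json

def device_fingerprint(signals: dict) -> str:
    """Collapse ambient device signals into a stable identifier.

    None of these values is secret, and none resets when the user resets
    their advertising ID, which is exactly what makes the result so sticky.
    """
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Illustrative signals only; real networks combine many more.
print(device_fingerprint({
    "model": "Phone11,2", "os": "12.1.4", "carrier": "ExampleCell",
    "screen": "1125x2436", "tz": "America/Los_Angeles", "lang": "en-US",
}))
```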
This introduced something genuinely new in the history of digital identity: an identity that resisted destruction. Every previous form of digital identity in this series could be revoked. An administrator could delete your Unix account. A KDC could refuse to issue Kerberos tickets. A website could ban your account. An identity provider could disable your credentials. But a fingerprinted device profile, assembled from the unique combination of signals that your specific device emitted? That identity persisted whether anyone—including you—wanted it to. It was an identity you couldn't log out of.
The result was that by the early 2010s, the behavioral shadow profile had evolved into something qualitatively different from its web-based ancestor. The mobile behavioral identity knew not just what you browsed but what apps you used, how often you opened them, how long you spent in each one, what you bought through them, and—crucially—where you were when you did all of it.
Location was the dimension that transformed behavioral tracking into behavioral identity. A web cookie could tell an advertiser that you'd visited a travel website. A mobile advertising ID combined with GPS data could tell an advertiser that you'd visited a travel website, then drove to a car dealership, then spent forty minutes in a divorce attorney's office, then stopped at a pharmacy. The granularity of mobile location data—precise to a few meters, timestamped to the second, collected continuously whether or not you were actively using the phone—didn't just augment the behavioral profile. It connected the digital identity to a physical body moving through physical space. The gap between "anonymous behavioral profile" and "identifiable human being" that DoubleClick had tried to bridge by acquiring Abacus Direct in 1999—the merger that public outrage killed—was now closed automatically, continuously, by the device in your pocket.
On the desktop web, the three forms of digital identity—institutional credentials, user-created accounts, and behavioral shadow profiles—existed in separate systems, maintained by separate entities, rarely interacting. On mobile, they converged in a single device. Your Apple ID or Google account (institutional credential, issued by a platform with authority over your access). Your app-specific profiles and usernames (user-created accounts, scattered across dozens of services). Your IDFA or GAID and the location-enriched behavioral profile built from it (shadow profile, assembled without your participation). All three lived on the same device. All three were tied to the same person. The phone didn't just carry your identity—it was the intersection point where all three forms of identity met, merged, and became increasingly difficult to separate.
DoubleClick had needed to acquire a separate company to attach real names to anonymous browsing profiles. On mobile, the merger of identity and tracking was architectural. Your advertising identifier was correlated with your app store account, which was tied to your real name and credit card. The behavioral profile, the authenticated identity, and the institutional credential were all attributes of the same device—the device you never put down.
The Always-Authenticated Life
By 2012—five years after the iPhone's launch—the smartphone had rewritten the basic parameters of digital identity:
Authentication became persistent. You unlocked your phone once and stayed authenticated to dozens of services simultaneously. The session model of the desktop web—login, do something, session expires, login again—was replaced by ambient authentication. You were always logged in. Always recognized. Always you.
Identity became device-bound. Your phone carried your identity in a way that no previous device had. Lose your phone, and you didn't just lose a gadget—you lost access to your email, your bank, your messaging, your two-factor codes, your photos, your social connections. The device was the credential. Stealing someone's phone was, in identity terms, closer to stealing their wallet, their keys, and their address book simultaneously.
The identity provider became the device maker. Apple and Google didn't need federation protocols to become the world's most important identity providers. They owned the platforms. Every app, every service, every authentication flow ran on their operating systems, through their app stores, mediated by their accounts. The federation wars had been fought over who would control web login. The smartphone rendered that battle secondary. The real identity layer was below the web—in the device, in the operating system, in the account you created before you ever opened a browser.
The boundary between human and device blurred. When your phone authenticates to a cell tower, is that you authenticating? When your phone checks for email in the background, is that your identity making a request? The clean model from the 1960s—a human sits at a terminal, types credentials, is authenticated—was dissolving. The device was acting on your behalf, continuously, autonomously, using credentials you'd provided once and then forgotten about.
Tracking became identity. The behavioral shadow profile evolved from an inferred web-browsing pattern into a persistent, location-aware, cross-app identity that was in some ways more comprehensive than any identity the user had consciously created. Mobile advertising identifiers—Apple's IDFA, Google's GAID—and the fingerprinting techniques that survived their resets gave the advertising industry a form of identity that the user couldn't revoke, couldn't fully control, and often didn't know existed. The third path of digital identity—the profile built from you rather than by you—had become the most detailed portrait of a human being ever assembled. And unlike every other identity in this series, it was one you couldn't log out of.
This blurring—between person and device, between chosen identity and imposed identity, between authentication and surveillance—would only accelerate. But first, the smartphone had one more transformation to deliver—one that would reach deeper into the question of identity than anything in the previous sixty years of computing.
The password, that ancient invention from CTSS, was about to meet the one thing it couldn't compete with: your own body.
The Oldest Authentication Factor
Before we trace how biometrics conquered the smartphone, it's worth recognizing that biometric authentication isn't new. It's the oldest form of identity verification humans have ever used.
For most of human history, identity was biometric. You recognized people by their face, their voice, their gait, the way they carried themselves. A medieval merchant didn't ask for a password or a certificate—he recognized his trading partners by sight. A village didn't need an identity infrastructure because everyone knew everyone. The face was the credential. The voice was the authentication factor. The community was the trust framework.
What changed with the digital era wasn't the concept of recognizing someone by their physical characteristics. What changed was the distance between the person and the system doing the recognizing. CTSS needed passwords because the computer couldn't see who was sitting at the terminal. Kerberos needed tickets because the server couldn't see who was sending the request. The web needed cookies because HTTP had no memory of who had visited before. Every authentication mechanism in this series—every password, every certificate, every token, every signed assertion—exists because digital systems can't do the thing that humans do instinctively: look at someone and know who they are.
Biometric authentication was always, in a sense, the attempt to give machines that instinct.
The technology had existed in specialized contexts for decades. Law enforcement had used automated fingerprint identification systems since the 1980s. Government facilities used hand geometry scanners. Intelligence agencies experimented with iris recognition. But these were niche, expensive, institutional deployments—the biometric equivalent of the mainframe era. Specialized hardware, trained operators, controlled environments, small user populations.
The smartphone changed all of that. Not by inventing biometric authentication, but by putting the sensor in everyone's pocket.
Touch ID: The Fingerprint Meets the Consumer
On September 10, 2013, Apple announced the iPhone 5S. It had a faster processor, a better camera, and a fingerprint sensor embedded in the home button. The sensor wasn't built from scratch—a year earlier, Apple had quietly acquired AuthenTec, a fingerprint technology company, specifically to get its hands on the technology. Apple called the result Touch ID.
The technology wasn't revolutionary. Fingerprint sensors had existed on laptops for years—Lenovo ThinkPads, HP EliteBooks, various Toshiba and Dell models had all offered them. They were universally ignored. The sensors were finicky, the software was clunky, enrollment was tedious, and failure rates were high enough that most users tried the fingerprint reader once, watched it fail, and went back to typing their password.
What Apple did differently wasn't the sensor. It was the integration.
Touch ID wasn't an optional peripheral bolted onto the side of the device. It was the home button—the single most-pressed piece of hardware on the phone. You didn't go out of your way to use it. You used it every time you picked up the phone, because pressing the home button was already how you woke the device up. The fingerprint scan happened in the gesture you were already making. Authentication became invisible—folded into a motion so natural that users barely registered it as a security event.
The identity implications were profound, and they operated on several levels simultaneously.
At the most basic level, Touch ID replaced the PIN code or passcode that most users either set to something trivial (1234, 0000, their birth year) or didn't set at all. Before Touch ID, surveys consistently showed that a significant percentage of smartphone users had no lock screen protection whatsoever. The friction of typing a code dozens of times a day—every time you wanted to check a notification, glance at a message, look something up—was enough to make many people leave their most personal device completely unprotected. Their digital identity was secured by... nothing. Anyone who picked up the phone was, to every app and service on it, indistinguishable from the owner.
Touch ID changed the calculus. The friction dropped to near zero. Press the button you were already pressing, and you were authenticated. The percentage of iPhone users with lock screen protection jumped dramatically after Touch ID's introduction. Not because people suddenly cared more about security, but because security had finally stopped asking them to care. It just happened.
At a deeper level, Touch ID changed what kind of authentication factor protected digital identity. Everything in this series up to this point had been "something you know"—a password, a PIN, a pattern drawn on screen. Touch ID was "something you are." The distinction, which security textbooks had discussed for decades, suddenly mattered in practice for billions of people.
A password can be shared. You can tell someone your Netflix password, your phone PIN, your email credentials. People do this constantly—with partners, with children, with coworkers. The identity system can't tell the difference between you entering your password and someone else entering your password. Whoever knows the secret is you, as far as the system is concerned. This was the fundamental weakness when Allan Scherr stole the CTSS password file: the system couldn't distinguish between Scherr-being-Scherr and Scherr-being-someone-else.
A fingerprint can't be shared the same way. You can't dictate your fingerprint over the phone. You can't write it on a sticky note. You can't accidentally reuse it across sites (it's the same fingerprint everywhere, but the enrollment is per-device and per-sensor). The authentication factor is physically bound to a specific human body.
This was, in identity terms, the closest digital authentication had ever come to the pre-digital ideal: proving you are who you are by being who you are, rather than by knowing a secret or possessing a token.
But that closeness came with a new kind of vulnerability.
The Irrevocability Problem
A password, for all its weaknesses, has one crucial property: you can change it. If your password is compromised—stolen in a breach, guessed by an attacker, observed over your shoulder—you reset it. The old credential is destroyed. The new credential has no relationship to the old one. The damage is contained and reversible.
You can't change your fingerprint.
This asymmetry—between the revocability of knowledge-based credentials and the permanence of biometric ones—was theoretical for most of computing history. It became very real on June 4, 2015, when the United States Office of Personnel Management disclosed that it had been breached.
The OPM breach was staggering in scope. Attackers—later attributed to a Chinese state-sponsored group—had exfiltrated personnel records for 21.5 million current and former federal employees and contractors. The stolen data included Social Security numbers, addresses, employment histories, and the detailed background investigation files used for security clearances.
And it included 5.6 million fingerprint records.
The background investigation files were intimate enough—they contained the kind of personal information (financial troubles, relationship history, foreign contacts, substance use) that made the individuals vulnerable to blackmail and coercion. But the fingerprint records added a dimension that no password breach had ever created: a permanently compromised biometric.
Every previous breach in this series—from Scherr's CTSS password theft to the DoubleClick tracking profiles—involved credentials or data that could theoretically be changed, reset, or abandoned. New passwords could be issued. Accounts could be closed. Even behavioral profiles, in principle, could be disrupted by changing habits or devices.
The 5.6 million people whose fingerprints were stolen in the OPM breach had no such option. Their biometric identity was compromised permanently. Not for one system or one service, but for any system that would ever use fingerprint authentication for the rest of their lives. The credential and the person were the same thing, and you can't reset a person.
The Design Decision That Mattered Most
Apple's engineers understood this risk, and their response was arguably the most consequential identity architecture decision of the smartphone era.
Touch ID's biometric data never leaves the device.
When you enroll your fingerprint, the sensor captures an image and converts it into a mathematical representation—a template. That template is encrypted and stored in a dedicated security chip called the Secure Enclave—a physically isolated processor within the iPhone's system-on-chip, with its own encrypted memory, its own secure boot process, and no direct pathway for the main processor to read its contents.
When you press your finger to the sensor, a new scan is captured and compared to the stored template. The comparison happens entirely within the Secure Enclave. The result—match or no match—is the only information that leaves the enclave. Not the template. Not the scan. Not any biometric data whatsoever. Apple's servers never see your fingerprint. Apps never see your fingerprint. The operating system itself never sees your fingerprint.
This architecture was a direct response to the identity risks that centralized biometric storage created. In the OPM model, biometrics were collected, transmitted, and stored in a central database—a honeypot that, when breached, compromised millions of people's permanent biological identifiers simultaneously. In Apple's model, each device was its own biometric silo. Compromising Apple's servers would yield zero fingerprint data. Compromising one iPhone would yield one user's template, stored in a format usable only by that specific device's Secure Enclave.
The decision had identity implications beyond security. By keeping biometric data local, Apple ensured that the fingerprint functioned as a device-level authentication factor, not a network-level identity. Touch ID didn't prove to Apple's servers that you were Jane Smith. It proved to the device in your hand that the person pressing the button was the same person who enrolled their finger. The device then used conventional credentials—tokens, session keys, certificates—to authenticate to remote services on your behalf.
This was a subtle but important distinction. The biometric verified the human-to-device relationship. Cryptographic credentials verified the device-to-service relationship. Your fingerprint unlocked the keys; the keys opened the doors. The biometric was the local gatekeeper, not the networked credential.
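A toy model of that division of labor (emphatically not Apple's implementation, just its shape): exact byte equality stands in for probabilistic matching, and an HMAC key stands in for the device credential.

```python
import hashlib
import hmac
import secrets

class ToyEnclave:
    """Illustrative only: template and key never leave this object."""

    def __init__(self, enrolled_template: bytes):
        self._template = enrolled_template          # biometric stays local
        self._device_key = secrets.token_bytes(32)  # device-bound credential

    def authorize(self, scan: bytes, server_challenge: bytes) -> bytes | None:
        # Real matching is probabilistic; equality stands in for it here.
        if not hmac.compare_digest(scan, self._template):
            return None  # only match / no-match ever escapes
        # On a match, sign the server's challenge with the device key.
        # The server sees proof of the key, never any biometric data.
        return hmac.new(self._device_key, server_challenge, hashlib.sha256).digest()
```

The biometric gates the key; the key answers the network.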
Android took a more fragmented approach, as Android always does. The Trusted Execution Environment (TEE) provided a conceptually similar secure area for biometric processing, but implementation quality varied wildly across the hundreds of manufacturers building Android devices. Some matched Apple's security architecture closely, pairing dedicated hardware with secure software processing. Others relied on software alone, or cut corners in ways that quietly weakened the security guarantees. The Android ecosystem's diversity—its great strength for market reach—was a liability for consistent biometric security. A fingerprint sensor on a flagship Samsung Galaxy had very different security properties than one on a budget handset from a lesser-known manufacturer.
Face ID: Authentication Becomes Invisible
In September 2017, Apple eliminated the home button entirely. The iPhone X replaced Touch ID with Face ID—a front-facing sensor array that included an infrared camera, a flood illuminator, and a dot projector that mapped the three-dimensional geometry of the user's face.
If Touch ID had made authentication nearly invisible by embedding it in a gesture users were already making, Face ID completed the disappearing act. You didn't press anything. You didn't touch anything. You picked up the phone and looked at it—the thing you were going to do anyway—and the device recognized you.
Authentication, for the first time in the history of computing, required no conscious action by the user at all.
On CTSS, authentication was an explicit, deliberate event: type your credentials, wait for verification, proceed. Each successive generation reduced the friction—from long passwords to short PINs, from PINs to fingerprint presses, from fingerprint presses to a glance. Face ID didn't just reduce the friction further. It eliminated the concept of a "login event" entirely. There was no moment where authentication happened. It was just... always happening, every time it needed to, without asking anything of the user.
The identity implications were layered.
At the surface: convenience. The phone recognized you and unlocked. Apps that supported Face ID—banking apps, password managers, payment systems—could authenticate you with the same glance. The user experience was seamless in a way that previous generations of authentication never approached.
Beneath the surface: a shift in what authentication felt like, even if the underlying model hadn't fundamentally changed. Previous authentication mechanisms were unmistakably events—you typed a password, you pressed a finger to a sensor, you entered a code. You knew you were authenticating because the system made you do something. Face ID removed that awareness. You picked up the phone, you looked at it, and you were in. The authentication still happened at discrete checkpoints—when you unlocked the device, when you opened a banking app, when you confirmed a payment—but those checkpoints became so seamless that they stopped registering as security events in the user's mind.
Face ID was a checkpoint, not a constant sentry. It verified your identity at specific moments, not continuously. But the checkpoints were so fast, so frictionless, and so woven into the natural flow of using the device that they created the experience of being perpetually recognized. The phone didn't watch you constantly—but it recognized you instantly, every time you needed it to, without asking you to do anything beyond what you were already doing.
The distinction matters technically. True continuous authentication—a system that constantly monitors whether the person using the device is still the authorized user—was being explored in research labs and would later appear in enterprise security products evaluating behavioral signals like typing cadence and movement patterns. Face ID wasn't that. It was discrete authentication with near-zero friction, which is a different thing. But for the hundreds of millions of people using it daily, the practical effect was similar: the phone knew them, effortlessly, every time it mattered.
And at the deepest level: a philosophical shift in the relationship between identity and the body. Touch ID had used a fingerprint—a physical characteristic, but one that required deliberate action (pressing a finger to a sensor). Face ID used the face—the most fundamental marker of human identity, the thing we've used to recognize each other for the entire history of the species. The face is how humans naturally identify each other. It's the first thing a newborn learns to recognize. It's the image on your passport, your driver's license, your employee badge. It's you in the most intuitive, pre-technological sense.
By making the face the authentication credential, Apple had completed a circle. Digital identity started because computers couldn't see who was using them—they needed proxies like passwords and tokens to stand in for the physical recognition that humans do instinctively. Face ID gave the computer the ability to do what humans had always done: look at someone and know who they were.
But that capability came with new tensions.
Biometrics as Probabilistic Identity
Every authentication mechanism before biometrics was deterministic. A password either matched or it didn't. A cryptographic signature either verified or it didn't. A Kerberos ticket was either valid or it wasn't. There was no ambiguity, no "close enough." The answer was binary: yes or no, authenticated or denied.
Biometric authentication is fundamentally different. It's probabilistic.
No two fingerprint scans are identical. Your finger is at a slightly different angle each time. Moisture varies. Pressure varies. Skin condition changes. The sensor captures a slightly different image with every press. The same is true for faces—lighting changes, angles shift, you age, you grow a beard, you put on sunglasses, you gain or lose weight.
The biometric system doesn't check for an exact match. It calculates a similarity score between the current scan and the enrolled template, then compares that score to a threshold. Above the threshold: match. Below: no match. Authentication becomes a question of probability, not certainty.
This introduces two types of errors that deterministic systems never had to contend with:
False rejection (the system doesn't recognize you): your finger is too wet, the lighting is too harsh, you're wearing a hat you weren't wearing when you enrolled. The legitimate user is denied access. Annoying, but not catastrophic—you fall back to the PIN.
False acceptance (the system recognizes someone who isn't you): someone whose fingerprint or face is similar enough to yours passes the threshold. A stranger is granted your identity. This is the dangerous one.
Apple published specific numbers: Touch ID's false acceptance rate was 1 in 50,000. Face ID improved this to 1 in 1,000,000. These numbers were good enough for consumer authentication—dramatically better than the odds of someone guessing a four-digit PIN (1 in 10,000). But they represented something conceptually new: authentication with a known, nonzero error rate. The system was designed to be wrong sometimes. It was engineered to be wrong rarely enough that the convenience justified the risk.
The system wasn't saying "this is Jane Smith." It was saying "we are 99.9999% confident that this is Jane Smith." The distinction almost never mattered in practice. But it meant that identity, at the most fundamental technical level, now had a margin of error built into it. Authentication had gone from a lock that either opens or doesn't to a bouncer making a judgment call—a very good bouncer, a very fast judgment call, but a judgment call nonetheless.
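A small simulation makes the tradeoff concrete. The score distributions below are invented; the point is only the shape of the relationship: raise the threshold and false accepts fall while false rejects rise.

```python
import random

def far_frr(threshold: float, trials: int = 100_000) -> tuple[float, float]:
    """Estimate error rates for a toy score model (made-up distributions)."""
    genuine  = (min(1.0, random.gauss(0.92, 0.05)) for _ in range(trials))
    impostor = (max(0.0, random.gauss(0.30, 0.15)) for _ in range(trials))
    frr = sum(s <  threshold for s in genuine)  / trials   # you, rejected
    far = sum(s >= threshold for s in impostor) / trials   # stranger, accepted
    return far, frr

for t in (0.5, 0.7, 0.9):
    far, frr = far_frr(t)
    print(f"threshold={t}: FAR={far:.4%}  FRR={frr:.2%}")
```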
The Liveness Problem
Probabilistic matching wasn't the only new challenge biometrics introduced. There was also the question of liveness—whether the biometric sample was coming from a living person or a reproduction.
A password doesn't have this problem. The string of characters "correct-horse-battery-staple" is the same whether a human types it or a script sends it. There's no "fake" version of a correct password. But a fingerprint can be replicated in silicone. A face can be presented as a photograph. An iris can be printed at high resolution.
The biometric authentication system has to answer not just "does this match the enrolled template?" but "is this coming from a real, present, living human being?"
Touch ID addressed this through capacitive sensing—the sensor measured the electrical properties of living skin, which differ from silicone, gelatin, or other materials used to create fake fingerprints. Face ID addressed it through depth mapping—the dot projector created a three-dimensional map of the face, which a flat photograph couldn't replicate—and through attention detection: the infrared camera checked that the user's eyes were open and directed at the device, something a static mask or printed photo can't convincingly reproduce.
These were engineering solutions to a problem that hadn't existed in the password era. They worked well—well enough for consumer security. But they represented yet another layer of complexity in the authentication stack.
The Two Paths of Biometric Identity
Apple's Secure Enclave model—biometrics processed locally, never transmitted, never stored centrally—was one answer to the question of how biometric identity should work. But it wasn't the only answer being deployed in the world.
While Apple was building a device-local biometric architecture, governments were building centralized ones.
India's Aadhaar program, launched in 2009, was the most ambitious. The Unique Identification Authority of India (UIDAI) set out to issue a unique twelve-digit identity number to every resident of India—over a billion people. Enrollment required submitting ten fingerprints, two iris scans, and a facial photograph, all stored in a central database. The biometric data wasn't just for authentication—it was the identity. In a country where hundreds of millions of people lacked formal identity documents, Aadhaar used the body itself as the credential.
The scale was unprecedented. By the early 2020s, over 1.3 billion people had been enrolled. Aadhaar became the backbone of identity verification for banking, tax filing, mobile phone registration, government benefits, and dozens of other services. It brought formal identity to hundreds of millions of people who had never had it.
It also created the largest centralized biometric database in human history—and all the risks that centralization entails. Privacy advocates raised alarms. The Indian Supreme Court weighed in, ruling in 2018 that Aadhaar was constitutional but imposing limits on mandatory use. Security researchers identified vulnerabilities. The fundamental tension remained: a centralized biometric database enables identity at population scale but creates a single point of catastrophic failure. If Aadhaar's biometric database were breached the way OPM's was, the consequences would be measured not in millions but in billions—and the compromised credentials could never be reissued.
Estonia's digital identity program took a middle path—using smart cards with cryptographic certificates rather than centralized biometrics. The biometric data (a photo and fingerprints) was stored on the card itself, not in a central database. The card served as a local trust anchor, similar in philosophy to Apple's Secure Enclave, but issued by the state rather than a device manufacturer.
These three models—Apple's device-local approach, India's centralized database, Estonia's smart card architecture—represented fundamentally different answers to the same question: where should biometric identity live?
In the device, controlled by the user, invisible to the network? In a central database, controlled by the government, accessible for verification? On a portable credential, controlled by the state but held by the citizen?
Each choice carried trade-offs. Centralization enabled scale and inclusion but created catastrophic breach risk. Device-local processing protected privacy but fragmented identity across devices and platforms. Smart cards balanced control but required physical infrastructure and could be lost or stolen.
The password verified your identity through something you knew. The biometric verified your identity through something you were. But in both cases, the fundamental pattern held: the system needed a way to distinguish the authorized user from everyone else. The mechanism changed. The challenge didn't.
And a new mechanism was emerging that would attempt to address what biometrics alone couldn't: a way to eliminate passwords entirely, not just at the device level but across the network, using the smartphone's biometric and security hardware as the foundation for a fundamentally different authentication architecture.
The FIDO Alliance had been working on this since 2012. By the late 2010s, their work was ready for the world.
The Password's Sixty-Year Reign
By the mid-2010s, the password had survived every attempt to replace it.
This is worth pausing on, because the persistence of the password is one of the most remarkable facts in the history of technology. Fernando Corbató introduced passwords to CTSS around 1961. More than half a century later, they remained the dominant authentication mechanism on the internet—despite being hated by users, distrusted by security professionals, and responsible for the majority of account compromises.
The industry had tried to fix passwords. It had tried hashing them, salting them, requiring complexity rules, mandating regular rotation, layering second factors on top of them. It had tried to reduce them through federation—single sign-on, social login, SAML, OpenID Connect. It had tried to supplement them with biometrics—fingerprints and faces that unlocked devices, which then released password-derived credentials to remote services.
None of this eliminated the password. It just pushed it around. Federation reduced the number of passwords but made each remaining one more critical—a compromised Google password now meant compromised access to every service you'd connected through Google. Biometrics replaced the password at the device level but left it intact at the network level—somewhere in the stack, a shared secret was still being transmitted, still being stored, still being stolen. MFA added a second factor but kept the password as the first one—you still typed your password, you just also typed a code from your phone.
The password endured because it had one property that no alternative could match: universality. It required nothing from the user except memory. It required nothing from the server except a database. It required no special hardware, no biometric sensor, no cryptographic infrastructure, no pre-established trust relationship. Any developer could implement password authentication in an afternoon. Any user could create a password in seconds.
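That "afternoon" claim is barely an exaggeration. A minimal sketch using only the standard library, with illustrative parameters:

```python
import hashlib
import hmac
import os

def hash_password(pw: str) -> bytes:
    """Store a salted, slow hash, never the password itself (see Article 2)."""
    salt = os.urandom(16)
    return salt + hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 600_000)

def verify_password(pw: str, stored: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 600_000)
    return hmac.compare_digest(digest, candidate)
```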
Every superior alternative demanded something extra—a hardware token, a biometric sensor, a certificate authority, a compatible browser, a specific operating system. And every time the alternative demanded something extra, the password won by demanding nothing.
The FIDO Alliance (Fast IDentity Online) set out to break this pattern. Not by building a better password, but by building something that was as easy as a password for users and developers—while being fundamentally, architecturally, cryptographically impossible to phish, steal, or reuse.
The Founding Insight
The FIDO Alliance was founded in 2012 by a group of companies—PayPal, Lenovo, Nok Nok Labs, Validity Sensors, Infineon, and Agnitio—that had each been working on pieces of the authentication problem independently. What brought them together was a shared diagnosis: the password wasn't just inconvenient. It was architecturally broken, and no amount of layering could fix it.
The diagnosis was specific. Passwords fail because of what they are: shared secrets. When you create a password on a website, both you and the server know the secret. You prove your identity by transmitting the secret (or a derivative of it) across the network, and the server verifies it against its stored copy.
This architecture has three fatal properties:
The secret is transmitted. Every time you log in, your password (or its hash) crosses the network. It can be intercepted in transit, captured by a man-in-the-middle, or harvested by a phishing site that looks exactly like the real one. TLS protects the connection, but if the user is tricked into connecting to the wrong server—which is exactly what phishing achieves—TLS protects the connection to the attacker.
The secret is stored on the server. The server must keep a copy of your password (or its hash) to verify it. This creates the breach honeypot—a database of millions of credentials that, if stolen, compromises every account at once. The hashing and salting techniques from Article 2 make stolen hashes harder to crack, but not impossible. And some services, even decades after the lesson should have been learned, still store passwords in plaintext.
The secret is reusable. A password stolen from one site works on any other site where the user chose the same password. And users reuse passwords constantly—not because they're foolish, but because remembering hundreds of unique, complex passwords is a task that exceeds normal human cognitive capacity. Credential stuffing attacks—automated attempts to use stolen username/password pairs from one breach against thousands of other sites—succeed at alarming rates precisely because password reuse is universal.
FIDO's founding insight was that all three properties stem from the same root: the authentication model is based on shared secrets. Fix the model—replace shared secrets with public key cryptography—and all three problems disappear simultaneously.
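The difference between the two models is easiest to see side by side. Here is a minimal sketch using the browser's Web Crypto API; the function names and the flow are illustrative, not any particular product's implementation:

```typescript
// Shared-secret model: both sides hold the secret; the server compares.
// Anything that intercepts the transmitted secret can replay it anywhere.
function sharedSecretLogin(submitted: string, storedSecret: string): boolean {
  return submitted === storedSecret; // the secret itself is the credential
}

// Public-key model: the client proves possession of a private key by
// signing a random challenge; the server only ever holds the public key.
async function publicKeyDemo(): Promise<void> {
  // Generated client-side; the private key is marked non-extractable.
  const keys = await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false, // not extractable: the private key can never leave this context
    ["sign", "verify"],
  );

  // Server side: issue a fresh random challenge (prevents replay).
  const challenge = crypto.getRandomValues(new Uint8Array(32));

  // Client side: sign the challenge. No secret crosses the network.
  const signature = await crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    keys.privateKey,
    challenge,
  );

  // Server side: verify with the stored public key.
  const ok = await crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    keys.publicKey,
    signature,
    challenge,
  );
  console.log(ok); // true: identity proven, nothing secret ever shared
}
```

In the first model, a breach of the server or a convincing phishing page yields the credential itself. In the second, there is nothing to yield.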
Public key cryptography had been available since the 1970s. The idea of using it for authentication wasn't new. Client certificates in TLS did exactly this—the user held a private key, the server held the corresponding public key, and authentication happened through a cryptographic challenge that proved possession of the private key without transmitting it. The technology existed. It worked. It had been standardized for decades.
And virtually nobody used it. Client certificates were difficult to obtain, confusing to manage, impossible to explain to ordinary users, and incompatible with the way people actually used the web. The technology was sound. The user experience was a disaster.
FIDO's bet was that the smartphone had changed the equation. Specifically, that two things now existed that hadn't existed when client certificates failed:
First: a hardware security module in everyone's pocket. The Secure Enclave (Apple) and Trusted Execution Environment (Android) gave every smartphone a tamper-resistant place to generate and store private keys. No smart card reader needed. No USB dongle. No software installation. The secure hardware was already there, in a device people already carried.
Second: a biometric user interface. Touch ID and Face ID gave users a way to authorize cryptographic operations without understanding that cryptography was involved. "Press your finger to approve" or "look at your phone to confirm" was a user experience that anyone could follow. The private key could be unlocked by a biometric gesture instead of a passphrase, making the process simpler than typing a password—not just as simple, but simpler.
The combination was the breakthrough: the crypto that made passwords unnecessary, housed in hardware that everyone already owned, unlocked by a gesture that everyone already understood.
FIDO U2F: The Second Factor (2014)
FIDO's first standard, Universal 2nd Factor (U2F), was published in 2014. It was deliberately modest—not a password replacement, but a second factor designed to work alongside passwords.
The reason for starting with the second factor rather than eliminating passwords entirely was pragmatic. The web's infrastructure—login forms, password databases, session management—was built around passwords. Ripping out the entire system at once would require coordinated changes from every browser, every server, every website. Starting with a second factor let FIDO prove the technology without requiring anyone to abandon existing authentication flows.
A U2F security key—the YubiKey was the most prominent example—was a small USB (later NFC or Bluetooth) device containing a secure element capable of generating and storing cryptographic key pairs.
Here's how it worked:
Registration: You visit a website that supports U2F—say, Google. You plug in your security key and press the button on it (or tap it to your phone's NFC reader). The key generates a new public/private key pair, unique to that site. It sends the public key to Google. Google stores it, associated with your account. The private key never leaves the security key.
Authentication: Next time you log in, you type your password (the first factor). Google sends a cryptographic challenge to your browser. Your browser passes it to the security key. You press the button. The key signs the challenge with the private key it generated for Google—and only for Google—and sends the signed response back. Google verifies the signature with the stored public key. If it matches, you're in.
The critical property: the key pair was origin-bound. The key generated for google.com could only be used on google.com. If a phishing site at g00gle.com (zeros instead of the letter O) tried to initiate the same challenge, the security key wouldn't find a key pair registered for that origin and would refuse to sign. Phishing didn't just fail—it was cryptographically impossible. The protocol didn't rely on the user to notice that the URL looked wrong. It relied on mathematics.
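A toy model of that origin binding, assuming the authenticator keeps one key pair per registered origin (real U2F hardware typically derives per-origin keys from an internal master secret rather than storing a table, but the refusal behaves the same way):

```typescript
// Hypothetical authenticator state: one key pair per registered origin.
const keyStore = new Map<string, CryptoKeyPair>();

async function register(origin: string): Promise<CryptoKey> {
  const keys = await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false, // private key never leaves the authenticator
    ["sign", "verify"],
  );
  keyStore.set(origin, keys); // bind the pair to this origin only
  return keys.publicKey;      // only the public key goes to the site
}

async function signChallenge(
  origin: string, // supplied by the browser, not by the page
  challenge: Uint8Array,
): Promise<ArrayBuffer> {
  const keys = keyStore.get(origin);
  if (!keys) {
    // "g00gle.com" has no registered key pair: phishing fails here,
    // with no judgment call required from the user.
    throw new Error(`no credential registered for ${origin}`);
  }
  return crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    keys.privateKey,
    challenge,
  );
}
```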
This was the difference between U2F and every previous second factor. SMS codes could be intercepted (SS7 vulnerabilities). TOTP codes from authenticator apps could be phished—a fake Google login page could ask for your code and relay it to the real Google in real time. Even push notifications could be social-engineered through "MFA fatigue" attacks, where attackers trigger so many approval prompts that the exhausted user eventually taps "approve" just to make them stop.
U2F was immune to all of these. The private key never left the device. The challenge-response was bound to the legitimate origin. No code was transmitted that could be intercepted or relayed. The user's only role was pressing a physical button to confirm presence—even if they were tricked into visiting a phishing site, the key would refuse to authenticate because the origin didn't match.
Google deployed U2F internally beginning in early 2017, requiring all employees to use security keys. The reported result: zero successful phishing-based account takeovers from that point forward. Zero. For a company that had been a constant target of sophisticated phishing campaigns, this was extraordinary.
But U2F was still a second factor. The password remained. And U2F required a dedicated hardware device—a YubiKey or similar token—that users had to purchase, carry, and manage. For security-conscious organizations and technically sophisticated individuals, this was a minor inconvenience. For the general population, it was a barrier that would prevent mainstream adoption.
The next step was to eliminate both the password and the separate hardware requirement.
FIDO2 and WebAuthn: The Password Killer
In 2018, the FIDO Alliance and the World Wide Web Consortium (W3C) published the WebAuthn specification—formally, "Web Authentication: An API for accessing Public-Key Credentials." It was the browser-side component of a larger framework called FIDO2, which also included a device communication protocol called CTAP (Client to Authenticator Protocol).
WebAuthn extended the U2F model from a second factor to a complete authentication system. The password wasn't supplemented. It was replaced.
The flow, for a user, looked like this:
Registration: You visit a website that supports WebAuthn. Instead of choosing a password, you're prompted to register a credential. Your device—a laptop with a fingerprint reader, a phone with Face ID, or a plugged-in security key—generates a new public/private key pair for that site. You authorize the key generation with a biometric gesture or a device PIN. The public key goes to the website. The private key stays in the device's secure hardware.
Authentication: Next time you visit, the site sends a challenge. Your device prompts you for a biometric (touch your fingerprint sensor, look at Face ID) or a local PIN. If you confirm, the device signs the challenge with the private key. The site verifies the signature with the stored public key. You're in. No password entered, no password transmitted, no password stored on the server.
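For a developer, that entire exchange surfaces as two browser calls. A minimal sketch, assuming a relying party at example.com and stubbing out the values a real server would supply:

```typescript
const enc = new TextEncoder();

// Registration: ask the platform authenticator for a new credential.
async function registerCredential(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // from server
      rp: { id: "example.com", name: "Example" },
      user: {
        id: enc.encode("user-1234"), // opaque handle chosen by the server
        name: "alice@example.com",
        displayName: "Alice",
      },
      // -7 is ES256 (ECDSA over P-256), the most widely supported algorithm
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],
      authenticatorSelection: { userVerification: "required" },
    },
  });
}

// Authentication: the device prompts for a biometric or PIN,
// then returns an assertion signed with the site-specific private key.
async function authenticate(): Promise<Credential | null> {
  return navigator.credentials.get({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // from server
      rpId: "example.com",
      userVerification: "required",
    },
  });
}
```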
The three fatal properties of passwords—transmitted secrets, server-stored secrets, reusable secrets—were eliminated in a single stroke.
Nothing secret is transmitted. The private key stays in the device. What travels across the network is a signed challenge-response—useless to an interceptor because it can't be reused and doesn't reveal the key.
Nothing secret is stored on the server. The server holds only public keys. If the server is breached, the attacker gets a collection of public keys—mathematically useless for impersonating users. There is no "password database" to steal, no hashes to crack, no credential stuffing attacks to mount.
Nothing is reusable across sites. Each site gets a unique key pair. Compromising the credential for one site reveals nothing about credentials for any other site. The entire concept of "credential reuse"—the enabler of nearly every large-scale account takeover—ceases to exist.
And phishing remained cryptographically impossible, just as with U2F. The credential was bound to the origin. A fake site couldn't trigger authentication for the real site. The protection was in the protocol, not in the user's vigilance.
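On the server side, verification reduces to a few checks against fields the browser itself wrote. A simplified sketch of the relying-party logic; production code must also validate authenticatorData flags and signature counters, and convert the DER-encoded signature into the raw form Web Crypto expects:

```typescript
interface Assertion {
  clientDataJSON: Uint8Array;    // includes type, challenge, and origin
  authenticatorData: Uint8Array;
  signature: Uint8Array;
}

async function verifyAssertion(
  assertion: Assertion,
  expectedChallenge: string,  // base64url, as issued for this session
  storedPublicKey: CryptoKey, // the only credential the server keeps
): Promise<boolean> {
  const clientData = JSON.parse(
    new TextDecoder().decode(assertion.clientDataJSON),
  );

  // The browser, not the page, wrote these fields, so a phishing site
  // cannot forge an assertion that claims the legitimate origin.
  if (clientData.type !== "webauthn.get") return false;
  if (clientData.challenge !== expectedChallenge) return false;
  if (clientData.origin !== "https://example.com") return false;

  // The signature covers authenticatorData || SHA-256(clientDataJSON).
  const clientDataHash = new Uint8Array(
    await crypto.subtle.digest("SHA-256", assertion.clientDataJSON),
  );
  const signedBytes = new Uint8Array([
    ...assertion.authenticatorData,
    ...clientDataHash,
  ]);

  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    storedPublicKey,
    assertion.signature,
    signedBytes,
  );
}
```

A breached server leaks nothing here: storedPublicKey is public by definition, and each challenge is single-use.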
WebAuthn was the realization of a vision that the cryptographic community had held since the 1970s: authentication through public key cryptography, where the verifier never learns the secret. It finally made that vision accessible—not by simplifying the cryptography (which remained sophisticated), but by hiding it entirely behind a fingerprint touch or a glance at a camera.
Browser support rolled out rapidly. Chrome, Firefox, Edge, and Safari all implemented WebAuthn by 2019. The major identity platforms—Google, Microsoft, Apple—added support on their services. Enterprise identity providers—Okta, Ping Identity, Azure AD—integrated WebAuthn into their authentication flows.
The technology worked. The cryptography was sound. The user experience was, for the first time, genuinely simpler than passwords.
And yet, adoption was slow. Because WebAuthn had a practical problem that threatened to undermine everything: the private key was stuck in the device that created it.
The Recovery Problem
If your private key lives in your phone's Secure Enclave, and you lose your phone, your credentials are gone.
This wasn't like losing a password. A forgotten password can be reset through email, SMS, security questions, or customer support. The password is an arbitrary secret—destroy it, create a new one, move on. But a WebAuthn credential is a cryptographic key pair. The private key in your lost phone's Secure Enclave cannot be extracted, cannot be recovered, cannot be reconstructed. It's gone. For every site where that key was your only credential, you're locked out.
The FIDO2 specification recommended registering multiple authenticators—a phone and a security key, for instance—so that losing one didn't lock you out entirely. Sensible advice, rarely followed. Most people had one phone. They weren't going to buy a backup security key "just in case." Some services issued recovery codes at enrollment—a set of one-time-use backup codes to be printed out and stored somewhere safe. In practice, most people lost them, forgot they existed, or never wrote them down at all.
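A sketch of that recovery-code pattern, assuming random hex codes hashed with SHA-256 before storage; actual schemes vary by service, but the principle mirrors password hashing: the server keeps only what it cannot reverse:

```typescript
// Issue one-time recovery codes at enrollment. The plaintext codes are
// shown to the user exactly once; the server stores only their hashes.
async function issueRecoveryCodes(
  count = 10,
): Promise<{ codes: string[]; hashes: string[] }> {
  const toHex = (bytes: Uint8Array) =>
    Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join("");

  const codes: string[] = [];
  const hashes: string[] = [];
  for (let i = 0; i < count; i++) {
    // 80 bits of randomness: enough entropy that a plain hash suffices.
    const code = toHex(crypto.getRandomValues(new Uint8Array(10)));
    const digest = await crypto.subtle.digest(
      "SHA-256",
      new TextEncoder().encode(code),
    );
    codes.push(code);
    hashes.push(toHex(new Uint8Array(digest)));
  }
  return { codes, hashes };
}
```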
This was the account recovery paradox. The same properties that made WebAuthn secure—the private key never leaves the device, can't be extracted, can't be transmitted—also made it fragile. Security and recoverability were in direct tension.
In practice, sites that implemented WebAuthn kept a password as a fallback recovery mechanism. Which meant the password wasn't eliminated—it was demoted to a recovery credential, sitting in the database, still stealable, still phishable, still the weak link. The back door undermined the front door's security.
The FIDO Alliance needed a way to make credentials portable without making them vulnerable. The answer would come from the same companies that controlled the smartphone platforms—but not until after a global pandemic had rewritten the rules of how and where people authenticated.
What FIDO Changed—and What Remained Unsolved
FIDO2 had run into the wall that every superior authentication technology before it had hit: the last mile.
U2F required a dedicated hardware token that most people would never buy. WebAuthn eliminated the separate hardware by using the phone's built-in secure element—but locked credentials to a single device, creating a recovery problem that pushed sites to keep passwords as a fallback. And as long as the password remained a valid authentication path, all the cryptographic elegance of FIDO2 could be bypassed by a well-crafted phishing email.
But authentication—proving who you are—was only half the crisis bearing down on digital identity in this era. The other half was surveillance—the invisible infrastructure tracking who you are, where you go, and what you do, at a scale that made DoubleClick's web cookies look quaint.
While FIDO was trying to give users better control over how they proved their identity, a series of revelations was about to expose how little control they had over what was being done with it.
The Identity Bargain
By the early 2010s, digital identity had a hidden cost.
There's the identity you assert—the credentials you present, the accounts you create, the biometrics you enroll. And there's the identity others construct about you—the profile assembled from your behavior, inferred from your patterns, built without your participation.
The grand bargain of the consumer internet was a transaction between these two sides. You got free services—email, search, social networking, maps, storage—that made managing your digital identity easier and more convenient. In exchange, the companies providing those services built a second version of your identity: the one derived from everything you did while using them. Every search query. Every location ping. Every app opened, every link clicked, every purchase made.
The bargain persisted because it was invisible. Most users experienced only the first side—the convenience, the free services, the seamless authentication. The second side—the data extraction, the profile construction, the commercial exploitation of behavioral identity—operated below the surface.
The 2010s were when the second side became visible. And when it did, the consequences reshaped not just privacy law but the architecture of digital identity itself.
Snowden: The Identity Infrastructure as Surveillance Conduit
In June 2013, The Guardian published the first of what would become the most consequential series of intelligence disclosures in modern history.
Edward Snowden, a contractor for the National Security Agency, had copied an estimated 1.5 million classified documents describing the NSA's surveillance programs. The disclosures revealed a surveillance apparatus whose scope stunned even those who had suspected its existence.
Many of the revelations concerned intelligence methods that, while alarming, were tangential to digital identity. But several went directly to the heart of how identity worked on the internet.
PRISM gave the NSA direct access to data from nine major technology companies—Microsoft, Yahoo, Google, Facebook, PalTalk, YouTube, Skype, AOL, and Apple. These weren't random companies. They were the identity providers. "Log in with Google" didn't just mean Google vouched for your identity to third-party sites. It meant Google held your email, your search history, your location data, your documents, your contacts—and, through PRISM, provided access to that data to the U.S. intelligence community.
The trust regression problem, "who authenticates the authenticator?", took on a dimension that the CTSS engineers could never have imagined. The question wasn't just whether the authenticator was competent. It was whether the authenticator was compromised—whether the entity you trusted with your identity was simultaneously sharing that identity with a third party you'd never consented to.
The second identity-specific revelation was about metadata. The NSA's bulk collection programs focused heavily on metadata—not the content of communications but the data about communications. Who called whom, when, for how long. Who emailed whom, from where. Who was in the same location at the same time.
General Michael Hayden, former director of both the NSA and CIA, was unusually candid about the power of metadata. In a 2014 debate at Johns Hopkins University, he stated:
"We kill people based on metadata."
Metadata is behavioral identity. The shadow profile assembled from patterns of behavior rather than conscious self-presentation was exactly what the NSA was constructing. The patterns of who you communicated with, when, from where, and how often were sufficient to identify you, map your relationships, predict your movements, and infer your intentions. You didn't need to know the content of someone's life to know who they were. The patterns were enough.
The advertising industry had discovered this first. DoubleClick built behavioral identity from browsing patterns. Facebook built it from social interactions. Mobile ad networks built it from app usage and location data. The intelligence community was doing the same thing, with the same data, from the same platforms—just for different purposes.
For the average user, the Snowden revelations were alarming but abstract. The NSA probably wasn't going to use your metadata against you. The next shock was far more personal.
When Identity Infrastructure Becomes an Attack Surface
In March 2018, The Guardian and The New York Times simultaneously published investigations into Cambridge Analytica, a political consultancy that had harvested data from an estimated 87 million Facebook profiles.
The mechanism was almost insultingly simple—and it exploited the identity infrastructure directly.
In 2013, a researcher named Aleksandr Kogan built a Facebook app called "thisisyourdigitallife"—a personality quiz. About 270,000 people installed the app, granting it access to their Facebook profile data through the same API permissions that powered "Log in with Facebook." But Facebook's API at the time didn't just expose the data of the person who authorized the app. It also exposed the data of their friends—people who had never installed the app, never consented to anything. Through the social graph, 270,000 authorizations became a gateway to 87 million profiles.
This was an identity infrastructure exploit in the most literal sense. The same API permissions that powered social login—the system that let third-party sites request your name, email, friends list, and interests when you clicked "Log in with Facebook"—were the attack surface. The system designed to make authentication convenient had also made data extraction trivial. The identity infrastructure was the vulnerability.
Cambridge Analytica used this data to build psychographic profiles of American voters—not just who they were demographically, but how they thought, what they feared, what messages would resonate with them. These profiles were then used to target political advertising during the 2016 U.S. presidential election and the Brexit referendum.
The identity implications went deeper than data theft.
The behavioral shadow profile had been weaponized. Not for selling shoes. For influencing elections. The profile built from you, without your conscious participation, assembled from casual social interactions you'd never thought of as sensitive—your likes, your friend connections, your quiz responses—had been turned into a tool for targeting you with political messages designed to exploit your specific psychological profile.
The wave collapse from Article 4—the flattening of multiple contextual selves into a single observable identity—reached its most consequential expression. On Facebook, you were simultaneously a friend, a family member, a citizen, a consumer, a political actor. Cambridge Analytica collapsed all of these into a single manipulable profile. Your identity as a casual social media user and your identity as a democratic citizen were the same data set, exploited through the same infrastructure, by the same techniques.
Facebook's response evolved from dismissal to damage control, but the damage to the identity model was lasting. "Log in with Facebook"—the mechanism that had won the federation wars through sheer convenience—became suspect. Not because the authentication was insecure in the traditional sense (nobody's password was stolen), but because the identity data that flowed through the authentication infrastructure had been exploited. The distinction between authentication (proving who you are) and the identity data exposed through authentication (revealing what you are) had never been clearer—or more consequential.
If Snowden revealed that governments were conducting surveillance through the identity infrastructure, Cambridge Analytica revealed that the identity infrastructure itself was the vulnerability—that the data collected in the name of convenient authentication could be extracted and weaponized by anyone with a developer account.
The Regulatory and Infrastructure Responses
The response came from governments—not as technology regulation in the narrow sense, but as identity regulation. The new laws didn't dictate which authentication protocols to use. They constrained what could be done with the identity data that those protocols collected and transmitted.
GDPR
The European Union's General Data Protection Regulation, effective May 25, 2018, was the most consequential regulatory intervention in the history of digital identity. Not because it regulated authentication directly, but because it regulated the data that identity systems generated—and in a world where identity and data had become inseparable, that amounted to regulating identity itself.
The provisions that mattered most for identity:
Consent must be explicit. The era of collecting identity data through terms-of-service agreements nobody read was legally over in the EU. If a service wanted to use your authentication event to build a behavioral profile—if "Log in with Google" was going to feed data to an advertising system—you had to be told, clearly, and you had to actively agree.
The right to deletion. The behavioral shadow profile—years of accumulated behavioral identity data—could now be demanded back and erased. The identity built from you could, at least in principle, be destroyed at your request. In practice, the right to deletion was complicated by the distributed nature of digital data, but the legal principle was established: your behavioral identity was yours, and you had the right to revoke it.
Data breach notification. Organizations that lost control of identity data—the password databases, the profile information, the behavioral records—were required to notify regulators within 72 hours. The breaches that had been exposing identity data for decades were no longer permissible to hide.
Massive fines. Up to 4% of global annual revenue. For companies like Google or Facebook, this meant billions of dollars. GDPR had teeth, and the teeth were proportional to the scale of the identity data at stake.
CCPA and the global ripple
California's Consumer Privacy Act (2018, effective 2020) brought similar protections to the largest U.S. state market. Brazil's LGPD (Lei Geral de Proteção de Dados - 2018), India's Digital Personal Data Protection Act (2023), and dozens of other national laws followed. The regulatory framework for digital identity—nonexistent for the first four decades—was being built, jurisdiction by jurisdiction, around the world.
Previous identity systems were shaped by technical limitations, cryptographic innovations, market dynamics, and user behavior. Now they were also shaped by law. The shadow profile was, for the first time, subject to legal limits.
Cookies, IDFA, and the Rewiring of Behavioral Identity
Regulation set the legal boundaries. But the technical infrastructure of behavioral identity was also being dismantled—or, more precisely, rewired—through competitive dynamics within the technology industry.
The third-party cookie dies
In 2017, Apple introduced Intelligent Tracking Prevention in Safari, using machine learning to identify and restrict the third-party cookies that had been the backbone of cross-site behavioral tracking since DoubleClick pioneered the technique in 1996. Mozilla's Firefox followed with similar protections in 2019. Google announced plans to deprecate third-party cookies in Chrome in 2020, though the timeline was delayed repeatedly and eventually walked back.
The identity significance: the specific mechanism that allowed an advertising network to track your activity across thousands of unrelated sites was being eliminated. Not the tracking itself. The mechanism. The behavioral identity infrastructure was being forced to find new foundations.
Apple's ATT redefines mobile identity consent
Apple's most consequential move came in April 2021 with App Tracking Transparency. Any app that wanted to use the IDFA, the advertising identifier, was now required to ask permission with a system-level prompt the app couldn't customize:
Allow "[App name]" to track your activity across other companies' apps and websites?
Roughly 75-85% of users said no.
The IDFA suddenly had a logout button. For the majority of iOS users, the persistent cross-app behavioral identity was severed. Ad networks could no longer trivially link your activity in a fitness app to your activity in a news app to your activity in a shopping app. The unified behavioral identity fragmented back into separate, disconnected interactions.
The commercial impact was staggering. Meta estimated $10 billion in lost advertising revenue in 2022 alone. But the identity impact was more fundamental: Apple had demonstrated that the behavioral identity infrastructure could be disrupted by a single platform decision. The shadow profile, which had seemed as permanent and inescapable as the device itself, turned out to be contingent on permissions that could be revoked.
What replaced what was lost
But the disappearance of third-party cookies and the restriction of IDFA didn't mean the disappearance of behavioral identity. It meant its migration.
When you're signed into your Google account and you search on Google, Google doesn't need a third-party cookie or an advertising identifier to track what you're doing—your login session is the tracking mechanism. The fact that you're authenticated is what ties your behavior to your identity. Privacy regulations targeted the shadowy, invisible tracking that happened across sites you didn't know were watching. They left untouched the data collected by services you had actively signed up for and were knowingly using.
This created a quiet irony. The crackdown on third-party tracking didn't weaken the big platforms—it strengthened them. The companies with large, established, logged-in user bases could still build rich behavioral profiles. Smaller advertisers and data brokers, who had relied on third-party cookies and device identifiers to track users across the web, lost their tools. The platforms that already knew who you were had less competition.
And so the three forms of digital identity that had developed separately—the account you created, the institutional credential that verified you, and the behavioral profile assembled from your activity—had quietly collapsed into one. A single company held all three, protected by a first-party relationship that regulators hadn't touched.
The companies that won the federation wars were, after the surveillance reckoning, more central to digital identity than ever. Not just as authentication providers—but as the sole remaining entities with comprehensive behavioral profiles of their users, now insulated from competition by the very privacy regulations that were supposed to constrain them.
The tension between identity and surveillance hadn't been resolved. It had been restructured—from a distributed ecosystem of trackers that anyone could participate in, to a concentrated oligopoly of identity-surveillance platforms that only the biggest companies could operate.
But before a global pandemic would force the world to reckon with those platforms' centrality, one more transformation was underway—one that would ask what digital identity meant when it finally got a body.
Identity Gets a Body
Every form of digital identity in this series has been, in one way or another, disembodied.
A password is a string of characters. A cryptographic key is a number. A cookie is a token stored in a browser. A SAML assertion is a signed XML document. A biometric template is a mathematical representation of a physical feature. Even Face ID—the most embodied authentication mechanism we've discussed—reduces your face to a depth map stored in a secure enclave. The face itself isn't the credential. A mathematical abstraction of the face is.
Digital identity, from CTSS onward, has always been a representation of a person—never the person themselves. You prove who you are by presenting something: a secret you know, a token you carry, a body part you scan. The system evaluates the representation and decides whether to let you in. But the representation and the person remain separate. Your password isn't you. Your fingerprint template isn't you. Your Facebook profile isn't you. They're proxies—stand-ins that the system accepts in place of the thing it can't directly access: the actual human being.
Virtual and augmented reality changed this in a way that no previous technology had. For the first time, digital identity didn't just represent a person in the abstract. It gave them a body—a visible, spatial, interactive presence in a shared digital environment. Not a profile page. Not an avatar icon next to a chat message. A figure that occupied space, that moved, that gestured, that other people could see, approach, and interact with as if it were physically present.
This was new. And it introduced identity questions that the previous sixty years of digital authentication had never had to address.
The Avatar Problem
In 2012, Palmer Luckey, a 19-year-old tinkerer in Long Beach, California, launched a Kickstarter campaign for a VR headset he called the Oculus Rift. The campaign raised $2.4 million. In 2014, Facebook acquired Oculus for approximately $2 billion—a staggering price for a company that hadn't shipped a consumer product yet.
Mark Zuckerberg's acquisition letter made the identity connection explicit. He described VR as the next major computing platform after mobile—one where presence and social interaction would be the core experiences. Facebook wasn't buying a gaming peripheral. It was buying the next frontier of social identity.
The consumer VR landscape that emerged over the following years—Oculus Rift, HTC Vive, PlayStation VR, and eventually Meta's Quest line—created environments where identity was expressed not through text profiles or photo albums but through avatars: three-dimensional representations that users inhabited in shared virtual spaces.
And here the identity questions multiplied.
Article 4 described how the early web allowed people to express different facets of themselves through different handles on different sites—DragonSlayer99 on a gaming forum, CarefulDad on a parenting board. Each was a genuine expression of one aspect of a multifaceted person. Article 4 called this the quantum self in superposition—multiple authentic states, each revealed in a different context.
VR took this further than text-based personas ever could. In VR, you didn't just name yourself differently. You embodied yourself differently. You could be tall or short, human or fantastical, any gender, any species, any shape. In VRChat—the social platform that became the most vibrant laboratory for virtual identity—users appeared as anime characters, robots, animals, abstract geometric forms, pop culture figures, and everything in between. The avatar wasn't just a label. It was a body that moved when you moved, gestured when you gestured, and occupied space in a world shared with other embodied identities.
Research on virtual embodiment found that avatars weren't just costumes. They changed behavior. Studies documented the "Proteus effect"—named by Stanford researchers Nick Yee and Jeremy Bailenson—where users unconsciously adapted their behavior to match their avatar's appearance. Users in taller avatars negotiated more aggressively. Users in more attractive avatars stood closer to others. Users in elderly avatars showed increased empathy for older people. The avatar wasn't just a visual representation of identity. It was actively shaping identity—feeding back into the person behind the headset in ways that text-based handles never had.
This created a new dimension of the identity question. On the text-based web, your handle was an identifier—a name attached to your posts. In VR, your avatar was an experience—both for you and for everyone who interacted with you. The identity you projected wasn't just information to be read. It was a presence to be felt. The social dynamics of identity—recognition, reputation, trust, intimacy, threat—operated through embodied interaction in ways that were closer to physical life than anything the internet had produced before.
The Real-Name Wars, Round Three
The question of whether digital identities should be tied to legal names returned in VR with new stakes.
Facebook's acquisition of Oculus made the collision inevitable. In 2020, Facebook announced that new Oculus users would be required to log in with a Facebook account. Existing users could continue with separate Oculus accounts until 2023, after which the merge would be mandatory. Users could create separate personas and control what they shared through privacy settings—but the controls were opt-out rather than opt-in. The burden of protecting your identity fell on the user, not the platform.
The VR community's reaction was fierce. VRChat users, accustomed to the pseudonymous freedom of choosing fantastical avatars and building reputations around chosen identities, saw the requirement as an existential threat. The concerns were the same ones that had animated the nymwars—privacy, safety, the right to contextual identity—but amplified by the intimacy of embodied interaction.
In VR, your identity isn't a name on a screen. It's a body in a shared space. Tying that body to a legal name meant that every interaction in virtual space—every friendship, every conversation, every community you joined, every world you visited—was connected to your real-world identity. For users exploring gender identity through differently-gendered avatars, for people with social anxiety using VR as a space for low-stakes social interaction, for anyone who valued the separation between their virtual and physical selves, the Facebook account requirement wasn't just an inconvenience. It was a threat to the identities the medium had made possible.
Meta eventually reversed course. In 2022, the company announced that Quest headsets would no longer require a Facebook account, replacing the requirement with a separate Meta account that didn't demand a real name. The reversal was an acknowledgment—rare from Meta—that the identity model that had worked for social media (real names, single profiles, persistent identity) was wrong for virtual worlds where identity was inherently fluid and contextual.
The Biometric Leakage Problem
While the avatar debate played out at the social level, VR was creating an identity problem at the technical level that was far less visible and potentially far more consequential.
A VR headset is, by necessity, a biometric sensor array.
To create a convincing immersive experience, the headset must track the user's body with extraordinary precision. Head position and rotation, tracked dozens of times per second. Hand position and finger articulation, through controllers or hand-tracking cameras. Eye position and gaze direction, through infrared eye-tracking sensors. Body position inferred from head and hand movement. In more advanced systems, facial expression tracking through internal cameras pointed at the user's face.
This data is essential for the VR experience to work. You can't render a virtual world that responds to your movements without knowing, precisely and continuously, how you're moving. The tracking isn't optional. It's the foundation of the technology.
But movement data is biometric data. And biometric data is identity data.
Research published in 2020 by the Stanford Virtual Human Interaction Lab demonstrated that VR motion data could uniquely identify individuals with startling accuracy. Just a few minutes of head and hand movement data—the basic telemetry required for any VR session—was sufficient to distinguish one person from another with over 95% accuracy. The way you move your head, the micro-patterns of your hand gestures, the rhythm of your gait, the speed of your reactions—these are as distinctive as a fingerprint, and VR headsets capture them continuously by design.
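To see why motion telemetry doubles as a biometric, consider a deliberately crude matcher. The study used proper machine-learning classifiers over many features; this toy nearest-centroid comparison only illustrates the principle:

```typescript
// A session is summarized as a feature vector: e.g., mean head height,
// head rotation speed, hand tremor statistics. All names are illustrative.
type FeatureVector = number[];

function distance(a: FeatureVector, b: FeatureVector): number {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// Match a new session against per-user centroids built from past sessions.
function identify(
  session: FeatureVector,
  enrolled: Map<string, FeatureVector>, // userId -> average past features
): string | undefined {
  let best: string | undefined;
  let bestDistance = Infinity;
  for (const [userId, centroid] of enrolled) {
    const d = distance(session, centroid);
    if (d < bestDistance) {
      bestDistance = d;
      best = userId;
    }
  }
  return best;
}
```

The unsettling part is the input: nothing in that feature vector is a credential. It is ordinary session telemetry, generated whether or not anyone intends to authenticate.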
The identity implications were profound. The motion data that could identify you wasn't treated as biometric data by the platforms or by regulations—it was treated as session telemetry, transmitted to servers, stored in logs, potentially shared with developers.
Eye tracking added another layer. The Quest Pro (2022) and Apple Vision Pro (2024) included eye-tracking sensors that monitored where you looked, how long you looked at it, how your pupils dilated in response to stimuli. Eye movement data reveals not just what you're interested in but how you think—cognitive load, emotional response, attention patterns, even markers associated with neurological conditions. This data, captured continuously in the course of normal use, constituted a form of behavioral-biometric identity more intimate than anything the advertising industry had ever collected through cookies or app tracking. VR threatened to create a physiological shadow profile—identity inferred not just from what you did but from how your body and mind responded while you did it.
And unlike the IDFA or third-party cookies, this data couldn't be addressed with a consent toggle or a tracking prevention framework. The data was intrinsic to the functioning of the technology. You couldn't use VR without generating the motion telemetry that could identify you. The biometric leakage was architectural—baked into the medium, not bolted on by advertisers.
Augmented Reality: Identity in the Overlay
If VR created questions about identity in virtual worlds, augmented reality created questions about identity in the physical world—questions that were, in some ways, even more uncomfortable.
AR overlays digital information onto the physical environment. Google Glass (2013), the first mainstream AR device, provoked an immediate public backlash—wearers were ejected from bars and restaurants, and the term "Glasshole" entered the lexicon—in part because the device's always-on camera created the possibility of facial recognition in public spaces. The identity concern wasn't about the wearer's identity. It was about everyone else's.
The fear was specific and grounded: an AR device with a camera, internet connectivity, and access to facial recognition databases could, in principle, identify strangers in real time. Walk into a coffee shop, and the device could tell you the name, employer, and social media profiles of every person in the room. The identity of every person in public space would be accessible to every other person wearing AR glasses.
This inverted the identity problem. Every previous article in this series had asked: "How do you prove your identity to a system?" AR raised the opposite question: "How do you prevent a system from identifying you without your consent?" The entire history of digital identity had been about enabling authentication. AR threatened to make authentication involuntary—an identity imposed on you by a device someone else was wearing.
Google Glass failed commercially—the hardware was too limited, the social resistance too strong, the use cases too unclear. But the underlying technology continued advancing. Snapchat's Spectacles, Microsoft's HoloLens (enterprise-focused), Magic Leap, Apple's Vision Pro (2024), and Meta's smart glasses all explored the space. Apple's approach with Vision Pro was characteristically deliberate—the device included outward-facing displays that showed the wearer's eyes to people nearby (reducing the "black box" alienation of other headsets) and imposed strict limits on what camera data apps could access.
But the fundamental identity tension remained. AR devices that could see the world could, in principle, identify the people in it. The technology for real-time facial recognition existed. Clearview AI, a company that had scraped billions of photos from social media to build a facial recognition database, was already selling its services to law enforcement agencies. The infrastructure for involuntary identification in public spaces was being built, whether or not AR glasses became mainstream.
For digital identity, this represented the ultimate wave collapse. Article 4's "quantum self"—the ability to present different facets of yourself in different contexts—depended on a degree of anonymity in public space. You could be one person at work and another person at a bar because the two contexts didn't automatically share information about you. AR with facial recognition could collapse that separation entirely, making every public interaction a fully identified, fully connected event.
The privacy implications were obvious. The identity implications were more subtle but arguably more profound: the end of contextual anonymity would mean the end of contextual identity. If you were always identified, in every context, linked to a single comprehensive profile, the multiplicity of self, described throughout this series as fundamental to human identity, would be technically—not just socially—impossible.
Where VR/AR Left Digital Identity
By the early 2020s, virtual and augmented reality were still niche technologies. VR headset sales were measured in tens of millions, not billions. AR glasses hadn't found their mainstream form factor. The metaverse hype that accompanied Meta's 2021 rebrand had cooled considerably by 2023.
But the identity questions that VR and AR raised were not niche. They were previews of challenges that would intensify as the technologies matured:
Identity as embodiment. VR demonstrated that digital identity wasn't just about credentials and profiles. When your digital self has a body—one that moves, gestures, and occupies space in a shared world—identity becomes something experienced, not just asserted. The implications for how trust, reputation, and social dynamics work online are still being explored.
Biometric identity as environmental. VR headsets showed that biometric data collection didn't have to be a deliberate event—a finger pressed to a sensor, a face scanned by a camera. It could be continuous, ambient, and inseparable from using the technology. The identity implications of passive, always-on biometric collection were far more radical than the discrete checkpoints of Touch ID and Face ID.
Involuntary identification. AR raised the possibility that identity verification could happen without the subject's participation or consent—that you could be identified not because you chose to authenticate but because someone else's device recognized you. This was the opposite of every identity system in this series, all of which assumed that authentication was something the user initiated.
The persistence of the avatar question. The real-name debates from social media replayed in VR with higher stakes, and the medium pushed back harder. VR communities demonstrated that pseudonymous, contextual identity wasn't just a privacy preference—it was essential to the social dynamics of virtual worlds. The wave collapse that platforms had imposed on the web was resisted more forcefully in VR, where the embodied nature of identity made its flattening more visceral.
These questions were still emerging, their implications still being explored in niche communities and research labs, when the world changed overnight.
In early 2020, a virus emerged that would do more to reshape digital identity than any regulation, any technology, or any scandal had managed in the previous decade.
The entire world was about to go remote. And when it did, every assumption about where identity happened, how it was verified, and what the perimeter even meant would be tested to its breaking point.
Next: Part 7 - The Perimeter Dissolves
Further Reading:
Books
› Turkle, Sherry. "Alone Together: Why We Expect More from Technology and Less from Each Other". Basic Books, 2011.
› Bailenson, Jeremy. "Experience on Demand: What Virtual Reality Is, How It Works, and What It Can Do". W.W. Norton & Company, 2018.
Standards and RFCs
› Sakimura, N., Bradley, J., Agarwal, N. "RFC 7636: Proof Key for Code Exchange by OAuth Public Clients". Internet Engineering Task Force, September 2015.
› Denniss, W., Bradley, J. "RFC 8252: OAuth 2.0 for Native Apps". Internet Engineering Task Force, October 2017.
› FIDO Alliance. "FIDO U2F Specifications". FIDO Alliance, 2014.
› Balfanz, D., et al. "Web Authentication: An API for accessing Public Key Credentials Level 1". W3C Recommendation, March 2019.
› FIDO Alliance and W3C. "FIDO2: Web Authentication (WebAuthn)". FIDO Alliance, 2018.
› Apple Inc. "Secure Enclave". Apple Platform Security Guide.
Legislation and Regulatory Documents
› "General Data Protection Regulation (GDPR)". European Parliament and Council of the European Union, Regulation (EU) 2016/679, effective May 2018.
› "California Consumer Privacy Act (CCPA)". California State Legislature, AB-375, effective January 2020.
› "Brazil General Data Protection Law (LGPD)". Lei nº 13.709/2018, effective 2020.
› "India Digital Personal Data Protection Act". Ministry of Electronics and Information Technology, 2023.
› "Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act". Government of India, 2016.
› Supreme Court of India. "Justice K.S. Puttaswamy (Retd.) v. Union of India". Writ Petition (Civil) No. 494 of 2012, September 2018. (The Aadhaar ruling.)
Key Articles and Reports
› Felt, Adrienne Porter, et al. "The Effectiveness of Application Permissions". USENIX Conference on Web Application Development, 2011.
› Vallina-Rodriguez, Narseo, et al. "A Haystack Full of Needles: Scalable Detection of IoT Devices in the Wild". ACM Internet Measurement Conference, 2020.
› Miller, Charlie and Valasek, Chris. "A Survey of Remote Automotive Attack Surfaces". Black Hat USA, 2014.
› Naeini, Pardis Emami, et al. "Privacy Expectations and Preferences in an IoT World". USENIX Symposium on Usable Privacy and Security, 2017.
› Reyes, Irwin, et al. "'Won't Somebody Think of the Children?' Examining COPPA Compliance at Scale". Privacy Enhancing Technologies Symposium, 2018.
› de Montjoye, Yves-Alexandre, et al. "Unique in the Crowd: The Privacy Bounds of Human Mobility". Scientific Reports, Nature Publishing Group, 2013. (On location data re-identification.)
› Miller, Mark Roman, et al. "Personal identifiability of user tracking data during observation of 360-degree VR video". Scientific Reports, 2020. (The Stanford Virtual Human Interaction Lab study on VR motion data as a biometric.)
› Acquisti, Alessandro, Brandimarte, Laura, and Loewenstein, George. "Privacy and Human Behavior in the Age of Information". Science, January 2015.
Historical Documents and Primary Sources
› Apple Inc. "Touch ID Advanced Security Technology". Apple Support, 2013.
› Apple Inc. "App Tracking Transparency". Apple Developer Documentation, 2021.
› United States Office of Personnel Management. "OPM Data Breach". OPM Cybersecurity Incidents disclosure, June 2015.
› Greenwald, Glenn. "NSA collecting phone records of millions of Verizon customers daily". The Guardian, June 2013. (The first Snowden disclosure.)
› Cadwalladr, Carole and Graham-Harrison, Emma. "Revealed: 50 million Facebook profiles harvested for Cambridge Analytica". The Guardian, March 2018.
› FIDO Alliance. "FIDO Alliance Overview". fidoalliance.org.
Reference Resources
› "Touch ID". Wikipedia.
› "Face ID". Wikipedia.
› "FIDO2". Wikipedia.
› "WebAuthn". Wikipedia.
› "Aadhaar". Wikipedia.
› "Cambridge Analytica". Wikipedia.
› "PRISM (surveillance program)". Wikipedia.
› "App Tracking Transparency". Wikipedia.
› "General Data Protection Regulation". Wikipedia.
› "SS7 (Signaling System No. 7)". Wikipedia.
› "Proteus Effect". Wikipedia.