Europe is building moats. Impressive, well-funded, politically resolute moats. Ditch Zoom for Visio. Replace Visa and Mastercard with Wero. Host your own clouds, back your own models, write your own rules. The frustration behind it is legitimate. Nobody wants their continent's critical infrastructure running on someone else's servers, subject to someone else's laws, one boardroom decision away from being switched off. But here's the problem nobody in the sovereignty conversation wants to say out loud: you can own the castle and still be shaped by whoever built the foundations.
Take Mistral, Europe's most serious AI bet. To its credit, Mistral has put real effort into training on European languages (French, German, Italian, Spanish and more) at a level most American labs haven't bothered with. That matters. But owning the language is not the same as owning the thinking. The deeper architecture of how these models reason, what they treat as a good answer, how they weigh competing ideas, was shaped by research done mostly in American universities and labs, and it travels with the technology no matter who runs it. Mistral speaks "European". Whether it thinks European is a harder question.
Here's the simplest way to put it. These models learn from data, and most of that data is English-language and American in character. Not because of a conspiracy, but because that's who was online first, who published most, whose legal and business documents got digitized. That shapes everything: what the model thinks is normal, what it treats as neutral, what kind of answer it reaches for by default. And the problem is compounding. The models trained on yesterday's internet are now generating vast amounts of new text, which will train tomorrow's models. The cultural skew doesn't dilute over time. It feeds itself.
Europe's response to all this is not unreasonable. Building your own infrastructure matters. The EU's approach of regulating what AI can and can't do, rather than trying to control where it comes from, is arguably smarter than it gets credit for. But there's a layer underneath all the policy moves that nobody has a clean answer for.
Think of it this way: when an AI model is built, it doesn't just learn facts. It learns a whole set of unstated assumptions about the world. What counts as a fair outcome. How to balance individual rights against collective ones. When authority should be questioned and when it should be deferred to. What a reasonable person would think. These assumptions don't come with a label. They're absorbed from millions of texts written by people operating inside a particular legal system, a particular political culture, a particular set of social norms.
You can't simply swap them out by changing the headquarters of the company that built the model, or even by adding more European text on top. They're woven into the foundations. And because most of those foundations were laid in one specific cultural context, even a model marketed as European is, at some level, still reasoning from someone else's defaults. That's the part the sovereignty debate most consistently avoids, because unlike servers or funding or regulations, it doesn't have an obvious fix.
Which brings us to the question worth sitting with. If it works, if Europe, China, India, and the Arab world each build their own distinct AI systems shaped by their own cultures and assumptions, is that a good thing or a bad thing? Maybe it's richer. Maybe every tradition finally gets to think on its own terms rather than through an American filter. Or maybe we end up somewhere darker, where AI systems built in different epistemic worlds can't meaningfully communicate with each other, and neither can the people using them. Leibniz dreamed of a universal logical language that could resolve any dispute between civilizations. The internet briefly looked like one. AI sovereignty may be the moment we stopped trying.
This article is adapted from Issue #025 of Synthetic Auth Report