Algorithmic Republic: Who Governs the Digital City?
Plato trusted philosopher kings. We trust algorithms. The difference is that algorithms do not know they are governing — and no one can hold them accountable.
On an entirely ordinary evening in 2016, a researcher on YouTube’s content safety team opened a brand-new account on the platform: a blank slate with no watch history, no recorded preferences, nothing. It was like a newborn. Then he sat and watched what his own company’s algorithm would choose to show him. Within a single week of ordinary, non-selective browsing, that pristine account was receiving recommendations for conspiracy theories and extremist material. He had searched for nothing, requested nothing, expressed no explicit interest in anything whatsoever. Yet the algorithm had decided, entirely on its own, that this content was the most effective at keeping him in front of the screen, and that keeping him in front of the screen was its only measure of success.
I sat thinking about that unnamed researcher and his experiment, comparing it to my own habitual behavior on YouTube and every other algorithmically governed app. I hide posts I dislike to teach the algorithm, the “stupid” one, that cooking videos are not for me, and that any video titled “watch before it gets deleted” gets dismissed immediately on the strength of that title alone. When I once accidentally searched Facebook for sounds that attract cats, it was only because I was looking after my neighbor’s cat for a single week; I even sent Facebook a kind of apology for that search, adjusting my behavior deliberately until it went back to showing me posts from friends who actually mattered to me.
Who made the decision to serve extremist content to that bored YouTube researcher? Practically speaking, no one. No executive sat in a YouTube office one morning and decided to route radical content to fresh accounts. The system decided. The equation decided. A mathematical function decided — one that has no understanding of what “extremism” means and no interest in its meaning. The algorithm knows only that this content extends watch time, that extending watch time means more advertising, and that more advertising means success by the measure its engineers designed. The decision amounts to statistical arithmetic. Nothing more.
Plato had a name for this kind of governance. He called it the rule of the ignorant. In our previous article, The Body Is the New Cage: Escaping Biology Through Technology, we asked what remains of the human being if we transcend the body. Today we ask: who remains to govern once we have transcended the governor?
The Philosopher King: An Idea That Seemed Absurd, Then Necessary
In Plato’s Republic, Socrates proposed an idea that scandalized his interlocutors: governance should belong neither to democrats, nor to the wealthy, nor to generals. It should belong to philosophers — specifically, those who had spent decades studying mathematics, music, philosophy, and astronomy before descending into the arena of politics, not because they wanted power but because they alone knew what genuine good for the city actually looked like.
The idea is uncomfortable for obvious reasons: who decides which philosopher is fit to govern? And who holds that governing philosopher accountable? But alongside the discomfort there is a logic that is difficult to refute. Democracy grants everyone an equal vote, including those who know nothing about the subject of the decision. Plato’s theory was not so much a case for tyranny as a protest against the idea that equally distributed ignorance produces good decisions.
For two centuries, political liberalism rejected this argument on the grounds that power corrupts, and that democracy with all its flaws causes less harm than any form of elite rule. Then digital platforms arrived and restaged Plato’s Republic in a form no one anticipated: governance by those who believe they know best, represented not by philosophers but by engineers and algorithms, and applied not to a single geographic city but to billions of minds simultaneously.
The difference between the philosopher king and the governing algorithm is that the first knows it governs and accepts responsibility for that knowledge. The algorithm does not know it governs, and no one can question it — because there is no “one” to be asked.
How the Algorithm Actually Governs: A Necessary Technical Explanation
When we say “the algorithm governs,” we mean something technically precise that deserves explaining. The recommendation algorithm on major platforms (TikTok, YouTube, Instagram, Facebook) is not a ranked list assembled by human beings. It is a machine learning system built on deep neural recommendation models: a form of artificial intelligence loosely inspired by the brain’s capacity for pattern recognition, but operating at a speed and scale no human process can approach.
The system works as follows. It collects every signal a user leaves in their interaction with content: how far into a video they watched before stopping, whether they replayed it, whether they commented, whether they shared, what time of day it was, what device they used, their approximate location, and how long they lingered over a particular part of the screen. These signals are fed into a vast mathematical model containing hundreds of millions of variables, parameters shaped by billions of hours of prior viewing by hundreds of millions of users. For each candidate piece of content, the system then computes a conditional probability: how likely is it that this particular user will watch this particular piece for longer than X seconds? The piece with the highest probability is shown.
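The ranking step described above can be sketched in a few lines. Everything here is illustrative: the signal names, weights, and bias are hypothetical stand-ins for a trained model with hundreds of millions of parameters, but the decision logic has exactly this shape: score every candidate by predicted watch probability, then show the argmax.

```python
import math

# Hypothetical learned weights, one per behavioral signal.
# A real system learns these from billions of hours of viewing.
WEIGHTS = {
    "pct_watched_similar": 2.0,   # how much of similar videos the user finished
    "replayed_similar": 1.5,      # did they replay similar content?
    "late_night_session": 0.8,    # time-of-day signal
    "topic_match": 1.2,           # overlap with inferred interests
}
BIAS = -3.0

def predict_watch_probability(signals: dict) -> float:
    """Estimate P(user watches this candidate longer than X seconds)
    as a logistic score over the weighted signals."""
    z = BIAS + sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def recommend(candidates: dict) -> str:
    """Return the candidate with the highest predicted watch probability.
    Note what is absent: no term for truth, harm, or anxiety."""
    return max(candidates, key=lambda c: predict_watch_probability(candidates[c]))

feed = {
    "calm_documentary": {"pct_watched_similar": 0.4, "topic_match": 0.5},
    "outrage_clip": {"pct_watched_similar": 0.9, "replayed_similar": 1.0,
                     "late_night_session": 1.0, "topic_match": 0.9},
}
print(recommend(feed))  # the outrage clip wins on predicted engagement
```

The point of the sketch is the objective function, not the numbers: whatever maximizes the watch-probability estimate is served, and nothing else enters the calculation.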
What is not computed in this equation: is this content true? Is it useful? Does it ignite hatred? Does it produce pathological anxiety? Does it reinforce a conspiracy theory? These questions are entirely outside the mathematical function, not because the engineers are unaware of them, but because measuring “attention-per-minute” is incomparably easier than measuring “harm-per-year.”
| What It Does with Exceptional Accuracy | What Is Entirely Outside Its Calculation |
|---|---|
| Predicting what you will watch in seconds | Whether what you are about to watch will harm you |
| Maximizing session duration | Whether maximizing duration damages your mental health |
| Identifying your emotional state from interaction data | Whether exploiting that state is ethical |
| Calibrating content to increase emotional arousal | Whether that arousal generates lasting social rage |
| Building personalized information bubbles | What those bubbles will do to social cohesion in a decade |
The companies know these limits. Internal documents leaked from Facebook, Google, and YouTube have shown, on multiple occasions, that engineers and researchers inside these companies are well aware that their algorithms feed division, anger, and anxiety. The problem is not ignorance — it is the incentive structure. An algorithm that maximizes arousal maximizes revenue. Fixing the algorithm typically means reducing short-term income, and publicly traded companies resist this by their very nature.

The Social Dilemma: When the Maker Judges Its Own Creation
In 2020, Netflix released a documentary called The Social Dilemma, bringing together dozens of former social media developers and executives who spoke with unusual candor about what they had built. What they agreed on was not shocking because it was secret; it was shocking because it came from inside the machine itself. They know that what they built causes addiction, damages adolescent mental health, feeds conspiracy theories, and dismantles civil dialogue. They built it anyway, because each of them worked on a narrow slice of the system, too small to reveal the full picture, and because the economic incentives all pointed in one direction.
Tristan Harris, former design ethicist at Google and founder of the Center for Humane Technology, described what is happening with rare precision: “We didn’t build platforms. We built a persuasion machine that operates on billions of people simultaneously, learning how to persuade each of them differently in a way designed specifically for them — without any of them knowing about the others.”
This persuasion machine is what deserves to be called the governor of our era. It issues no written laws. It announces no decrees. It operates by a more subtle and more effective mechanism: it determines what you see, and by doing so determines what you think, which determines how you vote, what you buy, whom you hate, and whom you trust. The Platonic philosopher king planned the city from above. The algorithm reshapes its citizens from within.
Content Moderation: The Invisible Judiciary
Alongside the recommendation algorithm, there is a less visible authority of equal importance: the power to moderate and delete content. Every major digital platform operates a system of rules and policies that determine what is permitted to appear and what is removed, enforced by a combination of employees and algorithms processing billions of posts every day.
The numbers alone reveal the scale of this authority. YouTube users upload approximately 500 hours of video every minute. Facebook and Instagram together see more than 100 million pieces of content published daily. No army of human beings could review this volume. The companies therefore run AI models that perform the initial review automatically, with a human team handling edge cases and appeals.
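The triage logic that volume forces can be sketched as follows. The thresholds and scores here are assumed for illustration, not any platform’s real values: an automated classifier estimates each post’s probability of violating policy, high-confidence cases are decided by the machine in both directions, and only the ambiguous middle band ever reaches a human reviewer.

```python
# Hypothetical confidence cutoffs for automated moderation triage.
AUTO_REMOVE = 0.95   # near-certain violation: removed without human eyes
AUTO_ALLOW = 0.10    # near-certain safe: published instantly

def triage(risk_score: float) -> str:
    """Route a post based on a classifier's estimated violation probability.
    Only the uncertain middle band is queued for human review."""
    if risk_score >= AUTO_REMOVE:
        return "removed_automatically"
    if risk_score <= AUTO_ALLOW:
        return "published"
    return "queued_for_human_review"

# Toy classifier outputs for five incoming posts.
daily_posts = [0.02, 0.07, 0.50, 0.97, 0.30]
decisions = [triage(s) for s in daily_posts]
print(decisions)
```

At the scale of billions of posts per day, where those two cutoffs sit determines how many wrongful removals occur automatically and how large the human review queue becomes; both numbers are set by the company, not by any court.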
This system produces decisions with the force of judicial rulings. Deleting someone’s account severs them from their network of relationships, followers, and digital archive. Removing a video erases a record that may have been the only documentation of a specific event. Attaching a warning label to a post significantly reduces its reach — without the poster ever knowing. All of these decisions happen every day by the millions, according to standards set by private companies and enforced by algorithms that cannot be challenged before any independent authority.
An ordinary user in the United States may barely notice this invisible power. They post something and it appears instantly — no delay, no gap. But the same power becomes unmistakably visible when someone in another country waits a week or more just to publish a single, unremarkable video, simply because an algorithm flagged the content as dangerous due to a faulty automated translation. The governor operates unevenly across geographies, and the experience of its authority depends heavily on where you sit.
In a state governed by law, you have the right to a fair trial, to appear before a court, and to appeal before independent judges. In the digital city, you have an online complaint form that an automated system answers within several weeks.
TikTok and the Distinctiveness of the Chinese Algorithm
No discussion of algorithmic governance is complete without pausing at TikTok, whose model differs qualitatively from its Western counterparts. TikTok’s algorithm is considered the most sophisticated and effective attention-capture mechanism among current social platforms. A number of researchers have described it as resembling a particle accelerator for content: it tests thousands of variables for each user in parallel, learning with remarkable speed what keeps them in front of the screen longest.
But what distinguishes TikTok goes beyond technical efficiency — it is ownership. The parent company is ByteDance, a Chinese firm legally obligated under China’s national security laws to hand over data to the Chinese government on request, and to allow potential intervention in its algorithms when deemed necessary. This does not mean the Chinese government is steering TikTok every day. It means that the most influential tool for shaping the interests of the world’s population under thirty is designed in a way that makes intervention — possible at any moment — technically and legally feasible.
The United States recognized this and attempted to pass legislation forcing ByteDance to divest TikTok by January 2025. The legal and political turbulence that followed — with its cycles of bans, freezes, and negotiations — is at its core a struggle over one question: who has the right to govern the digital city? Which national government? Which private corporation? And under which law?
The Accountability Crisis: When No One Is Responsible
In 2018, Mark Zuckerberg was summoned to testify before the US Congress following the Cambridge Analytica scandal. The session was revealing, though not in the way intended. Most senators did not technically understand how Facebook worked. One asked how the company made money if its service was free. Zuckerberg replied calmly: “Senator, we run ads.” A question that should never have needed asking of the chief executive of a company worth hundreds of billions of dollars.
The scene exposed a real gap in the architecture of democratic accountability. Legislative bodies move at the pace their electoral cycles and bureaucratic procedures allow, while technology evolves fast enough to make any law written today outdated tomorrow. And even when legislators understand the technology, they face a structural problem: platforms are geographically borderless, but law remains fundamentally local. What is deleted in Germany stays visible in India. What is restricted in the European Union flows freely in the American digital space.
In the absence of genuine accountability, companies fill the vacuum with what is called “self-governance of content”: they set standards for permissible expression on their platforms, establish advisory boards to review contested decisions, and publish periodic transparency reports. All of these mechanisms have value within their limits — but they remain, ultimately, the company judging itself, which no sound legal system has ever accepted as sufficient.
Democracy vs. the Algorithm: Which Is More Just?
Here we should be honest about Plato’s original question, because he was not entirely wrong. Does digital democracy — giving every voice equal access on the platforms — produce a space more just and more truthful than the algorithm?
Not necessarily. Unmanaged digital democracy has produced spaces where the loudest voice prevails over the most accurate, the most sensational over the most precise, the fastest to anger over the most considered. The public in an open digital forum does not vote on truth — it votes on what confirms what it already wants to believe. And this is precisely what Plato warned against when he saw Athenian democracy as an instrument for reinforcing collective illusion rather than dissolving it.
The problem, then, is not choosing between chaotic democracy and governing algorithm. The problem is that both, in their current forms, lack what Plato called philosophical education: the capacity to distinguish right from wrong, public interest from private interest, information from noise.
The only difference between Plato’s dilemma and ours is scale. Athens held a few thousand citizens. The digital city holds billions of individuals, receiving their information, forming their opinions, and building their political identities inside a space managed by companies whose executive boards number only in the dozens.
Digital Governance Initiatives: What Is Being Tried Now
The picture is not entirely bleak. There are serious attempts to build more accountable digital governance frameworks, though they remain in early stages.
The European Union’s Digital Services Act (DSA), which came into full effect in 2024, requires major platforms to be transparent about recommendation algorithms, allows users to opt out of personalized recommendations in favor of chronological ordering, and mandates independent assessments of the social risks their algorithms create. The EU’s General Data Protection Regulation (GDPR) provides additional leverage over data practices.
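The user-facing choice the DSA mandates is simple to state in code. This is a sketch with a made-up post structure: the same feed ordered two ways, where “engagement” stands in for whatever personalized score the platform computes.

```python
from datetime import datetime

posts = [
    {"id": "a", "posted": datetime(2024, 5, 1, 9),  "engagement": 0.2},
    {"id": "b", "posted": datetime(2024, 5, 1, 12), "engagement": 0.9},
    {"id": "c", "posted": datetime(2024, 5, 1, 15), "engagement": 0.5},
]

def chronological(feed):
    """Opt-out mode under the DSA: newest first, no profiling."""
    return sorted(feed, key=lambda p: p["posted"], reverse=True)

def personalized(feed):
    """Default mode: ranked by the platform's predicted engagement."""
    return sorted(feed, key=lambda p: p["engagement"], reverse=True)

print([p["id"] for p in chronological(posts)])  # ['c', 'b', 'a']
print([p["id"] for p in personalized(posts)])   # ['b', 'c', 'a']
```

The regulatory significance is not the sorting itself but who controls the sort key: in the first mode it is a public, inspectable rule; in the second it is the platform’s private model.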
Meta’s Oversight Board is a semi-independent body with authority to review content removal decisions and issue binding rulings in specific cases. Its success has been limited, but its existence represents a precedent in platform accountability.
Explainable AI (XAI) is a research field aimed at making algorithmic decisions legible in terms humans can understand, replacing the current “black box” with something approaching a reasoned judgment. It remains largely confined to research settings, but it is essential infrastructure for any genuine accountability regime.
The Center for Humane Technology, led by Tristan Harris and others, pressures both companies and governments to redesign platforms around measures of human well-being rather than engagement metrics alone.
All of these initiatives move in the right direction. But none yet addresses the foundational question: by what right does a private company operating from its headquarters in California hold the authority to determine what may be said and what may be heard in the global digital city?
Plato’s Cave and the Problem of Power: A Fourth Reading
In our first reading of Plato’s cave, we found the problem to be structural and social. In the first article of this series, we found it to be a socially costly choice. In the article on the body, we found that the chain had become voluntary. Now, seen from the angle of power, the image of the cave looks different for the fourth time.
In the original cave, figures stand behind the prisoners carrying objects before the fire, casting their shadows on the wall. Plato does not identify them or explain their motives; he leaves them deliberately obscure. In our digital cave, we know who stands behind the fire: an engineering team at a private company, receiving its incentives from a board of directors, which receives its pressures from shareholders, who want a return on investment. A chain of economic interests, each link doing what seems rational within its immediate incentives, producing a cumulative outcome that no one in particular wants and no one can stop.
The Platonic philosopher king was responsible for the whole city and for its long-term future. The governing algorithm is responsible for the next financial quarter. This distance in the horizon of responsibility may be the single most troubling feature of the digital city we have begun to inhabit.
Conclusion: Who Governs Those Who Cannot See That They Are Governed?
Plato did not propose the philosopher king because he loved tyranny. He proposed it because he saw that governing without genuine knowledge of the common good produces injustice regardless of the ruler’s intentions. His problem with Athenian democracy was not that it was governance by the people — it was that it was governance by passion through the people. The rule of the mob.
We live today in a city governed by a system that does not know it governs, operated by people who cannot see the full scope of what they have built, subject to a symbolic accountability that does not rise to the level of the actual power exercised. This is not a conspiracy. It is something more unsettling: the natural outcome of building systems of extraordinary complexity from narrow economic incentives, without developing governance frameworks adequate to what those systems become.
The question that Plato never answered — and that we still lack the tools to answer — is this: how do you govern a city in which billions of people live inside a space with no geographic borders, no clear national sovereignty, and no written social contract? The European regulatory experiment, the self-governance initiatives, the attempts to transfer power to users — all of these are beginnings. None of them is an answer.
In the next article of this series — The Third Simulation: Art in the Age of Generative AI — we move from political power to a different kind of authority: the power to determine what is beautiful and what is creative, when the algorithm becomes the artist.
References
- Plato. The Republic, Books V–VII — The Philosopher King. (See our article: Plato’s Cave: A Late Reading)
- Chaslot, Guillaume. “How YouTube’s Algorithm Distorts Reality.” The Guardian, 2019. theguardian.com
- The Social Dilemma. Dir. Jeff Orlowski. Netflix, 2020.
- Meta Oversight Board. Annual Report 2023. oversightboard.com
- European Commission. Digital Services Act — Overview. digital-strategy.ec.europa.eu
- Harris, Tristan. Testimony to the U.S. Senate Commerce Committee. June 2019.
- Zuboff, Shoshana. The Age of Surveillance Capitalism. PublicAffairs, 2019.
- Also in this series: The Body Is the New Cage: Escaping Biology Through Technology
- Also in this series: The Digital Cave: Why We Choose Shadows Again
- Related: Smart Cities: Is Humanity Ready for Life in the Future?
- Related: The AI Bubble: Technical Reality and the Illusion of Continuity