Tanner Lecture 1. Governing the Algorithmic City

“We are entering an age when the power of regulation will be relocated to a structure whose properties and possibilities are fundamentally different. … one form of power may be destroyed, but another is taking its place. Our aim must be to understand this power and to ask whether it is properly exercised.”

Lessig, Code 2.0

View the talk here, and read the paper here.

I’m not a fan of slides. AI art is ethically—probably also legally—dubious, and imo anyway getting a bit naff. Philosophers like handouts. But it’s not cool to print up ‘00s of handouts. So let’s try this.

A handout is a prospectus, guide-rope, and receipts. You can have a quick flick through to see what’s coming if you like. If something I say is hard to follow, you should find a clear counterpart here to check against. And in the Q&A, you’ll have something to refer back to when composing questions or comments.

But for the most part, if all is going well then you shouldn’t need to look at this much during the lecture. After all, if you’re here in person, then wouldn’t it be nice not to look at a screen for a change? Hi!

0. Prologue

How can philosophers help us understand human values in the age of AI? No monopoly on understanding of either topic. And recently, some philosophers writing about AI have strained moral, political and philosophical credulity, promising VC philanthropy a 10^48 return on their moral investment.

Philosophy is best not when it tells you what to do, but when it is used to help democratic publics better understand the urgent challenges we face, so that we can make better decisions about what to do, together.

We can help explain why some new use of a new technology feels intuitively wrong, and whether we should trust our gut. We can zero in on what is genuinely normatively new, and what is old wine in new bottles. And we can articulate positive ideals to inspire social and political change.

This works best if we draw not on speculation about possible futures, but on a deep understanding of those technologies—not just underlying mathematical and engineering principles, but even more importantly technology’s role in society. This means drawing on more empirical disciplines, and producing work that is legible and useful to them. That’s what I think normative philosophy of computing should be, and it’s what I have tried to do. Here we go.

1. Introduction

Political philosophy, ultimately, is about how to live together. It depends on properly understanding our social relations. But it is yet to adequately address social changes induced by computing, intensified by AI. Here are my aims in this lecture:

(1) Develop analytical tools, distilled from empirical literature, to pinpoint the key change that political philosophy must recognise: the rise of algorithmically-mediated social relations;

(2) Show how algorithmic intermediaries introduce new and intensified power relations, and illuminate both the nature and justification of that power;

(3) Show how justifying power in these altered social relations demands first-order advances in political philosophy.

2. The Algorithmic City

The Algorithmic City = network of social relations mediated by algorithmic intermediaries (to be super clear, ≠ smart cities).

Social relations = stable patterns of communication and interaction.

Intermediaries = go-betweens, conveying information or action from one to another.

‘Algorithmic’ as synecdoche: tip of a spear including other software and hardware, human labour, resources etc. And metonym: the functionally-agential dimension of a computational system (representation of environment + goal-directed action => outcome). Not just AI, also search, blockchain, hash-matching etc.

Many of our social relations now partly or entirely constituted by algorithmic intermediaries. Includes social media, messaging, generative AI, e-commerce, 2-sided markets, search, app stores, security, management, recruitment.

It’s a lot! Especially focused on intermediaries for stable, high-stakes interaction, with greater degree of adaptive influence. Less interested in content-independent ‘pipes’.

Platforms are paradigmatic algorithmic intermediaries. Not all algorithmic intermediaries are platforms.

Not all internet companies are algorithmic intermediaries, and vice versa; algorithmic intermediaries mostly tools, not all algorithmic tools intermediaries (think predictive analytics in e.g. justice and welfare systems).

Mostly private, for-profit, but not just Big Tech. Aiming at a more fundamental model of social relations realised in many different ways now and in future, e.g. also techno-authoritarianism, little tech, fediverse, decentralised finance, web3, metaverse, digital democracy, AI as universal intermediary.

3. The Nature of Intermediary Power

In evaluating changed social relations, political philosophy’s first (not only) question should be: how do they change relations of power?

Power = (roughly) one-way control, exercised by direct effects, shaping options, or shaping beliefs and desires.

Algorithmic intermediaries exercise power over us (e.g. search and 2-sided markets making a business or breaking it; DALL-E/ChatGPT refusing to answer prompts; the productive power of e.g. virtual worlds; search and perhaps generative AI shaping beliefs and desires). They shape power relations between us (e.g. social media governing manipulation and deception; generative AI enabling mass influence campaigns). And, over time, they reshape the social structures constituted by our social relations—call this power through (e.g. political communications, commerce, labour, culture—e.g. art—education, sociality; never one-way or deterministic, but still…).

New and intensified power relations raise new practical questions. But nature of algorithmic intermediary power invites theoretical progress too.

Extrinsic power shapes social relations by providing physical and institutional environment for unmediated interaction. E.g. architecture, marketplace of ideas, property laws.

Contrast this with intermediary power. If extrinsic power governs social relations like the river’s banks govern the water (Dewey), then intermediary power is more like bonds holding water molecules together.

It governs social relations from the inside out, shaping which kinds of social relations are possible or impossible, frustrating or encouraging behaviours through design, pre-empting choices and enabling in-principle perfection of coercive enforcement, with access to others as a cudgel.

Algorithmic intermediaries bring intermediary power to the fore. But it is not entirely new, of course (in one sense, nothing ever is)! Law also exercises intermediary power (e.g. constitutive rules). Analogue intermediaries (e.g. matchmakers, financial exchanges) do too; so do social structures like language/culture (cf. Foucault), and artefacts (cf. Mumford, Winner).

Analytical political philosophy should pay more attention to intermediary power in general.

But algorithmic intermediary power is interestingly new, because it extends and potentially perfects the form. It doesn’t merely determine whether some practice is legally recognised, but whether the underlying behaviour is literally possible. It operates at a scale and speed, and with an ability to dynamically update, far beyond analogue intermediaries (e.g. automated content moderation, adaptive governance of generative AI). It is agential, and so subject to a kind of direct normative evaluation unsuited to non-agential social structures like language and culture, and to non-agential artefacts.

4. Justifying Intermediary Power

Some think naming power is enough to criticise it. We can do better, drawing on values of liberty, equality, collective self-determination (my lodestars).

If A has power over B, that presumptively undermines B’s liberty by subjecting them to risk of (wrongful) interference by A. If A has power over B, then they presumptively stand in hierarchical, unequal, social relations. And if A has power over B, C and D, then, presumptively, [A, B, C, D] are not collectively self-determining.

So should we just eliminate algorithmic intermediary power?

No. People will choose to relate to each other/the world through algorithmic intermediaries, so their power over some is unavoidable. And (this is the most important thing) if they do not use their power wisely to govern power relations between us, they will enable oppression, manipulation, inequality. And there’s a huge opportunity: algorithmic intermediaries can reshape ‘precipitates of the past’—what if that were used to advance our values?

Algorithmic intermediaries don’t just exercise power over us, they govern us (make, implement, and enforce constitutive norms of an institution or community). And we need them to do so.

It is urgent, then, to determine what justified algorithmic governance would look like.

First, substantive justification is necessary. But on its own it can only override presumptive objections from liberty, equality, and collective self-determination. To resolve those objections, power must also be used in the right ways, by those with proper authority to do so.

In other words, we must ask: who gets to exercise power, how, and to what ends? And we cannot just cut and paste existing answers to these questions into the Algorithmic City—algorithmic intermediaries introduce an ‘interestingly-new’ modality of governance.

Political theories tailored for a different modality of governance may not transfer. Facts about governance that we have taken to be parametric prove to be contingent. This suggests some interesting first-order philosophical discoveries lie ahead. Here’s a dégustation menu (much more on these in the paper).

5. Who: Pre-Emption and Authority

Analytical political philosophy has overwhelmingly focused on justifying the state’s exercise of coercive extrinsic power through law. Authority is interpreted as the right to coerce or the duty to obey the law (e.g. Rawls, Raz, Weber, Locke).

But algorithmic intermediaries can govern without law or coercion. They constitute the social relations they govern, so can simply not enable non-compliance: ‘pre-emptive governance’ (Zittrain; see also Hildebrandt, Lessig), or ‘technological management’ (Brownsword). This can be personalised, updated, and dynamically refined. Regulation by design has always existed—but algorithmic intermediaries are vastly more malleable and dynamic.

Imagine: Pre-Emptopolis, an Algorithmic City governed wholly through pre-emption—undesired options are anticipated, prevented. Coercion and law obviated. No option to disobey the law because no law and no non-compliance! But same questions of authority apply.

Authority therefore fundamentally concerns not coercion, but right to govern. Coercion and law one modality. Pre-emption another. Contingent fact that we have mostly governed through law has shaped political philosophy; algorithmic intermediaries invite us to excavate underlying values to find a more comprehensive theory of authority that can justify right to govern independent of modality.

I suspect algorithmic governance makes realising justified authority, other things equal, harder than extrinsic governance by law!

6. How: Procedural Legitimacy and Algorithmic Governance

Notwithstanding present technical limitations, Algorithmic City could in theory perfect post-hoc enforcement as well as pre-emptive governance (at least faces fewer hard feasibility constraints). Intermediary power enables comprehensive surveillance/capture; algorithmic advances (e.g. in LLMs and computer vision) support automated enforcement.

Our theories of procedural legitimacy have all been tailored for, and derived from, the preeminence of law as a modality of governance. But it is just one among others. So we must, again, excavate principles underpinning procedural legitimacy, and not take law as foundational.

Don’t reify legal proceduralism, but appeal to underlying idea of limiting power, through e.g. authorship, publicity, and especially resistibility.

Shows that law has two ‘accidental virtues’: for law to be effective, some minimum degree of publicity necessary. Algorithmic governance inherently opaque, and affords invisibility, can operate in background (e.g. copyright enforcement).

And law in the analogue city affords resistibility. Public enforcement enables public disobedience; public spaces support mutual trust, unmediated communication, collective action.

In the Algorithmic City disobedience must be actively enabled. Governance can be private, personalised. Trust harder to build. Surveillance and atomisation default. Resistibility crucial for procedural legitimacy’s role in limiting power, may also be ground or at least evidence of authority.

7. What: Intermediary Power and Justificatory Neutrality

Justificatory neutrality = justifying governance without excessive appeal to substantive moral commitments that subjects may reasonably reject.

Outcome neutrality is clearly unattainable; justificatory neutrality might also be, but easier to achieve with extrinsic than intermediary power: can agree on external parameters, then allow unmediated interaction within those boundaries.

Extrinsic governance allows rulers to be silent on some matters (implies permission, not endorsement).

Intermediary power in the Algorithmic City makes justificatory neutrality still harder to achieve. Must decide on everything, because everything that happens in the Algorithmic City is enabled by the intermediaries that constitute it.

In the analogue city for any behaviour X, the default is freedom to X; in the Algorithmic City the default is inability to X, it is possible only if enabled. This implicates those who govern the Algorithmic City in its denizens’ behaviour to a greater degree. Intermediary power removes option of silence.

8. Takeaways

A lot here. In the essay I consider objections based on exit, stakes, and regulation. But for now I hope you remember this:

1. The Algorithmic City is the network of algorithmically-mediated social relations; political philosophers have ignored its arrival.

2. Algorithmic intermediaries exercise intermediary power over mediatees, shaping power relations between them and reshaping society through them. Intermediary power too has been overlooked.

3. New power relations aren’t necessarily objectionable—but algorithmic governance is presumptively so, and must be justified against standards of who, how, and what. In particular, substantive justification alone is insufficient.

4. The differences between intermediary algorithmic governance and extrinsic legal governance necessitate rethinking how we understand the who/how/what questions in political philosophy—and en passant imply that justifying algorithmic governance might be, other things equal, harder than justifying its legal counterpart.

5. This should ground real concern about aspirations to increase the role of algorithmic governance in our lives—whether by aspiring to value-aligned ‘AGI’ or to unachievable and undesirable (insofar as irresistible and invisible) ‘perfectible governance’.

Answering the what/how/who questions for algorithmic intermediaries is the work of a new and much-needed subfield of normative philosophy of computing. In Lecture 2 I apply this methodology to evaluating the algorithmic distribution of attention in the digital public sphere. But that will just be a start on a much bigger project, which extends far beyond my work.

This cannot be a conclusion. There is so much to do.

Thanks for your attention!

Special thanks to the Tanner Foundation, Stanford HAI, Stanford McCoy Family Center for Ethics in Society, Rob Reich, Marion Fourcade, Arvind Narayanan, Josh Cohen and Renee Jorgensen, and most especially to Lu for her encouragement and endurance while I’ve been writing this, and to Moss and Ash for being delightfully ungovernable, algorithmically or otherwise.

I live and work on Ngarigo, Ngunnawal and Ngambri lands, and I acknowledge the Traditional Custodians of Country there and throughout Australia. I pay my respects to Elders past, present and future, and I support a voice to parliament. Always was, always will be.

View the Tanner Lecture.

Read the paper.

Follow: Mastodon, Other Place, MINT Lab