Tanner Lecture 2. Communicative Justice and the Distribution of Attention
“The ties which hold [people] together in action are numerous, tough and subtle. But they are invisible and intangible. We have the physical tools of communication as never before. The thoughts and aspirations congruous with them are not communicated, and hence are not common. Without such communication the public will remain shadowy and formless, seeking spasmodically for itself, but seizing and holding its shadow rather than its substance.”
Dewey, The Public and its Problems
New to handouts? I’m going for unmediated interaction in these lectures, so no slides. This is a prospectus, guide-rope, and set of receipts (see the full paper for references). But you shouldn’t need to look at it much until Q&A.
0. Prologue
Algorithmic intermediaries = sociotechnical systems using algorithmic tools, such as ML, to enable people to communicate with and act on other people, in ways that the intermediary can dynamically shape (i.e. they are not content-independent pipes). Platforms paradigmatic but not exhaustive.
In Lecture 1 I proposed a model of how algorithmic intermediaries shape the social relations they partly constitute, and in doing so exercise governing power. Political philosophy faces new questions about how that power can be justified.
This lecture pursues a central case in detail: the governance of the digital public sphere—the digital environment for public communication. (And what a day to be discussing this on… seriously, the first time I presented this was the day Musk’s acquisition of Twitter went through, and now this with Trump and Meta…)
1. Introduction
We can likely agree that the digital public sphere is in poor health. To be clear, it is not all bad! But it has many pathologies, including: epistemic crisis, misinformation, coordinated inauthentic behaviour, bots, silencing, harassment, abuse, manipulation, radicalisation, affective polarisation, arguably even fomenting civil war and genocide…
Algorithmic intermediaries—especially digital platforms (of course social media, but also operating systems/app stores, security and cloud infrastructure)—govern our ailing digital public sphere.
They have a responsibility to do so better, but existing political philosophy offers too little guidance. It assumes that protecting free expression with extrinsic power (legal limits, public spaces) is enough.
That assumption is undermined by intermediaries’ new ability to shape public communication and distribute attention—paradigmatic intermediary power/algorithmic governance, which generates new questions of substantive justification, procedural legitimacy and proper authority.
We need a theory of communicative justice to address them.
(Note: this is a paper about political philosophy, not about the US Constitution.)
2. Algorithmic Intermediaries Govern the Digital Public Sphere
Algorithmic intermediaries (e.g. social media, search, protocols, operating systems, infrastructure) exercise intermediary power over the digital public sphere in three key ways:
1. Architecture: Design of platforms and protocols shapes how users interact (affordances). Do not determine behaviour, but make some acts possible, others impossible, encourage some, discourage others. E.g. visibility of quantitative feedback, context collapse, character limits (Vaidhyanathan, Settle).
2. Amplification: Attention is finite but content is functionally not. Architectures provide signals for recommender systems, which shape which communications we are exposed to online, filtering, amplifying, reducing. E.g. amplification of emotive content including outrage that increases time on platform, ‘do not recommend’ (Gillespie). Everything is pushed or pulled to some degree, no neutral baseline (Keller).
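To make the ‘no neutral baseline’ point concrete, here is a minimal sketch (illustrative only; the names, weights, and scoring rule are my hypotheticals, not any platform’s actual system). Whatever scoring function a recommender uses, including ‘no boost at all’, is a substantive choice about whose communications receive attention.

```python
# Illustrative toy feed ranker. The point: any choice of objective or weights
# (including outrage_boost = 1.0) pushes some content and pulls other content.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    predicted_engagement: float  # hypothetical modelled probability of a click/reply
    is_outrage: bool             # stand-in for "emotive content" signals

def score(post: Post, outrage_boost: float = 1.5) -> float:
    s = post.predicted_engagement
    if post.is_outrage:
        s *= outrage_boost  # engagement objectives tend to reward emotive content
    return s

feed = [Post("a", 0.30, False), Post("b", 0.25, True), Post("c", 0.28, False)]
for post in sorted(feed, key=score, reverse=True):
    print(post.author, round(score(post), 3))
# With outrage_boost = 1.5 the outrage post "b" tops the feed; with 1.0 it
# comes last. Either way, the ranker has distributed attention.
```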
3. Moderation: Platforms remove some content/users, increasingly also relying on AI (Roberts, Gillespie), which in turn depends on harmful outsourcing. Used to be just ‘behind the screen’. Very much in public eye now…
In sum: algorithmic intermediaries shape public communication and distribute attention. They govern the digital public sphere. States also govern there. But algorithmic intermediaries crucial (indeed, states often act through them).
Is social media the true cause of democracy’s plight? Pathologies also due to us.
But this is always true of social pathologies. It is still rulers’ responsibility to address them: (1) they are implicated; (2) they have a distinct capacity to remedy; and (3) the norms they defend and implement matter aside from results. This is the expressive function of law/code.
They must learn how to govern better. But what would ruling well mean? Pathology-specific responses? Not enough: we need a positive ideal. It helps with some pathologies (what should we optimise for?); guides us in trade-offs; and helps answer the how/who questions (what justifies governing power, and what limits it).
A job for political philosophy!
3. From Freedom of Expression to Communicative Justice
But… political philosophy’s default view (from Mill to Habermas): a healthy public sphere is a spontaneous by-product of protecting basic rights to freedom of expression. Its value is grounded in those rights and in its instrumental contribution to democracy.
But now clear (e.g. Wu) that robust protections for free speech online actively undermine digital public sphere (moderation is necessary). And tethering value to democracy is risky: democracy’s general peril can lead to defeatism, or make it easy to shift responsibility.
Moreover: algorithmic intermediaries don’t determine whether people can express themselves. They shape how you can communicate, and they distribute attention. The interests at stake are communicative, not only expressive.
Need a theory of communicative justice (Gangadharan), which includes but goes beyond freedom of expression, and guides intermediary power. Must defend its independent value—not only (though also) as a contributor to democracy.
4. The Currency of Communicative Justice
Why think we need a theory of justice here?
Demands of justice arise when moral equals seek to realise individual and collective goods requiring restraint, coordination, and collective action, which involve benefits and burdens. Failure to appropriately coordinate has costs; but avoiding failure is insufficient—we must achieve those goods in a way that reflects our moral equality.
Seems like a pretty good description of the task facing those governing the digital public sphere.
A theory of communicative justice should therefore identify relevant individual and collective interests that communication promotes (call these communicative interests), and the norms governing their achievement and distribution.
I favour a democratic egalitarian theory of communicative justice. This is my way of unifying the commitment to liberty, relational equality, and collective self-determination described in Lecture 1. But my aim is to build out a schema that can be populated by different substantive theories.
Start with some non-instrumental interests in communication itself. First is the basic interest in being attended to, being acknowledged (recognition). Next: being treated as an equal. Next: being esteemed, the object of effortful engagement (I appreciate your effortful engagement! TY!). Highest: joint action of a vibrant conversation.
But communication is basically instrumental to everything! Instrumental interests are also crucial (though only some are appropriately described as communicative). E.g. three broad categories: informational interests (obvious, but can’t be overstated); coordination and collective action (consent, contract); formation of individual and collective identity and sense of community (Brock). This is not an exhaustive list.
Collective goods: public (non-rivalrous and non-excludable); enjoyed in virtue of membership in a group; (perhaps) irreducibly social. I’ll focus on civic robustness. I think this is actually a non-instrumental communicative collective interest, but won’t overstate the point.
Dewey argued that ‘private’ decisions often impose negative externalities; those affected must unite in response. The public sphere is where the public can find itself. Not just externalities, but governing power, should give rise to civic publics.
This includes transnational (e.g. big tech) and subnational power (e.g. employers). Civic publics are not only reactive; they also lay discursive foundations for collective action. The public sphere is civically robust when it supports the creation of civic publics responding to governing power.
Civic robustness is obviously vital for healthy democracy (Young). But it is an independently valuable complement, a kind of egalitarian bottom-up counter-power that can be first and last resort.
5. Democratic Egalitarian Norms of Communicative Justice
Algorithmic intermediaries should promote communicative interests subject to democratic egalitarian constraints. This = communicative justice. Like justice in the achievement of any important good, it matters in its own right, not only for its contribution to democracy.
Architecture, amplification, and moderation should promote fulfilment of individual interests in (see above): recognition, equal treatment, esteem, dialogue, knowledge, deliberation, coordination and community.
To promote civic robustness, must aim at subordinate goals, e.g. healthy information environment (an irreducibly social good I think); secure private sphere; allocation of collective attention so public can find itself; communications expressing minimal mutual toleration/trust.
This means actually evaluating communications for quality! Of course the devil is in the details—determining how to promote these interests is incredibly hard.
Objection: aren’t people the best judges of what’s in their communicative interests? Well… they only get to choose among the subset of options platforms show them, so there’s always influence. And just predicting clicks gets us where we are today.
But my account is not consequentialist! Aims to promote these interests in a democratic egalitarian way. Comes down again to the what/who/how questions. Start with substantive justification.
What. The first step is to establish a baseline for acceptable communication. This is in many ways no different to longstanding debates over censorship and free speech. The difference is that platforms restrict not *expression* but *attention*. So the stakes are somewhat lower. But we still have communicative rights (grounded in especially relational equality) to participate in civic publics. And perhaps to the audiences we build up through reasonable efforts. Can’t just ban someone because doing so would be optimal or on a whim. We also have a wider range of interventions (such as reduction) available. Full removal may not satisfy necessity constraint. Dubious whether punishment could be a justified basis for ongoing restriction, given relative ease of renewing restriction.
Beyond the baseline, support egalitarian social relations on platform (shape power relations between us in an egalitarian way). This means some familiar things: create fair opportunities to participate in relevant civic publics; govern in ways that can’t be exploited by some to oppress others (cf. flagging as a means of governance). But it also means new requirements: actively reduce and disincentivise abusive, manipulative, deceptive communication (restrict only if necessary). This is about shaping communication and distributing attention (not just about policing boundaries).
Beyond aiming for egalitarian social relations, seek to distribute benefits and burdens fairly. So, e.g. when facing conflicts in the promotion of communicative interests, attend to structural injustice.
More generally: consider (positive) collective attention that is due to amplification/architecture. Currently ‘rich get richer’. Aim instead for equal opportunity for positive collective attention, acknowledging that ability to distribute attention is limited (can’t actually *force* people to pay attention).
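A toy simulation of that dynamic (my illustration; every parameter is made up): when amplification is proportional to attention already received, collective attention concentrates, while a modest uniform ‘exploration’ share spreads opportunity without forcing anyone to pay attention to anything.

```python
# Rich-get-richer vs equal-opportunity amplification, as a simple urn process.
import random

def simulate(rounds: int = 10_000, n_authors: int = 50,
             exploration: float = 0.0, seed: int = 0) -> float:
    """Return the share of total attention held by the top five authors."""
    rng = random.Random(seed)
    attention = [1.0] * n_authors  # everyone starts with one unit
    for _ in range(rounds):
        if rng.random() < exploration:
            winner = rng.randrange(n_authors)  # uniform chance for anyone
        else:
            # amplification proportional to attention already held
            winner = rng.choices(range(n_authors), weights=attention)[0]
        attention[winner] += 1.0
    return sum(sorted(attention, reverse=True)[:5]) / sum(attention)

print(f"top-5 share, pure rich-get-richer: {simulate():.0%}")
print(f"top-5 share, with 20% exploration: {simulate(exploration=0.2):.0%}")
```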
Distribute burdens fairly too: e.g. cost of maintaining a tolerable communicative environment. Individual enforcement (e.g. ‘just block people you don’t like’) imposes costs on those already structurally disadvantaged (Brock, Flowers). Centralised governance spares the disadvantaged those additional costs.
Who. Promoting communicative interests and aiming for relational or distributive equality all involve making and imposing substantive value judgments—setting, implementing and enforcing the constitutive norms of a community. What gives platforms the right to do that? Isn’t it impermissible ‘viewpoint discrimination’?
First response: platforms inescapably govern. They cannot avoid making and imposing norms, shaping what we see and don’t see. Their right to govern, such as it is, is grounded in part in the fact that they cannot avoid governing.
Second response: this definitely means focusing on promoting primary goods—communicative interests, like those in not being deceived and manipulated, or in civic robustness, that are worth having whatever else one wants or believes. It also means not just promoting things one happens to agree with. Governing power must be used with restraint. But platforms can’t *not* use it.
Objection: Viewpoint discrimination can be avoided if platforms adopt content-independent distribution norms like reverse chronological ordering.
Response: reverse chronological ordering is content-independent but not actually fair (sigh: time zones). And algorithms are just one part of a complex whole: architecture and moderation would still shape public communication and distribute attention—just clumsily, leading to avoidable communicative injustice, failing to fulfil communicative interests and realise civic robustness, and missing the opportunity to scaffold individual preferences and direct collective attention, leaving the public ‘shadowy and formless, seeking spasmodically for itself, but seizing and holding its shadow rather than its substance.’
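A trivial illustration of the time-zone point (assumptions and numbers mine): under reverse-chronological ordering, visibility tracks when you post relative to your audience’s waking hours, not what you say.

```python
# Toy model: readers see only posts made within `window` hours before they
# check the feed. Content plays no role; posting time does all the work.
def readers_reached(post_hour_utc: int, reader_hours_utc: list[int],
                    window: int = 2) -> int:
    return sum(0 <= (h - post_hour_utc) % 24 < window for h in reader_hours_utc)

audience = [13, 14, 15, 20, 21, 22]   # readers active afternoon/evening UTC
print(readers_reached(14, audience))  # mid-afternoon post: reaches 2 readers
print(readers_reached(3, audience))   # 3am UTC post: reaches 0 readers
```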
Better response to the impossibility of neutrality = aim for proper authority. What would that look like here?
Democratic authorisation clearly desirable in principle. But algorithmic intermediaries don’t overlap with specific polities, so may call for (genuine) platform democracy (complete with protections for basic rights).
That’s very unlikely at present; but we should be cautious about democratic states playing this role. One key dimension of civic robustness is holding the state to account, so states should mostly avoid direct involvement in implementation. See e.g. India, Texas.
Demands creative institutional design. Best for democratic polities to set out a general account of communicative justice; non-state actors should then implement it, with independent oversight (a bit like the EU approach).
But which non-state actors? Existing private entities are obviously poorly positioned. Power is radically concentrated in a few capricious hands. They have huge incentives to join forces with state power rather than support communicative justice. Private enforcement of public norms has a very bad record (e.g. copyright). Platforms depend on engagement-at-all-costs and surveillance, both at odds with communicative justice. Ways forward?
A. Outlaw surveillance advertising, aligning platforms’ incentives with people’s (many reasons to do this).
B. Internalise the externalities. Optimising for short-term engagement is rational only because its costs are externalised; as with fossil fuels, those who profit should bear the costs.
While waiting for proper democratic authorisation, no choice but for platforms to try to implement communicative justice as best they can themselves—authority can be grounded in capacity pro tem if democratic authorisation infeasible (and if intermediaries suitably restrained in use of power).
How. Groundbreaking literature on content moderation has given much weight to individual due process (e.g. Suzor). Focusing on communicative justice more broadly—shaping public communication, distributing attention—invites a more systemic approach (cp. Douek).
For example: many communicative harms online are collective and/or stochastic (cumulatively constituted by, or probabilistically caused by, many individually non-harmful communications). These are best addressed globally rather than individually, through algorithmic reduction rather than restriction. In such cases of ‘shadowbanning’, individual transparency may be self-defeating. But invisible governance is risky, so collective oversight is essential.
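A back-of-the-envelope sketch of why stochastic harms invite class-level reduction rather than item-level removal (all numbers hypothetical):

```python
# Each exposure is individually "non-harmful" (tiny probability of contributing
# to harm), so removing any single item achieves almost nothing; reducing
# amplification across the whole class scales expected harm down directly.
p_harm_per_view = 0.001   # hypothetical per-exposure probability of harm
views_per_item = 100
n_items = 10_000

baseline = p_harm_per_view * views_per_item * n_items
remove_one = p_harm_per_view * views_per_item * (n_items - 1)
reduce_half = baseline * 0.5  # halve amplification of the whole class

print(f"baseline expected harms:        {baseline:.1f}")     # 1000.0
print(f"after removing one item:        {remove_one:.1f}")   # 999.9 (barely moves)
print(f"after 50% class-wide reduction: {reduce_half:.1f}")  # 500.0
```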
More generally, algorithmic intermediaries govern, and publicity is necessary for legitimacy: this demands oversight at the systemic level.
Similar insights relevant at the technological level. Design of intermediaries should focus on global not local optima. Collective goods like civic robustness and healthy information environment likely can’t be realised by optimising for individual preferences. Interesting implications for recsys design, but also tells against market and decentralisation solutions—they support individual curation, not more effective/just governance.
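To illustrate the local/global contrast (a stylised example; the preference scores and the ‘shared attention’ constraint are my hypotheticals): ranking each user’s feed purely by their own predicted preference can leave no shared object of attention at all, while a globally constrained allocation gives up a little aggregate preference to secure one.

```python
# Local (per-user) optimisation vs global optimisation under a crude collective
# constraint: at least two users attend to the same item, as a rough proxy for
# a shared information environment.
from itertools import product

users = ["u1", "u2", "u3"]
items = ["civic", "niche_a", "niche_b"]
pref = {  # predicted individual preference scores (made up)
    "u1": {"civic": 0.6, "niche_a": 0.9, "niche_b": 0.1},
    "u2": {"civic": 0.6, "niche_a": 0.1, "niche_b": 0.9},
    "u3": {"civic": 0.7, "niche_a": 0.2, "niche_b": 0.3},
}

# Local optimum: each user simply gets their individually top-scoring item.
local = {u: max(items, key=lambda i: pref[u][i]) for u in users}

# Global optimum: maximise total preference subject to the shared-attention
# constraint (brute force is fine at this scale).
best, best_total = None, -1.0
for assign in product(items, repeat=len(users)):
    if max(assign.count(i) for i in items) < 2:
        continue  # no two users share an item: constraint fails
    total = sum(pref[u][i] for u, i in zip(users, assign))
    if total > best_total:
        best, best_total = dict(zip(users, assign)), total

print("local :", local)  # everyone in their own niche; nothing is shared
print("global:", best, round(best_total, 2))  # small preference cost buys common ground
```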
6. Conclusion
Algorithmic intermediaries govern the digital public sphere through architecture, amplification, and moderation, shaping public communication and distributing attention.
Existing political theories focus on freedom of expression and assume that extrinsic governance is sufficient for a healthy public sphere to emerge spontaneously. This is inadequate. We are inescapably building the digital public sphere. We must do so better.
We need a positive ideal to guide us, a theory of communicative justice: an account of our communicative interests, and the norms by which they can be appropriately promoted.
I advocated for a democratic egalitarian theory, and argued it matters in its own right, not just because it is instrumental to democracy. Whether social media is causing democracy’s troubles is in one sense a red herring—algorithmic intermediaries have responsibilities to realise communicative justice even if democracy’s plight is not their fault.
I gave an account of the substantive goals communicative justice should aim at: promoting communicative interests, including civic robustness, in ways that are consistent with basic communicative rights and that fairly distribute the benefits and burdens of doing so.
I argued that while authority to govern should derive from a democratic source, it can be justified pro tem on grounds of capacity married with restraint.
And that the collective, systemic nature of communicative justice as a value invites an approach to the ‘how’ question that focuses on global rather than local optimisation, in both regulation and technological design.
This will, I hope, provide a schema for a broader debate on communicative justice, illustrating the necessity of making first-order progress in political philosophy to understand how to justify governing power in the Algorithmic City.
Thanks for your attention!
Special thanks to the Tanner Foundation, Stanford HAI, Stanford McCoy Family Center for Ethics in Society, Rob Reich, Marion Fourcade, Arvind Narayanan, Josh Cohen and Renee Jorgensen, and most especially to Lu for her encouragement and endurance while I’ve been writing this, and to Moss and Ash for being delightfully ungovernable, algorithmically or otherwise.
I live and work on Ngarigo, Ngunnawal and Ngambri lands, and I acknowledge the Traditional Custodians of Country there and throughout Australia. I pay my respects to Elders past, present and future, and I support a voice to parliament. Always was, always will be.
I’m currently revising the essay; will post a link back here when it’s ready.
Follow: Mastodon, Other Place, MINT Lab