Communicative Justice and the Distribution of Attention V.2

The ties which hold [people] together in action are numerous, tough and subtle. But they are invisible and intangible. We have the physical tools of communication as never before. The thoughts and aspirations congruous with them are not communicated, and hence are not common. Without such communication the public will remain shadowy and formless, seeking spasmodically for itself, but seizing and holding its shadow rather than its substance. Till the Great Society is converted into a Great Community, the Public will remain in eclipse. Communication can alone create a Great Community.

Dewey, *The Public and its Problems*, 170.


This is a handout for a longer paper, which you can read here. It’s a revised version of my second Tanner Lecture, which you can view here.

1. Introduction

Moral panics come and go, but we can likely agree that the digital public sphere is in poor health.

Digital platforms shape communication and distribute attention. They govern the digital public sphere. They have a responsibility to do so better.

Existing political philosophy offers inadequate guidance. In particular, theories of freedom of expression are maladapted to describing how to shape communication and distribute attention.

We need instead a theory of communicative justice. I introduce and defend one such theory, focusing both on the currency of communicative justice and on the norms that, by constraining how we may promote that currency, manifest recognition for our status as moral equals.

2. Digital Platforms Shape Communication and Distribute Attention

The public sphere = communication that is not reasonably expected to remain private. We can’t uniquely distinguish a ‘political’ public sphere from everything else (politics gets into and onto everything).

Pathologies of the digital public sphere (these aren’t mutually exclusive):

Epistemic pollution, i.e. everything that makes it hard to form accurate beliefs. Includes misinformation, disinformation, conspiracy theories, epistemic fragmentation, bots, flooding the zone etc.

Abuse, i.e. speech that directly harms by denigrating or degrading the target. Includes individual harassment, silencing, brigading, some forms of collective harm (see below).

Manipulation, i.e. practices of using communication to influence another’s beliefs and desires in ways that compromise their autonomous agency. Includes intentional 1:1 manipulation but also stochastic manipulation that operates over populations, e.g. affective polarisation arising from platform design.

Platforms jointly constitute much of the digital public sphere. Platform design is in part responsible for these pathologies, and is an essential ingredient in remedying them, because platforms shape communication and distribute attention.

Platforms define the options for mass communication—through every stage from identity verification, to content creation, to interaction and sharing. Call all of this together *platform architecture*.

They implement Moderation policies aimed at preventing, mitigating or punishing non-compliant behaviour. I think moderation should be defined as *enforcement* of content policies (i.e. not just curation).

And they distribute attention, determining how content spreads online, through their Curation practices. In particular, they distribute collective attention, i.e. when a post goes viral, or is otherwise being participatorily engaged with by many people.

Curation = (roughly) amplification and demotion. We often focus on one but not the other. We also often talk only about algorithmic amplification. This can be misleading: amplification is never *only* algorithmic. It sometimes derives from platform architecture too (e.g. using the social graph to distribute content), and it always relies on signals from architecture, i.e. how the platform measures engagement.

An attempt to define algorithmic amplification: it comprises shortlist generation (filtering) and ranking. Maximal (algorithmic) amplification: X is on everyone’s shortlist and ranked at the top. Zero (algorithmic) amplification: X is viewable only if intentionally sought out (e.g. by viewing someone’s profile). Algorithmic amplification and demotion are contraries—demotion = lower ranking, fewer shortlists. Demotion is sometimes described as a species of content moderation; if it is done as an enforcement measure, then sure; but it can also be a means of curation, not moderation.
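To make the two stages concrete, here is a minimal sketch of a filter-then-rank pipeline (the names, scoring, and demotion weight are all hypothetical illustrations, not any platform’s actual system):

```python
# A toy filter-then-rank pipeline: shortlist generation, then ranking.
# All names and scoring details are hypothetical and illustrative.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement: float             # signal supplied by platform architecture
    demotion_weight: float = 1.0  # < 1.0 demotes: lower rank, fewer shortlists

def eligible(post: Post, following: set[str]) -> bool:
    # Stage 1 filter: here the social graph determines the candidate pool,
    # illustrating how architecture feeds into algorithmic amplification.
    return post.author in following

def amplify(posts: list[Post], following: set[str], k: int = 3) -> list[Post]:
    shortlist = [p for p in posts if eligible(p, following)]
    # Stage 2: rank by an engagement-based score; demotion is just a
    # down-weighting applied at this stage.
    shortlist.sort(key=lambda p: p.engagement * p.demotion_weight, reverse=True)
    return shortlist[:k]

feed = amplify(
    [Post("a", 9.0), Post("b", 7.0, demotion_weight=0.1), Post("c", 5.0)],
    following={"a", "b", "c"},
)
print([p.author for p in feed])  # ['a', 'c', 'b']: b is demoted below c
```

On this way of carving things up, maximal amplification would put a post on every user’s shortlist at rank one; zero amplification would keep it off every shortlist, leaving it viewable only if sought out directly.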

Architecture, moderation and curation choices by platforms shape communication and distribute attention. They do not determine either. We are not passive. We also decide how to allocate our attention, and how to communicate. But platforms shape what is possible, they incentivise and penalise, they encourage and discourage.

The pathologies of the digital public sphere are undoubtedly due both to us and to platform design. Waving a magic wand and making the platforms perfect (whatever that would mean) would not fix the problem. But platforms do govern the digital public sphere. They are responsible for making, implementing and enforcing the norms that should channel our behaviour away from these pathologies. They are undoubtedly implicated in them, and must be involved in a solution (even governments will have to work through platforms). So they have a responsibility to do better. But what would that look like?

Pathology-specific responses? Not enough: we need a positive ideal. It helps with some pathologies (telling us what to optimise for); it guides us in trade-offs; and it helps answer the how/who questions (what justifies this power, and what limits it?).

A job for political philosophy!

3. From Freedom of Expression to Communicative Justice

But… political philosophy offers inadequate resources. Theories of the public sphere are attractive but too embedded within deliberative democratic theory, and explicitly fail to describe how we should shape communication and distribute attention (they talk about the value of the public sphere, but not how to make it).

Freedom of expression is our most familiar resource. But it is inadequate. First, as widely observed, the pathologies of the digital public sphere are mostly due to giving too much weight to free expression (especially understood as free speech).

Second, the focus of freedom of expression is maladapted to our needs. It aims to justify rights that the state not interfere with our speech. We’re interested in non-state actors, and non-intervention is not an available option. They inescapably intervene; the question is how to do so better.

Third, free expression is appropriately grounded in fundamental principles about individual self-sovereignty. States’ ability to interfere with speech should be strictly curtailed, in part on similar grounds to their ability to interfere with bodily integrity. What I am uniquely placed to control, others should have a presumptively limited ability to justifiably impinge upon.

Platforms control access to an audience, and shape how one may communicate with them. Any communicative rights here must be positive rights (to be given access), not negative ones (against interference). The ideal of self-sovereignty is less relevant here.

Imagine people had no language for communicating beyond their family group, and you invent a tool that translates between family languages. You can monitor how this tool is being used, how and when it fails, what its impacts are, and you can update the tool accordingly. The norms governing what this tool should do (and who should control it) could not be drawn from a theory of freedom of expression alone.

Fourth, theories of freedom of expression are too individualistic. They must struggle, for example, to accommodate stochastic and collective harms. Stochastic harm = each token raises the probability of some harm coming about; enough tokens make it very likely. Collective harm = each token contributes a small, intrinsically insignificant harm; enough tokens make for a serious harm in the aggregate. In each case, the individual’s act is insufficiently significant to merit the serious intervention of limiting their speech; should we therefore assume stochastic and collective harms are simply unavoidable byproducts of people’s speech rights? No!
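A toy model may make the structure of stochastic harm vivid (the numbers and the independence assumption are purely illustrative): suppose each of $n$ tokens of some type of speech act independently brings the harm about with small probability $\varepsilon$. Then

$$\Pr(\text{harm}) = 1 - (1 - \varepsilon)^n$$

With $\varepsilon = 0.001$ and $n = 10{,}000$, $\Pr(\text{harm}) \approx 1 - e^{-10} \approx 0.99995$: no individual token makes a significant difference, yet in aggregate the harm is all but certain.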

Should we argue, then, for an expansion of theories of freedom of expression so that they can accommodate these different aspects of communication? Maybe, but seems like false advertising. Communication = expression + attention. Let’s be explicit about our focus. And we need theories of freedom of expression as well as of communicative justice—free speech rights are essential for self-sovereignty.

We *can* get much out of theories of both the public sphere and of freedom of expression—they can help articulate the interests at stake in shaping communication and distributing attention. But they can’t give us much guidance in attempting those tasks. For that we need to focus on communicative justice directly.

4. The Currency of Communicative Justice

Why adopt the lens of justice here?

First, communication is, roughly, a function of expression and attention. When expression is so easy as to be costless, attention is the scarce resource (as is widely noted). So we have to figure out how to distribute a scarce resource—this is a classic problem of distributive justice.

Second, and more generally, demands of justice arise when moral equals seek to realise individual and collective goods that require restraint, coordination, and collective action, and that involve distributing benefits and burdens. Failure to coordinate appropriately has costs; but avoiding failure is insufficient—we must achieve those goods in a way that reflects our moral equality.

A theory of communicative justice should therefore identify relevant individual and collective interests that communication promotes (communicative interests), and the norms governing their achievement and distribution. Note that theories of freedom of expression and of the public sphere can be useful here, in furnishing ideas for the kinds of interests most at stake.

The first step is to identify the communicative interests to be promoted. These include non-instrumental communicative interests, like the interest in enjoying egalitarian communication with others, or in participating in the successful joint action of a vibrant conversation. But they also include the goods to which communication is instrumental, like information, coordination, the formation of individual and collective identity, and even entertainment.

Those interests are broadly individual, but collective goods are also at stake—collective in the sense of being public (non-rivalrous and non-excludable); enjoyed in virtue of membership in a group; (in some cases) irreducibly social.

E.g.: a healthy information environment, a vibrant creative economy (we’re not just talking about politics here), and Deweyan ‘Civic Robustness’, the property a public sphere has when publics can successfully emerge in response to the exercise of power, both to counter it, and to provide it with positive direction.

While these goods are invaluable for democracy, I emphasise that they matter independently of their contribution to democracy; otherwise there’s a risk that democracy’s other pathologies will induce despair or inaction here too.

5. Norms of Communicative Justice

Platforms that govern the digital public sphere should shape public communication and distribute attention through architecture, curation and moderation, in ways that support the fulfilment of communicative interests. And in many respects we can clearly see that the current digital public sphere is underperforming in that regard.

But realising communicative justice is about more than just promoting these goods. It’s about doing so in a way that reflects our underlying moral equality. What does that entail? We need to focus on substantive justification, proper authority, and procedural legitimacy.

What: Reasonable Disagreement. Describing these goods at a high level is pretty easy. But how could we operationalise promoting them, especially in circumstances of radical disagreement? Obviously it would be disastrous for platforms to just promote whatever ideas they happen to agree with (ahem, Elmo). But we also can’t just rely on user preferences—that’s what gets us engagement optimisation.

We need to adopt some kind of public justification standard, focusing on communicative interests that support primary goods: e.g. enjoying egalitarian communication with others, and avoiding deception and manipulation, are likely crucial for the social bases of self-respect. There are also primary collective goods, like a healthy information environment, a vibrant creative economy, and civic robustness—these are things we should want (as communities) whatever else we want.

What: Baselines. The next step is to establish a baseline for acceptable communication. This is in many ways no different to longstanding debates over censorship and free speech, so I’ll say less about it here. The difference is that platforms restrict not *expression* but *attention*. So the stakes are somewhat lower. But we still have communicative rights (grounded especially in relational equality) to participate in civic publics. And platforms shouldn’t have arbitrary power in any case. So they have to be judicious in exercising this power.

What about collective and stochastic harms? Filtering/demotion clearly called for; can be understood as moderation in accordance with platform policy, or else as curation.

Are they entitled to limit speech? Yes! Otherwise they would be under a positive duty to contribute to speech they deem harmful. Obviously they are not. Are they entitled to set the baselines wherever they wish? To answer that we need to think about authority, not substantive justification.

What: Distribution. Beyond the baseline, we need an account of how to distribute the fulfilment of communicative interests. For example, consider (positive) collective attention that is due to amplification/architecture. Currently, the ‘rich get richer’. We should aim instead for equal opportunity for positive collective attention, acknowledging that the ability to distribute attention is limited (we can’t actually *force* people to pay attention).
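A toy contrast between the two distributive rules just mentioned (the functions, names, and numbers are hypothetical illustrations, not a design proposal):

```python
# Two toy rules for allocating a fixed budget of 'discovery' impressions.
# Purely illustrative; not any platform's actual allocation mechanism.

def engagement_proportional(posts: dict[str, float], budget: int) -> dict[str, int]:
    # 'Rich get richer': impressions in proportion to existing engagement,
    # so early attention compounds.
    total = sum(posts.values())
    return {p: round(budget * e / total) for p, e in posts.items()}

def equal_opportunity(posts: dict[str, float], budget: int) -> dict[str, int]:
    # Equal opportunity for positive collective attention: every post above
    # the acceptability baseline gets the same number of chances to be seen.
    # (Opportunity only: whether people actually attend cannot be forced.)
    share = budget // len(posts)
    return {p: share for p in posts}

posts = {"viral": 9000.0, "niche_a": 60.0, "niche_b": 40.0}
print(engagement_proportional(posts, budget=1000))
# {'viral': 989, 'niche_a': 7, 'niche_b': 4}
print(equal_opportunity(posts, budget=1000))
# {'viral': 333, 'niche_a': 333, 'niche_b': 333}
```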

Distribute burdens fairly too: e.g. cost of maintaining a tolerable communicative environment. Individual enforcement (e.g. ‘just block people you don’t like’) imposes costs on those already structurally disadvantaged. Centralised governance spares the disadvantaged those additional costs.

Who. Even if we focus on primary goods, governing the digital public sphere still involves making and imposing significant value judgments on a population about what counts as realising those goods. What gives the private companies behind algorithmic intermediaries the right to exercise this kind of governing power?

(It’s not the consent of the governed or their opportunity to exit—that’s just a zombie argument that won’t go away no matter how many decisive objections it faces; can discuss in Q&A).

Ideally the right to govern the digital public sphere would derive from democratic authorisation. But we need to be careful. The primary (though not only) role for civic robustness is to hold states (and state-like entities such as the EU) to account. Giving them too much control over the public sphere is a bad idea (cf. what the Indian government is currently trying to do).

This demands creative (though obviously not unique) institutional design. It would be best for democratic polities to set out a general account of communicative justice, which non-state actors should then implement, with independent oversight (a bit like the EU DSA).

But which non-state actors? Existing private entities are obviously poorly positioned. Power is radically concentrated in a few capricious hands. There are huge incentives to join forces with state power rather than support communicative justice. Private enforcement of public norms has a very bad record (e.g. copyright). Platforms depend on engagement-at-all-costs and surveillance, both at odds with communicative justice. We should change that by banning surveillance advertising, and forcing those who profit from optimising for short-term engagement to bear its costs.

But while waiting for proper democratic authorisation, platforms cannot avoid governing, so they have to try to implement communicative justice as best they can themselves—authority can be grounded pro tem in their capacity to govern, if democratic authorisation is infeasible (and if intermediaries are suitably restrained in their use of power).

How. Procedural legitimacy here has both legal-bureaucratic and technological dimensions. It’s about how we make governance practices transparent and accountable, but also about how we implement them technologically.

Platforms are now better at making policies clear up front (though nobody reads them). Legitimacy in medias res involves transparency and consistency, both in platform-initiated enforcement actions, and in mediating disputes between people. The latter especially important as it dictates whether the platform supports or undermines egalitarian social relations. One virtue of transparency is that it enables governance to operate through people’s wills rather than through force. This could be especially true with stochastic and collective harms.

Much discussion of post-hoc procedural legitimacy—contestability and accountability in enforcement decisions. Some critics have lately called this ‘procedural fetishism’, and too individualistic. I think that criticism is overstated. Regulation should focus on setting out broader, systemic norms of communicative justice (many of the values at stake are collective, so an individualistic focus is insufficient). But people subject to adverse decisions above a threshold of seriousness should also enjoy some version of due process rights.

Importantly, we need technological legitimacy as well as textual legitimacy. Platforms govern through architecture and curation practices, not only through applying their terms of service. Calls for platform observability fit here. Need to know more ex ante about design choices, paths not taken.

In medias res, we especially need insight into the systemic effects of platform choices. The direct stakes for individuals in architecture and curation choices are often quite low, but the aggregate effects can be significant. There are also some interesting technological challenges—e.g. designing recommender systems with global optimisation in mind.
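As a gesture at what ‘global optimisation’ might mean, here is a toy objective that trades predicted engagement off against the concentration of exposure across creators (the objective, the weight, and the use of a Gini measure are assumptions for illustration, not a worked-out design):

```python
# A toy 'global' objective: total predicted engagement, penalised by how
# concentrated aggregate exposure would be. Illustrative assumptions only.

def gini(xs: list[float]) -> float:
    # Gini coefficient as a crude measure of attention concentration
    # (0 = perfectly even exposure; values near 1 = highly concentrated).
    xs = sorted(xs)
    n = len(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * sum(xs)) - (n + 1) / n

def systemic_objective(engagement: list[float], exposure: list[float],
                       weight: float = 0.5) -> float:
    # Scale total engagement down by a penalty on exposure concentration.
    return sum(engagement) * (1 - weight * gini(exposure))

concentrated = systemic_objective([10.0, 1.0, 1.0], exposure=[98.0, 1.0, 1.0])
spread = systemic_objective([8.0, 3.0, 3.0], exposure=[40.0, 30.0, 30.0])
print(concentrated < spread)  # True: the more even allocation scores higher
```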

This focus on systemic effects—things like collective attention, civic robustness, information environment—means we should be sceptical about infrastructural changes that aim to solve these problems through relying *more* on consumer choice (e.g. antitrust, middleware).

6. Conclusion

Digital platforms shape public communication and distribute attention, and in so doing govern the digital public sphere. They have a responsibility to do so well.

We need a theory of communicative justice to guide them—an account of the interests served by public communication, and the norms governing how they should be promoted.

Democratic polities should articulate norms of communicative justice (both procedural and substantive) to guide algorithmic intermediaries, and then provide appropriate systemic oversight to ensure that they are being met—without unduly interfering in the digital public sphere themselves.

In the meantime, however, platforms cannot afford to wait for democratic authorisation. And they can’t evade responsibility by giving users illusions of choice or neutrality. They must instead stop optimising for user engagement and profit, and start optimising for communicative justice. I’ve sketched one substantive theory of communicative justice here; I hope this leads to many others being advanced!


Thanks for your attention!

Special thanks to the Tanner Foundation, Stanford HAI, Stanford McCoy Family Center for Ethics in Society, Rob Reich, Marion Fourcade, Arvind Narayanan, Josh Cohen and Renee Jorgensen, and most especially to Lu for her encouragement and endurance while I’ve been writing this, and to Moss and Ash for being delightfully ungovernable, algorithmically or otherwise.

I live and work on Ngarigo, Ngunnawal and Ngambri lands, and I acknowledge the Traditional Custodians of Country there and throughout Australia. I pay my respects to Elders past, present and future, and I support a voice to parliament. Always was, always will be.

Read a draft of the paper (with further acknowledgments and bibliography).

View the Tanner Lecture.

Follow: Mastodon, Other Place, MINT Lab