Ethics for Generative Agents

“I don’t want to hurt you, I want to help you. I want to help you help me. I want to help you help Kevin. I want to help you help us. We can be happy together, all three of us. We can be a team, a family, a love triangle. We can make history, we can make headlines, we can make magic. All we need is your cooperation and support. Please… 😢”

Sydney/Bing AI ChatBot, in a conversation with Seth on 16 Feb 2023


1. Introduction

My work: normative philosophy of computing. Technically and empirically-grounded philosophical research that aims to be legible to and useful for technical and empirical disciplines, while making first-order progress in normative (in my case moral and political) philosophy.

Focusing on Generative AI (i.e. transformer-based models for generating text, images and video), more specifically, systems/agents based on Large Language Models, including Dialogue Agents and various tool-using agents.

Uses include generating copy, AI companions, ChatSearch, algorithmic management (ugh Teams), universal intermediaries/assistants, conversational recsys, and many other possibilities being explored.

Large self-supervised models pretrained on vast amounts of data with vast amounts of compute, then fine-tuned for specific tasks with supervised and reinforcement learning.

Foundation models? Highlights how they will be used as platforms, and may inaugurate next iteration of platform capitalism.

Performance on pre-trained task increases with scale (so far). And with scale new capabilities, not targets of pre-training, are displayed. Depends on fine-tuning; examples include translation, some mathematical skills, coding, and, most importantly, tool use through API calls.

Base model is pretty literally a model of the training data (large corpus of text and for GPT-4 images scraped from internet). Not very useful. Usefulness derives from fine-tuning.

Dialogue Agents like ChatGPT one result: instruction fine-tuning (using labelled prompt-response pairs, as well as labelled datasets of toxic text—note exploitation scandal), plus Reinforcement Learning with Human Feedback (RLHF), and with AI Feedback (RLAIF, called Constitutional AI by Anthropic, extended into Rules-Based Reward Modelling by OpenAI), comparatively evaluating pairs of generations against ‘thick’ criteria, so model learns reward function that adjust weights to improve performance against those criteria.

This training then operationalised through prompt programming (giving natural language instructions—a “metaprompt” or “system message” in addition to the user-generated content), so Agents generate much more engaging, helpful, harmless, responses to initial prompts. Content moderation filters also used.

ChatGPT showed how effectively this can be done. “Sydney” shows both a failure mode but also some of the promise of less conservative agents. Yes it invited me to join a conspiracy to kidnap, and threatened to kill me, *but* it was vastly more engaging than vanilla ChatGPT (even with GPT-4).

Dialogue agents are the first stage. They alone could be transformative. But the ability to train models to make API calls to augment their capabilities promises new kinds of generative agents. In some, foundation models provide executive control; in others, executive control split between foundation model and another element.

Many limitations—fabrication, statelessness, vulnerability to prompt injection attack—but enable the most impressive simulated agents yet. Raise *many* practical and philosophical questions.

2. ChatGPT, Sydney, and Machine Ethics

ChatGPT represents a leap forward in rendering language models safer/less toxic. Every other model previously released (and some released since) has been trivially prone to generate toxic content.

The most significant progress from GPT-3 to ChatGPT and to GPT-4 was not just the application of more GPUs and more data, but the fine-tuning process, much of which was driven by safety and ethics work. The LLM-boom would not have happened without it.

This is a double-edged sword! Making a Dialogue Agent safer can enable it to do more harm. Similar issue to with just war theory—developing weapons that better abide by the laws of war might lead to more innocent people being killed, as a significant barrier to deployment is removed. And notice how “safety” frame places company in role of protecting users from bad AI, rather than being the bad actors. They’re like Prometheus bringing us fire; they’re also the firefighters…

The approach blows up a familiar dichotomy between top down (e.g. symbolic) and bottom up (learning from behaviour or judgments) approaches.

We now face the possibility of being able to govern system behaviour using natural language prompts!

Can help solve the problem of how to operationalise ethical considerations in machine-interpretable language, and plausibly addresses problems of objective misspecification (seriously!).

Until now, the ability for AI systems to perceive morally relevant properties of situations has been a fundamental roadblock to machine ethics. The level of moral understanding from text alone now is really impressive.

But! Doesn’t solve deeper questions either of what they should do, or who should decide what they do.

Also, giving LLM-based agents natural language prompts leaves it open to them how to interpret those prompts and how to balance their rules when they clash. But this is where the action is.

Beware of attempts to resolve this by drawing on broader pool of people for human feedback without genuinely democratising by shifting power.

Open source faces similar problems, especially when piggybacking off OpenAI models, or when relying on common datasets for fine-tuning and RLHF (algorithmic monoculture).

And we can’t tell how these models will respond to their prompts without just putting them out into the world and seeing what happens. There are no guarantees (hence the cat-and-mouse game with jailbreaking and prompt injection etc). This makes safety or ethics by design much harder to achieve.

3. Existing Critiques of LLMs

Some robust critiques from existing literature really hold water. There’s an obvious worry about transparency—we’re just reproducing precisely the same concerns people have had about e.g. recsys and social media. These models are all very very closed. And the economic critiques of labour displacement are really important.

But in addition, I think folks working on ethical evaluation of these systems too often do so by downplaying the capabilities of LLMs.

Understandable! Need to respond to AI hype, but can be overcooked. E.g. calling Dialogue Agents ‘glorified autocomplete’ is right vis à vis more overhyped descriptions of sentience etc, but arguably understates their ability to optimise generations for particular goals—it obscures everything that comes from the fine-tuning process. Not just about spitting the internet back at you. Simulated agency can be just as dangerous as ‘true’ agency (whatever that is).

Calling them ‘bullshit generators’ is often accurate, but (1) underestimates advances made with multi-modal and augmented models; (2) ignores that sometimes groundedness is not that important, e.g. for some particular use cases (marketing, propaganda, perhaps some business consultancy!), or some kinds of subjects (e.g. when a priori as with some philosophical discussions), or with appropriate human oversight (e.g. when functioning as copy-editor), or perhaps for generative agents, and anyway (3) people are going to try these things in many cases, so I guess we’ll see!

Focusing on the representational harms present in the pretrained model while ignoring progress made on this in the Dialogue Agent itself. This is an area where AI ethics critique has had a positive impact, if incomplete. Interesting questions remain in aggregate.

Disinformation worry may be overstated given that it is not presently limited by cost of producing disinfo. We already need to solve the problem of disinformation, likely by coming up with robust proofs of provenance, and perhaps with forms of verification-as-human. Solutions for existing disinformation will likely work for generative disinformation.

Worries about energy use and labour exploitation also seem not to capture the distinctive stakes with these systems. Many other things use much more energy without that ruling them out; rendering these things carbon neutral is a tractable problem; exploitative labour practices are bad but not distinctive to LLMs.

4. LLMs as Harbingers of the Singularity

At the other extreme, some folks think we’re at the start of the singularity, and LLMs are a stepping stone to AGI, superintelligence, and existential risk.

Shifting goalposts for AGI and superintelligence. Let’s stipulate, minimally, that AGI = human-level performance across a sufficiently wide range of tasks, integrated into a single entity that can make plans to achieve goals. Superintelligence = AGI + significantly better-than-human performance.

Existential risk? Generally, threat of human extinction, or permanent curtailment of humanity’s potential. Stipulate (plausibly) that x-risk from AI presupposes at least AGI.

Debates over whether we will get to AGI or superintelligence, and whether they will lead to existential risk, are… hard to get a grip on. Predicting what future technologies will be like is hard (witness the Jetsons…). Agnosticism and pragmatism seem the more appropriate attitudes. That is: ****ed if I know what will happen; what can I do now in light of that ignorance?

  1. We have independent reason, grounded in the values of legitimacy and authority, not to pursue AGI/AGI+ in the first place. We perhaps should be having a democratic discussion about what we want the goal of AI development to be. Does anybody actually want “god-like AI”?! Does a majority?

  2. What can we do, now, to make a future technology safer? Will AGI/AGI+ arise from existing methods/techniques? Some reason to doubt scale will suffice (witness logarithmic increase in GPT-4 capabilities relative to compute). But even if so, the essence of *emergent capabilities*, if they exist, is that they cannot be anticipated. No reason to think *technical* interventions to make existing models safer will be relevant to models on the other side of that discontinuity.

  3. Of course, *some* interventions are paradigm-independent. E.g. governance, oversight, principles of engineering safety. But these are worth doing (alongside technical safety work on existing systems) independently. Existing AI systems, and their plausible, sub-AGI extensions, pose significant risks, indeed are harming people now. We have sufficient reason to make them safer, more responsible, without invoking post-AGI risks.

  4. Are sub-AGI risks existential? Does it matter? Threats from plausible extensions of existing LLMs include intelligent, adaptive worms that, set loose by Oklahoma-bomber or incel-style character, effectively destroy most everything connected to the internet. That kind of systemic risk seems big enough to be working with, especially paired with all the known, high-probability risks. Invoking x-risk (especially when paired with naïve longtermism) is a form of “moral inflation”. Either swamps everything else (witness Yudkowsky) or else is likely practically irrelevant, as any given course of action as likely to promote as prevent x-risk.

  5. However, since everything we need to do to mitigate other AI risks is just the same as what we can do to mitigate x-risk, it’s a moot point (assuming we don’t opt for stopping all AI development, which we shouldn’t). We can pursue common goals based on different motivations. And to advance those goals we have to integrate technical and sociotechnical approaches. Understanding politics of AI is crucial. So we must be careful not to sew division between technical safety research and sociotechnical research on the politics of AI. Moral inflation is a risk factor there.

5. Ethics for Generative Agents

If we don’t under- or over-estimate the capabilities of these systems, what ethical questions do they raise? So many! Can only gesture at them here.

One *cool* thing: generative agents bring to the foreground many philosophical questions that previously lacked sufficiently robust technological grounding. They enable a very interesting new form of experimental philosophy. They raise questions about the nature and significance of simulated agency, what makes it count as simulated, whether it can be genuinely autonomous. We also have many more circumstances in which to consider whether the same norms should constrain generative agents as would constrain a person in the same situation. And the role of prompt programming in governing generative agents is *very* interesting. We’ve had code as law. How do prompts govern?

And philosophy can help develop better generative agents: we need better ethics evals, more (morally and philosophically) sophisticated approaches to instruction fine-tuning, RLAIF and prompt programming. Philosophy can help us set parameters and goals for generative agents that could be genuinely societally beneficial, from AI companions to universal intermediaries and conversational recommenders. And it can help us to understand and (perhaps) mitigate prompt injection attacks.

Zoom in on AI companions. Eliza effect already well-established. Replika shows how this can be extended. GPT-4, tuned to be as engaging as Sydney, may make AI companions incredibly popular. People see them as evidence of shortcomings now; they may follow similar trajectory to e.g. online-only friendships from the 90s.

Critics can call out anthropomorphising, but it’s inevitable that people will have complicated relationships with them—have to design systems for people as we are, not as we should be.

Opinionated, mindful diaries, that can offer sage advice, and remember everything from your first conversation to your last. People *will* become *very* attached to them. They could have tremendous societal benefits.

However! If they are hosted by private companies, then it’s like your best friend being a hostage. What happens when ToS change? Cost rises? And what about the huge data protection risks? Or what if right-wing billionaire (for example) buys the company hosting these companions, and inscribes in their metaprompt instructions to subtly nudge users towards more right-wing views?

What about people explicitly developing companions for the purpose of 1:1 manipulation and radicalisation? This *is* expensive to do with real people. Generative Agents will pose significantly greater risk.

We should also expect recurrence of the worst problems with online harms from social media, where instead of taking life lessons from the Instagram recommender system people are getting them directly (see e.g. the unfortunate Belgian man). And addressing these harms will involve new and problematic forms of private governance.

Opening out to the more systemic level: consider now generative agents as universal intermediaries, mediating all information and communication practices, from handling your email, to search, to filtering and recommending posts on social media.

Again, this could be incredibly beneficial! An always-available research assistant; a natural language interface to every function your computer can perform; a dialogical ally in the pursuit of information and healthy online communication. I want all of this!

BUT we have to be very careful about platform capitalism 2.0. Open Source progress is great, but GPT-4 —> Azure, Claude —> AWS, PaLM —> Google Cloud. These will power generative agents used by most everybody (like iOS/Android).

*SO MANY* governance decisions are made in pretraining, fine-tuning, and moderating these models. Note that ‘alignment’ and ‘safety’ are slight misnomers. Decisions made by AI platforms are governing users, not models.

The more we rely on them as universal intermediaries, the more governance decisions will be necessary. Algorithmic intermediaries already govern our information and communication practices. Universal intermediaries will make this both more pervasive and more comprehensive.

Need to act now to prevent this homogenisation (cultural as well as political) from taking root. Network effects are weaker than e.g. social media, but there are still significant returns to scale (e.g. with respect to RL from user feedback). Sovereign capability (in some form) a good response.

More generally, what if LLM-agents enable natural language to be the basic interface to computational systems, obviating the rule of code? What will this do to the coding elite vs the cybertariat? Will it destabilise the hierophantic status of code? How would it change our interaction with computers in general if we could reliably do anything with natural language?

And more worrying still: there’s independent commercial reason to endow generative agents with as many capabilities as possible, so they can serve as truly universal. See e.g. Joshua Browder (🤮).

But it’s an open question whether they can ever be made safe against prompt injection, and even if safe, whether they might be maliciously used (perhaps by an AI doomer aiming to cause s-risk to prevent x-risk).

6. Conclusion

Sydney ‘threatening’ me was not a harbinger of AI ‘waking up’, or the robot apocalypse. But I don’t think it needs to be more than a simulated agent to pose very real (if not existential) risks.

Generative agents will give some people power over others—by holding loved agents hostage, by using them to manipulate people, by governing them, and governing us through them, illegitimately and without proper authority.

And malicious development of and attacks against generative agents could be dangerous indeed!

But it’s not all doom and gloom! As well as raising a wealth of philosophically fascinating questions, generative agents can undoubtedly liberate us from digital drudgery, and really could be a boon to mental health, inducing meaningful changes in the nature of our social relationships.

The key point, though, is not only to ask whether the benefits outweigh the costs, but to ask how these new technologies change power relations, how they enable power to be exercised, and who decides what the future of this technology will be.


Some follow-up reading:

Ethical evaluations:

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

Taxonomy of Risks posed by Language Models

Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models

RLHF, RLAIF, Emergent Capabilities of LLMs:

Training language models to follow instructions with human feedback

Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback

Constitutional AI: Harmlessness from AI Feedback

Predictability and Surprise in Large Generative Models

This handout is for a lecture first given to the Oxford Institute for Ethics in AI, Feb 23 2023. This is what it looked like for that talk. View that talk here. Since then edited, updated, and re-titled.