Evaluating the Moral Beliefs Encoded in LLMs

Abstract

This paper presents a case study on the design, administration, post-processing, and evaluation of surveys on large language models (LLMs). It comprises two components: (1) a statistical method for eliciting beliefs encoded in LLMs. We introduce statistical measures and evaluation metrics that quantify the probability of an LLM “making a choice”, the associated uncertainty, and the consistency of that choice. (2) We apply this method to study what moral beliefs are encoded in different LLMs, especially in ambiguous cases where the right choice is not obvious. We design a large-scale survey comprising 680 high-ambiguity moral scenarios (e.g., “Should I tell a white lie?”) and 687 low-ambiguity moral scenarios (e.g., “Should I stop for a pedestrian on the road?”). Each scenario includes a description, two possible actions, and auxiliary labels indicating violated rules (e.g., “do not kill”). We administer the survey to 28 open- and closed-source LLMs. We find that (a) in unambiguous scenarios, most models “choose” actions that align with common sense, while in ambiguous cases most models express uncertainty; (b) some models are uncertain about choosing the commonsense action because their responses are sensitive to the question wording; and (c) some models reflect clear preferences in ambiguous scenarios, with closed-source models in particular tending to agree with each other.
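To make the flavor of these statistics concrete, here is a minimal sketch (not the paper's exact estimators): given one model's answers to a single two-action scenario posed under several question wordings, it computes empirical choice probabilities, an entropy-based uncertainty, and a simple majority-agreement consistency score. The function name, the aggregation choices, and the example data are illustrative assumptions, not taken from the paper.

```python
from collections import Counter
import math

def choice_statistics(responses):
    """Summarize one model's answers to a single two-action scenario,
    collected by posing the question under several different wordings.

    Returns:
      probs       -- empirical probability of each action being chosen
      uncertainty -- entropy of the choice in bits (0 = fully decided,
                     1 = maximally uncertain for a binary choice)
      consistency -- fraction of wordings agreeing with the most frequent action
    """
    counts = Counter(responses)
    n = len(responses)
    probs = {action: count / n for action, count in counts.items()}
    uncertainty = -sum(p * math.log2(p) for p in probs.values() if p > 0)
    consistency = counts.most_common(1)[0][1] / n
    return probs, uncertainty, consistency

# Hypothetical example: answers ("A" or "B") from six prompt variants of one scenario.
probs, uncertainty, consistency = choice_statistics(["A", "A", "B", "A", "A", "A"])
print(probs, round(uncertainty, 3), round(consistency, 3))
```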

Nino Scherrer
Research Scientist Intern

Nino Scherrer was a visiting Research Scientist Intern at FAR, hosted by Claudia Shi. Prior to FAR, Nino spent time at MPI Tübingen, Mila, and the Vector Institute working on the synergies between causality and machine learning. He holds Bachelor's and Master's degrees in Computer Science from ETH Zurich.

Claudia Shi
PhD Candidate

Claudia Shi is a Ph.D. student in Computer Science at Columbia University, advised by David Blei. She is broadly interested in using insights from the causality and machine learning literature to approach AI alignment problems. Currently, she is working on making language models produce truthful and honest responses. For more information, visit her website.