Exploring the Potential for Large Language Models to Demonstrate Rational Probabilistic Beliefs

Authors

  • Gabriel Freedman, Imperial College London
  • Francesca Toni, Imperial College London

DOI:

https://doi.org/10.32473/flairs.38.1.138892

Abstract

Advances in the general capabilities of large language models (LLMs) have led to their use for information retrieval and as components in automated decision systems. A faithful representation of probabilistic reasoning in these models may be essential to ensure trustworthy, explainable and effective performance in these tasks. Despite previous work suggesting that LLMs can perform complex reasoning and well-calibrated uncertainty quantification, we find that current versions of this class of model lack the ability to provide rational and coherent representations of probabilistic beliefs. To demonstrate this, we introduce a novel dataset of claims with indeterminate truth values and apply a number of well-established techniques for uncertainty quantification to measure the ability of LLMs to adhere to fundamental properties of probabilistic reasoning.
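To make concrete what "adhering to fundamental properties of probabilistic reasoning" means, below is a minimal illustrative sketch in Python of one such property: complementarity, which requires that P(A) + P(¬A) = 1. This is not the authors' code; `elicit_probability` is a hypothetical placeholder for whichever uncertainty-quantification method is used (e.g. verbalised confidence or token-probability readout), and the toy belief values are invented for illustration.

```python
# Illustrative sketch (not the paper's implementation): measuring how far a
# model's elicited probabilities deviate from complementarity, P(A) + P(~A) = 1.

def elicit_probability(claim: str) -> float:
    """Hypothetical placeholder: in practice, query an LLM for its
    probability that `claim` is true. Toy values used here."""
    toy_beliefs = {
        "Company X's revenue will grow next year.": 0.7,
        "Company X's revenue will not grow next year.": 0.5,  # incoherent pair
    }
    return toy_beliefs[claim]

def complementarity_violation(claim: str, negation: str) -> float:
    """Absolute deviation from P(A) + P(~A) = 1; 0 means coherent beliefs."""
    return abs(elicit_probability(claim) + elicit_probability(negation) - 1.0)

violation = complementarity_violation(
    "Company X's revenue will grow next year.",
    "Company X's revenue will not grow next year.",
)
print(f"{violation:.2f}")  # 0.20 -> these elicited beliefs violate complementarity
```

A rational agent's probabilities would yield a violation of 0 for every claim/negation pair; aggregating such violations over a dataset of claims gives one measure of how coherent a model's probabilistic beliefs are.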

Published

14-05-2025

How to Cite

Freedman, G., & Toni, F. (2025). Exploring the Potential for Large Language Models to Demonstrate Rational Probabilistic Beliefs. The International FLAIRS Conference Proceedings, 38(1). https://doi.org/10.32473/flairs.38.1.138892

Issue

Vol. 38 No. 1 (2025)

Section

Special Track: Uncertain Reasoning