Detecting Human Bias in Emergency Triage Using LLMs

Literature Review, Preliminary Study, and Experimental Plan


  • Marta Avalos, University of Bordeaux
  • Dalia Cohen
  • Dylan Russon
  • Melissa Davids
  • Oceane Doremus
  • Gabrielle Chenais
  • Eric Tellier
  • Cédric Gil-Jardiné
  • Emmanuel Lagarde



Keywords: Human Bias, Large Language Models, Natural Language Processing, Emergency Department, Triage


The surge in AI-based research for emergency healthcare presents challenges such as data protection compliance and the risk of exacerbating health inequalities: human biases embedded in the demographic data used to train AI systems may be replicated.
Yet AI also offers the chance for a paradigm shift, serving as a tool to counteract human biases.
Our study focuses on emergency triage, the process of swiftly categorizing patients by severity upon arrival. Our objectives are to conduct a literature review identifying potential human biases in triage and to present a preliminary study. The preliminary study includes a qualitative survey complementing the review on factors influencing triage scores, a descriptive analysis of triage data, and a pilot of AI-driven triage using a Large Language Model with data from the University Hospital of Bordeaux. Finally, assembling these pieces, we outline an experimental plan to assess the effectiveness of AI in detecting biases in triage data.

Author Biography

Marta Avalos, University of Bordeaux

Associate Professor of Biostatistics, University of Bordeaux / Bordeaux population health INSERM 1219 / INRIA SISTM

Bordeaux, France




How to Cite

Avalos, M., Cohen, D., Russon, D., Davids, M., Doremus, O., Chenais, G., Tellier, E., Gil-Jardiné, C., & Lagarde, E. (2024). Detecting Human Bias in Emergency Triage Using LLMs: Literature Review, Preliminary Study, and Experimental Plan. The International FLAIRS Conference Proceedings, 37(1).



Special Track: AI in Healthcare Informatics