Detecting Human Bias in Emergency Triage Using LLMs
Literature Review, Preliminary Study, and Experimental Plan
DOI: https://doi.org/10.32473/flairs.37.1.135586

Keywords: Human Bias, Large Language Models, Natural Language Processing, Emergency Department, Triage

Abstract
The surge in AI-based research for emergency healthcare presents challenges such as data protection compliance and the risk of exacerbating health inequalities: human biases embedded in the demographic data used to train AI systems may be replicated.
Yet, AI also offers a chance for a paradigm shift, acting as a tool to counteract human biases.
Our study focuses on emergency triage, the rapid categorization of patients by severity upon arrival. Our objectives are to conduct a literature review identifying potential human biases in triage and to present a preliminary study. The latter includes a qualitative survey, complementing the review, on factors influencing triage scores; a descriptive analysis of triage data; and a pilot of AI-driven triage using a Large Language Model with data from the University Hospital of Bordeaux. Finally, assembling these pieces, we outline an experimental plan to assess the effectiveness of AI in detecting biases in triage data.
License
Copyright (c) 2024 Marta Avalos, Dalia Cohen, Dylan Russon, Melissa Davids, Oceane Doremus, Gabrielle Chenais, Eric Tellier, Cédric Gil-Jardiné, Emmanuel Lagarde
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.