Large Language Models (LLMs) and Causality Extraction from Text

Tutorial at FLAIRS-38

Authors

  • Wlodek Zadrozny, UNC Charlotte

DOI:

https://doi.org/10.32473/flairs.38.1.138900

Keywords:

Large Language Models, LLM, NLP, Causality extraction, data extraction from text

Abstract

This tutorial explores the application of Large Language Models (LLMs), such as BERT, LLaMA, and GPT-3.5/4, to the extraction of causality from text documents, including identifying causes, effects, and actions in diverse domains such as business, medicine, and newswire. We also address challenges related to data availability and quality, including varying definitions of causality. Causality extraction plays a crucial role in natural language understanding, particularly for building structured representations of medical and technical texts and for multimodal question answering. Participants will gain access to example code and links to related repositories. Beyond causality extraction, the session connects these tasks to broader themes, such as the mathematics of hallucinations in generative models and best practices for effective prompting. Designed for participants with some familiarity with machine learning or natural language processing (NLP), and ideally LLMs, the tutorial should be both accessible and highly relevant.
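As a rough illustration of the prompt-based extraction the abstract describes, the sketch below builds a cause/effect prompt for a sentence and parses a structured reply. The prompt template, the `CAUSE: ... | EFFECT: ...` answer format, and the mocked model reply are all illustrative assumptions, not the tutorial's actual materials; in practice the prompt would be sent to an LLM API.

```python
import re

# Hypothetical prompt template asking the model for a structured answer.
PROMPT_TEMPLATE = (
    "Identify the cause and the effect in the sentence below.\n"
    "Answer in the form: CAUSE: <cause> | EFFECT: <effect>\n\n"
    "Sentence: {sentence}"
)

def build_prompt(sentence: str) -> str:
    """Fill the template with the input sentence before sending it to an LLM."""
    return PROMPT_TEMPLATE.format(sentence=sentence)

def parse_response(response: str):
    """Parse the 'CAUSE: ... | EFFECT: ...' format assumed above.

    Returns a (cause, effect) pair, or None if the reply does not match.
    """
    m = re.search(r"CAUSE:\s*(.*?)\s*\|\s*EFFECT:\s*(.*)", response)
    return (m.group(1), m.group(2)) if m else None

# Example run with a mocked model reply (no API is called here).
sentence = "The factory closed because demand collapsed."
prompt = build_prompt(sentence)
mock_reply = "CAUSE: demand collapsed | EFFECT: The factory closed"
print(parse_response(mock_reply))  # ('demand collapsed', 'The factory closed')
```

Parsing a constrained answer format, rather than free text, is one common way to make LLM output machine-readable; the abstract's point about varying definitions of causality would surface here in how the prompt defines "cause" and "effect".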

Published

14-05-2025

How to Cite

Zadrozny, W. (2025). Large Language Models (LLMs) and Causality Extraction from Text: Tutorial at FLAIRS-38. The International FLAIRS Conference Proceedings, 38(1). https://doi.org/10.32473/flairs.38.1.138900