Large Language Models (LLMs) and Causality Extraction from Text
Tutorial at FLAIRS-38
DOI: https://doi.org/10.32473/flairs.38.1.138900
Keywords: Large Language Models, LLM, NLP, Causality extraction, data extraction from text
Abstract
This tutorial explores the application of Large Language Models (LLMs), such as BERT, LLaMA, and GPT-3.5/4, to the extraction of causality from text documents, including identifying causes, effects, and actions in diverse domains such as business, medicine, and news. We also address challenges related
to data availability and quality, such as varying definitions of causality. Causality extraction plays a crucial role in natural language understanding, particularly for building structured representations of medical and technical texts and for multimodal question answering. Participants will gain access to example code and links to related repositories. Beyond causality extraction, the
session will connect these tasks to broader themes, such as the mathematics of hallucinations in generative models and best practices for effective prompting. Designed for participants with some familiarity with machine learning or natural language processing (NLP), and ideally LLMs, the tutorial should be both accessible and highly relevant.
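To make the task concrete, the following is a minimal, rule-based baseline for causality extraction: it splits a sentence on a few common causal connectives and labels the resulting spans as cause and effect. This sketch is purely illustrative of the task the tutorial addresses (the connective list and function name are assumptions, not the tutorial's code); the LLM-based approaches covered in the session replace such hand-written patterns with prompted or fine-tuned models.

```python
import re

# Connective -> True if the cause precedes the connective in the sentence.
# This tiny lexicon is a hypothetical stand-in for a real pattern inventory.
CONNECTIVES = {
    "because": False,   # "<effect> because <cause>"
    "therefore": True,  # "<cause>, therefore <effect>"
    "leads to": True,   # "<cause> leads to <effect>"
}

def extract_causality(sentence: str):
    """Return a (cause, effect) pair, or None if no connective matches."""
    lowered = sentence.lower().rstrip(".")
    for conn, cause_first in CONNECTIVES.items():
        parts = re.split(rf"\b{re.escape(conn)}\b", lowered, maxsplit=1)
        if len(parts) == 2:
            left, right = (p.strip(" ,") for p in parts)
            return (left, right) if cause_first else (right, left)
    return None

print(extract_causality("The flight was delayed because the storm closed the runway."))
```

Baselines like this fail on implicit causality ("the storm grounded all flights"), which is one reason LLM-based extraction, and careful prompting for it, is the focus of the session.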
License
Copyright (c) 2025 Wlodek Zadrozny

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.