From Language to Causality: Extracting Causal Relations from Large Language Models
Subject (OSZKAR)
Natural Language Processing
Bayesian Networks
Causal Discovery
Probabilistic Graphical Models
- https://doi.org/10.3311/MINISY2025-014
Abstract
This research introduces a novel framework for constructing causal networks by leveraging the causal reasoning abilities of multiple Large Language Models (LLMs). We instruct LLMs to extract explicit causal links from their internal knowledge representations regarding specific topics. We explore methods for consolidating these graphs, addressing conflicts, and determining the strength and directionality of causal links. Evaluated across various domains using the Qwen 2.5 model family (0.5B to 14B parameters), the framework demonstrates the ability of language models to generate meaningful causal networks from complex queries. Our findings suggest that fusing causal knowledge from multiple LLMs significantly enhances causal discovery from natural language, though practical application benefits from human oversight and domain expertise to ensure accuracy and reliability. We also highlight the potential of integrating probabilistic approaches to quantify uncertainty within the extracted causal relationships.
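The consolidation step described in the abstract — merging per-model edge lists, resolving directional conflicts, and scoring link strength — could be sketched as a simple voting scheme. This is an illustrative assumption, not the paper's actual algorithm: the function name, the vote threshold, and the example edges are all hypothetical.

```python
from collections import defaultdict

def fuse_causal_graphs(graphs, min_votes=2):
    """Fuse directed causal edge lists from several LLMs by majority vote.

    graphs: list of edge lists, one per model, each edge a
    (cause, effect) pair. An edge is kept if at least `min_votes`
    models propose it; when models disagree on direction, the
    direction with more votes wins (ties are dropped). The returned
    strength is the fraction of models endorsing that direction.
    """
    votes = defaultdict(int)
    for edges in graphs:
        for cause, effect in set(edges):  # one vote per model per edge
            votes[(cause, effect)] += 1

    fused = {}
    for (a, b), n in votes.items():
        reverse = votes.get((b, a), 0)
        if n >= min_votes and n > reverse:
            fused[(a, b)] = n / len(graphs)
    return fused

# Hypothetical edge lists extracted from three different models:
g1 = [("smoking", "cancer"), ("smoking", "tar")]
g2 = [("smoking", "cancer"), ("tar", "smoking")]
g3 = [("smoking", "cancer"), ("smoking", "tar")]

fused = fuse_causal_graphs([g1, g2, g3], min_votes=2)
```

Here all three models agree on smoking → cancer (strength 1.0), two of three vote for smoking → tar over the reverse (strength ≈ 0.67), and the lone tar → smoking edge is discarded, mirroring the conflict-resolution and strength-estimation ideas outlined above.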