xLSTM Architectures in Reinforcement Learning
Date
Authors
Type
Book chapter
Language
en
Reading access rights:
Open access
Rights Holder
Budapest University of Technology and Economics, Department of Artificial Intelligence and Systems Engineering
Conference Date
2025.02.03-2025.02.04
Conference Place
Budapest, Hungary
Conference Title
32nd Minisymposium of the Department of Artificial Intelligence and Systems Engineering
ISBN, e-ISBN
978-963-421-989-7
Container Title
Proceedings of the 32nd Minisymposium
Department
Department of Artificial Intelligence and Systems Engineering
Version
Post print
Faculty
Faculty of Electrical Engineering and Informatics
First Page
33
Subject (OSZKAR)
xLSTM
reinforcement learning
machine learning
Genre
Conference paper
University
Budapest University of Technology and Economics
- https://doi.org/10.3311/MINISY2025-007
Abstract
Long Short-Term Memory (LSTM) architectures have recently seen significant advancements through innovations such as exponential gating and modified memory structures, reigniting interest in their potential for modern sequence-based tasks. While xLSTM models have demonstrated strong performance in language modeling, their suitability for reinforcement learning (RL) tasks has yet to be fully explored. In this work, we investigate the application of xLSTM in RL environments, focusing on classic control tasks that are commonly employed as benchmarks. This comparison provides a starting point for understanding the differences between xLSTM and LSTM in the context of reinforcement learning.