A Proposal for a Leaner Narrative of Learning Classifier Systems

Lanzi, Pier Luca; Loiacono, Daniele
2025-01-01

Abstract

Learning classifier systems (LCS) are often regarded as complex and challenging to master despite the community's ongoing efforts to provide simplified educational models and detailed algorithmic descriptions. In this position paper, we argue that such perceived complexity is due to how LCSs are explained, which is still based on the narrative used to present the early models almost 50 years ago. Such a narrative centers around the system's interaction with the environment and how the information streams from detectors to actuators, creating increasingly focused classifier sets. Accordingly, it blends core LCS concepts with elements universal to all value-based reinforcement learning algorithms that are never included in the descriptions of competing methods. We suggest another, possibly leaner, narrative based on the view of XCS as an approximator of state-action value functions to solve reinforcement learning tasks. We show how abandoning the traditional narrative may result in a simpler description of XCS that can be easily extended to integrate known reinforcement learning extensions and tackle classification and regression problems. Our approach can be seamlessly applied to one-step and multi-step scenarios without modifications. It also provides guidelines for developing LCS implementations that might be more accessible to people approaching LCS for the first time.
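As an illustrative sketch (not taken from the paper itself), the "XCS as an approximator of state-action value functions" view mentioned in the abstract can be summarized as follows: for a given state and action, the system prediction is the fitness-weighted average of the predictions of the matching classifiers advocating that action, which plays the role of an estimate of Q(s, a). The classifier structure and function names below are assumptions made for illustration.

```python
# Illustrative sketch only: XCS's system prediction viewed as an approximation of
# the state-action value function Q(s, a). Classifier fields and names are assumed
# for illustration and do not reproduce any specific implementation from the paper.
from dataclasses import dataclass

@dataclass
class Classifier:
    condition: str    # ternary condition string, e.g. "1#0#"
    action: int
    prediction: float
    fitness: float

    def matches(self, state: str) -> bool:
        # '#' is a don't-care symbol; other positions must match the state bit
        return all(c == '#' or c == s for c, s in zip(self.condition, state))

def q_value(population: list[Classifier], state: str, action: int) -> float:
    """Fitness-weighted average prediction of the matching classifiers for one action."""
    matching = [cl for cl in population if cl.action == action and cl.matches(state)]
    total_fitness = sum(cl.fitness for cl in matching)
    if total_fitness == 0.0:
        return 0.0  # no matching classifier advocates this action
    return sum(cl.prediction * cl.fitness for cl in matching) / total_fitness

# Example usage: two classifiers advocate action 0 in state "1101"
pop = [
    Classifier("1#0#", 0, prediction=10.0, fitness=0.8),
    Classifier("1#0#", 1, prediction=2.0, fitness=0.5),
    Classifier("##0#", 0, prediction=6.0, fitness=0.2),
]
print(q_value(pop, "1101", 0))  # -> 9.2, the fitness-weighted average for action 0
```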
2025
GECCO 2025 Companion - Proceedings of the 2025 Genetic and Evolutionary Computation Conference Companion
Learning Classifier Systems
XCS
Files in this record:
3712255.3735661.pdf — Publisher's version, open access, Adobe PDF, 996.09 kB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1298058
Citations
  • Scopus: 0