
2nd TempXAI Workshop for Explainable AI in Time Series and Data Streams

The workshop focuses on the crucial intersection of Explainable AI (XAI) and the challenges posed by time series and data streams. Our primary objectives include understanding dynamic interpretability, exploring techniques that offer transparent insights into time-evolving data, and providing a better understanding of machine learning models in dynamic environments. We aim to advance incremental explainability by investigating methods that keep interpretability effective as models adapt to changing data over time, or that can explain these changes themselves. We also seek to promote real-time decision-making by exploring applications of XAI in time-sensitive scenarios that require interpretable models. Finally, the workshop aims to share practical insights by encouraging contributions of novel XAI tools specific to time series and data streams, as well as case studies and practical implementations of interpretable machine learning in these settings.

The XAI for time series and data streams workshop welcomes papers that cover, but are not limited to, one or several of the following topics:

We welcome submissions of regular papers (8-16 pages) and extended abstracts (2-4 pages). Each submission will be double-blind peer-reviewed and, upon acceptance, presented and discussed at the workshop. Extended abstracts may describe works-in-progress or industrial experiences. We also welcome position papers (2 pages) presenting novel ideas, perspectives, or challenges in explainable AI for time series and data streams. At least one author of each accepted paper must register for the conference and attend the workshop. Accepted papers will be published in a joint post-workshop proceedings volume in the Springer Communications in Computer and Information Science (CCIS) series. Please format your papers according to the one-column Springer LNCS template found here.

Important dates

The workshop will comprise paper presentations, discussions, and invited talks. If the number of submissions warrants it, a poster session may also be included.

Program

Keynote speaker

Beyond Accuracy: The Dual Challenge of Effective and Explainable Time Series Classification
Keynote by Dr. Georgiana Ifrim, School of Computer Science, University College Dublin
This talk connects two important streams of research: the development of highly accurate time series classifiers and the growing demand for explainable AI (XAI). I will begin by examining inherently interpretable models like MrSEQL and MrSQM, which provide a foundation for trustworthy time series analysis. From there, we will bridge the gap to post-hoc explainability, with a deep dive into attribution-based methods and their specific adaptations for time series data. A key theme of this talk is the critical need for rigorous evaluation of our explanations. To that end, I will present recent frameworks, including AMEE and InterpretTime, that allow for the quantitative assessment of attribution methods. Throughout the talk, I will draw on compelling examples from the domain of human movement tracking using wearable sensors to illustrate how these techniques are being applied to better understand both our models and our data.

Organizers

Zahraa S. Abdallah, University of Bristol
Matthias Jakobs, TU Dortmund University
Panagiotis Papapetrou, Stockholm University
Amal Saadallah, TU Dortmund University
George Tzagkarakis, FORTH-ICS

Program Committee

Sponsors

This workshop is supported by the Federal Ministry of Education and Research of Germany and the state of North Rhine-Westphalia as part of the Lamarr Institute for Machine Learning and Artificial Intelligence.