Our Keynote speakers were kind enough to make their slides available! You can find them here:
- Andreas Theissler - Explainable AI for Time Series Classification and Anomaly Detection: Current state and open issues
- Panagiotis Papapetrou - Towards Explainable Time Series Classification
- George Tzagkarakis - Feature Engineering for Graph-based Analysis of Recurrent Behavior in Biosignal Ensembles
Call For Papers
Time series data is omnipresent, as time is an integral component of all observable phenomena. With the increasing instrumentation of our world, sensors and systems continuously generate vast quantities of time series data. Time series analysis is therefore integral to principled decision-making and planning across a wide variety of applications and industries, which has led to a growing demand for accurate and interpretable time series models. Explainable AI methods can help users understand the analysis and predictions made by these models and build trust in them. The integration of expert knowledge into learning pipelines for time series plays a major role in this context. This workshop will bring together experts and researchers to discuss recent developments and applications of Explainable AI for time series data.
XAI-TS welcomes papers that cover, but are not limited to, one or several of the following topics:
- Explainable AI methods for Time Series modeling
- Interpretable machine learning algorithms for Time Series
- Explainability metrics and evaluation, including benchmark datasets
- Case studies and applications of Explainable AI for Time Series
- Integration of domain knowledge in Time Series modeling
- Explainability Methods for Multivariate Time Series
- Explainable concept drift detection in Time Series
- Visual explanations for long Time Series data
- Explainable Time Series feature engineering
- Explainable Deep Learning for Time Series Modeling
- Explainable pattern discovery and recognition in Time Series
- Explainable anomaly detection in Time Series
- Explainable aggregation of Time Series
- Causality and stochastic process modeling
Submission and Dates
We welcome submissions of full papers (8-16 pages) and short papers (4 pages) reporting on original research. Submissions must follow the LNCS formatting style and will undergo double-blind review by at least two program committee members. We also welcome submissions of position papers (2 pages) presenting novel ideas, perspectives, or challenges in explainable AI for Time Series; these will be reviewed by the organizers. You can submit via Microsoft CMT here. At least one author of each accepted paper must have a full registration and be in Turin to present the paper. Papers without a full registration or in-person presentation will not be included in the post-workshop Springer proceedings.
- Paper Submission: ~~Wednesday, June 21, 2023~~ Saturday, June 24, 2023 (extended deadline)
- Author Notification: Monday, July 24, 2023
- Camera Ready Deadline: Sunday, August 20, 2023
- Workshop: Monday, September 18, 2023
The workshop will comprise paper presentations, discussions, and invited talks. If there is a large number of submissions, a poster session may also be included.
Matthias Jakobs is a Ph.D. student at TU Dortmund University and the Lamarr Institute for Machine Learning. His research includes the application of explainability methods to time series forecasting, with a focus on explainable model selection and ensembling. He was also a Program Committee member of the "Workshop on Trustworthy Artificial Intelligence" hosted at ECML-PKDD 2022.

Emmanuel Müller holds the chair of Data Science and Data Engineering at TU Dortmund University and is the Founding Director of the Research Center Trustworthy Data Science and Security within the University Alliance Ruhr. He has organized several tutorials and workshops at major Data Mining, Database, and Machine Learning conferences on unsupervised Machine Learning and edited a special issue of the Machine Learning Journal. He initiated and coordinated various data science education programs at the university level (M.Sc.), two graduate schools (Ph.D.), and multiple executive education programs (industry). He is part of the NRW graduate school DataNinja on Trustworthy AI for Seamless Problem Solving.

Amal Saadallah is a PostDoc at the Lamarr Institute for Machine Learning and Artificial Intelligence. She obtained her Ph.D. in computer science at the Technical University of Dortmund, Germany, in 2022. Her major research focus is on Online Time Series Forecasting, Adaptive Ensemble Learning, Active Learning, and Explainable AI for Industry 4.0 Applications. She has served as a PC member for several top conferences, including ECML PKDD, AAAI, IEEE ITSC, and IDA. She also co-organized a discovery challenge at EPIA 2017 and was a member and nominee of the European DatSci and AI Awards in 2019.
- Sebastian Buschjäger, TU Dortmund University, Germany
- Amal Saadallah, TU Dortmund University, Germany
- Matthias Jakobs, TU Dortmund University, Germany
- Emmanuel Müller, TU Dortmund University, Germany
- Nilah Ravi Nair, TU Dortmund University, Germany
- Jacopo De Stefani, TU Delft, The Netherlands
- Maja Schneider, TU Munich / Imperial College London, Germany / United Kingdom
- Chiara Balestra, TU Dortmund University, Germany
- Bin Li, TU Dortmund University, Germany