Show Me What You’re Looking For

Visualizing Abstracted Transformer Attention for Enhancing Their Local Interpretability on Time Series Data

Authors

DOI:

https://doi.org/10.32473/flairs.v34i1.128399

Keywords:

Transformers, Multi-Head Attention, Attention, Interpretability, Abstraction, Visualisation, Time Series

Abstract

While Transformers have demonstrated strong learning performance, their lack of explainability and interpretability remains a major problem. This applies in particular to the processing of time series, as a specific form of complex data. In this paper, we propose an approach for visualizing abstracted attention information in order to enable computational sensemaking and local interpretability of the respective Transformer model. Our results demonstrate the efficacy of the proposed abstraction method and visualization, using both synthetic and real-world data for evaluation.
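To make the idea of abstracting multi-head attention more concrete, the sketch below shows one plausible aggregation scheme: averaging attention weights across heads, reducing them to a per-timestep relevance score, and thresholding that score into a binary mask suitable for overlaying on a time series plot. The function name, the reduction choices, and the threshold are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def abstract_attention(attn, threshold=0.5):
    """Abstract multi-head attention into a binary relevance mask.

    attn: array of shape (heads, seq_len, seq_len) with attention weights.
    Returns an array of shape (seq_len,) with entries in {0, 1}.

    NOTE: this is an illustrative sketch, not the paper's exact procedure.
    """
    avg = attn.mean(axis=0)            # average attention over all heads
    relevance = avg.max(axis=0)        # strongest attention each position receives
    relevance = relevance / relevance.max()  # normalize to [0, 1]
    return (relevance >= threshold).astype(int)  # binarize by threshold

# Example: random attention over a sequence of 10 time steps, 4 heads
rng = np.random.default_rng(0)
raw = rng.random((4, 10, 10))
attn = raw / raw.sum(axis=-1, keepdims=True)  # rows sum to 1, like softmax
mask = abstract_attention(attn)
```

Such a mask can then be drawn alongside the input signal, highlighting the time steps the model attended to most strongly.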


Published

2021-04-18

How to Cite

Schwenke, L., & Atzmueller, M. (2021). Show Me What You’re Looking For: Visualizing Abstracted Transformer Attention for Enhancing Their Local Interpretability on Time Series Data. The International FLAIRS Conference Proceedings, 34. https://doi.org/10.32473/flairs.v34i1.128399

Issue

Section

Special Track: Neural Networks and Data Mining