Explainable Artificial Intelligence in Deep Learning-Based Solar Storm Predictions
DOI: https://doi.org/10.32473/flairs.38.1.138654

Keywords: Explainable AI, Solar Storm, Coronal Mass Ejection (CME), Solar Flare, Long Short-Term Memory (LSTM), Active Region (AR), SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanation (LIME)

Abstract
A deep learning model is often considered a black-box model, as its internal workings tend to be opaque to the user.
Because of this lack of transparency, it is challenging to understand the reasoning behind the model's predictions. Here, we present an approach to making a deep learning-based solar storm prediction model interpretable, where solar storms include solar flares and coronal mass ejections (CMEs). The model, built on a long short-term memory (LSTM) network with an attention mechanism, aims to predict whether an active region (AR) on the Sun's surface that produces a flare within 24 hours will also produce a CME associated with the flare. The crux of our approach is to model the data samples in an AR as time series and use the LSTM network to capture their temporal dynamics. To make the model's predictions accountable and reliable, we leverage post hoc, model-agnostic techniques that elucidate the factors contributing to the predicted output for an input sequence and provide insight into the model's behavior across multiple sequences within an AR. To our knowledge, this is the first time that interpretability has been added to an LSTM-based solar storm prediction model.
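
To make the post hoc explanation step concrete, below is a minimal Python sketch (PyTorch plus the shap library), not the authors' released code: the sequence length, feature count, CMEPredictor class, and random placeholder data are all illustrative assumptions. It wraps an attention-augmented LSTM behind a plain prediction function so that a model-agnostic explainer such as SHAP's KernelExplainer can attribute a predicted CME probability to the individual time steps and features of an input sequence; LIME can be applied through the same wrapper in the same way.

import numpy as np
import torch
import torch.nn as nn
import shap

SEQ_LEN, N_FEATURES = 10, 18  # assumed: 10 time steps of 18 AR features per sample

class CMEPredictor(nn.Module):
    """LSTM with additive attention; outputs P(flare-associated CME within 24 h)."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores each time step
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                           # x: (batch, seq_len, n_features)
        h, _ = self.lstm(x)                         # h: (batch, seq_len, hidden)
        w = torch.softmax(self.attn(h), dim=1)      # attention weights over time steps
        ctx = (w * h).sum(dim=1)                    # attention-weighted context vector
        return torch.sigmoid(self.head(ctx)).squeeze(-1)

model = CMEPredictor(N_FEATURES).eval()  # assume trained weights are loaded here

def predict_fn(flat_x):
    """Model-agnostic wrapper: SHAP passes 2-D arrays, so reshape back to sequences."""
    x = torch.tensor(flat_x, dtype=torch.float32).view(-1, SEQ_LEN, N_FEATURES)
    with torch.no_grad():
        return model(x).numpy()

background = np.random.rand(20, SEQ_LEN * N_FEATURES)      # placeholder for real AR training samples
explainer = shap.KernelExplainer(predict_fn, background)
sample = np.random.rand(1, SEQ_LEN * N_FEATURES)           # one flattened input sequence
shap_values = explainer.shap_values(sample, nsamples=200)  # per-(time step, feature) attributions

Aggregating such per-sequence attributions over all sequences in an AR would give the kind of region-level view of model behavior described above.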
License
Copyright (c) 2025 Adam Rawashdeh, Jason T. L. Wang, Katherine G. Herbert

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.