Explainable Artificial Intelligence in Deep Learning-Based Solar Storm Predictions

Authors

  • Adam Rawashdeh, New Jersey Institute of Technology
  • Jason T. L. Wang, New Jersey Institute of Technology
  • Katherine G. Herbert, Montclair State University

DOI:

https://doi.org/10.32473/flairs.38.1.138654

Keywords:

Explainable AI, Solar Storm, Coronal Mass Ejection (CME), Solar Flare, Long Short-Term Memory (LSTM), Active Region (AR), SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME)

Abstract

A deep learning model is often considered a black box, as its internal workings tend to be opaque to the user. Because of this lack of transparency, it is challenging to understand the reasoning behind the model's predictions. Here, we present an approach to making a deep learning-based solar storm prediction model interpretable, where solar storms include solar flares and coronal mass ejections (CMEs). The model, built on a long short-term memory (LSTM) network with an attention mechanism, aims to predict whether an active region (AR) on the Sun's surface that produces a flare within 24 hours will also produce a CME associated with the flare. The crux of our approach is to model the data samples in an AR as time series and use the LSTM network to capture their temporal dynamics. To make the model's predictions accountable and reliable, we leverage post hoc, model-agnostic techniques that elucidate the factors contributing to the predicted output for an input sequence and provide insights into the model's behavior across multiple sequences within an AR. To our knowledge, this is the first time that interpretability has been added to an LSTM-based solar storm prediction model.
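The abstract outlines a concrete pipeline: AR data samples arranged as time series, an attention-augmented LSTM classifier, and post hoc model-agnostic attributions (SHAP, LIME per the keywords) over the inputs. As a rough illustration of how such a pipeline fits together (a minimal sketch, not the authors' implementation), the Python below pairs a toy attention LSTM in PyTorch with a SHAP KernelExplainer; AttentiveLSTM, predict_fn, SEQ_LEN, N_FEAT, and the zero-vector background are all hypothetical placeholders rather than details taken from the paper.

```python
import numpy as np
import shap
import torch
import torch.nn as nn

SEQ_LEN, N_FEAT = 10, 18  # hypothetical sequence length and AR feature count


class AttentiveLSTM(nn.Module):
    """Toy LSTM with additive attention over time steps; binary CME-vs-flare-only output."""

    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)  # scores each time step
        self.head = nn.Linear(hidden, 1)  # binary logit

    def forward(self, x):                          # x: (batch, time, features)
        h, _ = self.lstm(x)                        # h: (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)     # attention weights over time
        context = (w * h).sum(dim=1)               # attention-weighted summary
        return torch.sigmoid(self.head(context))   # P(CME | flaring AR)


model = AttentiveLSTM(N_FEAT).eval()


def predict_fn(flat_x: np.ndarray) -> np.ndarray:
    """Wrap the model for SHAP/LIME: accepts flattened (n, SEQ_LEN * N_FEAT) arrays."""
    x = torch.tensor(flat_x, dtype=torch.float32).reshape(-1, SEQ_LEN, N_FEAT)
    with torch.no_grad():
        return model(x).numpy().ravel()


# KernelExplainer is model-agnostic: it only sees predict_fn, never the LSTM internals.
background = np.zeros((1, SEQ_LEN * N_FEAT), dtype=np.float32)  # toy baseline dataset
explainer = shap.KernelExplainer(predict_fn, background)
sample = np.random.rand(1, SEQ_LEN * N_FEAT).astype(np.float32)
phi = explainer.shap_values(sample, nsamples=200)
print(np.asarray(phi).reshape(SEQ_LEN, N_FEAT))  # one attribution per (time step, feature)
```

Flattening each sequence lets a tabular explainer assign one attribution to every (time step, feature) pair, matching the per-sequence view the abstract describes; lime's LimeTabularExplainer (or its RecurrentTabularExplainer variant for sequence inputs) could be applied to the same predict_fn wrapper in an analogous way.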

Published

14-05-2025

How to Cite

Rawashdeh, A., Wang, J. T. L., & Herbert, K. G. (2025). Explainable Artificial Intelligence in Deep Learning-Based Solar Storm Predictions. The International FLAIRS Conference Proceedings, 38(1). https://doi.org/10.32473/flairs.38.1.138654

Issue

Vol. 38 No. 1 (2025)

Section

Special Track: Explainable, Fair, and Trustworthy AI