Automated Assessment of Quality of Jupyter Notebooks Using Artificial Intelligence and Big Code

Authors

  • Priti Oli University of Memphis
  • Rabin Banjade University of Memphis
  • Lasang Jimba Tamang University of Memphis
  • Vasile Rus University of Memphis

DOI:

https://doi.org/10.32473/flairs.v34i1.128560

Keywords:

Jupyter Notebooks, Quality Assessment, Big Code, Reproducibility, Executability, Machine Learning, Deep Learning

Abstract

In this paper, we present an automated method to assess the quality of Jupyter notebooks. Quality is assessed in terms of reproducibility and executability. Specifically, we automatically extract a number of expert-defined features from each notebook, perform a feature selection step, and then train supervised binary classifiers to predict whether a notebook is reproducible and whether it is executable, respectively. We also experimented with semantic code embeddings to capture the notebooks' semantics. We evaluated these methods on a dataset of 306,539 notebooks and achieved an F1 score of 0.87 for reproducibility and 0.96 for executability (using expert-defined features) and an F1 score of 0.81 for reproducibility and 0.78 for executability (using code embeddings). Our results suggest that semantic code embeddings can determine the reproducibility and executability of Jupyter notebooks with good performance, and because they can be derived automatically, they have the advantage of not requiring expert involvement to define features.
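The pipeline described in the abstract (expert-defined features, a feature selection step, and a supervised binary classifier) can be illustrated with a minimal sketch. This is not the authors' exact implementation: the feature matrix, the choice of SelectKBest with an ANOVA F-test, and the random forest classifier are illustrative assumptions standing in for the paper's feature set and model.

```python
# Minimal sketch of the described approach: expert-defined notebook features
# -> feature selection -> binary classifier predicting reproducibility.
# Feature values here are synthetic stand-ins, not real notebook data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# One row per notebook; columns are hypothetical expert-defined features
# (e.g., number of code cells, number of imports, markdown-to-code ratio).
X = rng.normal(size=(1000, 20))
# Synthetic binary label (1 = reproducible) for demonstration only.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),          # feature selection step
    ("clf", RandomForestClassifier(random_state=0)),   # supervised binary classifier
])
pipeline.fit(X_train, y_train)
print("F1:", f1_score(y_test, pipeline.predict(X_test)))
```

The same pipeline shape applies to the embedding-based variant: the expert-defined feature matrix would simply be replaced by semantic code embeddings computed from the notebook cells.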


Published

2021-04-18

How to Cite

Oli, P., Banjade, R., Tamang, L. J., & Rus, V. (2021). Automated Assessment of Quality of Jupyter Notebooks Using Artificial Intelligence and Big Code. The International FLAIRS Conference Proceedings, 34. https://doi.org/10.32473/flairs.v34i1.128560

Issue

Vol. 34 No. 1 (2021)

Section

Special Track: Neural Networks and Data Mining