Detecting Anomalies in Sequences of Short Text Using Iterative Language Models

Authors

  • Cynthia Freeman
  • Ian Beaver
  • Abdullah Mueen

DOI:

https://doi.org/10.32473/flairs.v34i1.128551

Abstract

Business managers using Intelligent Virtual Assistants (IVAs) to enhance their company's customer service need ways to accurately and efficiently detect anomalies in conversations between the IVA and customers, a capability vital for customer retention and satisfaction. Unfortunately, anomaly detection is a challenging problem because of the subjective nature of what is defined as anomalous. Detecting anomalies in sequences of short texts, common in chat settings, is even more difficult because independently generated texts are similar only at a semantic level, resulting in an abundance of false positives. In addition, the literature on detecting anomalies in time-ordered sequences of short text is sparse considering the abundance of such data sets in online settings. We introduce a technique for detecting anomalies in sequences of short textual data by adaptively and iteratively learning low-perplexity language models. Our algorithm defines a short textual item as anomalous when its cross-entropy exceeds the upper confidence interval of a trained additive regression model. We demonstrate successful case studies and bridge the gap between theory and practice by finding anomalies in sequences of real conversations with virtual chat agents. Empirical evaluation shows that our method achieves, on average, 31% higher max F1 scores than the baseline method of non-negative matrix factorization across three large human-annotated sequences of short texts.
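
The anomaly criterion described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes an add-alpha smoothed bigram language model updated incrementally with each item, uses scikit-learn's GradientBoostingRegressor as a stand-in for the paper's additive regression model, and approximates the upper confidence interval with a Gaussian band over the regression residuals. All function names, parameters, and the toy data are hypothetical.

```python
import math
from collections import Counter

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor  # additive ensemble used as a stand-in


def bigram_cross_entropy(text, counts, vocab_size, alpha=1.0):
    """Cross-entropy (bits/token) of `text` under an add-alpha smoothed bigram model."""
    tokens = ["<s>"] + text.lower().split() + ["</s>"]
    log_prob = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        num = counts["bigram"][(prev, cur)] + alpha
        den = counts["unigram"][prev] + alpha * vocab_size
        log_prob += math.log2(num / den)
    return -log_prob / (len(tokens) - 1)


def detect_anomalies(texts, z=1.96):
    """Flag items whose cross-entropy exceeds an upper confidence bound of a
    regression fitted to the sequence of cross-entropy scores."""
    counts = {"bigram": Counter(), "unigram": Counter()}
    vocab = set()
    scores = []
    for text in texts:
        tokens = ["<s>"] + text.lower().split() + ["</s>"]
        vocab.update(tokens)
        # score the item against the model built from the items seen so far
        scores.append(bigram_cross_entropy(text, counts, len(vocab)))
        # then update the language model with the new item (iterative learning)
        counts["unigram"].update(tokens[:-1])
        counts["bigram"].update(zip(tokens, tokens[1:]))

    t = np.arange(len(scores)).reshape(-1, 1)
    y = np.array(scores)
    reg = GradientBoostingRegressor(n_estimators=25, max_depth=1).fit(t, y)
    fitted = reg.predict(t)
    upper = fitted + z * (y - fitted).std()  # crude Gaussian upper bound on the fit
    return [i for i, s in enumerate(scores) if s > upper[i]]


if __name__ == "__main__":
    chats = [
        "hello i need help with my bill",
        "hello can you help with billing",
        "hi i have a question about my bill",
        "can you help me with my bill please",
        "qwerty asdf zxcv lorem ipsum dolor",
        "hello i need help with my account",
    ]
    print(detect_anomalies(chats))  # indices of flagged items in this toy sequence
```

In this sketch the language model only ever sees items that appeared earlier in the sequence, so an item's cross-entropy reflects how surprising it is given the conversation history; the Gaussian residual band is a simplification of the confidence interval used in the paper.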

Published

18-04-2021

How to Cite

Freeman, C., Beaver, I., & Mueen, A. (2021). Detecting Anomalies in Sequences of Short Text Using Iterative Language Models. The International FLAIRS Conference Proceedings, 34. https://doi.org/10.32473/flairs.v34i1.128551

Section

Main Track Proceedings