Augmenting Training Data for a Virtual Character Using GPT-3.5
DOI: https://doi.org/10.32473/flairs.37.1.135552
Keywords: Large Language Models, Synthetic Data, Dialog Based Systems
Abstract
This paper compares different methods of using a large language model (GPT-3.5) to create synthetic training data for a retrieval-based conversational character. The training data take the form of linked questions and answers, which allow a classifier to retrieve a pre-recorded answer to an unseen question; the intuition is that a large language model could predict what human users might ask, saving the effort of collecting real user questions as training data. Results show small improvements in test performance for all of the synthetic datasets. However, a classifier trained on only a small amount of collected user data achieved a higher F-score than classifiers trained on much larger amounts of synthetic data generated with GPT-3.5. Based on these results, we see potential in using large language models to generate training data, but at this point it is not as valuable as collecting actual user data.
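The following minimal sketch illustrates the two steps the abstract describes: prompting GPT-3.5 to predict questions that human users might ask about each pre-recorded answer, and training a retrieval classifier on the resulting linked question-answer pairs. The prompt wording, the number of generated questions, the example answers, and the TF-IDF/logistic-regression retriever are illustrative assumptions, not details taken from the paper.

    from openai import OpenAI
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    client = OpenAI()  # reads OPENAI_API_KEY from the environment


    def synthesize_questions(answer: str, n: int = 10) -> list[str]:
        """Ask GPT-3.5 for n questions that the given answer would address.

        The prompt below is an illustrative assumption, not the paper's prompt.
        """
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{
                "role": "user",
                "content": (
                    f"Write {n} different questions, one per line, that a user "
                    f"might ask a virtual character, all of which this answer "
                    f"would address:\n\n{answer}"
                ),
            }],
        )
        lines = response.choices[0].message.content.splitlines()
        # Strip any list numbering GPT-3.5 may prepend to each question.
        return [q.lstrip("0123456789.) ").strip() for q in lines if q.strip()]


    # Hypothetical pre-recorded answers, each with an ID the classifier predicts.
    answers = {
        "a1": "I grew up in a small town before I joined the army.",
        "a2": "The best part of my job was getting to meet new people.",
    }

    questions, labels = [], []
    for answer_id, answer_text in answers.items():
        for question in synthesize_questions(answer_text):
            questions.append(question)
            labels.append(answer_id)

    # Retrieval as classification: map an unseen question to an answer ID,
    # then return the corresponding pre-recorded answer.
    retriever = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    retriever.fit(questions, labels)
    print(answers[retriever.predict(["Where are you from?"])[0]])

Any text classifier would serve as the retrieval component in this sketch; the TF-IDF/logistic-regression combination is chosen only for brevity, and the same synthetic questions could equally train the paper's classifier.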
License
Copyright (c) 2024 Elizabeth Chen, Ron Artstein

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.