Exploration of Word Embeddings with Graph-Based Context Adaptation for Enhanced Word Vectors

Authors

DOI:

https://doi.org/10.32473/flairs.37.1.135597

Abstract

Text plays a central role in information storage, necessitating streamlined and effective methods for swift retrieval. Among various text representations, the vector form stands out for its efficiency, especially when dealing with expansive datasets. Placing semantically similar words close to each other in the vector space improves performance across a range of Natural Language Processing tasks. Previous methods, primarily centered on capturing word context through neural language models, have fallen short of delivering high scores on word similarity problems. This paper investigates the connection between representing words in vector form and the improved performance and accuracy observed in Natural Language Processing tasks. It introduces a method that represents words as a graph, aiming to preserve their inherent relationships and to enhance overall semantic representation capabilities. Experimental deployment of this technique across diverse text corpora underscores its superiority over conventional word embedding approaches. The findings not only contribute to the evolving landscape of semantic representation learning but also illuminate its implications for text classification tasks, especially within the context of dynamic embedding models.
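The abstract describes the approach only at a high level. As a rough, hypothetical illustration of the general idea of graph-based word representation (not the paper's actual algorithm), the sketch below builds a simple word co-occurrence graph and compares words by the similarity of their adjacency rows; the function names, the `window` parameter, and the toy corpus are all assumptions made for this example.

```python
# Illustrative only: a toy co-occurrence graph whose adjacency rows act as
# sparse word vectors. This is NOT the method proposed in the paper.
from collections import defaultdict
import math

def build_cooccurrence_graph(sentences, window=2):
    """Build an undirected, weighted word co-occurrence graph.

    Nodes are words; edge weights count how often two words appear
    within `window` tokens of each other.
    """
    graph = defaultdict(lambda: defaultdict(int))
    for tokens in sentences:
        for i, u in enumerate(tokens):
            for j in range(i + 1, min(i + 1 + window, len(tokens))):
                v = tokens[j]
                if u != v:
                    graph[u][v] += 1
                    graph[v][u] += 1
    return graph

def cosine(graph, a, b):
    """Cosine similarity between two words' adjacency rows."""
    row_a, row_b = graph[a], graph[b]
    dot = sum(w * row_b.get(k, 0) for k, w in row_a.items())
    norm_a = math.sqrt(sum(w * w for w in row_a.values()))
    norm_b = math.sqrt(sum(w * w for w in row_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat chased the dog".split(),
]
g = build_cooccurrence_graph(corpus)
# "cat" and "dog" share distributional neighborhoods, so similarity is high.
print(cosine(g, "cat", "dog"))
```

In this toy setting, words with similar neighborhoods in the graph end up with similar rows, which is the intuition behind preserving word relationships in a graph structure; the paper's actual construction and embedding procedure are described in the full text.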


Published

2024-05-12

Section

Special Track: Applied Natural Language Processing