Exploration of Word Embeddings with Graph-Based Context Adaptation for Enhanced Word Vectors

Authors

T. Sandhu, Z. Kobti

DOI:

https://doi.org/10.32473/flairs.37.1.135597

Abstract

Text plays a central role in information storage, demanding streamlined and effective methods for fast retrieval. Among text representations, the vector form stands out for its efficiency, especially on large datasets. Placing semantically similar words close to each other in the vector space improves performance across a range of Natural Language Processing tasks. Previous methods, which primarily capture word context through neural language models, have fallen short on word similarity benchmarks. This paper investigates the connection between vector representations of words and the improved performance and accuracy observed in Natural Language Processing tasks. It introduces a method that represents words as a graph, aiming to preserve their inherent relationships and to strengthen semantic representation. Experiments with this technique across diverse text corpora demonstrate its advantages over conventional word embedding approaches. The findings not only contribute to the evolving landscape of semantic representation learning but also illuminate their implications for text classification tasks, especially in the context of dynamic embedding models.
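The abstract describes representing words as a graph that preserves their inherent relationships. The paper's own construction is not reproduced here; as a minimal illustrative sketch only, the following builds a weighted word co-occurrence graph from raw sentences, the kind of structure a graph-based context-adaptation method could start from. The function name, window size, and count-based edge weighting are assumptions, not the authors' design.

```python
# Illustrative sketch (not the paper's implementation): build a word
# co-occurrence graph where edge weights count how often two words
# appear within a fixed window of each other.
from collections import defaultdict

def build_cooccurrence_graph(sentences, window=2):
    """Return an adjacency map {word: {neighbor: co-occurrence count}}."""
    graph = defaultdict(lambda: defaultdict(int))
    for sentence in sentences:
        tokens = sentence.lower().split()
        for i, word in enumerate(tokens):
            # Count every token within `window` positions as a neighbor.
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if i != j:
                    graph[word][tokens[j]] += 1
    return graph

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
]
graph = build_cooccurrence_graph(corpus)
print(dict(graph["sat"]))  # neighbors of "sat" with their counts
```

A graph like this can then serve as contextual evidence for adjusting word vectors, e.g. pulling embeddings of strongly connected words closer together; the specific adaptation rule used in the paper is not shown here.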

Published

2024-05-12

How to Cite

Sandhu, T., & Kobti, Z. (2024). Exploration of Word Embeddings with Graph-Based Context Adaptation for Enhanced Word Vectors. The International FLAIRS Conference Proceedings, 37(1). https://doi.org/10.32473/flairs.37.1.135597

Issue

Vol. 37 No. 1 (2024)

Section

Special Track: Applied Natural Language Processing