
Unsupervised Multilingual Alignment using Wasserstein Barycenter


Date

2020-01-23

Authors

Lian, Xin

Publisher

University of Waterloo

Abstract

We investigate the language alignment problem in the setting where there are multiple languages and we are interested in finding translations between all pairs of languages. Language alignment has long been an exciting topic for Natural Language Processing researchers. Current methods for learning cross-domain correspondences at the word level rely on distributed representations of words, and recent advances in computational linguistics and neural language modeling have given rise to the so-called zero-shot learning paradigm. Many algorithms have been proposed to solve the bilingual alignment problem in supervised or unsupervised manners. One popular way to extend bilingual alignment to the multilingual setting is to pick one of the input languages as a pivot language and transit through it. However, transiting through an uninformed pivot language degrades translation quality, since it assumes transitive relations among all pairs of languages, relations that are typically not enforced when bilingual models are trained. Motivated by the observation that using information from other languages during training helps improve the translation of individual language pairs, we propose a new algorithm for unsupervised multilingual alignment in which the barycenter of all language word embeddings serves as a new pivot that implies translations. Instead of going through a pivot language, we align languages through their Wasserstein barycenter. The intuition is that the barycenter encapsulates information from all input languages and thereby facilitates bilingual alignment; it is closely related to the joint mapping for all input languages and hence captures the information useful for translation. We evaluate our method on standard benchmarks by jointly aligning word vectors in six languages and demonstrate a noticeable improvement over the current state of the art.
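To make the barycenter-as-pivot idea concrete, the following is a minimal sketch, not the thesis implementation: it uses the POT library (pip install pot) on randomly generated stand-in "embeddings" for three languages, computes a free-support Wasserstein barycenter, and composes the optimal-transport couplings through the barycenter to imply a cross-lingual word correspondence. All data, sizes, and helper names here are illustrative assumptions.

# Sketch only: Wasserstein barycenter of toy embedding clouds as a shared pivot.
import numpy as np
import ot

rng = np.random.default_rng(0)
n_words, dim = 50, 10

# Toy "word embeddings" for three languages, uniform weights over words.
langs = [rng.normal(size=(n_words, dim)) for _ in range(3)]
weights = [np.full(n_words, 1.0 / n_words) for _ in langs]

# Free-support Wasserstein barycenter: a point cloud minimizing the sum of
# squared-Wasserstein distances to all language clouds.
X_init = rng.normal(size=(n_words, dim))
bary = ot.lp.free_support_barycenter(langs, weights, X_init)

def coupling_to_barycenter(X):
    # Exact OT coupling between one language cloud and the barycenter.
    M = ot.dist(X, bary)                      # squared Euclidean cost matrix
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(bary), 1.0 / len(bary))
    return ot.emd(a, b, M)

# Compose couplings through the barycenter to imply a lang0 -> lang1
# soft word correspondence, instead of transiting through a pivot language.
P0 = coupling_to_barycenter(langs[0])
P1 = coupling_to_barycenter(langs[1])
P01 = P0 @ P1.T
print("implied matches for the first 10 words:", P01.argmax(axis=1)[:10])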

Description

Keywords

LC Keywords

Multilingual computing, Computational linguistics, Natural language processing (Computer science)

Citation