 

AfriBERTa: Towards Viable Multilingual Language Models for Low-resource Languages

dc.contributor.author: Ogueji, Kelechi
dc.date.accessioned: 2022-08-29T17:13:45Z
dc.date.available: 2022-08-29T17:13:45Z
dc.date.issued: 2022-08-29
dc.date.submitted: 2022-08-19
dc.description.abstract: There are over 7000 languages spoken on earth, but many of them suffer from a dearth of natural language processing (NLP) tools. Multilingual pretrained language models have been introduced to help alleviate this problem. However, even the largest pretrained multilingual models cover only hundreds of languages, a small fraction of those spoken worldwide. While these models have displayed impressive performance on several languages, including those they were not pretrained on, much ground remains to be covered. Many languages are left out because pretrained language models are assumed to require large amounts of training data, which these languages lack. Furthermore, a major motivation behind these models is that such lower-resource languages benefit from joint training with higher-resource languages. In this thesis, we challenge both assumptions and present the first attempt at training a multilingual language model on only low-resource languages. We show that it is possible to train competitive multilingual language models on less than one gigabyte of text data containing a selection of African languages. Our model, named AfriBERTa, covers 11 African languages, including the first language model for 4 of these languages. We evaluate this model on named entity recognition and text classification spanning 10 languages. Our evaluation results show that our model is very competitive with larger multilingual models (multilingual BERT and XLM-RoBERTa) on several languages. The results suggest that our “small data” approach based on similar languages may sometimes work better than joint training on large datasets with high-resource languages. Furthermore, we present a comprehensive discussion of the implications of our findings.
dc.identifier.uri: http://hdl.handle.net/10012/18662
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.relation.uri: https://huggingface.co/datasets/castorini/afriberta-corpus
dc.relation.uri: https://github.com/castorini/afriberta
dc.relation.uri: https://huggingface.co/castorini/afriberta_large
dc.subject: natural language processing
dc.subject: multilingual
dc.subject: language model
dc.subject: named entity recognition
dc.subject: pre-trained language model
dc.subject: text classification
dc.title: AfriBERTa: Towards Viable Multilingual Language Models for Low-resource Languages
dc.type: Master Thesis
uws-etd.degree: Master of Mathematics
uws-etd.degree.department: David R. Cheriton School of Computer Science
uws-etd.degree.discipline: Computer Science
uws-etd.degree.grantor: University of Waterloo
uws-etd.embargo.terms: 0
uws.contributor.advisor: Lin, Jimmy
uws.contributor.affiliation1: Faculty of Mathematics
uws.peerReviewStatus: Unreviewed
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.scholarLevel: Graduate
uws.typeOfResource: Text
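
The corpus, code, and model checkpoints released with this thesis are linked above via the dc.relation.uri fields. As an illustrative sketch only (not part of the record), the snippet below shows how the published castorini/afriberta_large checkpoint might be loaded with the Hugging Face transformers library for feature extraction; it assumes the checkpoint loads through the generic Auto* classes, and the explicit model_max_length setting is a precaution in case the tokenizer config omits it.

    # Sketch: load the released AfriBERTa checkpoint and extract contextual
    # representations. Assumes the `transformers` and `torch` packages are
    # installed and that the checkpoint works with the Auto* classes.
    import torch
    from transformers import AutoTokenizer, AutoModel

    MODEL_ID = "castorini/afriberta_large"  # one of the dc.relation.uri links above

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    tokenizer.model_max_length = 512  # assumption: set explicitly in case the config omits it
    model = AutoModel.from_pretrained(MODEL_ID)
    model.eval()

    # Encode an arbitrary example sentence and run a forward pass.
    inputs = tokenizer("AfriBERTa covers 11 African languages.", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden size)

For downstream tasks such as named entity recognition or text classification, the same checkpoint would typically be loaded through a task-specific head (e.g. a token- or sequence-classification class) and fine-tuned on labelled data.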

Files

Original bundle
Name: Ogueji_Kelechi.pdf
Size: 1.32 MB
Format: Adobe Portable Document Format
Description: Main article

License bundle
Name: license.txt
Size: 6.4 KB
Format: Item-specific license agreed upon to submission