Introducing various Semantic Models for Amharic: Experimentation and Evaluation with multiple Tasks and Datasets

11/02/2020
by Seid Muhie Yimam, et al.

The availability of different pre-trained semantic models has enabled the quick development of machine learning components for downstream applications. Despite the abundance of text data for low-resource languages, only a few semantic models are publicly available, and these are usually multilingual models that cannot fit each language well due to contextual variation. In this work, we introduce different semantic models for Amharic. After experimenting with existing pre-trained semantic models, we trained and fine-tuned nine new models on a monolingual text corpus. The models are built using word2vec embeddings, a distributional thesaurus (DT), contextual embeddings, and DT embeddings obtained via network embedding algorithms. Moreover, we employ these models in different NLP tasks and investigate their impact. We find that the newly trained models outperform the pre-trained multilingual models, and that models based on contextual embeddings from RoBERTa outperform the word2vec models.
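The abstract lists word2vec among the model types trained on the monolingual corpus. As a minimal sketch (not the authors' code), training such a model with gensim might look like the following; the corpus path, file format, and hyperparameter values are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of training a monolingual word2vec model with gensim,
# one of the model types the paper describes. Not the authors' code:
# "amharic_corpus.txt" (one whitespace-tokenized sentence per line) and
# all hyperparameter values are assumptions for illustration.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

sentences = LineSentence("amharic_corpus.txt")  # hypothetical corpus file

model = Word2Vec(
    sentences=sentences,
    vector_size=300,  # embedding dimensionality
    window=5,         # context window size
    min_count=5,      # drop tokens rarer than this
    sg=1,             # 1 = skip-gram, 0 = CBOW
    workers=4,        # parallel training threads
)
model.save("amharic_w2v.model")

# Query nearest neighbours for a word in the vocabulary, e.g. "ቤት" (house):
# print(model.wv.most_similar("ቤት", topn=10))
```

The contextual models mentioned in the abstract would instead typically be fine-tuned and queried through a RoBERTa-style transformer library rather than gensim; the paper's evaluation compares both families across its downstream tasks.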


