Human-Machine Interaction Speech Corpus from the ROBIN project

11/22/2021
by   Vasile Pais, et al.
RACAI

This paper introduces a new Romanian speech corpus from the ROBIN project, called the ROBIN Technical Acquisition Speech Corpus (ROBINTASC). Its main purpose was to improve the behaviour of a conversational agent, allowing human-machine interaction in the context of purchasing technical equipment. The paper contains a detailed description of the acquisition process and corpus statistics, as well as an evaluation of the corpus' influence on a low-latency ASR system and on a dialogue component.


I Introduction

ROBIN (http://aimas.cs.pub.ro/robin/en/) is a user-centred project aiming to develop software and services for human interaction with robots within a digital interconnected society. It focuses on several types of robots: assistive robots targeting users with special needs (people with medical problems or the elderly), robots for interaction with clients, and software robots that can be installed on vehicles for (semi)autonomous driving. One of the objectives of the ROBIN-Dialog component project (http://aimas.cs.pub.ro/robin/en/robin-dialog/) was the creation of the Romanian language resources and processing tools necessary for a robot to communicate with users in tasks defined within several micro-worlds. One example of a micro-world is the interaction within the notebooks department of an electronics store. This micro-world comprises the physical space occupied by the department; the notebooks sold by the store, the characteristics on the basis of which customers decide what to buy, their availability and the provisional date of their becoming available; the robot; and the customers who interact with the robot to find the notebook they want to purchase or the right configuration for their needs. Acting as a shop assistant, the robot must be aware of the products the department sells, their availability, their characteristics, as well as the usage scenarios they are adequate for (e.g. notebooks for gaming, design, programming, etc.).

Previously, [1] and [2] described the natural language processing pipeline being used, as well as the dialogue manager for micro-worlds. Furthermore, [3] and [4] presented a low-latency automatic speech recognition (ASR) system developed and used within the ROBIN project. This paper introduces a new speech corpus recorded to improve the performance of the ASR system and, further, of the entire pipeline. The paper is structured as follows: Section II presents related work, including other available Romanian speech resources; Section III describes the corpus acquisition process; Section IV contains relevant corpus statistics; and Section V considers the impact of the new corpus within the ROBIN project. We conclude in Section VI.

II Related work

Corpus Speech Type Domain # Hours # Utterances # Speakers
RSC [9] Read Wikipedia 100 136.1k 164
RoDigits [10] Read Digits 37.5 15.4k 154
SWARA [19] Read Newspapers 21 19k 17
RO-GRID [24] Read General 6.6 4.8k 12
RSS [8] Read Novels, news 5.5 5.7k 3
RASC [23] Read Wikipedia 4.8 3k -
CV [11] Read Wikipedia 9 8k 130
VoxPopuli [15] Spontaneous Legal 83 27k 164
MaSS [22] Read Bible 23 8.1k 1
ROBINTASC Read Technology 6.5 3.8k 6
TABLE I: Public Romanian speech corpora statistics.

Compared to better-resourced languages, such as English, speech resources available for the Romanian language are few in number. The representative corpus of the contemporary Romanian language (CoRoLa) [5] contains a spoken component that can be queried via the OCQP platform (http://corolaws.racai.ro/corola_sound_search/index.php) [6]. Currently, it contains professional recordings from various sources (radio stations, recording studios), broadcast news, and extracts from the Romanian Wikipedia read by non-professionals (recorded in non-professional environments). In the context of the ReTeRom project (https://www.racai.ro/p/reterom/), the CoBiLiRo platform [7] was built to allow the gathering of additional bimodal corpora, one of the final goals being to enrich the CoRoLa corpus.

The Read Speech Corpus (RSC)[9] contains 100 hours collected from 164 native speakers, mainly students and faculty staff, with an age average of 24 years. The sentences were selected from novels, online news and from a list of words that covered all the possible syllables in Romanian.

The RoDigits[10] corpus contains 37.5 hours of spoken connected digits from 154 speakers whose ages vary between 20 and 45. Each speaker recorded 100 clips of 12 randomly generated Romanian digits, and after the semi-automated validation, the final corpus contains 15,389 audio files.

SWARA [19] is a corpus that comprises speech data collected from 17 speakers which was manually segmented at the utterance-level, resulting in a dataset of approximately 21 hours of transcribed speech, split into over 19,000 audio-text pairs.

The RO-GRID [24] dataset was developed by reading sequences of six words chosen from a list of alternatives. The first three words were designated as "keywords" and the speaker had to utter all combinations, resulting in 400 combinations. The last three words were designated as "fillers" and were randomly chosen while creating the sentence. The final corpus contains 6.6 hours of audio from 12 speakers.

The Romanian Speech Synthesis (RSS) [8] corpus was designed for speech synthesis and contains 4 hours of speech from a single female speaker recorded with multiple microphones. The speaker read 4,000 sentences chosen for diphone coverage, extracted from novels, newspapers and fairy-tales. RSS was later extended with over 1,700 utterances from two new female speakers, now comprising 5.5 hours of speech.

The Romanian Anonymous Speech Corpus (RASC) [23] is a dataset collected through crowd-sourcing: an open interactive platform gathers Romanian spoken data from the general population. The corpus currently contains 4.8 hours of transcribed audio.

The Common Voice (CV) [11] corpus is a massively multilingual dataset of transcribed speech. At the moment of this writing, the Romanian version contains 9 hours of transcribed audio (6 hours validated) recorded by 130 speakers, using sentences from the Romanian Wikipedia.

VoxPopuli [15] is a large-scale multilingual corpus that contains 100,000 hours of raw audio in 23 languages and 1,800 hours of transcribed speech in 16 languages. One of the languages found in the corpus is Romanian, with 4,500 hours of unlabelled speech and 83 hours of transcribed audio.

The Multilingual corpus of Sentence-aligned Spoken utterances (MaSS) [22] is a speech dataset based on readings of the Bible. It contains 8,130 parallel spoken utterances in eight languages, thus also allowing the construction of end-to-end speech translation systems. The Romanian version contains 23 hours of spoken data.

Table I summarizes the statistics of the publicly available Romanian speech corpora presented above.

III Corpus acquisition

The ROBINTASC corpus was collected at RACAI during 2020, as part of the ROBIN project. It was recorded by 6 speakers of different genders (3 male and 3 female) and ages. For recording purposes, the RELATE [12] platform was extended to allow audio files to be stored, recorded and listened to.

The audio processing component is activated if a corpus is created within the platform by specifying that it contains audio files. This enables all bimodal processing features. Since we start with text sentences for which we aim to provide recordings, the first step is to upload the associated texts. These can be uploaded either as separate text files or as a single CSV file containing each sentence on a different line. In the latter case, the platform allows specifying the column containing the text as well as CSV characteristics such as headers, column separators, enclosing characters and optional characters indicating comments (lines to be skipped).
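As a minimal sketch of this upload step, the sentence extraction from such a CSV could look as follows; the column name, delimiter and comment character are hypothetical placeholders for the options the platform exposes, not RELATE's actual implementation:

```python
import csv

def load_sentences(csv_text, text_column="sentence", delimiter=",", comment_char="#"):
    """Extract the sentences to be recorded from a CSV upload.

    `text_column`, `delimiter` and `comment_char` stand in for the
    user-configurable CSV characteristics described in the text.
    """
    # Drop comment lines (lines to be skipped) before parsing.
    lines = [ln for ln in csv_text.splitlines()
             if not ln.lstrip().startswith(comment_char)]
    reader = csv.DictReader(lines, delimiter=delimiter)
    return [row[text_column].strip() for row in reader if row.get(text_column)]

# Example upload with a header row and a skipped comment line.
data = "id,sentence\n# draft rows below\n0,Pa!\n1,Care e cel mai scump leptop?"
print(load_sentences(data))  # ['Pa!', 'Care e cel mai scump leptop?']
```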

Once the text files are uploaded, speakers can access the audio recorder. This is implemented in JavaScript and works within the RELATE general HTML template. When a speaker first accesses the component, they are asked for a pseudonym that will be used as part of the file name for all their recordings. The speaker is presented with a single sentence and a "Start" button, together with information about the current sentence number and the total number of sentences. Thus the speaker has the opportunity to read and understand the sentence before starting the actual recording. The interface is presented in Fig. 1.

Fig. 1: Sound recording component integrated in the RELATE platform.

Recordings are stored as WAV files with a sample rate of 44.1 kHz using 16-bit signed integers. The recording component has a PHP back-end allowing it to store the files in the bimodal corpus, together with the associated text. In order to allow multiple speakers to record the same sentences, the file name incorporates the speaker pseudonym, thus creating unique file names for each of the speakers. Furthermore, in the case of text uploaded as CSV files, the file name also contains the line number from the corresponding CSV file.

We did not use a "studio" environment for performing the recording. Instead, each speaker used their own hardware (headphones or a dedicated microphone) to make the recordings. At any time after a sentence is recorded, the speaker (or another person given access to the corpus) can listen to the recording, download the associated WAV file and, if issues were detected during recording (e.g., an unwanted noise occurred or the speaker realises the pronunciation was incorrect), delete it. The deletion of a recording causes the associated sentence to re-appear in the recording component, enabling the speaker to re-record it.

After all the sentences were recorded, as part of the packaging process, the text was annotated using UDPipe [13] as integrated in the RELATE platform [14]. This provides linguistic annotations such as part-of-speech (using both universal part-of-speech tags, https://universaldependencies.org/u/pos/, and language-dependent MSD tags), lemmatization and dependency relations. No phonetic transcription is made. The resulting annotations are stored in tab-separated CoNLL-U files (https://universaldependencies.org/format.html).

Finally, a script was created to gather all the generated files (raw text, text annotations, sound recordings), anonymize the speakers, add metadata and create a single archive with the corpus. Text file names use the pattern Sn.txt where n is the sentence number (starting with 0 and ending with 710). Corresponding annotation files use the pattern Sn.conllu. Sound files use the pattern Sn_s.wav, where n continues to represent the sentence number and s represents the speaker number (from 1 to 6).
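The naming scheme above can be sketched as a small helper; the flat directory layout is an assumption, since the paper only specifies the file-name patterns:

```python
from pathlib import Path

def corpus_paths(sentence_no, speaker_no, root=Path(".")):
    """Files associated with one recording, following the naming scheme
    described in the paper: Sn.txt, Sn.conllu and Sn_s.wav, where n is the
    sentence number (0-710) and s the speaker number (1-6)."""
    stem = f"S{sentence_no}"
    return (root / f"{stem}.txt",       # raw text
            root / f"{stem}.conllu",    # UDPipe annotation
            root / f"{stem}_{speaker_no}.wav")  # sound recording

txt, ann, wav = corpus_paths(42, 3)
print(txt.name, ann.name, wav.name)  # S42.txt S42.conllu S42_3.wav
```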

A metadata file was generated with corpus and speaker characteristics, including the number of sentences, total duration, speakers' gender and age, the number of files recorded by each speaker, and information about the recording device used. In order to anonymize the corpus, each speaker's age is given only as an interval (for example, "40-50" years).

IV Corpus statistics

Statistics were computed at all levels: audio files, raw text and annotated text. For text-related statistics, the RELATE platform was used, while audio information was extracted using the soxi utility from the SoX (Sound eXchange, http://sox.sourceforge.net/) software package. Audio statistics are given in Table II and text-related statistics in Table III.

Statistic Value
Number of WAV files 3786
Total duration 6h25m03s
Minimum duration 1.02s
Maximum duration 12.91s
Average duration 6.10s
Total size 1.89 GB
Sample rate 44.1 kHz
Channels 1
Encoding Signed 16-bit PCM
TABLE II: Audio statistics
Statistic Value
Number of text files 711
Total text size 57 KB
Maximum file size 122 B
Minimum file size 3 B
Average file size 81.8 B
Number of tokens 11,927
Unique tokens 222
Unique lemmas 191
Hapax legomena 58
TABLE III: Text statistics

The smallest text and the corresponding smallest audio recording, as indicated in the statistics tables, are associated with the simple interaction "Pa!" ("Bye!"). An example from an average-sized text file is: "Care e cel mai scump leptop acer, cu placă grafică dedicată tesla pe o sută și opt gigabaiți ram?" ("Which is the most expensive Acer laptop, with a dedicated Tesla P100 graphics board and eight gigabytes of RAM?"). The Romanian text is written according to the pronunciation of the English words, not their written form. Furthermore, numbers are written out explicitly as words.

Lemma Occurrences
leptop 605
placă 441
grafic 441
gigabait 419
ram 350
dedica 340
scump 220
ieftin 220
sută 203
mie 200

TABLE IV: Most frequent 10 lemmas

The most frequent lemmas (given in Table IV) show that most of the sentences focus on the acquisition of laptops. Notice that the word for computer memory ("ram") appears in over half of the sentences. Even though the text corpus is rather small, the number of hapax legomena (words appearing only once) is low: only 58 words, as indicated in Table III.
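The token-level figures of Table III amount to a simple frequency count over the corpus tokens; a minimal sketch:

```python
from collections import Counter

def lexical_counts(tokens):
    """Token statistics of the kind shown in Table III: total tokens,
    unique tokens and hapax legomena (tokens occurring exactly once)."""
    freq = Counter(tokens)
    return {
        "tokens": len(tokens),
        "unique": len(freq),
        "hapax": sum(1 for c in freq.values() if c == 1),
    }

print(lexical_counts(["leptop", "ram", "leptop", "tesla"]))
# {'tokens': 4, 'unique': 3, 'hapax': 2}
```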

Tag Occurrences # Unq. Lemmas
NOUN 2,675 66
ADJ 1,698 32
DET 1,211 7
NUM 1,089 21
ADP 919 9
VERB 558 29
ADV 514 14
PRON 485 5
AUX 467 3
TABLE V: Most frequent part-of-speech tags

The most frequent part-of-speech tags are presented in Table V. Nouns and adjectives are the most frequent, reflecting the need to cover computer parts with different characteristics in the corpus. Numerals are the fourth most frequent tag, corresponding to the various quantities associated with the computer parts present in the text.

The lexical diversity of the corpus is given by the number of unique lemmas and their proportion relative to the total number of occurrences for each part of speech. As the last column of Table V shows, the corpus is not very lexically diverse; our aim was rather to capture a variety of ways in which the relevant terms of this micro-world are pronounced.
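This per-tag diversity measure can be sketched as follows; the input is assumed to be (UPOS, lemma) pairs read from the CoNLL-U annotations:

```python
from collections import defaultdict

def pos_diversity(tagged_lemmas):
    """Per-tag occurrence counts, unique-lemma counts and their ratio
    (the diversity measure discussed in the text), computed from
    (UPOS, lemma) pairs."""
    occ = defaultdict(int)
    lemmas = defaultdict(set)
    for tag, lemma in tagged_lemmas:
        occ[tag] += 1
        lemmas[tag].add(lemma)
    return {tag: (occ[tag], len(lemmas[tag]), len(lemmas[tag]) / occ[tag])
            for tag in occ}

pairs = [("NOUN", "leptop"), ("NOUN", "leptop"), ("NOUN", "placă"), ("ADJ", "scump")]
print(pos_diversity(pairs))  # NOUN: 3 occurrences, 2 unique lemmas; ADJ: 1 and 1
```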

Spk Gender Age Audio Files
1 M 40-50 233
2 M 30-40 711
3 M 20-30 711
4 F 30-40 711
5 F 40-50 709
6 F 40-50 711

TABLE VI: Speaker statistics

Speaker related statistics are presented in Table VI. This includes the gender, age group and the number of recordings.

V Corpus usage within the ROBIN project

The primary reason behind the construction of the ROBINTASC corpus was the improvement of the ROBIN project's components involved in the micro-world scenario associated with human-robot interaction in a computer store. The following sub-sections present an overview of the influence of this corpus on two software components: the ASR system and the dialogue manager.

V-A Automatic Speech Recognition

The baseline ASR system [3, 4] was trained on 230 hours of Romanian speech and closely follows the Deep Speech 2 architecture [16]: 2 convolutional 2D layers [17], 4 Long Short-Term Memory (LSTM) layers [18] of 768 neurons, 1 look-ahead layer [16] and 1 dense layer, on top of which the softmax function was applied to create the output distribution over the possible characters. The ROBINTASC fine-tuned version of the baseline ASR system started from the baseline weights and was fine-tuned on the training part of the ROBINTASC corpus. The KenLM language model used to correct the transcriptions was also modified in the fine-tuned version to better mimic the ROBINTASC word distribution, by repeating each sentence of the text part of the ROBINTASC training portion 10 times. This text replication step was performed in order to reuse an already existing automatic processing pipeline; it is not a limitation of the model itself, which could have been adjusted through the model's weights instead of replicating the text.

The transcription performance was assessed on a test corpus containing new sentences pronounced by one female and one male voice that also recorded samples in the ROBINTASC training part, with speaker ids 5 (F5-test) and 1 (M1-test) respectively, together with a new male voice (M-new). It is known that WER (and CER) values are better on test sets sampled from the training data than on unseen data [21] containing voices that did not participate in the recording of the training data. We therefore wanted to evaluate the close-to-real-world performance of the fine-tuned ASR system against the baseline version.

The test corpus contains 50 questions designed to stress-test the ability of the fine-tuned ASR to adapt to the computer store domain. These sentences contain computer hardware-related names found in ROBINTASC (e.g. Intel, CUDA, NVIDIA), but also new company names (e.g. Nokia, Siriux) and device names (e.g. "smart phone"). All English words were phonetically transcribed into Romanian, following the design principles of ROBINTASC, in order to see whether the ASR system can learn English pronunciations (e.g. "smart făun/fon" for "smart phone").

We evaluated both the baseline ASR system and the ROBINTASC fine-tuned ASR system on the test corpus. The results of the two versions are outlined in Table VII. The fine-tuning process improved the performance of the model for all three speakers, reducing the average WER by 16.3 and the average CER by 7.8 absolute percentage points. The highest and the lowest improvements were obtained on the female voice from the training data (F5-test), 24.16% WER, and on the male voice from the training data (M1-test), 10.34% WER, respectively. The performance on the new male voice (M-new) improved by 14.33% WER with fine-tuning.

Baseline Fine-tuned
Model WER CER WER CER
M1-test 38.71 9.42 28.37 9.09
F5-test 81.23 48.41 57.07 29.71
M-new 59.21 26.28 44.88 21.83
Average 59.71 28.03 43.44 20.21
TABLE VII: ASR evaluation results using baseline and fine-tuned versions on the three voices found in ROBINTASC test: male from train (M1-test), female from train (F5-test) and the new male voice (M-new).
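The WER figures in Table VII are word-level edit distances normalized by reference length; a self-contained sketch of the metric (not the project's actual evaluation code) is:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance divided by the
    number of reference words. CER is the same computation over characters."""
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))  # edit distances for an empty reference
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution / match
        prev = cur
    return prev[-1] / len(ref)

# One deleted word ("e") and one substitution (leptop -> laptop): 2 errors
# over 6 reference words.
print(wer("care e cel mai scump leptop", "care cel mai scump laptop"))
```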

Looking at the generated transcriptions, we can explain some of the errors in the following way:

  1. some clitics were not properly transcribed: ”haș pe -ul” vs. ”haș pe ul”, ”care-l” vs. ”care -l” or ”care îl” (see also the discussion on the treatment of clitics in [25]);

  2. one word is sometimes recognized as two consecutive words: ”ultra portabil” instead of ”ultraportabil”;

  3. some of the English terminology in the test corpus has more than one possible phonetic transcription, and all of these were used in ROBINTASC: "uindous", "uindos" or "uindăus" for the English "Windows";

  4. in general, new English phonetically transcribed terminology is not properly recognized: ”ol in oane” (”All in one”) vs. ”oli man” or ”linăx cent ău es” (”Linux CentOS”) vs. ”linăx centes”.

Other reasons for the high WER values, for both the baseline and fine-tuned models, can be attributed to the different recording conditions and the amount of data used to train the models.

V-B Dialogue manager

The ROBIN Dialogue Manager (RDM, [20]) is a Java-based dialogue manager that works with micro-worlds. A micro-world is a set of definitions of spoken-about concepts, predicates that hold among them, ASR and TTS systems that work well in the micro-world and any other piece of information that would make an autonomous system (e.g. a robot) handle specific tasks in the micro-world. In the case of the notebook department of an electronics store micro-world, the robot should be able to give technical details and pricing for the existing stock of laptops.

RDM has been designed to work on the Pepper robot (https://www.softbankrobotics.com/emea/en/pepper), enabling it to listen and respond to users' questions in Romanian. When enough information has been gathered through the conversation, RDM can supply predefined action items to the robot's planning algorithm, e.g. "Let me find out if your laptop is in stock."

We empirically evaluated RDM with the baseline and the fine-tuned ASR systems, by asking it different questions appropriate to the electronics store micro-world. While we do not have a quantitative measure of how much better the fine-tuned ASR system is, it was significantly better than the baseline, mainly because English terminology was not handled at all by the baseline model but was handled acceptably well by the fine-tuned system, as long as the English terms appeared in the ROBINTASC corpus, e.g. "leptop" (English "laptop"), "haș pe" ("HP"), "gigabaiți" ("GB"), "epăl" ("Apple"), etc. This indicates that the fine-tuned ASR can be further improved with new English terms, should the need arise.

VI Conclusions

This paper introduced ROBINTASC, a new Romanian language speech corpus from the ROBIN project. We have shown that it had a positive influence on two components developed within the ROBIN project, namely an ASR system based on the Deep Speech 2 architecture and a dialogue manager developed for micro-world scenarios. The corpus is open source, available under a Creative Commons Attribution NonCommercial NoDerivatives 4.0 (CC BY-NC-ND 4.0) license (https://creativecommons.org/licenses/by-nc-nd/4.0/), and can be downloaded from the Zenodo platform (https://doi.org/10.5281/zenodo.4626540).

Acknowledgment

The research described in this article was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS – UEFISCDI, project number PN-III 72PCCDI ⁄ 2018, ROBIN – “Roboții și Societatea: Sisteme Cognitive pentru Roboți Personali și Vehicule Autonome”.

References

  • [1] D. Tufiș, V. Barbu Mititelu, E. Irimia, M. Mitrofan, R. Ion and G. Cioroiu, ”Making Pepper Understand and Respond in Romanian”, 2019 22nd International Conference on Control Systems and Computer Science (CSCS), Bucharest, Romania, 2019, pp. 682-688, doi: 10.1109/CSCS.2019.00122.
  • [2] R. Ion, V. Badea, G. Cioroiu, V. Barbu Mititelu, E. Irimia, M. Mitrofan, and D. Tufiș, ”A Dialog Manager for Micro-Worlds”, Studies in Informatics and Control, 2020, 29(4), pp. 411-420.
  • [3] A.M. Avram, V. Păiș, D. Tufiș, "Towards a Romanian end-to-end automatic speech recognition based on DeepSpeech2", 2020, Proc. Ro. Acad., Series A, Volume 21, No. 4, pp. 395-402.

  • [4] A.M. Avram, V. Păiș, D. Tufiș, ”Romanian speech recognition experiments from the ROBIN project”, Proceedings of the 15th International Conference Linguistic Resources and Tools for Natural Language Processing (CONSILR), 2020, pp. 103-114.
  • [5] D. Tufiș, V. Barbu Mititelu, E. Irimia, V. Păiș, R. Ion, N. Diewald, M. Mitrofan, M. Onofrei, ”Little strokes fell great oaks. Creating CoRoLa, the reference corpus of contemporary Romanian”, Revue Roumaine de linguistique, 2019, LXIV (3).
  • [6] T. Boroș, Ș. Dumitrescu, and V. Păiș, ”Tools and resources for Romanian text-to-speech and speech-to-text applications”, Proceedings of the International Conference on Human-Computer Interaction – RoCHI 2018, pp 46-53.
  • [7] D. Cristea, I. Pistol, Ș. Boghiu, A.D. Bibiri, D. Gîfu, A. Scutelnicu, M. Onofrei, D. Trandabăț, G. Buceag, ”CoBiLiRo: A Research Platform for Bimodal Corpora”, Proceedings of the 1st International Workshop on Language Technology Platforms (IWLTP 2020), pp. 22–27, Language Resources and Evaluation Conference (LREC 2020), Marseille, 11–16 May 2020.
  • [8] A. Stan, J. Yamagishi, S. King, M. Aylett, ”The Romanian speech synthesis (RSS) corpus: Building a high quality HMM-based speech synthesis system using a high sampling rate”, Speech Communication, 2011, pp. 442-450.
  • [9] A.L. Georgescu, H. Cucu, A. Buzo and C. Burileanu, ”RSC: A Romanian Read Speech Corpus for Automatic Speech Recognition”, Proceedings of the 12th Language Resources and Evaluation Conference, Marseille, France, 2020, pp. 6606-6612.
  • [10] A.L. Georgescu, A. Caranica, H. Cucu and C. Burileanu, ”Rodigits-A Romanian Connected-Digits Speech Corpus For Automatic Speech And Speaker Recognition”, University Politehnica of Bucharest Scientific Bulletin, Series C, 2018, Vol. 80, Iss. 3, pp. 45-62.
  • [11] R. Ardila, M. Branson, K. Davis, M. Henretty, M. Kohler, J. Meyer, R. Morais, L. Saunders, F.M. Tyers, and G. Weber, ”Common voice: A massively-multilingual speech corpus”, arXiv:1912.06670, 2019.
  • [12] V. Păiș, R. Ion, D. Tufiș, ”A Processing Platform Relating Data and Tools for Romanian Language”, Proceedings of the 1st International Workshop on Language Technology Platforms, European Language Resources Association, 2020, pp. 81-88.
  • [13] M. Straka, J. Hajic, J. Straková, ”UDPipe: Trainable Pipeline for Processing CoNLL-U Files Performing Tokenization, Morphological Analysis, POS Tagging and Parsing”, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC), Portorož, Slovenia, 2016.
  • [14] V. Păiș, ”Multiple annotation pipelines within the RELATE platform”, Proceedings of the 15th International Conference Linguistic Resources and Tools for Natural Language Processing (CONSILR), 2020, pp. 65-75.
  • [15] C. Wang, M. Rivière, A. Lee, A. Wu, C. Talnikar, D. Haziza, M. Williamson, J. Pino, E. Dupoux, "VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation", arXiv preprint arXiv:2101.00390.
  • [16] D. Amodei, S. Ananthanarayanan, R. Anubhai, J. Bai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, Q. Cheng, G. Chen, J. Chen, "Deep speech 2: End-to-end speech recognition in English and Mandarin", Proceedings of the 33rd International Conference on Machine Learning (PMLR), 2016, pp. 173-182.

  • [17] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, ”Gradient-based learning applied to document recognition”, Proceedings of the IEEE, 1998, pp. 2278-324.
  • [18] S. Hochreiter and J. Schmidhuber, ”Long short-term memory”. Proceedings of Neural computation, 1997, pp. 1735-1780.
  • [19] A. Stan, F. Dinescu, C. Ţiple, Ș. Meza, B. Orza, M. Chirilă, M. Giurgiu, ”The SWARA speech corpus: A large parallel Romanian read speech dataset”. Proceedings of the 9th International Conference on Speech Technology and Human-Computer Dialogue (SpeD), 2017, pp. 1-6.
  • [20] R. Ion, V. G. Badea, G. Cioroiu, V. Barbu Mititelu, E. Irimia, M. Mitrofan, D. Tufiș, ”A Dialog Manager for Micro-Worlds”. Studies in Informatics and Control, ISSN 1220-1766, vol. 29(4), pp. 411-420, 2020. https://doi.org/10.24846/v29i4y202003
  • [21] T. Likhomanenko, Q. Xu, V. Pratap, P. Tomasello, J. Kahn, G. Avidov, R. Collobert, G. Synnaeve, ”Rethinking Evaluation in ASR: Are Our Models Robust Enough?”. arXiv:2010.11745v3 [cs.LG]
  • [22] M. Z. Boito, W. N. Havard, M. Garnerin, E. L. Ferrand, L. Besacier, ”Mass: A large and clean multilingual corpus of sentence-aligned spoken utterances extracted from the bible”, Proceedings of The 12th Language Resources and Evaluation Conference (LREC), 2020, pp. 6486-6493.
  • [23] S. D. Dumitrescu, T. Boroș, R. Ion. ”Crowd-sourced, automatic speech-corpora collection–Building the Romanian Anonymous Speech Corpus”, CCURL 2014: Collaboration and Computing for Under-Resourced Languages in the Linked Open Data Era, 2014, pp. 90-94.
  • [24] A. Kabir, M. Giurgiu, ”A romanian corpus for speech perception and automatic speech recognition”. Proceedings of The 10th International Conference on Signal Processing, Robotics and Automation, 2011, pp. 323-327.
  • [25] C. Manolache, A.-L. Georgescu, V. Barbu Mititelu, H. Cucu and C. Burileanu, "Improved Text Normalization and Language Models for SpeeD's Automatic Speech Recognition System", Proceedings of the 15th International Conference "Linguistic Resources and Tools for Natural Language Processing", online, 14-15 December 2020, Editura Universității A. I. Cuza, Iași, 2020, pp. 115-128.