Composing Finite State Transducers on GPUs

05/16/2018
by Arturo Argueta, et al.

Weighted finite-state transducers (FSTs) are frequently used in language processing to handle tasks such as part-of-speech tagging and speech recognition. There has been previous work using multiple CPU cores to accelerate finite-state algorithms, but limited attention has been given to parallel graphics processing unit (GPU) implementations. In this paper, we introduce the first (to our knowledge) GPU implementation of the FST composition operation, and we also discuss the optimizations used to achieve the best performance on this architecture. We show that our approach obtains speedups of up to 6x over our serial implementation and 4.5x over OpenFST.
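For readers unfamiliar with the operation being parallelized, the sketch below shows the standard epsilon-free weighted composition construction in serial Python, over the tropical semiring (min, +) common in speech and language processing. The data representation and all names here are illustrative assumptions, not the authors' GPU code or OpenFST's API; it is meant only to make the composition operation concrete.

from collections import defaultdict

# Illustrative sketch only: epsilon-free composition of two weighted FSTs
# over the tropical semiring, where path weights combine by addition.
# This is the serial construction that a GPU implementation parallelizes.
#
# A transducer here is a triple (start, arcs, finals):
#   arcs:   state -> list of (in_label, out_label, weight, next_state)
#   finals: state -> final weight

def compose(t1, t2):
    start1, arcs1, finals1 = t1
    start2, arcs2, finals2 = t2
    start = (start1, start2)
    arcs = defaultdict(list)
    finals = {}
    stack, seen = [start], {start}
    while stack:
        q = stack.pop()
        q1, q2 = q
        # A pair state is final iff both components are final; the
        # tropical product of the two final weights is their sum.
        if q1 in finals1 and q2 in finals2:
            finals[q] = finals1[q1] + finals2[q2]
        # Index t2's arcs by input label so matching pairs of arcs are
        # found without scanning the full cross product.
        by_label = defaultdict(list)
        for b, c, w2, r2 in arcs2.get(q2, []):
            by_label[b].append((c, w2, r2))
        # An arc (a, c, w1 + w2) exists whenever t1 reads a and writes b,
        # and t2 reads that same b and writes c.
        for a, b, w1, r1 in arcs1.get(q1, []):
            for c, w2, r2 in by_label.get(b, []):
                r = (r1, r2)
                arcs[q].append((a, c, w1 + w2, r))
                if r not in seen:
                    seen.add(r)
                    stack.append(r)
    return start, dict(arcs), finals

# Tiny demo: t1 rewrites "a" to "b" (weight 1), t2 rewrites "b" to "c"
# (weight 2), so their composition rewrites "a" to "c" with weight 3.
t1 = (0, {0: [("a", "b", 1.0, 1)]}, {1: 0.0})
t2 = (0, {0: [("b", "c", 2.0, 1)]}, {1: 0.0})
print(compose(t1, t2))

The nested loop over pairs of matching arcs is exactly the work that is independent across states, which is why the operation is a natural candidate for the GPU parallelization the paper describes.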

Related research

01/11/2017 · Decoding with Finite-State Transducers on GPUs
Weighted finite automata and transducers (including hidden Markov models...

10/06/2021 · Parallel Composition of Weighted Finite-State Transducers
Finite-state transducers (FSTs) are frequently used in speech recognitio...

04/21/2018 · Parallel Implementations of Cellular Automata for Traffic Models
The Biham-Middleton-Levine (BML) traffic model is a simple two-dimension...

03/02/2018 · Fusion of multispectral satellite imagery using a cluster of graphics processing unit
The paper presents a parallel implementation of existing image fusion me...

03/27/2019 · Efficient LBM on GPUs for dense moving objects using immersed boundary condition
There exists an increasing interest for using immersed boundary methods ...

01/17/2022 · A tool set for random number generation on GPUs in R
We introduce the R package clrng which leverages the gpuR package and is...