
Composing Finite State Transducers on GPUs

by   Arturo Argueta, et al.

Weighted finite-state transducers (FSTs) are frequently used in language processing to handle tasks such as part-of-speech tagging and speech recognition. Previous work has used multiple CPU cores to accelerate finite-state algorithms, but limited attention has been given to parallel graphics processing unit (GPU) implementations. In this paper, we introduce what is, to our knowledge, the first GPU implementation of the FST composition operation, and we discuss the optimizations used to achieve the best performance on this architecture. We show that our approach obtains speedups of up to 6x over our serial implementation and 4.5x over OpenFST.
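To make the operation being accelerated concrete, the following is a minimal serial sketch of weighted FST composition under the tropical semiring (path weights add), where an arc of the composed machine is created whenever the output label of an arc in the first transducer matches the input label of an arc in the second. This is an illustrative product construction only, not the paper's GPU algorithm or its data layout; the dictionary-based FST encoding and all names here are assumptions for the example.

```python
# Illustrative serial composition of two weighted FSTs (tropical semiring:
# arc weights add along a path). Not the paper's GPU implementation; the
# dict-based FST representation below is invented for this sketch.

def compose(fst1, fst2):
    """Compose two weighted FSTs.

    Each FST maps state -> list of arcs (in_label, out_label, weight, next_state).
    Returns the composed machine's arcs keyed by pair states (q1, q2).
    """
    start = (0, 0)               # pair of the two start states
    arcs = {}                    # composed transitions
    frontier = [start]
    seen = {start}
    while frontier:
        q1, q2 = frontier.pop()
        for (i1, o1, w1, n1) in fst1.get(q1, []):
            for (i2, o2, w2, n2) in fst2.get(q2, []):
                if o1 == i2:     # output of T1 must match input of T2
                    dst = (n1, n2)
                    arcs.setdefault((q1, q2), []).append((i1, o2, w1 + w2, dst))
                    if dst not in seen:
                        seen.add(dst)
                        frontier.append(dst)
    return arcs

# Tiny example: T1 maps a->b with weight 0.5, T2 maps b->c with weight 0.25;
# their composition maps a->c with weight 0.75.
t1 = {0: [("a", "b", 0.5, 1)]}
t2 = {0: [("b", "c", 0.25, 1)]}
print(compose(t1, t2))  # {(0, 0): [('a', 'c', 0.75, (1, 1))]}
```

The nested loop over arc pairs at each reachable state pair is exactly the work a GPU version must distribute across threads, which is why arc-matching and frontier expansion dominate the optimization effort.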



