Convolutional neural networks (CNNs) LeCun and Bengio (1998) and long short-term memory networks (LSTMs) Hochreiter and Schmidhuber (1997) and their variants Graves et al. (2005); Graves and Schmidhuber (2009) have recently achieved impressive results Li and Wu (2014); de Buy Wenniger et al. (2019); Doetsch et al. (2014). This exceptional performance comes, however, at the cost of having an ensemble of, e.g., 118 recognizers Stuner et al. (2016). The high cost of training and operation raises the question whether less costly methods can be applied to boost the performance of handwriting recognizers.
A possible direction consists of the use of linguistic statistics Puigcerver (2018). A recent and effective method for using language information is a dual-state word-beam search Scheidl et al. (2018) for decoding the connectionist temporal classification (CTC, Graves et al. (2006)) layer of neural networks.
Although the presence of dictionaries and corpora is beneficial, historical documents present a challenge. For instance, the historic spelling of a word often differs from the contemporary spelling, strict orthography is frequently absent, and misspellings are common Hauser and Schulz (2007). Figure 1 shows a word image from one of the datasets used in this paper; this historical word has an extra character compared to the current spelling. Moreover, for rare languages, e.g., Aymara Emlen (2017), a complete lexicon does not exist yet, and corpora are of very limited size. Handwritten-text recognition (HTR) is precisely what is required to obtain such digital linguistic resources for those languages.
Another possible direction to improve performance would concern heavy optimization of the network architecture and the training (hyper)parameters. The state-of-the-art approaches can be sensitive to the choice of hyperparameter values. As an example, it is reported that increasing the depth of a neural network consisting of convolutional and LSTM layers from 8 hidden layers to 10 is advantageous, whereas further enlarging it to 12 hidden layers yielded unsatisfactory results Voigtlaender et al. (2016). From the perspective of e-Science services for handwriting recognition dealing with hundreds of different books, it is not feasible to tailor the recognizer models to each book based on prior knowledge, using human handcrafting of neural networks. Preferably, an ensemble consisting of a limited number of automatically generated architectures would be practical.
In this paper, we explore the possibilities of exploiting the success of current CNN/LSTM approaches, using several methods at the level of linguistics and labeling systematics, as well as an ensemble method. Ideally, the approach should be robust, require a minimum of human intervention with a limited set of hyperparameter settings (architectures), and need minimal linguistic resources. For evaluation, we use a standard public benchmark dataset, RIMES Grosicki et al. (2006), and a historical handwritten dataset, KdK Van der Zant et al. (2008); Van Oosten and Schomaker (2014). The two datasets differ in time period and language.
An essential consideration is that it should be possible to add our suggested algorithm to the Monk system Van der Zant et al. (2008, 2009); He et al. (2016); Schomaker (2019). Monk is a live web-based search engine for word and character recognition, retrieval, and annotation. It contains diverse digitized historical and contemporary handwritten manuscripts in many languages: Chinese, Thai, Arabic, Dutch, English, and Persian. Complicated machine-printed documents, such as German Fraktur, Egyptian hieroglyphs, and other historical scripts, are also available in the Monk system.
The rest of this paper is structured as follows. In Section 2, we briefly survey related work in terms of recent state-of-the-art methods on RIMES, the convolutional recurrent neural network, word search in character-hypothesis grids, ensemble systems, and the requirements of the proposed method. In Section 3, we present our system. The experimental evaluation and discussion are given in Sections 4 and 5. Finally, conclusions are drawn in Section 6.
2 Related Work
In this section, we first briefly survey recent studies on isolated words of the RIMES dataset. Then, the convolutional recurrent neural network is briefly detailed. Afterwards, we survey part of the long history of word search in character-hypothesis grids and linguistic post-processing. As an example of these approaches, we explain the dual-state word-beam search for CTC decoding, which is one of the principles underlying our work. Finally, we review research on ensemble-system approaches.
2.1 On RIMES
One of the datasets used in this paper is RIMES Grosicki et al. (2006). In this section, the compared methods are briefly explained. In Poznanski and Wolf (2016), a 12-layer convolutional neural network (CNN) is used to process fixed-size word images and recognize a Pyramidal Histogram of Characters (PHOC) representation Almazán et al. (2014), using multiple parallel fully connected layers. Afterwards, Canonical Correlation Analysis (CCA) Hardoon et al. (2004) is applied as the final stage of the word-recognition task, using a predefined lexicon.
In Stuner et al. (2016), two architectures are used to generate more than a thousand networks to construct an ensemble. Each network is either a two-layer BiLSTM or a three-layer multidimensional LSTM (MDLSTM) neural network Graves and Schmidhuber (2009). The BiLSTMs are fed with HOG features Dalal and Triggs (2005), while the input of the MDLSTM is the raw image. The best-path algorithm Graves (2012) is applied for CTC decoding. This approach uses a lexicon-verification method. After training 2,100 networks and evaluating them on the validation set of the RIMES dataset, the lowest-performing networks are removed, which results in 118 networks. It is reported that the pruned ensemble of 118 networks has a 0.16 pp drop in performance compared to the ensemble of 2,100 networks on the RIMES dataset. On another dataset, IAM U.-V. Marti (2002), the size of the ensemble is different (n=1,039). Because of the simplicity of the system and the high number of recognizers, the complexity is medium to high.
In Menasri et al. (2012), an ensemble of eight recognizers is used for handwriting recognition, including four variants of an MDLSTM, a grapheme-based MLP-HMM, and two variants of a context-dependent sliding-window GMM-HMM. The ensemble combines the recognizers with a simple sum rule.
In Sueiras et al. (2018), a framework for isolated handwritten-word recognition is given, consisting of a deep CNN, LSTM layers as encoder/decoder, and an attention mechanism. Results are reported with and without a dictionary. For pre-processing, methods for baseline correction, normalization, and deslanting are applied. After pre-processing, an input image is converted to a sequence of image patches by using a horizontal sliding window. Then, a deep CNN is used for feature extraction. Afterwards, an LSTM is applied to extract the horizontal relationships existing among the sequence of overlapped horizontal patches of the input image. Then, a decoder component is used: a combination of an LSTM and an attention mechanism. To find the best performance, experiments are done to determine the optimal LSTM cell size and patch size. This method does not achieve very high performance.
In Ptucha et al. (2019), a whole-word CNN is applied to recognize known words, defined as the 500 most frequent words in the training set of the RIMES dataset, when they have a minimum confidence level of 70%. Otherwise, a Block-Length CNN predicts the number of symbols in the given image block. Then, a fully convolutional neural network predicts the characters. Finally, the result is enhanced by a vocabulary-matching method. This varied-CNN method has a problem with separating common and non-common words. The separation of the lexicon into a set of common and a set of uncommon words may be artificial, in view of the usual continuously decaying Zipf distribution G. K. Zipf (1935). In Dutta et al. (2018), deslanting and slope normalization are performed on the images, using the approach presented in Vinciarelli and Luettin (2001). A pre-trained CNN-RNN is used. During training and testing on benchmark datasets, three types of augmentation are used: affine transformation, elastic distortion, and multi-scale transformation. Then, the best result among their seven approaches is reported. Earlier, image augmentation during training and testing was used in Okafor et al. (2017, 2018) for animal recognition.
The successful methods applied to the RIMES dataset are unfortunately quite complicated. Most of them use a combination of CNNs and LSTMs. Therefore, we treat convolutional recurrent neural networks in the next section.
2.2 Convolutional Recurrent Neural Network
The convolutional recurrent neural network is an end-to-end trainable system presented in Shi et al. (2017). It outperforms the plain CNN in four aspects: 1) it does not need precise annotation for each character and can handle a string of characters for the word image; 2) it works without a strict preprocessing phase, hand-crafted features, or component localization/segmentation; 3) it benefits from the state-preservation capability of a recurrent neural network (RNN) in order to deal with character sequences; 4) it does not depend on the width of the word image; only height normalization is needed.
The model is composed of seven convolutional layers followed by two layers of BiLSTM units containing 256 hidden cells and a transcription layer. Although the model is made up of two distinct neural-network varieties, it can be trained integrally using one loss function.
Figure 2 shows the pipeline of the convolutional recurrent neural network Shi et al. (2017). The input of the model is a height-normalized, gray-scale word image. Feature extraction is performed by the convolutional layers directly on the input image. The output of the CNN is a sequence of feature frames, which acts as the input of the recurrent neural network; the latter provides raw character hypotheses. Finally, the transcription layer translates the resulting prediction into a label sequence.
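The map-to-sequence step between the CNN and the RNN can be illustrated with a short NumPy sketch (toy shapes and random values, not the actual network): once the convolutional stack has reduced the feature-map height to 1, each column of the map is one feature frame for the BiLSTM.

```python
import numpy as np

# Toy feature map after the convolutional stack: (channels, height, width).
# The height has been reduced to 1, so each column is one feature vector.
feature_map = np.random.rand(512, 1, 32)

# Map-to-sequence: drop the height axis and treat the width axis as time.
sequence = feature_map.squeeze(1).T   # (time steps, channels)

print(sequence.shape)                 # (32, 512)
```

Each of the 32 rows is then consumed by the recurrent layers as one time step.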
2.3 Word search and linguistic post-processing
Character-oriented approaches create a data structure representing the character hypotheses, their position in the text, and the confidence value. For example, an LSTM produces a final map with character-hypothesis activations, ordered from left-to-right or right-to-left with some stride (step size). Other approaches generate a grid or graph of character hypotheses. The final processing step involves finding the most likely character path, given a dictionary and potentially other linguistic resources (statistics). For the LSTM, a well-known first step toward this is connectionist temporal classification (CTC) Graves et al. (2006).
Given a dictionary containing possible input words, a simple method can be used for error detection and correction of a word recognizer. If the word hypothesis exists in the dictionary, the result is accepted as the label of the input image. Otherwise, if a similar word exists in the dictionary, it can be accepted as the final label candidate by using the Levenshtein distance and its variants Levenshtein (1966); Wagner and Fischer (1974); Seni et al. (1996); Oommen and Loke (1997), or n-gram distances Angell et al. (1983), as common measures of dissimilarity. If required, it is possible to use suitable linguistic statistics to further refine the ranking Youssef Bassil (2012); Asonov (2010); Chantal Amrhein (2018).
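This accept-or-correct scheme can be sketched in a few lines of Python (the dictionary and hypothesis below are toy examples, not from the datasets used in this paper):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (Levenshtein, 1966)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def correct(hypothesis: str, dictionary: list) -> str:
    """Accept the hypothesis if it is in the dictionary; otherwise
    return the dictionary word with the smallest edit distance."""
    if hypothesis in dictionary:
        return hypothesis
    return min(dictionary, key=lambda w: levenshtein(hypothesis, w))

print(correct("monsleur", ["monsieur", "madame", "merci"]))  # monsieur
```

A refinement, as noted above, would re-rank the near-matches with linguistic statistics rather than taking the single nearest word.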
A data structure for contextual word recognition is presented in Wells et al. (1990) for quick dictionary look-up using limited memory.
An approach providing contextual information by using a dictionary to predict the most probable label in a graph search is presented in Favata (2001), and it is robust to dictionary errors. In this approach, for every lexical word, the most probable path and the related confidence are calculated to predict the dictionary ranking.
Letter prediction was among the earliest tasks studied in this area. Based on this idea, a linguistic post-processing model for character recognizers, using a trainable variable-memory-length Markov model (VLMM), is introduced in Guyon and Pereira (1995). The next character is predicted from a variable-length window of previous characters.
For Japanese mail addresses, a character-recognition method uses a dictionary stored in a trie. The dictionary matching is controlled by a beam-search approach. The dictionary includes all the address names and principal postal offices in Japan. After pre-processing and segmentation, character hypotheses are produced by combining successive segments. Then, a version of a nearest-neighbor classifier that exploits the trie structure is used for fast prediction of the final label. In Seni et al. (1996), an on-line handwriting-recognition system for cursive words uses simple character features to reduce a given large dictionary. The outputs of a Time-Delay Neural Network (TDNN) are converted into a character sequence. The result of the system is a matched word in the reduced dictionary, using a variant of the Damerau-Levenshtein distance. For on-line handwriting recognition, a search technique is proposed in Seni and Anastasakos (2000): a post-processing phase of a recognition system that calculates posterior probabilities of characters based on Viterbi decoding.
In Seni and Seybold (1999), a version of a beam- and Viterbi-search recognizer is presented. This search method enables the use of discrete probabilities generated by many stroke-based character-recognition systems. Powalka et al. (1993) introduce a technique combining word segmentation and character recognition with lexical search to deal with segmentation ambiguities. A depth-first trace of a dictionary tree for text recognition, using a recursive procedure, is presented in Ford and Higgins (1990). For on-line handwriting recognition, in Bramall and Higgins (1995), a given dictionary is reduced by applying simple feature extraction; afterwards, the reduced dictionary is refined by AI techniques. In Côté et al. (1998), contextual knowledge is used for isolated cursive handwritten-word recognition. A dictionary-tree representation with an efficient pruning method, as a fast search method over a large dictionary for an on-line handwriting-recognition system, is proposed in Manke et al. (1996).
Of all these approaches, the dual-state word-beam search for CTC decoding currently enjoys increased interest Scheidl et al. (2018), and it will be described next.
2.3.1 A dual-state word-beam search for CTC decoding
The dual-state word-beam search for CTC decoding, Scheidl et al. (2018), is based on Vanilla Beam Search decoding (VBS) Hwang and Sung (2016) of the CTC layer. The output of the RNN is a matrix, which is the input of the dual-state word-beam search method. In the dual-state word-beam search, a prefix tree is built from the ground-truth labels of the training set. The method distinguishes two states: the word state and the non-word state (Figure 3). The next character of the current beam is either a word character or a non-word character, and it determines the subsequent state of the beam. The sets of word characters and non-word characters are predefined.
The temporal evolution of a beam depends on its state. A beam in the non-word state can be extended by a non-word character, in which case it stays in the non-word state. An entering word character brings the beam to the word state; such a word character is the beginning of a word. For a beam in the word state, the feasible following characters are given by the prefix tree. This procedure repeats iteratively until a complete word is reached.
Scoring can be done in four ways:
Words: A dictionary is used without employing a language model (LM).
N-grams as LM: as a beam transitions from the word state to the non-word state, the LM scores the beam labeling.
Ngram+forecast: when a word character extends a beam, the prefix tree yields all possible words; the LM scores all of the relevant beam extensions.
Ngram+forecast+sample: to constrain the following potential words, some samples are first randomly selected; the LM then scores them. The total score has to be corrected to account for the random-sampling step.
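The prefix tree that constrains beams in the word state can be sketched as a minimal trie (an illustration; the class and method names are ours, not from Scheidl et al.):

```python
class PrefixTree:
    """Minimal prefix tree over the training-set words; in the word
    state, a beam may only be extended by characters the tree allows."""

    def __init__(self, words):
        self.root = {}
        for w in words:
            node = self.root
            for ch in w:
                node = node.setdefault(ch, {})
            node["$"] = True  # end-of-word marker

    def next_chars(self, prefix):
        """Feasible next characters for a beam ending in `prefix`."""
        node = self.root
        for ch in prefix:
            if ch not in node:
                return set()
            node = node[ch]
        return {c for c in node if c != "$"}

tree = PrefixTree(["the", "then", "this"])
print(sorted(tree.next_chars("th")))  # ['e', 'i']
```

During decoding, a beam in the word state whose current word prefix is "th" would only be extended by 'e' or 'i'; all other word characters are pruned.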
The pseudo-code of the dual-state word-beam search is illustrated in Algorithm 1. The list of symbols, following the notation of Scheidl et al. (2018), is as follows.
mat: the sequence of RNN output activations over time.
B: the set of beams at the present time step.
BW: the beam width.
P_b(b, t): the probability of finishing the paths of beam b with a blank.
P_nb(b, t): the probability of not finishing the paths of beam b with a blank.
P_txt(b): the probability allocated by the language model.
T: the final iteration of the algorithm.
∅: the empty beam.
b: a beam; c: a character.
b(t): the beam at time step t.
last(b): the last character of the beam b.
W(b): the number of words in the beam b.
bestBeams(B, BW): the BW best beams, based on the highest total probability.
P(c, b, t): the probability of seeing character c as an extension of beam b at time t.
In RNNs such as LSTMs, the exact alignment of the observed word image with the ground-truth label is not known. Hence, a probability distribution over the characters at each time step is used for prediction, which makes it all the more important to use an adequate coding scheme.
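For reference, the simplest way to turn this per-time-step distribution into a string is the dictionary-free best-path decoder (Graves, 2012) mentioned earlier: take the arg-max character at each step, collapse repeats, and remove blanks. A minimal NumPy sketch (toy matrix, not actual network output):

```python
import numpy as np

def best_path_decode(mat, alphabet, blank=0):
    """Best-path CTC decoding: arg-max per time step, collapse
    repeated indices, then drop the blank symbol."""
    best = np.argmax(mat, axis=1)
    out, prev = [], None
    for idx in best:
        if idx != prev and idx != blank:
            out.append(alphabet[idx - 1])  # index 0 is reserved for blank
        prev = idx
    return "".join(out)

# Toy RNN output: 5 time steps over {blank, 'a', 'b'}.
mat = np.array([[0.1, 0.8, 0.1],   # 'a'
                [0.1, 0.8, 0.1],   # 'a' (repeat, collapsed)
                [0.8, 0.1, 0.1],   # blank
                [0.1, 0.1, 0.8],   # 'b'
                [0.8, 0.1, 0.1]])  # blank
print(best_path_decode(mat, "ab"))  # ab
```

The dual-state word-beam search replaces this greedy path with a constrained beam search, but operates on the same matrix.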
However, even after the CTC stage, additional processing steps from the above-mentioned repertoire are needed to boost classification performance.
Unfortunately, although using linguistic resources is clearly advantageous, there are cases where this is not, or only partly, possible:
Not all problems enjoy the presence of an abundance of digitally encoded contemporary text content.
In historical collections there may be virtually no resources, not even a lexicon.
Many collections, e.g., administrative ones, have a dedicated jargon, abbreviations, and non-standard phrasing. Even diaries may contain idiosyncratic neologisms.
Many collections use outdated geographical and scientific terminology, such as the historical document collection belonging to the Natuurkundige Commissie's scientific exploration of the Indonesian Archipelago between 1820 and 1850 Weber et al. (2017). This heterogeneous handwritten manuscript contains 17,000 pages of field notes based on the scientists' natural observations, in German, Latin, Dutch, Malay, Greek, and French. Biological terms vary greatly over periods in history Schuh (2003).
There is, however, an additional way to improve classification performance. Impressive results using an ensemble method were presented in Stuner et al. (2016); however, the number of networks was so large (118) that the need for a less drastic approach is becoming urgent. We will therefore focus on the possibilities of a small-scale ensemble.
2.4 Ensemble system
A simple but effective method for improving the performance of an individual classifier is the ensemble method Ho (1992); Romesh Ranawana (2006); Günter and Bunke (2003, 2005); Karimi et al. (2015); Yang et al. (2015); Menasri et al. (2012); Tin Kam Ho et al. (1994); Van Erp and Schomaker (2000); Powalka et al. (1995). In Romesh Ranawana (2006), it is shown that having diverse classifiers is a key point for classifier fusion. Using ensembles for handwriting recognition with hidden Markov models as base word classifiers, Günter and Bunke (2003) compare different ensemble-creation methods (Bagging, AdaBoost, half & half bagging, random subspace, architecture variation) as well as different voting combination methods for the handwriting-recognition task. It is shown that each of the four methods increases the performance.
The impact of the dictionary size, the training-set size, and the number of recognizers in ensemble systems is studied for off-line cursive handwritten-word recognition in Günter and Bunke (2005). The ensemble methods are Bagging, AdaBoost, and the random subspace method, while the recognizers are HMMs with different configurations. It is verified that increasing the size of the training set and the number of recognizers elevates the performance of the system, while a larger dictionary pulls the performance down.
Recently, in Karimi et al. (2015), ensemble classifiers were used for Persian handwriting recognition. The authors used AdaBoost and Bagging to combine weak classifiers created from hand-crafted families of simple features.
In the deep-learning domain, Yang et al. (2015) obtained very high accuracy for Chinese handwritten-character recognition using deep convolutional neural networks and a hybrid serial-parallel ensembling strategy, which tries to find an "expert" network for each example that can classify the example with high accuracy, or, if such a network cannot be found, falls back to the majority vote over all networks.
In Menasri et al. (2012), an ensemble system is used for handwriting recognition on the RIMES Grosicki et al. (2008) dataset. The ensemble uses eight recognizers: four variants of a recurrent neural network (RNN), a grapheme-based MLP-HMM, and two variants of a context-dependent sliding window based on a GMM-HMM. For the RNN, a multi-dimensional long short-term memory neural network (MDLSTM) Graves and Schmidhuber (2009) is used.
In an ensemble system, majority voting can be used if the output of each individual recognizer is only the best hypothesis label. If the recognizers of the ensemble system output a ranked hypothesis list, the Borda count can be used Tin Kam Ho et al. (1994); Van Erp and Schomaker (2000) to determine the result. In this case, it is required that the ranked list shows a sufficient diversity of intuitive candidates, i.e., with a low edit distance from the target. Two ensemble methods for handwriting recognition are presented in Powalka et al. (1995): word-list merging and linear combination.
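The Borda count over ranked hypothesis lists can be sketched as follows (an illustrative implementation with toy data; the function name and scoring convention, n - rank points for the candidate at a given rank, are ours):

```python
from collections import defaultdict

def borda(ranked_lists):
    """Borda count: a candidate at rank r (0-based) in a list of
    length n receives n - r points; the highest total wins."""
    scores = defaultdict(int)
    for ranking in ranked_lists:
        n = len(ranking)
        for rank, word in enumerate(ranking):
            scores[word] += n - rank
    return max(scores, key=scores.get)

# Three recognizers, each returning a top-3 ranked list.
print(borda([["den", "der", "dan"],
             ["der", "den", "dun"],
             ["den", "dan", "der"]]))  # den
```

As noted above, this only works well when the ranked lists contain plausible, diverse candidates close to the target.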
The good results reported in the literature are often based on fairly complex systems with many hyperparameters. In an e-science service such as Monk, which currently holds about 530 different manuscripts, it is clear that human attendance and detailed hand-crafted selection of hyperparameters for each of those documents is impossible.
In this section, we present a limited-size ensemble system for word recognition with a minimum of human intervention. The suggested system uses an adequate label-coding scheme and a dictionary as the only resource for the language model. The system is described as follows.
3.1 The Extra-separator coding scheme
In the common coding scheme, which we call 'Plain', only the characters that are present in the word image appear in the corresponding label. In the 'Extra-separator' coding scheme, one more character is appended at the end of each label. The appended character, named the extra separator (e.g., '|'), must not exist in the alphabet of the dataset. The aim of adding the extra-separator character is to give the recognizer an extra hint concerning the end-of-word shape condition.
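A minimal sketch of the two coding schemes (the helper name is ours; the bar sign is used as the separator, as in our experiments):

```python
SEPARATOR = "|"  # must not occur in the dataset's alphabet

def encode_label(word, scheme="extra-separator"):
    """'Plain' keeps the transcription as-is; 'extra-separator'
    appends one end-of-word symbol as a shape hint."""
    return word + SEPARATOR if scheme == "extra-separator" else word

print(encode_label("Besluit"))           # Besluit|
print(encode_label("Besluit", "plain"))  # Besluit
```

The CTC output alphabet grows by one unit accordingly, in addition to the CTC blank.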
3.2 Neural Network
The neural network is a convolutional BiLSTM neural network, and it is an end-to-end trainable framework inspired by Shi et al. (2017). The main configuration of the networks is detailed in Table 1. In this section, we explain the essential components of our approach.
|Layer|Configuration|
|Transcription|dual-state word-beam search CTC decoding|
|Bidirectional-LSTM|L1: 512 hidden units|
|Bidirectional-LSTM|L2: 512 hidden units|
|Bidirectional-LSTM|L3: 512 hidden units|
|Max Pooling|W and S: 1×2|
|Convolution|K: , S: 1, P: 1|
|Max Pooling|W and S: 1×2|
|Convolution|K: , S: 1, P: 1|
|Max Pooling|W and S: 1×2|
|Convolution|K: , S: 1, P: 1|
|Max Pooling|W and S: 2×2|
|Convolution|K: , S: 1, P: 1|
|Max Pooling|W and S: 2×2|
|Convolution|K: , S: 1, P: 1|
|Input image|128×32 gray-scale image|
Table 1: Configuration of our convolutional recurrent neural network, from input image (bottom) to last output (top). 'K', 'W', 'S' and 'P' denote kernel size, window size, stride and padding.
3.2.1 Preprocessing
The preprocessing is performed in each epoch of training. It consists of: a) data augmentation through randomly stretching/squeezing the gray-scale images in the width direction, b) resizing the images to 128×32 pixels, and c) normalization. Data augmentation is performed to increase the size of the training set, and it is achieved by changing the width of an image randomly by a factor between 0.5 and 1.5. Next, both the original gray-scale images and those added through data augmentation are resized so that either the width is 128 pixels or the height is 32 pixels. After that, we pad the image with white pixels until the size is 128×32. Then we normalize the intensity of the gray-scale image. Note that our method does not need baseline alignment or precise deslanting. Please note that one of our datasets was already deslanted to 90°.
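The per-epoch preprocessing steps can be sketched as follows. This is a dependency-free NumPy illustration of the pipeline described above; the nearest-neighbour resampling and the exact padding/normalization details are our assumptions, chosen only to keep the example self-contained.

```python
import numpy as np

def preprocess(img, rng, target_w=128, target_h=32):
    """Sketch: random width stretch/squeeze (factor 0.5-1.5), resize to
    fit within 128x32, pad with white pixels, normalize intensities."""
    h, w = img.shape
    w = max(1, int(w * rng.uniform(0.5, 1.5)))      # a) width augmentation
    scale = min(target_w / w, target_h / h)          # b) fit inside 128x32
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    rows = np.arange(new_h) * h // new_h             # nearest-neighbour rows
    cols = np.arange(new_w) * img.shape[1] // new_w  # nearest-neighbour cols
    resized = img[rows][:, cols]
    canvas = np.full((target_h, target_w), 255, dtype=np.float32)  # white pad
    canvas[:new_h, :new_w] = resized
    return (canvas - canvas.mean()) / (canvas.std() + 1e-8)  # c) normalize

rng = np.random.default_rng(0)
out = preprocess(np.zeros((60, 200)), rng)
print(out.shape)  # (32, 128)
```

Because the stretch factor is redrawn every epoch, the network sees a differently proportioned version of each word image in each pass.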
3.2.2 A 5-layer CNN
The pixel-intensity values after preprocessing are fed to the first of the 5 layers of a CNN to extract feature sequences. Each layer of the CNN contains a convolution operation, normalization, the ReLU activation function Nair and Hinton (2010), and a max-pooling operation. Given the fixed main hyperparameter settings, such as the number of layers, the only variable control parameters concern the number of units in the hidden layers. A simple table of three possible sizes is used, each selected with a probability of 0.33. The sizes of the numbers of hidden units used in our experiments are shown in Table 2. The number of layers, the kernel sizes, and the optimizer are our own configuration and differ from Shi et al. (2017): for optimization, we used RMSProp Tieleman and Hinton (2012), and we used five convolutional layers instead of the seven suggested in Shi et al. (2017).
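The random selection of hidden-layer sizes can be sketched as follows. Note that the three size options shown are placeholders, since the actual option values are those listed in Table 2; the function name is also ours.

```python
import random

# Placeholder size table: the real three options are given in Table 2.
SIZE_OPTIONS = [256, 512, 1024]  # illustrative values only

def sample_architecture(n_layers=5, seed=42):
    """Pick each layer's width uniformly (p = 1/3) from the size table,
    yielding one automatically generated architecture."""
    rng = random.Random(seed)
    return [rng.choice(SIZE_OPTIONS) for _ in range(n_layers)]

arch = sample_architecture()
print(len(arch))  # 5
```

Repeating this sampling is what produces the small set of differing architectures used later in the ensemble.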
3.2.3 BiLSTM
The five convolutional layers are followed by three layers of BiLSTM. Because the last convolutional layer contains 512 hidden units, each BiLSTM layer has 512 hidden units.
3.2.4 Connectionist temporal classification (CTC)
The CTC output layer contains two more units than there are characters in the alphabet (A) of the given dataset: one for the suggested extra separator (e.g., '|'), and one for the common CTC blank, which differs from the space character. Therefore, the alphabet of the CTC output is A ∪ {separator, blank}.
The output units determine the probability of detecting the relevant label at each time step. Further, the blank unit determines the probability of observing a blank, or 'no label'. For CTC decoding, we use the dual-state word-beam search presented in Scheidl et al. (2018). This method is explained in Section 2.3.1.
3.3 The ensemble system
For an input image, the outcome of the CTC decoder is a string, a word hypothesis with its relative likelihood. The word hypotheses obtained from the five networks are sent to the voter component. Plurality voting is then applied Peleg (1978), where the alternatives are divided into subsets with identical strings. The subset with the largest number of voters is selected. In case of a tie, the subset with the highest average likelihood is the winner. If the number of subsets is equal to the number of alternatives, the alternative with the highest likelihood is the winner. The winning string is considered the final, best label of the input image. This approach was chosen after a pilot experiment using Borda-count voting, which did not give good results. This may be due to the lack of diversity in the ranked candidate lists. Therefore, the simpler approach using plurality voting with exception handling was adopted.
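The voting scheme above can be sketched as follows (illustrative Python; the function name and the (label, likelihood) input format are ours, and the toy votes are invented):

```python
from collections import defaultdict

def plurality_vote(hypotheses):
    """Plurality voting over (label, likelihood) pairs: the most
    frequent string wins; ties are broken by the higher average
    likelihood. When every alternative is unique, this reduces to
    picking the single highest-likelihood string."""
    groups = defaultdict(list)
    for label, p in hypotheses:
        groups[label].append(p)
    best = max(groups.items(),
               key=lambda kv: (len(kv[1]), sum(kv[1]) / len(kv[1])))
    return best[0]

votes = [("Besluit", 0.91), ("Besluit", 0.85),
         ("Bestuit", 0.95), ("Besluit", 0.60), ("Bestuit", 0.99)]
print(plurality_vote(votes))  # Besluit
```

Note that the count dominates the likelihood in the sort key, so a majority of lower-confidence networks still outvotes a single confident outlier.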
In this section, firstly, we describe the datasets used in the experiments. Then, we explain how our experiments were carried out. Finally, we report the numerical results.
In this paper, we used two datasets which differ in time period and language, summarized in Table 3. The first dataset is RIMES, which was used to allow comparison with the state-of-the-art methods. This database has different versions. We used the isolated words of the ICDAR 2011 version, to evaluate the methods and make a comparison with the published results possible Grosicki et al. (2006). The RIMES database is drawn from different types of handwritten manuscripts: postal mail and faxes. It contains 12,723 pages written by 1,300 volunteers using black ink on white paper. The RIMES dataset consists of 51,738 images of French handwriting for training, 7,464 images for validation, and 7,776 images for testing. The dictionary size of the training set is 4,943 words, that of the validation set is 1,612, and that of the test set is 1,692; the dictionary size of the whole dataset is 5,744 words. The comparison is performed case-insensitively, as is common for the RIMES dataset, and accents were taken into account. In the evaluation of our model on RIMES, two dictionaries were used: Concise and Large. The Concise dictionary contains all the words within the RIMES dataset, 5,744 (6K). A French dictionary called Large (50K) is used to study the effect of a larger dictionary.
The second dataset belongs to the National Archive of the Netherlands and is named KdK (Het Kabinet der Koningin, or the Dutch Queen's Office) Van der Zant et al. (2008); Van Oosten and Schomaker (2014). The manuscript was written between 1798 and 1988; the year 1903 was used here. The KdK dataset contains 172,440 word images. The number of word classes in the total dataset is 11,749 case-sensitively and 10,747 case-insensitively. Regardless of case sensitivity, there are 1 to 5,628 sample(s) in each class. The length of the word samples is 1 to 28 characters. In the case-sensitive setting, 5% of the test words do not occur in the training words and are 'out of vocabulary (OOV)'; in the case-insensitive setting, the OOV rate is 4.5%. The remaining words are referred to as 'in vocabulary (INV)'. Figure 4 shows four original samples of the KdK dataset. For evaluation, two dictionaries are used: Concise and Large. The Concise dictionary contains all the words in the KdK dataset (12K); the size of the Dutch Large dictionary is 384K BV (2018, accessed October 17, 2017).
4.2 Quantitative results
In this section, we evaluate our model on the RIMES and the KdK datasets in terms of coding scheme (Plain vs. Extra separator) and ensemble vs. single network. Moreover, for the RIMES dataset, the results of our model are compared with the state-of-the-art methods suggested in Stuner et al. (2016, 2017); Poznanski and Wolf (2016); Menasri et al. (2012); Ptucha et al. (2019); Sueiras et al. (2018). In Dutta et al. (2018), very good results are reported; however, their system was trained with a large amount of synthetic data. Therefore, we do not find it comparable with our approach, which is exclusively based on the given dataset and its augmentations.
For the Extra-separator coding scheme, a character which is absent from the given dataset was found automatically as the extra-separator character: the bar sign ('|'); hence, the bar sign is appended to the end of each image label (Figure 4). As a result, the size of the output of the CTC layer increases. The RIMES dataset contains 80 unique characters, meaning that the size of the output layer of the CTC layer is 82 (80 unique characters, one extra separator, and one common blank). The KdK dataset contains 52 unique characters; therefore, the size of the output layer of the CTC layer is 54 (52 unique characters, one extra separator, and one common blank). We compare the result of this addition to the Plain coding scheme. Two CTC decoder methods are used: dictionary-free (best path) and with dictionary (the dual-state word-beam search Scheidl et al. (2018)). For the dual-state word-beam search, two dictionaries are used for each dataset: Concise and Large.
Table 4 shows the effect of the two coding schemes, single recognizers, and ensemble voting on the RIMES dataset, reporting word accuracy (%). For each of the two coding schemes (Plain and Extra separator), the five architectures were trained, which resulted in 10 trained networks. The networks were then evaluated using the best-path CTC decoder and the dual-state word-beam search CTC decoder, applying the Concise (6K) and the Large (50K) dictionaries. The result of each evaluation and the corresponding average standard deviation (avg sd) are reported. In the bottom row of Table 4, the voting-based result of the ensemble of the five networks is presented.
Best path vs Dual-state word-beam search: the results confirm that using a decoder with a dictionary considerably improves the performance (95-97%), as expected (t-test, significant). The dictionary-free Best-path CTC decoder yields a lower performance, at 88-89%. Moreover, when the dual-state word-beam search CTC decoder is used, adding an extra-separator character enhances the model.
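For reference, dictionary-free Best-path decoding reduces to a greedy argmax per time step, followed by collapsing repeated labels and removing blanks. A minimal sketch (names and index conventions are ours):

```python
import numpy as np

def best_path_decode(probs, alphabet, blank=0):
    """Greedy Best-path CTC decoding.
    probs: (T, C) matrix of per-timestep class probabilities;
    class 0 is reserved for the CTC blank."""
    best = np.argmax(probs, axis=1)           # greedy choice per frame
    decoded = []
    prev = None
    for k in best:
        if k != blank and k != prev:          # collapse repeats, skip blanks
            decoded.append(alphabet[k - 1])
        prev = k
    return "".join(decoded)
```

A blank between two identical argmax frames keeps a doubled letter: frames [a, blank, a] decode to "aa", while [a, a] collapse to "a".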
Plain vs Extra separator: for the Best-path CTC decoder, both Plain and Extra separator have an average of 84.5% (t-test, N.S.); therefore, the extra separator has no effect. However, for the dual-state word-beam search CTC decoder using the Concise dictionary, Plain has an average of 94.3% and Extra separator an average of 95.2% (t-test, significant); hence, the extra separator is effective. For the dual-state word-beam search CTC decoder using the Large dictionary, Plain has an average of 92.9% and Extra separator an average of 94.1% (t-test, significant); therefore, the extra separator is also effective in the case of a large dictionary.
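A comparison like the one above can be sketched as a paired t test over per-architecture accuracies. The exact test variant used in the paper is not restated here, and the accuracy values below are hypothetical:

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t statistic over matched accuracy pairs."""
    d = [x - y for x, y in zip(a, b)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Hypothetical per-architecture word accuracies (%) for the two schemes:
plain = [94.1, 94.5, 94.2, 94.4, 94.3]
extra = [95.0, 95.3, 95.1, 95.4, 95.2]

t = paired_t(plain, extra)
# |t| above the two-sided critical value 2.776 (df = 4, alpha = 0.05)
# indicates a significant difference between the coding schemes.
significant = abs(t) > 2.776
```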
Single network vs Ensemble: ensemble voting increases the performance, and its effect is greater on weaker recognizers (4 pp increase in performance for the dictionary-free CTC decoder using the Plain/Extra-separator coding scheme, final row vs average and individual). An ensemble of five recognizers, using the CTC decoder with the Concise dictionary combined with the Extra-separator coding scheme, results in the highest performance (96.6%, column 6, bottom).
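Ensemble voting over word hypotheses can be sketched as plurality voting; the tie-breaking rule shown here (fall back to the earliest-listed network's answer) is an illustrative assumption, not necessarily the paper's tie solution:

```python
from collections import Counter

def plurality_vote(hypotheses):
    """hypotheses: word strings predicted by each ensemble member,
    in a fixed network order; ties fall back to the earliest voter."""
    counts = Counter(hypotheses)
    top = max(counts.values())
    winners = {w for w, c in counts.items() if c == top}
    for word in hypotheses:           # deterministic tie-break by network order
        if word in winners:
            return word
```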
To study the effect of the number of networks in the ensemble on the final accuracy, the results of randomly selected ensembles of 1, 3, 5, 10, and 15 networks are shown in Figure 5 for the RIMES dataset. The coding scheme is Extra separator, and the CTC decoder is the dual-state word-beam search using the Concise dictionary. The networks in the ensemble differ only in the random initialization and in the number of units in the layers, also randomly selected from the set in 1 through 4. The maximum accuracy is obtained by the ensemble of 15 networks, 96.72%, which is just 0.09 pp more than using 10 networks.
Table 5 compares our method on the RIMES dataset with Stuner et al. (2017, 2016); Poznanski and Wolf (2016); Menasri et al. (2012); Sueiras et al. (2018); Ptucha et al. (2019) in terms of a number of characteristics: number of recognizers, homogeneity of the algorithm, word accuracy (%), and the complexity of the approach (not to be confused with computational complexity), e.g., a deep-learning method without extra complicated modules.
For the KdK dataset, the results are as follows. The samples of the KdK dataset for our experiment were binarized and then sheared in the anticlockwise direction to compensate for the slant angle of this writing style, which is approximately 45 degrees. Afterwards, the white borders of the images were removed horizontally and vertically, up to the position where the first black pixel is observed. The deslanted, white-removed images are shown in Figure 6. To derive a more accurate estimation of the performance of our model, we ran 5-fold cross-validation. Each of the five architectures is trained, either using the Plain coding scheme or using the Extra separator, resulting in 50 trained networks. Then, each network is tested three times: using the dictionary-free Best-path CTC decoder, and using the dual-state word-beam search CTC decoder applying the Concise (12K) and the Large (384K) dictionaries.
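The preprocessing steps above can be sketched in NumPy; this is an illustrative version under stated assumptions (binary images with 0 = ink and 1 = background; function and parameter names are ours), not the authors' implementation:

```python
import numpy as np

def deslant(img, slant_deg=45.0):
    """Horizontal shear of a binary word image to compensate for slant."""
    h, w = img.shape
    shift = np.tan(np.radians(slant_deg))
    pad = int(np.ceil(shift * h))
    out = np.ones((h, w + pad), dtype=img.dtype)   # white canvas
    for y in range(h):
        dx = int(round(shift * (h - 1 - y)))       # rows nearer the bottom shift less
        out[y, dx:dx + w] = img[y]
    return out

def crop_white(img):
    """Remove all-white rows and columns surrounding the ink."""
    ys, xs = np.where(img == 0)
    if len(ys) == 0:
        return img
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```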
Table 6 shows the average (avg) and standard deviation (sd) of the word accuracy (%) of the five architectures using 5-fold cross-validation, varying per architecture over the following parameters: dictionary (none, Concise, Large) and coding scheme (Plain, Extra separator). Each row is derived from 30 network evaluations. In other words, each row is the result of one architecture, regardless of the CTC decoding method, dictionary, and coding scheme used. A slightly lower overall performance is expected, as the Best-path CTC decoder pulls the average down. A similar result is obtained for each coding scheme, regardless of the CTC decoding method, dictionary, and architecture used. The Extra separator has a higher performance, 94.5%, which is 0.4 pp higher than the Plain coding scheme.
Table 7 shows the average (avg) and standard deviation (sd) of the word accuracy (%) of using a dictionary on 5-fold cross-validation, varying per dictionary over the following parameters: architecture (A to ) and coding scheme (Plain, Extra separator). Each row is derived from 50 network evaluations.
Figure 7 shows the behavior of a single network A, using the Extra-separator coding scheme and the dual-state word-beam search CTC decoder, for different word lengths and for the OOV and INV conditions in the KdK dataset. The blue and red dots represent the accuracy on OOV and INV words, respectively.
The continuous green and black lines in Figure 7 indicate the word-length occurrence of the train and test sets of the KdK dataset in one round of the 5-fold cross-validation. The single network achieves a high, promising accuracy on INV words with a length of up to 17 characters. For longer words the performance becomes erratic. The single network does not perform satisfactorily on short OOV words of 1 to 4 characters. The performance on OOV words of 5 to 15 characters is highly adequate. For OOV words whose length is between 16 and 20 characters, the performance is variable. Surprisingly, for OOV samples longer than 21 characters, the model has a high performance.
Figure 8 shows the word accuracy achieved by network A in one round of 5-fold cross-validation on the KdK dataset. On the horizontal axis, words are sorted in order of increasing relative log frequency in the test set. The blue circles indicate INV words. The dark red circle marks the average accuracy and the log occurrence of OOV words. Note the different 'threads' in the curve, revealing groups of easy and difficult (slow-starting) classes. In a lifelong machine-learning setting, the horizontal axis corresponds to time, starting with just a few examples on the left. The average of the performance on OOV samples is high.
Table 8 compares the effect of the two coding schemes (Plain and Extra separator) and of the CTC decoder on the ensemble, for the five rounds of cross-validation on the KdK dataset.
Best path vs Dual-state word-beam search: the dictionary-free condition already results in more than 93% accuracy. Using a decoder with a dictionary boosts the performance (t-test, significant). Adding an extra separator enhances the model when a CTC decoder with a dictionary is used.
Plain vs Extra separator: for the Best-path CTC decoder, Plain has an average of 90.6% and Extra separator an average of 90.7% (t-test, N.S.); therefore, the extra separator has no effect. For the dual-state word-beam search CTC decoder using the Concise dictionary (12K), Plain has an average of 96.3% and Extra separator an average of 96.8% (t-test, significant); therefore, the extra separator is effective. For the dual-state word-beam search CTC decoder using the Large dictionary (384K), Plain has an average of 95.5% and Extra separator an average of 96.1% (t-test, significant); therefore, the extra separator is again effective.
Single network vs Ensemble: ensemble voting increases the performance, and its effect is greater on weaker recognizers (3 pp increase in performance for the dictionary-free CTC decoder for Plain/Extra separator). An ensemble of five recognizers using the CTC decoder with the Concise dictionary, combined with the Extra-separator coding scheme, results in the highest performance (97.4%).
Figure 9 shows a comparison of the effect of the two coding schemes and dictionary application on a single architecture and on ensemble voting, on the RIMES and the KdK datasets, showing the weighted average. Table 9 shows the average word accuracy (%) on the RIMES and KdK datasets, using the Concise dictionary and the Extra-separator coding scheme.
The results indicate that it is possible to achieve a high word accuracy (%) in comparison to the state of the art with a limited-size ensemble, a homogeneous algorithmic approach and low complexity Stuner et al. (2016, 2017); Poznanski and Wolf (2016); Menasri et al. (2012); Sueiras et al. (2018); Ptucha et al. (2019) (cf. Table 5). In those studies, numerous networks (up to 118 or even 2100 network instances) are required in the ensemble, whereas our method uses only five networks, yielding comparable or better results. In the proposed method, feature descriptors such as the histogram of oriented gradients (HOG Dalal and Triggs (2005)) are not used; the process starts with a pixel image and is trained end to end.
The results also indicate that the average performances of the two coding schemes (Plain and Extra separator) differ significantly if the dual-state word-beam search is used for CTC decoding. In other words, the extra-separator character, '|', tagging the end of the word, boosts the result of the dual-state word-beam search CTC decoding. This increase in performance occurs despite the slight increase in model size caused by adding the extra-separator character. However, the effect on the result of CTC best-path decoding, i.e., a non-dictionary method, is limited. On the other hand, using the decoder with a dictionary boosts the performance. Finally, ensemble voting clearly improves the word accuracy (%); its effect is stronger on weaker recognizers.
It should be noted that the reported results are based on realistic images with many word-segmentation problems, and can therefore be considered a conservative estimate (cf. Figure 6).
We have shown that medium-length OOV words (5 to 11 characters) profit from training information in the shorter words of the training set (cf. Figure 7). Longer OOV words (11 to 23 characters) profit from training on words whose length is 1 to 11 characters. Interestingly, OOV words can be recognized with high performance in a range for which there are not many examples (cf. Figure 7). In addition, for INV words shorter than 18 characters, the accuracy is higher than 95%. Therefore, our method recognizes OOV and INV words of common lengths with high accuracy. Stated differently, we demonstrate an important finding on a single network: increasing the number of samples of difficult in-vocabulary word classes yields superior results, while the performance on easy in-vocabulary word classes is high even for a limited number of samples.
The goal of this research is not a record attempt towards maximized accuracy on the RIMES and the KdK datasets. Higher performance can undoubtedly be achieved using a larger ensemble. However, our choice of an ensemble of five voting elements is a compromise with very good and stable performance. The jump of more than 1 pp in performance from one individual classifier to five classifiers is larger than the increase of less than 0.3 pp from 5 to 10 classifiers, and the increase is even smaller for higher numbers of classifiers in the ensemble, showing diminishing returns.
Furthermore, we have shown that providing a more than 30 times larger dictionary causes only a slight drop in performance. In addition, for the dictionary-free approach, using an ensemble results in a much higher performance with more stability than a single network. In the higher-performing approach, when a dictionary is used, this relative improvement is present but less prominent. Moreover, as expected from previous research, using the CTC decoder with a dictionary increases the performance of our model compared to the dictionary-free CTC decoder.
This study was aimed at achieving high-performance handwritten word recognition using deep learning, however with a limited cost in terms of network handcrafting, combined with low complexity. Our model consists of an ensemble of just five homogeneous, end-to-end trainable recognizers, using plurality voting with a solution for ties. Each recognizer is composed of five convolutional layers and three BiLSTM layers, followed by a CTC layer. Diversity is fostered by varying the number of units in the hidden layers of the CNNs. For CTC decoding, a dual-state word-beam search is applied, using the given dictionary as the only language model. Furthermore, we study the effects of dictionary-free Best-path CTC decoding on a single network and on the ensemble. Training the system is done from scratch, exclusively on the given dataset, and data augmentation is not used during testing. The word accuracy of our model is 96.6% on RIMES and 97.4% on the KdK dataset, a locally collected historical handwritten dataset. The results show that an ensemble size of more than five networks yields only limited further improvement; the method is not very sensitive to the exact choice of networks. Moreover, we showed that using an extra separator in the label-coding scheme boosts the performance, with a particular advantage in the case of a large dictionary.
We showed that providing a more than 30 times larger dictionary causes only a slight drop in performance. Ensemble voting improves the performance; its effect is greater on weaker recognizers. Longer out-of-vocabulary (OOV) words benefit from training information in the shorter words of the training set.
For in-vocabulary word classes, increasing the number of samples yields better results for difficult classes, but has no effect on easy word classes. The performance of our model is relatively high even for OOV classes in word-length ranges where the training set contains only a limited number of samples. The suggested method is applicable to e-Science services where it is not feasible to manually tailor hyperparameters, pre-processing, and the language model for each manuscript based on prior knowledge.
Word-based LSTMs cannot make use of larger textual context. Therefore, as future work, we plan to extend our approach to handle the handwritten-line recognition task. Moreover, we will explore the applicability of our model to other datasets in different languages, and increase the performance on out-of-vocabulary words. Furthermore, the challenge of high-performance recognition of long words will be addressed.
This work is part of the research programme Making Sense of Illustrated Handwritten Archives with project number 652-001-001, which is financed by the Netherlands Organisation for Scientific Research (NWO). We would like to thank Gideon Maillette de Buy Wenniger for thoughtful advice and the Center for Information Technology of the University of Groningen for providing access to the Peregrine high performance computing cluster.
- LeCun and Bengio (1998) LeCun Y, Bengio Y (1998) Convolutional networks for images, speech, and time series. In: Arbib MA (ed) The Handbook of Brain Theory and Neural Networks, MIT Press, Cambridge, MA, USA, pp 255–258
- Hochreiter and Schmidhuber (1997) Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Computation 9(8):1735–1780, DOI 10.1162/neco.1997.9.8.1735
- Graves et al. (2005) Graves A, Fernández S, Schmidhuber J (2005) Bidirectional LSTM networks for improved phoneme classification and recognition. In: Duch W, Kacprzyk J, Oja E, Zadrożny S (eds) Artificial Neural Networks: Formal Models and Their Applications - ICANN 2005, Springer Berlin Heidelberg, Berlin, Heidelberg, pp 799–804
- Graves and Schmidhuber (2009) Graves A, Schmidhuber J (2009) Offline handwriting recognition with multidimensional recurrent neural networks. In: Koller D, Schuurmans D, Bengio Y, Bottou L (eds) Advances in Neural Information Processing Systems 21, Curran Associates, Inc., pp 545–552
- Li and Wu (2014) Li X, Wu X (2014) Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) pp 4520–4524
- de Buy Wenniger et al. (2019) de Buy Wenniger GM, Schomaker L, Way A (2019) No padding please: Efficient neural handwriting recognition. In: 15th International Conference on Document Analysis and Recognition (ICDAR 2019), Sydney, Australia
- Doetsch et al. (2014) Doetsch P, Kozielski M, Ney H (2014) Fast and robust training of recurrent neural networks for offline handwriting recognition. In: 14th International Conference on Frontiers in Handwriting Recognition, pp 279–284, DOI 10.1109/ICFHR.2014.54
- Stuner et al. (2016) Stuner B, Chatelain C, Paquet T (2016) Handwriting recognition using Cohort of LSTM and lexicon verification with extremely large lexicon. arXiv:1612.07528
- Puigcerver (2018) Puigcerver J (2018) A probabilistic formulation of keyword spotting. PhD thesis, University of Valencia
- Scheidl et al. (2018) Scheidl H, Fiel S, Sablatnig R (2018) Word beam search: A connectionist temporal classification decoding algorithm. In: The International Conference on Frontiers of Handwriting Recognition (ICFHR), IEEE Computer Society, pp 253–258
- Graves et al. (2006) Graves A, Fernández S, Gomez F, Schmidhuber J (2006) Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In: Proceedings of the 23rd International Conference on Machine Learning, ACM, New York, NY, USA, ICML ’06, pp 369–376, DOI 10.1145/1143844.1143891
- Hauser and Schulz (2007) Hauser AW, Schulz KU (2007) Unsupervised learning of edit distance weights for retrieving historical spelling variations. In: Proceedings of the First Workshop on Finite-State Techniques and Approximate Search, pp 1–6
- Emlen (2017) Emlen NQ (2017) Perspectives on the Quechua-Aymara contact relationship and the lexicon and phonology of Pre-Proto-Aymara. International Journal of American Linguistics 83(2):307–340
- Voigtlaender et al. (2016) Voigtlaender P, Doetsch P, Ney H (2016) Handwriting recognition with large multidimensional long short-term memory recurrent neural networks. In: 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), pp 228–233, DOI 10.1109/ICFHR.2016.0052
- Grosicki et al. (2006) Grosicki E, Carré M, Geoffrois E, Augustin E, Preteux F (2006) La campagne d’évaluation RIMES pour la reconnaissance de courriers manuscrits [The RIMES evaluation campaign for handwritten mail recognition]. In: Actes Colloque International Francophone sur l’Ecrit et le Document (CIFED’06), Fribourg, Switzerland, pp 61–66
- Van der Zant et al. (2008) Van der Zant T, Schomaker L, Haak K (2008) Handwritten-word spotting using biologically inspired features. IEEE Transactions on Pattern Analysis and Machine Intelligence 30(11):1945–1957, DOI 10.1109/TPAMI.2008.144
- Van Oosten and Schomaker (2014)
- Van der Zant et al. (2009) Van der Zant T, Schomaker L, Zinger S, Van Schie H (2009) Where are the search engines for handwritten documents? Interdisciplinary Science Reviews 34(2-3):224–235, DOI 10.1179/174327909X441126
- He et al. (2016) He S, Samara P, Burgers J, Schomaker L (2016) Image-based historical manuscript dating using contour and stroke fragments. Pattern Recognition 58:159–171
- Schomaker (2019) Schomaker L (2019) A large-scale field test on word-image classification in large historical document collections using a traditional and two deep-learning methods. ArXiv
- Poznanski and Wolf (2016) Poznanski A, Wolf L (2016) CNN-N-Gram for handwriting word recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 2305–2314, DOI 10.1109/CVPR.2016.253
- Almazán et al. (2014) Almazán J, Gordo A, Fornés A, Valveny E (2014) Word spotting and recognition with embedded attributes. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(12):2552–2566, DOI 10.1109/TPAMI.2014.2339814
- Hardoon et al. (2004) Hardoon DR, Szedmak S, Shawe-Taylor J (2004) Canonical correlation analysis, an overview with application to learning methods. Neural Computation 16(12):2639–2664
- Dalal and Triggs (2005) Dalal N, Triggs B (2005) Histograms of oriented gradients for human detection. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol 1, pp 886–893 vol. 1, DOI 10.1109/CVPR.2005.177
- Graves (2012) Graves A (2012) Supervised sequence labelling with recurrent neural networks, vol 385. Springer-Verlag Berlin Heidelberg
- Marti and Bunke (2002) Marti UV, Bunke H (2002) The IAM-database: an English sentence database for offline handwriting recognition. International Journal on Document Analysis and Recognition 5:39–46
- Menasri et al. (2012) Menasri F, Louradour J, Bianne-Bernard AL, Kermorvant C (2012) The A2iA French handwriting recognition system at the Rimes-ICDAR2011 competition. In: Viard-Gaudin C, Zanibbi R (eds) Document Recognition and Retrieval XIX, International Society for Optics and Photonics, SPIE, vol 8297, pp 263–270, DOI 10.1117/12.911981
- Sueiras et al. (2018) Sueiras J, Ruiz V, Sanchez A, Velez JF (2018) Offline continuous handwriting recognition using sequence to sequence neural networks. Neurocomputing 289:119 – 128
- Ptucha et al. (2019) Ptucha R, Such FP, Pillai S, Brockler F, Singh V, Hutkowski P (2019) Intelligent character recognition using fully convolutional neural networks. Pattern Recognition 88:604 – 613
- Zipf (1935) Zipf GK (1935) The Psycho-Biology of Language. Houghton Mifflin, Oxford, England
- Dutta et al. (2018) Dutta K, Krishnan P, Mathew M, Jawahar CV (2018) Improving CNN-RNN hybrid networks for handwriting recognition. In: 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), pp 80–85, DOI 10.1109/ICFHR-2018.2018.00023
- Vinciarelli and Luettin (2001) Vinciarelli A, Luettin J (2001) A new normalization technique for cursive handwritten words. Pattern Recognition Letters 22(9):1043 – 1050
- Okafor et al. (2017) Okafor E, Smit R, Schomaker L, Wiering M (2017) Operational data augmentation in classifying single aerial images of animals. In: 2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA), Gdynia, Poland, pp 354–360
- Okafor et al. (2018) Okafor E, Schomaker L, Wiering MA (2018) An analysis of rotation matrix and colour constancy data augmentation in classifying images of animals. Journal of Information and Telecommunication 2(4):465–491
- Shi et al. (2017) Shi B, Bai X, Yao C (2017) An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 39(11):2298–2304, DOI 10.1109/TPAMI.2016.2646371
- Levenshtein (1966) Levenshtein VI (1966) Binary codes capable of correcting deletions, insertions and reversals. Soviet Physics Doklady 10:707
- Wagner and Fischer (1974) Wagner RA, Fischer MJ (1974) The string-to-string correction problem. J ACM 21(1):168–173
- Seni et al. (1996) Seni G, Kripásundar V, Srihari RK (1996) Generalizing edit distance to incorporate domain information: Handwritten text recognition as a case study. Pattern Recognition 29(3):405 – 414
- Oommen and Loke (1997) Oommen B, Loke R (1997) Pattern recognition of strings with substitutions, insertions, deletions and generalized transpositions. Pattern Recognition 30(5):789 – 800
- Angell et al. (1983) Angell RC, Freund GE, Willett P (1983) Automatic spelling correction using a trigram similarity measure. Information Processing & Management 19(4):255 – 261
- Bassil and Alwani (2012) Bassil Y, Alwani M (2012) OCR post-processing error correction algorithm using Google’s online spelling suggestion. Journal of Emerging Trends in Computing and Information Sciences 3(1):90–99
- Asonov (2010) Asonov D (2010) Real-word typo detection. In: Horacek H, Métais E, Muñoz R, Wolska M (eds) Natural Language Processing and Information Systems, Springer Berlin Heidelberg, Berlin, Heidelberg, pp 115–129
- Amrhein and Clematide (2018) Amrhein C, Clematide S (2018) Supervised OCR error detection and correction using statistical and neural machine translation methods. Journal for Language Technology and Computational Linguistics 3(1):49–76
- Wells et al. (1990) Wells C, Evett L, Whitby P, Whitrow R (1990) Fast dictionary look-up for contextual word recognition. Pattern Recognition 23(5):501 – 508
- Favata (2001) Favata JT (2001) Off-line general handwritten word recognition using an approximate beam matching algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence 23(9):1009–1021
- Shannon (1951) Shannon CE (1951) Prediction and entropy of printed English. The Bell System Technical Journal 30(1):50–64
- Shannon (1948) Shannon CE (1948) A mathematical theory of communication. The Bell System Technical Journal 27:379–423 (Part I) 623–656 (Part II)
- Guyon and Pereira (1995) Guyon I, Pereira F (1995) Design of a linguistic postprocessor using variable memory length markov models. In: Proceedings of 3rd International Conference on Document Analysis and Recognition, Montreal, Quebec, Canada, vol 1, pp 454–457 vol.1, DOI 10.1109/ICDAR.1995.599034
- Swaileh et al. (2016) Swaileh W, Paquet T, Mohand K (2016) A syllabic model for handwriting recognition [Un modèle syllabique pour la reconnaissance de l’écriture]. In: CORIA 2016 - Conference en Recherche d’Informations et Applications- 13th French Information Retrieval Conference. CIFED 2016 - Colloque International Francophone sur l’Ecrit et le Document, Association Francophone de Recherche d’Information et Applications (ARIA), pp 23–37, DOI 10.1109/ICDAR.2015.7333846
- Liu et al. (2002) Liu CL, Koga M, Fujisawa H (2002) Lexicon-driven segmentation and recognition of handwritten character strings for Japanese address reading. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(11):1425–1437, DOI 10.1109/TPAMI.2002.1046151
- Seni et al. (1996) Seni G, Srihari RK, Nasrabadi N (1996) Large vocabulary recognition of on-line handwritten cursive words. IEEE Transactions on Pattern Analysis and Machine Intelligence 18(7):757–762, DOI 10.1109/34.506798
- Seni and Anastasakos (2000) Seni G, Anastasakos T (2000) Non-cumulative character scoring in a forward search for online handwriting recognition. In: 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.00CH37100), vol 6, pp 3450–3453, DOI 10.1109/ICASSP.2000.860143
- Seni and Seybold (1999) Seni G, Seybold J (1999) Forward search with discontinuous probabilities for online handwriting recognition. In: Proceedings of the Fifth International Conference on Document Analysis and Recognition. ICDAR ’99 (Cat. No.PR00318), IEEE Computer Society, Washington, DC, USA, pp 741–744, DOI 10.1109/ICDAR.1999.791894
- Powalka et al. (1993) Powalka RK, Sherkat N, Evett LJ, Whitrow RJ (1993) Multiple word segmentation with interactive look-up for cursive script recognition. In: Proceedings of 2nd International Conference on Document Analysis and Recognition (ICDAR ’93), Tsukuba City, Japan, pp 196–199, DOI 10.1109/ICDAR.1993.395750
- Ford and Higgins (1990) Ford DM, Higgins CA (1990) A tree-based dictionary search technique and comparison with n-gram letter graph reduction. In: Plamondon R, Leedham G (eds) Computer Processing of Handwriting, World Science Publishing Co., pp 291–312
- Bramall and Higgins (1995) Bramall PE, Higgins CA (1995) A cursive script-recognition system based on human reading models. Machine Vision and Applications 8(4):224–231
- Côté et al. (1998) Côté M, Lecolinet E, Cheriet M, Suen C (1998) Automatic reading of cursive scripts using a reading model and perceptual concepts. International Journal on Document Analysis and Recognition 1(1):3–17, DOI 10.1007/s100320050002
- Manke et al. (1996) Manke S, Finke M, Waibel A (1996) A fast search technique for large vocabulary on-line handwriting recognition. In: Proceedings of the 5th International Workshop on Frontiers in Handwriting Recognition, UK, pp 183–188
- Hwang and Sung (2016) Hwang K, Sung W (2016) Character-level incremental speech recognition with recurrent neural networks. In: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp 5335–5339, DOI 10.1109/ICASSP.2016.7472696
- Weber et al. (2017) Weber A, Ameryan M, Wolstencroft K, Stork L, Heerlien M, Schomaker L (2017) Towards a digital infrastructure for illustrated handwritten archives. In: Ioannides M (ed) Final Conference of the Marie Skłodowska-Curie Initial Training Network for Digital Cultural Heritage, ITN-DCH 2017, Springer, Olimje, Slovenia, vol 10605
- Schuh (2003) Schuh RT (2003) The Linnaean system and its 250-year persistence. The Botanical Review 69(1):59–78
- Ho (1992) Ho T (1992) A theory of multiple classifier systems and its application to visual word recognition. PhD thesis, State University of New York at Buffalo, Buffalo, NY, USA, UMI Order No. GAX92-22062
- Ranawana and Palade (2006) Ranawana R, Palade V (2006) Multi-classifier systems: Review and a roadmap for developers. International Journal of Hybrid Intelligent Systems 3(1):35–61
- Günter and Bunke (2003) Günter S, Bunke H (2003) Ensembles of classifiers for handwritten word recognition. International Journal on Document Analysis and Recognition (IJDAR) 5(4):224–232, DOI 10.1007/s10032-002-0088-2
- Günter and Bunke (2005) Günter S, Bunke H (2005) Off-line cursive handwriting recognition using multiple classifier systems on the influence of vocabulary, ensemble, and training set size. Optics and Lasers in Engineering 43(3):437 – 454
- Karimi et al. (2015) Karimi H, Esfahanimehr A, Mosleh M, jadval ghadam FM, Salehpour S, Medhati O (2015) Persian handwritten digit recognition using ensemble classifiers. Procedia Computer Science 73:416–425
- Yang et al. (2015) Yang W, Jin L, Xie Z, Feng Z (2015) Improved deep convolutional neural network for online handwritten Chinese character recognition using domain-specific knowledge. In: Proceedings of the 2015 13th International Conference on Document Analysis and Recognition (ICDAR), ICDAR ’15, pp 551–555
- Tin Kam Ho et al. (1994) Tin Kam Ho, Hull JJ, Srihari SN (1994) Decision combination in multiple classifier systems. IEEE Transactions on Pattern Analysis and Machine Intelligence 16(1):66–75, DOI 10.1109/34.273716
- Van Erp and Schomaker (2000) Van Erp M, Schomaker L (2000) Variants of the Borda count method for combining ranked classifier hypotheses. In: Proceedings 7th International Workshop on frontiers in handwriting recognition (7th IWFHR), pp 443–452
- Powalka et al. (1995) Powalka RK, Sherkat N, Whitrow RJ (1995) Recognizer characterisation for combining handwriting recognition results at word level. In: Proceedings of 3rd International Conference on Document Analysis and Recognition, vol 1, pp 68–73 vol.1, DOI 10.1109/ICDAR.1995.598946
- Grosicki et al. (2008) Grosicki E, Carre M, Brodin JM, Geoffrois E (2008) RIMES evaluation campaign for handwritten mail processing. In: ICFHR 2008 : 11th International Conference on Frontiers in Handwriting Recognition, Concordia University, Montreal, Canada, pp 1–6
- Nair and Hinton (2010) Nair V, Hinton GE (2010) Rectified linear units improve restricted Boltzmann machines. In: Proceedings of the 27th International Conference on International Conference on Machine Learning, Omnipress, USA, ICML’10, pp 807–814
- Zeiler (2012) Zeiler MD (2012) ADADELTA: an adaptive learning rate method. CoRR abs/1212.5701, 1212.5701
- Tieleman and Hinton (2012) Tieleman T, Hinton G (2012) Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA:Neural networks for machine learning 4(2):26–31
- Peleg (1978) Peleg B (1978) Consistent voting systems. Econometrica: Journal of the Econometric Society pp 153–161
- SW BV (2018) SW BV (2018, accessed October 17, 2017) Woorden.org, Nederlandse woordenboek [Dutch dictionary]. URL https://www.woorden.org
- Stuner et al. (2017) Stuner B, Chatelain C, Paquet T (2017) Self-training of BLSTM with lexicon verification for handwriting recognition. In: 14th International Conference on Document Analysis and Recognition (ICDAR 2017), Kyoto, Japan, pp 633–638