Improving BPSO-based feature selection applied to offline WI handwritten signature verification through overfitting control

04/07/2020 ∙ by Victor L. F. Souza, et al. ∙ École de Technologie Supérieure (ÉTS), UFPE

This paper investigates the presence of overfitting when using Binary Particle Swarm Optimization (BPSO) to perform feature selection in the context of Handwritten Signature Verification (HSV). SigNet is a state-of-the-art Deep CNN model for feature representation in the HSV context and produces vectors with 2048 dimensions. Some of these dimensions may carry redundant information in the dissimilarity representation space generated by the dichotomy transformation (DT) used by the writer-independent (WI) approach. The analysis is carried out on the GPDS-960 dataset. Experiments demonstrate that the proposed method is able to control overfitting during the search for the most discriminant representation.


1. Introduction

In the Writer-Independent Handwritten Signature Verification (WI-HSV) approach, a single model is trained in a dissimilarity space and is responsible for verifying the signatures of any available writer in the dataset. Thus, the classification inputs are dissimilarity vectors, which represent the difference between the features of a questioned signature and those of a reference signature from the claimed writer.

SigNet, proposed by Hafemann et al. (Hafemann et al., 2017), is a state-of-the-art Deep Convolutional Neural Network (DCNN) model for feature representation in the HSV context; its feature vectors are composed of 2048 dimensions. Some of these features may be redundant when transposed to a WI dissimilarity space.

Thus, we propose to use a feature selection technique based on Binary Particle Swarm Optimization (BPSO) to retain only the relevant dimensions in this transposed space (Chuang et al., 2011). The optimization is conducted by minimizing the Equal Error Rate (EER) of the SVM in a wrapper mode (Radtke et al., 2006). One problem that can arise in this scenario is overfitting. Thus, the objectives of this study are: (i) to analyze the redundancy of the features obtained in the dissimilarity space generated by DT; (ii) to investigate the presence of overfitting when using BPSO to perform feature selection in a wrapper mode; and (iii) to verify whether overfitting control can improve optimization performance.

2. Basic concepts

Writer-Independent Handwritten Signature Verification (WI-HSV)

The Dichotomy Transformation (DT) allows a multi-class pattern recognition problem to be transformed into a 2-class problem. In this approach, a dissimilarity (distance) measure is used to distinguish whether a given reference sample and a questioned sample belong to the same class or not (Souza et al., 2020). When applied to the HSV context, it characterizes the writer-independent (WI) approach: the samples are signatures, and verification means deciding whether the questioned signature belongs to the same writer as the reference (positive class) or not (negative class) (Souza et al., 2020).

The dissimilarity vector u resulting from DT is obtained as the element-wise absolute difference u_i = |x_{q_i} − x_{r_i}|, where x_{q_i} and x_{r_i} are the i-th features of the questioned signature (x_q) and of the reference signature (x_r), respectively (Souza et al., 2020).
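As an illustration, the following is a minimal sketch of the dichotomy transformation in Python, assuming the SigNet feature vectors are available as NumPy arrays; the names and the toy data are illustrative, not the authors' implementation.

```python
import numpy as np

def dissimilarity_vector(x_q: np.ndarray, x_r: np.ndarray) -> np.ndarray:
    """Dichotomy transformation: element-wise absolute difference between the
    questioned-signature features x_q and the reference-signature features x_r."""
    return np.abs(x_q - x_r)

# Toy usage: random 2048-d vectors stand in for SigNet descriptors.
rng = np.random.default_rng(0)
x_q, x_r = rng.normal(size=2048), rng.normal(size=2048)
u = dissimilarity_vector(x_q, x_r)  # positive class if both signatures come from the same writer
```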

Feature selection using BPSO

In the context of feature selection, particle swarm optimization algorithms are used in their binary version (BPSO) and have been obtaining good results (Chuang et al., 2011). We use a self-adaptive variation of PSO (IDPSO), in which the algorithm itself adjusts the inertia weight w and the acceleration coefficients c1 and c2 dynamically over the iterations, promoting global search at the beginning and local search in the final iterations (Zhang et al., 2013).

The transformation of the continuous search space into a binary space is conducted by using a V-shaped transfer function (Mirjalili and Lewis, 2013).
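A hedged sketch of how a BPSO particle could be binarized with a V-shaped transfer function follows; it assumes the |tanh(v)| variant, and the exact transfer function and update scheme of the cited works may differ.

```python
import numpy as np

def v_shaped(velocity: np.ndarray) -> np.ndarray:
    """V-shaped transfer function: maps each velocity to a bit-flip probability."""
    return np.abs(np.tanh(velocity))

def update_position(position: np.ndarray, velocity: np.ndarray,
                    rng: np.random.Generator) -> np.ndarray:
    """Flip a bit (feature kept <-> discarded) wherever a random draw falls
    below the transfer-function value; otherwise keep the current bit."""
    flip = rng.random(position.shape) < v_shaped(velocity)
    return np.where(flip, 1 - position, position)

# Toy usage: one particle over the 2048 DT dimensions.
rng = np.random.default_rng(1)
position = rng.integers(0, 2, size=2048)      # 1 = feature selected
velocity = rng.normal(scale=0.5, size=2048)
position = update_position(position, velocity, rng)
```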

We propose a BPSO-based feature selection for WI-HSV in a wrapper mode, where the optimization minimizes the Equal Error Rate (EER) of the SVM. The EER is computed with a user-specific threshold, considering only the genuine signatures and the skilled forgeries (Souza et al., 2020).
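For illustration, a simplified wrapper fitness in Python: train an SVM on the selected DT dimensions and return a global-threshold EER. This is a sketch under assumed data splits and SVM settings, and it omits the per-writer (user-threshold) computation used in the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve
from sklearn.svm import SVC

def fitness_eer(mask, X_train, y_train, X_opt, y_opt):
    """mask: binary vector over the 2048 DT dimensions (1 = keep the feature)."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:                      # degenerate particle: worst possible fitness
        return 1.0
    clf = SVC(kernel="rbf", gamma="scale").fit(X_train[:, cols], y_train)
    scores = clf.decision_function(X_opt[:, cols])
    fpr, tpr, _ = roc_curve(y_opt, scores)  # y_opt: 1 = same writer, 0 = forgery
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))   # operating point where FAR ~= FRR
    return (fpr[idx] + fnr[idx]) / 2.0      # EER estimate to be minimized by BPSO
```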

In the feature selection scenario, overfitting occurs when the optimized feature set memorizes the training set instead of producing a general model; hence, it may fail to fit additional data. In the wrapper-based approach, the swarm optimization process becomes another learning process and may itself be subject to overfitting. To decrease the chance of overfitting, a validation procedure can be used during the optimization process in order to select solutions with good generalization power.

According to Radtke et al. (Radtke et al., 2006), one possible validation strategy is the last iteration strategy: the final candidate solutions are validated on another set of unknown observations, the selection set. With this approach, the optimization routine produces better results than selecting solutions based solely on the accuracy on the optimization set. However, this strategy has the disadvantage that the solutions are validated only once, after the optimization process is completed.

Another approach is the global validation strategy (Radtke et al., 2006), in which the candidate solutions are validated at every iteration of the optimization process. This can be accomplished by storing the best validated solutions in an external (auxiliary) archive.

During both validation routines, the optimization set is temporarily replaced by the selection set to evaluate the fitness function. In the global validation strategy, at each iteration, all the best solutions previously found are grouped with the population of the new swarm and then ranked; the external archive keeps the best candidate solutions.
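Below is a simplified sketch of the global validation loop. It assumes a `bpso_iteration` search step and `fitness_eer`-style callables like the one above; the archive management details (size, ranking) are assumptions for illustration, not the exact procedure of Radtke et al. (2006).

```python
def optimize_with_global_validation(swarm, n_iterations, archive_size,
                                    fitness_opt, fitness_sel):
    """fitness_opt scores a particle on the optimization set, fitness_sel on the
    selection set; both return an EER-like value to be minimized."""
    archive = []                                    # external (auxiliary) archive
    for _ in range(n_iterations):
        swarm = bpso_iteration(swarm, fitness_opt)  # assumed BPSO step driven by the optimization set
        # Global validation: re-score the swarm on the selection set, merge with
        # previously archived solutions, and keep only the best validated ones.
        candidates = [(fitness_sel(p), p) for p in swarm] + archive
        candidates.sort(key=lambda pair: pair[0])   # lower validated EER is better
        archive = candidates[:archive_size]
    return archive[0][1]                            # best-generalizing feature mask
```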

3. Experiments

The experiments are carried out on the GPDS-960 dataset, specifically on the GPDS-300 stratification. The Exploitation set, from which the test set is drawn, is composed of writers 1 to 300. The Development set is formed by the remaining 581 writers; of these, 146 writers are randomly selected to compose the training set, another 145 the validation set, another 145 the optimization set, and the remaining 145 the selection set.

As in the work by Souza et al. (Souza et al., 2020), we use the highest value for the number of references, i.e., 12 references per writer, and the Max function to fuse the partial decisions. In the training step, the model uses 10 genuine signatures and 10 random forgeries. During optimization (optimization and selection sets), the fitness function minimizes the EER with user-specific thresholds, considering only genuine signatures and skilled forgeries; in this case, for each writer, 10 genuine signatures and 10 skilled forgeries are used. These operations are performed in the space with reduced samples, i.e., after prototype selection through Condensed Nearest Neighbors (CNN) (Souza et al., 2020). The test set is obtained as in (Hafemann et al., 2017). The SVM and IDPSO settings are the same as in (Souza et al., 2020) and (Zhang et al., 2013), respectively. The maximum number of iterations was set to 40, and five replications were carried out.

3.1. Results and discussions

Table 1 presents the results obtained by the models with and without feature selection. Table 2 compares the presented models with state-of-the-art methods on the GPDS-300 dataset (references can be found in (Souza et al., 2020)).

Approach | #features | EER (%)
No feature selection | 2048 | 3.47 (0.15)
Feature selection, no validation | 1124 | 3.76 (0.07)
Feature selection, last iteration validation | 1120 | 3.64 (0.08)
Feature selection, global validation | 1140 | 3.46 (0.08)
Table 1. Comparison of EER for the presented models on the GPDS-300 dataset (errors in %; standard deviations in parentheses)

In terms of validation strategy, both validation schemes improved the results of the BPSO-based feature selection compared to running it without any validation, and only the global validation strategy was able to match or surpass the model without feature selection. The results indicate that not using a validation stage is worse than validating at the last iteration, which in turn is worse than the global validation strategy. Thus, by using the global validation strategy it is possible to control the overfitting of the model and, thereby, improve the performance of the BPSO-based feature selection approach.

Another observable aspect is the presence of redundant features in the dissimilarity space generated by DT: the model with feature selection and global validation uses only about 55% of the total number of features (1140 of 2048) and still obtains an EER similar to that of the model trained with all 2048 features.

Type | HSV Approach | #Ref | #Models | EER (%)
WD | Hafemann et al. (2016) | 12 | 300 | 12.83
WD | Zois et al. (2016) | 5 | 300 | 5.48
WD | Hafemann et al. (2017) | 5 | 300 | 3.92 (0.18)
WD | Hafemann et al. (2017) | 12 | 300 | 3.15 (0.18)
WD | Serdouk et al. (2017) | 10 | 300 | 9.30
WD | Hafemann et al. (2018) | 12 | 300 | 3.15 (0.14)
WD | Hafemann et al. (2018) (fine-tuned) | 12 | 300 | 0.41 (0.05)
WD | Yilmaz and Ozturk (2018) | 12 | 300 | 0.88 (0.36)
WD | Zois et al. (2019) | 12 | 300 | 0.70
WI | Dutta et al. (2016) | N/A | 1 | 11.21
WI | Hamadene and Chibani (2016) | 5 | 1 | 18.42
WI | Souza et al. (2019) | 12 | 1 | 3.69 (0.18)
WI | Zois et al. (2019) | 5 | 1 | 3.06
WI | This work, no feature selection | 12 | 1 | 3.47 (0.15)
WI | This work, feature selection with global validation | 12 | 1 | 3.46 (0.08)
Table 2. Comparison of EER with the state of the art on the GPDS-300 dataset (errors in %)

In general, our approach obtains a low EER. In the WI scenario, it is only outperformed by the model proposed by Zois et al. (2019). In the comparison with WD models, our approach obtains worse results only than Hafemann et al. (2018) (fine-tuned), Yilmaz and Ozturk (2018), and Zois et al. (2019), being better than or comparable to the other models. It is important to point out that, as a WI model, our approach has greater scalability than these other models, since only one classifier is needed to perform signature verification for all writers.

4. Conclusions

In this work, we evaluated the use of BPSO-based feature selection for offline writer-independent handwritten signature verification. The optimization was conducted by minimizing the Equal Error Rate (EER) of the SVM in a wrapper mode. Experimental results showed that not using a validation stage is worse than validating at the last iteration, which in turn is worse than the global validation strategy. Thus, by using the global validation strategy it is possible to control the overfitting of the model and, thereby, improve the performance of the BPSO-based feature selection approach.

Acknowledgements.
This work was supported by CNPq, FACEPE and ÉTS Montréal.

References

  • L. Chuang, S. Tsai, and C. Yang (2011) Improved binary particle swarm optimization using catfish effect for feature selection. Expert Systems with Applications 38 (10), pp. 12699–12707.
  • L. G. Hafemann, R. Sabourin, and L. S. Oliveira (2017) Learning features for offline handwritten signature verification using deep convolutional neural networks. Pattern Recognition 70, pp. 163–176.
  • S. Mirjalili and A. Lewis (2013) S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm and Evolutionary Computation 9, pp. 1–14.
  • P. V. W. Radtke, T. Wong, and R. Sabourin (2006) An evaluation of over-fit control strategies for multi-objective evolutionary optimization. In The 2006 IEEE International Joint Conference on Neural Network Proceedings, pp. 3327–3334.
  • V. L. F. Souza, A. L. I. Oliveira, R. M. O. Cruz, and R. Sabourin (2020) A white-box analysis on the writer-independent dichotomy transformation applied to offline handwritten signature verification. Expert Systems with Applications.
  • Y. Zhang, X. Xiong, and Q. Zhang (2013) An improved self-adaptive PSO algorithm with detection function for multimodal function optimization problems. Mathematical Problems in Engineering, Vol. 2013.