Pattern recognition (PR) plays a significant role in image-based forensics. There is a wide range of interesting topics, such as person identification based on biometrics cheng2019deep , postmortem identification gomez20183d , and the recent challenge of DeepFake detection li2020cvpr . In particular, PR-based similarity analysis can enable forensic document examination (FDE) to solve forgery detection qureshi2019hyperspectral , author identification he2019deep , signature authenticity verification soleimani2016deep , and other kinds of problems.
Complementarity (or compatibility) between patterns is a concept related to similarity (present in some of the aforementioned applications) that can also benefit from PR townshend2019end ; ostertag2020matching . For instance, shape analysis paixaotifs2019 and machine learning perl2011 have been used to analyze patterns (e.g., lines, shapes, characters) that emerge when two fragments of paper (shreds) are placed side-by-side. Such an ability is helpful to reconstruct manually and mechanically-shredded documents, where the purpose is to arrange the shreds – as in a jigsaw puzzle – to recover the original document(s). In the FDE context, the reconstruction may be imperative for further analysis if the information contained in such documents is essential to decide criminal and civil cases.
The automatic and digital reconstruction of shredded documents is desirable because the traditional (manual) approach is potentially damaging to the paper due to the continuous direct contact with the shreds, besides being a slow and tedious process for humans. Computational solutions typically involve an optimization process guided by an objective function that quantifies the global fitting (compatibility) of the shreds. This function can be computed directly from the entire image representing a reconstruction badawy2018discrete or derived from the compatibility between pairs of shreds (pairwise compatibility), the latter being more common.
In this direction, a recent work paixaotifs2019 verified the lack of robust pairwise compatibility measures addressing real mechanically-shredded documents (i.e., cut by paper shredder machines) with low color information and predominantly textual content. To fill this gap, the authors explored PR by using the similarity of the characters’ shapes as the criterion for compatibility determination, rather than the traditional edge pixel-wise correlation used in marques2013 ; gong2016 ; chen2018high . The characters’ shape-based approach yielded better accuracy than the state-of-the-art at that time in the reconstruction of strip-shredded documents (i.e., cut only in the longitudinal direction); however, it incurred a significantly higher computational cost that limits its application to a large number of documents. Furthermore, the efficacy of the method strongly depends on the proper segmentation of text areas in the shreds, which may be challenging for more complex documents mixing text and pictorial elements (e.g., tables, diagrams, photos).
In this work, deep learning is leveraged to enable faster and more robust reconstruction of strip-shredded documents in more realistic scenarios. In a segmentation-free approach, the fitting of the shreds is modeled as a two-class (valid or invalid) PR problem. For this purpose, a convolutional neural network (CNN) is trained with local (small) samples extracted around the cutting sections of digitally-cut documents, i.e., digital documents submitted to simulated shredding. Besides facilitating data acquisition, the simulated process also provides the ground-truth order of the shreds for free. This enables self-supervised learning of the model, in which class labels are determined directly from the adjacency information of the shreds. A preliminary investigation of this approach paixao2018deep indicated the potential of using simulated data to reconstruct real-shredded documents in constrained scenarios, such as single-page reconstruction and intra-database evaluation. The first scenario assumes that the shreds are previously separated by pages so that the reconstruction system deals with one page at a time (limited to shreds per page). Note that this is far from a real case scenario with multiple documents to be reconstructed, but it gives an idea of the method's potential. In turn, intra-database evaluation consists of using the same document collection to provide both training and test sets, which reduces the variation between the two sets.
This work extends the preliminary study paixao2018deep by addressing the more realistic and challenging task of reconstructing, at once, several shredded documents with heterogeneous content. The proposed evaluation explores cross-database scenarios, i.e., training and testing on different collections of documents. As an additional contribution, a new dataset with real strip-shredded documents (totaling shreds) of heterogeneous types was created for more extensive experimentation and released to the community to overcome the lack of publicly available collections representing real scenarios. This new dataset alone comprises more documents (100) than the full collection used in paixao2018deep (80). Moreover, our dataset is significantly more heterogeneous since it comprises 10 different document categories (e.g., handwritten documents and forms), which is not common in the literature on the reconstruction of shredded documents. Such diversity makes it possible to verify the generalization and robustness of the proposed method.
Other specific contributions of this work include: (i) a more in-depth description of the technical details; (ii) an extended discussion of the state-of-the-art; (iii) an ablation study analyzing the sensitivity of the method to some key parameters. Experiments were conducted on the two collections of shredded documents used in paixao2018deep , as well as on the newly-introduced dataset. Results show that our deep learning approach outperformed the competing methods, being capable of reconstructing mixed shredded documents ( shreds) with accuracy superior to , which brings the state-of-the-art of the document reconstruction problem to another level.
The remainder of the text is organized as follows. The next section presents the related work. Section 3 describes the proposed reconstruction system. The experimental methodology and the obtained results are presented, respectively, in Sections 4 and 5. Finally, conclusions and future work are drawn in Section 6.
2 Related Work
The literature on reconstruction of shredded documents has mostly focused on improving the optimization search process, relegating the image-based problem of verifying shreds’ compatibility to the background. Most works are restricted to the reconstruction of simulated-shredded documents morandell2008 ; prandtstetter2008 ; sleit2013 ; phienthrakul2015 ; gong2016 , and, in this situation, the criticality of the compatibility evaluation step in the reconstruction pipeline is masked, giving rise to potentially misleading conclusions. Therefore, this review focuses on different approaches to assess compatibility between shreds, which is the main contribution of this work.
Inspired by the jigsaw-puzzle solving problem, most literature on reconstruction explores low-level features for compatibility evaluation. The most naive approach in this context is to apply distance metrics (e.g., Hamming, Euclidean, Canberra, Manhattan) on the raw pixels of opposite boundaries of two shreds smet2005 ; skeoch2006 ; marques2013 ; chen2017a ; chen2019solution . Some of these methods rely only on the outermost edge pixels skeoch2006 ; marques2013 , and are thus more sensitive to the corruption of the shreds’ extremities caused by the mechanical cut. To alleviate this, Marques and Freitas marques2013 suggest removing some border pixels, which, in practice, results in limited improvement. Additionally, different color spaces have been investigated (RGB smet2005 , HSV skeoch2006 ; marques2013 , gray-scale chen2017a ) without great success for text documents due to their poor chromatic information.
More sophisticated compatibility measures were designed to solve image puzzles with rectangular tiles, such as the prediction-based measure pomeranz2011 . In the context of document shreds, Andaló et al. andalo2017 proposed a modified version of the measure proposed by Pomeranz et al. pomeranz2011 , reaching near 100% accuracy for simulated shredding with documents from the ISRI-Tk OCR dataset nartker2005 , a collection of images commonly used to assess optical character recognition (OCR) software. Nevertheless, paixaotifs2019 ; paixao2018deep demonstrated that the accuracy of this method decreases dramatically when dealing with real-shredded documents from the same image collection.
Some authors designed compatibility measures focusing on text documents balme2007 ; morandell2008 ; ranca2013 . Balme balme2007 and Morandell morandell2008 addressed the problem of vertical misalignment between pixels around the cutting section, i.e., the area near the touching edges of two adjacent shreds. Both adopt a binary image representation given the black-and-white appearance of text documents. Balme’s measure, used in several works prandtstetter2008 ; gong2016 ; chen2018high , consists of a weighted pixel correlation intended to mitigate the misalignment issue, whereas Morandell morandell2008 quantifies the degree of misalignment between corresponding text lines (“black” pixels) as a measure of compatibility. In an unsupervised approach, Ranca ranca2013 proposed learning the expected arrangement of pixels around the cutting section using information from the pixels inside the shreds. The best results were achieved with a simple probabilistic model, although feed-forward neural networks were also evaluated, unsuccessfully. Their experiments were also limited, given that they were carried out only on simulated-shredded data. Text-line detection was exploited in lin2012 ; sleit2013 ; pohler2015 ; however, these methods struggle with ambiguities typically found in common text documents.
At a higher level of abstraction, compatibility can be assessed by exploring the matching degree of fragmented characters around the cutting section perl2011 ; phienthrakul2015 ; xing2017a ; xing2017b ; paixaotifs2019 . The continuity of character strokes was used as a matching criterion in phienthrakul2015 ; xing2017a . In such an approach, the reconstruction accuracy strongly relies on the vertical alignment of the shreds and on the image quality around the shreds’ boundaries. Alternatively, learning-based matching has also been proposed in the literature perl2011 ; xing2017b ; paixaotifs2019 . Matching in perl2011 leverages OCR based on keypoint features; OCR tends to work well for general text recognition, but its application to corrupted characters is quite unstable, which is a drawback of this formulation. Instead of identifying symbols, Xing et al. xing2017b proposed a learning model to identify valid combinations of symbols (restricted to the Chinese language) based on structural features. The work of Paixão et al. paixaotifs2019 , introduced in the previous section, analyzes the types of symbol combinations based on their shapes. Both xing2017b ; paixaotifs2019 depend on segmenting text information from the shreds.
In recent work, Liang and Li liang2019reassembling proposed the word path metric, which combines pixel- and character-level information (low-level metrics) with word-level information (high-level metric). A central procedure in their method is sampling candidate sequences and applying OCR for word recognition to improve pairwise compatibility estimation. Despite reporting accuracy comparable to paixao2018deep (the preliminary investigation on deep learning-based reconstruction), their validation relies on only three real-shredded instances. For the two instances with up to 39 shreds, their method achieved accuracy above , while the third instance, with 56 shreds, yielded of accuracy. As mentioned by the authors, scalability to larger instances (i.e., with more shreds) is still an issue, primarily due to the OCR overhead and its accuracy degradation. Additionally, for better accuracy, the number and size of candidate sequences have to be increased, which compromises run-time performance (approx. 16 times slower than paixao2018deep for a 60-shred instance).
In recent years, deep learning started to be used in the context of jigsaw puzzle solving with simulated-cut tiles. Le and Li le2018jigsawnet applied convolutional neural networks (CNNs) to verify potential matching pieces in order to reduce the search space for the posterior optimization process. Paumard et al. paumard2018jigsaw solved small (3×3) 2D-tile puzzles following the seminal ideas introduced in doersch2015unsupervised ; noroozi2016unsupervised , in which CNNs are trained in a self-supervised way to predict the relative positions of patches cropped from a reference image. More related to our work, Sholomon et al. sholomon2016dnn used a fully connected network to measure pairwise compatibility between 2D tiles. The boundary pixels of two tiles are fed to the network, and the network’s output, i.e., the predicted adjacency probability, is assigned as the pair’s compatibility. Although these methods can improve results, they only considered a non-realistic scenario with simulated-cut pieces.
To the best of our knowledge, this work is the first to explore deep learning in a realistic scenario with multiple shredded documents, having a preliminary investigation presented in paixao2018deep . The use of deep models aims to improve robustness in the real shredding context, where damage to the shreds’ borders prevents the use of pixel-level similarity evaluation or of stroke continuity analysis. Additionally, our approach is able to cope with more heterogeneous content because the fitting of patterns is learned in a self-supervised fashion from large-scale data without segmenting symbols, as detailed in the next section.
3 Deep Learning-based Reconstruction
The proposed reconstruction system is essentially divided into training (off-line) and reconstruction (on-line) pipelines (Figure 1). The training (top flow) aims to produce a model capable of quantifying the compatibility between shreds based solely on the content around the cutting sections of digitally-cut documents. Small samples (relative to the whole document) extracted from these documents are the patterns to be learned. This local approach follows the intuition behind manual reconstruction, where humans analyze the fitting of shreds based on the local matching of fragmented patterns (mainly at the text line level). These samples should be categorized as positive if they are likely to appear in real documents, or negative otherwise. In practice, positive samples are cropped from pairs of adjacent shreds, and negative samples from non-adjacent pairs. The learning process is said to be self-supervised because the adjacency relationship is automatically inferred in simulated shredding. After sampling, the deep model – a fully convolutional neural network (FCNN) – is trained as a classification model to distinguish between positive and negative samples, with the best model parameters determined through a validation process that also uses simulated data.
For reconstruction (bottom flow), the system has a mild assumption: the shreds of the documents to be reconstructed are already individualized in digital format, i.e., the documents were previously shredded, scanned, and their shreds were segmented. The best model obtained in the training stage is used for pairwise compatibility evaluation of the shreds. The resulting values, arranged as a square matrix, are the input for a graph-based optimization procedure that estimates a permutation of the shreds representing the final reconstruction. The training and reconstruction pipelines are presented in the following subsections alongside a more in-depth description of the proposed system.
3.1 Learning from Simulated-shredded Documents
The training pipeline aims to produce a model capable of quantifying pairwise compatibility, i.e., the probability that two shreds are adjacent in a certain order (order matters given the nature of the reconstruction problem). The input can be any collection of digital documents from which all the training data is extracted through simulated shredding. This is particularly beneficial since there is a lack of publicly available datasets containing real-world textual shredded documents, and the generation of such a dataset is tedious, error-prone, and highly demanding because it requires printing the documents, submitting them to a paper shredder, manually organizing and scanning the shreds, and, finally, post-processing them. The generation of training data and the training process itself are discussed in the next sections.
3.1.1 Simulated Shredding
This process consists of longitudinally slicing digital documents into rectangular regions with the same dimensions, wherein the height of the regions is equal to that of the input image, and the number of regions is an approximation of the number of shreds produced by regular shredders for A4 paper sheets. Text documents in most available public collections are binary or almost binary (e.g., the two collections in Section 4.1). This motivated us to adopt a binary representation of the input documents – resulting from applying Sauvola’s method sauvola2000adaptive – before the simulated shredding.
The simulated shreds, however, present clean edges, which is very unlikely in real-world mechanical shredding. To cope with that, the original content of the two rightmost and leftmost pixel columns of the shreds is replaced by a black-and-white pattern drawn from a uniform binary distribution. An overview of the process is depicted in Figure 2.
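The two steps above (equal-width slicing plus border noise) can be sketched as follows. This is an illustrative sketch, not the exact implementation: `num_shreds` and `noise_cols` are placeholder values (the actual shred count approximates that of a regular strip-cut shredder on an A4 sheet, and the two outermost columns on each side are replaced).

```python
import numpy as np

def simulate_shredding(page, num_shreds=30, noise_cols=2, seed=0):
    """Slice a binarized page (2D 0/1 array) into vertical strips of equal
    width and corrupt each strip's borders with uniform binary noise."""
    rng = np.random.default_rng(seed)
    h, w = page.shape
    width = w // num_shreds
    shreds = []
    for i in range(num_shreds):
        shred = page[:, i * width:(i + 1) * width].copy()
        # Replace the outermost columns with 0/1 noise to mimic the
        # corrupted edges produced by mechanical shredding.
        shred[:, :noise_cols] = rng.integers(0, 2, size=(h, noise_cols))
        shred[:, -noise_cols:] = rng.integers(0, 2, size=(h, noise_cols))
        shreds.append(shred)
    return shreds
```

Note that the shred height equals the page height, as in the text, and only the edges are altered.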
3.1.2 Sample Extraction
The input of this step is a document-wise set of digital shreds, and the output is a set of samples to be used in the training of the deep learning model. Given an input document, samples are extracted by pairing shreds and cropping small regions around the touching borders: positive samples come from adjacent shreds (respecting the shreds’ ground-truth order) and negatives from non-adjacent shreds (or adjacent shreds in swapped order). As the shreds are automatically obtained, the samples can be self-annotated since the correct sequence of shreds in the original document is known.
Positive and negative training samples were extracted following the same procedure: given a pair of shreds, a sample is a rectangular region of pixels ( rows of the rightmost pixels of the left shred and rows of the leftmost pixels of the right shred). Such dimensions correspond to the minimum even-valued input size that the adopted network architecture (described in the next section) can handle.
The shreds were sampled every two pixels along the vertical axis, and a limit of positive samples per document was fixed. To produce balanced sets, the number of negative samples is limited to the number of positive samples collected in the same document. It is important to mention that the document datasets are available in binary format, as further discussed in Section 4.1. In this context, we define the level of information of a sample as the percentage of its text (assumed as black) pixels. For effective training, samples with an information level less than a threshold are discarded due to the class ambiguity of such cases. This threshold was empirically set to based on visual inspection of a few samples: lower than this value, samples usually look like scanning noise.
Before extraction, the pairs of shreds are firstly shuffled to ensure sampling in different regions of the document since the number of samples per document is limited. Note that the extraction procedure is applied to each fragmented document obtained with simulated shredding, one document at a time, resulting in balanced sets of positive and negative samples.
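The extraction procedure can be sketched as below, assuming text pixels are encoded as 1 and background as 0. `SAMPLE_SIZE`, `stride`, `min_info`, and `max_pos` are illustrative stand-ins for the parameter values elided in the text.

```python
import numpy as np

SAMPLE_SIZE = 32  # assumed sample height/width (placeholder value)

def extract_samples(shreds, stride=2, min_info=0.05, max_pos=1000, seed=0):
    """Self-supervised sampling: positives come from adjacent shreds in
    ground-truth order, negatives from non-adjacent (or swapped) pairs."""
    rng = np.random.default_rng(seed)
    half = SAMPLE_SIZE // 2

    def crop(left, right, y):
        # Join the rightmost columns of the left shred with the
        # leftmost columns of the right shred.
        return np.hstack([left[y:y + SAMPLE_SIZE, -half:],
                          right[y:y + SAMPLE_SIZE, :half]])

    n = len(shreds)
    positives, negatives = [], []
    pairs = [(i, i + 1, True) for i in range(n - 1)]           # adjacent: positive
    pairs += [(i, j, False) for i in range(n) for j in range(n)
              if j not in (i, i + 1)]                           # otherwise: negative
    rng.shuffle(pairs)  # shuffle first to sample different regions
    for i, j, positive in pairs:
        h = min(shreds[i].shape[0], shreds[j].shape[0])
        for y in range(0, h - SAMPLE_SIZE, stride):             # every `stride` pixels
            sample = crop(shreds[i], shreds[j], y)
            if sample.mean() < min_info:  # discard near-blank (ambiguous) samples
                continue
            (positives if positive else negatives).append(sample)
        if len(positives) >= max_pos:
            break
    k = min(len(positives), len(negatives))  # balance the two classes
    return positives[:k], negatives[:k]
```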
3.1.3 Model Training
At this point, two balanced sets (positive and negative) of samples of pixels are available for training the deep learning model. The SqueezeNet iandola2016squeezenet architecture, pre-trained on (RGB) ImageNet images (the input size reported in iandola2016squeezenet seems to be a typo, since a different size is used in the official implementation: https://github.com/forresti/SqueezeNet), was chosen because it has been shown to be efficient for the classification task, i.e., it can achieve good performance with considerably fewer parameters, and due to its fully convolutional structure, which is particularly interesting at inference time, as further discussed in Section 3.2. More specifically, the vanilla (i.e., no bypass connections) SqueezeNet v1.1 implementation was adopted, a modification of the original SqueezeNet with similar classification accuracy but 2.4 times less computational effort (https://github.com/forresti/SqueezeNet/tree/master/SqueezeNet_v1.1).
Since SqueezeNet is fully convolutional, it can be fed with images whose dimensions differ from the original input size. Therefore, training with the extracted samples does not require any further architectural modification. To leverage the ImageNet pre-training, the binary samples were replicated to the three channels of the network instead of reducing the network’s input to a single channel. The number of filters in the last convolutional layer was reduced from 1,000 (ImageNet’s number of classes) to two in order to match the positive and negative classes, and the weights of this layer were initialized from a zero-mean Gaussian distribution, as done in the original SqueezeNet implementation.
From the entire database, of the documents (random selection) were designated for training, and for the validation of the model. Therefore, samples of the same document are used exclusively either to train or validate the model. With the architecture properly adjusted for the problem and the weights initialized, the training can begin. The model was trained during epochs in mini-batches of images using the Adam optimizer with default settings kingma2014adam and the categorical cross-entropy loss. The classification accuracy was measured on the validation set at the end of each epoch, and the epoch that yielded the highest accuracy was chosen to determine the “best” model, i.e., the model deployed for compatibility evaluation.
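Two of these steps, replicating each binary sample across three channels and keeping the epoch with the highest validation accuracy, can be sketched as below (a simplified illustration, not the actual training loop):

```python
import numpy as np

def to_three_channels(sample):
    """Replicate a single-channel binary sample (H, W) to (H, W, 3) so a
    network pre-trained on RGB images can be reused unchanged."""
    return np.repeat(sample[:, :, None], 3, axis=2)

def select_best_epoch(val_accuracies):
    """Return the (zero-based) epoch with the highest validation accuracy;
    that epoch's weights define the deployed model."""
    return int(np.argmax(val_accuracies))
```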
3.2 Reconstruction of Mixed Shredded Documents
The system described in Figure 1 (bottom pipeline) assumes that the documents were previously fragmented by a paper shredder and that the resulting shreds were scanned and stored as separate image files on disk (the semi-automatic segmentation process adopted by our group is detailed in paixaotifs2019 ). After loading the data, the shreds are also binarized with Sauvola’s algorithm sauvola2000adaptive since the model was trained on binary samples. Subsequently, as recommended in prandtstetter2008 , blank shreds (i.e., those without black pixels) are discarded since they increase processing overhead without providing relevant information to the forensic examiners. Then, the trained neural model is applied for pairwise compatibility evaluation of the remaining (non-blank) shreds. These compatibility values (arranged as a matrix) are the input for the optimization search of the reconstruction problem solution: the permutation of shreds that (ideally) reassembles the original documents. Both the compatibility evaluation and the optimization search are discussed in the next subsections.
3.2.1 Pairwise Compatibility Evaluation
Given a set S = {s_1, s_2, ..., s_n} of non-blank shreds, resulting from mixing the shredded documents to be reconstructed, an ordered pair (s_i, s_j) denotes s_i placed to the left of s_j. The goal of this stage is to estimate a compatibility value for every ordered pair, which can be arranged in a square matrix C whose entry c_ij matches the compatibility for (s_i, s_j). In other words, c_ij quantifies how likely s_j is the right neighbor of s_i in the original document. The estimation of c_ij is focused on the regions around the touching edges of s_i and s_j (see Figure 3(a)). The rightmost pixels of each row of s_i are joined (at the left) with the leftmost pixels of s_j, giving rise to a rectified image whose height is the minimum height of both shreds. The rectified image carries the information to be evaluated by the trained model. To account for vertical misalignment, different images are derived from the rectified image by vertically shifting its right part (blue area) over a symmetric range of displacements around zero. Each of these images is denoted by x_d, the subscript d indicating the vertical shift, so one image per displacement value should be evaluated. Only the center rows are considered in the computation, as illustrated in Figure 3(b).
For faster inference, the derived images are bundled into a single batch and processed by the deployed neural network. Since the SqueezeNet architecture is fully convolutional and was trained on small square samples, inference on a taller image is equivalent to sliding the trained network vertically across the input image with an implicit stride, as illustrated in Figure 4. Note that inference on a sample-size input produces a pair of feature values (positive and negative), whereas inference on a taller image produces a feature map. After global average pooling, the map is reduced to a pair of positive/negative logits from which probabilities are obtained via softmax. The compatibility c_ij is then set to the highest positive probability over all vertical shifts. More formally,

c_ij = max_d softmax(f(x_d))_+,

where f(x_d) represents the network’s logits output given the image x_d, and softmax(f(x_d))_+ the positive probability computed by the softmax function on f(x_d).
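The shift-and-evaluate procedure can be sketched as below. `logits_fn` is a hypothetical stand-in for the trained fully convolutional network (assumed here to already include the global average pooling, returning a positive/negative logit pair), `max_shift` and `center_rows` replace the elided default values, and a wrap-around shift is used for simplicity.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def pairwise_compatibility(left, right, logits_fn, max_shift=10, center_rows=64):
    """Estimate the compatibility c_ij for the ordered pair (left, right)."""
    h = min(left.shape[0], right.shape[0])
    l, r = left[:h, -16:], right[:h, :16]     # columns around the cutting section
    best = 0.0
    for d in range(-max_shift, max_shift + 1):
        shifted = np.roll(r, d, axis=0)       # vertical shift (wrap-around here)
        x = np.hstack([l, shifted])           # rectified image
        mid = h // 2
        half = min(center_rows, h) // 2
        x = x[mid - half:mid + half]          # keep only the center rows
        pos_prob = softmax(np.asarray(logits_fn(x), dtype=float))[0]
        best = max(best, pos_prob)            # c_ij = highest positive probability
    return best
```

With a real model, `logits_fn` would be one batched forward pass over all shifted images rather than a Python loop.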
3.2.2 Optimization Search
The solution of the reconstruction problem is represented by a sequence of shreds (s_π(1), s_π(2), ..., s_π(n)), where π is a permutation of the indices 1, 2, ..., n. The goal of this stage is to estimate such a permutation from the previously computed compatibilities. To accomplish this, we adopted the graph-based optimization model proposed by our group in paixaotifs2019 , which is closely related to morandell2008 ; prandtstetter2008 . The central idea is that the desired solution (permutation) is given by solving the Minimum-cost Hamiltonian Path Problem (MCHPP) on a weighted directed graph that represents the shreds and the association costs (incompatibilities) among them.

Given that MCHPP is a minimization problem, a distance (cost) matrix Q is first derived from the compatibility matrix C by complementing the compatibility values for the off-diagonal entries and assigning prohibitive costs to the diagonal elements of Q. The distance matrix can be viewed as a complete directed weighted graph G = (V, A, w), where a vertex v_i maps to a shred s_i, A is the set of arcs, and w is the weight function defined such that w(v_i, v_j) = q_ij. As shown in paixaotifs2019 ; paixao2018deep , the solving formulation consists in first converting the MCHPP into an Asymmetric Traveling Salesman Problem (ATSP) by adding a “dummy” vertex and connecting it to all other vertices with zero-weight arcs. ATSP is solved exactly as proposed in paixaotifs2019 : in summary, ATSP is reduced to the (Symmetric) Traveling Salesman Problem (TSP) by the two-node transformation proposed by Jonker and Volgenant jonker1983 , and the TSP solution is provided by an exact solver (the Concorde software applegate2003 configured with QSOpt: http://www.dii.uchile.cl/~daespino/QSoptExact_doc/main.html).
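A toy sketch of this formulation is given below, assuming the cost is the complement of the compatibility (the exact cost definition is elided here) and using brute-force enumeration in place of the exact Concorde solver, which is feasible only for very small instances. The dummy-vertex trick is noted in a comment: a tour visiting the dummy vertex with zero-weight arcs costs exactly the Hamiltonian path it contains.

```python
import itertools

import numpy as np

def solve_reconstruction(C):
    """Recover a shred permutation from a compatibility matrix C (n x n).

    Costs are taken as 1 - C[i, j] (an assumed complement; diagonal entries
    are never used). A dummy vertex with zero-cost arcs to all shreds would
    turn the minimum-cost Hamiltonian path problem into an ATSP: the tour
    dummy -> p1 -> ... -> pn -> dummy costs exactly the path p1..pn.
    Here the minimum-cost path is found by brute force instead.
    """
    n = C.shape[0]
    Q = 1.0 - C
    best_perm, best_cost = None, float("inf")
    for perm in itertools.permutations(range(n)):
        cost = sum(Q[perm[t], perm[t + 1]] for t in range(n - 1))
        if cost < best_cost:
            best_perm, best_cost = list(perm), cost
    return best_perm
```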
4 Experimental Methodology
The general purpose of the experiments is to evaluate the reconstruction accuracy by mixing different quantities of single-page shredded documents (hereafter referred to as documents for simplicity) following an incremental strategy. Besides the two evaluation datasets used in paixaotifs2019 and paixao2018deep , a new collection with 100 documents was assembled specifically for this investigation.
The experiments were divided into three main parts. First, the proposed method was evaluated in its default configuration. Then, an ablation study was conducted to assess the sensitivity of our method with respect to three key parameters. The last part is a comparative evaluation against state-of-the-art methods available in the literature. The following sections describe, respectively, the datasets and the performance metric used for quality assessment, the conducted experiments, and, finally, the computational platform on which the experiments were carried out.
4.1 Training Datasets
As stated in Section 3.1, the training of the model for compatibility evaluation should take, as input, any document collection. In fact, different training datasets should be provided to enable cross-database evaluation. In this work, two collections of scanned documents were used (one at a time) to extract training samples: Isri-OCR and Cdip.
In this paper, Isri-OCR refers to a subset of the ISRI-Tk OCR collection nartker2005 used in paixao2018deep , which includes binary documents (originally scanned at dpi) labeled as reports, business letters, or legal documents. The structure of these documents has a high degree of similarity, generally focusing on running text at the expense of graphical elements (i.e., pictures, tables, graphs).
The Cdip dataset comprises documents from the RVL-CDIP collection harley2015icdar , with documents from each of the following classes: form, email, handwritten, news article, budget, invoice, questionnaire, resume, and memo. In summary, this dataset is a more diverse collection of documents. The documents were chosen arbitrarily, except for the restriction that they should present textual content at some level. Since RVL-CDIP is a subset of the IIT-CDIP Test Collection 1.0 lewis2006building but in lower resolution, we decided to use the corresponding dpi images from the original IIT-CDIP dataset (this resolution matches that of the evaluation datasets).
4.2 Evaluation Datasets
Three datasets were used to evaluate the methods: S-Marques, S-Isri-OCR, and S-Cdip (the last is a contribution of this work). The “S-” prefix stands for mechanically “shredded” and was used to differentiate from the training datasets, which comprise the original (unshredded) documents.
S-Marques refers to the text documents in Portuguese of the strip-shredded dataset produced by Marques and Freitas marques2013 . To create this dataset, the authors collected paper documents (a digital backup is also available along with the dataset) and shredded them using a Cadence FRG712 strip-cut machine. The resulting shreds were scanned at dpi and then separated into JPEG files (one for each shred). Compared to the other datasets, as shown in Figure 5, S-Marques’s shreds have a more uniform shape and are less damaged by the shredder’s blades, i.e., they are less curved and their borders are less corrupted (smoother serrated effect).
The S-Isri-OCR dataset was produced in the previous work of our research group paixaotifs2019 ; paixao2018deep from a set of business letters and legal reports of the ISRI-Tk OCR collection, the same set used in andalo2017 to assess the reconstruction of simulated-shredded documents. The digital documents were printed onto A4 paper and subsequently submitted to a Leadership 7348 strip-cut paper shredder. To expedite the acquisition process, the shreds were spliced onto high-contrast paper, and, after scanning (at dpi), they were segmented and stored individually as JPEG files. This process is detailed in paixaotifs2019 .
The S-Cdip dataset, which is a particular contribution of this work, is the shredded version of the digital documents in Cdip. The same methodology used to create S-Isri-OCR was adopted for this dataset. As illustrated in Figure 5, the shreds of S-Isri-OCR and S-Cdip present a higher degree of vertical misalignment than those of S-Marques, as well as more damage at their extremities.
4.3 Multi-reconstruction Accuracy Metric
The reconstruction quality for strip-shredded documents is often measured as the proportion of matching shreds in the reconstruction solution prandtstetter2008 ; morandell2008 ; andalo2017 ; paixao2018deep ; paixaotifs2019 , i.e., the number of pairs of adjacent shreds in the estimated solution that are also adjacent in the original document. For multi-reconstruction, nevertheless, the solution includes shreds from different documents, and the order in which these documents appear is not relevant. To account for this, we consider that, in a multi-reconstruction solution, the last shred of a document matches the first shred of any other document.
For a more formal definition, let $D_i = \{s^{i}_{1}, s^{i}_{2}, \dots, s^{i}_{n_i}\}$ denote the set of shreds (indices indicating the ground-truth order) of the $i$-th individual document among the $k$ to be reconstructed. From the notation introduced in Section 3.2.1, it follows that $\mathcal{S} = \bigcup_{i=1}^{k} D_i$ and $N = |\mathcal{S}| = \sum_{i=1}^{k} n_i$. A solution for the reconstruction problem is given by a bijective mapping of the documents' shreds into positions, i.e., $\pi : \mathcal{S} \rightarrow \{1, 2, \dots, N\}$. For multi-reconstruction, a matching pair of shreds in a solution has the form $(s^{i}_{j}, s^{i}_{j+1})$ (intra-document) or $(s^{i}_{n_i}, s^{i'}_{1})$ (inter-documents), where $1 \leq j < n_i$ and $i \neq i'$. Therefore, following these two matching criteria, accuracy can be calculated as

$$\mathrm{acc}(\pi) = \frac{1}{N-1} \sum_{p=1}^{N-1} \mathbb{1}\left[\big(\pi^{-1}(p), \pi^{-1}(p+1)\big) \text{ is a matching pair}\right],$$

where $\mathbb{1}[\cdot]$ is the 0-1 indicator function. Note that accuracy ranges in the interval $[0, 1]$, where 0 implies a fully disordered reconstruction, and 1 is achieved only by a perfect reconstruction.
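As a concrete illustration of the metric, the sketch below computes multi-reconstruction accuracy from a solution given as an ordered list of (document, shred-index) pairs; the function and argument names are hypothetical, not taken from the released code.

```python
def multi_reconstruction_accuracy(solution, doc_sizes):
    """Fraction of adjacent shred pairs in `solution` that match.

    solution  -- list of (doc_id, shred_index) tuples in the estimated order,
                 with shred_index following the ground-truth order (0-based)
    doc_sizes -- dict mapping doc_id -> number of shreds of that document

    A pair matches if it is consecutive within one document (intra-document),
    or if it joins the last shred of one document to the first shred of
    another (inter-documents); the order of the documents is irrelevant.
    """
    matches = 0
    for (d1, j1), (d2, j2) in zip(solution, solution[1:]):
        intra = (d1 == d2) and (j2 == j1 + 1)
        inter = (d1 != d2) and (j1 == doc_sizes[d1] - 1) and (j2 == 0)
        matches += intra or inter
    return matches / (len(solution) - 1)
```

For a two-document instance with two shreds each, the perfect solution `[(0,0), (0,1), (1,0), (1,1)]` scores 1.0, and so does the one with the documents swapped, since inter-document order does not matter.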
4.4 Experimental Protocol
In the preliminary work paixao2018deep , the experiments were not cross-database, since the documents of S-Isri-OCR were reconstructed with a model trained on documents of the ISRI-OCR Tk collection. In practice, such experiments assume the availability of training data that share significant appearance and structural similarities with the test data. For a more realistic scenario, the experiments in this paper followed a cross-database protocol in which testing on S-Cdip (the dataset produced in this work) leverages the model trained on Isri-OCR, and testing on S-Isri-OCR and S-Marques uses the model trained on Cdip.
The evaluation is performed in an incremental way so that new shredded documents are gradually introduced to the reconstruction instance. For the sake of notation, let $k$ denote the number of mixed documents in a particular instance. The main purpose of the incremental approach is to evaluate whether the reconstruction accuracy degrades as $k$ increases. Due to the processing burden of this type of experiment, the ablation study introduces documents incrementally in small fixed-size steps, while the other two experiments also include $k = n$, where $n$ denotes the size of the current evaluation dataset (as described in Section 4.1). For each value of $k$, a set of $k$-size instances (i.e., of $k$ mixed documents) should be sampled. Note that the size of the sample space varies significantly over $k$: the number of possible combinations of a dataset's documents grows combinatorially with $k$. Instead of independently sampling combinations, the test instances are assembled in such a way that each $k$-size instance is obtained by adding a single document to an instance of size $k-1$. For a better description, let $d_1, d_2, \dots, d_n$ be a random sequence of the shredded documents of a particular dataset. The test instances for a particular $k$ consist of all windows of $k$ consecutive documents, i.e., $\{d_1, \dots, d_k\}$, $\{d_2, \dots, d_{k+1}\}$, and so on, until $\{d_{n-k+1}, \dots, d_n\}$. Note that this yields overlapping of test instances for the same $k$, as well as across different values of $k$.
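The windowing scheme above can be sketched as follows; the names are illustrative, and the actual protocol may differ in how the random sequence of documents is drawn.

```python
import random

def incremental_instances(documents, k, seed=0):
    """Build all k-size test instances as consecutive windows over a random
    permutation of the documents, so that each k-instance extends a
    (k-1)-instance by a single document, as in the evaluation protocol.
    Windows overlap for the same k and across values of k."""
    rng = random.Random(seed)
    seq = list(documents)
    rng.shuffle(seq)  # the random sequence d_1, d_2, ..., d_n
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]
```

For a dataset of n documents this yields n - k + 1 instances per value of k, with consecutive windows sharing k - 1 documents.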
In the first experiment, the incremental procedure was used to assess the robustness of the proposed method under its default parameter configuration: the sampling parameters defined in Section 3.1.2 and the vertical shift range defined in Section 3.2.1.
For the second experiment, an ablation study was carried out to evaluate the sensitivity of the system with respect to the aforementioned parameters, one at a time: each parameter was varied over a range of values while the others were kept at their defaults. The investigation of the first parameter aims to verify the system's robustness with respect to the amount of information contained in the samples, which can vary with font type and size. The desirable behavior is that the average accuracy holds over the widest possible range of values. The motivation behind the analysis of the sample sizes is to confirm whether the locality assumption holds. Notice that training with narrower samples requires adjusting the input window accordingly in the pairwise compatibility evaluation stage (Section 3.2.1). Finally, the analysis of the vertical shift range evaluates the need for (vertically) aligning the shreds at test time, since no image processing was previously applied to this intent.
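To make the role of the vertical shift range concrete, the sketch below scores a pair of shred border strips under every shift in the allowed range and keeps the best; the scoring function here (negative mean squared difference) is only a stand-in for the learned CNN compatibility score, and all names are illustrative.

```python
import numpy as np

def best_shift_score(right_strip, left_strip, vshift):
    """Evaluate the compatibility of two border strips under vertical
    shifts in [-vshift, vshift] and keep the best score.

    right_strip -- right border of the left-hand shred, shape (h, w)
    left_strip  -- left border of the right-hand shred, shape (h, w)
    """
    h = right_strip.shape[0]
    best, best_dy = -np.inf, 0
    for dy in range(-vshift, vshift + 1):
        # crop both strips to the rows that overlap after shifting by dy
        a = right_strip[max(0, dy):h + min(0, dy)]
        b = left_strip[max(0, -dy):h + min(0, -dy)]
        score = -np.mean((a.astype(float) - b.astype(float)) ** 2)
        if score > best:
            best, best_dy = score, dy
    return best, best_dy
```

With a learned model, the same loop applies: the pairwise score becomes the maximum network response over the candidate shifts.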
The final experiment compares our method (referred to as Proposed) against three relevant methods from the literature. The first, referred to as Paixão, is a character shape-based method developed in previous work paixaotifs2019 . The original implementation, intended for single-document reconstruction, caches shape dissimilarities to improve time efficiency, which, on the other hand, compromises memory scalability for multi-document reconstruction. Therefore, reconstruction with this method was limited to a reduced number of documents. The second method is the one proposed by Liang and Li liang2019reassembling , which is referred to as Liang. Due to the long running time of the provided implementation (https://github.com/xmlyqing00/DocReassembly), the multi-reconstruction experiment was run only for the datasets S-Marques and S-Isri-OCR, also limited to a reduced number of documents. We adopted the parameters used for the real-shredded instances 1 and 2 (out of 3) in the original work. For consistency, we configured the OCR software on which Liang relies to the Portuguese language when testing on S-Marques. The last method, referred to as Marques marques2013 , relies on edge pixel dissimilarity for compatibility evaluation and was chosen due to its superior performance compared to other methods in the literature, as shown in paixao2018deep ; paixaotifs2019 . While Paixão and Proposed share the same optimization formulation, Marques uses a simple greedy nearest-neighbor approach. Thus, for a fairer comparison, our system was also evaluated with Marques' optimization model to emphasize the role of compatibility evaluation in producing accurate reconstructions. This modified method is referred to as Proposed-NN.
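For reference, the greedy nearest-neighbor composition used by Marques (and adopted in Proposed-NN) can be sketched as follows, assuming a precomputed pairwise compatibility matrix in which C[i, j] scores placing shred j immediately to the right of shred i; this is an illustrative reimplementation, not the authors' code.

```python
import numpy as np

def greedy_nearest_neighbor(C, start=0):
    """Greedy composition from a pairwise compatibility matrix.

    C[i, j] is the (higher-is-better) compatibility of placing shred j
    immediately to the right of shred i. Starting from `start`, the
    best-matching unused shred is appended repeatedly until all shreds
    are placed. Unlike a global TSP solver such as Concorde, each choice
    is locally optimal and early mistakes propagate."""
    n = C.shape[0]
    order = [start]
    unused = set(range(n)) - {start}
    while unused:
        last = order[-1]
        nxt = max(unused, key=lambda j: C[last, j])
        order.append(nxt)
        unused.remove(nxt)
    return order
```

This contrast is exactly what Proposed-NN isolates: the same compatibility scores, but a weaker optimizer.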
4.5 Experimental Platform
The experiments were carried out on two machines: (M1) an Amazon AWS instance with 8 vCPUs (2.3GHz), 60GB RAM, and an NVIDIA Tesla V100 GPU (16GB); (M2) an Intel Xeon E7-4850 v4 (2.10GHz) with 128 vCPUs and 252GB RAM. The ablation study was fully performed on M1. The methods Liang, Paixão, and Marques, which do not require GPU processing, were run on M2. As Liang leverages OpenMP (www.openmp.org) directives to improve efficiency, we used 240 threads (120 vCPUs) in the experiments. For Proposed/Proposed-NN, the compatibility evaluation in experiments 1 and 3 was carried out on M1, while the optimization process was performed on M2 due to its larger memory resources. The proposed system was implemented in Python with TensorFlow for training and inference, and with OpenCV for image processing. The code, pre-trained models, and datasets are publicly available at https://github.com/thiagopx/deeprec-pr20.
5 Results and Discussion
5.1 Proposed Method
Figure 6 shows the multi-reconstruction accuracy (mean and confidence interval) obtained with the proposed method (default parameters) for the three evaluation datasets. Overall, the proposed method performed well on all three datasets and, comparatively, S-Cdip proved, as expected, to be the most challenging test collection (an example of reconstruction is shown in Figure 7). The confidence interval tends to be wider when fewer documents are available, which is the case for S-Isri-OCR. Furthermore, the accuracy tends to stabilize as the number of documents grows, which means that the insertion of new documents into the reconstruction instance does not degrade accuracy, even though it considerably increases the complexity of the problem.
Breaking down the performance on the S-Cdip dataset, Figure 8 shows accuracy boxplots for single-reconstruction (one document per instance) across the dataset categories. A large share of the documents were perfectly reconstructed, and only a few had low accuracy. Remarkably, the reconstruction of handwritten documents achieved high accuracy, although no handwritten document was used to train the compatibility evaluation model. For documents of the form type, all reconstructions achieved high accuracy, several of them perfect. The results for the handwritten and form categories show that learning is not restricted to the symbolic level, and that lower-level features (e.g., strokes and horizontal lines) can also be learned by the model.
As seen in Figure 8, the letter category has a larger variability in comparison to the others. In this category, there are three documents with very small fonts whose shreds have degraded borders beyond the regular corruption found in most shreds. Although the accuracy for these three documents is low, such values are not low enough to be considered outliers, which explains the elongated aspect of the letter category's boxplot. The poor outlier performance, more evident in the budget and email categories, is mainly caused by three factors that may occur in combination or separately: (i) low quality of text symbols (i.e., low resolution, corrupted data), (ii) large flat areas (i.e., low amount of information), or (iii) large areas covered by patterns not learned by the trained model. These challenging factors are illustrated in Figure 9. The shreds were placed side-by-side in the ground-truth order, and the activation maps from SqueezeNet's last convolutional layer were adjusted and superimposed on the shreds' boundary zones. Green areas represent a high degree of compatibility, while red ones represent the opposite. Neutral zones are usually gray, indicating a balance between the positive and negative classes. Nonetheless, reddish areas can be noticed for neutral zones in Figure 9a due to bias, or caused by noise (small black regions) in the highlighted areas close to the borders.
In the first case (Figure 9a), an email document with large blank areas and corrupted characters was reconstructed with reduced accuracy. Due to the low amount of information, the compatibility evaluation and, as a consequence, the reconstruction accuracy are more sensitive to corrupted data. The second document (Figure 9b) is a budget with a large area covered by a grid pattern, for which the obtained accuracy was also low. Unlike the horizontal lines, which are captured by the model, the vertical lines lead to erroneous evaluations. This is justified by the scarcity of such patterns in the training set, which comprises images from Isri-OCR. By restricting the shreds to their first rows (orange highlighted region in Figure 9), the reconstruction accuracy increases substantially. Although the aforementioned cases yielded low-accuracy reconstructions, it does not mean that the same behavior will invariably be observed for documents with similar layout/features. The reconstruction quality also depends on where the cuts take place. In S-Cdip, for example, there are an email and a budget document visually similar to those in Figure 9 for which the accuracy was considerably higher.
5.2 Ablation Study
The results for the three investigated parameters are summarized in Figures 10, 11, and 13. Figure 10 shows the accuracy sensitivity with respect to the first parameter. Ideally, the system is expected to be robust to changes in this parameter. From the results, a wider variation range can be observed when few documents are mixed. The performance difference becomes less noticeable as the number of documents increases, which represents a more realistic scenario for the reconstruction application.
Figure 11 shows the impact of the size of the training samples on the final reconstruction performance. In general, the system generalizes better across the datasets for samples with reduced width. By keeping the samples narrow, visual ambiguity (illustrated in Figure 12) can be exploited in the compatibility evaluation of scarce/unseen patterns in the training data. For instance, the model can perceive a “wo” association as valid (as in “world”) if samples with “vo” (as in “voxel”, “volume”, and “reservoir”) were observed during training. The results for the widest samples were competitive in terms of accuracy on S-Isri-OCR, where the documents have primarily textual content. However, the performance dropped significantly for documents with a higher density of graphical elements (e.g., forms and budgets), present in S-Marques and S-Cdip.
Finally, Figure 13 shows the influence of the vertical shift range parameter on the reconstruction performance. In practice, no sensitivity to this parameter was observed for S-Marques, since the shreds of this collection are (practically) vertically aligned, as exemplified in Figure 5. In contrast, the results on the S-Isri-OCR and S-Cdip datasets, which better depict real-world conditions, show the relevance of properly treating the misalignment between shreds. The misalignment degree is higher for S-Cdip, which explains the consistent accuracy improvement as the shift range increases.
5.3 Comparative Evaluation
Figure 14 shows the comparative performance against the literature. The average accuracy of the proposed method using Concorde (Proposed) was consistently superior to that of the compared methods. Additionally, it demonstrated greater robustness, which is mainly evidenced by the stability of the accuracy curve as the number of documents increases.
Unlike the proposed method, the modified version (Proposed-NN) – intended for comparison with Marques – presented a decay in accuracy as the number of documents increases. Nevertheless, it greatly outperformed Marques, which uses the same optimization approach, and Paixão, which leverages Concorde. In fact, Marques struggles with black-and-white documents since it is based on color features. Moreover, it is very sensitive to the damage to the shreds' borders caused by the mechanical fragmentation process, and to the vertical misalignment of the shreds. Both issues are accentuated in the S-Isri-OCR and S-Cdip datasets, resulting in a significant drop in performance when compared to S-Marques. It can also be observed that the accuracy of Paixão degrades more sharply for S-Cdip, which is explained by the large presence of pictorial elements (as depicted in Figure 9b), and also by a greater diversity of symbols in different font types, sizes, and styles (including handwritten characters). When mixing documents, such diversity becomes a critical factor since Paixão assumes a fixed-size alphabet in which each symbol has a unique representative. For single-reconstruction (one document per instance), Liang was capable of reconstructing 7 pages of S-Marques (out of a total of 60) with high accuracy. These instances have a great concentration of text and no pictorial content. Nonetheless, the average accuracy considering all 60 pages was substantially lower, with a sharp decay as the number of documents increases. The observed decay corroborates the scalability issue raised by the authors and mentioned in Section 2. Like Marques, the accuracy was dramatically worse for S-Isri-OCR than for S-Marques. This is because Liang strongly relies on the OCR capability of recognizing full words on compositions of shreds (visually similar to Figure 7), and such capability is substantially affected by geometric distortion between shreds.
Besides reaching better accuracy, the proposed approach is also remarkably more scalable in terms of time performance than Paixão and Liang, as seen in Figure 15. Time scalability is a critical issue in real scenarios because far more than 5 shredded pages are expected to be reconstructed. Note, in particular, that the time performance of Liang was the worst even leveraging heavy parallelism (240 threads). This is due to the overhead introduced by successive calls to the OCR software, which is the core of the method. Conversely, Marques is very time efficient but, as shown in Figure 14, it delivered low-accuracy reconstructions. Nonetheless, Marques' time performance can serve as a lower bound of efficiency for future optimizations of the proposed method.
6 Conclusion
This paper addressed the reconstruction of mixed text documents, focusing on the central problem of evaluating the compatibility between shreds. The proposed segmentation-free deep learning approach enabled faster and more robust reconstruction of strip-shredded documents in more realistic scenarios. It also has the benefit of self-supervised learning, which facilitates scaling the training data.
To enable a better and more extensive evaluation, we introduced a new dataset containing 100 mechanically-shredded documents with diverse layouts. Despite the challenging scenarios, real-world cross-database experiments showed that our method achieved high average accuracy for different quantities of mixed documents. Nevertheless, the absence or scarcity of some patterns may hamper the proper reconstruction of the documents. A possible way to address this problem is fine-tuning the model with samples from the inner region of the shreds belonging to the test documents themselves.
The ablation study evidenced that small and local samples are more effective for learning the compatibility between shreds. It is important, though, to consider this result in view of the limited diversity of the training data, produced from relatively few documents in the context of deep learning. Additionally, the study showed the relevance of treating the misalignment between shreds at test time. An alternative approach to this issue is augmenting the training data by simulating vertical misalignment. This would save processing time during the online reconstruction stage, but could increase the complexity of the problem.
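The augmentation alternative mentioned above could be sketched as follows, where a random vertical offset is applied to one half of each training sample pair and both halves are cropped to their overlapping rows; this is an assumed scheme for illustration, not the paper's pipeline, and all names are hypothetical.

```python
import numpy as np

def augment_vertical_shift(left_half, right_half, max_shift, rng):
    """Simulate shred misalignment for a training sample pair.

    The right half is conceptually shifted by a random vertical offset
    dy drawn uniformly from [-max_shift, max_shift]; both halves are
    then cropped to the rows that still overlap, so the pair mimics
    misaligned shreds without any test-time alignment step."""
    dy = rng.integers(-max_shift, max_shift + 1)
    h = left_half.shape[0]
    a = left_half[max(0, dy):h + min(0, dy)]
    b = right_half[max(0, -dy):h + min(0, -dy)]
    return a, b, dy
```

Training on such pairs would let the model absorb the misalignment, trading a larger (harder) training distribution for a cheaper on-line stage.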
Comparative experiments showed that the accuracy of the proposed method (even in the modified version) was superior to the current state-of-the-art. When compared to Paixão paixaotifs2019 , for instance, our method generalized better to documents with a more diverse layout and appearance, and also scaled more time-efficiently in the multi-document scenario. Furthermore, the time savings of Marques marques2013 (based on the naive dissimilarity between edge pixels) come at the price of low reconstruction accuracy. Finally, the recently published Liang method liang2019reassembling performed significantly worse than the proposed method in terms of accuracy, in addition to exhibiting limited time scalability in real-world scenarios comprising several documents.
In addition to the aforementioned directions, there are still other open problems to be addressed in our future research. First, reconstruction should be extended to even more realistic scenarios in which some shreds are missing, rotated, and/or significantly damaged (e.g., wet, torn, or wrinkled). Second, we will also investigate the use of deep learning to extract text information (e.g., OCR, text/word spotting) from the reconstructed documents.
Conflict of interest
The authors declare that they have no conflict of interest.
Acknowledgments
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. Cloud computing resources were provided by the AWS Cloud Credits for Research program. We thank the NVIDIA Corporation for the donation of a Titan Xp GPU used in this research. We also acknowledge the Productivity on Research scholarships (grants 311120/2016-4 and 311504/2017-5) supported by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq, Brazil).
References
E.-J. Cheng, K.-P. Chou, S. Rajora, B.-H. Jin, M. Tanveer, C.-T. Lin, K.-Y. Young, W.-C. Lin, M. Prasad, Deep sparse representation classifier for facial recognition and detection system, Pattern Recognit. Letters 125 (2019) 71–77.
O. Gómez, O. Ibáñez, A. Valsecchi, O. Cordón, T. Kahana, 3D-2D silhouette-based image registration for comparative radiography-based forensic identification, Pattern Recognit. 83 (2018) 469–480.
Y. Li, X. Yang, P. Sun, H. Qi, S. Lyu, Celeb-DF: A Large-Scale Challenging Dataset for DeepFake Forensics, in: Conf. Comput. Vision and Pattern Recognit., 2020.
R. Qureshi, M. Uzair, K. Khurshid, H. Yan, Hyperspectral document image processing: Applications, challenges and future prospects, Pattern Recognit. 90 (2019) 12–22.
S. He, L. Schomaker, Deep adaptive learning for writer identification based on single handwritten word images, Pattern Recognit. 88 (2019) 64–74.
A. Soleimani, B. N. Araabi, K. Fouladi, Deep multitask metric learning for offline signature verification, Pattern Recognit. Letters 80 (2016) 84–90.
R. Townshend, R. Bedi, P. Suriana, R. Dror, End-to-End Learning on 3D Protein Structure for Interface Prediction, in: Advances in Neural Information Processing Systems, 2019, pp. 15616–15625.
C. Ostertag, M. Beurton-Aimar, Matching ostraca fragments using a siamese neural network, Pattern Recognit. Letters 131 (2020) 336–340.
T. M. Paixão, M. C. S. Boeres, C. O. A. Freitas, T. Oliveira-Santos, Exploring Character Shapes for Unsupervised Reconstruction of Strip-shredded Text Documents, IEEE Trans. Inf. Forensics Secur. 14 (7) (2019) 1744–1754.
J. Perl, M. Diem, F. Kleber, R. Sablatnig, Strip shredded document reconstruction using optical character recognition, in: Int. Conf. on Imag. for Crime Detection and Prevention, 2011, pp. 1–6.
H. Badawy, E. Emary, M. Yassien, M. Fathi, Discrete grey wolf optimization for shredded document reconstruction, in: Int. Conf. on Adv. Intell. System and Information, 2018, pp. 284–293.
M. Marques, C. Freitas, Document decipherment-restoration: Strip-shredded document reconstruction based on color, IEEE Latin America Trans. 11 (6) (2013) 1359–1365.
Y.-J. Gong, Y.-F. Ge, J.-J. Li, J. Zhang, W. Ip, A splicing-driven memetic algorithm for reconstructing cross-cut shredded text documents, Applied Soft Computing 45 (2016) 163–172.
J. Chen, D. Ke, Z. Wang, Y. Liu, A high splicing accuracy solution to reconstruction of cross-cut shredded text document problem, Multimedia Tools and Appl. 77 (15) (2018) 19281–19300.
T. M. Paixão, R. F. Berriel, M. C. Boeres, C. Badue, A. F. De Souza, T. Oliveira-Santos, A deep learning-based compatibility score for reconstruction of strip-shredded text documents, in: Conf. on Graphics, Patterns and Images, 2018, pp. 87–94.
W. Morandell, Evaluation and reconstruction of strip-shredded text documents, Master's thesis, Institute of Computer Graphics and Algorithms, Vienna University of Technology (2008).
M. Prandtstetter, G. R. Raidl, Combining forces to reconstruct strip shredded text documents, in: Hybrid Metaheuristics, Springer, 2008, pp. 175–189.
A. Sleit, Y. Massad, M. Musaddaq, An alternative clustering approach for reconstructing cross cut shredded text documents, Telecommunication Systems 52 (3) (2013) 1491–1501.
T. Phienthrakul, T. Santitewagun, N. Hnoohom, A Linear Scoring Algorithm for Shredded Paper Reconstruction, in: Int. Conf. on Signal-Image Tech. & Internet-Based Syst., 2015, pp. 623–627.
P. De Smet, J. De Bock, W. Philips, Semi-automatic reconstruction of strip-shredded documents, in: SPIE Electronic Imaging, Vol. 5685, 2005, pp. 239–249.
A. Skeoch, An investigation into automated shredded document reconstruction using heuristic search algorithms, Ph.D. thesis, University of Bath, UK (2006).
G. Chen, J. Wu, C. Jia, Y. Zhang, A pipeline for reconstructing cross-shredded English document, in: IEEE Int. Conf. on Image, Vision and Computing, 2017, pp. 1034–1039.
J. Chen, M. Tian, X. Qi, W. Wang, Y. Liu, A solution to reconstruct cross-cut shredded text documents based on constrained seed K-means algorithm and ant colony algorithm, Expert Syst. with Appl. 127 (2019) 35–46.
D. Pomeranz, M. Shemesh, O. Ben-Shahar, A fully automated greedy square jigsaw puzzle solver, in: IEEE Conf. Comput. Vision and Pattern Recognit., 2011, pp. 9–16.
F. A. Andaló, G. Taubin, S. Goldenstein, PSQP: Puzzle solving by quadratic programming, IEEE Trans. on Pattern Anal. and Mach. Intell. 39 (2) (2017) 385–396.
T. A. Nartker, S. V. Rice, S. E. Lumos, Software tools and test data for research and testing of page-reading OCR systems, in: Electronic Imag., 2005, pp. 37–47.
J. Balme, Reconstruction of shredded documents in the absence of shape information, Tech. rep., Dept. of Computer Science, Yale University, USA (2007).
R. Ranca, A modular framework for the automatic reconstruction of shredded documents, in: Workshops 27th AAAI Conf. on Artif. Intell., 2013.
H.-Y. Lin, W.-C. Fan-Chiang, Reconstruction of shredded document based on image feature matching, Expert Syst. with Appl. 39 (3) (2012) 3324–3332.
D. Pöhler, R. Zimmermann, B. Widdecke, H. Zoberbier, J. Schneider, B. Nickolay, J. Krüger, Content representation and pairwise feature matching method for virtual reconstruction of shredded documents, in: Int. Symp. on Image and Signal Process. and Anal., 2015, pp. 143–148.
N. Xing, S. Shi, Y. Xing, Shreds Assembly Based on Character Stroke Feature, Procedia Comput. Sci. 116 (2017) 151–157.
N. Xing, J. Zhang, Graphical-character-based shredded Chinese document reconstruction, Multimedia Tools and Appl. 76 (10) (2017) 12871–12891.
Y. Liang, X. Li, Reassembling Shredded Document Stripes Using Word-path Metric and Greedy Composition Optimal Matching Solver, IEEE Trans. on Multimedia 22 (5) (2020) 1168–1181.
C. Le, X. Li, JigsawNet: Shredded Image Reassembly using Convolutional Neural Network and Loop-based Composition, arXiv preprint arXiv:1809.04137.
M.-M. Paumard, D. Picard, H. Tabia, Jigsaw Puzzle Solving Using Local Feature Co-Occurrences in Deep Neural Networks, in: IEEE Int. Conf. on Image Process., 2018, pp. 1018–1022.
C. Doersch, A. Gupta, A. A. Efros, Unsupervised visual representation learning by context prediction, in: IEEE Int. Conf. on Comput. Vision, 2015, pp. 1422–1430.
M. Noroozi, P. Favaro, Unsupervised learning of visual representations by solving jigsaw puzzles, in: European Conf. on Comput. Vision, 2016, pp. 69–84.
D. Sholomon, O. E. David, N. S. Netanyahu, DNN-Buddies: a deep neural network-based estimation metric for the jigsaw puzzle problem, in: Int. Conf. on Art. Neural Networks, 2016, pp. 170–178.
J. Sauvola, M. Pietikäinen, Adaptive document image binarization, Pattern Recognit. 33 (2) (2000) 225–236.
F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, K. Keutzer, SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and 0.5MB model size, arXiv preprint arXiv:1602.07360.
D. P. Kingma, J. Ba, Adam: A Method for Stochastic Optimization, in: Int. Conf. for Learn. Representations, 2015.
R. Jonker, T. Volgenant, Transforming asymmetric into symmetric traveling salesman problems, Operations Research Letters 2 (4) (1983) 161–163.
D. Applegate, R. Bixby, V. Chvatal, W. Cook, Concorde: A code for solving traveling salesman problems, http://www.math.uwaterloo.ca/tsp/concorde, accessed on: June 10, 2020 (2003).
A. W. Harley, A. Ufkes, K. G. Derpanis, Evaluation of deep convolutional nets for document image classification and retrieval, in: Proc. IEEE Int. Conf. on Document Analysis and Recognit., 2015, pp. 991–995.
D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, J. Heard, Building a test collection for complex document information processing, in: Conf. on Research and Develop. in Inf. Retrieval, 2006, pp. 665–666.