OSSR-PID: One-Shot Symbol Recognition in P&ID Sheets using Path Sampling and GCN

09/08/2021 ∙ by Shubham Paliwal, et al. ∙ Tata Consultancy Services

Piping and Instrumentation Diagrams (P&ID) are ubiquitous in several manufacturing, oil and gas enterprises for representing engineering schematics and equipment layout. There is an urgent need to extract and digitize information from P&IDs without the cost of annotating a varying set of symbols for each new use case. A robust one-shot learning approach for symbol recognition, i.e., localization followed by classification, would therefore go a long way towards this goal. Our method works by sampling pixels sequentially along the different contour boundaries in the image. These sampled points form paths which are used in the prototypical line diagram to construct a graph that captures the structure of the contours. Subsequently, the prototypical graphs are fed into a Dynamic Graph Convolutional Neural Network (DGCNN) which is trained to classify graphs into one of the given symbol classes. Further, we append embeddings from a Resnet-34 network, trained on symbol images of the sampled points, to make the classification network more robust. Since many symbols in P&IDs are structurally very similar to each other, we utilize the Arcface loss during DGCNN training, which helps maximize symbol class separability by producing highly discriminative embeddings. The symbols in a P&ID appear as components attached to pipelines (straight lines); the sampled points segregated around these symbol regions are used for the classification task. The proposed pipeline, named OSSR-PID, is fast and gives outstanding performance for recognition of symbols on a synthetic dataset of 100 P&ID sheets. We also compare our method against prior work on a real-world private dataset of 12 P&ID sheets and obtain comparable or superior results. Remarkably, it achieves such excellent performance using only one prototypical example per symbol.


I Introduction

Piping and instrumentation diagrams (P&ID) [10] are a standardized format for depicting detailed schematics about material flow, equipment components, and control devices in oil and gas, chemical manufacturing and underground utility companies. A P&ID is based on process flow diagrams which illustrate the piping schematics with instruments using line and graphical symbols, along with appropriate annotations. Presently, millions of such legacy P&IDs are stored in an unstructured image format, and the data trapped in these documents is required for inventory management, detailed design, procurement, and construction of a plant. Manually extracting information from such P&ID sheets is a tedious, time-consuming and error-prone process, as it requires users to interpret the meaning of different components such as pipelines, inlets, outlets and graphic symbols based on the annotations, and on the geometric and topological relationships of visual elements [1]. To mitigate these issues, there is an urgent need to automate the digitization of P&ID sheets to facilitate fast and robust extraction of information.

The major components of P&ID include symbols with field-specific meanings, pipelines representing connections between symbols, and textual attributes [3]. There exists a very limited body of work in the literature on the digitization of engineering drawing documents [6, 11, 2, 18, 8]. Early attempts used traditional image recognition techniques utilizing geometrical features of objects such as edge detection [15], the Hough transform [30] and morphological operations [25]. The authors in [31] proposed an approach where the system learns and stores the graphical representation of a symbol presented by the user; later, the system uses the acquired knowledge about the symbol for recognition. These earlier systems based on standard vision techniques were not powerful enough to address the challenges arising from noisy images, variations in text fonts, resolutions and minute visual differences among various symbols.

Fig. 1: Flowchart showing different modules of OSSR-PID proposed for Symbol Recognition in P&ID sheets.

Recently, deep learning techniques [12] have yielded significant improvements in accuracy for information extraction from P&IDs. Recent work by Rahul et al. [19] utilizes a combination of traditional vision techniques and state-of-the-art deep learning models to identify and isolate pipeline codes, pipelines, inlets and outlets, and to detect symbols. The authors proposed the use of Fully Convolutional Network (FCN) based segmentation for detection of symbols to address low inter-class variability and noisy textual elements present inside symbols. However, this requires significant volumes of annotated symbols for training the network and fails for rarely occurring symbols for which training data is sparse. Additionally, in practice, the symbol-set keeps changing and, ideally, the system should allow for the introduction of new symbols without manual annotation or complete re-training. In effect, given just one clean symbol image, we wish to detect all instances of the symbol in noisy and crowded P&ID diagrams.

To address these challenges, we propose to exploit the recent advances in few-shot/one-shot learning  [14, 23, 21, 5, 24]. Recent attempts at one-shot learning broadly adopt two approaches: 1) metric learning [26, 28, 24] which involves learning an embedding space to separate different classes or 2) a meta-training phase [20, 13] which involves training a meta learner on a collection of tasks from the same task distribution to discover a task-independent network initialization. However, such techniques require a significant number of training tasks for meta-training prior to being employed for one-shot recognition. We wish to avoid the meta-training step entirely, and to directly perform one-shot symbol recognition. Hence, we propose a method for recognition of symbols with just one training sample per class, which is represented as a graph with points sampled along the boundaries of different entities present in the P&ID and subsequently, utilize Dynamic Graph Convolutional networks [27] for symbol classification. In particular, we make the following contributions in this paper:

  • We propose a novel, fast and efficient one-shot symbol recognition method, OSSR-PID, for P&ID sheets utilizing only one prototypical example image per symbol class.

  • We propose a method to sample pixels along the boundaries of the different entities present in a P&ID image and construct a graph which captures the connectivity between different entities (instruments / symbols).

  • We utilize Dynamic Graph Convolutional Neural Networks (DGCNNs) [27] for symbol graph classification. We also augment the network with visual embeddings from Resnet-34 model trained on symbol-images of sampled points to make it more robust.

  • We utilize Arcface loss [4] for training of DGCNN network for the classification task as it optimizes feature embeddings to enforce higher similarity for intra-class symbols and diversity for inter-class symbols.

  • We evaluate OSSR-PID for symbol recognition on a synthetic dataset called Dataset-P&ID [16] (available at https://drive.google.com/drive/u/1/folders/1gMm_YKBZtXB3qUKUpI-LF1HE_MgzwfeR) containing 100 P&ID sheets in the test-set.

  • We also compare OSSR-PID against prior work by [19] on a private dataset of real-world P&IDs and achieve remarkable results.

The rest of the paper is organized as follows: Section II gives an overview of the problem that we solve in this paper. Details of the dynamic graph CNN, its definition and edge convolutions are given in Section III. Subsequently, we describe the approach adopted for one-shot symbol recognition in Section IV by explaining each step, namely image tracing, path sampling, symbol-region segregation and graph convolutional network based symbol classification, in Sections IV-A, IV-B, IV-C and IV-D respectively. Section V presents details of the dataset and the results of the experiments conducted, followed by a discussion. Finally, we conclude the paper in Section VI.

(a) Individual paths obtained are plotted in different colors.
(b) Different loops from each path, plotted in different colors.
Fig. 2: Figures illustrating the image-tracing output. (a) The different paths obtained, represented with different colors. (b) The different sequential loops (sampled points) obtained from each path, covering the outlines of image entities. Note that each path may contain multiple sequential loops over an image entity. As evident from the letter 'O' in "NOTE-23" in (a), the entire letter is traced using a single path (shown in green). A single sequential loop cannot cover both the interior and the exterior outline of the letter 'O'; thus, there are two separate sequential loops on the outline, shown in different colors in (b).

II Problem Formulation

This paper proposes a method to process scanned P&ID sheets and detect the different symbols (components) present in each sheet. Additionally, we wish to find connections between components and identify different pipelines. Identification of pipelines involves sampling points along the periphery of the dark (printed) regions of P&ID sheets. The regions forming straight lines (i.e., pipes) are then separated from the remaining points to obtain the probable symbol regions. Subsequently, the points present in these regions are classified using Dynamic Graph Convolutional Neural Networks (DGCNNs) [27] as belonging to one of the given symbol classes.

III Dynamic Graph Convolutional Neural Network

DGCNN [27] takes a given set of input data points and creates a local graph in which each data point is connected to its k nearest data points. The edges formed between the nearest neighbors are used for convolutional operations similar to GNNs [29]. As the name suggests, the graph created from the data is dynamic: every layer of the network uses a k-NN algorithm to obtain a new set of neighbors based upon each data point's embedding at the respective layer. Readers are referred to [17, 27] for more details.
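As a concrete illustration, the dynamic neighbourhood computation can be sketched as below. This is a minimal PyTorch sketch, not the authors' implementation; the function name knn_graph and the default k are illustrative assumptions.

```python
import torch

def knn_graph(x, k=20):
    """Recompute k-nearest-neighbour indices from the current embeddings.

    x: (batch, num_points, feat_dim) tensor of per-point embeddings.
    Returns a (batch, num_points, k) tensor of neighbour indices.
    """
    dist = torch.cdist(x, x)                            # pairwise L2 distances, (B, N, N)
    idx = dist.topk(k, dim=-1, largest=False).indices   # k smallest distances; the
    return idx                                          # self-loop (distance 0) is retained

# Because the graph is dynamic, knn_graph is called again before every EdgeConv
# layer, using that layer's input embeddings rather than the raw input points.
```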

III-A Graph Definition

Consider a directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ represents the vertices of the graph and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ represents the edges that connect each vertex to its k nearest neighbours. Each vertex has an $F$-dimensional feature representation. Next, edge features are defined as $e_{ij} = h_\Theta(x_i, x_j)$, where $h_\Theta$ is a non-linear operation with a set of learnable parameters $\Theta$. The constructed graph also contains an additional edge of each vertex with itself (i.e., a self-loop), so that every vertex points to itself.

III-B Edge Convolution

At each layer $l$, the connectivity of every vertex is determined using a k-NN algorithm based on the feature embeddings of the vertices. For every pair of connected vertices $x_i$ and $x_j$, the features of the connecting edge are computed as explained in Section III-A. Subsequently, the edge features computed across the neighbouring edges of a vertex are aggregated via a symmetric operator (e.g., max) to update its embedding. Thus, the output obtained after the edge convolution is given by

$x_i' = \mathop{\square}_{j:(i,j) \in \mathcal{E}} h_\Theta(x_i, x_j)$    (1)

However, to capture the local information as well as the global information, DGCNN defines the edge function as $h_\Theta(x_i, x_j) = \bar{h}_\Theta(x_i, x_j - x_i)$, where $x_i$ provides the global information of the feature point and $(x_j - x_i)$ provides the local relative information. Mathematically, this is represented as follows:

$e'_{ijm} = \mathrm{ReLU}\big(\theta_m \cdot (x_j - x_i) + \phi_m \cdot x_i\big), \qquad x'_{im} = \max_{j:(i,j)\in\mathcal{E}} e'_{ijm}$    (2)

where $\{j : (i, j) \in \mathcal{E}\}$ represents the set of neighbours of vertex $x_i$. Figure 3 shows the Dynamic Graph CNN architecture used for symbol classification.
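The edge convolution above can be sketched in a few lines of code. This is a minimal PyTorch sketch under the assumption that the neighbour indices come from the knn_graph sketch earlier; it is illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    """One EdgeConv layer: edge feature h([x_i, x_j - x_i]), max-aggregated over neighbours."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x, idx):
        # x: (B, N, C) point embeddings; idx: (B, N, k) neighbour indices from knn_graph.
        B, N, C = x.shape
        k = idx.shape[-1]
        neighbours = torch.gather(                                 # gather x_j for every point i
            x.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, k, C))                  # (B, N, k, C)
        centre = x.unsqueeze(2).expand(B, N, k, C)                 # x_i repeated along k
        edge = torch.cat([centre, neighbours - centre], dim=-1)    # global + local information
        edge = self.mlp(edge)                                      # (B, N, k, out_dim)
        return edge.max(dim=2).values                              # max over the k neighbours
```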

(a) DGCNN Architecture
(b) EdgeConv block
Fig. 3: (a) DGCNN [27] architecture, used for classification of sampled points into one of the symbol classes, takes $n = 1024$ points as input, each with an $F$-dimensional feature (here $F = 9$). The network comprises a set of EdgeConv layers whose outputs are aggregated globally to get a global descriptor, which is later used to obtain classification scores for the symbol classes. (b) EdgeConv block: the EdgeConv layer takes an input tensor of shape $n \times f$ and finds the $k$ nearest neighbours of each point based on the distance between embeddings. Subsequently, edge features are computed on these $n \times k$ edges by applying a multi-layer perceptron, and the resulting tensor of shape $n \times k \times f'$ is max-pooled over the $k$ neighbours, resulting in a final output tensor of shape $n \times f'$.
Fig. 4: DGCNN+Resnet-34+ArcFace Loss: A pre-trained Resnet-34 is used for extracting visual features from images created from the spatial information of the point cloud. Subsequently, these Resnet-34 embeddings are aggregated with the outputs of the EdgeConv layers of DGCNN to obtain the global descriptor, which is finally utilized to generate the embeddings. These embeddings are trained with an arc-margin with respect to their corresponding symbol classes.

IV Methodology

In this section, we describe the proposed pipeline for symbol recognition from P&ID sheets, as shown in Figure 1. Section IV-A describes how our image-tracing technique produces smooth contours using a series of bezier [22] curves and lines. This is followed by a description of the path sampling technique in Section IV-B. A description of how the individual connecting pipes are segregated and potential separate regions are created for symbol identification is provided in Section IV-C. Finally, details of the DGCNN model for one-shot symbol localization and classification of the separated contours are provided in Sections IV-D and IV-E.

IV-A Image Tracing

The black and white pixels in a binarized image give rise to several contours, with dark regions forming separate contours. In image tracing, we process the image to produce a vector outline of these contours by approximating them via a set of algebraic equations such as bezier curves [22]. The spline curves generalize the contours by creating smoother outlines. We use adaptive thresholding [7] on the input image with a fixed window size to efficiently segregate foreground (black pixels) and background (white pixels) regions. The contours of the dark regions are obtained as a series of bezier curves and straight lines using the Potrace algorithm [22]. The output equations obtained are categorized as paths or loops. Contours which are connected in the diagram are traced by sequential bezier curves connected end-to-end, forming a path. Each path is composed of one or more loops such that their ends are connected. Figure 2(a) shows the different paths created to trace the dark contours using image tracing.
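For illustration, the binarization step and a rough stand-in for the tracing step could look as follows. Note that the paper uses the Potrace algorithm to obtain bezier paths, whereas this sketch uses OpenCV contours only to convey the pipeline; the block_size and c values are assumptions, not the paper's parameters.

```python
import cv2

def binarize_and_trace(image_path, block_size=25, c=10):
    """Adaptive thresholding followed by contour extraction (stand-in for Potrace)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Foreground (dark ink) becomes white in the binary mask.
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
        cv2.THRESH_BINARY_INV, block_size, c)
    # Each dark region yields one or more closed outlines ("loops").
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    return binary, contours
```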

IV-B Path Sampling

After image tracing, we correct the image for rotation and scaling variations by applying a set of algebraic operations. The images are resized to a fixed width while maintaining the aspect ratio. The paths generated by the Potrace algorithm consist of end-to-end bezier curves along the outlines of the contours present in the image. We use these curves to sample a set of points which are continuous and lie at fixed distance intervals. Since the points are generated on the periphery of the contour regions, close adjacent points (across the edges) are merged together. The set of points along a path formed on the outline of the same contour will have strong correlations w.r.t. slope and distance; the regions where these two parameters vary are marked as junction regions. Note that the Potrace algorithm generates paths which contain loops, and each contour is guaranteed to be covered by the different loops present in the path. We create a unique graph for each path from the obtained set of points. The critical junctions of these graphs are obtained by identifying the points where a new branch of points emerges or more than one branch of points merge together.
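A minimal sketch of sampling points at (approximately) fixed intervals along one cubic bezier segment is shown below; the spacing value and the function name are illustrative assumptions.

```python
import numpy as np

def sample_bezier(p0, p1, p2, p3, spacing=3.0, oversample=200):
    """Sample points at roughly fixed arc-length intervals along one cubic
    bezier segment of a traced path (control points as length-2 arrays)."""
    t = np.linspace(0.0, 1.0, oversample)[:, None]
    pts = ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
           + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)          # dense polyline
    # Cumulative arc length along the dense polyline.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    # Resample at equally spaced arc-length positions.
    targets = np.arange(0.0, arc[-1], spacing)
    xs = np.interp(targets, arc, pts[:, 0])
    ys = np.interp(targets, arc, pts[:, 1])
    return np.stack([xs, ys], axis=1)

# A full path is sampled by concatenating the samples of its consecutive bezier
# segments and merging near-duplicate points across segment ends.
```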

Fig. 5: (Left) Image patch from a P&ID showing contours. (Right) Extracted symbol regions (red).

IV-C Symbol Region Segregation

The paths obtained using path sampling include both the pipelines and the symbols present in the P&ID. The regions containing symbols need to be segmented out for classification using the graph convolutional network. To achieve this, the individual branches obtained are processed to remove pipes (lines) from the P&ID image as follows:

  • The slope at each point in a path is computed using the average of its two neighbours $p_{i-1}$ and $p_{i+1}$, as given by Equation 3.

    $\theta_i = \arctan\!\left(\dfrac{y_{i+1} - y_{i-1}}{x_{i+1} - x_{i-1}}\right)$    (3)
  • The points along the paths are traversed and, for every point, a window of fixed length and of height $h$ in the direction orthogonal to the point's slope is checked for the presence of other points. If no other point exists over a run of consecutive windows, the query point on the path is classified as a line component; otherwise it is classified as part of a potential region containing symbols.

  • The parameter $h$, i.e., the height of the orthogonal window, is determined by traversing the path and finding the maximum distance of points present on the contours along the direction orthogonal to the point, according to the following equation:

    $h = \max_{j} \lVert p_j - p_i \rVert$    (4)

    where $p_j$ are the points along the paths lying in the direction orthogonal to the query point $p_i$.

  • Visual elements of a P&ID are largely connected (with a few minor exceptions). In case of small discontinuities in straight lines (pipes), the regions of discontinuity are marked as potential symbol regions. Likewise, if a line terminates (e.g., at a symbol such as a flange) and its slope changes by a significant amount, the terminal region of the line is marked as a potential symbol region.

The individual regions obtained in this way are plausible regions that may contain symbols. Figure 5 shows a small patch of a P&ID illustrating the sampled points; symbols are present over the straight lines. The junction points obtained over every path are used to segregate symbol regions by removing the straight connecting lines.
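A highly simplified sketch of this line-removal step is given below, assuming the sampled points of one path are available as an (N, 2) array; window_len and ortho_height are placeholders for the parameters described above, not the paper's values.

```python
import numpy as np

def segregate_symbol_regions(points, window_len=10.0, ortho_height=5.0):
    """Keep points whose orthogonal neighbourhood is non-empty (potential symbols);
    points with nothing off-axis nearby are treated as bare pipe (line) points."""
    points = np.asarray(points, dtype=float)
    keep = np.zeros(len(points), dtype=bool)
    for i in range(1, len(points) - 1):
        dx, dy = points[i + 1] - points[i - 1]              # slope from the two neighbours (Eq. 3)
        direction = np.array([dx, dy]) / (np.hypot(dx, dy) + 1e-9)
        normal = np.array([-direction[1], direction[0]])    # orthogonal direction
        rel = points - points[i]
        along = rel @ direction                             # signed distance along the line
        ortho = rel @ normal                                 # signed distance off the line
        off_axis = (np.abs(along) < window_len) & \
                   (np.abs(ortho) > 0.5) & (np.abs(ortho) < ortho_height)
        keep[i] = np.any(off_axis)                          # something off-axis => not a bare pipe
    return points[keep]
```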

IV-D GCN-based Symbol Classification

Symbol Classes: We use a set of symbol classes containing one clean sample image per class for symbol identification. We observe that these symbols occur in 4 different orientations, each rotated by 90 degrees. Therefore, we augment the symbol images using rotations in steps of 90 degrees.

Fig. 6: Structure of the graph formed by connecting each point to its nearest neighbours, which are evaluated based on the L2 distance over the point features (shown in green).
Symbols | DGCNN (Precision / Recall / F1-score) | DGCNN + Arcface (Precision / Recall / F1-score) | DGCNN + Resnet-34 + Arcface (Precision / Recall / F1-score)
symbol1 0.7565 0.7005 0.7274 0.8669 0.8096 0.8373 0.8758 0.7960 0.8340
symbol2 0.8161 0.8512 0.8333 0.9202 0.8975 0.9087 0.9005 0.8920 0.8962
symbol3 0.7602 0.7238 0.7416 0.9119 0.7937 0.8487 0.9138 0.8043 0.8556
symbol4 0.7360 0.7825 0.7585 0.8293 0.8708 0.8495 0.8441 0.8475 0.8458
symbol5 0.8137 0.7630 0.7875 0.8809 0.9068 0.8937 0.8913 0.9022 0.8967
symbol6 0.8316 0.7562 0.7921 0.9220 0.9292 0.9255 0.9146 0.9146 0.9146
symbol7 0.7875 0.7478 0.7671 0.8938 0.8539 0.8734 0.8668 0.8310 0.8485
symbol8 0.7520 0.8473 0.7968 0.7925 0.9031 0.8442 0.7947 0.9274 0.8560
symbol9 0.6144 0.8366 0.7084 0.6672 0.9146 0.776 0.6714 0.8968 0.7679
symbol10 0.8595 0.7355 0.7926 0.9296 0.9003 0.9147 0.9244 0.9178 0.9211
symbol11 0.6786 0.8614 0.7591 0.7541 0.9219 0.8296 0.7583 0.9095 0.8271
symbol12 0.7609 0.61 0.6771 0.8626 0.6691 0.7536 0.8692 0.6786 0.7622
symbol13 0.8304 0.7907 0.8101 0.9087 0.8643 0.8860 0.9137 0.8412 0.8760
symbol14 0.8601 0.8175 0.8382 0.8636 0.9076 0.8850 0.8687 0.9060 0.8869
symbol15 0.7614 0.7537 0.7576 0.8660 0.9139 0.8893 0.8759 0.9348 0.9044
symbol16 0.7802 0.8481 0.8128 0.9041 0.8452 0.8736 0.8771 0.8679 0.8724
symbol17 0.6363 0.7968 0.7076 0.7176 0.8886 0.7940 0.7131 0.8904 0.7919
symbol18 0.8159 0.8259 0.8209 0.8670 0.9288 0.8968 0.8784 0.9320 0.9044
symbol19 0.7554 0.6301 0.6871 0.8617 0.6818 0.7612 0.8475 0.6858 0.7581
symbol20 0.7844 0.7636 0.7739 0.8857 0.8645 0.8750 0.8985 0.8560 0.8767
symbol21 0.8520 0.8193 0.8353 0.9335 0.9025 0.9177 0.9390 0.9044 0.9214
symbol22 0.7872 0.8282 0.8072 0.8981 0.8899 0.8940 0.8702 0.9028 0.8862
symbol23 0.7314 0.8093 0.7684 0.8152 0.8991 0.8552 0.8326 0.8938 0.8622
symbol24 0.7196 0.7593 0.7389 0.8535 0.8325 0.8428 0.8596 0.8685 0.8641
symbol25 0.7963 0.6293 0.7030 0.9140 0.6749 0.7765 0.9161 0.6702 0.7741
TABLE I: Comparison of the performance of symbol classification using DGCNN and its variants having Arcface loss and Resnet-34 embeddings on the synthetic Dataset-P&ID

Dynamic Graph CNN for learning symbol representation: The set of points generated along the periphery of the obtained probable symbol regions is classified into one of the symbol classes using DGCNN. The graph representation of the filtered, segregated regions is illustrated in Figure 6. Since every symbol lies on a line (pipe), we look at collinear line segments as defined in the previous section. Gaps or discontinuities between line segments associated with a single long line (pipe) are examined to check if there are symbols within the gaps. We use the points inside these gap regions for the classification.

Observing the number of line segments intersecting a particular gap region can help improve the robustness of classification, as certain symbols only allow for certain connectivity patterns (for example, certain valves always connect to exactly two lines, corresponding to the inlet and outlet). However, to handle cases where we do not get any connectivity information, all independent contours orthogonal and close to the moving lines (pipes) are used for the classification. We compute the following features for each point present on the contour:

  • The coordinate information defining each point on the contour.

  • The Hu moments [9] of each point are calculated and appended as point features. Hu moments are a set of seven values which are invariant to image transformations such as translation, scale, rotation and reflection. Since the points are arranged sequentially, a window of sequential points around each point is used to calculate the seven Hu moments for that feature point.

    $h_1 = \eta_{20} + \eta_{02}$    (5)
    $h_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$    (6)
    $h_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$    (7)
    $h_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$    (8)
    $h_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\big[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\big] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\big[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big]$    (9)
    $h_6 = (\eta_{20} - \eta_{02})\big[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$    (10)
    $h_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\big[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\big] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})\big[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big]$    (11)

    Equations 5-11 give the seven Hu moments. The $\eta_{pq}$ are the normalized central moments, which are computed as follows:

    $\eta_{pq} = \dfrac{\mu_{pq}}{\mu_{00}^{\,1 + (p+q)/2}}$    (12)
    $\mu_{pq} = \sum_{x}\sum_{y} (x - \bar{x})^p\, (y - \bar{y})^q\, I(x, y)$    (13)
  • The DGCNN authors [27] report that the performance of the network deteriorates dramatically when the graph contains too few points. Therefore, we fix the number of points to 1024 in our case. To maintain an equal number of data points for every symbol, we interpolate points over the different loops in proportion to their length such that the total number of points remains constant.

By appending the seven Hu-moment features to the two coordinate features (x, y), we get 9 features per point. These points are used to train the DGCNN network for classification. Figure 3 shows the architecture of the network used for symbol classification. The input to the network, in our case, is 1024 points with each point having 9 features. This input is passed through three sequential edge-convolution layers. The outputs of all three edge-convolution layers are appended globally to form a 1-D global descriptor, which is used to generate classification scores for the symbol classes. The embeddings obtained at the final trained global layer are stored and used for comparison at inference.
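A minimal sketch of how the 9-dimensional per-point features could be assembled with OpenCV is given below; the window size is an illustrative assumption rather than the paper's value.

```python
import cv2
import numpy as np

def point_features(points, window=16):
    """9-d per-point features: (x, y) coordinates plus the seven Hu moments of a
    window of sequential neighbouring points."""
    points = np.asarray(points, dtype=np.float32)
    feats = []
    half = window // 2
    for i in range(len(points)):
        nbr = points[max(0, i - half): i + half]           # sequential window around point i
        m = cv2.moments(nbr.reshape(-1, 1, 2))             # treat the window as a point set
        hu = cv2.HuMoments(m).flatten()                    # the seven invariants (Eqs. 5-11)
        feats.append(np.concatenate([points[i], hu]))
    return np.asarray(feats)                               # (num_points, 9)
```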

Appending Resnet-34 to the Dynamic Graph CNN: To make symbol classification more robust, we append global embeddings of images obtained from a ResNet-34 network. To achieve this, the ResNet-34 model is trained over the images of the graphs of sampled points, initialized with ImageNet pre-trained weights. The modified network architecture, shown in Figure 4, uses the same methodology as DGCNN, except that the visual features of the image are additionally appended at the global embedding.
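A sketch of the visual-embedding branch is given below, assuming PyTorch and a recent torchvision, and treating the fusion as a simple concatenation; it is illustrative rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class VisualEmbedder(nn.Module):
    """Extracts a global visual embedding from the rendered image of the sampled
    points, to be concatenated with the DGCNN global descriptor."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
        # Drop the final classification layer; keep the 512-d pooled feature.
        self.features = nn.Sequential(*list(backbone.children())[:-1])

    def forward(self, symbol_image):
        # symbol_image: (B, 3, H, W) rendering of the sampled points.
        return self.features(symbol_image).flatten(1)      # (B, 512)

# At classification time the 512-d visual embedding is concatenated with the
# DGCNN global descriptor before the final fully connected layers, e.g.:
#   fused = torch.cat([dgcnn_global, visual_embedder(img)], dim=1)
```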

IV-E Learning and Inference

Loss Function: The cross-entropy loss paired with a softmax layer is the standard choice for most real-world classification tasks; it works by creating separation between the embeddings of different classes. However, for symbols that are visually very similar to other classes, this loss causes the network to mis-classify a test sample if its embedding deviates even slightly. We address this by utilizing the Arcface loss [4], which trains the network to generate class embeddings with an additional angular margin between them and thus increases the discriminative power of the classifier. Equation (14) gives the Arcface loss for $C$ classes with an additional angular margin $m$. The loss is computed over each batch of $N$ examples, where $y_i$ is the ground-truth class of the $i$-th sample in the batch.

$L = -\dfrac{1}{N}\sum_{i=1}^{N} \log \dfrac{e^{s\,\cos(\theta_{y_i} + m)}}{e^{s\,\cos(\theta_{y_i} + m)} + \sum_{j=1,\, j \ne y_i}^{C} e^{s\,\cos\theta_j}}$    (14)

where $\theta_j$ is the angle between the learnt feature embedding and the weight vector of class $j$, and $s$ is the feature scale.
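For reference, an ArcFace-style classification head can be sketched as follows; the scale s and margin m values are illustrative defaults, not the paper's hyper-parameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    """ArcFace-style head: adds an angular margin m to the target-class angle
    before a scaled softmax cross-entropy."""

    def __init__(self, embed_dim, num_classes, s=30.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.s, self.m = s, m

    def forward(self, embeddings, labels):
        # Cosine of the angle between each embedding and each class weight vector.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the margin only to the ground-truth class angle.
        one_hot = F.one_hot(labels, cosine.size(1)).float()
        logits = torch.cos(theta + self.m * one_hot) * self.s
        return F.cross_entropy(logits, labels)
```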

Augmentation Policy: Data augmentation is routinely employed to reduce over-fitting and obtain better generalization. However, for symbols having very minute inter-class variations, standard augmentation techniques are not very beneficial, since most of them apply a uniform change to the entire augmented image. This can be addressed by applying different modifications to different sub-parts of each example image. In our approach, augmentations are generated for each symbol image by applying affine transformations to each sub-part of the image, with the rotation, scaling and shear parameters drawn from fixed ranges. The resultant transformation matrix is applied over a window of sequential points, and the transformation matrices of consecutive windows are constrained to have parameters within a bounded range of one another. This augmentation policy gives the network more flexibility to adapt to local changes, in addition to retaining the advantages of the traditional augmentation approach.
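A simplified sketch of this piecewise affine augmentation over windows of sequential points is given below; all ranges and the window size are placeholders, since the paper's exact ranges are not reproduced here.

```python
import numpy as np

def piecewise_affine_augment(points, window=32, max_rot_deg=5.0,
                             scale_jitter=0.05, shear_jitter=0.05):
    """Apply a slightly different affine transform to each window of sequential
    points, so that sub-parts of the symbol deform independently."""
    points = np.asarray(points, dtype=float)
    out = points.copy()
    for start in range(0, len(points), window):
        ang = np.deg2rad(np.random.uniform(-max_rot_deg, max_rot_deg))
        sx = 1.0 + np.random.uniform(-scale_jitter, scale_jitter)
        sy = 1.0 + np.random.uniform(-scale_jitter, scale_jitter)
        sh = np.random.uniform(-shear_jitter, shear_jitter)
        rot = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
        affine = rot @ np.array([[sx, sh], [0.0, sy]])      # rotation * scale/shear
        block = points[start:start + window]
        centre = block.mean(axis=0)                         # transform about the window centre
        out[start:start + window] = (block - centre) @ affine.T + centre
    return out
```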

Inference: To increase the robustness of the model at inference time, we take each symbol-region image, cropped from the P&ID sheet, at two scales and with different orientations. The two scales are chosen such that each symbol-region image meets a prescribed minimum width at each scale. Hence, for each given symbol-region image, we produce multiple embeddings, which are compared using the cosine distance against the symbol embedding directory for the entire set of symbols. The final class label for the given symbol-region image is assigned by majority voting.
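A minimal sketch of the inference-time matching and majority voting is shown below, assuming the prototype embeddings are stored in a dictionary keyed by class name; the names are illustrative.

```python
import numpy as np

def classify_symbol_region(query_embeddings, symbol_directory):
    """Majority vote over the embeddings produced for one symbol region
    (two scales x several orientations) against stored prototype embeddings."""
    votes = []
    for q in query_embeddings:
        q = q / (np.linalg.norm(q) + 1e-9)
        best_cls, best_sim = None, -1.0
        for cls, proto in symbol_directory.items():
            sim = float(q @ (proto / (np.linalg.norm(proto) + 1e-9)))  # cosine similarity
            if sim > best_sim:
                best_cls, best_sim = cls, sim
        votes.append(best_cls)
    # Final label is the most frequent vote across all crops.
    return max(set(votes), key=votes.count)
```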

V Experimental Results and Discussions

V-A Data

We evaluate the performance of our proposed one-shot symbol recognition method on a synthetic dataset called Dataset-P&ID, which consists of 100 P&ID sheets in the test-set. These P&ID sheets contain various components (symbols) attached to a network of pipelines. In this paper, we aim to localize and subsequently classify a set of 25 symbols, as shown in Figure 7, from these P&ID sheets. We also compare our proposed method against the method by Rahul et al. [19] on a private dataset of 12 real P&ID sheets for symbol recognition.

Fig. 7: List of symbols (symbol1 - symbol25) used for training and evaluation of our proposed method.
Total Symbol Regions | Correct Regions Localized | False Regions Localized | Missing Regions
10630 | 10482 | 43 | 148
TABLE II: Performance of Symbol-Region Segmentation
Symbols | Precision ([19] / Ours) | Recall ([19] / Ours) | F1-score ([19] / Ours)
Bl-V 0.925 0.913 0.936 0.896 0.931 0.904
Ck-V 0.941 0.911 0.969 0.902 0.955 0.906
Ch-sl 1.000 0.99 0.893 0.902 0.944 0.944
Cr-V 1.000 1.000 0.989 0.98 0.995 0.990
Con 1.000 0.875 0.905 0.922 0.950 0.897
F-Con 0.976 0.862 0.837 0.947 0.901 0.903
Gt-V-nc 0.766 0.894 1.000 1.000 0.867 0.944
Gb-V 0.888 0.871 0.941 0.884 0.914 0.877
Ins 1.000 1.000 0.985 0.982 0.992 0.990
GB-V-nc 1.000 1.000 0.929 0.977 0.963 0.988
Others 0.955 0.927 1.000 0.942 0.970 0.934
TABLE III: Comparison of the proposed OSSR-PID, which performs one-shot learning, against Rahul et al. [19], which uses a large number of training images. Results are on a real-world P&ID dataset.

V-B Results

First, we present the results of symbol-region localization in Table II. Here, the emphasis is on high recall, to make sure that no symbol region is missed prior to its classification. The results demonstrate that the proposed method for symbol-region segmentation performs remarkably well, with only a few misses (approx. 0.5%). Next, we evaluate our one-shot symbol recognition method using DGCNN (with and without the Arcface loss and Resnet-34) on the synthetic P&ID sheets and tabulate the results in Table I. As is evident from Table I, our proposed OSSR-PID method is able to recognize different symbols using only one prototypical example image per symbol class, with excellent precision, recall and F1-score values. We also computed the average accuracy of symbol classification for these experiments: DGCNN with categorical cross-entropy loss gives the lowest accuracy, using the Arcface loss yields a significant improvement, and DGCNN+ResNet-34+Arcface gives a comparable accuracy with a very slight further improvement. Please note that we use 1024 points for classification with DGCNN, and the number k of nearest neighbours for the EdgeConv block is set to a fixed value.

Fig. 8: Figure showing failure cases for symbol recognition by our proposed method.

Further, we compare OSSR-PID against prior work [19] and present the results on the real P&ID sheet dataset in Table III. For this comparison, we use the same set of symbols as used by Rahul et al. [19]. The single example instance of each class is taken from a real P&ID sheet, and all other factors are fixed as described in [19]. As is evident from Table III, OSSR-PID performs remarkably well and is comparable to the earlier method [19]. However, it should be noted that OSSR-PID requires only a single training image per symbol class, and hence offers comparable performance to [19], which is fully supervised and needs a large amount of annotated training data for each symbol class. Finally, Figure 8 shows some failure cases of our proposed method, where it mis-classifies symbols due to the noise and clutter present around the symbol of interest.

VI Conclusion

In this paper, we present a technique for one-shot localization and recognition of symbols in a P&ID sheet, using a path sampling technique for extraction of probable symbol regions and a DGCNN for symbol classification. The technique is specialized for line-drawn engineering drawings and requires just one instance of a symbol image to yield impressive test recognition accuracy. The technique was tested on synthetic P&ID diagrams depicting oil and gas flow schematics, which will be released for future research. The current one-shot learning model offers performance comparable to prior fully supervised techniques; however, it still needs to be re-trained when new symbol classes are introduced. This forms the basis of future follow-up work on this topic, in which re-training can be avoided entirely, i.e., by employing zero-shot learning over both seen and unseen symbol classes.

References

  • [1] E. Arroyo, A. Fay, M. Chioua, and M. Hoernicke (2014-Sep.) Integrating plant and process information as a basis for automated plant diagnosis tasks. In Proceedings of the 2014 IEEE Emerging Technology and Factory Automation (ETFA), pp. 1–8. External Links: ISSN 1946-0759 Cited by: §I.
  • [2] E. Arroyo, X. L. Hoang, and A. Fay (2015-Sep.) Automatic detection and recognition of structural and connectivity objects in svg-coded engineering documents. In 2015 IEEE 20th Conference on Emerging Technologies Factory Automation (ETFA), pp. 1–8. External Links: ISSN 1946-0759 Cited by: §I.
  • [3] D. Benjamin, P. Forgues, E. Gulko, J. Massicotte, and C. Meubus (1988-05) The use of high-level knowledge for enhanced entry of engineering drawings. In [1988 Proceedings] 9th International Conference on Pattern Recognition, pp. 119–124 vol.1. Cited by: §I.
  • [4] J. Deng, J. Guo, N. Xue, and S. Zafeiriou (2019) Arcface: additive angular margin loss for deep face recognition. In Proceedings of the IEEE CVPR, pp. 4690–4699. Cited by: 4th item, §IV-E.
  • [5] X. Dong and Y. Yang (2019) One-shot neural architecture search via self-evaluated template network. ArXiv abs/1910.05733. Cited by: §I.
  • [6] C. Fahn, J. Wang, and J. Lee (1988) A topology-based component extractor for understanding electronic circuit diagrams. Computer Vision, Graphics, and Image Processing, pp. 119 – 138. External Links: ISSN 0734-189X, Document Cited by: §I.
  • [7] R. C. Gonzalez, R. E. Woods, and S. L. Eddins (2004) Digital image processing using matlab. Pearson Education India. Cited by: §IV-A.
  • [8] G. Gupta, Swati, M. Sharma, and L. Vig (2017-11) Information extraction from hand-marked industrial inspection sheets. In ICDAR 2017, pp. 33–38. External Links: ISSN 2379-2140 Cited by: §I.
  • [9] Z. Huang and J. Leng (2010) Analysis of hu’s moment invariants on image scaling and rotation. In 2010 2nd International Conference on Computer Engineering and Technology, Vol. 7, pp. V7–476. Cited by: 2nd item.
  • [10] S. H. Joseph and T. P. Pridmore (1992-Sep.) Knowledge-directed interpretation of mechanical engineering drawings. IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 928–940. External Links: ISSN 1939-3539 Cited by: §I.
  • [11] H. Kato and S. Inokuchi (1990-06) The recognition method for roughly hand-drawn logical diagrams based on hybrid utilization of multi-layered knowledge. In [1990] Proceedings. 10th International Conference on Pattern Recognition, pp. 578–582 vol.1. External Links: ISSN null Cited by: §I.
  • [12] A. Khan, A. Sohail, U. Zahoora, and A. S. Qureshi (2019) A survey of the recent architectures of deep convolutional neural networks. CoRR abs/1901.06032. External Links: 1901.06032 Cited by: §I.
  • [13] Z. Li, F. Zhou, F. Chen, and H. Li (2017) Meta-sgd: learning to learn quickly for few shot learning. CoRR. External Links: 1707.09835 Cited by: §I.
  • [14] M. Liu, X. Huang, A. Mallya, T. Karras, T. Aila, J. Lehtinen, and J. Kautz (2019) Few-shot unsupervised image-to-image translation. In IEEE International Conference on Computer Vision (ICCV). Cited by: §I.
  • [15] R. Maini and H. Aggarwal (2009) Study and comparison of various image edge detection techniques. In International journal of image processing (IJIP), Cited by: §I.
  • [16] S. Paliwal, A. Jain, M. Sharma, and L. Vig (2021) Digitize-pid: automatic digitization of piping and instrumentation diagrams. Lecture Notes in Computer Science Trends and Applications in Knowledge Discovery and Data Mining, pp. 168–180. External Links: Document Cited by: 5th item.
  • [17] C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017) Pointnet: deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE CVPR, pp. 652–660. Cited by: §III.
  • [18] R. Rahul, A. Chowdhury, Animesh, S. Mittal, and L. Vig (2018) Reading industrial inspection sheets by inferring visual relations. In Computer Vision - ACCV Workshops, 2018,, pp. 159–173. Cited by: §I.
  • [19] R. Rahul, S. Paliwal, M. Sharma, and L. Vig (2019) Automatic information extraction from piping and instrumentation diagrams. ArXiv. Cited by: 6th item, §I, §V-A, §V-B, TABLE III.
  • [20] M. Ren, E. Triantafillou, S. Ravi, J. Snell, K. Swersky, J. B. Tenenbaum, H. Larochelle, and R. S. Zemel (2018) Meta-learning for semi-supervised few-shot classification. CoRR. External Links: 1803.00676 Cited by: §I.
  • [21] V. G. Satorras and J. B. Estrach (2018) Few-shot learning with graph neural networks. In ICLR 2018, Vancouver, BC, Canada, April, Cited by: §I.
  • [22] P. Selinger (2003) Potrace: a polygon-based tracing algorithm. (online). External Links: Link Cited by: §IV-A, §IV.
  • [23] J. Snell, K. Swersky, and R. S. Zemel (2017) Prototypical networks for few-shot learning. In Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, and R. Fergus (Eds.), Cited by: §I.
  • [24] O. Vinyals, C. Blundell, T. P. Lillicrap, K. Kavukcuoglu, and D. Wierstra (2016) Matching networks for one shot learning. In NIPS, Cited by: §I.
  • [25] D. Wang, V. Haese-Coat, and J. Ronsin (1995) Shape decomposition and representation using a recursive morphological operation. Pattern Recognition 28 (11), pp. 1783 – 1792. External Links: ISSN 0031-3203 Cited by: §I.
  • [26] X. Wang, X. Han, W. Huang, D. Dong, and M. R. Scott (2019) Multi-similarity loss with general pair weighting for deep metric learning. IEEE CVPR. Cited by: §I.
  • [27] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon (2018) Dynamic graph cnn for learning on point clouds. CoRR. External Links: 1801.07829 Cited by: 3rd item, §I, §II, Fig. 3, §III, 3rd item.
  • [28] N. Wojke and A. Bewley (2018-03) Deep cosine metric learning for person re-identification. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Vol. , pp. 748–756. External Links: Document, ISSN null Cited by: §I.
  • [29] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and S. Y. Philip (2020) A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems. Cited by: §III.
  • [30] L. Xu, E. Oja, and P. Kultanen (1990) A new curve detection method: randomized hough transform (rht). Pattern Recognition Letters 11 (5), pp. 331 – 338. External Links: ISSN 0167-8655 Cited by: §I.
  • [31] L. Yan and L. Wenyin (2003-08) Engineering drawings recognition using a case-based approach. In Seventh International Conference on Document Analysis and Recognition, 2003. Proceedings., Cited by: §I.