Revisiting Regex Generation for Modeling Industrial Applications by Incorporating Byte Pair Encoder

05/06/2020 · Desheng Wang, et al. · Ant Financial

Regular expressions are important for many natural language processing tasks, especially when used to deal with unstructured and semi-structured data. This work focuses on automatically generating regular expressions and proposes a novel genetic algorithm for this problem. Different from methods that generate regular expressions at the character level, we first utilize a byte pair encoder (BPE) to extract frequent items, which are then used to construct regular expressions. The fitness function of our genetic algorithm contains multiple objectives and is optimized by an evolutionary procedure including crossover and mutation operations. The fitness function takes into consideration the length of the generated regular expression, the number of matched characters and samples on positive training samples (to be maximized), and the number of matched characters and samples on negative training samples (to be minimized). In addition, to accelerate the training process, we apply exponential decay to the population size of the genetic algorithm. Our method, together with a strong baseline, is tested on 13 kinds of challenging datasets. The results demonstrate the effectiveness of our method, which outperforms the baseline on 10 kinds of data and achieves nearly 50 percent improvement on average. With exponential decay, training is approximately 100 times faster than without it. In summary, our method possesses both effectiveness and efficiency, and can be applied to industrial applications.


1. Introduction

A regular expression, often abbreviated as regex, uses a sequence of characters to define a search pattern and has been investigated for a long time. Based on Nondeterministic Finite Automata (NFA) and Deterministic Finite Automata (DFA) (Rabin and Scott, 1959), it can efficiently extract important information such as bank card numbers and Chinese certificate numbers from unstructured and semi-structured data such as raw text. Due to its effectiveness and flexibility, the regular expression has been widely used in Natural Language Processing (NLP) (Manning et al., 1999), Data Mining (DM) (Han et al., 2011), Information Retrieval (IR) (Manning et al., 2008), etc. However, to ensure the quality of the regular expressions constructed for a specific task, expertise in regular expressions is necessary, which makes constructing them by hand very difficult.
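For instance, a single expression can pull card-like numbers out of raw text with one call. The pattern below is only an illustrative sketch of this kind of extraction, not an authoritative bank-card rule.

import re

# Illustrative pattern: 16-19 digit sequences starting with "62",
# a common prefix of Chinese bank cards; an assumption for this example only.
card_pattern = re.compile(r"\b62\d{14,17}\b")
text = "Please refund to card 6222021234567890123 before Friday."
print(card_pattern.findall(text))   # ['6222021234567890123']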

To overcome these drawbacks, several regular expression generation methods have been proposed. Generally, these methods can be classified into two categories. As Figure 1 shows, picture (a) stands for methods that generate regular expressions from natural language. To address this problem, (Locascio et al., 2016) constructed an end-to-end model (Sutskever et al., 2014) based on the Long Short-Term Memory neural network (Hochreiter and Schmidhuber, 1997) and trained it on a synthetic parallel corpus of natural language descriptions and regular expressions. Although the method in (Locascio et al., 2016) achieved state-of-the-art performance, the authors of (Zhong et al., 2018) pointed out that the distinct characteristics of synthetic and real-world datasets can hurt the end-to-end model, which was demonstrated to achieve extremely low effectiveness on real-world data. Hence, a large amount of real-world parallel data is necessary to guarantee the effectiveness of the end-to-end model. In addition, deep models consume a large amount of computational resources and can hardly be applied in low-resource environments.

The other kind of method constructs regular expressions from samples of the desired behavior, as picture (b) shows. (Bartoli et al., 2016) treated the problem as a program synthesis task and dealt with it by an evolutionary procedure. Based on tree structures, they first generated candidate regular expressions by utilizing templates. Next, a genetic algorithm containing crossover and mutation operations was used to find the result with the best fitness. Although this method was shown to be effective in (Bartoli et al., 2016), our experimental results show that the algorithm is time-consuming and can hardly deal with more than 5000 training samples in a tolerable time. Due to this computational complexity, the method cannot be applied to model industrial applications.

Figure 1. Auto regex generation tasks. (a) stands for generating regex from natural language, and (b) represents learning regex from samples.

In general, the methods mentioned above generate regular expressions at the character level, which ignores the correlations between characters in the training samples.

In this work, we revisit the task of constructing regular expressions from samples for modeling industrial applications. In order to improve both effectiveness and efficiency, we modify the genetic algorithm of (Bartoli et al., 2016) and propose three improvements. First of all, we modify the original fitness function and remove the constraint that forces the genetic algorithm to choose the shortest regular expression. We find that this constraint makes regular expressions too general (e.g., the regex “.*”) to distinguish positive samples from adversarial samples. In this work, the regular expression whose length is closest to the length of the positive training samples is considered the best. Secondly, a compression algorithm named byte pair encoding (BPE) (Sennrich et al., 2015) (Gage, 1994) is introduced to extract frequent items from the training corpus, so that these frequent items can be used to construct more specific regular expressions. Furthermore, with the help of BPE, the search space shrinks and the genetic algorithm becomes more efficient. Last but not least, motivated by the simulated annealing algorithm (Van Laarhoven and Aarts, 1987), we apply exponential decay to the population size of the genetic algorithm to accelerate the training process. In reality, positive samples may contain some noise. According to our analysis of the datasets, we assume the percentage of noise does not surpass 5%. To improve the robustness of our method, we use a divide-and-conquer strategy that makes our method focus on the incorrectly matched samples. When the percentage of remaining training samples falls below 5%, we stop the training process.

Our method and a strong baseline from (Bartoli et al., 2016) are trained on both positive and negative samples. The learned regular expressions are expected to match the positive samples and reject the negative samples. In the experiments, our method and the baseline are evaluated on 13 kinds of challenging datasets including Chinese certificate numbers, mobile phone numbers, emails, etc. The results indicate the effectiveness of our method, which outperforms the baseline by approximately 50 percent on average. In particular, our method achieves nearly 30 percent improvement on the International Mobile Equipment Identity (IMEI), car engine number and Chinese certificate number datasets. With exponential decay, training is nearly 100 times faster than without it. Our contributions are summarized as follows.

  • To the best of our knowledge, we are the first to introduce the BPE algorithm into regular expression generation. By utilizing frequent items, the generated regular expressions are much more specific. Hence, when given challenging negative samples, they show much better performance. The experiments show that our method achieves a 50 percent improvement in F1 score on average.

  • We modify the fitness function and remove the constraint that forces the genetic algorithm to choose the shortest regular expression. We argue that the best regular expression is the one that matches the most characters in positive training samples, matches the fewest characters in negative training samples, and has a length similar to that of the positive training samples.

  • By applying exponential decay to the population size, the training speed is nearly 100 times faster than before, which makes regex generation feasible for modeling industrial applications.

The rest of this paper is organized as follows. Section 2 introduces related work. Section 3 describes the basic ideas of our method. Section 4 presents the experiments, including the experiment settings, result analysis, hyperparameter analysis and the procedure used to conduct them. The last section concludes our work and discusses future directions.

2. Related Work

Automatic Regex Generation (ARG) With the surge in data volume, more and more semi-structured and unstructured data are generated. Regular expressions, a long-researched topic, can efficiently extract useful information from such data. Due to their effectiveness, regular expressions have been widely used in many natural language tasks, e.g., named entity recognition (NER) (Chiu and Nichols, 2016) and information retrieval (Manning et al., 2008; Li et al., 2008; Bartoli et al., 2017). However, constructing regular expressions is hard, tedious and demands expertise. To address this problem, several automatic regex generation methods have been proposed. In general, regular expression generation can be divided into two categories: generating regular expressions from natural language and generating them from samples. In the first category, regular expressions are learned from descriptions written in natural language. (Ranta, 1998) developed a rule-based system that defines a natural language interface to regular expression generation. Recently, (Locascio et al., 2016) treated this task as a machine translation task. Based on the sequence-to-sequence framework, they used a long short-term memory neural network in which the input of the encoder was natural language and the output of the decoder was a regular expression. However, this end-to-end model demands a lot of parallel training data, which is costly to obtain.

The second category directly learns regular expressions from training samples, which can be easily obtained. (Prasse et al., 2015) developed a method to generate regular expressions for recognizing email campaigns. However, their method cannot easily be adapted to other kinds of data. (Bartoli et al., 2016, 2015, 2014) treated the problem as a program synthesis task and proposed a genetic algorithm to deal with it. Based on syntactic trees, they first generated candidates by utilizing templates. The leaf nodes denote regular expression grammar units and the non-leaf nodes are operators, including the concatenation operator, the group operator, etc. The regular expression is generated by a depth-first traversal of the syntactic tree. In order to find the optimal regular expression, they defined a fitness function to evaluate the quality of candidate regular expressions. During the search process, an evolutionary procedure was carried out including crossover and mutation operations. Although this method is effective, it is time-consuming and costs a lot of resources to find the optimal solution.

Byte Pair Encoding (BPE) The methods mentioned above are character-based, which means regular expressions are generated character by character. We argue that character-based methods can hardly capture vital connections between characters. For instance, in a Chinese certificate number, whose length is 18, the characters at positions 7 to 10 indicate the year of birth. Hence, the characters at positions 7 to 8 can only be “19” or “20”. However, character-based methods ignore this constraint and generate a regular expression such as “\d\d\d\d\d\d\d\d\d\d\d\d\d\d\d\d\d\w”, which can easily be cheated by fake examples. To construct more specific regular expressions, this work proposes a frequent-item-based method. Our method first utilizes BPE (Sennrich et al., 2015) (Gage, 1994) to extract frequent items from the training examples by iteratively replacing the most frequent item pair with an unseen token. Then, these frequent items are exploited to construct regular expressions. Recently, BPE has been widely used as a word segmentation technique in many NLP tasks such as machine translation and text classification, achieving significant improvements. In this paper, we modify the original BPE algorithm to control the granularity of the generated frequent items.
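A minimal illustration of this point follows; the two patterns below are of our own making and only contrast a character-level pattern with one that encodes the “19”/“20” frequent item at the birth-year position, they are not the exact outputs of any generator.

import re

char_level = re.compile(r"\d{17}\w")                 # character-level shape only
item_aware = re.compile(r"\d{6}(?:19|20)\d{9}[\dX]")  # frequent item at positions 7-8

fake_id = "123456330012345678"   # birth-year field "3300" is impossible
real_id = "123456199012345678"

print(bool(char_level.fullmatch(fake_id)))  # True  -> fooled by the fake sample
print(bool(item_aware.fullmatch(fake_id)))  # False -> rejected
print(bool(item_aware.fullmatch(real_id)))  # True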

3. Methods

As mentioned above, constructing regular expressions requires expertise and is difficult. To deal with this problem, we develop a novel Genetic Algorithm (GA) to automatically generate regular expressions. A GA is an evolutionary algorithm that uses simulated operations, including crossover and mutation, to find the solution with optimal fitness in the search space.

3.1. Task Statement

As mentioned above, our task is to automatically generate regular expressions from training samples. Given a positive sample set P and a negative sample set N, the task is to find a regular expression r in the search space S that can perfectly distinguish the positive samples from the negative samples. This is formalized in Equation 1.

r^{*} = \arg\max_{r \in S} f(r, P, N)    (1)

where the fitness function f evaluates the quality of the generated regular expressions and is defined in Equation 3.

3.2. Fitness Function

In a standard genetic algorithm, it is vital to define a fitness function to evaluate the quality of candidates. We treat the definition of the fitness function as a multi-objective problem. Apparently, a satisfactory regular expression should match more positive samples and fewer negative samples. In addition, from the character perspective, the longer the matched substrings in positive samples the better, and the fewer the characters matched in negative samples the better. Last but not least, the length of the generated regular expression is taken into consideration. In (Bartoli et al., 2016), a regular expression with fewer characters is considered better. However, in our experiments we find that regular expressions with the shortest length are too general to distinguish positive samples from negative samples. In this paper, we argue that a regular expression whose length is similar to the positive samples' length is the best.

For a regular expression r, given positive samples P and negative samples N, we first define an indicator function m(r, s) that denotes whether r completely matches a sample s.

m(r, s) = \begin{cases} 1 & \text{if } r \text{ fully matches } s \\ 0 & \text{otherwise} \end{cases}    (2)

For a regular expression r, the number of characters of a sample s matched by r is denoted c(r, s). The fitness function is then defined in Equation 3.

(3)

where

(4)

In Equation 4, len(·) denotes the length of a string and e is Euler's number.
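The exact form of Equations 3 and 4 is specified above; the snippet below is only a rough sketch of a fitness combining the three objectives (matched samples, matched characters, and length similarity), where the weights and the exponential length term are placeholders of our own choosing, not the paper's coefficients.

import math
import re

def fitness(pattern, positives, negatives, w_sample=1.0, w_char=0.1, w_len=1.0):
    # Reward full matches and matched characters on positives, penalize them
    # on negatives, and prefer patterns whose length is close to the average
    # positive-sample length. Weights are illustrative assumptions.
    regex = re.compile(pattern)
    score = 0.0
    for s in positives:
        m = regex.search(s)
        matched_chars = len(m.group(0)) if m else 0
        score += w_sample * (regex.fullmatch(s) is not None) + w_char * matched_chars
    for s in negatives:
        m = regex.search(s)
        matched_chars = len(m.group(0)) if m else 0
        score -= w_sample * (regex.fullmatch(s) is not None) + w_char * matched_chars
    avg_len = sum(len(s) for s in positives) / len(positives)
    score += w_len * math.exp(-abs(len(pattern) - avg_len))  # length-similarity term
    return score

print(fitness(r"\d{6}(?:19|20)\d{9}[\dX]",
              ["123456199012345678"], ["hello", "123"]))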

Tricks Obviously, calculating the fitness function is time-consuming because of the large number of regex matching operations. To accelerate the training process, we utilize two tricks in our implementation.

  • When doing regex matching, we insert “^” and “$” at the head and tail of the regular expression, respectively. This greatly reduces the number of substrings that have to be matched.

  • In each training epoch, we maintain a cache in our program to store fitness scores. For a regular expression, the fitness score is not recalculated if the corresponding result is found in the cache (see the sketch after this list).
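As a minimal sketch, both tricks can be combined as follows; the cache key and helper names are assumptions about one possible implementation, not the paper's actual code.

import re

fitness_cache = {}  # pattern string -> fitness score, rebuilt every epoch

def anchored(pattern):
    # Anchor the pattern so the regex engine attempts a single full-string
    # match instead of scanning every substring position.
    return "^" + pattern + "$"

def cached_fitness(pattern, positives, negatives, fitness_fn):
    # Return the memoized score if this pattern was already evaluated.
    if pattern in fitness_cache:
        return fitness_cache[pattern]
    score = fitness_fn(anchored(pattern), positives, negatives)
    fitness_cache[pattern] = score
    return score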

Figure 2. Initialization of a regex, which is represented as a syntactic tree. Picture (a) is the character-based strategy used in (Bartoli et al., 2016). Our frequent-item-based method is shown in picture (b).

3.3. Initialization

In the initialization stage of the GA, some candidates are randomly generated. Then, crossover and mutation are conducted among these candidates to find the optimal result. Motivated by (Bartoli et al., 2016), in this paper all regular expressions are constructed as syntactic trees, in which leaf nodes are basic regular expression units chosen from the terminal set and non-leaf nodes stand for operators, including concatenation, matching one or more characters, etc.

The terminal sets are defined as follows.

  1. Alphabet constants: “a”, “b”, . . . ,“y”,“z”,“A”,“B”, . . . ,“Y” ,“Z”;

  2. Digit constants: “0”,“1”, . . . ,“8”,“9”;

  3. Symbols constants: “.”, “:”, “,”, “;”, “_”, “=”, “\”, “’”, “\\”, “/”, “?”, “!”, “}”, “{”, “(”, “)”, “[”, “]”, “<”, “>”, “@”, “#”;

  4. Alphabet ranges and digit ranges: “a-z”, “A-Z”, “0-9”;

  5. Common character classes: “\w”, “\d”;

  6. Wildcard character: “.”;

The functional sets are defined as follows:

  1. The concatenation operator, which joins its children from left to right (shown as the concatenation nodes in Figure 2);

  2. The group operator “()”;

  3. The list match operator “[t]” and the list not-match operator “[^t]”;

  4. The match one or more operator “t++”;

  5. The match zero or more operator “t*+”;

  6. The match zero or one operator “t?+”;

  7. The match min-max operator “t{m,n}+”, where m is the minimum number of repetitions and n is the maximum;

Input: p, the proportion threshold; N, the number of string samples in the vocabulary.
Output: the merged BPE tokens (printed as they are produced).

import collections
import re

def pair_freq_stats(vocab):
    # Count the frequency of every adjacent symbol pair in the vocabulary.
    pair2freq = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pair2freq[symbols[i], symbols[i + 1]] += freq
    return pair2freq

def merge(best, vocab_in):
    # Merge the chosen pair into a single symbol wherever it occurs.
    vocab_out = {}
    bigram = re.escape(' '.join(best))
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    for word in vocab_in:
        word_out = pattern.sub(''.join(best), word)
        vocab_out[word_out] = vocab_in[word]
    return vocab_out

vocab = {'l o w </w>': 5, 'l o w e r </w>': 2,
         'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
N = sum(vocab.values())  # number of string samples in the vocabulary
p = 0.5                  # proportion threshold (0.02 in our experiments; 0.5 suits this toy vocabulary)
percent = 1.0
while percent >= p:
    pair2freq = pair_freq_stats(vocab)
    best = max(pair2freq, key=pair2freq.get)
    percent = pair2freq[best] / N
    if percent >= p:     # stop merging once the best pair is no longer frequent enough
        vocab = merge(best, vocab)
        print(best)
Algorithm 1 BPE: Byte Pair Encoding Algorithm with Dynamic Proportion Threshold Control

The final regular expression is generated by a depth-first traversal of the corresponding syntactic tree. As mentioned above, however, traditional methods are character-based and suffer from low precision when training samples are insufficient. To solve this problem, we propose a frequent-item-based initialization method. We first extract frequent items from the training samples and then tokenize the samples with BPE. In Figure 2, picture (a) denotes the initialization stage of the traditional method and picture (b) illustrates ours. The basic idea of BPE is to iteratively replace the most frequent item with an unseen token. In our setting, however, we find that excessively long frequent items harm the generalization of the generated regular expressions. In the original BPE, the granularity of the frequent items is controlled by the number of training epochs; our experiments show that this strategy is unable to produce satisfactory frequent items. Hence, we modify the original BPE and set a threshold on the frequency of the most frequent item. The algorithm terminates once the frequency of the corresponding frequent item falls below the threshold. The pseudo code of the BPE algorithm is described in Algorithm 1.
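As an illustration, the extracted frequent items can simply be added to the terminal set before candidates are built. The helper names and the naive concatenation-only initializer below are our own simplification; the real initializer builds full syntactic trees as in Figure 2.

import random
import re

# Character-level terminals from Section 3.3 plus BPE frequent items.
CHAR_TERMINALS = [r"\d", r"\w", "[0-9]", "[a-z]", "[A-Z]", "."]

def build_terminal_set(frequent_items):
    # BPE frequent items (e.g. "19", "20", "622") become literal terminals,
    # so an initial candidate can contain them as indivisible units.
    return CHAR_TERMINALS + [re.escape(item) for item in frequent_items]

def random_candidate(terminals, min_len=2, max_len=6):
    # Naive initializer: concatenate a few randomly chosen terminal units.
    n = random.randint(min_len, max_len)
    return "".join(random.choice(terminals) for _ in range(n))

terminals = build_terminal_set(["19", "20", "622"])
print(random_candidate(terminals))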

Figure 3. Mutation operation. This operation randomly replaces a subtree with another tree. As shown in this figure, the original subtree in the blue dashed box is replaced by the subtree in the red dashed box.

3.4. Evolution

During the initialization stage of the genetic algorithm, we randomly generate some candidates, also called the population, which are evaluated based on the fitness function. Next, crossover and mutation operations are carried out on the population to find the result with the best fitness in the search space.

Population Decay In (Bartoli et al., 2016), the population size is invariant in each training epoch. However, the training speed is strongly connected with the population size. To improve the efficiency of our algorithm, we apply exponential decay to the population size. In each epoch, the population size is defined as follows.

P_t = \max(P_{\min}, \lfloor P_0 \cdot \gamma^{t} \rfloor)    (5)

where γ is the decay parameter on the population size, t denotes the epoch index, P_0 is the initial population size and P_min is the minimum population size during training, which prevents our algorithm from under-fitting.
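Assuming the max-floored exponential form reconstructed in Equation 5, a minimal sketch of the decay schedule with the hyperparameters reported in Section 4.1 (initial size 1000, minimum size 200, decay 0.97) is:

import math

def population_size(t, init_size=1000, min_size=200, gamma=0.97):
    # Exponential decay on the population size with a lower bound (Eq. 5).
    return max(min_size, math.floor(init_size * gamma ** t))

print([population_size(t) for t in (0, 10, 50, 100)])
# [1000, 737, 218, 200] -- the floor of 200 is reached after roughly 53 epochs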

Mutation In the mutation operation, for a regular expression, we randomly choose a subtree and replace it with another randomly generated syntactic tree. In the t-th epoch, the mutation operation is repeated a prescribed number of times to generate new individuals. Details are shown in Figure 3.

Crossover In the crossover operation, we randomly select two candidates from the population and swap their subtrees. In the t-th epoch, the crossover operation is repeated a prescribed number of times to generate new individuals. The whole process is illustrated in Figure 4.

Figure 4. Crossover operation. This operation is used to exchange the subtrees of two different trees.

To jump out of local optima, we also randomly generate some new individuals in each epoch. In the end, we choose the best regular expressions of the epoch according to the fitness function.
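A compact sketch of the two tree operations follows. The tree representation, the uniform node selection and the operator names are our own simplification, not the paper's exact data structures.

import copy
import random

class Node:
    """A regex syntactic-tree node: leaves hold terminal units,
    internal nodes hold operators such as concatenation."""
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

    def to_regex(self):
        if not self.children:                  # leaf / terminal unit
            return self.label
        if self.label == "concat":
            return "".join(c.to_regex() for c in self.children)
        if self.label == "group":
            return "(" + self.children[0].to_regex() + ")"
        return self.children[0].to_regex() + self.label   # e.g. "++", "*+"

def all_nodes(tree):
    nodes = [tree]
    for child in tree.children:
        nodes.extend(all_nodes(child))
    return nodes

def mutate(tree, random_subtree_fn):
    # Replace one randomly chosen subtree with a freshly generated tree (Fig. 3).
    tree = copy.deepcopy(tree)
    target = random.choice(all_nodes(tree))
    new = random_subtree_fn()
    target.label, target.children = new.label, new.children
    return tree

def crossover(tree_a, tree_b):
    # Swap one randomly chosen subtree between two candidates (Fig. 4).
    tree_a, tree_b = copy.deepcopy(tree_a), copy.deepcopy(tree_b)
    node_a = random.choice(all_nodes(tree_a))
    node_b = random.choice(all_nodes(tree_b))
    node_a.label, node_b.label = node_b.label, node_a.label
    node_a.children, node_b.children = node_b.children, node_a.children
    return tree_a, tree_b

tree = Node("concat", [Node(r"\d"), Node("++", [Node(r"\w")])])
print(tree.to_regex())   # \d\w++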

Input: P, the positive string set; N, the negative string set; T, the iteration budget; p, the proportion threshold for the BPE algorithm; q, the precision threshold for divide and conquer; T_dc, the iteration threshold for divide and conquer.
Output: R, a regex generated from P and N that can distinguish the positive samples from the negative samples.

R = {}; iteration = 0; t = 0
items = BPE(P, p)                          # frequent items extracted from the positives
population = init(P, N, items)             # initial syntactic trees
repeat
    C = crossover_generate(population)
    M = mutate_generate(population)
    G = random_generate()
    population = population ∪ C ∪ M ∪ G
    for each candidate r in population do
        compute fitness(r, P, N)
    end for
    population = top_K(population, K)
    iteration += 1; t += 1
    best = get_best(population)
    prec = get_precision(P, best)
    if prec ≥ q and t ≥ T_dc then          # divide and conquer
        R = R ∪ {best}
        P = remove(P, best)                # drop positives already matched by best
        population = init(P, N, items)
        t = 0
    end if
until iteration ≥ T
R = join(R)                                # concatenate the candidates with “|”
Algorithm 2 RGGB:Regex Generation based on GA and BPE

3.5. Divide and Conquer

In reality, positive samples always contain some noise, which may harm the performance of our algorithm. In this work, according to our analysis of the datasets, we assume that the percentage of noise does not surpass 5%. For robustness, after several training epochs, regular expressions whose precision on the training samples surpasses a predefined threshold are added to a candidate set and are not used for training in the following epochs. Besides, positive samples that are correctly recognized by any regular expression in the candidate set are removed before the next training epoch. When the percentage of remaining training samples falls below 5%, we stop the training process. This strategy makes our algorithm focus on the incorrectly matched samples. In the end, the result is reported by concatenating the regular expressions in the candidate set with the symbol “|”, as shown in Equation 6.

R = r_1 \,|\, r_2 \,|\, \dots \,|\, r_K    (6)

where K is the size of the candidate set. The core idea of our algorithm is summarized in Algorithm 2.

3.6. Complexity Analysis

For convenience, we reuse the notation defined in the previous subsections. P_0 denotes the initial population size of syntax trees, n_p is the number of positive samples, n_n stands for the number of negative samples, c denotes the cost of matching one regex against one sample, γ refers to the exponential decay parameter on the population size, and T is the number of epochs.

The cost of evolution for the t-th epoch is O(P_0 γ^t (n_p + n_n) c). Hence the total complexity is given in Equation 7.

\sum_{t=1}^{T} P_0 \gamma^{t} (n_p + n_n) c = O\left( \frac{P_0 (n_p + n_n) c}{1 - \gamma} \right)    (7)

Similarly, the computational complexity of the baseline is defined as follows.

O\left( T \cdot P_0 (n_p + n_n) c \right)    (8)

Since 1/(1−γ) is far smaller than T for the settings we use (e.g., γ = 0.97 gives 1/(1−γ) ≈ 33 while T = 5000), our method is much more efficient than the baseline.

4. Experiments

In this section, we first describe the construction of the dataset and give the experiment settings. To validate the effectiveness of our method, a strong baseline is introduced. Then, the results of the experiments are discussed. Finally, we analyze some vital hyperparameters that may have a strong effect on the performance of our method.

Order Data Type Size
1 Mac Address 10000000
2 IMEI 10000000
3 IP Address 6499413
4 Invoice Code 33606
5 Invoice Number 10000000
6 Mobile Number 10000000
7 House id 1248507
8 Car Engine Number 12599128
9 Company Unicode 10000000
10 Chinese Certificate Number 10000000
11 Car License 13739000
12 Email 10394457
13 Bankcard Number 14071316
Table 1. Details of Dataset.

4.1. Dataset and Experiment Settings

To the best of our knowledge, there is no open-source dataset in the area of learning regular expressions from samples. For comparison, we select 13 categories of challenging datasets from the database of our company. In order to protect personal privacy, we randomly replace the last two characters of each sample with another two characters.¹

¹ Sufficient data protection was carried out during the experiments to prevent data leakage, and the data was destroyed after the experiments were finished. The data is only used for academic research and is sampled from the original data; therefore it does not represent any real business situation in Ant Financial Services Group.

Details of the dataset are shown in Table 1. A Mac address is used to identify the physical address of hardware and is composed of 16 hexadecimal numbers, with “:” inserted after every two hexadecimal numbers. The International Mobile Equipment Identity (IMEI) is used to identify mobile phones and satellite phones and is made up of 15 or 17 numbers. The IP address is made up of four parts separated by “.”, and each part ranges from 1 to 255. The invoice code and invoice number are issued by the tax department; the length of the invoice code is 12 or 10, the length of the invoice number is 8, and both are composed of numbers. The Chinese mobile number consists of 11 numbers. The house id, whose prefix is always “17” or “18”, is used to identify house property and is composed of 18 numbers. The car engine number identifies a car's engine and contains a varying number of characters. In China, the company unicode is issued by the administration for industry and commerce when a company is registered, and is made up of 18 characters. The Chinese certificate number contains 18 characters and is used to identify the legal residents of China: the first 17 characters are numbers and the last character is either a number or “X”. The car license identifies a car and is prefixed by a Chinese character denoting a province of China; the rest of the characters are either numbers or upper-case letters. There are several suffixes of email addresses, including “qq.com”, “163.com”, “gmail.com”, etc. The lengths of bankcard numbers vary; however, for the same category of bankcard, the prefix is invariant.

For each dataset, we randomly select 10000 samples and combine them to construct the test set, which has 130000 items in total. The rest of the data is used as the training set. Limited by the efficiency of the GP algorithm, we only sample a portion of the training set for the training process. For our method, the data in the training set that is not chosen for training the GP algorithm is used for training BPE.

In the experiments, the shared hyperparameters of our method and the baseline are set to the same values. For each kind of data, we randomly sample 2000 positive samples from it and take the same number of negative samples from the remaining categories. The training epoch size and the initial population size are set to 5000 and 1000, respectively. The minimum population size of our method is 200. To accelerate training, the exponential decay parameter is set to 0.97. The granularity of frequent items in BPE is controlled by a threshold, which is set to 0.02. We begin the divide-and-conquer procedure after 30 training epochs and set the precision threshold to 0.9.
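For reference, the settings above can be collected into a single configuration; the field names below are our own and only restate the values reported in this subsection.

CONFIG = {
    "positive_samples": 2000,        # per data type
    "negative_samples": 2000,        # drawn from the other categories
    "epochs": 5000,
    "init_population": 1000,
    "min_population": 200,
    "population_decay": 0.97,        # exponential decay parameter
    "bpe_threshold": 0.02,           # proportion threshold for frequent items
    "dc_start_epoch": 30,            # first epoch of divide-and-conquer
    "dc_precision_threshold": 0.9,
}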

4.2. Strong Baseline

To demonstrate the effectiveness of our method, a strong baseline (Bartoli et al., 2016) is chosen for comparison, which achieves state-of-the-art performance on the task of generating regular expressions from samples. The baseline is also built on a genetic algorithm and generates regular expressions character by character. In each training epoch of the baseline, the population size is constant, which is shown to be inefficient in the following subsections.

4.3. Evaluation Metrics

The results of the experiments are reported in terms of precision, recall and F1 score. Given C classes, the metrics for the i-th category are defined as follows.

P_i = \frac{TP_i}{TP_i + FP_i}, \quad R_i = \frac{TP_i}{TP_i + FN_i}, \quad F1_i = \frac{2 P_i R_i}{P_i + R_i}    (9)

where P_i, R_i and F1_i stand for precision, recall and F1 score, respectively. For the i-th category, TP_i, FP_i and FN_i denote the numbers of true positives, false positives and false negatives, respectively.
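A minimal sketch of these per-class metrics (Equation 9), with arbitrary example counts:

def per_class_metrics(tp, fp, fn):
    # Precision, recall and F1 for a single category (Eq. 9).
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(per_class_metrics(tp=90, fp=10, fn=5))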

Data Type Algorithm Precision Recall F-score Regular Expressions
Mac Address Baseline 1.0000 0.9992 0.9995 \w\w:[^_]++
RGGB 1.0000 1.0000 1.0000 [^␣]8:[^>]++|\d:[^#]++|[^:]++[^␣]++
IMEI Baseline 0.4889 0.9671 0.6495 (?<!\w)\d\d(?:\w\w\d\d\w\d\d)++\w\w\d\w\w\d
RGGB 0.9648 0.9418 0.9531 8686\d*+|86\d\d\d\d\d\d\d8686\d\d|86\d\d86\d\d86\d\d++|86\d\d\d\d\d\d\d\d8686\d|86\d\d\d\d\d\d\d\d\d8686|86\d\d\d\d\d\d\d\d\d86\d\d|86\d\d\d\d\d\d\d86\d\d\d\d|\d++[A-Za-z]\d[A-Za-z]\w\d\w++|……
IP Address Baseline 1.0000 1.0000 1.0000 \d++\.\d[^_]++
RGGB 0.9998 1.0000 0.9999 \.[^␣]++|\d++\.[^␣]++
Invoice Code Baseline 0.8494 0.9675 0.9046 (?<!\w)\w\w\d[0-8]\w(\d\w\w\w)++[0-8]\w\w
RGGB 0.9617 0.9956 0.9784 110019\d*+|\d10019\d++|\d110019\d++|\d\d0019\d\d\d\d|\d\d001\d\d\d\d\d|\d\d001\d001\d\d|\d\d03\d\d\d\d\d\d|\d\d10019\d\w*+|\d\d\d001\d\d\d\d\d\d|……
Invoice Number Baseline 0.9353 1.0000 0.9666 [0-8]\d\d\d\d\d\d\d
RGGB 0.9327 1.0000 0.9652 \d\d\d\d\d\d\d\d
Mobile Number Baseline 0.7452 1.0000 0.8540 (?<!\d)\d\d\d\d\d\d\d\d\d\d\d(?!\w)
RGGB 0.7479 1.0000 0.8558 13\d1313\d\d++|13\d15\d*+|13\d13\d\d13\d\d|13\d13\d\d\d13\d|13\d13\d\d\d\d13|13\d13\d13\d\d\d|1315\d++|13\d\d15\d++|13\d13\d\d\d\d|13\d\d\d13\d\d13|13\d\d\d\d13\d*+|1313\d*+|13\d\d\d\d\d13\d\d|1513\d\d++|13\d\d\d13\d13\d|……
House id Baseline 0.9390 0.9005 0.9194 [0-8]\d[0-8]\d[0-8]\d[0-8][0-8][0-8]\d\d\d\d\d\d\d\d\d|\d\d\d\d\d\d\d\d\d\d[^0-8]\d\d\d[^0-8]\d[^0-8][^0-8]|\d\d\d\d\d\d\d\d\d\d[^0-8]\d\d\d[^0-8]\d[^0-8]\d
RGGB 0.9970 1.0000 0.9985 170\d17\d*+|170\d\d\d170\d++|170\d\d\d\d\d\d\d(?:\d\d)++|17\d\d17\w++|17\d\d\d\d17\d\d++|17\d\d\d\d\d(?:\d\d\d\d)++\d++|\d80\d\d80\d++|\d80\d\d\d\d\d\d8080\d\d\d\d\d|……
Car Engine Number Baseline 0.4643 0.9840 0.6309 (?:\d\d\d\d\d\d\d\d\d\d\w)?+(?<![^,])\w\w\w\w(?:\d*+\w?+\w?+\d*+[^:][^:][^,][^<][^:]++)?+[^,]++
RGGB 0.9416 0.9375 0.9396 [A-Za-z]\d*+[^’]\d*+|[^#]\d[^#][^#]\d\d[^#]|\d(\w)|\d(?:[^\d][^\d])*+[^\d]\d*+|[^␣][^␣][^␣][^␣]\d\d\d\d\d|……
Company Unicode Baseline 0.9036 0.8888 0.8961 (?<!\w)\d\d\d\d\d\d\d++\w++
RGGB 0.9805 0.9728 0.9767 91(?:[^”]\d)++|\d++MA\w*+|91(?:\d\d)++\w\w|91\d*+\w\w++|\d++\w\d++\w|(?:\d\d)++[\w]\d|\d++\w\w\d++\w|……
Chinese Certificate Number Baseline 0.4480 0.9109 0.6006 \d\d\d\d\d\d\d\d\d\d\d\d\d\d\d\d\d\d
RGGB 0.8969 0.9831 0.9380 220\d\d\d\d\d\w++|\d1010\d\d\d\d\d\w++|\d10\d++[\w]|\d3030\w*+|\d30\d++[^␣]|\d20\d++[\w]|\d30\d30\d++|\d2020\d++|\d2030\d++|\d10\d\d\d\d\d\d1030\d\d\d\d|\d10\d30\d++|\d3010\d++|\d110\d++[^/]|\d30\d\d\d\d\d\d\d\d30\d\d\d\d\d|……
Car License Baseline 0.9998 0.9998 0.9998 (?<![^_])[^\w]\w++
RGGB 0.9999 1.0000 0.9999 [^\w][^␣]++
Email Baseline 1.0000 1.0000 1.0000 [^@]++[^_]++
RGGB 1.0000 1.0000 1.0000 [^@]*+@163\.com|[^@]*+@[^”]*+
Bankcard Number Baseline 0.8897 0.9150 0.9022 \w\w\w\d\d\d\d\w\d\w\d\d\w\d\w\d\d\d\d++
RGGB 0.9667 0.9942 0.9802 622(?:\d\d)++|621(?:\d\d)++|622\d\d\d\d\d\d\d\d\d\d\d\d\d|621\d\d\d\d\d\d\d621\d\d++|621\d\d\d\d\d\d\d\d\d\d\d\d\d|62\d\d\d\d|\d++\*++\d++|……
Table 2. Results of our method RGGB and the baseline.
Positive Negative Algorithm Init Pop Size Min Pop Size Damping Index Elapsed time
400 400 Baseline 400 - - 0h25m40s
RGGB 400 100 0.99 0h0m13s
1000 1000 Baseline 1000 - - 1h24m13s
RGGB 1000 200 0.99 0h0m58s
4000 4000 Baseline 2000 - - > 1 day
RGGB 2000 200 0.99 0h9m12s
10000 10000 Baseline 1000 - - > 5 days
RGGB 2000 200 0.995 0h15m13s
Table 3. The training time of our method and the baseline.

4.4. Experimental Result

Our method, together with the baseline, is evaluated on the test dataset mentioned above, which contains 13 categories of data and 130000 samples in total. The results are reported in Table 2 and demonstrate the effectiveness of our method, which outperforms the baseline on 10 kinds of data and achieves nearly 50 percent improvement on average. In particular, for data with obvious frequent items, our method surpasses the baseline to a great extent; for instance, it outperforms the baseline by nearly 33 percent on the Chinese certificate number. We introduced the structure of the Chinese certificate number in the previous section: its first 2 characters stand for a province code, e.g. “22” is the code of JiLin province of China. The baseline uses the pattern “\d\d” at these positions, which can easily be cheated by adversarial samples; on the test set, the regular expression generated by the baseline also matches bankcard numbers and house ids, which leads to its poor performance. Our method applies BPE to capture these frequent items and constructs the pattern much more precisely, which makes it more robust against adversarial samples. For samples without frequent items, the performance of our method is similar to the baseline's, which shows that our modification is reasonable and valid.

In this work, we propose several tricks to accelerate the training process of the GP algorithm in order to make automatic regular expression generation feasible for industrial applications. To evaluate the efficiency of our method, we choose different numbers of positive and negative samples and test our method and the baseline in the same running environment. The results are recorded in Table 3. Apparently, our method is much faster, roughly 100 times faster than the baseline. When thousands of training samples are chosen, the baseline can hardly be trained in a tolerable time.

In general, the results mentioned above demonstrate both the effectiveness and the efficiency of our method.

Figure 5. Hyperparameter analysis results. Panels (a), (b), (c) and (d) show the results for the number of positive training samples, the epoch size, the BPE threshold and the decay parameter on population size, respectively.

4.5. Hyperparameter Analysis

In this subsection, we analyze some vital hyperparameters, including the number of positive training samples, the epoch size, the exponential decay parameter on population size and the BPE threshold, which may affect the performance of our method. We only report the performance of our method on the Chinese certificate number since it is the most challenging among all datasets. When analyzing a specific hyperparameter, the rest of the experiment settings are kept unchanged. Figure 5 illustrates the results, reported according to the metrics defined in Equation 9. The squares, triangles and dots represent precision, recall and F1 score, respectively.

Numbers of Positive Training Samples In the hyperparameter analysis, the number of positive training samples equals the number of negative training samples. Surprisingly, there is no significant change in recall. We argue that more training samples make our method generate more specific regular expressions. At the beginning, the generated regular expression is too general to distinguish positive samples from negative samples due to the limited training samples, which leads to low precision and high recall. As the number of training samples grows, the precision and F1 score of our method improve. In general, the performance is sensitive to the number of training samples at the beginning, and the upswing slows down when the number reaches 3000. For efficiency, we suggest setting the number of positive training samples between 3000 and 10000.

Epoch Size The curves of recall and F1 score rise with fluctuation, which is expected since more outstanding genes can be saved and generated as the epoch size grows. In addition, our algorithm applies the divide-and-conquer strategy during training: as the epoch size grows, more regular expressions are generated for the samples mismatched in earlier epochs. The precision shows no significant change since BPE already improves the ability of the regular expressions to distinguish positive samples from negative samples. It may seem that a bigger epoch size helps generate better regular expressions; however, once the epoch size reaches 3300, the performance no longer improves. In general, the epoch size can be set to 3300 to achieve the best performance.

BPE Threshold As shown in panel (c), the recall and F1 score are poor when the threshold is set to 0.001. In our BPE algorithm, the smaller the threshold, the more frequent items are captured. However, if the threshold is too small, the BPE algorithm overfits, since a whole training sample may be recognized as a frequent item. Conversely, a bigger threshold yields items with higher frequency, but if the threshold is too big, the BPE algorithm can hardly capture any frequent item and our method falls back to the character-based method. Hence, the F1 curve first rises, then stabilizes, and finally drops.

Decay Parameter From panel (d) of Figure 5, we find that the exponential decay parameter on population size does not affect the performance of our method. The random strategies in the initialization stage account for the fluctuations in the curve. The results suggest that applying exponential decay to the population size is reasonable and valid.

5. Conclusion

Traditional regular expression generation tasks are divided into two categories: generating regular expressions from natural language and generating regular expressions from samples. In this work, we focus on the second task. Although some effective methods have been proposed for it, they suffer from inefficiency and can hardly be exploited to model industrial applications. In order to make automatic regular expression generation available for industry, we propose a novel genetic algorithm motivated by (Bartoli et al., 2016). During the initialization stage, we generate candidate regular expressions using syntactic trees. Based on a pre-defined fitness function, genetic mutation and crossover operations are carried out to find the candidate with the best fitness. Different from character-based methods, our method first utilizes BPE to extract frequent items from the training examples, which are then used to construct more specific regular expressions. In order to accelerate the training procedure, we apply exponential decay to the number of candidates in each epoch.

Our method and a strong baseline are tested on 13 categories of data. The results indicate the validity of our method, which outperforms the baseline by nearly 50 percent on average. In particular, for data with obvious frequent items such as the Chinese certificate number, our method achieves about a 30 percent improvement. Furthermore, by applying exponential decay, our method is nearly 100 times faster than the baseline.

Future work will concentrate on improving the mutation and crossover operations so that more effective genes can be generated. In addition, we find that the generated regular expressions become quite complex when BPE is utilized; hence, we will explore methods to simplify the output of our algorithm.

References

  • A. Bartoli, A. De Lorenzo, E. Medvet, and F. Tarlao (2014) Playing regex golf with genetic programming. In Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 1063–1070. Cited by: §2.
  • A. Bartoli, A. De Lorenzo, E. Medvet, and F. Tarlao (2015) Learning text patterns using separate-and-conquer genetic programming. In European Conference on Genetic Programming, pp. 16–27. Cited by: §2.
  • A. Bartoli, A. De Lorenzo, E. Medvet, and F. Tarlao (2017) Active learning of regular expressions for entity extraction. IEEE transactions on cybernetics 48 (3), pp. 1067–1080. Cited by: §2.
  • A. Bartoli, A. D. Lorenzo, E. Medvet, and F. Tarlao (2016) Inference of regular expressions for text extraction from examples. IEEE Transactions on Knowledge & Data Engineering 28 (5), pp. 1–1. Cited by: §1, §1, §1, §2, Figure 2, §3.2, §3.3, §3.4, §4.2, §5.
  • J. P. Chiu and E. Nichols (2016) Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics 4, pp. 357–370. Cited by: §2.
  • P. Gage (1994) A new algorithm for data compression. C Users Journal 12 (2), pp. 23–38. Cited by: §1, §2.
  • J. Han, J. Pei, and M. Kamber (2011) Data mining: concepts and techniques. Elsevier. Cited by: §1.
  • S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §1.
  • Y. Li, R. Krishnamurthy, S. Raghavan, S. Vaithyanathan, and H. Jagadish (2008) Regular expression learning for information extraction. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pp. 21–30. Cited by: §2.
  • N. Locascio, K. Narasimhan, E. Deleon, N. Kushman, and R. Barzilay (2016) Neural generation of regular expressions from natural language with minimal domain knowledge. Cited by: §1, §2.
  • C. D. Manning, C. D. Manning, and H. Schütze (1999) Foundations of statistical natural language processing. MIT press. Cited by: §1.
  • C. D. Manning, P. Raghavan, and H. Schütze (2008) Introduction to information retrieval. Cambridge university press. Cited by: §1, §2.
  • P. Prasse, C. Sawade, N. Landwehr, and T. Scheffer (2015) Learning to identify concise regular expressions that describe email campaigns. The Journal of Machine Learning Research 16 (1), pp. 3687–3720. Cited by: §2.
  • M. O. Rabin and D. Scott (1959) Finite automata and their decision problems. IBM journal of research and development 3 (2), pp. 114–125. Cited by: §1.
  • A. Ranta (1998) A multilingual natural-language interface to regular expressions. In Finite State Methods in Natural Language Processing, Cited by: §2.
  • R. Sennrich, B. Haddow, and A. Birch (2015) Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Cited by: §1, §2.
  • I. Sutskever, O. Vinyals, and Q. V. Le (2014) Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104–3112. Cited by: §1.
  • P. J. Van Laarhoven and E. H. Aarts (1987) Simulated annealing. In Simulated annealing: Theory and applications, pp. 7–15. Cited by: §1.
  • Z. Zhong, J. Guo, W. Yang, J. Peng, T. Xie, J. Lou, T. Liu, and D. Zhang (2018) SemRegex: a semantics-based approach for generating regular expressions from natural language specifications. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Cited by: §1.