In the age of "Big Data" and the mobile Internet, the volume of data produced and processed over the Internet is expanding rapidly, bringing higher demands on storage and computation capacity. Since personal devices are often incapable of meeting these demands, cloud services have taken on greater importance[1, 2, 3]. Meanwhile, cloud storage and computing raise new concerns about privacy protection[4, 5].
To protect data privacy, outsourced data is usually encrypted in advance by data owners, after which data queries and searches are performed. However, conventional search methods cannot operate on ciphertext. Thus, search-supported encryption has been proposed, with which relevance between encrypted texts can be measured and search over encrypted data becomes possible. Furthermore, considering ambiguity, typos, grammatical variance, and semantic variety, bias is common in text matching. Fuzzy search has thus been proposed[7, 8, 9] to achieve more robust search performance in the presence of such noise (e.g., the misspelling-detection feature of search engines). Facing the same demand, fuzzy search has also been developed over encrypted data. As in ordinary data-search scenarios, users care most about the accuracy and efficiency of search. Accuracy means that the returned files should be what the user wants, namely those most similar to the user's query. Efficiency is reflected by the latency of a search and depends chiefly on the performance of the search algorithm. In this paper, we propose a novel multi-keyword fuzzy search design over encrypted cloud data that improves both accuracy and time efficiency through separate innovations.
Represent: Extract keywords from outsourced files or received queries and transform them into word vectors, a combination of which forms the final representation of a file or query.
Encrypt and index: Files and queries are both encrypted to enhance security, preferably in heterogeneous ways. The encryption algorithm and key are usually provided by the data owner. Encrypted files are organized and stored in some data structure for indexing.
Search: In practice, data users send queries and data holders run a search algorithm over the query and the stored encrypted data. Search consists of computing relevance scores and ranking by those scores. A data user usually asks only for the top-k files most relevant to the query rather than all relevant files.
The first step is critical when designing a fuzzy search mechanism. By representing files in a certain structure, the keyword information within files is mapped into a uniform representation space. Similar files, which mostly contain a close composition of keywords, are expected to be mapped close together in this space. The representation should also be tolerant of common language bias, such as typos and synonyms, so that users' desired results are returned. On the other hand, the search index, encryption techniques, and search algorithms should cooperate well to enable fast and correct retrieval.
As one core indicator, promising search accuracy is achieved by current fuzzy search schemes on some occasions but not on many others. Typos, grammatical bias, and semantic diversity often cause trouble, and imperfect representation schemes create new problems of their own. For example, anagrams (different words consisting of the same set of letters) cause some schemes to map different keywords to the same location in the representation space. To overcome such flaws and improve search accuracy, we propose a novel text representation scheme that generates keyword vectors based on a designed 'order-preserved uni-gram' (OPU). OPU outperforms the popular 'uni-gram' and 'n-gram' in terms of accuracy in many cases. Keyword pairs such as 'silent'/'listen' (an anagram) or 'keep'/'keap' (a typo) cannot confuse our proposed mechanism as they can other schemes.
Because efficiency is the other major concern in data search, accelerating search without too much harm to accuracy is desirable. We propose an improvement on this front by redesigning the data structure and search algorithm. Precisely, we propose an improved data-organization scheme and a search algorithm based on it. A novel data clustering method gathers similar files into clusters, which correspond to contiguous regions of the aforementioned representation space. Furthermore, an index tree is built hierarchically by organizing these file clusters; that is, we design a hierarchical index tree (HIT) for data organization. Compared with previous designs[10, 11, 12, 13], this design achieves better time efficiency with little harm to accuracy, and it is flexible enough to adapt to different cases with fewer hand-tuned parameters. Moreover, such tree-based data organization makes it convenient to verify[14, 15] the freshness, correctness, and completeness of returned data after retrieval.
Last and most importantly, security and privacy must be guaranteed in our proposed architecture under different popular threat models. Focusing on the problem of fuzzy multi-keyword search over encrypted data, we summarize the contributions of this paper in two aspects:
We improve accuracy in many cases by designing a novel file representation scheme named 'order-preserved uni-gram' (OPU). It maps similar texts close together in the representation space even when various kinds of noise are involved.
We improve search efficiency by designing a new data structure for data organization and corresponding search algorithms. A hierarchical index tree (HIT) is adopted in our scheme to improve time efficiency during queries with only slight harm to accuracy. To organize outsourced data, we propose an improved dynamic clustering algorithm that needs fewer preset parameters and is thus more flexible.
These two innovations target different stages of search over encrypted data, and neither comes for free: the first contribution adds slight computation overhead during the file processing and indexing stage, and the second slightly degrades search accuracy, as expected. However, both degradations are small enough that the overall design benefits both accuracy and time efficiency. Experiments on a real-world linguistic dataset demonstrate the effectiveness of our design, which outperforms many state-of-the-art schemes in terms of accuracy.
II Related Work
II-A Searchable Encryption
Curtmola et al. proposed a security definition for searchable encryption that most popular mechanisms follow, and Song et al. proposed the first practical searchable encryption mechanism. Subsequent works have focused on improving search accuracy and efficiency without sacrificing the necessary security guarantees. Wang et al. made an important improvement by introducing novel file-indexing techniques. Cao et al. proposed a novel encrypted search scheme supporting multi-keyword matching via coordinate matching. Targeting similar demands, conjunctive keyword search, vector-space-based search models, and many other approaches have been proposed. Many of the aforementioned works focus on finding more efficient file representation designs. For encryption, the "secure kNN" algorithm is adopted in most recent popular works[25, 24, 8, 20].
II-B Quicker Search
Traversing the entire content of all files to calculate their similarity is obviously costly. To enable quicker comparison over large-scale file storage, file-indexing techniques are usually adopted. As a first step, a file is represented by keywords extracted from it, which greatly compresses the index volume. These compressed file indices are then organized in some data structure, for which previous encrypted-search schemes propose many innovations. Single-threaded linear search over the indices gives the lower bound of time efficiency because all files are traversed. As an improvement, some works[26, 27] reduce latency through parallel computation and multi-task distribution. Others focus on improving the data structure for file indexing: hashing and tree-like data structures[10, 11, 12] are widely studied for this purpose. Recently, Chen et al. introduced a hierarchical clustering mechanism into encrypted data organization and search. In most cases, the efficiency gained from a novel data structure unavoidably harms search accuracy, because some files are skipped during search to save time. Improving time efficiency without too much harm to accuracy remains a goal in this area.
II-C Fuzzy Search
Language bias such as misspellings and multi-semantic expressions is common and ought to be accommodated to improve search accuracy. However, it is hard to tell whether a confusing pair of words is a typo or two different words. For example, how should an automated system recognize that a received "catts eat mice" is a typo of "cats eat mice" and search stored data relevant to the latter string? Worse still, encryption makes this much more difficult, because a slight difference in the original natural-language expression can be heavily amplified after encryption. When calculating the similarity of different texts, such unrecognized bias lowers search accuracy. Enhancing robustness in this sense is the topic of "fuzzy search". Li et al. first formalized and enabled fuzzy search over encrypted data while maintaining security guarantees with the help of a pre-defined fuzzy set. Under a similar pre-defined dataset, Fu et al. improved search accuracy when synonyms or antonyms exist in the text. Without the assistance of extra information, fuzzy search is usually designed around text representation and indexing schemes with high tolerance of language bias. Wang et al. relieved the effect of typos and grammatical diversity by transforming keywords into 'bi-grams'. Furthermore, Fu et al. transformed keywords into 'uni-grams' to achieve better performance. But 'uni-grams' still have many defects, such as insufficient robustness when anagrams, special characters, or other kinds of language bias appear in the text.
III-A System Model
As in popular designs[7, 21, 8, 9], the system model of our scheme consists of three components: the data owner, the cloud server, and data users. Data owners encrypt files before outsourcing them and build a data structure to index the outsourced files; the file index is also encrypted. Outsourced data is exposed only to certified data users and trusted remote servers. Certified data users encrypt their queries and send them to the remote servers, where search is performed to leverage their computation and storage capacity. Relevance scores between queries and the stored data are computed with some algorithm, and the most relevant files are returned to the data users, who decrypt them with keys from the data owner.
III-B Locality-Sensitive Hashing (LSH)
As a subclass of hash functions, locality-sensitive hashing (LSH) functions have a significant feature: items that are more similar under some distance metric (e.g., Euclidean distance) are more likely to be hashed into the same bucket. A hash family H is defined as an (r1, r2, p1, p2)-sensitive LSH family if, for each h in H, two arbitrarily chosen points x and y satisfy:

Pr[h(x) = h(y)] >= p1 if d(x, y) <= r1, and Pr[h(x) = h(y)] <= p2 if d(x, y) >= r2,

where d(x, y) is the distance between these two points, representing their similarity. p-stable LSH is a specific kind of LSH method based on a function family that can be formulated as:

h_{a,b}(v) = floor((a . v + b) / w),

where a and v are two vectors and b and w are two real numbers; a, b, and w are parameters and v is the variable to be hashed.
III-C Bloom Filter
A Bloom filter is a special data structure widely adopted to map a high-dimensional point into a space of lower dimension. A D-dimensional point is transformed into a one-hot m-bit vector by a single hash function. With a set of k independent hash functions, a point can thus be transformed into an m-bit vector with at most k nonzero bits. Given the features of LSH, more similar keywords are expected to be mapped to the same positions with higher probability by the same function, so the finally generated vectors are more likely to be similar or even identical. For a given set S of n points, the k independent LSH functions each hash a point into different bits of an m-bit vector B; this vector is called a 'Bloom filter'. To judge whether a point belongs to S, we simply generate its corresponding Bloom filter with the same set of hash functions and test whether all of its nonzero bits are also set in the filter built for S. The chance that this method gives a false positive is approximately (1 - e^(-kn/m))^k. The minimal false-positive rate is (1/2)^k, achieved when k = (m/n) ln 2, so a better expected false-positive rate is available with a bigger m. On the other hand, keeping k small keeps the produced Bloom filter sparse, which helps increase the accuracy of our scheme. This raises an important trade-off in the application of Bloom filters.
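The mechanics above can be sketched as follows; salted SHA-256 stands in for the scheme's LSH functions, purely to illustrate insertion, membership testing, and the at-most-k-set-bits property:

```python
import hashlib

class BloomFilter:
    """An m-bit Bloom filter with k hash functions (salted SHA-256 stands
    in for the LSH functions used by the scheme)."""
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, [0] * m

    def _positions(self, item):
        # Derive k bit positions by salting one cryptographic hash.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1          # each item sets at most k bits

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))
```

With LSH functions in place of the salted hash, near-identical keywords would tend to set the same positions, which is what makes the filter usable as a fuzzy representation.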
III-D Hierarchical Clustering Tree
Clustering algorithms divide items into clusters by comparing their similarity; items are placed in the same cluster if they are adjacent enough in the vector space. Popular clustering techniques such as k-means, DBSCAN, and GMM are widely adopted in data mining. The hierarchical clustering used here sets a maximum number of elements per cluster and then begins with random sample points to group adjacent items into one cluster. For data organization in search settings, the element to be clustered can be a file, a query, or the center point of a sub-cluster, all of which are first mapped into a uniform space and represented as points in that space. In hierarchical clustering, the clustering algorithm is performed recursively on the original elements and on the points representing sub-clusters, finally yielding a hierarchical clustering tree. Specifically, each outsourced file is a leaf node of the hierarchical clustering tree built this way. Searching the tree for some node then generally takes O(log n) time instead of the O(n) time of linear search.
IV Proposed Algorithms
In this section, we introduce our proposed algorithms in detail. Our main innovations are a novel keyword transformation scheme and a specially designed data structure for indexing. The former is named 'order-preserved uni-gram' to distinguish it from the traditional 'uni-gram' and 'n-gram'; the latter is called the "Hierarchical Index Tree" (HIT). With the help of OPU, more information from the original natural-language keyword is preserved without privacy leaks, enabling more accurate search. The proposed HIT builds on traditional clustering algorithms and search trees, but we propose a new clustering algorithm and a corresponding search algorithm to better balance the trade-off between search efficiency and accuracy.
IV-A "Order-Preserved Uni-gram" (OPU)
Misspellings occur in three cases: 'misspelling of letters', 'reversed order of letters', and 'addition or omission of letters'. To judge the similarity of two keywords, popular mechanisms usually split keywords into pieces. Wang et al. adopt the 'n-gram' method, which splits the word 'task' into the set {ta, as, sk} when n = 2. This method achieves good results for the first case but fails for the other two. Building on this scheme, Fu et al. proposed the 'uni-gram' method, under which, for instance, the uni-gram set of the keyword 'scheme' is {s1, c1, h1, e1, m1, e2}, where the subscript counts repeated occurrences of a letter. This scheme performs better for the other two cases. However, lacking information about the order of letters, it is incapable of recognizing anagrams, such as 'devil' and 'lived'.
To enable better fuzzy search, we propose a new method that transforms a keyword into an 'order-preserved uni-gram' vector, which shows advantages over previous works. While keywords have different lengths, the output OPU vectors are of equal length. Constructing an OPU vector involves three steps: decompose, encode, and infect. We introduce them respectively in the following paragraphs.
IV-A1 Decompose

This step is similar to traditional 'uni-gram' generation. Given a set of keywords extracted from some text material, we perform the operation on each of them. First, all keywords are stemmed to eliminate grammatical and other linguistic variations. Then each keyword is decomposed into single letters. But unlike the unordered set of letters used by 'uni-gram', we also record the position of each letter in the original keyword for further use.
IV-A2 Encode

Considering that the legal maximum length of a keyword for transformation is L (including letters, digits, and widely used symbols), the output OPU vector V has a valid length of 26L + 30, all bits of which are binary. Specifically, the vector consists of L "letter blocks" (LB), each of which has 26 bits, and one 'digit and symbol block' (DSB), whose length is 30 bits. The letters 'a' to 'z' are mapped to the bits within each letter block, and the letter at position i of the keyword corresponds to letter block i of V.
For example, if the target keyword is "add", only V[1], V[30], and V[56] are set to 1 with all other bits remaining 0 (because "a" and "d" are respectively the first and the fourth letters of the alphabet). The last 30 bits of V indicate whether the 10 digits ('0'-'9') and 20 widely used symbols appear in the keyword. In practice, if a keyword is too long to fit the preset length of the OPU vector, it is discarded. As a trade-off, a longer vector can represent more complicated keywords but brings more overhead, while a too-short vector makes information loss common, severely hurting the rationality of the representation.
After encoding, we have vectors of uniform length representing the keywords. The position of each letter in the original keyword is encoded in these vectors, a critical difference from traditional methods.
IV-A3 Infect

A simple insight for realizing fuzzy search is to raise the tolerance for letter dislocation when transforming words into a standardized representation (such as the uni-gram vector adopted in our scheme). To enhance this tolerance in the produced vector, we propose an 'infect' mechanism, which is the biggest difference between our proposed method and previous ones. Each nonzero bit of the vector after "encode" shares its weight with neighboring bits, with the relation determined by an infection function. After infection, the bits of the original representation vector may change from binary to floating-point values. A typical infection function can be formulated as:

I(d) = sigma^(d/26), d = 26, 52, ..., 26 * lambda,

where d is the bit distance between two bits and sigma is a factor adjusting the infection strength. Note that lambda is another factor adjusting how far the infection can spread, and the farthest distance is d_max = 26 * lambda. Because the same letter at neighboring positions of the original keyword is mapped to two bits exactly 26 bits apart, infection happens only when d is a multiple of 26. Such a mechanism weakens the negative effect of letter dislocation.
To explain this in detail, we study an example. The keyword "add" has already been encoded into an OPU vector V after "decompose" and "encode": V[1], V[30], and V[56] are 1, all other bits being 0. If we set sigma = 0.5 and lambda = 2, infection happens as follows. In the first wave, every bit at distance 26 from a nonzero bit (for instance V[30] and V[82], the neighbors of V[56]) increases by sigma = 0.5. The second wave of infection follows: every bit at distance 52 is further increased by sigma^2 = 0.25. Infection then finishes because d has reached 26 * lambda. Finally, the OPU vector generated from the keyword "add" carries floating-point weights clustered around the originally set bits.
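The three steps can be sketched as below. The block count, sigma, and spread values are illustrative assumptions, only letter blocks are modeled (the digit-and-symbol block is omitted), and indices are 0-based, so the bits set for 'add' sit at positions 0, 29, and 55:

```python
import string

ALPHABET = string.ascii_lowercase
LB = 26           # bits per letter block
BLOCKS = 10       # assumed maximum keyword length (letter blocks only)

def opu_encode(word):
    """Decompose + Encode: the letter at position p sets one bit in block p."""
    vec = [0.0] * (LB * BLOCKS)
    for pos, ch in enumerate(word.lower()[:BLOCKS]):
        if ch in ALPHABET:
            vec[LB * pos + ALPHABET.index(ch)] = 1.0
    return vec

def opu_infect(vec, sigma=0.5, spread=2):
    """Infect: every set bit shares decaying weight sigma**d with the same
    letter slot d blocks away (i.e., 26*d bits away)."""
    out = vec[:]
    for i, v in enumerate(vec):
        if v == 0.0:
            continue
        for d in range(1, spread + 1):
            for j in (i - LB * d, i + LB * d):
                if 0 <= j < len(vec):
                    out[j] += sigma ** d
    return out

def dist(u, v):
    """Euclidean distance between two representation vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
```

Under this encoding, anagrams such as 'listen'/'silent' yield distinct vectors, while a one-letter typo such as 'keep'/'keap' stays far closer than an unrelated word.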
IV-A4 Analysis of Order-Preserved Uni-gram (OPU)
We analyze the improvement of the proposed 'order-preserved uni-gram' over the previous 'bi-gram' and 'uni-gram' designs in this part. The comparison is performed through examples covering different requirements of fuzzy search. The similarity of two keywords is simply quantified by the Euclidean distance:

d(V1, V2) = sqrt( sum_{i=1}^{m} (V1[i] - V2[i])^2 ),

where V1 and V2 are respectively the representation vectors of the two keywords and m is the length of the vectors. In fuzzy search, we want a simple typo to bring no severe decrease in the similarity score. In other words, if V1 is the correct form of a keyword and V2 is a corresponding fuzzy form, their distance should be as small as possible so that the two are considered largely similar during search. On the other hand, if V1 and V2 represent different intended keywords that are similar under some metric, a robust representation design should still distinguish them after vectorization. For instance, "add" and "dad" should not be considered "similar" or even "the same" under a good scheme, as that would introduce heavy bias into the search results.
Considering the various requirements of fuzzy search, three types of misspelling cases should be taken into consideration:
Letter misspelling occurs when letters in a word are replaced by incorrect ones. For example, "beer" can be misspelled as "berr".
Wrong letter order
Wrong order of letters occurs when words consist of the same set of letters, but some of the letters are arranged in the wrong order. For example, the word "beer" may be typed as "bere" owing to wrong letter order.
Insertion/Absence of letter
Insertion or absence of letters in a word occurs frequently as well, causing typos in the text. For example, the word "pen" may be misspelled as "pean" or "pn" in this case.
An ideal keyword decomposition approach should recognize the high similarity between a word and its typos. On the other hand, as an obvious trade-off, when a typo differs severely from the correct form, the similarity should no longer be high, or the approach would fail to distinguish different but similar words. For the three listed fuzzy cases, we calculate the relevance scores under different word decomposition approaches, with results shown by example in TABLE I.
Beyond the three basic types of misspellings, OPU handles many other cases better than 'uni-gram' and 'bi-gram'. The most obvious shortcoming of traditional 'uni-gram' is that only the composition of letters in a word is recorded after decomposition; the information about their positions is lost entirely. This makes it unable to distinguish different words in many cases. For example, under the traditional 'uni-gram' mechanism, anagrams like "listen" and "silent" produce the same keyword vector after transformation, which cannot satisfy users' demands (treating 'Dad is silent' and 'Add is listen' as the same query is plainly unacceptable). In our proposed OPU, position information is also encoded into the final word representation, so anagrams can no longer compromise the approach.
In summary, building on 'uni-gram', our proposed OPU not only inherits all of its advantages but also encodes the position of each letter in the original word into the final representation, which strengthens our scheme in many cases. We qualitatively compare the ability of the three mentioned mechanisms to distinguish different words in different cases, with results shown in TABLE II. Note that "Ex-1" covers the case of non-alphabet characters in a string.
| Keyword #1 | Keyword #2 | bi-gram | uni-gram | OPU |
IV-B "Hierarchical Index Tree" (HIT)
As the other main contribution, we exploit an efficient data structure to organize the file indices. Instead of the most naive linear organization, we design a tree-based index organization: a hierarchical index tree for faster file search. To relieve computation overhead, we do not index a file directly through its full keyword vector, which is usually high-dimensional. Instead, we map the original word vectors into an intermediate representation of lower dimension and perform indexing on that. As mentioned before, we choose the Bloom filter as this intermediate representation.
IV-B1 Construction of HIT
Given the file representation vectors, each of which encodes the keyword information of a file, we propose the HIT to organize them. To build the HIT, we divide the Bloom filters into clusters; nearby clusters may form a larger cluster standing on a higher layer of the tree. We propose an improved dynamic k-means algorithm to plan these clusters at different levels, and linking them yields the final HIT. The algorithm is explained in Algorithm 1. Instead of fixing a tightness factor to determine the final number of clusters, we compare the average point distance and the minimum point distance within a cluster to decide whether it should be further subdivided.
To generate clusters of a higher level, the lower-level clusters are represented by their center coordinates and thus regarded as "points", so Algorithm 1 can cluster small clusters into larger ones. This operation is repeated until a final cluster containing all points is generated, at which time the HIT is fully constructed.
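A simplified sketch of this recursive construction follows; a plain cluster-size threshold stands in for the average-versus-minimum distance criterion of Algorithm 1, and the two-farthest-seeds split rule is an illustrative choice:

```python
import numpy as np

class Node:
    def __init__(self, center, children=None, files=None):
        self.center = center            # representative point of this (sub-)cluster
        self.children = children or []
        self.files = files              # leaf payload: file ids, else None

def build_hit(points, ids, max_leaf=2, depth=0, max_depth=10):
    """Recursively cluster Bloom-filter points into a hierarchical index tree.
    A cluster is split while it holds more than max_leaf points (a simplified
    stand-in for the paper's average-vs-minimum distance criterion)."""
    center = points.mean(axis=0)
    if len(points) <= max_leaf or depth >= max_depth:
        return Node(center, files=list(ids))
    # 2-means style split around the two farthest-apart seed points
    d = np.linalg.norm(points - points[0], axis=1)
    seeds = points[[0, int(d.argmax())]]
    assign = np.linalg.norm(points[:, None] - seeds[None], axis=2).argmin(axis=1)
    children = []
    for c in (0, 1):
        mask = assign == c
        if mask.any():
            sub_ids = [i for i, m in zip(ids, mask) if m]
            children.append(build_hit(points[mask], sub_ids,
                                      max_leaf, depth + 1, max_depth))
    return Node(center, children=children)
```

Each recursion level produces one layer of the tree; the root's center summarizes all files, and every leaf holds a small cluster of similar file ids.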
IV-B2 Search in HIT
To search for files in the HIT, we calculate relevance scores between a target representation vector, usually generated from a query string, and nodes of the HIT. Only the leaf nodes of the HIT represent real files, and the search process should return the k most similar stored files. To increase time efficiency, an intelligent search algorithm should not compute the relevance score of every leaf against the target vector. How to find a reliable top-k of the most similar files without too much computation is thus the core problem. We design a search algorithm adapted to the HIT structure, as explained in Algorithm 2.
Only traversing all files can guarantee that the literal "top-k most relevant files" are always found. Linear traversal requires O(n) time, where n is the number of stored files, which is unacceptable in most cases. With the proposed search algorithm, we seek a proper trade-off between search accuracy and time efficiency: we look up the required files in O(log n) time, and the returned files can be expected to lie within the global "top-k most relevant files". Our experiments show that the HIT improves time efficiency to a great extent without much damage to search accuracy.
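The search can be sketched as a best-first traversal; the HitNode layout below is a hypothetical stand-in for the HIT node structure, not the paper's exact data layout:

```python
import heapq
from dataclasses import dataclass, field
import numpy as np

@dataclass
class HitNode:
    center: np.ndarray                     # cluster-center vector of this node
    children: list = field(default_factory=list)
    files: list = None                     # file ids at a leaf node, else None

def search_hit(root, query, k):
    """Best-first traversal of a HIT: repeatedly expand the node whose
    center has the highest inner-product score with the query, until k
    leaf files have been collected."""
    heap = [(-float(np.dot(root.center, query)), id(root), root)]
    results = []
    while heap and len(results) < k:
        _, _, node = heapq.heappop(heap)
        if node.files is not None:                      # leaf: harvest files
            results.extend(node.files[: k - len(results)])
        else:                                           # inner node: expand
            for child in node.children:
                heapq.heappush(
                    heap, (-float(np.dot(child.center, query)), id(child), child))
    return results
```

Because whole subtrees whose centers score poorly are never expanded, most leaves are skipped, which is where the O(log n) behavior comes from.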
V Architecture Construction
In this section, we explain step by step the complete pipeline of our proposed architecture, in which the aforementioned innovations are involved. Note that the scheme leverages multiple algorithms and data structures across different steps, with many interdependencies, but the general architecture is flexible with respect to sub-module replacement. For example, replacing the Bloom filter with another data structure or adopting a different encryption approach would not disable the architecture but only change fine-grained operations. An overview of the proposed architecture is visualized in Figure 4.
V-A Keyword Extraction and Preprocessing
Given the set of files to be outsourced, the first step is to extract keywords from them. Prepositions, pronouns, auxiliary words, and other terms without concrete semantic meaning are filtered out. The remaining keywords of each file form its keyword set. We then apply a stemming algorithm to these keywords to eliminate grammatical diversity; for example, 'listening', 'listen', and 'listened' are all stemmed into the uniform keyword 'listen'. After stemming, we calculate the "term frequency" and "inverse document frequency" (TF-IDF) value of each keyword for further use.
Considering there are in total t legal keywords, the TF value of keyword w_i in file F_j is formulated as:

TF_{i,j} = f_{i,j} / sum_k f_{k,j},

where f_{i,j} is the frequency of w_i in F_j. The IDF value is on the other hand calculated as:

IDF_i = log( N / |{ j : w_i in F_j }| ),

where N is the number of files. The TF-IDF value of w_i in file F_j is thus simply calculated by:

TF-IDF_{i,j} = TF_{i,j} * IDF_i.
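A common TF-IDF variant (tf normalized by file length, idf = log(N/df)) can be sketched as:

```python
import math
from collections import Counter

def tf_idf(files):
    """files: list of stemmed-keyword lists, one per file.
    Returns one {keyword: tf-idf} dict per file, with
    tf = count / file length and idf = log(N / document frequency)."""
    n = len(files)
    df = Counter()                      # document frequency of each keyword
    for kws in files:
        df.update(set(kws))
    scores = []
    for kws in files:
        counts, total = Counter(kws), len(kws)
        scores.append({w: (c / total) * math.log(n / df[w])
                       for w, c in counts.items()})
    return scores
```

A keyword that appears in every file gets idf = 0 and thus contributes nothing, which matches the intuition that ubiquitous terms carry no discriminating power.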
V-B Generation of Representation Vectors of Files
To relieve computation overhead, we transform the original natural-language keywords into a numerical representation. For convenience of computation and storage, they are in general transformed into vectors of uniform length. We then build a structured representation of each file based on the representation vectors of all keywords it contains. In our proposed scheme, we reach this goal in two steps.
V-B1 Keyword Representation
To represent a keyword in a uniform numerical form, we transform each keyword into a corresponding OPU vector as introduced in the aforementioned sections. Then, with k LSH functions, each keyword is encoded into at most k bits of a Bloom filter, with all other bits zero.
V-B2 File Representation
After the previous steps, all keywords of a file are already well represented as Bloom filters, so the file can be represented as a set of keyword Bloom filters. Furthermore, we can represent the whole file as a single Bloom filter of the same length as the keyword filters. Through bit-wise addition weighted by the pre-calculated TF-IDF value of each keyword, the final representation of a file F_j is formulated as:

B_j = sum_{w_i in F_j} TF-IDF_{i,j} * B_i,

where B_i is the Bloom filter of keyword w_i.
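The weighted bit-wise addition can be sketched as follows; the keyword filters and weights below are toy values:

```python
def file_vector(keyword_filters, weights):
    """Weighted bit-wise addition of keyword Bloom filters:
    the file vector is the sum over keywords of tfidf(w) * BF(w)."""
    m = len(next(iter(keyword_filters.values())))   # common filter length
    vec = [0.0] * m
    for kw, bits in keyword_filters.items():
        wgt = weights.get(kw, 0.0)                  # TF-IDF weight of kw
        for i, b in enumerate(bits):
            vec[i] += wgt * b
    return vec
```

The resulting vector keeps the filter's positional structure while letting rarer, more informative keywords dominate the relevance score.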
We now have the final representation of the files to be outsourced. Queries, which enter the architecture at the search stage, are transformed into numerical representations through the same process as files.
V-C Construction of the Encrypted HIT
For search efficiency, we outsource the aforementioned HIT for file indexing and search to the remote servers along with the files. Moreover, for privacy protection, the built HIT must be encrypted before outsourcing. In this section, we introduce this critical step.
V-C1 Building the HIT
Before this step, each file has been represented as a Bloom filter, which can be regarded as a point in an m-dimensional space, where m is the length of the adopted Bloom filters. As introduced in the aforementioned section and illustrated in Algorithm 1, we can now build a HIT to organize the file indices.
V-C2 Encryption of the HIT and Queries
To ensure security and privacy, simply transforming data into Bloom filters may be inadequate because the transformation is deterministic. Hence, we adopt the secure kNN algorithm to encrypt all nodes of the HIT as well. On the other hand, a query is also encrypted in a corresponding manner to enable the calculation of relevance between files and queries. This process is decomposed into the following steps:
Given a security parameter m, this method generates a secret key SK as a tuple (S, M1, M2), where M1 and M2 are both m x m invertible matrices and S is an m-dimensional binary vector.
With secure kNN, to encrypt an index vector p, which in our scheme is a Bloom filter of length m, p is first split into two vectors p' and p'' as follows: for each dimension j, if S[j] = 1, p[j] is split at random so that p'[j] + p''[j] = p[j]; otherwise p'[j] = p''[j] = p[j], where the randomness of the split is a small perturbation introduced for security. Finally, the encrypted expression of p is I = (M1^T p', M2^T p'').

Queries in the search stage must also be encrypted to prevent information leaks. Following the previous steps, a query has also been represented as an m-dimensional Bloom filter q. To encrypt it into a trapdoor, symmetric operations split it into two vectors as well: if S[j] = 0, q[j] is split at random so that q'[j] + q''[j] = q[j]; otherwise q'[j] = q''[j] = q[j]. Finally, the trapdoor of the query vector is expressed as:

T = (M1^(-1) q', M2^(-1) q'').
Once all data is encrypted, we must revisit the calculation of the relevance score. Fortunately, secure kNN has a great feature: the relevance score obtained by a simple inner product is invariant under the encryption. Given an encrypted file index I = (M1^T p', M2^T p'') and an encrypted query trapdoor T = (M1^(-1) q', M2^(-1) q''), we obtain their relevance score as follows:

Score(I, T) = (M1^T p') . (M1^(-1) q') + (M2^T p'') . (M2^(-1) q'') = p' . q' + p'' . q'' = p . q.
Eq. 14 shows that the inner product of two vectors produces the same result before and after encryption by secure kNN. So far, we can search for the top-k files most relevant to a given query through the HIT; the required algorithm has been introduced in previous sections and is summarized in Algorithm 2.
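The invariance of the inner-product score can be checked numerically. The sketch below follows the standard secure kNN construction (split index entries where S[j] = 1, query entries where S[j] = 0); the sizes and random draws are toy choices:

```python
import numpy as np

rng = np.random.default_rng(42)
m = 8                                   # toy Bloom-filter length

# Key SK = (S, M1, M2): a binary splitting vector and two invertible matrices
S = rng.integers(0, 2, m)
M1 = rng.normal(size=(m, m))            # random Gaussian matrices are
M2 = rng.normal(size=(m, m))            # invertible with probability 1

def encrypt_index(p):
    """Split p where S[j] = 1, then multiply by the transposed matrices."""
    p1, p2 = p.astype(float).copy(), p.astype(float).copy()
    for j in range(m):
        if S[j] == 1:
            r = rng.normal()
            p1[j], p2[j] = r, p[j] - r
    return M1.T @ p1, M2.T @ p2

def encrypt_query(q):
    """Split q where S[j] = 0, then multiply by the inverse matrices."""
    q1, q2 = q.astype(float).copy(), q.astype(float).copy()
    for j in range(m):
        if S[j] == 0:
            r = rng.normal()
            q1[j], q2[j] = r, q[j] - r
    return np.linalg.inv(M1) @ q1, np.linalg.inv(M2) @ q2

def score(index, trapdoor):
    """Relevance score: the sum of the two inner products."""
    (i1, i2), (t1, t2) = index, trapdoor
    return float(i1 @ t1 + i2 @ t2)
```

The complementary splitting conventions ensure that, per dimension, either the index or the query side is unsplit, so the randomness cancels and the plaintext inner product is recovered exactly.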
As in prior work, thanks to the construction of the HIT, returned files can be further verified to confirm their correctness, completeness, and freshness. To support this mechanism, the data owner builds a signed tree in advance and outsources it to the cloud server together with the index tree. Once the search finishes, the cloud server returns the files together with the signed sub-tree along the search path. For example, referring to Fig. 3, if the files in a leaf cluster are returned as the search result, the representation vectors stored along the path to that leaf are all returned as well, because those nodes were visited during the search. Some signature algorithm can be adopted to generate this signed hash tree. Then, with the received signed sub-tree, the data user computes the signature of each node from its representation vector and compares the calculated signature with the returned one. If the returned files pass this test, their correctness, completeness, and freshness are verified, without considering real-time data updates on remote servers.
VI Privacy Analysis
In this section, we state the background settings and theoretically analyze the security guarantees of our proposed architecture. The analysis is performed under two threat models; privacy leakage caused by untrusted data users is outside the discussion, and the authorization of data users and remote servers is also out of scope. Namely, we assume all data users under consideration are already certified by the data owner.
VI-A Threat Model
The cloud server is considered honest-but-curious: it receives queries and executes the search as commanded, but at the same time tries to derive sensitive information from the queries and the stored encrypted data. The privacy requirement of our scheme is the same as that defined in [9, 8, 18]. Two threat models ask for different levels of privacy protection:
Known Ciphertext Model
In this threat model, the cloud server is assumed to observe only the content of encrypted files, the encrypted file index, and the encrypted trapdoors of received queries.
Known Background Model
In this threat model, the cloud server is also interested in statistical background information about the encrypted data. It can perform a statistical attack to obtain more information about the keywords[21, 40] involved in the search. Attacks under this threat model may also reveal unencrypted information through collection and inference.
VI-B Security Objectives
Because we adopt the encryption-before-outsourcing scheme in our proposed architecture, the privacy of file content is already guaranteed unless the adopted encryption is broken. Beyond that, there are other security objectives we need to take into consideration:
The file index and the trapdoors of queries are both transformed from extracted keywords. The real keyword content should also be protected from being learned by remote servers.
Remote servers should be prevented from learning the content of queries/files from repeated search tasks. Thus, trapdoor generation from queries/files should not be fully deterministic: the same query string or file should not always be transformed into the same trapdoor.
We adopt the Bloom filter structure and secure kNN to ensure security and privacy. Because the generation of Bloom filters is deterministic for a fixed family of LSH functions, it cannot by itself satisfy the requirement of 'trapdoor unlinkability'. However, the encryption with secure kNN introduces randomness into all trapdoors, which makes our scheme meet these security and privacy criteria. Detailed proofs can be borrowed directly from those provided in [19, 41].
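The determinism noted above can be seen in a toy LSH-based Bloom filter. All parameters and the uni-gram-style keyword embedding here are hypothetical stand-ins for the scheme's actual OPU vectors; the point is that a fixed LSH family maps the same keyword to the same filter every time, so unlinkability must come from the secure kNN randomization, not from the filter itself:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, BITS, L = 160, 8000, 10   # vector dim, filter size, #LSH functions (toy values)

# One fixed family of p-stable LSH functions: h(v) = floor((a.v + b) / w).
A = rng.standard_normal((L, DIM))
B = rng.uniform(0, 4.0, L)
W = 4.0

def to_vec(word):
    # Toy letter-count embedding standing in for the OPU representation.
    v = np.zeros(DIM)
    for ch in word:
        v[(ord(ch) - ord('a')) % DIM] += 1
    return v

def bloom_insert(bf, vec):
    for j in range(L):
        bucket = int(np.floor((A[j] @ vec + B[j]) / W))
        bf[hash((j, bucket)) % BITS] = 1   # map each LSH bucket to a bit

bf_a, bf_b = np.zeros(BITS), np.zeros(BITS)
bloom_insert(bf_a, to_vec("search"))
bloom_insert(bf_b, to_vec("search"))
assert np.array_equal(bf_a, bf_b)  # deterministic: same word, same filter

bf_typo = np.zeros(BITS)
bloom_insert(bf_typo, to_vec("serch"))   # deletion typo
shared = int((bf_a * bf_typo).sum())     # similar words still share bits
```

The shared bits between the original and the typo are what make fuzzy matching possible; the exact determinism is what the secure kNN split must mask.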
VII Experiments and Evaluation
In this section, we design groups of experiments on a real-world dataset to evaluate the performance of our scheme. We use the '20NewsGroups' dataset as the raw plaintext file source. The main programs for the experiments are written in Python 2.7, and all experiments were run on a server with an 'Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz' and 16 GB of DDR4 memory, under the Ubuntu 16.04 LTS operating system.
VII-A Evaluation Metrics
Parameter settings matter a lot in experiments, so we specify them first. Except for values specifically declared in individual experiments, the default parameter values are:
for the "Infection" stage in generating OPU
each query contains 5 keywords
Bloom filters have 8000 bits in total
in total, LSH functions are used to build the Bloom filters
each query asks for 20 returned files
the parameter in Algorithm 1 is set to 0.4
Each text file in the '20newsgroup' dataset has a set of keywords whose size varies a lot. To mitigate the negative effect of this bias, we first filter the dataset by the number of keywords a file contains. We select 3583 files from '20newsgroup', each of which has at least 200 and at most 400 keywords; in total, 41558 raw keywords are extracted from these selected files, and after stemming more than 3000 keywords remain. Instead of simply setting a threshold on the relevance score to calculate search accuracy, we introduce a more practical accuracy metric for evaluation, which we name the overlap rate of top-k files. Because the returned top-k files determine what the data user gets from a query, this metric better estimates the search accuracy that data users actually experience. The accuracy under this metric is formulated as:
where the first set contains the top-k files returned by exhaustive search on the plaintext files, and the second contains the top-k files returned by the encrypted search. For instance, if, given the same query, the search on encrypted data and the plaintext search return the same set of top-k files, the accuracy reaches its upper limit. More generally, the accuracy equals the fraction of the plaintext top-k set that also appears in the encrypted top-k set. We compare our proposed scheme with the most popular scheme, which is based on "uni-gram" keyword decomposition.
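The overlap-rate metric is a one-line computation; the helper name below is ours, not the paper's:

```python
def topk_overlap_accuracy(plain_topk, enc_topk, k):
    """Overlap rate of top-k files: |plain ∩ encrypted| / k."""
    return len(set(plain_topk) & set(enc_topk)) / k

# Same top-3 set (order irrelevant) -> upper limit 1.0.
assert topk_overlap_accuracy(["f1", "f2", "f3"], ["f3", "f1", "f2"], 3) == 1.0
# Two of three files agree -> accuracy 2/3.
assert abs(topk_overlap_accuracy(["f1", "f2", "f3"], ["f1", "f2", "f9"], 3) - 2 / 3) < 1e-9
```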
VII-B Non-fuzzy Search
As a baseline, in non-fuzzy cases where there are no typos or misleading words in the query, a satisfactory scheme should, of course, show convincing accuracy. We test the search accuracy and time efficiency on text file datasets of different volumes. The experimental results are shown in Figure 6. The search time of both schemes is measured under naive linear search for variable control.
Across the different dataset volumes, our proposed scheme achieves higher accuracy than Fu's scheme with almost the same time performance. The main difference between the two schemes is how a keyword is represented: by "uni-gram" or by "order-preserved uni-gram". Hence, the experiment shows that introducing OPU in our proposed scheme does not bring obvious extra computation overhead but contributes evidently to search accuracy.
In practice, the number of returned files, k, ought to be adjustable by the data user. Therefore, a robust search scheme should achieve steady search accuracy as k changes. We vary k while keeping the other variables unchanged; the result is shown in Figure 7. Generally speaking, as k (the number of files returned per query) increases, search accuracy increases, and our scheme always achieves higher accuracy.
As with other searchable encryption mechanisms[8, 42, 43] based on the Bloom filter, another critical trade-off between time efficiency and accuracy lies in the size of the Bloom filter. A bigger Bloom filter decreases the chance of hashing collisions but obviously increases the computation and memory overhead. To evaluate the influence of Bloom filter size on our scheme, we run another group of experiments; the result is shown in Fig 8. The experiment shows that our proposed scheme performs better over the tested range of Bloom filter sizes, while its time efficiency is nearly the same as that of the baseline. Moreover, increasing the Bloom filter size brings no extra time consumption compared with the traditional "uni-gram" scheme, because OPU introduces no extra computation overhead into the Bloom filter generation stage.
Finally, we also evaluate performance with different query lengths. The result is reported in Fig 9. Our proposed scheme performs steadily for queries consisting of 1–10 keywords.
VII-C Fuzzy Search
During the generation of the 'order-preserved uni-gram', 'Infection' is the most important step for producing high-quality OPU output that supports fuzzy search in the many different situations described in previous sections, and it is one of the most important contributions of this paper. At the core of this step, the adopted 'Infection Function' is adjusted by the two parameters in Eq 3. We change each parameter's value independently to evaluate its influence on search performance. The experimental results are reported in Fig 10, where two misspelled keywords are inserted into each query string. The results show that one parameter has only a very slight influence on accuracy as it varies between 2 and 5; however, when the other is set larger than 2, the accuracy of our scheme drops sharply.
We offer an intuitive explanation for this phenomenon: a larger value allows weight sharing between two more distant positions in a keyword, but in practice, data users misspell keywords by exchanging letters at distant positions with much lower probability. For example, the keyword "listen" is more likely to be misspelled as "lisetn" than as "lestin". Hence, if the parameter is set too large, accuracy in the common cases may be sacrificed to accommodate a few very rare corner cases.
To compare time efficiency in fuzzy search, we set up two groups of experiments, one for spelling typos and another for anagrams. For misspellings, we select some keywords in each query and mutate them with one of three single-letter operations: letter replacement, neighboring-letter exchange, and deletion or addition of a letter. For example, the mutants of the word "search" in the three cases could be "seerch", "saerch", and "serch"/"searrch", respectively. With these keyword mutants involved in queries, we imitate real fuzzy search scenes in practice. The experimental results are reported in Figure 11.
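The three mutation operations used to build the fuzzy queries can be generated mechanically. A small sketch of such a mutant generator (function and parameter names are ours, not the paper's):

```python
import random

rng = random.Random(42)

def mutate(word: str, kind: str) -> str:
    """Apply one single-letter typo of the given kind at a random position."""
    i = rng.randrange(len(word) - 1)
    if kind == "replace":        # e.g. "search" -> "seerch"
        c = rng.choice("abcdefghijklmnopqrstuvwxyz")
        return word[:i] + c + word[i + 1:]
    if kind == "exchange":       # swap neighboring letters: "saerch"
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    if kind == "delete":         # drop one letter: "serch"
        return word[:i] + word[i + 1:]
    if kind == "add":            # double one letter: "searrch"
        return word[:i] + word[i] + word[i:]
    raise ValueError(kind)

mutants = {k: mutate("search", k) for k in ("replace", "exchange", "delete", "add")}
```

Injecting such mutants into query keywords reproduces the misspelling scenarios measured in Figure 11.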
Another interesting case of fuzzy search arises when anagrams are involved in queries and files. As explained already, anagrams invalidate many currently popular search schemes over encrypted data. To evaluate scheme performance with anagrams involved, we collect a total of 500 pairs of anagrams and insert them into files and queries in pairs. The numbers of inserted anagrams in files and in queries are adjusted separately in two groups of experiments for variable control. The results are reported in Fig 12 and Fig 13, respectively.
In these two groups of experiments, our scheme outperforms the scheme based on the traditional 'uni-gram' by a larger margin than in the non-fuzzy search. We believe this accuracy improvement clearly results from our proposed mechanism for transforming keywords into a high-dimensional vector space, namely the 'order-preserved uni-gram' (OPU).
VII-D Time Efficiency
Finally, we compare the search time consumption with and without HIT. The theoretical analysis predicts a reduction from linear scanning to sub-linear tree search, and the experimental result in Fig 14 conforms to this analysis well. Comparing the result with previous experiments, we also find that HIT causes only a very slight loss of accuracy, which is acceptable on most practical occasions.
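The complexity gap behind this speedup can be illustrated with a back-of-the-envelope count (the numbers are hypothetical, not from the experiments): in a balanced cluster tree with branching factor b over n files, a top-down search examines roughly b nodes per level over log_b(n) levels, instead of all n files.

```python
import math

n, b = 4096, 8                      # hypothetical corpus size and branching factor
depth = round(math.log(n, b))       # levels in a balanced tree: log_b(n) = 4
linear_cost = n                     # naive scan compares against every file index
tree_cost = b * depth               # tree search compares b children per level
assert tree_cost < linear_cost      # 32 comparisons vs. 4096
```

This is why the accuracy cost of HIT (pruning may occasionally discard a relevant branch) buys a large reduction in comparisons.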
VIII Conclusion
In this paper, we propose a novel scheme for multi-keyword search over outsourced encrypted data, in which fuzzy search is well supported in many different cases. The novelty of this paper does not compromise the privacy guarantees on outsourced data.
Our contributions can be summarized in two aspects:
We propose a novel keyword decomposition scheme based on what we name the "order-preserved uni-gram" (OPU), which eliminates many weaknesses of previous "uni-gram" and "n-gram" schemes.
We design a novel file indexing tree (HIT) based on a hierarchical cluster tree, for which we improve the traditional K-means algorithm for data clustering. Thanks to this dynamic K-means algorithm, the indexing tree can be constructed more flexibly and with fewer manually set parameters.
Experiments on real-world data demonstrate the effectiveness of our proposed architecture. OPU brings a large accuracy improvement under various experimental settings, and the proposed HIT increases search time efficiency without obvious harm to search accuracy.
-  H. T. Dinh, C. Lee, D. Niyato, and P. Wang, “A survey of mobile cloud computing: architecture, applications, and approaches,” Wireless communications and mobile computing, vol. 13, no. 18, pp. 1587–1611, 2013.
-  S. S. Qureshi, T. Ahmad, K. Rafique et al., “Mobile cloud computing as future for mobile applications-implementation methods and challenging issues,” in Cloud Computing and Intelligence Systems (CCIS), 2011 IEEE International Conference on. IEEE, 2011, pp. 467–471.
-  V. Namboodiri and T. Ghose, “To cloud or not to cloud: A mobile device perspective on energy consumption of applications,” in World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2012 IEEE International Symposium on a. IEEE, 2012, pp. 1–9.
-  Y. Sun, J. Zhang, Y. Xiong, and G. Zhu, “Data security and privacy in cloud computing,” International Journal of Distributed Sensor Networks, vol. 10, no. 7, p. 190903, 2014.
-  D. Chen and H. Zhao, “Data security and privacy protection issues in cloud computing,” in Computer Science and Electronics Engineering (ICCSEE), 2012 International Conference on, vol. 1. IEEE, 2012, pp. 647–651.
-  S. Yu, C. Wang, K. Ren, and W. Lou, “Achieving secure, scalable, and fine-grained data access control in cloud computing,” in Infocom, 2010 proceedings IEEE. IEEE, 2010, pp. 1–9.
-  J. Li, Q. Wang, C. Wang, N. Cao, K. Ren, and W. Lou, “Fuzzy keyword search over encrypted data in cloud computing,” International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 1–5, 2014.
-  Z. Fu, X. Wu, C. Guan, X. Sun, and K. Ren, “Toward efficient multi-keyword fuzzy search over encrypted outsourced data with accuracy improvement,” IEEE Transactions on Information Forensics and Security, vol. 11, no. 12, pp. 2706–2716, 2017.
-  B. Wang, S. Yu, W. Lou, and Y. T. Hou, “Privacy-preserving multi-keyword fuzzy search over encrypted data in the cloud,” in INFOCOM, 2014 Proceedings IEEE, 2014, pp. 2112–2120.
-  R. Brinkman, L. Feng, J. Doumen, P. H. Hartel, and W. Jonker, “Efficient tree search in encrypted data.” Information systems security, vol. 13, no. 3, pp. 14–21, 2004.
-  B. Wang, Y. Hou, M. Li, H. Wang, and H. Li, “Maple: scalable multi-dimensional range search over encrypted cloud data with tree-based index,” in Proceedings of the 9th ACM symposium on Information, computer and communications security. ACM, 2014, pp. 111–122.
-  Z. Xia, X. Wang, X. Sun, and Q. Wang, “A secure and dynamic multi-keyword ranked search scheme over encrypted cloud data.” IEEE Trans. Parallel Distrib. Syst., vol. 27, no. 2, pp. 340–352, 2016.
-  V. Gampala and S. Malempati, “A study on privacy preserving searching approaches on encrypted data and open challenging issues in cloud computing,” International Journal of Computer Science and Information Security, vol. 14, no. 12, p. 294, 2016.
-  J. Wang, H. Ma, Q. Tang, J. Li, H. Zhu, S. Ma, and X. Chen, “Efficient verifiable fuzzy keyword search over encrypted data in cloud computing,” Computer Science and Information Systems, vol. 10, no. 2, pp. 667–684, 2013.
-  Q. Zheng, S. Xu, and G. Ateniese, “Vabks: verifiable attribute-based keyword search over outsourced encrypted data,” in Infocom, 2014 proceedings IEEE. IEEE, 2014, pp. 522–530.
-  C. Wang, N. Cao, K. Ren, and W. Lou, “Enabling secure and efficient ranked keyword search over outsourced cloud data,” IEEE Transactions on parallel and distributed systems, vol. 23, no. 8, pp. 1467–1479, 2012.
-  K. Lang, “Newsweeder: Learning to filter netnews,” in Proceedings of the Twelfth International Conference on Machine Learning, 1995, pp. 331–339.
-  R. Curtmola, J. Garay, S. Kamara, and R. Ostrovsky, “Searchable symmetric encryption: improved definitions and efficient constructions,” in ACM Conference on Computer and Communications Security, 2006, pp. 79–88.
-  D. X. Song, D. Wagner, and A. Perrig, “Practical techniques for searches on encrypted data,” in IEEE Symposium on Security and Privacy, 2000, p. 44.
-  C. Wang, N. Cao, J. Li, K. Ren, and W. Lou, “Secure ranked keyword search over encrypted cloud data,” in IEEE International Conference on Distributed Computing Systems, 2010, pp. 253–262.
-  N. Cao, C. Wang, M. Li, K. Ren, and W. Lou, “Privacy-preserving multi-keyword ranked search over encrypted cloud data,” IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 1, pp. 222–233, 2014.
-  P. Golle, J. Staddon, and B. Waters, “Secure conjunctive keyword search over encrypted data,” in ACNS 04: International Conference on Applied Cryptography and Network Security, 2004, pp. 31–45.
-  H. Pang, J. Shen, and R. Krishnan, “Privacy-preserving similarity-based text retrieval,” ACM Transactions on Internet Technology (TOIT), vol. 10, no. 1, p. 4, 2010.
-  Y. Elmehdwi, B. K. Samanthula, and W. Jiang, “Secure k-nearest neighbor query over encrypted data in outsourced environments,” in Data Engineering (ICDE), 2014 IEEE 30th International Conference on. IEEE, 2014, pp. 664–675.
-  W. K. Wong, D. W.-l. Cheung, B. Kao, and N. Mamoulis, “Secure knn computation on encrypted databases,” in Proceedings of the 2009 ACM SIGMOD International Conference on Management of data. ACM, 2009, pp. 139–152.
-  Z. Fu, X. Sun, Q. Liu, L. Zhou, and J. Shu, “Achieving efficient cloud search services: multi-keyword ranked search over encrypted cloud data supporting parallel computing,” IEICE Transactions on Communications, vol. 98, no. 1, pp. 190–200, 2015.
-  S. Kamara and C. Papamanthou, “Parallel and dynamic searchable symmetric encryption,” in International Conference on Financial Cryptography and Data Security. Springer, 2013, pp. 258–274.
-  C. Chen, X. Zhu, P. Shen, J. Hu, S. Guo, Z. Tari, and A. Y. Zomaya, “An efficient privacy-preserving ranked keyword search method,” IEEE Transactions on Parallel and Distributed Systems, vol. 27, no. 4, pp. 951–963, 2016.
-  Z. Fu, K. Ren, J. Shu, X. Sun, and F. Huang, “Enabling personalized search over encrypted outsourced data with efficiency improvement,” IEEE transactions on parallel and distributed systems, vol. 27, no. 9, pp. 2546–2559, 2016.
-  M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni, “Locality-sensitive hashing scheme based on p-stable distributions,” in Proceedings of the Twentieth Annual Symposium on Computational Geometry, ser. SCG ’04. New York, NY, USA: ACM, 2004, pp. 253–262. [Online]. Available: http://doi.acm.org/10.1145/997817.997857
-  B. H. Bloom, Space/time trade-offs in hash coding with allowable errors. ACM, 1970.
-  J. Macqueen, “Some methods for classification and analysis of multivariate observations,” in Proc. of Berkeley Symposium on Mathematical Statistics and Probability, 1967, pp. 281–297.
-  M. Ester, H. P. Kriegel, J. Sander, and X. Xu, “A density-based algorithm for discovering clusters in large spatial databases with noise,” 1996.
-  C. Biernacki, G. Celeux, and G. Govaert, “Assessing a mixture model for clustering with the integrated completed likelihood,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 7, pp. 719–725, 1998.
-  J. B. Lovins, “Development of a stemming algorithm,” Mech. Translat. & Comp. Linguistics, vol. 11, no. 1-2, pp. 22–31, 1968.
-  P. Willett, “The porter stemming algorithm: then and now,” Program, vol. 40, no. 3, pp. 219–223, 2006.
-  G. Salton and M. J. McGill, Introduction to Modern Information Retrieval. New York, NY, USA: McGraw-Hill, Inc., 1986.
-  M. Calderbank, “The rsa cryptosystem: History, algorithm, primes,” 2007.
-  O. Goldreich, “Secure multi-party computation,” Manuscript. Preliminary version, vol. 78, 1998.
-  A. Swaminathan, Y. Mao, G. M. Su, H. Gou, A. L. Varna, S. He, M. Wu, and D. W. Oard, “Confidentiality-preserving rank-ordered search,” in ACM Workshop on Storage Security and Survivability, Storagess 2007, Alexandria, Va, Usa, October, 2007, pp. 7–12.
-  W. Sun, B. Wang, N. Cao, M. Li, W. Lou, Y. T. Hou, and H. Li, “Privacy-preserving multi-keyword text search in the cloud supporting similarity-based ranking,” in Proceedings of the 8th ACM SIGSAC symposium on Information, computer and communications security. ACM, 2013, pp. 71–82.
-  S. M. Bellovin and W. R. Cheswick, “Privacy-enhanced searches using encrypted bloom filters.” IACR Cryptology ePrint Archive, vol. 2004, p. 22, 2004.
-  Y.-C. Chang and M. Mitzenmacher, “Privacy preserving keyword searches on remote encrypted data,” in International Conference on Applied Cryptography and Network Security. Springer, 2005, pp. 442–455.
-  “www.manythings.org:everyday vocabulary anagrams,” Published online, 2001. [Online]. Available: http://www.manythings.org/anagrams/