Over the past few years, combining coding and encryption in a single algorithm to reduce complexity has become a tempting approach for securing data during transmission and storage [1, 2, 3, 4, 5, 6, 7, 8]. This approach aims to extend the functionality of coding algorithms to achieve both coding and encryption simultaneously in a single process, without the additional encryption stage required by traditional schemes such as Pretty Good Privacy (PGP)  and Internet Protocol Security (IPsec) [10, 11].
According to [7, 8], employing the combined simultaneous compression-encryption approach greatly reduces the resources required for encryption (computational and power resources). Also, the new approach preserves all available standard features which are lost when applying traditional encryption schemes, such as progressive transmission for JPEG2000  (also available for JPEG ) and the random access feature (also called compressed domain processing) in JPEG2000. Furthermore, the new approach achieves additional features and capabilities over traditional encryption schemes, such as multilevel security access. The most attractive target for this new approach is the arithmetic coder.
The arithmetic coder is a lossless entropy coder used as the last compression stage in most widespread multimedia coding standards [13, 12, 14, 15, 16, 17, 18]. This is due to its higher compression efficiency compared to the traditional Huffman coder . The arithmetic coder is included in the JPEG image codec  and the H.263 video codec  as an alternative option to the Huffman coder. For more recent multimedia standards requiring higher compression performance, such as the JPEG2000  and JBIG  image codecs and the H.264  and H.265  video codecs, the arithmetic coder is mandatory.
In this paper, lightweight authentication and integrity capabilities are proposed, exploiting the nonlinear properties of the arithmetic coder. The following section describes various properties of arithmetic coders in the detail necessary to explain the rest of the paper. Section III illustrates the previous work related to the proposed technique. Section IV provides a complete explanation of the proposed technique and introduces comparisons between the proposed technique and related works. Finally, conclusions and contributions of this paper are summarized in Section V.
II Arithmetic Coding
Arithmetic coding is a variable-length entropy coding technique used for lossless data compression. The first approaches to arithmetic coding were invented in the 1960s . Arithmetic coding then gained more interest in the 1980s due to its high compression efficiency compared to the well-known Huffman coding algorithm , because the arithmetic coder overcomes the constraint that each symbol has to be coded by an integer number of bits, as will be described with an example in subsection II-B. Nowadays, many applications use arithmetic coding, as described in section I.
II-A Simplified Example
The following example describes the process of arithmetic encoding and decoding. Assume a discrete-memoryless source with four symbols with given probabilities. The encoder creates a model called a Probability Map spanning from 0 to 1, as in Fig.1.
Generally, the algorithm for arithmetic encoding works as follows:
Coding starts with a current interval [L,H], which is initialized to [0,1]. In this context, L and H are abbreviations of Low and High respectively.
For each symbol of the uncompressed data to be coded, two steps are performed:
The current interval is divided into subintervals, one subinterval for each symbol. The size of a symbol’s subinterval is directly proportional to the symbol’s predicted probability.
The encoder selects the subinterval assigned to the symbol that actually occurs, and marks it as the new current interval.
The width of the final subinterval equals the product of the probabilities of the symbols that actually occurred. Then, the encoder chooses any point within this final interval, and that point is the output of the encoding process.
Fig.2 shows the above procedure for our simple example when coding a stream of data. Here, the output may be any point within the final sub-interval, so any convenient point can be selected.
Now, the arithmetic decoder receives the point, and the decoder must have the same Probability Map as the encoder and must be informed how many symbols to decode. The decoder does the same as the encoder: it finds that the point is located inside the third interval, so the first symbol will be the third symbol. The decoder then divides the third sub-interval [0.3,0.6] with the same probability ratios as in Fig.2. Now, the point is located inside the first sub-interval, so the second symbol will be the first symbol. Proceeding in the same way until the end of the stream (the decoder knows that there are 6 symbols), the decoder recovers the coded stream.
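The encoding and decoding procedure above can be sketched as a toy, infinite-precision arithmetic coder. The four symbols and their probabilities below are assumptions for illustration, not the values from the paper's figures:

```python
# A toy (infinite-precision, float-based) arithmetic coder for a
# hypothetical 4-symbol discrete-memoryless source.
SYMBOLS = ['a', 'b', 'c', 'd']
PROBS   = [0.2, 0.1, 0.3, 0.4]   # must sum to 1

def cumulative(probs):
    """Cumulative lower bound of each symbol's subinterval."""
    lows, acc = [], 0.0
    for p in probs:
        lows.append(acc)
        acc += p
    return lows

def encode(message):
    low, high = 0.0, 1.0
    lows = cumulative(PROBS)
    for s in message:
        i = SYMBOLS.index(s)
        width = high - low
        high = low + width * (lows[i] + PROBS[i])
        low  = low + width * lows[i]
    return (low + high) / 2          # any point inside [low, high) works

def decode(point, n_symbols):
    low, high = 0.0, 1.0
    lows = cumulative(PROBS)
    out = []
    for _ in range(n_symbols):
        width = high - low
        for i, s in enumerate(SYMBOLS):   # locate the matching subinterval
            lo = low + width * lows[i]
            hi = lo + width * PROBS[i]
            if lo <= point < hi:
                out.append(s)
                low, high = lo, hi
                break
    return ''.join(out)

msg = 'cadbcd'
pt = encode(msg)
assert decode(pt, len(msg)) == msg
```

Note that the decoder needs only the shared probability map and the symbol count, exactly as described above.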
II-B Advanced Example
Subsection II-A describes a basic implementation for the arithmetic coder which has the following problems:
As the stream of data grows longer, the calculations become more and more complex, as infinite precision would be needed.
There is no mechanism to optimally select the point representing the last sub-interval. This point theoretically requires infinite precision, as the encoder may select a point which needs a large number of bits to represent. In the example in Subsection II-A, the encoder could select a point with a much longer binary representation instead of a short one.
For the first problem, two important properties should be noticed. First, the coding process works with probabilities (i.e. ratios), so it is not obligatory to work within the range [0,1] only; the encoder and decoder can rescale the working region to [0,2] or [0,5] with the same probabilities (ratios). Also, shifting the range to [1,2] or [7,8] with the same ratios has no effect on the arithmetic coding efficiency, as long as the decoder at the other side uses the same conditions. Moreover, the encoder and decoder can do both shifting and rescaling at the same time (for example, working in a range like [1,4] or [5,10]). The only two restrictions are that the encoder and decoder must operate in the same manner and that the probability region assigned to each symbol must match the actual symbol probability.
The second property can be concluded from Fig.2. It is clear that over the coding process, for each sub-interval, the upper bound never increases (it may stay the same or decrease) and the lower bound never decreases (it may stay the same or increase). So, the upper and lower bounds converge across the encoding or decoding process. Thus, as the lower and upper values start to converge, it is likely for both to have identical MSBs (most significant bits). Once the upper and lower bounds have matching MSBs, those MSBs will remain matched until the end of the encoding or decoding process; from that point on, they are fixed across all following coding stages. Shifting the matched MSBs out to the encoded data stream and appending a 0-bit as the new LSB of both the lower and upper limits doubles the current sub-interval and reduces the complexity of the calculations. For example:
Assuming Lower Range = 1011010101
Assuming Upper Range = 1101001000
Shift out the MSB and write it to the encoded output:
Lower Range = 011010101_
Upper Range = 101001000_
Adding the new LSB:
Lower Range = 0110101010
Upper Range = 1010010000
So, encoding and decoding processes can be performed within a fixed binary precision, and hence the first problem is solved.
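The MSB-matching renormalization above can be sketched as follows, using the paper's convention of appending a 0-bit to both bounds; the 10-bit register width matches the worked example and is otherwise an assumption:

```python
# Renormalization sketch: shift out every matching MSB of the low/high
# registers and append it to the output bit stream.
WIDTH = 10                      # bits in the low/high registers
MASK  = (1 << WIDTH) - 1
TOP   = 1 << (WIDTH - 1)        # MSB position

def renormalize(low, high, out_bits):
    """Shift out matching MSBs, doubling the current sub-interval."""
    while (low & TOP) == (high & TOP):
        msb = (low & TOP) >> (WIDTH - 1)
        out_bits.append(msb)
        low  = (low << 1) & MASK     # drop MSB, append a 0-bit
        high = (high << 1) & MASK    # drop MSB, append a 0-bit
        # (many practical coders append a 1-bit to 'high' instead)
    return low, high

bits = []
low, high = renormalize(0b1011010101, 0b1101001000, bits)
# exactly one matching MSB (1) is shifted out, as in the worked example
assert bits == [1]
```

Running this reproduces the worked example: the registers become 0110101010 and 1010010000.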
The second problem is easy to solve, as the process of finding the optimum binary representation for the width of the final region is simple, as shown by the following numerical example:
If the final region is , only one bit is needed to represent it (As ) as this region can’t be positioned within the interval without passing across any of .
If the final region is , only two bits are needed to represent it (As ).
If the final region is , only three bits are needed to represent it (As ).
The width of the final region is the product of the probabilities of all symbols included in the coded stream, as explained in Fig.2. Hence, the minimum number of bits (n) required for representing a region of length (A) is:

n = ⌈−log₂(A)⌉ (1)

Note that −log₂(A) in equation (1) is exactly the total amount of information contained within the coded bit-stream. So, the represented bit-stream here is the optimum (lowest possible length) representation. This is why the arithmetic coder achieves better compression than Huffman coders, as stated in .
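Equation (1) can be checked numerically: the width of the final region is the product of the coded symbols' probabilities, and the minimum bit count equals the rounded-up total information content of the message. The message and probabilities below are assumptions for illustration:

```python
import math
from functools import reduce

# Verify that ceil(-log2(final region width)) equals the rounded-up
# total self-information of the coded message.
probs = {'a': 0.2, 'b': 0.1, 'c': 0.3, 'd': 0.4}
message = 'cadbcd'

# width of the final region = product of the symbols' probabilities
width = reduce(lambda w, s: w * probs[s], message, 1.0)
n_bits = math.ceil(-math.log2(width))     # equation (1)

# total self-information of the message, in bits
info = sum(-math.log2(probs[s]) for s in message)
assert n_bits == math.ceil(info)          # both give 12 bits here
```

This illustrates why the arithmetic coder approaches the entropy bound: it spends fractional bits per symbol and only rounds up once, at the end of the whole stream.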
II-C Arithmetic Coder for JPEG and JPEG2000
While the MQ-coder is the standard implementation of the arithmetic coder used in the JPEG2000 standard , the QM-coder is the one used in JPEG . The concepts and ideas are the same for both the MQ-coder and the QM-coder. Consequently, only one of them, the QM-coder, will be described here.
The QM-coder  is a multiplication-free adaptive binary arithmetic coder that codes binary data streams with simple lookup-table-based operations. In order to avoid complex multiplications and scaling calculations, the QM encoder specifies a discrete-probability table containing 113 states, as described in Table I. The only significant implementation difference between the MQ-coder and the QM-coder is that the probability table of the MQ-coder contains 47 states.
Each row of the QM-coder's table is considered a state representing a different probability map which, due to the binary nature of the QM-coder, can be modeled simply by the probability of the Less Probable Symbol (LPS), denoted by Qe. According to Table I, the probabilities of the MPS and the LPS will be (1 − Qe) and Qe respectively.
The QM-coder starts at an initial table entry (state), which may be modified through the coding process according to the entries NMPS (Next index after MPS) and NLPS (Next index after LPS). During the coding process, the encoder decides whether a received input bit is coded as MPS or LPS; the next state will then be NMPS or NLPS respectively. However, if the Switch flag bit equals 1, the coder exchanges the values of MPS and LPS.
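The state-adaptation rule can be sketched as follows. The four-row table below is a made-up miniature stand-in for the standard's 113-entry (QM) or 47-entry (MQ) table; its Qe values and transitions are assumptions, but the columns mirror Table I (Qe, NMPS, NLPS, Switch):

```python
# Minimal QM/MQ-style adaptive state machine.
# Each entry: (Qe, NMPS, NLPS, switch)
TABLE = [
    (0.49, 1, 0, 1),   # state 0: near-equiprobable; Switch=1 swaps MPS
    (0.30, 2, 0, 0),
    (0.15, 3, 1, 0),
    (0.05, 3, 2, 0),
]

class AdaptiveState:
    def __init__(self, index=0, mps=0):
        self.index = index   # current row of the probability table
        self.mps = mps       # current More Probable Symbol (0 or 1)

    def update(self, bit):
        """Adapt the state after coding one binary decision."""
        qe, nmps, nlps, switch = TABLE[self.index]
        if bit == self.mps:            # decision coded as MPS
            self.index = nmps
        else:                          # decision coded as LPS
            if switch:                 # LPS became more probable:
                self.mps ^= 1          # exchange MPS and LPS senses
            self.index = nlps

st = AdaptiveState()
for b in [0, 0, 0, 1]:    # three MPS decisions, then one LPS
    st.update(b)
assert st.index == 2 and st.mps == 0
```

Runs of MPS decisions walk the state toward smaller Qe (a more skewed model), while an LPS decision backs it up, which is the table-driven adaptation described above.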
II-D Avalanche Effect for the Arithmetic Coder
The avalanche effect of the arithmetic coder is an important criterion for using the arithmetic coder for security. According to [1, 2, 3], the arithmetic coder is characterized by high error sensitivity and error propagation properties. Furthermore, it is proven by  that any arithmetic coder can be considered a chaotic random generator with proven cryptographic nonlinear properties. Moreover, a practical experiment described in , using the NIST statistical test tool , supports these cryptographic properties. Consequently, any change in the input bit-stream at the encoder/decoder side (even a single bit) leads to a huge avalanche effect over all the following encoded/decoded output bit-stream. The following example demonstrates these properties.
Using the same discrete-memoryless source as in subsection II-A, when coding the message, the detailed coding process and final coding result are as in Fig.3. According to Fig.3, the resulting point can be used as the coding output for the message. If the decoder has the same probability map, the decoder functional diagram will be the same as in Fig.3 and the decoded message will be identical to the original message. Now, if the decoder uses a wrong probability map (for example, different probabilities assigned to the symbols), the decoder functional diagram will be as in Fig.4 and the decoded message will differ from the original message. Let us name this the first type of errors, in which the decoder has a different probability map. This type of errors appears even when changing only the order of the symbols, without changing the probability value of any symbol.
In another type of errors, called the second type of errors, the received point itself has been changed. The coded message corresponds to a binary representation; changing it by a single bit error leads the decoder to start from a different point. Here, the decoder will have a functional diagram as in Fig.5 and the recovered message will contain errors.
When applied to any arithmetic coder, this type of errors affects all the symbols following the bit error. If the error is in the first bit, the whole message will be decoded incorrectly, but if the error affects only the last usable bit (i.e. not the padding bits), the whole message will be decoded correctly except the last symbol.
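This second type of errors can be demonstrated numerically: perturbing the received point leaves the symbols before the error position intact and corrupts everything after it. The source model and point values below are assumptions for illustration:

```python
# Demonstration of the "second type" of errors: a small perturbation
# of the received point corrupts all symbols after the error position.
SYMBOLS = ['a', 'b', 'c', 'd']
PROBS   = [0.2, 0.1, 0.3, 0.4]

def decode(point, n):
    """Decode n symbols from a point, given the shared probability map."""
    low, high = 0.0, 1.0
    out = []
    for _ in range(n):
        width, acc = high - low, 0.0
        for s, p in zip(SYMBOLS, PROBS):
            lo = low + width * acc
            hi = lo + width * p
            if lo <= point < hi:
                out.append(s)
                low, high = lo, hi
                break
            acc += p
    return ''.join(out)

good = decode(0.3159, 6)
bad  = decode(0.3159 + 2**-12, 6)   # tiny perturbation of the point
assert good != bad                  # the streams diverge...
assert good[:4] == bad[:4]          # ...but only after some position
```

The later the perturbation falls within the point's binary expansion, the more leading symbols survive, matching the first-bit/last-bit behavior described above.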
II-E Adapting the Arithmetic Coder for Security
Two types of errors for the arithmetic coder have been described in the previous subsection. The first type of errors can be used inside the arithmetic coding stage without any additional encryption stage (i.e. without other additional processing). This makes the arithmetic coder perform both compression and encryption simultaneously [7, 8]. Besides, this type of errors can also be used for error detection  and correction , which will be described in section III.
The second type of errors can be used in tandem with traditional digital signature algorithms to sign a small part of the bit-stream, achieving low-complexity integrity and authentication capabilities, which is the main idea of the proposed work, described in section IV. This type of errors is used outside the arithmetic coding stage.
When using the first type of errors, the following criterion must be carefully taken into account. Referring to Fig.3, the final interval has a certain width, but when applying the first type of errors at the encoder (to achieve encryption) by swapping the probability values of two symbols in the probability model, as described in Fig.6, the width of the final interval shrinks to a fraction of the original interval width without encryption. Thus, according to equation (1), the output bit-stream will be expanded by an extra 6 bits in this example, as this type of errors reduces the compression efficiency because it doesn't maintain the recommended statistical model. Besides, it should be noted that as the message grows longer, the number of needed extra bits increases. This is an important criterion to consider, as it may cause expansion instead of compression, as described in  with a practical example.
To achieve security without sacrificing the compression efficiency, the probability model should be maintained without any modifications. This can be done by applying the same permutation to both the probability model and the symbols' order (i.e. maintaining the same probability region width for each symbol). This doesn't affect the compression efficiency, as compression efficiency is not affected by how the symbols are ordered within the probability model. This criterion has been utilized for joint compression and encryption in [7, 8].
Now, applying this type of errors to Fig.3 by permuting the probability model over the symbols, the obtained results are described in Fig.7. The width of the final interval is the same for both Fig.3 and Fig.7. So, the compression efficiency is maintained and an additional security benefit is gained.
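The claim can be checked numerically: permuting the symbol order (each symbol keeping its own probability width) changes the coded point but not the width of the final interval, so the output length from equation (1) is preserved. The source model, message, and permutation below are assumptions for illustration:

```python
def encode_interval(message, order, probs):
    """Return the final [low, high) interval for a given symbol order."""
    low, high = 0.0, 1.0
    for s in message:
        width, acc = high - low, 0.0
        for t in order:                 # subintervals laid out in 'order'
            if t == s:
                high = low + width * (acc + probs[t])
                low  = low + width * acc
                break
            acc += probs[t]
    return low, high

probs = {'a': 0.2, 'b': 0.1, 'c': 0.3, 'd': 0.4}
msg = 'cadbcd'
lo1, hi1 = encode_interval(msg, ['a', 'b', 'c', 'd'], probs)  # public order
lo2, hi2 = encode_interval(msg, ['d', 'b', 'a', 'c'], probs)  # secret order

# same final width (same compression), different interval (security)
assert abs((hi1 - lo1) - (hi2 - lo2)) < 1e-12
assert abs(lo1 - lo2) > 1e-6
```

A decoder without the secret ordering lands in the wrong subintervals from the first symbol onward, which is exactly the first type of errors exploited for encryption in [7, 8].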
III Review of Previous Work
III-A Forbidden Symbol
Forbidden Symbol(s) [25, 26, 27] is a technique utilizing the nonlinearity, high error propagation and high error sensitivity of the arithmetic coder to extend its usage to error detection, as described in , in tandem with compression and encryption. Furthermore, according to , error correction capabilities are also achievable. This is done by inserting one or more dummy symbols within the probability map. This technique can be considered an application of the first type of errors, described in subsections II-D and II-E.
Dummy symbol(s) are assigned relatively small probability value(s) (i.e. small region(s) in the probability map). The cost of this additional feature is reduced compression efficiency, as discussed below.
According to subsection II-E, assigning incompatible probability values within the probability map reduces the compression efficiency. As discussed in [28, 27], assuming the total probability of the forbidden symbol(s) is (ε), the actually used probability map will be (1 − ε) for each coding iteration. Hence, the following equation describes the amount of redundant bits per each coding step:

R = −log₂(1 − ε) (2)
In addition to equation (2), assuming the length of the uncompressed stream is N symbols, the following equation calculates the total additional length when coding this uncompressed stream with an arithmetic coder applying the concept of the forbidden symbol, where ε is the total probability of the forbidden symbol(s):

T = −N log₂(1 − ε) (3)
Practically, the value of ε can be quite small and its cost can be ignored. To prove this, consider the MQ-coder as an example. The minimum applicable width of a region for any symbol within the probability map of the MQ-coder can be taken as the assigned width for the forbidden symbol (ε). Hence, as the typical code-block length is 4096 bytes [29, 12], the maximum length of the bit-stream to be coded is 32768 bits. Assuming the least possible size of the total probability map, which equals 0.75 instead of 1 , then applying equation (2) and equation (3), only a small number of additional bits are added per each code-block. Regarding the sample JPEG2000 image in Fig.8, which contains 309 code-blocks when coded with libopenjpeg  with the default coding configurations, the maximum total additional size for such an image remains small (i.e. the maximum compression loss is negligible) per single forbidden symbol.
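Equations (2) and (3) make this overhead easy to estimate. The ε value and stream length below are assumptions chosen only to show the order of magnitude, not the paper's exact MQ-coder figures:

```python
import math

# Forbidden-symbol overhead per equations (2) and (3): each coding
# step wastes -log2(1 - eps) bits, where eps is the total probability
# reserved for the forbidden symbol(s).
def redundancy_per_step(eps):
    return -math.log2(1.0 - eps)                 # equation (2)

def total_redundancy(eps, n_steps):
    return n_steps * redundancy_per_step(eps)    # equation (3)

eps = 1.0 / 2**15          # a very small forbidden-symbol probability
n = 4096 * 8               # one 4096-byte code-block, one step per bit
extra = total_redundancy(eps, n)
# for a tiny eps the whole code-block costs only a couple of bits
assert 0 < extra < 16
```

Since −log₂(1 − ε) ≈ ε/ln 2 for small ε, the overhead scales linearly in both ε and the stream length, which is why a narrow forbidden region is practically free.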
The forbidden symbol technique is an error detection and correction scheme which cannot be used as an alternative to integrity . The main difference between integrity and error detection is that error detection detects unintended modifications of the data (within a certain limit), such as channel errors, whereas integrity detects an unlimited number of errors, both intended and unintended. Integrity cannot correct errors, only detect them.
Thus, another standard for securing the transmission and storage of images, called JPsec (Secure JPEG 2000), has been designed . JPsec provides encryption, source authentication and data integrity in a compliant manner. This means that all available features, such as progressive transmission and compressed domain processing, are not affected. The techniques described in [7, 8] achieve only encryption, with higher efficiency than JPsec, but cannot achieve data integrity and source authentication. The proposed technique extends the scheme in , which is more efficient than , to achieve integrity and authentication besides encryption.
IV The Proposed Technique
IV-A Design Idea
In the context of this paper, the term complete data stream (CDS) refers to a stream of uncompressed symbols (just before the arithmetic coder) to be coded using a complete cycle of coding by the arithmetic coder. A complete cycle of coding starts by initializing the arithmetic coder's registers and ends by flushing them, as described in [13, 12].
For JPEG2000, a typical code-block equals 4096 bytes [12, 29], so the maximum length of a CDS will be 4096 bytes. The minimum length of a CDS for JPEG2000 cannot be stated here, as the CDS of each code-block depends on the statistics of the code-block itself. To be more accurate, applying the practical implementation described in  to the image in Fig.8, Table II gives the actual calculations for the length of the CDS.
For JPEG, the CDS can be much longer (as whole images can be coded within a single CDS). Thus, applying the JPEG implementation in  to all 29 reference images in , Table III gives the actual calculations for the length of the CDS.
As discussed and proven in detail with examples in the previous sections, the arithmetic coder is characterized by high nonlinearity, high error propagation and high error sensitivity. Thus, by gathering a small part at the end of each CDS and applying any digital signature scheme to sign these gathered bytes only, integrity and source authentication can be achieved at a small cost, as described in Fig.9. Also, the encoder must generate a unique random value (called a nonce) for each image. The nonce is appended to the gathered bytes before applying the digital signature scheme. The nonce and the final signature can be sent within the image inside the comment field of the JPEG and JPEG2000 formats.
Once the signed image has been received at the decoder side, the image is first decoded by the arithmetic coder. Then, the decoder gathers the data (at the end of each CDS, like the encoder) and extracts the nonce, then applies the hash function to the gathered data and the nonce exactly as the encoder did. After that, the decoder extracts the signature, decrypts it with the proper public key and compares this decrypted output with the computed hash to verify the integrity and authenticate the source of the image, as described in Fig.10.
Any other secure hash function or private-key algorithm, or even another signature algorithm, can be used. Using a private-key algorithm assures the source authentication of the image.
If any error occurs over the compressed stream, even a single bit at any position, this error will diffuse through the remaining bits of the stream and will strongly affect the last few bits at the end of the recovered stream at the decoder side; subsequently, this effect can be detected by the signature algorithm. Consequently, instead of applying the signature algorithm to all bits of the coded image, it is applied only to a small portion of the image with the same result, thanks to the special characteristics of the arithmetic coder. This can be considered as utilizing the second type of errors described in subsection II-D.
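The encoder/decoder flow of Fig.9 and Fig.10 can be sketched as follows. A keyed HMAC stands in here for a real digital-signature algorithm (e.g. DSS); the tail length, key handling, and stand-in CDS contents are assumptions for illustration:

```python
import hashlib
import hmac
import os

# Sketch: gather the last 16 bytes of each CDS, prepend a fresh nonce,
# hash, and sign. HMAC-SHA256 is a stand-in for a signature scheme.
TAIL_LEN = 16   # bytes gathered per CDS

def gather(cds_list, nonce):
    """Nonce followed by the tail of every complete data stream."""
    return nonce + b''.join(cds[-TAIL_LEN:] for cds in cds_list)

def sign_image(cds_list, key):
    nonce = os.urandom(16)                         # unique per image
    digest = hashlib.sha256(gather(cds_list, nonce)).digest()
    tag = hmac.new(key, digest, hashlib.sha256).digest()
    return nonce, tag                   # carried in a comment field

def verify_image(cds_list, key, nonce, tag):
    digest = hashlib.sha256(gather(cds_list, nonce)).digest()
    expected = hmac.new(key, digest, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

key = b'shared-demo-key'
streams = [os.urandom(100), os.urandom(4096)]   # stand-in decoded CDSs
nonce, tag = sign_image(streams, key)
assert verify_image(streams, key, nonce, tag)

# any error in a CDS propagates to its tail bytes and is detected
streams[1] = streams[1][:-1] + bytes([streams[1][-1] ^ 1])
assert not verify_image(streams, key, nonce, tag)
```

The point of the scheme is that only `TAIL_LEN` bytes per CDS enter the hash: the arithmetic coder's error propagation guarantees that any upstream bit error reaches those tail bytes.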
IV-B Implementation Issues
According to , any block cipher or hash function that uses a block length of 64 bits is vulnerable to a practical attack called the birthday-paradox attack. Thus, the minimum length of the gathered part per each CDS is set to 16 bytes.
Adding the nonce to the gathered bytes is mandatory because the last bytes of the CDS are, with high probability, equal to zeros due to quantization [13, 12, 29]. Thus, even using a strong signature algorithm is not secure when the signed gathered data may be the same for more than one image. So, adding a unique nonce, with a minimum length of 16 bytes, as the first block of the gathered data for each image can be considered secure.
IV-C Comparison with the Forbidden Symbol Technique
As described in the previous subsections, the proposed technique utilizes the special features of the arithmetic coder but is applied outside the arithmetic coder. So, it does not affect the arithmetic coder's operation or the compression efficiency, unlike the forbidden symbol technique. Additionally, the proposed technique has no limit on the number of detected errors, as it is an integrity scheme, not an error detection scheme. Moreover, for the forbidden symbol technique, if an error occurs and the decoder does not pass through any forbidden symbol, the error will not be detected, unlike the proposed technique, which utilizes a proven secure cryptographic hash function.
IV-D Comparison with JPsec
JPsec employs a cryptographic signature algorithm like the proposed technique, but JPsec applies this algorithm to all bytes of the CDS, unlike the proposed technique, which is more efficient. Considering the JPEG2000 case, with a maximum CDS of 4096 bytes, the proposed technique is applied to only 16 bytes per stream, which is 256 times faster than JPsec. To be more practical, considering the mean CDS length from Table II, the proposed technique is approximately 96 times faster than JPsec.
Although JPsec also achieves encryption besides integrity and source authentication, the efficient encryption technique described in  can be combined with the proposed technique to achieve all the security services of JPsec with a notably smaller amount of resources.
V Conclusions and Future Work
In this paper, a new lightweight technique is proposed to attain integrity and source authentication utilizing the arithmetic coder in a low-cost manner compared to the JPsec standard. Unlike the forbidden symbol technique, the proposed technique does not affect the compression efficiency of the arithmetic coder, and it provides a robust integrity scheme. By combining the proposed technique with , efficient and low-complexity joint compression, encryption, integrity and source authentication can be achieved for systems with limited resources, such as IoT and embedded systems.
-  “A secure arithmetic coding based on Markov model,” Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 6, pp. 2554–2562, Jun.
-  M. Sinaie and V. T. Vakili, “A low complexity joint compression-error detection-cryptography based on arithmetic coding,” in 10th International Conference on Information Science, Signal Processing and their Applications (ISSPA 2010), May 2010, pp. 233–236.
-  S. Chokchaitam and P. Teekaput, “Protecting embedded error detection arithmetic coding from eavesdroppers,” in 2005 Digest of Technical Papers. International Conference on Consumer Electronics, 2005. ICCE., Jan 2005, pp. 59–60.
-  C.-P. Wu and C.-C. Kuo, “Design of integrated multimedia compression and encryption systems,” Multimedia, IEEE Transactions on, vol. 7, no. 5, pp. 828–839, 2005.
-  J. Zhou, Z. Liang, Y. Chen, and O. Au, “Security analysis of multimedia encryption schemes based on multiple huffman table,” Signal Processing Letters, IEEE, vol. 14, no. 3, pp. 201–204, 2007.
-  N. Nagaraj, P. G. Vaidya, and K. G. Bhat, “Arithmetic coding as a non-linear dynamical system,” Communications in Nonlinear Science and Numerical Simulation, vol. 14, no. 4, pp. 1013 – 1020, 2009.
-  M. Grangetto, E. Magli, and G. Olmo, “Multimedia selective encryption by means of randomized arithmetic coding,” Multimedia, IEEE Transactions on, vol. 8, no. 5, pp. 905–917, 2006.
-  H. Y. El-Arsh and Y. Z. Mohasseb, “A new light-weight jpeg2000 encryption technique based on arithmetic coding,” in MILCOM 2013 - 2013 IEEE Military Communications Conference, Nov 2013, pp. 1844–1849.
-  J. Callas et al. (2007, Nov.) OpenPGP Message Format. RFC 4880 (Proposed Standard). Internet Engineering Task Force.
-  P. Karn et al. (1995, Aug.) The ESP DES-CBC Transform. RFC 1829 (Proposed Standard). Internet Engineering Task Force.
-  S. Frankel et al. (2003, Sep.) The AES-CBC Cipher Algorithm and Its Use with IPsec. RFC 3602 (Proposed Standard). Internet Engineering Task Force.
-  ISO, JPEG 2000 image coding system – Part 1: Core coding system. www.iso.org, 2004, no. ISO/IEC 15444-1.
-  ——, Digital compression and coding of continuous-tone still images: Requirements and guidelines. www.iso.org, 1994, no. ISO/IEC 10918-1.
-  ITU-T, Video coding for low bit rate communication. www.itu.int, 2005, no. E 27414.
-  ——, Advanced video coding for generic audiovisual services. www.itu.int, 2013, no. E 38445.
-  ——, High efficiency video coding. www.itu.int, 2016, no. E 41298.
-  ISO, Coded representation of picture and audio information – Progressive bi-level image compression. www.iso.org, 1993, no. ISO/IEC 11544.
-  ——, Coding of audio-visual objects – Part 3: Audio. www.iso.org, 2009, no. ISO/IEC 14496-3.
-  G. K. Wallace, “The jpeg still picture compression standard,” IEEE Transactions on Consumer Electronics, vol. 38, no. 1, pp. xviii–xxxiv, Feb 1992.
-  N. Abramson, Information Theory and Coding. McGraw-Hill Inc.,US, 1963.
-  L. W. Couch, Digital and Analog Communication Systems (8. ed.). Pearson, 2012.
-  M. Sinaie and V. T. Vakili, “Secure arithmetic coding with error detection capability,” EURASIP J. on Information Security, vol. 2010, pp. 4:1–4:9, Sep 2010. [Online]. Available: http://dx.doi.org/10.1155/2010/621521
-  National Institute of Standards and Technology. (2010, April) NIST statistical test suite. [Online]. Available: http://csrc.nist.gov/groups/ST/toolkit/rng/documentation_software.html
-  ISO, JPEG 2000 image coding system: Wireless. www.iso.org, 2007, no. ISO/IEC 15444-11.
-  C. Boyd, J. G. Cleary, S. A. Irvine, I. Rinsma-Melchert, and I. H. Witten, “Integrating error detection into arithmetic coding,” IEEE Transactions on Communications, vol. 45, no. 1, pp. 1–3, Jan 1997.
-  J. Chou and K. Ramchandran, “Arithmetic coding-based continuous error detection for efficient arq-based image transmission,” IEEE Journal on Selected Areas in Communications, vol. 18, no. 6, pp. 861–867, June 2000.
-  M. Grangetto, G. Olmo, and P. Cosman, “Error correction by means of arithmetic codes: an application to resilient image transmission,” in Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP ’03). 2003 IEEE International Conference on, vol. 4, April 2003, pp. IV–273–6 vol.4.
-  R. Anand, K. Ramchandran, and I. V. Kozintsev, “Continuous error detection (ced) for reliable communication,” IEEE Transactions on Communications, vol. 49, no. 9, pp. 1540–1549, Sep 2001.
-  T. Acharya and P. Tsai, JPEG2000 Standard for Image Compression: Concepts, Algorithms and VLSI Architectures. Wiley, 2005.
-  H. R. Sheikh, M. F. Sabir, and A. C. Bovik. Live image quality assessment database release 2. [Online]. Available: http://live.ece.utexas.edu/research/Quality/subjective.htm
-  B. Macq et al. Openjpeg library and applications. [Online]. Available: https://github.com/uclouvain/openjpeg
-  W. Stallings, Cryptography and network security - principles and practice (5. ed.). Prentice Hall, 2013.
-  ISO, JPEG 2000 image coding system – Part 8: Secure JPEG 2000. www.iso.org, 2007, no. ISO/IEC 15444-8.
-  T. Richter. A complete implementation of 10918-1 (jpeg). [Online]. Available: https://github.com/thorfdbg/libjpeg
-  “FIPS PUB 180-4, Secure Hash Standard (SHS),” 2015, U.S. Department of Commerce/National Institute of Standards and Technology.
-  “FIPS PUB 186-4, Digital Signature Standard (DSS),” 2013, U.S. Department of Commerce/National Institute of Standards and Technology.
-  K. Bhargavan and G. Leurent, “On the practical (in-)security of 64-bit block ciphers: Collision attacks on http over tls and openvpn,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’16. ACM, 2016, pp. 456–467. [Online]. Available: http://doi.acm.org/10.1145/2976749.2978423