I Introduction
In January of 2018, the Prime Minister of the UK appointed one of her ministers to focus on issues related to loneliness (https://www.theguardian.com/society/2018/jan/16/mayappointsministertacklelonelinessissuesraisedjocox). This acknowledges the basic human need to form connections with other people. From a technological point of view, Online Social Networks (OSNs) are typically used to reflect real-world social connections and establish new ones. Besides established global OSNs, there is a rising market for services that explicitly focus on the connections between people in proximity, i.e., neighborhood networks like Nextdoor (http://www.nextdoor.com/) and its competitors. We note the apparent desire and need of people to form connections with those in physical proximity. How can technology assist this need?
First, we observe that people use their smartphones to access social networking services; OSNs have shifted to Mobile Social Networks (MSNs). Facebook, for instance, lists 1.15 billion mobile daily active users on average for 2016 (https://investor.fb.com/investornews/pressreleasedetails/2017/FacebookReportsFourthQuarterandFullYear2016Results/default.aspx). Among the most frequent concerns with established OSNs and MSNs is the potential misuse of data and loss of user privacy. This concern is heightened by the highly sensitive user and sensor data that modern smartphones provide (cf. [1, 2]). One of the key reasons for these concerns is the centralized architecture of the systems – the social networking service provider has all the data and can use it beyond the level to which users intended to share it with the service provider [3]. By utilizing short-range wireless interfaces, the need for centralized servers in social networking scenarios can be reduced: utilizing Bluetooth, Wi-Fi Direct, or NFC, smartphones can communicate directly with each other.
Consider the stimulation of connections between people who do not know each other yet, utilizing a proximity-based MSN that takes into account the described privacy concerns of centralized social networking architectures. Previous work points to the approach such a system can take in order to foster user interaction: psychological studies point out that any social network is structured by homophily, which means that people who are similar to each other tend to connect with each other [4]. How can we determine the similarity of two users with mobile devices without contacting a central server?
When answering that question, we have to consider the applicability of the solution in a quickly changing mobile device-to-device context, as well as implications imposed by short-range technologies like NFC. This implies the use of only small amounts of data and a limited number of necessary data exchanges. Much of the information that is relevant for the social profile of a user is already available on the smartphone itself [5]. As the needed data is present and connectivity between devices is given, we focus on the question of how to process and compare the data under the bandwidth and computing constraints of mobile devices.
Based on the use case of two strangers meeting, we develop a method of similarity estimation for proximity-based MSNs. Two users can quickly approximate their similarity when meeting, without exchanging clear-text data or contacting any central server. While our approach is applicable to any other data that can be represented as a multiset, in this paper, we focus on one of the most typical features of social profiles: the musical taste of the user. Not only is listening to music one of the most typical uses of smartphones [6]; musical taste is, after gender, the most commonly disclosed profile feature on Facebook [7]. Musical taste is a common feature that users tend to identify with and can thus serve as an appropriate feature for similarity estimation or for the recommendation of new contacts.
In this paper, we present our approach that allows the estimation of the similarity of two users’ musical tastes based on probabilistic data structures. While approaches for set similarity estimation exist for the Bloom Filter (BF), we develop an approach for Counting Bloom Filters (CBFs) and Count-Min Sketches (CMSs), which are suitable for multisets. We discuss our approach based on experiments done with synthetic data and real user music listening history data. We conclude with a concrete approach that is applicable to multiset similarity estimations in device-to-device scenarios.
Hence, the main contributions of this paper are:

An approach to similarity estimations in proximity-based MSNs.

The introduction of new comparison metrics for CBFs and for CMSs.

An evaluation of the introduced metrics based on both synthetic and real data sets – showing support for our approach to space-efficient similarity estimations.
II Related Work
Before the advent of smartphones, a similar idea of social networking was described in [8]. Here, users exchange identifiers of existing OSNs with each other via Bluetooth and can look up each other’s information on the OSN. Having the data already available on the smartphone gives us the possibility to directly compare data instead of relying on existing OSNs. Some other papers present similar ideas, utilizing Bluetooth and central servers [9] or manually entered interests to find user similarities [10]. E-SmallTalker also follows the idea of sharing data via Bluetooth and describes a so-called Iterative Bloom Filter for finding the intersection of two sets of topics of interest [11]. More recent work deals with proximity-based mobile social networking: with E-Shadow, the user can see profiles of other users in proximity and has to evaluate manually whether he/she is interested in another user, without support for automatic similarity estimation [12]. In SANE, the devices of users with similar interests are used for forwarding messages [13], but there is no system to stimulate interactions between users or offer recommendations for new contacts.
Papers that focus more on the algorithmic side of this topic often deal with the research areas of private set intersection or secure multi-party computation. Here, the application scenarios usually require a much higher level of privacy than estimating similarity in the proximity-based social networking scenario; especially the factor of proximity reduces potential attack vectors. Often, multiple data exchanges are necessary for handshakes, key exchanges, etc. [14]. Furthermore, some approaches rely on third parties to perform homomorphic encryption [15] or need other peers to perform computations [16]. While these third parties usually do not learn anything about the two users, they are not needed in our approach, with which it is possible to estimate the similarity of two users with one single device-to-device data exchange.

III Background
Multisets. A multiset is a generalization of a set, allowing for multiple instances of each of its elements. A series of events for which the frequency is important – but the order is not – can be described as a multiset. Take, for example, the visited locations of a user. Every location or area can be described by a unique string; each visit adds one instance of the corresponding string to the multiset. Users with similar movement patterns will generate similar multisets. In this paper, we focus on the musical taste of users. Without the need to have the user explicitly enter this information, we can just collect data about the songs a user listened to [17]. Storing a unique string representation for each song for each time it is played yields a multiset that represents the musical taste of the user. In order to reduce the amount of data that needs to be exchanged, we want to avoid sending clear-text music playlists between clients.
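As a minimal illustration of this representation (not part of the original system), a listening history can be modeled as a multiset with a counting dictionary; the song identifier strings below are made up:

```python
from collections import Counter

# A listening history as a multiset: each play adds one more instance
# of the song's unique string identifier (identifiers are hypothetical).
history = Counter()
for song_id in ["artist-a/song-1", "artist-b/song-2", "artist-a/song-1"]:
    history[song_id] += 1

# The cardinality of an element is its play count.
print(history["artist-a/song-1"])  # → 2
```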
Bloom Filter (BF). Probabilistic data structures are able to represent large amounts of data space-efficiently. Querying the data yields results with a certain probability; the trade-off is between used memory and precision. A BF yields probabilistic set membership [18]. It consists of a bit vector with m bits, initialized with zeros, and k pairwise independent hash functions, each of which yields one position in the bit vector when hashing one element. When adding an element to the set, all k hash functions are applied, yielding k positions in the bit array, which then are set to 1. We visualize a BF in Figure 1. When querying for set membership, the BF can answer whether the element is definitely not in the set – when the query element hashes to at least one 0 in the bit vector – or is likely in the set – when all hash positions in the bit vector are 1. In the latter case, the queried element is either in the set or there is a collision with another element.
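A minimal BF sketch may help illustrate these operations. The paper does not specify a hash family; the salted SHA-256 scheme below is an assumption for illustration only:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit vector and k hash functions,
    here derived from salted SHA-256 (an illustrative choice)."""
    def __init__(self, m: int, k: int):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _positions(self, item: str):
        # Derive k positions by salting the hash input with the index i.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item: str) -> bool:
        # False → definitely not in the set; True → likely in the set.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter(m=64, k=3)
bf.add("song-a")
print(bf.might_contain("song-a"))  # → True (a BF has no false negatives)
```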
Counting Bloom Filter (CBF). An extension to the BF that adapts it to work with multisets is to make each field in the bit vector not binary but a counter [19]. An example is shown in Figure 2. The resulting CBF can yield a close estimation of the cardinality of the queried element in a multiset. Analogously to the false positives due to collisions in the original BF, the estimated cardinality from the CBF represents an upper bound of an element’s cardinality in the original multiset. In order to achieve a closer approximation of the cardinality of an element, using multiple hash functions can be useful: a single hash function can have collisions, and there can be collisions between hash functions.
In this paper, we use the term collision instead of hash collision, because the collisions that occur do not have to be hash collisions: as each element has to be mapped to the length of the (C)BF and not the entire namespace of the hash function, there may be collisions even if there is no hash collision. For example, two different hashes from the same hash function could be mapped to the same position of the (C)BF, resulting in a collision but, strictly speaking, without a hash collision. Utilizing multiple hash functions can help reduce the impact of collisions: when querying a CBF for the cardinality of an item x, it is hashed with each hash function and the lowest counter is returned. Through the described collisions, the yielded number is equal to or higher than the true cardinality.
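The counter update and minimum-query can be sketched as follows (the salted hash scheme is again an illustrative assumption):

```python
import hashlib

class CountingBloomFilter:
    """CBF sketch: counters instead of bits. A query returns the minimum
    counter over all k hash positions -- an upper bound on the true
    cardinality of the element in the multiset."""
    def __init__(self, m: int, k: int):
        self.m, self.k = m, k
        self.counters = [0] * m

    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item: str, count: int = 1) -> None:
        for pos in self._positions(item):
            self.counters[pos] += count

    def estimate(self, item: str) -> int:
        return min(self.counters[pos] for pos in self._positions(item))

cbf = CountingBloomFilter(m=128, k=2)
cbf.add("song-a", 5)
cbf.add("song-b", 2)
print(cbf.estimate("song-a"))  # ≥ 5: collisions can only inflate the estimate
```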
Count-Min Sketch (CMS). Another probabilistic data structure that works with multisets is the Count-Min Sketch [20]. It is often used to provide information about the frequency of events in streams of data. A CMS consists of w columns and d rows (cf. Figure 3). Each field is initialized with 0. Each row is associated with one hash function. Adding an element increments the counter at the d positions indicated by the hash functions.
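A CMS with d rows and w columns can be sketched as below; the salted-hash construction is an assumption for illustration:

```python
import hashlib

class CountMinSketch:
    """Count-Min Sketch sketch: d rows of w counters, one hash function
    per row; a frequency query takes the minimum across the rows."""
    def __init__(self, w: int, d: int):
        self.w, self.d = w, d
        self.table = [[0] * w for _ in range(d)]

    def _pos(self, row: int, item: str) -> int:
        h = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(h, 16) % self.w

    def add(self, item: str, count: int = 1) -> None:
        for row in range(self.d):
            self.table[row][self._pos(row, item)] += count

    def estimate(self, item: str) -> int:
        return min(self.table[row][self._pos(row, item)] for row in range(self.d))

cms = CountMinSketch(w=32, d=3)
cms.add("song-a", 3)
print(cms.estimate("song-a"))  # → 3 (exact here, as nothing else was added)
```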
It is worth stating the relationship between CBF and CMS: adding the rows of a CMS column-wise yields a CBF. A CBF with length m and one single (k = 1) hash function is equal to a CMS with width w = m and depth d = 1, given that the same hash function is used in the CMS:

\[ \mathrm{CBF}(m, k{=}1) = \mathrm{CMS}(w{=}m, d{=}1) \tag{1} \]
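This relationship can be checked directly: building a CMS and a CBF from the same elements with the same (illustrative) salted hash functions, the column-wise sum of the CMS rows reproduces the CBF counters:

```python
import hashlib

def position(item: str, i: int, size: int) -> int:
    """Position of `item` under the i-th salted hash (illustrative scheme)."""
    return int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16) % size

w, d = 32, 3          # CMS width and depth; the CBF uses m = w, k = d
items = ["a", "a", "b", "c"]

# Build a CMS and a CBF over the same elements and hash functions.
cms = [[0] * w for _ in range(d)]
cbf = [0] * w
for item in items:
    for i in range(d):
        cms[i][position(item, i, w)] += 1
        cbf[position(item, i, w)] += 1

# Summing the CMS rows column-wise yields exactly the CBF counters.
col_sums = [sum(cms[i][j] for i in range(d)) for j in range(w)]
print(col_sums == cbf)  # → True
```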
Comparison of two BFs. There are some papers dealing with the comparison of two sets utilizing BFs [21] [22] [23] [24]. In [21], the authors compare two BFs with a bitwise AND; a large number of 1s in the result indicates similarity. The authors of [22] calculate string similarity utilizing BFs by creating n-grams, adding those into BFs, and using the Dice coefficient (see Equation 3). The idea is that identical n-grams will hash to identical positions in the bit array as long as the same length and the same hash functions are used. In [23], the authors utilize BFs to estimate path similarity for paths in computer networks. They define a Bloom distance, which is the logical AND of both BFs, followed by counting the number of 1s in the result and dividing by the length of a BF. The authors of [24] use cosine similarity on two BFs to determine similarity. In [11], the authors iteratively compare two BFs: they start with a small BF with a high false positive rate and increase its size in a second round of comparisons if the similarity value of the first round was above a predefined threshold.

IV Metrics for Comparing CBFs and CMSs
Because in our scenario we are dealing with multisets, utilizing a BF – and thus leaving out the cardinality of each element in the multiset – would not reflect the musical taste of the user anymore. It is the cardinality of an element (”play count”) that indicates the number of times a specific song was played back.
In order to compare multisets, one can apply metrics similar to those suggested for the BF, i.e., cosine similarity and Dice coefficient. Both cosine similarity (in the case of positive values) and Dice coefficient yield a value between 0 and 1, where 0 indicates no similarity and 1 indicates sameness. The cosine similarity for two vectors A and B is defined as:

\[ \mathrm{cosSim}(A, B) = \frac{A \cdot B}{\lVert A \rVert \, \lVert B \rVert} \tag{2} \]
The numerator is the dot product of A and B. The denominator is the product of the lengths of the two vectors: \(\lVert A \rVert \, \lVert B \rVert\). Given a multiset A, we can interpret the cardinalities of the elements in the multiset as elements of a vector. Thus, for two multisets A and B, we will just write cosSim(A, B), assuming an appropriate vector representation of the cardinalities of the elements in A and B.
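Treating cardinalities as coordinates of sparse vectors, the multiset cosine similarity can be computed directly, e.g. with Python's Counter (an illustrative sketch, not from the paper):

```python
import math
from collections import Counter

def multiset_cos_sim(a: Counter, b: Counter) -> float:
    """Cosine similarity of two multisets: element cardinalities are
    treated as coordinates of sparse vectors."""
    dot = sum(a[x] * b[x] for x in a)
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

a = Counter({"song-1": 2, "song-2": 1})
b = Counter({"song-3": 4})
print(multiset_cos_sim(a, a))  # close to 1.0 (identical multisets)
print(multiset_cos_sim(a, b))  # → 0.0 (no common elements)
```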
The Dice coefficient of two sets A and B is given by:

\[ \mathrm{Dice}(A, B) = \frac{2\,\lvert A \cap B \rvert}{\lvert A \rvert + \lvert B \rvert} \tag{3} \]
In order to obtain the Dice coefficient for two multisets A and B, we can also use Equation 3 by employing the cardinality and the intersection of multisets. The cardinality gives the sum of all occurrences of all elements in the multiset. The intersection of two multisets A and B can be determined by the minimum function applied for each element x: if A(x) = a (denoting that A has exactly a instances of x) and B(x) = b, then the following holds for each element:

\[ (A \cap B)(x) = \min(a, b) \tag{4} \]
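Equations 3 and 4 combine into a direct computation on multisets; a short sketch:

```python
from collections import Counter

def multiset_dice(a: Counter, b: Counter) -> float:
    """Dice coefficient for multisets: twice the element-wise-minimum
    intersection divided by the sum of both cardinalities."""
    intersection = sum(min(a[x], b[x]) for x in a)
    total = sum(a.values()) + sum(b.values())
    return 2 * intersection / total if total else 0.0

a = Counter({"song-1": 3, "song-2": 1})
b = Counter({"song-1": 1, "song-3": 2})
print(multiset_dice(a, b))  # 2·min(3,1) / (4 + 3) = 2/7 ≈ 0.286
```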
To the best of our knowledge, there is no research specifically about the comparison of two CBFs or two CMSs. A prerequisite for the pairwise comparison is that the two data structures to be compared are of the same length and use the same hash functions; i.e., the same elements will always hash to the same positions in the data structure. Based on these prerequisites, we will now transfer the idea of both cosine similarity and Dice coefficient to CBFs as well as to CMSs, yielding metrics for these data structures.
IV-A Cosine Similarities for CBFs and CMSs
As the data structure of a CBF is a vector, we can immediately use Equation 2 for defining the cosine similarity cosSim(A, B) of two CBFs A and B.
For the cosine similarity of CMSs, we view each CMS as a collection of vectors (one vector per row), cf. Figure 3, and propose the following. (Notation: if A is a CBF, its positions will be denoted by A[j]; if P is a CMS, its positions will be denoted by P_i[j], where P_i is the i-th row.)
Definition 1 (CMS-cosSim).
Let P and Q be two CMSs with the same dimensions and utilizing the same hash functions. The CMS cosine similarity of P and Q is given by:

\[ \mathrm{CMScosSim}(P, Q) = \frac{1}{d} \sum_{i=1}^{d} \mathrm{cosSim}(P_i, Q_i) \tag{5} \]

where P_i and Q_i denote the i-th rows of P and Q.
Thus, the CMS cosine similarity of two CMSs averages the cosine similarity of all corresponding rows.
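A direct sketch of this definition, representing each CMS as a list of row vectors:

```python
import math

def cms_cos_sim(p, q):
    """CMS cosine similarity: average of the cosine similarities of
    corresponding rows of two same-shaped CMS tables."""
    def cos(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(x * x for x in v))
        return dot / (nu * nv) if nu and nv else 0.0
    return sum(cos(u, v) for u, v in zip(p, q)) / len(p)

# Two toy CMS tables with d = 2 rows and w = 3 columns.
p = [[1, 0, 2], [0, 3, 0]]
q = [[1, 0, 2], [0, 3, 0]]
print(cms_cos_sim(p, q))  # close to 1.0 (identical sketches)
```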
IV-B Dice Coefficients for CBFs and CMSs
The bitwise operations suggested for the comparison of two BFs do not work with CBFs and CMSs, as we have counters – instead of binary values – at each position, cf. Figures 2 and 3. Therefore, we transfer the idea of the Dice coefficient for multisets to CBFs as follows:
Definition 2 (CBF-Dice coefficient).
Let A and B be two CBFs with length m and utilizing the same hash functions. The CBF Dice coefficient of A and B is given by:

\[ \mathrm{CBFDice}(A, B) = \frac{2 \sum_{j=1}^{m} \min(A[j], B[j])}{\sum_{j=1}^{m} A[j] + \sum_{j=1}^{m} B[j]} \tag{6} \]
Note that in order to approximate the numerator of the multiset Dice coefficient, the CBF Dice coefficient in Equation 6 applies the minimum function at each position, and for the denominator, the cross sum (the sum of all counters) of both CBFs is used.
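This definition reduces to a few lines over the two counter vectors; a sketch:

```python
def cbf_dice(a, b):
    """CBF Dice coefficient: position-wise minimum in the numerator,
    cross sum (sum of all counters) of both CBFs in the denominator."""
    numerator = 2 * sum(min(x, y) for x, y in zip(a, b))
    denominator = sum(a) + sum(b)
    return numerator / denominator if denominator else 0.0

# Two CBFs of the same length (same hash functions assumed).
a = [3, 0, 1, 2]
b = [1, 0, 1, 4]
print(cbf_dice(a, b))  # 2·(1+0+1+2) / (6+6) = 8/12 ≈ 0.667
```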
Extending the Dice coefficient to CMSs reuses the Dice coefficient for CBFs.
Definition 3 (CMS-Dice coefficient).
Let P and Q be two CMSs with the same dimensions and utilizing the same hash functions. The CMS Dice coefficient of P and Q is given by:

\[ \mathrm{CMSDice}(P, Q) = \frac{1}{d} \sum_{i=1}^{d} \mathrm{CBFDice}(P_i, Q_i) \tag{7} \]
Hence, the Dice coefficient of two CMSs utilizes the average of the CBF Dice coefficients of each pair of corresponding rows.
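This row-wise averaging can likewise be sketched in a few lines:

```python
def cms_dice(p, q):
    """CMS Dice coefficient: average of the CBF Dice coefficient over
    corresponding rows of two same-shaped CMS tables."""
    def row_dice(a, b):
        num = 2 * sum(min(x, y) for x, y in zip(a, b))
        den = sum(a) + sum(b)
        return num / den if den else 0.0
    return sum(row_dice(a, b) for a, b in zip(p, q)) / len(p)

p = [[2, 0, 1], [1, 1, 1]]
q = [[2, 0, 1], [0, 2, 1]]
print(cms_dice(p, q))  # rows give 1.0 and 4/6; average 5/6 ≈ 0.833
```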
IV-C Comparing Multisets via CBFs and CMSs
Given two multisets A and B, we can now estimate their similarity via CBFs or via CMSs. Let A_{CBF} and B_{CBF} be CBFs (of the same length and using the same hash functions) and let A_{CMS} and B_{CMS} be CMSs (of the same dimensions and using the same hash functions) for A and B. In the following evaluation, we show how we can use

\[ \mathrm{CBFDice}(A_{\mathrm{CBF}}, B_{\mathrm{CBF}}) \tag{8} \]

and

\[ \mathrm{CMSDice}(A_{\mathrm{CMS}}, B_{\mathrm{CMS}}) \tag{9} \]

to approximate

\[ \mathrm{Dice}(A, B). \tag{10} \]

Likewise, we investigate the approximation of

\[ \mathrm{cosSim}(A, B) \tag{11} \]

by

\[ \mathrm{cosSim}(A_{\mathrm{CBF}}, B_{\mathrm{CBF}}) \tag{12} \]

and

\[ \mathrm{CMScosSim}(A_{\mathrm{CMS}}, B_{\mathrm{CMS}}). \tag{13} \]
V Experimental Results
For the evaluation, we use both a synthetic data set SD and real user music listening histories. The synthetic data set consists of multisets of random ASCII strings of fixed length. As the strings are entered into the hash functions of CBF and CMS, we could have picked any other random strings to achieve the same effects. Each multiset has the same number of unique entries on average. Starting from a first multiset of random strings, the other multisets are chosen such that comparing the first multiset to each of the others yields Dice coefficients covering the full range of similarity values in regular increments.
For the real data RD, we used the taste profile subset (https://labrosa.ee.columbia.edu/millionsong/tasteprofile) of the million song data set [25]. In order to have appropriate data for comparisons, we chose a subset of active users who each listened to a minimum number of distinct songs. In order to be able to visualize the results, we chose a subset of these users that yields a range of different similarity values. To enter data into the data structures, we built a unique string for each song. RD consists of the resulting pairwise multiset comparisons.
V-A Synthetic Data SD / Comparing CMSs
For evaluating CMS-Dice on SD, we start with a CMS encoding of SD: for each multiset in SD, we build a corresponding CMS with identical dimensions and hash functions. Figure 4 illustrates the comparison of the Dice coefficient of the multisets as ground truth with the CMS-Dice of the CMS representations. The Dice coefficient is plotted in red. The x-axis indicates the multiset pair combination that is compared – sorted by Dice coefficient. The y-axis gives the similarity score. The blue dots represent the CMS-Dice similarity score.
In Figure 5, we give the same plot for cosine similarities: we compare the cosine similarity of the multisets as ground truth with the cosine similarity of the CMS representations. The first observation we make is that the Dice and cosine measurements yield almost identical results, which can be seen by the red lines in Figures 4 and 5 having almost identical slopes. Therefore, for the rest of this paper, we focus on the Dice coefficient. Another observation we make is that the similarity estimation by CMS-Dice is always slightly higher than or equal to the Dice coefficient ground truth – so the similarity between two multisets is always correctly estimated or overestimated due to collisions, never underestimated.
Typically, when using a CMS, the user wants to perform queries and get accurate results, and the values for w and d are chosen accordingly. In our case, we just want to perform a similarity estimation without querying for specific entries. We investigate what influence the number of columns w and the number of rows d have on the estimation of similarity. In order to do so, we plot the same comparisons of the synthetic data for different values of w and d. For each combination, we calculate the root mean square error (RMSE) of the similarity estimation by CMS-Dice from the Dice coefficient of the ground truth. The RMSE quantifies to what extent the similarity estimation differs from the ground truth similarity score; the lower the RMSE, the better the approximation of the Dice coefficient. Based on these comparisons, we calculate the RMSE for different combinations of w (x-axis) and d (y-axis) values. The result is given in Figure 6.
With an increasing number of columns, the RMSE decreases. The number of rows does not significantly influence the RMSE: increasing the number of rows does not reduce the RMSE of the similarity estimation.
V-B Synthetic Data SD / Comparing CBFs
Visualizing the RMSEs for similarity estimation by CBF-Dice with different lengths m and numbers of hash functions k, we get Figure 7.
Increasing the length of the CBF reduces the average error of CBF-Dice, while increasing the number of hash functions increases the error.
V-C Real Data RD
Using the real data set RD, our findings from the synthetic data SD are confirmed. Figure 8 shows the RMSEs for CMS-Dice, and Figure 9 for CBF-Dice.
We achieve the highest accuracy and simultaneously the lowest memory size by using a CMS with one row, which is a CBF with one hash function (cf. Equation 1). In Figure 10, we visualize the estimation error with CBF-Dice (which is equal to CMS-Dice in this case) utilizing this data structure. We can see the error of each similarity estimation and can see that we never underestimate the similarity. Looking at the first comparison pairs, the values for the ground truth similarity scores are very low, while the values for the similarity estimation by CBF-Dice spread over a wide range. For the remaining comparisons, we observe that the higher the ground truth similarity score, the lower the error range of the CBF-Dice estimation.
VI Discussion
In order to discuss the experimental results, we start by analyzing the regular BF. Consider a regular BF with one hash function, utilized for comparing sets. If two sets are equal, the BFs will be equal, and even the smallest length yields the correct estimation. Regarding memory size (length of the BF), comparing two disjoint sets is the worst-case scenario: estimating the similarity of two disjoint sets by performing AND on the two BFs should yield 0 for every position. There is one factor that introduces a deviation from 0: the number of unique inputs for a given length of the BF. Because of the limited length of the BF, several elements are hashed to the same position in the bit vector, even if there is no hash collision (see description in Section III). By increasing the length of the BF, the probability of such collisions is reduced. The more unique elements are entered into the BF, the more positions are set to 1. Thus, the more unique elements are in at least one of the sets, the longer the bit vector should be if a small error in estimating the similarity is desired.
When comparing two disjoint sets with cardinalities a and b, the BFs have to have at least a length of a + b in order to theoretically be able to correctly estimate a similarity of 0. Now imagine increasing the number of hash functions. It increases the error: more bits are set to 1, which creates a higher similarity estimation. In the following, we show that these conclusions also hold for the CMS and CBF.
As described in Section III, when the goal is to query a CBF or CMS for the cardinality of an element of a multiset, the user profits from utilizing multiple hash functions. In our scenario of similarity estimation, we do not need to query for specific items and do not profit from multiple hash functions in the same way. As described above for the BF, the opposite is true for the CBF: both Figures 7 and 9 indicate the trend that the more hash functions we use, the worse the error becomes. This is because of the collisions: multiple elements are mapped to the same positions in the bit vector, and the more hash functions we use, the more collisions there are. This could be compensated for by increasing the length of the CBF, which, however, would unnecessarily increase the needed memory size.
Regarding the CMS, we increase the number of hash functions by increasing the number of rows (see Figures 6 and 8). However, we do not see an increase in error when using more hash functions. This is because each hash function corresponds to one row: the probability of collisions is the same in each row, and the average of the row-wise similarities calculated by CMS-cosSim and CMS-Dice retains this error. Considering memory size and computation time, we should thus use just one single row.
The best and worst cases for similarity estimation are the same as described for the BF: the higher the real similarity, the lower the RMSE. The same elements definitely hash to the same positions; only those elements not present in the other multiset introduce an error in the similarity estimation through collisions. Thus, the more dissimilar the multisets are, the larger the potential error. This can best be seen in Figure 10. Note how the spread of blue dots (similarity estimations by CBF-Dice) spans a larger part of the y-axis (similarity score) for lower ground truth similarity scores (red dots). This means that for lower similarities, there are higher errors. All errors are produced by collisions and lead to an overestimation of similarity.
Compared to the comparison of BFs, for the CBF and CMS the cardinality of each element is an additional factor to consider. For the regular BF, each collision has the same effect on the error. Using a CBF or CMS, the influence a collision has on the error of the similarity estimation depends on how the data of the multiset is distributed. If two users listen to two different songs very frequently and those two songs are mapped to the same position in the data structure, then the error in the similarity estimation can be significant. The error is less significant if the collision occurs for two different songs the two users listened to less frequently.
We conclude that a general approach for estimating the similarity of two multisets in proximitybased mobile applications can be:

use a one-hash CBF / one-row CMS as the data structure

estimate the average number of unique input elements

define an appropriate threshold for the given scenario
We showed that the one-hash CBF gives the best estimation while having the smallest memory size. Our discussion showed that the average number of unique input elements is the relevant factor for the quality of the estimation. Based on this number, we can pick the length of the CBF: we showed that a length of twice the average number of unique inputs is necessary to theoretically still be able to estimate with full accuracy in the worst-case scenario of disjoint multisets. Lastly, after performing the similarity estimation, one should have a threshold to be able to tell whether the result should be regarded as significant.
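Putting these steps together, a hypothetical end-to-end exchange could look as follows. All names, the hash choice (SHA-256), the toy histories, the length m, and the threshold are illustrative assumptions, not values from the paper:

```python
import hashlib
from collections import Counter

def build_cbf(history: Counter, m: int) -> list:
    """One-hash CBF of length m for a listening history; m should be
    about twice the expected number of unique entries."""
    cbf = [0] * m
    for song, plays in history.items():
        pos = int(hashlib.sha256(song.encode()).hexdigest(), 16) % m
        cbf[pos] += plays
    return cbf

def cbf_dice(a: list, b: list) -> float:
    num = 2 * sum(min(x, y) for x, y in zip(a, b))
    den = sum(a) + sum(b)
    return num / den if den else 0.0

# Each device builds its CBF locally and sends only that (one exchange).
alice = Counter({"song-1": 10, "song-2": 5})
bob = Counter({"song-1": 8, "song-3": 5})
m = 8                      # ≈ 2 × average unique inputs (toy value)
threshold = 0.5            # scenario-specific significance cutoff (assumed)
score = cbf_dice(build_cbf(alice, m), build_cbf(bob, m))
print(score >= threshold)  # → True: the true Dice is 16/28 ≈ 0.571 and is never underestimated
```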
Taking the music listening histories in our scenario, we know the average number of unique input elements and regard similarities above a chosen threshold to be relevant. Picking a size of twice the average number of unique inputs gives the length of the data structure. We plot the ground truth (red) and the CBF-Dice similarity estimation (blue) in Figure 11. The green area marks similarity scores above the threshold. We observe some false positives: the blue dots in the green area that correspond to red dots below the green area. The leftmost blue dot in the green area indicates the largest error: here, we estimate a significant similarity while the ground truth value is considerably lower. If we need more accurate results, we choose a larger length, as in Figure 10, for example. A positive side effect of the larger errors for low values is that it somewhat provides privacy through lack of accuracy: a given estimated similarity corresponds to a wide range of possible ground truth values, so we cannot make an accurate assumption about the actual similarity.
VII Conclusion
In this paper, we approached the problem of multiset similarity estimation in the scenario of proximity-based MSNs. We developed the comparison metrics CBF-Dice and CMS-Dice for the similarity estimation of two CBFs and two CMSs. Applying these metrics, we can approximate the Dice coefficient when comparing two multisets. We evaluated our approach with both synthetic data and real music listening history data.
Our results show that the more hash functions we utilize in a data structure, the higher the error in the estimation; the larger the data structure, the smaller the error. We achieve the lowest error when utilizing a one-hash CBF / one-row CMS: here, we minimize the number of collisions by using only one hash function. The collisions are the source of the error in the similarity estimation. In general, the higher the real similarity, the better the estimation.
The data structure one-hash CBF is appropriate for the given scenario of similarity estimation between two users, requiring only a single data exchange between two smartphones. We described the general approach for assessing the appropriate size of the data structure by estimating the average number of unique input elements as well as defining a threshold for the similarity score. Using the real user music listening histories, we showed that a one-hash CBF with a length of twice the average number of unique inputs suffices to accurately estimate the similarity between two multisets.
While we presented the scenario of two strangers meeting and quickly determining the similarity of their musical tastes, our approach can be applied in a variety of other scenarios. In general, any two systems that log events can be compared with our approach: utilizing CBF-Dice, one can perform fast and space-efficient similarity estimations of the two systems in terms of the frequencies of the logged events.
For future work in the proximitybased MSN scenario, potential attack scenarios like malicious users should be addressed. Furthermore, an implementation for mobile devices can help evaluate the performance of our proposed approach. For the scenario of stimulating interaction between strangers, additional features besides musical taste should be considered, e.g., visited locations.
Acknowledgment
This work has received funding from project DYNAMIC (http://www.dynamicproject.de, grant No. 01IS12056), which is funded as part of the Software Campus initiative by the German Federal Ministry of Education and Research (BMBF). We are grateful for the support provided by Niklas Lensing, Bianca Lüders, Peter Ruppel, Boris Lorbeer, Sandro Rodriguez Garzon, Martin Westerkamp, Kai Grunert, Tanja Deutsch, Bernd Louis, and Axel Küpper.
References
 [1] F. Beierle, V. T. Tran, M. Allemand, P. Neff, W. Schlee, T. Probst, R. Pryss, and J. Zimmermann, “Context Data Categories and Privacy Model for Mobile Data Collection Apps,” Procedia Computer Science, 2018 (to appear).
 [2] ——, “TYDR – Track Your Daily Routine. Android App for Tracking Smartphone Sensor and Usage Data,” in MOBILESoft ’18: 5th IEEE/ACM International Conference on Mobile Software Engineering and Systems. ACM, 2018 (to appear).
 [3] M. Falch, A. Henten, R. Tadayoni, and I. Windekilde, “Business models in social networking,” in CMI Int. Conf. on Social Networking and Communities, 2009.
 [4] F. Beierle, K. Grunert, S. Göndör, and V. Schlüter, “Towards Psychometricsbased Friend Recommendations in Social Networking Services,” in 2017 IEEE 6th International Conference on AI & Mobile Services (AIMS 2017). IEEE, 2017, pp. 105–108.
 [5] F. Beierle, S. Göndör, and A. Küpper, “Towards a Three-tiered Social Graph in Decentralized Online Social Networks,” in Proc. 7th International Workshop on Hot Topics in Planet-Scale mObile Computing and Online Social neTworking (HotPOST). ACM, Jun. 2015, pp. 1–6.
 [6] A. Smith, “U.S. Smartphone Use in 2015,” http://www.pewinternet.org/2015/04/01/ussmartphoneusein2015/, Accessed 2018-02-15.
 [7] R. Farahbakhsh, X. Han, A. Cuevas, and N. Crespi, “Analysis of publicly disclosed information in Facebook profiles,” in Proc. 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). ACM, Aug. 2013, pp. 699–705.
 [8] A. Beach, M. Gartrell, S. Akkala, J. Elston, J. Kelley, K. Nishimoto, B. Ray, S. Razgulin, K. Sundaresan, B. Surendar, M. Terada, and R. Han, “WhozThat? Evolving an Ecosystem for Context-Aware Mobile Social Networks,” IEEE Network, vol. 22, no. 4, pp. 50–55, 2008.
 [9] N. Eagle and A. Pentland, “Social Serendipity: Mobilizing social software,” Pervasive Computing, IEEE, vol. 4, no. 2, pp. 28–34, 2005.
 [10] A.-K. Pietiläinen, E. Oliver, J. LeBrun, G. Varghese, and C. Diot, “MobiClique: Middleware for Mobile Social Networking,” in Proc. 2nd ACM Workshop on Online Social Networks (WOSN). ACM, 2009, pp. 49–54.
 [11] Z. Yang, B. Zhang, J. Dai, A. Champion, D. Xuan, and D. Li, “E-SmallTalker: A Distributed Mobile System for Social Networking in Physical Proximity,” in 2010 IEEE 30th International Conference on Distributed Computing Systems (ICDCS), Jun. 2010, pp. 468–477.
 [12] J. Teng, B. Zhang, X. Li, X. Bai, and D. Xuan, “E-Shadow: Lubricating Social Interaction Using Mobile Phones,” IEEE Transactions on Computers, vol. 63, no. 6, pp. 1422–1433, Jun. 2014.
 [13] A. Mei, G. Morabito, P. Santi, and J. Stefa, “Social-Aware Stateless Routing in Pocket Switched Networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 1, pp. 252–261, Jan. 2015.
 [14] C. Dong, L. Chen, and Z. Wen, “When Private Set Intersection Meets Big Data: An Efficient and Scalable Protocol,” in Proc. 2013 ACM SIGSAC Conference on Computer & Communications Security (CCS). ACM, 2013, pp. 789–800.
 [15] F. Kerschbaum, “Outsourced Private Set Intersection Using Homomorphic Encryption,” in Proc. 7th ACM Symposium on Information, Computer and Comm. Security (ASIACCS). ACM, 2012, pp. 85–86.
 [16] J. Tillmanns, “Privately Computing Set-Union and Set-Intersection Cardinality via Bloom Filters,” in 20th Australasian Conf. on Inf. Security and Privacy (ACISP), vol. 9144. Springer, 2015, pp. 413–430.
 [17] F. Beierle, K. Grunert, S. Göndör, and A. Küpper, “Privacyaware Social Music Playlist Generation,” in Proc. 2016 IEEE International Conference on Communications (ICC). IEEE, May 2016, pp. 5650–5656.
 [18] B. H. Bloom, “Space/Time Tradeoffs in Hash Coding with Allowable Errors,” Commun. ACM, vol. 13, no. 7, pp. 422–426, Jul. 1970.
 [19] L. Fan, P. Cao, J. Almeida, and A. Z. Broder, “Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol,” IEEE/ACM Transactions on Networking, vol. 8, no. 3, pp. 281–293, Jun. 2000.
 [20] G. Cormode and S. Muthukrishnan, “An Improved Data Stream Summary: The Count-Min Sketch and its Applications,” in LATIN 2004: Theoretical Informatics. Springer, 2004, pp. 29–38.
 [21] N. Jain, M. Dahlin, and R. Tewari, “Using Bloom Filters to Refine Web Search Results.” in WebDB, 2005, pp. 25–30.
 [22] R. Schnell, T. Bachteler, and J. Reiher, “Privacy-preserving record linkage using Bloom filters,” BMC Medical Informatics and Decision Making, vol. 9, no. 1, p. 41, Aug. 2009.
 [23] B. Donnet, B. Gueye, and M. A. Kaafar, “Path similarity evaluation using Bloom filters,” Computer Networks, vol. 56, no. 2, pp. 858–869, 2012.
 [24] M. Alaggan, S. Gambs, and A.-M. Kermarrec, “BLIP: Non-interactive Differentially-Private Similarity Computation on Bloom filters,” in Stabilization, Safety, and Security of Distributed Systems, ser. LNCS, A. W. Richa and C. Scheideler, Eds. Springer, 2012, no. 7596, pp. 202–216.
 [25] T. BertinMahieux, D. P. Ellis, B. Whitman, and P. Lamere, “The million song dataset,” in Proc. 12th International Society for Music Information Retrieval Conference (ISMIR), 2011.