Recommender Systems (RSs) are personalisation tools that are used by online services to recommend items to users (Pizzato et al., 2013). RSs usually make recommendations by computing a preference score between a user and an item that represents the extent to which that user would like that item. This is done using explicit preference information (for instance, a profile in which the user has specified their preferences) or implicit preference information, such as a user's purchase history. RSs have become increasingly advanced over the last decade, and most popular online services, such as Amazon and Netflix, use them to enhance their users' experience (Bell and Koren, 2007).
Reciprocal Recommender Systems (RRSs) are a subtype of RSs that recommend people to other people. They are commonly used in online dating and social services. While RSs make recommendations based on a unidirectional preference relation involving an inanimate item, RRSs are inherently more complex in that they must make recommendations based on both sides of a bidirectional preference relation. Applying a conventional recommender system to a reciprocal environment results in recommendations that satisfy only one of the two users involved in the interaction. RRS design also involves a number of considerations that do not arise in unidirectional recommendation. For instance, a popular product being repeatedly recommended is not usually a problem, but a popular user appearing in everyone's recommendations often represents a negative experience for that user (Kleinerman et al., 2018). RRSs are often adapted from RSs: two unidirectional preferences are computed and then combined into a single preference score that represents the preference of two users for each other.
RSs (and therefore RRSs) are often categorised as content-based or collaborative filtering systems. Content-based systems make recommendations based on a user’s preference for specific aspects of an item. These preferences are sometimes explicit, but are more usually inferred implicitly from a user’s preference for previous items (Aggarwal, 2016). Collaborative filtering algorithms use correlations between multiple users to make recommendations, often by extracting latent factors from a preference matrix of users and items, and inferring preference for those latent factors through historical preferences. Historically, collaborative filtering algorithms have been more effective than content-based algorithms (Aggarwal, 2016). However, in content-rich environments, the reverse can be true. Content-based filtering algorithms also tend to be more effective at solving the Cold-Start Problem (Lam et al., 2008; Lin et al., 2013), where it is difficult for the system to make effective recommendations for new users because of the lack of preference history.
Online dating services and social networks often provide a content-rich environment, with users making decisions about whom to express preference to based on a great many factors, including image data, free-text profiles and categorical data such as age and job. In particular, image data is extremely important to modern social services, with many, such as Instagram, using images as the basis for interactions. This has been demonstrated by informal research from industry (https://www.gwern.net/docs/psychology/okcupid/weexperimentonhumanbeings.html). In this paper, we present a novel recommender system, Temporal Image-Based Reciprocal Recommender (TIRR), that uses a Recurrent Neural Network (RNN) to interpret a user's history of preferences for images and to make predictions about their future preferences in order to make recommendations. This is a significant improvement on the only previous image-based RRS, ImRec (Neve and McConville, 2020), in the sense that it outperforms both ImRec (previously the state of the art in content-based reciprocal recommendation) and the current state-of-the-art collaborative filtering solutions.
In addition to its improvement in the ROC curve on cross-validation, TIRR also advances the field in that it provides a unified system that predicts matches directly, as opposed to making two separate predictions of unidirectional preferences followed by an aggregation. There is some doubt as to how to combine two unidirectional scores into a single bidirectional score in a way that is fully representative of two users' bidirectional preference for each other; TIRR solves this by predicting the bidirectional relation end to end.
The system was tested using data from a popular online dating service, with the users and their expressions of preference split across train and test sets.
The original contributions of this paper are therefore threefold:
- We present a content-based RRS, TIRR, that makes recommendations based on historical sequences of images, utilizing Siamese networks and LSTMs.
- Previous RRSs predict two unidirectional preferences and then aggregate them; the end-to-end algorithm detailed in this paper predicts the probability of a match directly.
- Based on tests using real-world data, TIRR outperforms not only other content-based RRSs but also the state-of-the-art collaborative filtering RRS.
2. Related Works
This section contains a review of other academic works that formed the background for this study. This includes works on RRSs, content-based recommendation and on recurrent neural networks for understanding image-based histories.
2.1. Reciprocal Recommender Systems
RRSs are recommender systems used for person-to-person matching, in settings such as online dating, social networks (He and Chu, 2010) and job recommendation (Siting et al., 2012). They are complex in the sense that they need to consider the preferences of both sides.
The earliest RRS in the literature is RECON (Pizzato et al., 2010). This is a content-based recommender system designed by Pizzato et al. that makes recommendations using categorical data such as age and hobbies. For two users u and v, the algorithm calculates the unidirectional preference of each user for the other and combines the two scores using the harmonic mean into a single bidirectional preference relation that represents the projected preference of the two users for each other.
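This RECON-style aggregation step can be sketched in a few lines. The function name is ours, not RECON's; the inputs are two unidirectional preference scores in [0, 1].

```python
def harmonic_mean(p_uv: float, p_vu: float) -> float:
    """Combine two unidirectional preference scores into one reciprocal
    score, RECON-style. The harmonic mean is dominated by the smaller
    score, so strongly one-sided interest is penalised."""
    if p_uv + p_vu == 0:
        return 0.0
    return 2 * p_uv * p_vu / (p_uv + p_vu)

# One-sided interest scores poorly even when the other side is keen:
print(round(harmonic_mean(0.9, 0.1), 3))  # 0.18
print(harmonic_mean(0.5, 0.5))            # 0.5
```

Note how the harmonic mean of 0.9 and 0.1 (0.18) is far below their arithmetic mean (0.5), which is exactly the behaviour a reciprocal recommender wants.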
RCF (Xia et al., 2015) extended reciprocal recommendation with an implementation of a collaborative filtering system. RCF uses nearest-neighbour-based recommendation: for a candidate pair of users u and v, it calculates the similarity between u and the other users that have liked v, and vice versa. RCF demonstrated improvements over RECON in a number of areas and was, at the time, considered the best-in-class reciprocal recommender system.
Subsequently, a number of systems have made improvements to both collaborative and content-based systems, in addition to designing hybrid systems that exploit the best of both subtypes. For example, Kleinerman et al. improved on RCF by reducing the bias of user popularity on recommendations (Kleinerman et al., 2018). ImRec demonstrated that image-based recommendation was more effective than recommendation based on categorical profile data (Neve and McConville, 2020).
2.2. Content-Based Recommender Systems
Content-based recommender systems make recommendations based on users’ preferences for specific content on a service. This might be structured content such as the category of an item, or it might be unstructured content such as images and free text description.
Recommendations based on unstructured data appear most often in news recommendation (Lang, 1995; van Meteren and van Someren, 2000). This is a rich area for research because news articles are often written with set structures that make them easier to process, and also because of concerns about serendipity and recommendations reinforcing existing echo chambers. Papers such as (Bansal et al., 2016)
demonstrate the capacity of deep learning systems based on free-text information to make effective recommendations.
Examples of content-based recommender systems basing their results on images are much less common. Lei et al. used user preferences to train a model based on ImageNet (Deng et al., 2009) that predicted user preference for one of two images (Lei et al., 2016). This trains a network to map both users and images into the same space by generating embeddings for both, with images that the user preferred being close to the user in the space, and images the user did not like being further away. User preference for subsequent images can then be predicted by relative distance from the user.
Another example is DeepStyle (Liu et al., 2017), which uses a Siamese Network to predict user preference for clothes based on images. DeepStyle uses pairs of positive and negative samples with user preference as the output to differentiate between the two images. This can then be used to make predictions about whether a user might like a new image by comparing it to an existing liked image.
2.3. Recurrent Neural Networks
This paper uses Recurrent Neural Networks (RNNs) to interpret time series data for recommendation. RNNs contain loops, which feed the output of a network back into current neurons. This means that they implement a concept of memory: they store computed results, and these results have an impact on subsequent predictions. Each step therefore incorporates information from the previous steps into the prediction.
Standard RNNs are particularly good at processing short sequences, but their memory is short-term memory: when training them using longer sequences, the early items in the sequence have very little impact on the final prediction. This is known as the vanishing gradient problem (Hochreiter, 1998). It also exists in deep neural networks, where early layers learn very slowly when trained with backpropagation.
Various architectures have been proposed to overcome this limitation. One that has been particularly successful in allowing RNNs to hold and use information for longer is the Long Short-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997). An LSTM uses a forget gate comprising a Sigmoid function that determines whether information is kept or not: a value close to 0 results in the information being forgotten by the network, whereas a value closer to 1 results in the information being stored. This allows much longer sequences to be processed, which is particularly useful in the field of recommendation, where long sequences of user behaviour are common. RNNs have been used successfully in recommender systems to incorporate time series data into recommendations (Twardowski, 2016; Wu et al., 2017), but not in reciprocal recommender systems.
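The gating behaviour described above can be illustrated with a toy NumPy sketch (the values below are made up for illustration): a Sigmoid activation near 0 erases a cell-state component, while one near 1 preserves it.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A cell state holding two remembered "facts".
cell_state = np.array([0.8, -0.5])

# Forget-gate activations: near 0 erases, near 1 retains.
forget_gate = sigmoid(np.array([-6.0, 6.0]))  # roughly [0.002, 0.998]

new_state = forget_gate * cell_state
print(new_state)  # first component nearly erased, second almost unchanged
```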
3. Method

In this section, we describe a model that produces predictions about user preferences based on an RNN interpretation of user history. The RNN-based model uses a pre-trained Siamese network at its core, so we first describe that network independently, and then its use in the context of the RNN.
3.1. Problem Formulation
The online dating service we used currently only supports heterosexual relationships. We can therefore assume that for a set U of users of one gender there is a set V of candidate users for recommendation. A user u may have an ordered history of preference expressions H_u of length n, where the element h_t represents the expression of positive or negative preference of user u for a user v at time t.
In our reciprocal system, our objective is to estimate p(u, v), the reciprocal preference score that represents the projected degree of preference of two users u and v for each other. We consider that p(u, v) is a function of the historical preferences of u and v as well as of the two users themselves, and train a model to predict it using all of this information:

p(u, v) = f(u, v, H_u, H_v; θ)

where θ represents the parameters of the model. Note that, contrary to most previous approaches to RRSs, our approach trains a single model to predict reciprocal preference using all of this information, as opposed to combining the results of two models predicting unidirectional preferences. Also note that the reciprocal preference is symmetrical, i.e. p(u, v) = p(v, u).
3.2. Service & Data
The data for our model was provided by a popular online dating service with several million registered users. On this service, the user experience is streamlined so that everyone goes through the same process of interaction.
A user u finds other users by searching, or by viewing recommendations on a list page. From the list page, they can view profiles with images, text and categorical data. If u finds a user v that they want to interact with, they can send a Like. In our algorithm, this is used as a unidirectional indicator of preference.
User v can choose whether or not to reciprocate this Like. If they do reciprocate, this is considered a Match; if not, it is considered a Dislike. These represent bidirectional indicators of preference or negative preference respectively. Users who have Matched can subsequently message each other, and potentially agree to meet in person.
As we wanted to focus on an algorithm that measured the personal attractiveness of users for each other, we decided to exclude from the dataset, using automatic face detection, any images that did not include user faces. It is common for deep learning on faces to also include cropping and affine transformation of features, but preliminary experiments showed that this did not improve our results.
We also made a number of exclusions in order to increase the reliability of the dataset. We excluded users who had been removed from the service for any reason (often these users are not using the service correctly, which implies that they are not expressing preferences based on their own intuition). Although for privacy reasons we are unable to release the dataset used in our experiments, we do hope that the algorithm will be reproduced on other services.
3.3. Siamese Network Unidirectional User Preference Learning
In this section, we briefly describe the Siamese network (Koch et al., 2015) that we use to learn unidirectional user preferences, a building block of our proposed method originally included as a component of ImRec (Neve and McConville, 2020). We utilize this component in a different way from ImRec, and demonstrate superior performance.
3.3.1. Siamese Network Concept
Siamese networks are commonly used in object recognition (Vinyals et al., 2016) and tracking (Bertinetto et al., 2016), where they have excelled at face verification in scenarios with relatively little training data, known as one-shot or few-shot learning. As shown in Figure 1, a Siamese network consists of two symmetrical CNNs with shared weights, and a loss function based on the outputs of these two networks and a ground truth label.
The network is trained from tuples of the form (a, p, n), where a and p are photos that have been Liked by a user u and n is a photo that has been Disliked by u. Using a as an anchor, two pairs are made from the tuple; (a, p) is a positive pair where the expected output is 1, and (a, n) is the negative pair where the expected output is 0. From these pairs, the network is trained to differentiate between a Liked and a Disliked image, given another Liked image.
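The pair construction above is mechanical and can be sketched directly; the function name and the string placeholders standing in for images are ours.

```python
def pairs_from_triplet(anchor, liked, disliked):
    """Build the two training pairs from one (anchor, liked, disliked)
    tuple: a positive pair labelled 1 and a negative pair labelled 0.
    The image arguments are placeholders (file paths or pixel arrays)."""
    return [((anchor, liked), 1), ((anchor, disliked), 0)]

pairs = pairs_from_triplet("a.jpg", "p.jpg", "n.jpg")
print(pairs)  # [(('a.jpg', 'p.jpg'), 1), (('a.jpg', 'n.jpg'), 0)]
```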
The key to this is that the network uses shared weight parameters for the training and inference process. We map the two input images x_1 and x_2 to embeddings f(x_1) and f(x_2), two points in a 128-dimensional space, and calculate the difference between the two points as the component-wise distance d = |f(x_1) − f(x_2)|.
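A minimal sketch of this comparison step, assuming random vectors in place of the shared CNN's embeddings and a hypothetical learned dense layer (as in Koch et al.'s architecture) mapping the component-wise difference to a score; the logit is scaled here purely for numerical stability in the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the shared CNN's outputs on two images.
e1 = rng.normal(size=128)
e2 = rng.normal(size=128)

# Component-wise absolute difference between the two embeddings;
# a dense layer with a Sigmoid then maps it to a preference score.
diff = np.abs(e1 - e2)                 # shape (128,)
w = rng.normal(size=128)               # hypothetical learned weights
logit = (diff @ w) / np.sqrt(len(diff))
score = 1.0 / (1.0 + np.exp(-logit))   # value strictly between 0 and 1
```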
Relating the Siamese network back to our original problem formulation in Section 3.1, this gives us a basis for estimating a unidirectional preference relation from two images: an anchor image drawn from a user's preference history and an image of a candidate user.
3.3.2. Network Layers
Table 1 shows the architecture of the CNN that makes up the two symmetrical branches of the Siamese network. The small convolution kernels used have been shown to effectively identify facial features in deep convolutional networks (Parkhi et al., 2015). The network was trained using an Adam optimiser.
The output of the network is a value between 0 and 1 expressed by a Sigmoid function, representing whether the candidate image is more likely to be Liked or Disliked given the two images. In the next section, we extend this to use the whole of a user's preference history and show how this becomes an even more effective predictor of preference.
3.3.3. Loss Function
The Siamese network was trained using binary crossentropy, a standard loss function in training neural networks. In the following equation, y is the binary variable representing Like (1) or Dislike (0), d is the embedded distance between the two images, f is a neural network, and ŷ = f(d) is the predicted probability of the pair resulting in a Like:

L(y, ŷ) = −[y log(ŷ) + (1 − y) log(1 − ŷ)]
Binary crossentropy was shown experimentally to result in the highest effectiveness metrics for the network.
We note that it is also common to train Siamese networks using a contrastive loss function, which uses a margin to increase the network’s error when it misclassifies two very similar images. In most situations where a Siamese network is applied, such as face detection, misclassifying two very similar images is as incorrect as misclassifying two very different images. However, this is not true in our application, where user preferences are not necessarily categorical, and similar images are more likely to be liked by a user than very different images. This explains why binary crossentropy may have been more effective in our tests.
The contrastive loss function is defined below, where the terminology is the same as for binary crossentropy above and m is the margin:

L(y, d) = y · d² + (1 − y) · max(0, m − d)²
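The two losses behave differently near the decision boundary, which is the point made above. A small sketch, using our own function names and the convention y = 1 for a Liked (similar) pair, makes this concrete: the margin term punishes a Disliked pair at small embedded distance even when a Sigmoid readout would classify it correctly.

```python
import numpy as np

def bce(y, y_hat, eps=1e-7):
    """Binary cross-entropy on the predicted Like probability."""
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def contrastive(y, d, margin=1.0):
    """Contrastive loss on the embedded distance d (y = 1 for a Liked
    pair): Liked pairs are pulled together, Disliked pairs are pushed
    at least `margin` apart."""
    return y * d**2 + (1 - y) * np.maximum(0.0, margin - d)**2

# A Disliked pair at small distance is hit hard by the margin term:
print(contrastive(0, 0.1))  # ≈ 0.81
```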
3.4. Incorporating RNNs for Learning User Preference History
The Siamese network described above, when trained on unidirectional preference, is an effective if elementary model. In this section, we describe the RNN we use to interpret the user history based on the results of the Siamese network.
The output of the Siamese network is a point in 128-dimensional space that represents the preference of a user for an image based on comparison with the anchor image. Based on initial experimental work, we chose an LSTM-based RNN architecture to interpret the time series of images. The forget gate of the LSTM is particularly intuitive in this case. For a cell state c_{t−1} at time t, a forget gate f_t, a write gate i_t and a candidate write c̃_t derived from the input and the previous state, the next state is described by the equation:

c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t
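The state update above is a pair of element-wise products, which a short NumPy sketch makes explicit (the gate values below are chosen by hand for illustration):

```python
import numpy as np

def lstm_state_update(c_prev, f_t, i_t, c_tilde):
    """One LSTM cell-state update: forget part of the old state,
    write part of the candidate. All arguments are vectors."""
    return f_t * c_prev + i_t * c_tilde

c_prev  = np.array([1.0, 1.0])
f_t     = np.array([0.0, 1.0])   # forget the first slot, keep the second
i_t     = np.array([1.0, 0.0])   # write only into the first slot
c_tilde = np.array([0.3, 0.9])

print(lstm_state_update(c_prev, f_t, i_t, c_tilde))  # [0.3 1. ]
```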
We might intuitively expect the preferences expressed by users to change over time, and the forget behaviour of the LSTM allows us to model this. The input at each step of the LSTM modelling the preferences of user u is a user that u has previously rated, and the final input is the candidate user v for whom we wish to estimate u's preference.
The LSTM is visualised in Figure 2. Because users have variable-length preference histories, we pad shorter histories with dummy images and use a masking layer to filter them out. The LSTM and subsequent dense neural network form a representation in 256-dimensional space of the user's preference as a time series.
Specifically, the network consists of an input layer, which accepts a fixed maximum number of outputs from the Siamese networks in 256-dimensional space, concatenated together. Experiments determined that accepting more than this did not significantly alter the performance of the network. The layers are described in Table 2. If a user has expressed fewer preferences than this maximum, the earlier positions are filled with zeroes, and the network learns to interpret these as dummy data. Following the LSTM, the network consists of a single dense layer of 128 neurons, and then a dropout layer. The network was trained with an Adam optimiser.
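The zero-padding convention described above can be sketched as follows. This is a plain NumPy illustration of what a masking layer would derive, with `max_len` and `dim` as placeholder values, not the paper's actual configuration; note that, as in the text, an all-zero row is simply treated as a dummy step.

```python
import numpy as np

def pad_history(history, max_len, dim):
    """Left-pad a variable-length preference history with all-zero
    dummy vectors up to max_len, returning the padded sequence and
    the boolean mask a masking layer would derive (False = dummy)."""
    history = np.asarray(history, dtype=float).reshape(-1, dim)
    n = history.shape[0]
    padded = np.zeros((max_len, dim))
    padded[max_len - n:] = history          # real steps at the end
    mask = np.any(padded != 0.0, axis=1)    # all-zero rows are masked out
    return padded, mask

padded, mask = pad_history([[0.2, 0.5], [0.7, 0.1]], max_len=4, dim=2)
print(mask)  # [False False  True  True]
```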
3.5. Training and Match Prediction
This section describes training the network to predict matches between two users. As described in Section 3.2, our objective is to differentiate between interactions consisting of bidirectional expressions of preference, Matches, and unidirectional expressions of negative preference, Dislikes.
The full training process is visualised in Figure 3. Our experiments determined that the network trained extremely slowly when trained in its full form from an initial randomised state, and we therefore pre-trained the Siamese network segment of the network using one dataset, shown in green. The subsequent training of the full system on matches was done using a separate dataset, shown in red. The final evaluation was done using a third dataset, shown in blue. In addition, Neve et al. demonstrated that the Siamese Network training was more effective when two networks were trained separately on male and female data (Neve and McConville, 2020). As the service providing our data currently only supports heterosexual dating, this split does not decrease the usefulness of the application in this case.
Training for the Siamese networks was based on triplets sampled from users, split evenly between male and female images. Images were cropped and centred on the faces of users before training. Other methods of preprocessing, such as affine transformations, which have been shown to improve the predictive power of other networks (Lewenberg et al., 2016), did not have any impact on performance. The Siamese networks were trained to predict unidirectional preferences, i.e. the positive image in each triplet was one the user had Liked (though not necessarily with reciprocity) and the negative image was one the user had Disliked.
Following convergence of the Siamese network, the LSTM network was trained on the preference histories of users to predict Matches and Like-Dislike Tuples. This dataset was separate from the dataset used to train the Siamese network. Histories were capped at one year, because of concerns that changes to the service's design and search algorithm over time might have an effect on user preferences. They were also capped at a maximum number of preferences, because initial experiments showed that longer sequences did not improve accuracy, and because some outlier users express thousands of preferences, which results in an unreasonable increase in training and prediction times.
Finally, the LSTM was validated on a separate dataset of Matches and Like-Dislike Tuples. There was no overlap in preference expression between the three datasets. There was overlap between the users contained in these datasets, but as in a real-world situation the system would be trained based on users on the service and subsequently used to make predictions for those users in addition to new users, testing in this way is valid and representative.
4. Results & Evaluation
In this section, we present the results for TIRR compared to the current state of the art in both content-based and collaborative filtering.
4.1. Evaluation Metrics for Reciprocal Environments
RRSs generally use metrics for success similar to those of standard machine learning models: evaluation via the ROC Curve and the related metrics Precision, Recall and their combined metric, the F1 Score. However, because of the requirement for reciprocal success, their definitions are a little different in this scenario, so we present them here as defined by Pizzato et al. in (Pizzato et al., 2013). In the following equations, R is the set of recommended users, M is the set of recommended users who matched with each other, and D is the set of recommended users where at least one expressed negative preference for the other.
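One way to compute metrics in this reciprocal spirit is sketched below over sets of user pairs. These are illustrative definitions in the style of Pizzato et al., with hypothetical function and variable names and made-up data, not the paper's exact equations.

```python
def reciprocal_metrics(recommended, matched, all_matches):
    """Reciprocal precision/recall/F1 over sets of user pairs.
    `recommended`: pairs the system suggested; `matched`: suggested
    pairs that actually matched; `all_matches`: every match in the
    evaluation data."""
    precision = len(matched) / len(recommended) if recommended else 0.0
    recall = len(matched) / len(all_matches) if all_matches else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

rec = {("u1", "v1"), ("u2", "v2"), ("u3", "v3"), ("u4", "v4")}
hit = {("u1", "v1"), ("u2", "v2")}
all_m = {("u1", "v1"), ("u2", "v2"), ("u5", "v5")}
print(reciprocal_metrics(rec, hit, all_m))  # precision 0.5, recall ≈ 0.67
```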
As the models predict a value between 0 and 1 that represents the strength of the mutual preference relation, the ROC curves in this section are drawn by moving a threshold between these two values and plotting the true and false positive rates.
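The threshold-sweep procedure just described can be sketched as follows, with made-up scores and labels; a real curve would sweep a fine grid of thresholds over the full test set.

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    """Sweep a decision threshold over predicted match scores and
    return (FPR, TPR) pairs for plotting a ROC curve."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    points = []
    for t in thresholds:
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        tpr = float(tp / max(np.sum(labels == 1), 1))
        fpr = float(fp / max(np.sum(labels == 0), 1))
        points.append((fpr, tpr))
    return points

scores = [0.9, 0.8, 0.3, 0.1]
labels = [1, 1, 0, 0]
print(roc_points(scores, labels, thresholds=[0.0, 0.5, 1.0]))
# [(1.0, 1.0), (0.0, 1.0), (0.0, 0.0)] — this toy classifier is perfect,
# passing through the ideal corner (FPR 0, TPR 1) at threshold 0.5
```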
4.2. Siamese Network Results
We first describe the results generated by the pretrained Siamese network. This network provides a basis for the main user preference prediction model, as the output embeddings from this network provide an input for the RNN.
The ROC curve for the Siamese network is displayed in Figure 4 as the blue line (the green dotted line is the random-classifier reference line). This curve was drawn based on a held-out test set of samples not present in the original training dataset. In general, the network is capable of differentiating between a single Liked image and a single Disliked image based on an anchor image. The curve itself is slightly erratic, but this is not entirely unexpected: a single anchor image is unlikely to contain enough information to fully differentiate between positive and negative preference.
As the model by itself is not directly the source of the recommendations, it would not be appropriate to compare it to other recommender systems. For this reason, we present this model without a point of comparison. However, in Section 4.3 we will compare two approaches that use this model as a building block for reciprocal recommendation.
The embeddings in 128-dimensional space output by the Siamese network form the input of the RNN, so it is useful to visualise them. To do this, we use Uniform Manifold Approximation and Projection (UMAP) to reduce the 128-dimensional vectors to two-dimensional vectors for visualisation. This visualisation is displayed in Figure 5.
In this visualisation, the black datapoints represent Disliked images and the red datapoints represent Liked images. It is clear from the visualisation that the embeddings are separable to some extent, even in two dimensions. The anomalous black cluster in the top right of the image represents heavily distorted or very poor quality images, or images misclassified by the face detection algorithm (i.e. images that do not contain a face). These tend to be almost universally Disliked.
4.3. TIRR vs Content-Based Algorithms
As described in Section 1, recommender systems are divided into content-based algorithms and collaborative filtering algorithms.
Figure 6 displays a comparison of TIRR with other content-based algorithms. As described in Section 2, RECON (Pizzato et al., 2010) is an algorithm that identifies a user's implicit preferences for categorical data, and ImRec (Neve and McConville, 2020)
is an algorithm that uses images to make predictions without the RNN-based component of TIRR, instead using a Random Forest and aggregation function.
RECON struggled to generate effective recommendations on our dataset. As RECON was also originally evaluated on a private dataset, it is difficult to establish why without comparing the datasets directly, but one possibility is that modern dating services place a higher emphasis on visual content than services did ten years ago, when RECON was developed. ImRec performs better than RECON, but significantly worse than our proposed method TIRR. The key difference between TIRR and ImRec is the RNN-based process that allows TIRR to interpret historical, time-series data in order to make predictions, whereas ImRec treats user preferences in a global way, with no ability to capture individual users' preferences.
The AUC and maximum F1 score for the three algorithms are listed in Table 3. The scores are based on the threshold that gave the best F1 score on the training set, applied to the test set. We consider that the significant improvement of our proposed method TIRR derives from its ability to interpret a user's history of preferences for images over time, and to take account of a user's potentially shifting preferences, whereas ImRec provides a global model across all users without distinguishing more than one preference per user at a time, and RECON does not make use of images at all.
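The threshold-selection protocol above (pick the threshold maximising F1 on the training set, then reuse it on the test set) can be sketched like this, with made-up scores and labels and our own function name:

```python
import numpy as np

def best_f1_threshold(scores, labels, grid=None):
    """Pick the decision threshold that maximises F1 on one dataset
    (e.g. the training set), so it can then be reused on another."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    grid = np.linspace(0.0, 1.0, 101) if grid is None else grid
    best_t, best_f1 = 0.0, -1.0
    for t in grid:
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        f1 = 2 * tp / max(2 * tp + fp + fn, 1)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

t, f1 = best_f1_threshold([0.9, 0.7, 0.4, 0.2], [1, 1, 0, 0])
assert f1 == 1.0 and 0.4 < t <= 0.7  # any threshold in (0.4, 0.7] is perfect
```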
The table also lists the precision and recall at the points where the best F1 score was recorded. While F1 is an excellent measure of the overall performance of an algorithm, the individual precision and recall numbers and their balance are particularly important in RS research, because precision tends to influence the trust users have in the RS, which in turn affects their use of it (Herlocker et al., 2004). It is noteworthy that while ImRec was relatively successful at predicting which image a user would like, its precision was relatively low in comparison with other algorithms, whereas TIRR has very high precision, and is therefore more likely to be trusted and used.
4.4. TIRR vs Collaborative Filtering
In addition to comparing TIRR to other content-based RRSs, we also ran tests comparing it to the current best-in-class collaborative filtering algorithms, RCF and LFRR.
LFRR is a collaborative filtering algorithm based on latent factor models trained by stochastic gradient descent, and RCF is a neighbourhood-based collaborative filtering algorithm. TIRR outperformed both of these algorithms on our test dataset, although by a slimmer margin than its lead over current content-based filtering algorithms. Nonetheless, this represents a significant advancement in the field of reciprocal recommendation, as in services where images are prominently used, our algorithm is likely to be more effective than current collaborative filtering methods.
Table 4 lists the peak performance metrics for the two algorithms. In addition to the higher F1 score, TIRR also has a comparable balance of precision and recall to LFRR.
5. Conclusion

In this paper, we presented a novel algorithm to interpret user preference history using only photos and make predictions about future preferences for reciprocal recommendation. We demonstrated that this can effectively be used as a predictor for the probability of mutual preference between two users, and therefore forms the basis for an effective recommender system. We also demonstrated that our algorithm outperforms state-of-the-art reciprocal recommender systems in offline tests using a large dataset from a dating service with real users.
This research demonstrates the value of including historical preference in reciprocal recommendation. Previous research has demonstrated the value of using RNNs to interpret sequences of preferences in user-item recommendation, but this is the first time it has been used in reciprocal recommendation. The improvement over a similar algorithm that does not use sequences of data shows the value of this approach.
Finally, the model itself represents a significant advance in the field of content-based reciprocal recommendation. The model’s success also allows us to draw interesting conclusions about the significance of photos in online dating, given their strong predictive power in this dataset. It also provides interesting insight into the potential power of content-based algorithms in online dating: while in many fields, they are outperformed by collaborative filtering, the algorithm presented in this paper performs better on evaluation metrics than the current state-of-the-art collaborative filtering algorithm.
References

- Aggarwal (2016). Recommender Systems: The Textbook. 1st edition, Springer, London, England.
- Bansal et al. (2016). Ask the GRU: multi-task learning for deep text recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, RecSys '16, New York, NY, pp. 107–114.
- Bell and Koren (2007). Lessons from the Netflix prize challenge. ACM SIGKDD Explorations Newsletter 9 (2), pp. 75–79.
- Bertinetto et al. (2016). Fully-convolutional Siamese networks for object tracking. In Proceedings of the 2016 European Conference on Computer Vision, ECCV 2016, pp. 850–865.
- Deng et al. (2009). ImageNet: a large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009, Miami, FL, pp. 248–255.
- He and Chu (2010). A social network-based recommender system (SNRS). Data Mining for Social Network Data 12, pp. 47–74.
- Herlocker et al. (2004). Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems 22 (1).
- Hochreiter and Schmidhuber (1997). Long short-term memory. Neural Computation 9 (8), pp. 1735–1780.
- Hochreiter (1998). The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 6 (2), pp. 107–116.
- Kleinerman et al. (2018). Optimally balancing receiver and recommended users' importance in reciprocal recommender systems. In Proceedings of the 12th ACM Conference on Recommender Systems, RecSys '18, New York, NY, pp. 131–139.
- Koch et al. (2015). Siamese neural networks for one-shot image recognition. In Proceedings of the 2015 ICML Deep Learning Workshop.
- Lam et al. (2008). Addressing cold-start problem in recommendation systems. In Proceedings of the 2nd International Conference on Ubiquitous Information Management and Communication, ICUIMC '08, New York, NY, pp. 208–211.
- Lang (1995). NewsWeeder: learning to filter netnews. In Proceedings of the 12th International Conference on Machine Learning, ICML 1995, pp. 331–339.
- Lei et al. (2016). Comparative deep learning of hybrid representations for image recommendations. In The IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, pp. 2545–2553.
- Lewenberg et al. (2016). Predicting personal traits from facial images using convolutional neural networks augmented with facial landmark information. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI 2016, pp. 4365–4366.
- Lin et al. (2013). Addressing cold-start in app recommendation: latent user models constructed from Twitter followers. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '13, New York, NY, pp. 283–292.
- Liu et al. (2017). DeepStyle: learning user preferences for visual recommendation. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '17, New York, NY, pp. 841–844.
- Neve and McConville (2020). ImRec: learning reciprocal preferences using images. In Proceedings of the Fourteenth ACM Conference on Recommender Systems, RecSys '20, New York, NY, pp. 170–179.
- Parkhi et al. (2015). Deep face recognition. In Proceedings of the British Machine Vision Conference, BMVC 2015, pp. 1–12.
- Pizzato et al. (2013). Recommending people to people: the nature of reciprocal recommenders with a case study in online dating. User Modeling and User-Adapted Interaction 23 (5), pp. 447–488.
- Pizzato et al. (2010). RECON: a reciprocal recommender for online dating. In Proceedings of the Fourth ACM Conference on Recommender Systems, RecSys '10, New York, NY, pp. 207–214.
- Siting et al. (2012). Job recommender systems: a survey. In Proceedings of the 7th International Conference on Computer Science & Education, ICCSE '12, Melbourne, Australia, pp. 920–924.
- Twardowski (2016). Modelling contextual information in session-aware recommender systems with neural networks. In Proceedings of the 10th ACM Conference on Recommender Systems, RecSys '16, New York, NY, pp. 273–276.
- van Meteren and van Someren (2000). Using content-based filtering for recommendation. In Proceedings of the ECML 2000 Workshop: Machine Learning in the New Information Age, pp. 47–56.
- Vinyals et al. (2016). Matching networks for one shot learning. Advances in Neural Information Processing Systems 29.
- Wu et al. (2017). Recurrent recommender networks. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, WSDM '17, New York, NY, pp. 495–503.
- Xia et al. (2015). Reciprocal recommendation system for online dating. In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM '15, New York, NY, pp. 234–241.