Disentangled Multimodal Representation Learning for Recommendation
Many multimodal recommender systems have been proposed to exploit the rich side information associated with users or items (e.g., user reviews and item images) for learning better user and item representations to enhance recommendation performance. Studies in psychology show that individuals differ in how they utilize different modalities to organize information. Therefore, for a given factor of an item (such as appearance or quality), the features of different modalities are of different importance to a user. However, existing methods ignore the fact that different modalities contribute differently to a user's preferences on various factors of an item. In light of this, in this paper, we propose a novel Disentangled Multimodal Representation Learning (DMRL) recommendation model, which can capture users' attention to different modalities on each factor in user preference modeling. In particular, we adopt a disentangled representation technique to ensure that the features of different factors in each modality are independent of each other. A multimodal attention mechanism is then designed to capture a user's modality preference for each factor. Based on the weights estimated by the attention mechanism, we make recommendations by combining a user's preference scores for each factor of the target item across different modalities. Extensive evaluations on five real-world datasets demonstrate the superiority of our method over existing methods.
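To make the factor-wise multimodal attention idea concrete, the following is a minimal PyTorch sketch of how per-factor, per-modality preference scores could be weighted by an attention network and then aggregated. All names, dimensions, the choice of three sources (ID, visual, textual), and the fusion details are assumptions for illustration; the paper's actual DMRL model, including its disentanglement regularizer, is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DMRLSketch(nn.Module):
    """Illustrative sketch (not the authors' implementation): factor-wise
    multimodal attention over ID, visual, and textual item features."""

    def __init__(self, n_users, n_items, n_factors=4, factor_dim=16,
                 visual_dim=512, text_dim=300):
        super().__init__()
        self.n_factors, self.factor_dim = n_factors, factor_dim
        d = n_factors * factor_dim
        # ID embeddings, viewed as n_factors chunks of size factor_dim
        # (a simple stand-in for disentangled factor representations;
        # the independence regularizer is omitted here).
        self.user_emb = nn.Embedding(n_users, d)
        self.item_emb = nn.Embedding(n_items, d)
        # Project each modality into the same factorized space.
        self.visual_proj = nn.Linear(visual_dim, d)
        self.text_proj = nn.Linear(text_dim, d)
        # Attention network: for each (user, item, factor), produce
        # weights over the three sources {ID, visual, textual}.
        self.attn = nn.Linear(4 * factor_dim, 3)

    def forward(self, users, items, visual_feat, text_feat):
        B, k, dk = users.size(0), self.n_factors, self.factor_dim
        u = self.user_emb(users).view(B, k, dk)               # (B, k, dk)
        i_id = self.item_emb(items).view(B, k, dk)
        i_vis = self.visual_proj(visual_feat).view(B, k, dk)
        i_txt = self.text_proj(text_feat).view(B, k, dk)
        # Per-factor attention weights over the three sources.
        attn_in = torch.cat([u, i_id, i_vis, i_txt], dim=-1)  # (B, k, 4*dk)
        w = F.softmax(self.attn(attn_in), dim=-1)             # (B, k, 3)
        # Per-factor, per-source preference scores as dot products.
        scores = torch.stack(
            [(u * m).sum(-1) for m in (i_id, i_vis, i_txt)], dim=-1
        )                                                      # (B, k, 3)
        # Weight scores by modality attention, then sum over factors.
        return (w * scores).sum(-1).sum(-1)                    # (B,)


# Usage with toy inputs (hypothetical feature dimensions).
model = DMRLSketch(n_users=100, n_items=50)
score = model(torch.tensor([0]), torch.tensor([3]),
              torch.randn(1, 512), torch.randn(1, 300))
```

The design point this sketch illustrates is that the attention weights are computed per factor, so the same user can rely on, say, visual features for an appearance-related factor but textual review features for a quality-related factor.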