Efficient Image Gallery Representations at Scale Through Multi-Task Learning

by Benjamin Gutelman et al.

Image galleries provide a rich source of diverse information about a product that can be leveraged across many recommendation and retrieval applications. We study the problem of building a universal image gallery encoder through a multi-task learning (MTL) approach and demonstrate that it is a practical way to achieve generalizability of learned representations to new downstream tasks. Additionally, we analyze the relative predictive performance of MTL-trained solutions against optimal, and substantially more expensive, solutions, and find signals that MTL can be a useful mechanism to address sparsity in low-resource binary tasks.
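The MTL setup described in the abstract can be pictured as one shared gallery encoder feeding several lightweight per-task heads, with the task losses summed so every task shapes the same representation. The sketch below is purely illustrative: the pooling scheme, dimensions, head form, and all variable names are assumptions, not the paper's architecture.

```python
# Minimal sketch of a shared gallery encoder trained with multiple task heads.
# All shapes, names, and the pooling choice are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def encode_gallery(image_feats, W_enc):
    """Shared encoder: pool per-image features, then project.
    image_feats: (n_images, d_in) features for one product gallery."""
    pooled = image_feats.mean(axis=0)   # simple mean pooling over the gallery
    return W_enc @ pooled               # (d_emb,) universal gallery embedding

def task_head(emb, W_task):
    """Per-task binary head: logit -> probability."""
    logit = W_task @ emb
    return 1.0 / (1.0 + np.exp(-logit))

# Toy dimensions (assumed for illustration).
d_in, d_emb, n_tasks = 8, 4, 3
W_enc = rng.normal(size=(d_emb, d_in)) * 0.1
W_tasks = [rng.normal(size=(d_emb,)) * 0.1 for _ in range(n_tasks)]

gallery = rng.normal(size=(5, d_in))    # one gallery of 5 images
emb = encode_gallery(gallery, W_enc)

# MTL objective: sum of per-task binary cross-entropy losses. In training,
# every task's gradient flows through the single shared encoder, which is
# how sparse low-resource tasks can borrow signal from richer ones.
labels = [1.0, 0.0, 1.0]
losses = []
for W_t, y in zip(W_tasks, labels):
    p = task_head(emb, W_t)
    losses.append(-(y * np.log(p) + (1 - y) * np.log(1 - p)))
total_loss = sum(losses)
```

Because the heads are cheap relative to the encoder, adding a new downstream task amounts to training one small head on top of the frozen (or jointly fine-tuned) shared embedding.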




Exploring the Role of Task Transferability in Large-Scale Multi-Task Learning

Recent work has found that multi-task training with a large number of di...

Low Resource Multi-Task Sequence Tagging – Revisiting Dynamic Conditional Random Fields

We compare different models for low resource multi-task sequence tagging...

Synergies Between Disentanglement and Sparsity: a Multi-Task Learning Perspective

Although disentangled representations are often said to be beneficial fo...

Curriculum Modeling the Dependence among Targets with Multi-task Learning for Financial Marketing

Multi-task learning for various real-world applications usually involves...

SpeechNet: A Universal Modularized Model for Speech Processing Tasks

There is a wide variety of speech processing tasks ranging from extracti...

A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks

Much effort has been devoted to evaluating whether multi-task learning ca...

Efficient and robust multi-task learning in the brain with modular task primitives

In a real-world setting, biological agents do not have infinite resources...