
Publication

Cross-modal fashion search

Book Contribution - Book Chapter / Conference Contribution

© Springer International Publishing Switzerland 2016. In this demo we focus on cross-modal (visual and textual) e-commerce search within the fashion domain. In particular, we demonstrate two tasks: (1) given a query image (without any accompanying text), we retrieve textual descriptions that correspond to the visual attributes in the visual query; and (2) given a textual query that may express an interest in specific visual characteristics, we retrieve relevant images (without leveraging textual meta-data) that exhibit the required visual attributes. The first task is especially useful for online stores that want to automatically organize and mine predominantly visual item collections according to their attributes without human input. The second task is useful for users who want to find items with specific visual characteristics when no text describing the target image is available. We use state-of-the-art visual and textual features, as well as a state-of-the-art latent variable model to bridge between textual and visual data: bilingual latent Dirichlet allocation. Unlike traditional search engines, we demonstrate a truly cross-modal system that directly bridges between visual and textual content without relying on pre-annotated meta-data.
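The core retrieval idea described above can be illustrated with a minimal sketch. It assumes a bilingual-LDA-style model has already inferred a topic distribution over a shared latent topic space for each item in either modality; the item names, topic vectors, and helper functions below are hypothetical illustrations, not the authors' implementation. Retrieval then reduces to ranking candidates of the other modality by similarity of their topic distributions to the query's:

```python
import math

def cosine(p, q):
    # Cosine similarity between two topic-proportion vectors.
    num = sum(a * b for a, b in zip(p, q))
    den = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return num / den if den else 0.0

def retrieve(query_topics, candidates, k=2):
    # Rank candidate items (name -> topic vector) by similarity to the query
    # and return the k best matches.
    ranked = sorted(candidates.items(),
                    key=lambda kv: cosine(query_topics, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical topic distributions over 3 shared latent topics, as a
# BiLDA-style model might infer them for a query image (task 1) and for
# candidate textual descriptions.
image_query = [0.7, 0.2, 0.1]
descriptions = {
    "red floral dress":  [0.65, 0.25, 0.10],
    "blue denim jacket": [0.10, 0.20, 0.70],
    "striped shirt":     [0.30, 0.40, 0.30],
}
print(retrieve(image_query, descriptions, k=1))  # → ['red floral dress']
```

Task 2 is the same procedure with the roles reversed: infer the topic distribution of the textual query and rank images by the same similarity, since both modalities live in the one shared topic space.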
Book: Lecture Notes in Computer Science
Pages: 367 - 373
ISBN: 9783319276731
Publication year: 2016