
Publication

Inferring user interests on social media from text and images

Book Contribution - Book Chapter, Conference Contribution

© 2015 IEEE. We address inferring user interests on social media as a multi-class classification problem and propose approaches that exploit the multi-modal data (text, images, etc.) such platforms often contain. We use user-generated data from Pinterest.com as a natural expression of users' interests: users collect images they like on the platform and often assign a category label, so we treat each pin (an image-text pair) as labeled with a category that represents a broad user interest. This task is useful beyond Pinterest because most user-generated data on the Web is not readily categorized into interest labels. Beyond predicting users' interests, our main contribution is exploiting a multi-modal space composed of images and text. This is a natural approach, since humans express their interests through a combination of modalities, yet exploiting multi-modal spaces in this context has received little attention in the literature. We performed eleven experiments using state-of-the-art image and textual representations, such as convolutional neural networks, word embeddings, and bags of visual and textual words. Our experimental results show that jointly processing images and text increases overall interest-classification accuracy compared with uni-modal representations (i.e., using only text or only images).
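The joint image-text approach described in the abstract can be illustrated with a minimal early-fusion sketch: concatenate an image feature vector and a text feature vector per pin, then train a single multi-class classifier on the joint space. This is not the paper's implementation; the feature dimensions, synthetic data, and plain logistic regression below are all illustrative assumptions standing in for CNN image embeddings and word-embedding text features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a CNN-style image
# embedding and a word-embedding-style text vector for each pin.
N, D_IMG, D_TXT, N_CLASSES = 300, 64, 32, 3

# Synthetic stand-ins for real features: each interest category gets
# its own cluster centre in both modalities.
labels = rng.integers(0, N_CLASSES, size=N)
img_centres = rng.normal(size=(N_CLASSES, D_IMG))
txt_centres = rng.normal(size=(N_CLASSES, D_TXT))
img_feats = img_centres[labels] + 0.5 * rng.normal(size=(N, D_IMG))
txt_feats = txt_centres[labels] + 0.5 * rng.normal(size=(N, D_TXT))

# Early fusion: concatenate the modalities into one joint feature space.
X = np.hstack([img_feats, txt_feats])

# Minimal multinomial logistic regression trained by gradient descent.
W = np.zeros((X.shape[1], N_CLASSES))
b = np.zeros(N_CLASSES)
Y = np.eye(N_CLASSES)[labels]          # one-hot interest labels
for _ in range(200):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)  # softmax probabilities
    grad = (p - Y) / N                 # cross-entropy gradient
    W -= 1.0 * (X.T @ grad)
    b -= 1.0 * grad.sum(axis=0)

preds = (X @ W + b).argmax(axis=1)
accuracy = (preds == labels).mean()
print(f"joint-modality training accuracy: {accuracy:.2f}")
```

In the paper's setting, `img_feats` and `txt_feats` would come from learned representations (e.g., CNN activations and word embeddings), and uni-modal baselines would use only one of the two blocks of `X` for comparison.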
Book: 2015 IEEE International Conference on Data Mining Workshop (ICDMW)
Pages: 1342 - 1347
ISBN: 9781467384926
Publication year: 2015
Authors from: Higher Education
Accessibility: Open