Publications
Chosen filters:
Cross-modal representation of spoken and written word meaning in left pars triangularis (University of Antwerp, Ghent University, KU Leuven)
The neurobiological basis of the correspondence in meaning extracted from written versus spoken input remains to be fully understood. Here, in a total of 38 subjects, the functional anatomy of cross-modal semantic similarity for concrete words was determined based on a dual criterion: First, a voxelwise univariate analysis had to show significant activation during a semantic task (property verification) performed with written and spoken concrete words ...
Cross-modal fashion search (KU Leuven)
© Springer International Publishing Switzerland 2016. In this demo we focus on cross-modal (visual and textual) e-commerce search within the fashion domain. Particularly, we demonstrate two tasks: (1) given a query image (without any accompanying text), we retrieve textual descriptions that correspond to the visual attributes in the visual query; and (2) given a textual query that may express an interest in specific visual characteristics, we ...
Multimodal Machine Learning for Personalized Interaction with Cultural Heritage (KU Leuven)
Multimodal machine learning involving textual and visual data is a fundamental research topic in the cross-field of natural language processing and computer vision. The employment of multimodal machine learning for personalized interaction in the cultural heritage domain has attracted increasing attention in recent years. On one hand, it can improve user experience for visitors in a physical museum when adopted in tasks such as multimodal ...
Cross-Modality Image Registration Using a Training-Time Privileged Third Modality (KU Leuven)
In this work, we consider the task of pairwise cross-modality image registration, which may benefit from exploiting additional images available only at training time from a modality different from those being registered. As an example, we focus on aligning intra-subject multiparametric Magnetic Resonance (mpMR) images, between T2-weighted (T2w) scans and diffusion-weighted scans with high b-value (DWI [Formula: see text]). For ...
Fashion meets computer vision and NLP at e-commerce search (KU Leuven)
In this paper we focus on cross-modal (visual and textual) e-commerce search within the fashion domain. Particularly, we investigate two tasks: (1) given a query image, we retrieve textual descriptions that correspond to the visual attributes in the query; and (2) given a textual query that may express an interest in specific visual product characteristics, we retrieve relevant images that exhibit the required visual attributes. To this end, we ...
Left perirhinal cortex codes for similarity in meaning between written words (University of Antwerp, KU Leuven)
Left perirhinal cortex has been previously implicated in associative coding. According to a recent experiment, the similarity of perirhinal fMRI response patterns to written concrete words is higher for words which are more similar in their meaning. If left perirhinal cortex functions as an amodal semantic hub, one would predict that this semantic similarity effect would extend to the spoken modality. We conducted an event-related fMRI ...
Strain differences of the effect of enucleation and anophthalmia on the size and growth of sensory cortices in mice (KU Leuven)
Anophthalmia is a condition in which the eye fails to develop from the early embryonic period onward. Early blindness induces cross-modal plastic modifications in the brain, such as auditory and haptic activation of the visual cortex, and also leads to greater recruitment of the somatosensory and auditory cortices. The visual cortex is activated by auditory stimuli in anophthalmic mice, and activity is known to alter the growth pattern of the cerebral ...
Applications of artificial intelligence for the resource-scarce cultural heritage domain (University of Antwerp)
The cultural heritage domain, mainly represented by Galleries, Libraries, Archives, and Museums (GLAM), is massively digitising its collections, leading to an increasing amount of raw digital material. Such material is slow and expensive to annotate, as it requires the intervention of highly skilled professionals who are difficult to find and costly to train. Artificial intelligence (AI) has made great progress in the last decade and nowadays ...
On the localization of tastes and tasty products in 2D space (Hasselt University)
People map different sensory stimuli, and words that describe/refer to those stimuli, onto spatial dimensions in a manner that is non-arbitrary. Here, we evaluate whether people also associate basic taste words and products with characteristic tastes with a distinctive location (e.g., upper right corner) or a more general direction (e.g., more right than left). Based on prior research on taste and location valence, we predicted that sweetness ...