
Publication

End-to-end learning for music audio

Book Contribution - Book Chapter / Conference Contribution

Content-based music information retrieval tasks have traditionally been solved using engineered features and shallow processing architectures. In recent years, there has been increasing interest in using feature learning and deep architectures instead, thus reducing the required engineering effort and the need for prior knowledge. However, this new approach typically still relies on mid-level representations of music audio, e.g., spectrograms, instead of raw audio signals. In this paper, we investigate whether it is possible to apply feature learning directly to raw audio signals. We train convolutional neural networks using both approaches and compare their performance on an automatic tagging task. Although they do not outperform a spectrogram-based approach, the networks are able to autonomously discover frequency decompositions from raw audio, as well as phase- and translation-invariant feature representations.
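To make the end-to-end setup described in the abstract concrete, the sketch below shows a minimal 1D convolutional network that consumes raw waveforms and produces multi-label tag predictions. This is an illustrative assumption, not the authors' exact architecture: the framework (PyTorch), the layer sizes, the 16 kHz sample rate, and the choice of 50 tags are all placeholder values. The strided first convolution plays the role of the learned frequency decomposition mentioned in the abstract, and the pooling layers provide a degree of phase and translation invariance.

```python
# Minimal sketch (assumed architecture, not the paper's exact model):
# a 1D CNN applied directly to raw audio for automatic tagging.
import torch
import torch.nn as nn

class RawAudioTagger(nn.Module):
    def __init__(self, n_tags=50):
        super().__init__()
        # Strided "filterbank" layer applied directly to the waveform;
        # its filters can learn a frequency decomposition from data.
        self.frontend = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=256, stride=256),
            nn.ReLU(),
        )
        # Higher-level convolutions and pooling over the learned representation.
        self.features = nn.Sequential(
            nn.Conv1d(32, 32, kernel_size=8),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 32, kernel_size=8),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.head = nn.Sequential(
            nn.AdaptiveMaxPool1d(1),  # global pooling over time
            nn.Flatten(),
            nn.Linear(32, 100),
            nn.ReLU(),
            nn.Linear(100, n_tags),   # one logit per tag
        )

    def forward(self, waveform):
        # waveform: (batch, 1, n_samples), e.g. a few seconds of 16 kHz audio
        x = self.frontend(waveform)
        x = self.features(x)
        return self.head(x)           # raw logits; pair with BCEWithLogitsLoss

if __name__ == "__main__":
    model = RawAudioTagger(n_tags=50)
    clips = torch.randn(2, 1, 16000 * 3)   # two 3-second clips at 16 kHz
    print(model(clips).shape)               # -> torch.Size([2, 50])
```

A spectrogram-based counterpart would replace the strided front-end with a fixed time-frequency transform and 2D convolutions; comparing the two variants on the same tagging data mirrors the comparison the paper describes.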
Book: International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Pages: 6964-6968
ISBN: 9781479928934
Publication year: 2014
BOF-keylabel: yes
IOF-keylabel: yes
Accessibility: Closed