Deep learning for audio-based music recommendation
by Sander Dieleman

The advent of deep learning has made it possible to extract high-level information from perceptual signals without having to manually and explicitly specify how to obtain it; instead, this can be learned from examples. This creates opportunities for automated content analysis of musical audio signals. In this talk, I will discuss how deep learning techniques can be used for audio-based music recommendation, which lets us tackle the item cold-start problem that burdens the prevailing collaborative filtering approaches.
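The core idea behind this line of work can be sketched in a few lines. This is a toy illustration, not the speaker's actual system: a least-squares regressor stands in for a deep convolutional network, and random vectors stand in for mel-spectrogram features and collaborative-filtering latent factors. The point is the cold-start mechanism: a brand-new song with no listening history gets latent factors predicted from its audio alone, and can then be recommended by similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

n_songs, n_audio_feats, n_factors = 200, 64, 8

# Simulated audio features (stand-in for pooled mel-spectrogram statistics).
audio = rng.normal(size=(n_songs, n_audio_feats))

# Simulated "ground-truth" latent factors, as collaborative filtering would
# produce them for songs that already have listening data.
true_map = rng.normal(size=(n_audio_feats, n_factors)) / np.sqrt(n_audio_feats)
factors = audio @ true_map + 0.01 * rng.normal(size=(n_songs, n_factors))

# Fit a map from audio to latent factors (the deep net's job, simplified
# here to linear least squares).
W, *_ = np.linalg.lstsq(audio, factors, rcond=None)

# Cold start: a new song with no plays. Predict its factors from audio,
# then rank catalogue songs by cosine similarity in factor space.
new_song = rng.normal(size=(1, n_audio_feats))
pred = new_song @ W

sims = (factors @ pred.T).ravel() / (
    np.linalg.norm(factors, axis=1) * np.linalg.norm(pred) + 1e-9
)
top5 = np.argsort(sims)[::-1][:5]
print("nearest catalogue songs:", top5)
```

In the actual approach discussed in the talk, the regressor is a deep network operating on audio spectrograms, and the target factors come from a collaborative filtering model trained on real usage data.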

About the speaker: Sander Dieleman is a Research Scientist at Google DeepMind. He was previously a PhD student at Ghent University, where he conducted research on feature learning and deep learning techniques for hierarchical representations of musical audio signals. In the summer of 2014, he interned at Spotify in New York, where he worked on implementing audio-based music recommendation using deep learning on an industrial scale.