Abstract: We will take the audience on a musical journey: the journey of an artificial intelligence learning how to reinvent musical instruments. We start with digitally interpreting sounds, move on to mimicking instruments using feed-forward neural nets, and end with a metamorphosis of instruments using Generative Adversarial Networks.
Synthesis techniques such as subtractive and sample-based synthesis are used to create the sounds of instruments we know, but they require a lot of tuning. A less common technique, additive synthesis, builds sounds by combining basic waveforms. An artificial intelligence can use this technique to generate these sounds for us. And maybe more...
Computations and interpretations on digital signals, such as sound recordings, are typically complex and not very intuitive. Fourier transforms allow us to decompose complex sounds into combinations of sine waves. This is exactly what we need for additive synthesis.
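As a minimal sketch of this idea (using NumPy, with arbitrary example frequencies): we sum a few sine waves to build a toy "instrument" tone, then use the Fourier transform to recover exactly which partials went into it.

```python
import numpy as np

SR = 16000  # sample rate in Hz

def additive_synth(freqs, amps, sr=SR, dur=1.0):
    """Additive synthesis: sum weighted sine waves into one signal."""
    t = np.arange(int(sr * dur)) / sr
    return sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))

# A toy tone: a 220 Hz fundamental plus two harmonics.
signal = additive_synth([220.0, 440.0, 660.0], [1.0, 0.5, 0.25])

# The Fourier transform decomposes it back into its sine components.
spectrum = np.abs(np.fft.rfft(signal))
freq_axis = np.fft.rfftfreq(len(signal), d=1 / SR)
partials = np.sort(freq_axis[np.argsort(spectrum)[-3:]])
print(partials)  # the three strongest bins: 220, 440 and 660 Hz
```

Going the other way, a model that predicts the amplitude of each frequency bin can hand its output straight to `additive_synth` to produce audio.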
We started off simple by training a model to combine sine waves to mimic existing sounds. Using a large set of recorded sounds with known tone and instrument, we trained a supervised feed-forward neural net to generate spectrograms from tone and instrument labels. From the spectrogram we can easily resynthesize the actual sound.
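The shape of such a model can be sketched as follows. This is an illustrative forward pass only, with made-up sizes and random (untrained) weights; in the actual setup the weights are learned from the recorded spectrograms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 12 tones, 4 instruments, 64-bin spectrogram frame.
N_TONE, N_INST, N_BINS, HIDDEN = 12, 4, 64, 32

def encode(tone, instrument):
    """One-hot encode the (tone, instrument) pair as network input."""
    x = np.zeros(N_TONE + N_INST)
    x[tone] = 1.0
    x[N_TONE + instrument] = 1.0
    return x

# A single-hidden-layer feed-forward net (random weights for the sketch).
W1 = rng.normal(0, 0.1, (HIDDEN, N_TONE + N_INST))
W2 = rng.normal(0, 0.1, (N_BINS, HIDDEN))

def predict_spectrogram_frame(tone, instrument):
    h = np.maximum(0, W1 @ encode(tone, instrument))  # ReLU hidden layer
    return W2 @ h  # predicted magnitude per frequency bin

frame = predict_spectrogram_frame(tone=5, instrument=2)
print(frame.shape)  # one 64-bin spectrogram frame
```

Each predicted bin magnitude becomes the amplitude of one sine wave, which is how additive synthesis turns the spectrogram back into sound.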
Obviously we did not go through all this effort just to play the sounds that we already know. We want to explore the unexplored, travel the rimworld of sound space! Generative Adversarial Networks require us to set up two separate networks: a generating network that invents sounds, trying to please a second network trained to recognize known instruments. At first this allows us to create sounds that are very similar to the instrument the second network is trained to recognize. However, as we tweak the settings of the generative network, we can morph to a new paradigm of sounds.
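The two-network setup can be sketched in a few lines. This is a toy forward pass with hypothetical sizes and untrained random weights, not the actual training code: it only shows the roles of the two networks and the adversarial objective.

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT, N_BINS = 8, 64  # hypothetical latent and spectrogram sizes

# Generator: latent noise -> spectrogram frame (weights would be learned).
G = rng.normal(0, 0.1, (N_BINS, LATENT))

def generate(z):
    return np.tanh(G @ z)

# Discriminator: spectrogram frame -> probability it is a real instrument.
D = rng.normal(0, 0.1, (1, N_BINS))

def discriminate(frame):
    return 1.0 / (1.0 + np.exp(-(D @ frame)))  # sigmoid score in (0, 1)

z = rng.normal(size=LATENT)
fake = generate(z)
score = float(discriminate(fake))

# Adversarial objective: the generator is trained to push this score
# toward 1 (fool the discriminator), while the discriminator is trained
# to score real recordings high and generated frames low.
gen_loss = -np.log(score)
```

Once trained, sliding the latent input `z` around is what lets us morph between instrument-like sounds and into new ones.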
Bio: I'm David. I work as a teacher of artificial intelligence at the university of applied sciences in Utrecht. Before I started teaching, I worked as a prototyping developer for AI research at Philips Research, developing intelligent agents using machine learning and reinforcement learning for consumer healthcare products. It was very interesting, but my need for more social engagement drove me towards teaching, a step that has made me a very happy person so far.
I love how artificial intelligence is like a lens through which we can see the world. Whether or not the models we make of the world are realistic (do brains really work like a neural network? Not really), they are still inspiring and often useful. It's just fun to try and recreate everything. There is only one thing that I probably like even more, and that's making music. I've been playing piano and guitar since the age of 7, and it is a big part of who I am. Later in life I became more interested in the electronic side of music too, and I have now arrived at a point where I can combine my two favorite things: creating instruments using AI.
David Isaacs Paternostro
AI Professor | University of Utrecht