Music Hackspace, Ircam RAVE, Neural Networks
I watched this Max meet-up hosted by Music Hackspace.
The first presentation was by Can Memişoğulları:
“Can Memişoğulları is a multi-disciplinary artist from Istanbul-Turkey. Memişoğulları’s artistic practices consist of electro-acoustic and acoustic composition, new media art installations, and A/V live performances. His art aesthetic builds on deconstruction for the sake of construction and the earnings from that process. He is also interested in human interaction and its consequences in art. To examine that, he likes to create structures and systems that can decide and upgrade themselves onwards with the help of their audience. They can react, adapt, and communicate. As a result of these qualities, they are engaging, immersive, and alive. Lately, his focus is on creating new pieces that interpolate technology, human interaction, philosophy, and art.”
Memişoğulları’s presentation on his work had some similarities with mine, as he also uses computer vision tools to build a tracking system, though his work is focused on the physical space of a gallery.
The final presentation was by Samuel Pearce-Davies, who is studying for a PhD in Computer Music at Plymouth University. His talk focused on neural networks and reconstructing audio from spectrograms using Markov chains, as well as generative MIDI composition using Markov chains.
I had previously spent some time researching Markov chains last year when creating a generative sequencer, so I found this talk particularly interesting.
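To illustrate the general idea (this is my own minimal sketch, not Pearce-Davies’ implementation, and the transition table is an invented example), a first-order Markov chain note generator can be written in a few lines of Python:

```python
import random

# Transition table: each MIDI note maps to the notes that may follow it.
# These values are made up for illustration.
transitions = {
    60: [62, 64, 67],  # from C4, move to D4, E4 or G4
    62: [60, 64],
    64: [62, 67],
    67: [60, 64, 72],
    72: [67],
}

def generate(start, length):
    """Walk the chain from `start`, returning `length` MIDI note numbers."""
    sequence = [start]
    for _ in range(length - 1):
        # Pick the next note at random from the current note's options.
        sequence.append(random.choice(transitions[sequence[-1]]))
    return sequence

random.seed(1)
print(generate(60, 8))
```

Each run produces a different melody that still respects the transition rules, which is what makes the technique appealing for generative sequencing.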
This led me to read about RAVE (Realtime Audio Variational autoEncoder) and nn~:
https://forum.ircam.fr/projects/detail/rave-vst/
https://discourse.flucoma.org/t/drumgan-how-to-make-boring-drum-sounds-in-more-steps/1513/11
I played around with nn~ for a while; however, it's important to note (as shown in the forum thread above) that training your own model requires a lot of computing power, which is currently quite limiting. It is still interesting to play with, though.