Practical/Production
For the creation of my audio paper I used Ableton Live, as it is the most accessible DAW to me and I have lots of experience using it.

Using the Ableton Simpler device with my Push 2 allowed me to chop up and play with the samples intuitively, as an instrument. This is my favourite thing to do with Ableton. I played around with looping and different slicing modes (transient detection, section) and found individual patterns I liked (like using an MPC). I also used a technique where an arpeggiator and a random note selector choose random samples, which, combined with Live's LFO modulator devices, allows for fairly complex randomisation.
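This all happens inside Live's devices rather than in code, but the core idea, an arpeggiator firing random notes at a sliced sample, can be sketched in a few lines of Python. The slice names here are hypothetical stand-ins, not the samples I actually used:

```python
import random

# Hypothetical slice names standing in for pads on a sliced Simpler instrument.
SLICES = ["door_slam", "train_brake", "engine_hum", "vinyl_crackle"]

def random_pattern(steps=16, rest_chance=0.25, seed=None):
    """Build a step sequence by letting each step trigger a random slice
    (or a rest), mimicking an arp feeding random notes into a slicer."""
    rng = random.Random(seed)
    return [None if rng.random() < rest_chance else rng.choice(SLICES)
            for _ in range(steps)]
```

Slowly modulating something like `rest_chance` over time is roughly what layering Live's LFO device on top adds: the pattern drifts between sparse and dense instead of staying uniformly random.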

I used Max to experiment with and abstract the samples I collected, as detailed in a separate blog post.
The more textural, abstract layer of samples was made using a granular synth I built in Max. I then recorded experiments, changing parameter values and swapping out samples on the fly.
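My patch lives in Max, so this is not the actual implementation, but the basic granular idea, scattering short windowed grains taken from around a playhead position, can be sketched in Python with numpy. All the parameter names here are illustrative:

```python
import numpy as np

def granular(source, sr, out_seconds, grain_ms=80.0, density=50.0,
             position=0.5, spray=0.1, seed=0):
    """Scatter short windowed grains from `source` into an output buffer.

    `position` (0..1) is the playhead in the source, `spray` is the random
    deviation around it, and `density` is grains per second.
    """
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    out = np.zeros(int(sr * out_seconds))
    window = np.hanning(grain_len)        # fade each grain in/out to avoid clicks
    for _ in range(int(density * out_seconds)):
        # pick a start point near `position`, jittered by `spray`
        pos = float(np.clip(position + rng.uniform(-spray, spray), 0.0, 1.0))
        start = int(pos * (len(source) - grain_len))
        grain = source[start:start + grain_len] * window
        # drop the grain at a random time in the output, overlapping freely
        t = int(rng.integers(0, len(out) - grain_len))
        out[t:t + grain_len] += grain
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```

Sweeping `position` and `spray` while it runs is essentially what I was doing live in the Max patch: the texture shifts from a frozen drone to a smeared, unrecognisable version of the source.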
I would have liked to spend more time fine-tweaking individual samples and working in all of the ones I collected, but I ran out of time in the end. One of the sample flips I was happy with was the transition element when moving from the interior of the car to the train: the sample, taken from Steve Reich's Different Trains, builds into a chord.
I recorded the voice over in my room using a Lewitt LCT 440 PURE that I bought earlier this year with leftover student loan money. Even though I'm uncomfortable recording my voice and listening back to it, especially in the company of others, owning my own microphone and being able to use it in my own space gives me confidence and makes recording much more accessible to me. The idea of booking out one of the rooms at uni to record my voice would make me very uncomfortable.
The vocal processing was quite minimal: just some EQ on nasal frequencies and some multiband compression. I didn't use any reverb on my voice, which is a first, as I didn't want to muddy the whole thing, and I was thinking about whether it was conceptually needed. The whole paper is constructed as a journey through an external environment, and a blanket reverb didn't suit that idea of shifting locations. I suppose the voice is a bit like my inner monologue of things I've been thinking about recently. I tried to keep the voice clear throughout by sidechaining the soundscape elements (location recordings and sampled songs) to it.
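In Live this is just a compressor with its sidechain input set to the vocal track, but the ducking behaviour itself is simple enough to sketch: follow the voice's level with a fast-attack, slow-release envelope and turn the soundscape down in proportion. A minimal illustration, with made-up parameter values:

```python
import numpy as np

def duck(bed, voice, sr, depth=0.7, release_ms=200.0):
    """Lower the soundscape `bed` wherever `voice` is present.

    A smoothed envelope follower tracks the voice level; the bed is
    attenuated by up to `depth` (0..1) while the voice is loud.
    """
    n = min(len(bed), len(voice))
    env = np.abs(voice[:n])
    # one-pole release coefficient so the gain recovers gradually
    coeff = np.exp(-1.0 / (sr * release_ms / 1000.0))
    smooth = np.empty(n)
    level = 0.0
    for i in range(n):
        level = max(env[i], level * coeff)   # instant attack, slow release
        smooth[i] = level
    peak = smooth.max()
    if peak == 0:
        return bed[:n].copy()
    gain = 1.0 - depth * (smooth / peak)     # 1.0 when voice is silent
    return bed[:n] * gain
```

The slow release is what keeps the soundscape from pumping audibly between words, which is the same thing the release knob does on Live's sidechain compressor.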
After exporting the file I listened to it and decided it was a bit loud and busy in the beginning, so I went back into Ableton to adjust the dynamics. This made the paper feel like a progressing journey instead of a wall of sound, which can work, but isn't what I was going for.

After listening to the audio I opened it in iZotope RX Elements to see if it picked up any clicks or clipping. The levels were all fine but it detected some clicks, so I let it do its thing. I think we can all agree that, regardless of one's thoughts on the sound itself, the visualisation of a spectrogram is beautiful, like the northern lights combined with a bonfire.
I mix on headphones as I don't own any monitor speakers; my headphones are Beyerdynamic DT 900 PRO X. They are semi-open back and I really enjoy using them. I used Sonarworks SoundID Reference to balance out any colour they add.
We talked in class about how we would consider sharing/distributing the audio paper. I wouldn't release it as it was submitted, as I wouldn't want my voice on it. I would like to take it down a route similar to Burial's recent album/EP releases. I feel this would also remove the perceived academic element (which could be seen as limiting the audience), while still maintaining the motivation behind its creation.