The first in the series of workshops was held in the Anatomy Museum at King’s College London, on Friday 27 January 2017, 17.00 – 21.00.
App development is currently in progress.
For the first workshop, we re-purposed two existing apps and created a user experience that facilitated an interaction between the museum space and the users’ voices and mobile devices.
We spoke, sang and thought through the phonemes and letters of a single sentence: “What keeps us together?”
1. Smartphone
2. A barcode scanner app (QR Reader 5.9.2, available free for both iPhone and Android phones)
3. A dictation app (Philips Dictation Recorder, available free for both iPhone and Android phones)
1. Take a look at the map of the room, which shows where the phonemes and letters are distributed.
2. Please note that the indicated locations of the phonemes and letters are approximate.
3. In total, there are sixteen phonemes and letters, sixteen short audio clips (each phoneme or letter has an equivalent audio clip), and sixteen Quick Response Codes (QR codes) placed throughout the room.
4. You will see the QR codes both on the ground floor and on the balcony.
5. Each QR code contains an audio piece (vocalization of a phoneme or a letter).
6. To retrieve the audio pieces one by one, please download the QR Reader app for free from iTunes or Google Play. You can then scan the codes with your smartphone.
7. Retrieve the audio pieces and listen to them one by one. Take your time and feel free to start from anywhere in the room.
8. When you feel ready, vocalize each phoneme or letter at the spot where you found its QR code and retrieved its audio piece.
9. Record your vocalizations with the Philips Dictation Recorder and send them to email@example.com.
10. We will play back the vocalizations together.
The audio samples below are sound collages based on the participants’ recordings gathered during the first workshop. At the Sonic Arts Research Centre, we distributed these recordings across different speakers to experiment further with the spatialization of the voices, and we used binaural recording to capture this experience. The recording sessions were held in the Sonic Lab and the Surround Studio, engineered by David Bird. Please listen to the samples below using headphones.
Sonic Lab 1
Sonic Lab 2
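The idea of distributing the voices across different speakers can be imitated in two channels with constant-power panning, a standard technique for placing a mono source in a stereo field. The sketch below is only an illustration of that principle: the sine-tone “voices”, the chosen positions, and the function names are assumptions, not the actual Sonic Lab or Surround Studio configuration.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def pan(mono, position):
    """Constant-power pan: position -1.0 (hard left) to +1.0 (hard right)."""
    theta = (position + 1) * np.pi / 4  # map [-1, 1] onto [0, pi/2]
    left = mono * np.cos(theta)
    right = mono * np.sin(theta)
    return np.stack([left, right], axis=1)  # shape: (samples, 2)

# place three stand-in "voices" at left, centre, and right
t = np.linspace(0, 1, SR, endpoint=False)
voices = [np.sin(2 * np.pi * f * t) for f in (220, 330, 440)]
positions = (-1.0, 0.0, 1.0)
stereo_mix = sum(pan(v, p) for v, p in zip(voices, positions))
```

Because the pan law keeps the total power constant, a voice moved between positions neither swells nor drops in perceived loudness, which is why this curve is preferred over simple linear cross-fading.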
Pictures from Workshop 1:
This project is a collaboration between King’s College London’s Department of Music and Department of Media and Computing & Rapid-Mix at Goldsmiths, University of London. It is supported by the Cultural Institute at King’s as part of the Early Career Researchers scheme.
The second workshop took place in Dr. Marianthi Papalexandri-Alexandri’s class, “Shaping Sound,” at Cornell University in October 2017. For this workshop, we followed the guidelines suggested above for Workshop 1. However, we made one addition: we developed an “auto-composition” script, which composed the recorded voices at different spacings (i.e., voices entering one second, four seconds, or nine seconds apart). Using musical instruments, we also explored different vocalizations and sound effects. Below are the audio samples composed at the different spacings (one second, four seconds, and nine seconds) and pictures from the second workshop:
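The spacing idea behind the auto-composition can be sketched as follows. This is a minimal Python illustration, not the actual workshop code: the function names are invented, and short sine tones stand in for the recorded voices.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def tone(freq, dur=0.5):
    """A short sine tone standing in for one recorded vocalization."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return np.sin(2 * np.pi * freq * t)

def compose(clips, spacing):
    """Overlay clips so each one enters `spacing` seconds after the previous."""
    step = int(SR * spacing)
    length = step * (len(clips) - 1) + max(len(c) for c in clips)
    mix = np.zeros(length)
    for i, clip in enumerate(clips):
        mix[i * step:i * step + len(clip)] += clip
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 0 else mix  # normalise to avoid clipping

# three stand-in "voices", composed at the three spacings used in the workshop
voices = [tone(f) for f in (220, 330, 440)]
mixes = {s: compose(voices, s) for s in (1, 4, 9)}
```

With longer recorded clips, smaller spacings make the voices overlap into a dense texture, while the nine-second spacing leaves each voice largely isolated.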