Acoustic Ecology of an AI System
Acoustic Ecology of an AI System critiques how AI seeks to be recognised through the sound of the synthesised voices used in conversational AI systems. These voices currently aim to be heard as human, yet they are designed around very narrow portrayals of human beings; in particular, they present harmful representations of women. The project draws on Kate Crawford and Vladan Joler’s 2018 project Anatomy of an AI System, which demonstrates the vast network of entangled non-renewable materials, data and human labour required for these conversational interactions to operate. My aim is to reattach disembodied, acousmatic synthesised voices to space, time and architecture, using designed acoustics and sound design. In doing so, I counter the misguided representation and intended recognition of synthesised voices by highlighting the materiality and multiplicity of what these voices actually are. I add richness and complexity to understandings of voice, opening other narrative possibilities and speculating on alternative ways to vocalise synthesised voices.
This work was presented as part of Research and Waves’ new online platform, Attune, alongside other works questioning ‘Can words be neutral?’
You can experience the interactive audio and read the accompanying short essay here: https://attune.researchandwaves.net/acoustic-ecology-of-an-ai-system.html
Special thanks to Henrik Nieratschker
Sample Audio Clips:
Source Audio Clip - Example of ‘Babble’, created by Google WaveNet
Manipulated ‘Babble’ Audio Clips