Polyphonic Embodiment(s)

2019-2022
Polyphonic Embodiment(s) is a project produced in collaboration with Nestor Pestana, with AI development by Sitraka Rakotoniaina.

This project aimed to uncover how AI transcribes understandings of voice onto notions of identity by recreating a voice recognition AI that claims to be able to tell what your face looks like from the sound of your voice.

As the project title suggests, the work invites people to consider the multi-dimensional nature of voice and vocal identity from an embodied standpoint. It also calls for contemplation of the relationships between voice and identity, and of how individuals hold multiple or evolving versions of identity. The collaboration with the custom-made AI software creates a feedback loop: a way to reflect on how people’s vocal sounding is “seen” by AI, and to contest the way voices are currently heard, comprehended and utilised by AI, and indeed by the AI industry.
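The custom software itself is not published here, but voice-to-face models of this kind (Speech2Face is the best-known research example) generally work in two stages: a short recording is encoded into a fixed-size speaker embedding, and that embedding is decoded into a canonical face image. The sketch below shows only that two-stage shape of the feedback loop; the encoder and decoder are hypothetical stand-ins, not the models used in this project.

```python
import numpy as np

# Hypothetical stand-ins for the two learned models in a voice-to-face
# pipeline. Neither reflects the project's actual (unpublished) software;
# they exist only so the two-stage loop can be run end to end.

def voice_encoder(waveform: np.ndarray) -> np.ndarray:
    """Map a mono waveform to a fixed-size 'speaker embedding'.

    Real systems use a trained network over a spectrogram; this
    placeholder pools coarse spectral statistics instead.
    """
    spectrum = np.abs(np.fft.rfft(waveform))
    bands = np.array_split(spectrum, 128)   # 128 coarse frequency bands
    return np.array([band.mean() for band in bands])

def face_decoder(embedding: np.ndarray, size: int = 64) -> np.ndarray:
    """Map the embedding to a grayscale 'face' image.

    A real decoder is a trained generative network; this placeholder
    projects the embedding through a fixed random matrix to pixels.
    """
    rng = np.random.default_rng(0)          # fixed seed: deterministic output
    projection = rng.standard_normal((size * size, embedding.size))
    image = projection @ embedding
    image = (image - image.min()) / (image.max() - image.min() + 1e-9)
    return image.reshape(size, size)

# The feedback loop: vocal sound in, a machine-imagined "face" out.
# A 220 Hz sine wave stands in for one second of voice at 16 kHz.
waveform = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))
face = face_decoder(voice_encoder(waveform))
print(face.shape)  # (64, 64)
```

In this framing, the DIY devices documented below act on the input waveform: any change to the voice shifts the embedding, and with it the “face” the decoder returns.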

Material Experiments:

The video documentation below shows ‘facial’ images produced by the voice-to-face recognition AI when activated by my voice, modified with simple DIY devices. The speculative point of the project is not to suggest that people should modify their voices when interacting with AI communication systems. Rather, the simple devices work with the body’s architecture and exaggerate its materiality, treating it as a flexible instrument for exploring vocal potential. In turn, this sheds light on the normative assumptions contained within AI’s readings of voice and its relationships to facial image and identity construction.

Testing the devices with the AI:

Supported by Techne SS&WPF Fund