Ancestral (R)evocations - Tate Modern



I worked with artist Erika Tan to help realise her semantic sound data sonification, which was part of the Ancestral (R)evocations installation in the Tate Modern Tanks in October 2024. The work explores data sonification through a DIY diagnostic tool comprising fragmented instruments, mechanised parts, and live machine-learning feedback, manifesting as physical and virtual interventions into ancestral and archival traces via performance, sound, video, text, and computational and human labour.

The semantic sound data sonification was split into two channels: 


Museum



A machine-learning (ML) model was trained on field recordings made within archival spaces at the Tate Modern. These sounds were fed into a Realtime Audio Variational autoEncoder (RAVE) developed by IRCAM - a neural network that learns to re-synthesise the sounds, artificially, in real time. The model learns a compressed, low-dimensional representation of the high-dimensional audio input. This compressed "manifold" can be navigated through its "latent space": movement within this space modifies the audio output, corresponding to different learned representations of the archival training data. For example, one region within the latent space could correspond to the background chatter of voices. The latent space is explored, and semantically meaningful movements can then be automated.
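
By way of illustration, the sketch below shows how an exported RAVE model can be driven from Python - the model is loaded as a TorchScript file (the export format used by the RAVE toolchain), audio is encoded into the latent space, nudged, and decoded back into sound. The file name, latent offset, and audio buffer are placeholders rather than the values used in the installation.

```python
import torch

# Load a RAVE model exported as TorchScript (placeholder file name).
model = torch.jit.load("museum_archive.ts").eval()

# RAVE expects mono audio shaped (batch, channels, samples);
# this random buffer stands in for a one-second archival recording.
audio = torch.randn(1, 1, 48000)

with torch.no_grad():
    # Encode into the compressed latent representation: (1, latent_dim, frames).
    z = model.encode(audio)

    # Moving through the latent space changes the re-synthesised output:
    # a small offset on one latent dimension nudges the sound towards a
    # different learned region of the archival material.
    z_shifted = z.clone()
    z_shifted[:, 0, :] += 0.5

    # Decode back to audio: (1, 1, samples) of generated sound.
    resynthesis = model.decode(z_shifted)
```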

The Museum model takes a 5-dimensional input corresponding to the vectorised keywords tied to the current work. This vector modulates the position in the latent space - the modulation is aleatoric in nature but constrained to the semantically meaningful regions identified during the exploration phase. This creates a generative soundscape, constantly changing, yet supporting the conceptual framework of the artwork. Through this methodology, archival sonification feeds back into the museum collection.
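
As a rough sketch of this mapping (not the installation's actual implementation), the example below blends a set of latent "anchor" points - standing in for the semantically meaningful regions found during exploration - according to the 5-dimensional keyword vector, then adds a small random offset to keep the result aleatoric. All names and values are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng()

# Placeholder latent anchor points, one per keyword dimension, standing in
# for regions found during exploration (e.g. "background chatter of voices").
LATENT_DIM = 16
REGIONS = rng.normal(size=(5, LATENT_DIM))

def keyword_vector_to_latent(keywords: np.ndarray, jitter: float = 0.1) -> np.ndarray:
    """Blend the region anchors by the (non-negative) 5-D keyword weights,
    then add a small random offset so the soundscape never settles."""
    weights = keywords / (keywords.sum() + 1e-9)   # normalise the keyword weights
    anchor = weights @ REGIONS                     # weighted mix of learned regions
    return anchor + rng.normal(scale=jitter, size=LATENT_DIM)

# A work tagged mostly with the first two keywords:
z_target = keyword_vector_to_latent(np.array([0.7, 0.3, 0.0, 0.0, 0.0]))
```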



Ancestors



This channel utilises live recordings of instruments played around Tate Britain, which are processed through a custom granular synthesizer developed by GitHub user jaffasplaffa. Granular synthesis is a method of sound design that breaks an audio sample into tiny fragments, or "grains", typically ranging from 1ms to 100ms in length. Each grain can be independently manipulated—adjusting its timing, speed, phase, and frequency—allowing for a dynamic and textured reshaping of the original sound.
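
The fragment below is a minimal illustration of that grain mechanism (it is not jaffasplaffa's synthesizer): it cuts a buffer into short, windowed grains and overlap-adds them back together in their original order. Grain length and overlap are arbitrary example values.

```python
import numpy as np

def granulate(sample: np.ndarray, sr: int = 48000,
              grain_ms: float = 50.0, hop_ms: float = 25.0) -> np.ndarray:
    """Chop `sample` into windowed grains and overlap-add them in order."""
    grain_len = int(sr * grain_ms / 1000)      # e.g. 50 ms grains
    hop = int(sr * hop_ms / 1000)              # 50% overlap between grains
    window = np.hanning(grain_len)             # smooth each grain's edges
    out = np.zeros(len(sample) + grain_len)

    for start in range(0, len(sample) - grain_len, hop):
        grain = sample[start:start + grain_len] * window
        out[start:start + grain_len] += grain  # overlap-add at the original position
    return out[:len(sample)]
```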

For the data sonification process, incoming vectors are mapped to four key parameters within the granular synthesizer. One of these is the randomization of grain position, which determines the extent to which grains are played back in a different sequence from the original recording, creating a unique reordering of time. Other modulated parameters include grain phase, speed, and reverse playback. Together, these variations create a constantly evolving soundscape.
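
A hedged sketch of such a mapping is shown below, with a 4-dimensional vector driving grain position randomisation, phase, speed, and reverse playback; the parameter ranges and the crude resampling are assumptions made for illustration only, not the mapping used in the work.

```python
import numpy as np

rng = np.random.default_rng()

def granulate_from_vector(sample: np.ndarray, vec: np.ndarray,
                          sr: int = 48000, grain_ms: float = 50.0) -> np.ndarray:
    """Map a 4-D vector (values in 0..1) onto grain parameters:
    vec[0] -> position randomisation, vec[1] -> phase offset,
    vec[2] -> playback speed, vec[3] -> probability of reverse playback."""
    grain_len = int(sr * grain_ms / 1000)
    hop = grain_len // 2
    window = np.hanning(grain_len)
    out = np.zeros(len(sample) + grain_len)

    for start in range(0, len(sample) - grain_len, hop):
        # Position randomisation: read the grain from a displaced location,
        # reordering time relative to the original recording.
        max_shift = int(vec[0] * sr * 0.5)     # up to 0.5 s of displacement
        read = np.clip(start + rng.integers(-max_shift, max_shift + 1),
                       0, len(sample) - grain_len)

        # Phase: rotate the grain's start point within itself.
        grain = np.roll(sample[read:read + grain_len], int(vec[1] * grain_len))

        # Speed: crude resampling of the grain, wrapped back to grain_len.
        speed = 0.5 + vec[2]                   # 0.5x .. 1.5x playback speed
        idx = (np.arange(grain_len) * speed).astype(int) % grain_len
        grain = grain[idx]

        # Reverse playback with probability vec[3].
        if rng.random() < vec[3]:
            grain = grain[::-1]

        out[start:start + grain_len] += grain * window
    return out[:len(sample)]

# Example: heavily reordered, moderately sped-up, occasionally reversed grains.
# audio = granulate_from_vector(field_recording, np.array([0.8, 0.2, 0.6, 0.3]))
```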




