

Puppets / 2024

Body-interactive Audio-Visual Performance at Stanford CCRMA, with Libby Ye

Visualizes invisible voices through brainwave and motion data.
Explores how technology can externalize inner presence and connect minds beyond language.

In this live performance, we used an EEG headset (brainwave and motion sensors) and a Genki Ring wearable to generate real-time sound and visuals. The sensor parameters were mapped and processed in Max and Wekinator, with the original sounds produced in Reaper.
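As a rough illustration of this control flow (not the actual performance patch), here is a minimal Python sketch of how sensor features could be forwarded to Wekinator over OSC and how its mapped outputs could be read back. It relies on Wekinator's default OSC conventions (features sent to /wek/inputs on port 6448, mapped parameters returned on /wek/outputs on port 12000); the feature values and the python-osc library are assumptions for illustration, not details from the piece.

```python
# Minimal OSC sketch: forward a (hypothetical) sensor feature vector to
# Wekinator and print the mapped control parameters it sends back.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Wekinator's default input port/address for incoming feature vectors.
wek_in = SimpleUDPClient("127.0.0.1", 6448)

# Placeholder feature vector, e.g. smoothed EEG band power and ring tilt.
features = [0.42, 0.13, 0.77]
wek_in.send_message("/wek/inputs", features)

# Wekinator's default output port/address for mapped parameters, which in a
# setup like this would go on to drive sound in Max and the live-coded visuals.
def on_outputs(address, *params):
    print(address, params)

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_outputs)
BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()
```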

The visuals were live-coded in CHydra, driven by the same parameters, and routed through OBS.

My Role: Concept direction, sound composition, system integration, and performance. 

Full Video

Interactive Multimedia

How can the body itself become an instrument for multisensory expression and communication?

