

Puppets / 2024

Body-interactive audio-visual performance at Stanford CCRMA, with Libby Ye

Visualizes invisible voices through brainwave and motion data.
Explores how technology can externalize inner presence and connect minds beyond language.

In this live performance, we used an EEG headset (brainwave and motion sensors) and a Genki Ring wearable to generate real-time sound and visuals. The audio was processed in Max, with the original composition produced in Reaper, while the visuals were live-coded in Hydra and routed through OBS.
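For readers curious how sensor data can reach an audio environment like Max in real time, here is a minimal sketch, not the actual performance patch: it streams placeholder "brainwave" and motion values over OSC using the python-osc library. The addresses, port, and value ranges are illustrative assumptions, not the mappings used in the piece.

```python
# Minimal sketch (not the performance patch): forwarding wearable sensor data
# to Max over OSC so it can drive sound parameters in real time.
# Requires the python-osc package. Device readings are simulated here;
# addresses, port, and ranges are placeholders, not the actual setup.

import math
import time
from pythonosc.udp_client import SimpleUDPClient

MAX_HOST = "127.0.0.1"   # machine running the Max patch (assumption)
MAX_PORT = 7400          # OSC port the patch listens on (assumption)

client = SimpleUDPClient(MAX_HOST, MAX_PORT)

def read_sensors(t: float) -> dict:
    """Stand-in for real EEG/motion readings: returns synthetic values in 0..1."""
    return {
        "/eeg/alpha": 0.5 + 0.5 * math.sin(t * 0.3),    # slow "relaxation" band
        "/motion/tilt": 0.5 + 0.5 * math.sin(t * 2.0),  # faster gesture signal
    }

if __name__ == "__main__":
    start = time.time()
    while True:
        t = time.time() - start
        for address, value in read_sensors(t).items():
            client.send_message(address, value)  # one float per OSC address
        time.sleep(0.02)  # roughly 50 Hz update rate
```

In a setup like this, the Max patch would listen with a udpreceive object and map each incoming OSC address to a synthesis or mixing parameter; the same stream could also be mirrored to the visual layer.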

My Role: Concept direction, sound composition, system integration, and performance. 

Full Video

Interactive Multimedia

How can the body itself become an instrument for multisensory expression and communication?

