Puppets / 2024
Body-interactive Audio-Visual Performance
at Stanford CCRMA, with Libby Ye
Visualizes invisible voices through brainwave and motion data.
Explores how technology can externalize inner presence and connect minds beyond language.
In this live performance, we used an EEG headset (brainwave and motion sensors) and a Genki Ring wearable to generate real-time sound and visuals. The incoming sensor parameters were processed in Max and Wekinator, and the original sounds were produced in Reaper.
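To illustrate the data flow, here is a minimal Python sketch (using the python-osc library) that streams a hypothetical feature vector of EEG band powers and ring-motion values to Wekinator's default OSC input (port 6448, address /wek/inputs). In the actual performance this routing was handled inside Max; the feature names and values here are placeholders.

```python
# Minimal sketch (not the actual Max patch): stream a hypothetical feature
# vector of EEG band powers and ring-motion values to Wekinator over OSC.
# Wekinator listens for inputs on port 6448 at /wek/inputs by default.
import time
import random  # stand-in for real sensor readings

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)

def read_features():
    # Placeholder: in performance these values would come from the EEG
    # headset (e.g., alpha/beta band power) and the Genki Ring (tilt, motion).
    return [random.random() for _ in range(5)]

while True:
    client.send_message("/wek/inputs", read_features())
    time.sleep(0.05)  # ~20 Hz update rate
```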
The visuals were live-coded in Hydra, driven by the same parameters, and composited through OBS.
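Continuing the same assumptions, the sketch below listens for Wekinator's mapped outputs (its default /wek/outputs address on port 12000) and relays them toward the visual layer on a hypothetical port 3333; the relay address and port are illustrative, not the setup used in the show.

```python
# Minimal sketch: receive Wekinator's mapped outputs and relay them to the
# visual layer. Port 3333 and /visuals/params are hypothetical; /wek/outputs
# on port 12000 is Wekinator's default output destination.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

visuals = SimpleUDPClient("127.0.0.1", 3333)  # hypothetical visual-layer port

def on_outputs(address, *values):
    # Forward the mapped control values (e.g., brightness, feedback amount)
    # to whatever is rendering the visuals.
    visuals.send_message("/visuals/params", list(values))

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_outputs)

server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
server.serve_forever()
```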
My Role: Concept direction, sound composition, system integration, and performance.
Full Video




