Immersive
Interactive
Installation
TouchDesigner
MediaPipe
VCV Rack 2
Audio Visage is an interactive music installation that explores the intersection of human expression and technology. Users control a procedurally generated soundtrack using only facial movements, while a large display shows a dark, glitch-processed feed of their own webcam image. As audio designer, I led the integration of facial motion capture with a custom modular synth system, creating an immersive experience that prompts reflection on our relationship with technology.
Our approach centered on creating an experience that was both engaging and unsettling. By limiting interaction to facial movements, we compelled users to interact with technology in an unconventional way.
The ominous aesthetic and glitch effects were deliberately chosen to evoke a dystopian sci-fi atmosphere, encouraging viewers to question the rapid advancement of technology. We mapped specific facial movements to musical parameters, creating an intuitive yet alien method of musical expression.
The technical implementation required seamless integration of multiple systems: VCV Rack 2 for the modular synth, MediaPipe for facial tracking, and TouchDesigner for the visuals and for tying the whole project together.
Learning a new program, VCV Rack 2, I developed a custom modular synth patch with five layers, including a drone, glitch effects, and randomly generated leads. The patch incorporated the facial-tracking signals to control elements of each layer.
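The "randomly generated leads" layer was built from VCV Rack 2 modules, but the underlying idea, picking notes at random from a fixed scale, can be sketched in a few lines of Python. The scale, root note, and function names here are illustrative assumptions, not the actual patch:

```python
import random

# Hypothetical sketch of a randomly generated lead line. The real version
# lives in a VCV Rack 2 patch (e.g. a random voltage source quantized to a
# scale); this only illustrates the note-selection logic.

C_MINOR_PENTATONIC = [0, 3, 5, 7, 10]  # semitone offsets within one octave

def random_lead(length, root=60, octaves=2, seed=None):
    """Return a list of MIDI note numbers drawn from the scale.

    root=60 is middle C; notes are picked uniformly from the scale
    across the given number of octaves.
    """
    rng = random.Random(seed)
    notes = [root + 12 * octave + step
             for octave in range(octaves)
             for step in C_MINOR_PENTATONIC]
    return [rng.choice(notes) for _ in range(length)]

phrase = random_lead(8, seed=42)  # an eight-note phrase, reproducible via seed
```

Quantizing to a scale is what keeps the randomness musical: any value the random source produces still lands on a consonant pitch.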
We used MediaPipe's face tracking, accessed through TouchDesigner, to translate facial movements into control signals, which were then routed to VCV Rack 2 over the OSC protocol. Each facial movement corresponded to a specific musical parameter: mouth opening controlled drone intensity, head rotation affected filter frequencies, lateral movement panned the audio, and blinks triggered glitch effects.
Working closely with our visual team, I composited the different visual elements to ensure the generative visuals responded to the same control signals, creating a cohesive audiovisual experience.
The project demonstrated our group's ability to integrate complex systems into a seamless, fully functional interactive experience while maintaining a strong conceptual foundation. Its technical implementation and thematic exploration pushed our boundaries in real-time audio processing and motion capture.
Core Visuals: JamieLynn Gallagher
Asset Design: Jennifer Saloutenko & Nyasia Revill
Audio: Brandon Riley