Coids is an interactive fulldome installation that explores how human emotional states can drive the real-time generation of visual content. By integrating EEG (electroencephalogram) signals with AI-generated imagery, the work examines whether emotion can remain an active and meaningful agent in the age of machine-made aesthetics.
Inside the dome, participants’ brainwave activity continuously modulates a generative particle system—shaping parameters such as motion, density, and color—thereby creating an evolving visual landscape that responds to inner affective states. The resulting imagery draws on astronomical references like nebulae and cosmic structures, linking psychological experience to spatial and cosmological scales.
The title Coids is inspired by the Boids algorithm, which simulates collective behavior in flocks. Here, that logic is reimagined: not as biological motion, but as an affect-driven swarm—where particles behave according to emotional input rather than predefined rules.
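To make that reimagining concrete, here is a minimal sketch of an affect-driven Boids-style update: the classic cohesion, alignment, and separation terms remain, but their weights and the speed limit are driven by emotional input. The variables `valence` and `arousal` and every coefficient are placeholders for illustration; the installation's actual rules and mappings are not published.

```python
import numpy as np

# Hypothetical sketch: a Boids-style swarm whose steering weights come
# from live emotional input instead of fixed constants.

N = 512
pos = np.random.rand(N, 2)            # particle positions in a unit square
vel = np.random.randn(N, 2) * 0.01    # small random initial velocities

def step(pos, vel, valence, arousal, dt=1.0):
    # Classic Boids terms: cohesion, alignment, pairwise separation.
    cohesion = pos.mean(axis=0) - pos
    alignment = vel.mean(axis=0) - vel
    diff = pos[:, None, :] - pos[None, :, :]          # (N, N, 2) offsets
    dist2 = (diff ** 2).sum(axis=2) + 1e-9
    near = (dist2 < 0.01)[..., None]                  # neighbors within r = 0.1
    separation = (diff / dist2[..., None] * near).sum(axis=1)

    # Affect-driven weights (illustrative): positive valence tightens the
    # flock; higher arousal raises the speed limit and loosens alignment.
    w_coh = 0.01 * (1.0 + valence)
    w_sep = 0.002
    w_ali = 0.05 * max(0.0, 1.0 - arousal)

    vel = vel + dt * (w_coh * cohesion + w_sep * separation + w_ali * alignment)
    cap = 0.02 * (1.0 + arousal)                      # arousal raises max speed
    speed = np.maximum(np.linalg.norm(vel, axis=1, keepdims=True), 1e-9)
    vel = np.where(speed > cap, vel / speed * cap, vel)
    return pos + dt * vel, vel

pos, vel = step(pos, vel, valence=0.3, arousal=0.7)
```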
Rather than symbolizing emotion through static motifs, Coids proposes a structural mode of visualization—where feelings flow continuously through code, forming intimate, ambient interactions. It is an immersive feedback system that makes the invisible—emotion—tangible through space, light, and motion.
Coids is a single-user, EEG-driven interactive fulldome installation presented in a 15-meter-diameter hemispherical dome theater. The system uses the Muse 2 headband as the primary input device, capturing live brainwave data from two frontal channels (AF7 and AF8). The signal is streamed via the Mind Monitor app into TouchDesigner, where a custom Python-based FFT analysis extracts alpha, beta, and theta band power in real time.

Emotional states are computed according to established valence–arousal models: valence is estimated via frontal alpha asymmetry, and arousal from the beta–theta power ratio. These values are mapped both to visual prompts for a fine-tuned Stable Diffusion model (accessed via API) and to parameters of a custom real-time GLSL particle system built in TouchDesigner. The AI-generated cosmic textures come from a LoRA-tuned Stable Diffusion model trained on NASA's publicly available astronomical imagery. Visual parameters such as particle velocity, dispersion, and color are continuously modulated by the viewer's emotional input.

The final output is rendered in polar coordinates and projected at 4K resolution across the 360° fulldome surface. Due to hardware constraints, the installation is designed for sequential interaction: one user engages with the system while others observe. A temporal smoothing algorithm ensures continuity and perceptual stability in the real-time visuals.
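To make the pipeline concrete, the sketches below walk through its stages under stated assumptions. First, band-power extraction. The following assumes a 256 Hz sampling rate (the Muse 2's nominal rate), a one-second window from a single channel, and textbook band edges; the installation's actual TouchDesigner script is unpublished, so window length, cutoffs, and normalization here are assumptions.

```python
import numpy as np

FS = 256  # Hz; nominal Muse 2 sampling rate (assumption for this sketch)

# Conventional EEG band edges in Hz; Coids' exact cutoffs are not published.
BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_powers(samples: np.ndarray) -> dict:
    """Mean power per band from one channel's window, via a plain FFT."""
    window = samples * np.hanning(len(samples))   # taper to reduce leakage
    spectrum = np.abs(np.fft.rfft(window)) ** 2   # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / FS)
    return {
        name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
        for name, (lo, hi) in BANDS.items()
    }

# One second of fake data stands in for a live AF7 buffer.
powers = band_powers(np.random.randn(FS))
```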
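Given per-channel band powers, the two stated measures reduce to small formulas: frontal alpha asymmetry for valence and the beta–theta power ratio for arousal. The source names only the measures, so the log-ratio form and channel averaging below follow common EEG affect-estimation conventions rather than the installation's actual code.

```python
import numpy as np

def valence_arousal(af7: dict, af8: dict) -> tuple:
    """Map band powers from the two frontal channels to (valence, arousal).

    af7/af8 are dicts of 'theta', 'alpha', 'beta' mean powers, e.g. the
    output of band_powers() above. Formulas are conventional, not Coids'
    published ones.
    """
    eps = 1e-12
    # Frontal alpha asymmetry: alpha power is inversely related to cortical
    # activation, so relatively higher right-side alpha (AF8) implies
    # stronger left-hemisphere activation, which the asymmetry literature
    # associates with positive (approach-related) valence.
    valence = np.log(af8["alpha"] + eps) - np.log(af7["alpha"] + eps)
    # Arousal as the beta-theta power ratio, averaged over both channels.
    beta = 0.5 * (af7["beta"] + af8["beta"])
    theta = 0.5 * (af7["theta"] + af8["theta"])
    arousal = beta / (theta + eps)
    return float(valence), float(arousal)
```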
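Before driving the visuals, the affect values need the described temporal smoothing and a mapping onto particle parameters. The text does not say which smoothing algorithm is used; an exponential moving average is one standard way to get perceptual stability from a noisy control signal, and the mapping ranges below are invented purely for illustration.

```python
import numpy as np

class SmoothedAffect:
    """Exponential moving average over (valence, arousal).

    Coids' actual smoothing algorithm is unpublished; an EMA is a common
    choice for stabilizing real-time control signals.
    """
    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha          # smaller alpha = smoother, slower response
        self.state = np.zeros(2)    # running (valence, arousal)

    def update(self, valence: float, arousal: float) -> np.ndarray:
        sample = np.array([valence, arousal])
        self.state += self.alpha * (sample - self.state)
        return self.state

def to_particle_params(valence: float, arousal: float) -> dict:
    """Illustrative mapping onto the modulated visual parameters."""
    return {
        "velocity": np.interp(arousal, [0.0, 3.0], [0.2, 2.0]),    # faster when aroused
        "dispersion": np.interp(valence, [-1.0, 1.0], [1.5, 0.5]), # tighter when positive
        "hue": np.interp(valence, [-1.0, 1.0], [0.6, 0.05]),       # cool blue to warm orange
    }

smoother = SmoothedAffect(alpha=0.05)
v, a = smoother.update(valence=0.4, arousal=1.8)
params = to_particle_params(v, a)
```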
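Finally, rendering "in polar coordinates" refers to the standard domemaster convention for fulldome projection, in which the square frame encodes azimuth as angle and elevation as radius around the image center. Whether Coids uses exactly the linear (equidistant-fisheye) radius model below is an assumption; the sketch only makes the geometry concrete.

```python
import numpy as np

def domemaster_direction(x: float, y: float):
    """Map normalized domemaster pixel coords (x, y in [-1, 1]) to a dome
    direction (azimuth, elevation) in radians. Standard convention: image
    center = zenith, unit circle = horizon; pixels outside are unused.
    The linear radius-to-elevation model is an assumption.
    """
    r = np.hypot(x, y)
    if r > 1.0:
        return None                         # outside the circular dome image
    azimuth = np.arctan2(y, x)              # angle around the dome
    elevation = (1.0 - r) * (np.pi / 2.0)   # zenith (r=0) down to horizon (r=1)
    return azimuth, elevation

# Example: a pixel halfway out along the +x axis sits 45 degrees above horizon.
print(domemaster_direction(0.5, 0.0))       # (0.0, pi/4)
```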