Heather Hyerin Im

Nuance + Mistake Makes Meaning, 2025



I collaborated with musicians and waacking dancers on an interactive performance <NUANCE>, using AI-driven media to reveal how machine irregularities can open new expressive and interpretive possibilities.

NUANCE extends this inquiry through a performance and media installation shown across a 75-meter LED wall. Blending disco and waacking with AI/DX-generated visuals, the piece stages a real-time interplay between human movement and algorithmic variability. By transforming AI’s noise into a visual language, NUANCE reinterprets the cultural and historical layers of street dance and explores how embodied gesture and computational perception converge to create a shared, emergent “nuance.” My accompanying presentation, <Mistake Makes Meaning>, received the Grand Prize.



My Role:      Creative Technologist, Designer, Artist
Press:           Women Times: A Young Artist Asks: What Is Art After AI?
Artist:
Heather Hyerin Im
Eddie Taesu Kim

Dance:
Da Seul Lee 
Chae Myung Jeon
Kyung Min Cho 

Music:
Chang Hyun Park

Date:
Jul - Nov 2025






Nuance, 2025







In this project, we employed three different approaches to visually express the unique moods of the music.




Approach 1





This workflow illustrates a full pipeline that transforms an audio signal into music-responsive visual output. The audio file is analyzed frame by frame with a Feature CHOP, extracting features such as amplitude, spectrum, bass/mid/high energy, envelope, and onsets.
These parameters generate a sequence of reactive base images that reflect the dynamics of the sound. Each base image is then passed, along with a text prompt, into a Stable Diffusion img2img process, which interprets the audio-driven patterns as stylistically coherent visuals, such as evolving floral forms that respond to changes in musical intensity and texture.
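The analysis stage of this pipeline can be sketched in plain Python. In the actual piece the extraction was handled by TouchDesigner's Feature CHOP; the function names, band edges, smoothing constant, and onset heuristic below are illustrative assumptions, not the production setup.

```python
import cmath
import math

def dft_magnitudes(frame):
    """Naive DFT magnitude spectrum of one audio frame (illustrative, not fast)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def analyze_frame(frame, sample_rate, prev_envelope=0.0, smoothing=0.8):
    """Extract per-frame features analogous to a Feature CHOP's outputs."""
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))  # amplitude
    spectrum = dft_magnitudes(frame)
    bin_hz = sample_rate / len(frame)

    def band_energy(lo_hz, hi_hz):
        lo, hi = int(lo_hz / bin_hz), int(hi_hz / bin_hz)
        return sum(m * m for m in spectrum[lo:hi])

    envelope = smoothing * prev_envelope + (1 - smoothing) * rms
    onset = rms > 2.0 * prev_envelope  # crude onset flag: jump above the envelope
    return {
        "amplitude": rms,
        "bass": band_energy(20, 250),
        "mid": band_energy(250, 2000),
        "high": band_energy(2000, sample_rate / 2),
        "envelope": envelope,
        "onset": onset,
    }
```

Each resulting feature dictionary would then parameterize one reactive base image, which in turn feeds the img2img step together with the text prompt.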






Approach 2

This workflow depicts a real-time pipeline that transforms incoming human-pose data into multiple styles of live-rendered visualization.
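A minimal sketch of the pose-to-renderer mapping, assuming 2D keypoints from some pose tracker (e.g. MediaPipe or OpenPose): the joint list, speed threshold, and style names below are hypothetical stand-ins for the parameters used in the live system.

```python
import math

# Hypothetical keypoint schema; a real tracker supplies its own.
JOINTS = ["head", "l_wrist", "r_wrist", "l_ankle", "r_ankle"]

def pose_to_params(curr, prev, dt, width, height):
    """Map one frame of 2D pose keypoints to renderer parameters.

    curr/prev: dicts joint -> (x, y) in pixels; dt: seconds between frames.
    Returns normalized positions plus a motion-energy scalar the renderer
    can use to switch or blend visual styles in real time.
    """
    norm = {j: (x / width, y / height) for j, (x, y) in curr.items()}
    speeds = []
    for j in JOINTS:
        (x0, y0), (x1, y1) = prev[j], curr[j]
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)  # px/s per joint
    energy = sum(speeds) / len(speeds)
    # Crude style switch: calm poses keep a soft style; fast waacking
    # arm rotations push the renderer into a high-contrast strobe style.
    style = "strobe" if energy > 400 else "soft"
    return {"points": norm, "energy": energy, "style": style}
```

In practice each style would be a separate live-rendering network, with `energy` crossfading between them rather than hard-switching.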




Approach 3






I also created a series of glitch-driven, morphing visual effects that blend IPAdapter-based image prompting with AnimateLCM’s real-time animation capabilities. By integrating multiple reference images, attention-guided fade masks, and ControlNet-driven motion cues, the system generates fluid yet deliberately unstable transitions, producing a distinctive glitch aesthetic that evolves dynamically across frames.
This pipeline allowed me to craft high-fidelity, audio-reactive glitch visuals suitable for performance environments and experimental media art.
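The scheduling idea behind these transitions can be sketched as a per-frame weighting over reference images: a smooth crossfade between consecutive references, randomly interrupted by hard jumps (the "glitch"). This is a simplified stand-in for the IPAdapter/AnimateLCM weight masks, with an assumed glitch probability, not the production configuration.

```python
import random

def glitch_fade_weights(n_frames, n_refs, glitch_prob=0.15, seed=0):
    """Per-frame blend weights over reference images, with glitch jumps.

    Returns a list of n_frames weight vectors (one weight per reference,
    each vector summing to 1). Most frames crossfade smoothly between
    neighboring references; some frames jump hard to a random reference.
    """
    rng = random.Random(seed)
    weights = []
    for f in range(n_frames):
        pos = f / max(n_frames - 1, 1) * (n_refs - 1)  # position along refs
        i = min(int(pos), n_refs - 2)
        t = pos - i
        w = [0.0] * n_refs
        if rng.random() < glitch_prob:
            w[rng.randrange(n_refs)] = 1.0   # hard glitch jump
        else:
            w[i], w[i + 1] = 1.0 - t, t      # smooth crossfade
        weights.append(w)
    return weights
```

Seeding the generator keeps the "mistakes" reproducible across rehearsals while still reading as unstable in performance.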







Mistake Makes Meaning: The Future of Generative AI, 2025





I also presented <Mistake Makes Meaning>, a talk proposing an alternative stance toward AI that foregrounds error, inconsistency, and noise as generative forces rather than technical deficits. The presentation received the Grand Prize.