I trained a sound classification model to recognize four everyday sounds — zipper, knock, rub, and click — to explore how common auditory cues can interact with visual responses. I created a visualization that responds to each of them. I started by testing the visuals with keyboard input, and once they were ready, I connected the sound-recognition model so the sketch responds to live audio. I asked ChatGPT to help me with the visualization code.
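Since the post doesn't include the sketch itself, here is a minimal sketch of how the pieces could fit together: each of the four labels maps to a small drawing routine, the number keys 1–4 trigger the same routines during keyboard testing, and the trained model's results drive the same routines afterwards. The `SoundClassifier` interface, the placeholder drawings, and the confidence threshold are all assumptions for illustration, not the actual code or library API I used.

```typescript
// Minimal browser sketch: map the four sound labels to visual responses.
// SoundClassifier is a hypothetical stand-in for whatever model/library
// actually does the recognition (an assumption, not a real API).

type Label = "zipper" | "knock" | "rub" | "click";

interface SoundClassifier {
  // Calls back with a label and confidence whenever a sound is recognized.
  onResult(callback: (label: Label, confidence: number) => void): void;
}

const canvas = document.createElement("canvas");
canvas.width = 640;
canvas.height = 480;
document.body.appendChild(canvas);
const ctx = canvas.getContext("2d")!;

// One drawing routine per sound; simple placeholders for the real visuals.
const visuals: Record<Label, () => void> = {
  zipper: () => drawBars("#e63946"),
  knock: () => drawCircle("#457b9d"),
  rub: () => drawNoise("#2a9d8f"),
  click: () => drawDot("#f4a261"),
};

function clear(): void {
  ctx.fillStyle = "#111";
  ctx.fillRect(0, 0, canvas.width, canvas.height);
}

function drawBars(color: string): void {
  clear();
  ctx.fillStyle = color;
  for (let x = 0; x < canvas.width; x += 20) ctx.fillRect(x, 0, 10, canvas.height);
}

function drawCircle(color: string): void {
  clear();
  ctx.fillStyle = color;
  ctx.beginPath();
  ctx.arc(canvas.width / 2, canvas.height / 2, 120, 0, Math.PI * 2);
  ctx.fill();
}

function drawNoise(color: string): void {
  clear();
  ctx.fillStyle = color;
  for (let i = 0; i < 500; i++) {
    ctx.fillRect(Math.random() * canvas.width, Math.random() * canvas.height, 3, 3);
  }
}

function drawDot(color: string): void {
  clear();
  ctx.fillStyle = color;
  ctx.fillRect(canvas.width / 2 - 5, canvas.height / 2 - 5, 10, 10);
}

// Keyboard testing: keys 1–4 trigger the same visuals before the model is hooked up.
const keyMap: Record<string, Label> = { "1": "zipper", "2": "knock", "3": "rub", "4": "click" };
window.addEventListener("keydown", (e) => {
  const label = keyMap[e.key];
  if (label) visuals[label]();
});

// Once the trained model is available, its results drive the same visuals.
function connectClassifier(classifier: SoundClassifier): void {
  classifier.onResult((label, confidence) => {
    if (confidence > 0.75) visuals[label](); // threshold chosen arbitrarily for the example
  });
}
```

Keeping a single label-to-visual map means the keyboard input and the live classifier exercise exactly the same drawing code, so switching from testing to real audio doesn't require changing the visuals.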
Video Demo