I trained a sound classification model to recognize four everyday sounds (zipper, knock, rub, and click) to explore how common auditory cues can interact with visual responses. I created a visualization that responds to each of them. I started by testing the visuals with the keyboard as input, and once they were ready I swapped in sound recognition. I asked ChatGPT to help me with the visualization code.
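Here is a minimal sketch of the keyboard-testing stage. The key bindings and the four visuals are placeholders I'm assuming for illustration; the actual shapes and colors in my sketch differ.

```js
// Keyboard stand-in for the four sound classes (placeholder visuals).
let current = 'none';

function setup() {
  createCanvas(600, 400);
  noStroke();
}

function draw() {
  background(20);
  // One placeholder visual per sound class
  if (current === 'zipper') {
    // vertical streaks suggesting a zip motion
    for (let x = 0; x < width; x += 20) rect(x, 0, 4, height);
  } else if (current === 'knock') {
    // a pulsing circle
    circle(width / 2, height / 2, 100 + 50 * sin(frameCount * 0.2));
  } else if (current === 'rub') {
    // soft horizontal bands
    for (let y = 0; y < height; y += 30) rect(0, y, width, 10);
  } else if (current === 'click') {
    // a small square that jumps to a random spot each frame
    square(random(width), random(height), 20);
  }
}

// Keys 1-4 stand in for the four sounds while testing without the mic
function keyPressed() {
  if (key === '1') current = 'zipper';
  if (key === '2') current = 'knock';
  if (key === '3') current = 'rub';
  if (key === '4') current = 'click';
}
```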

p5.js

Video Demo

af56cf01f7e3d914633629ffc83aa7ee.mp4

Teachable Machine model: https://teachablemachine.withgoogle.com/models/jWBZBMcaL/
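For the sound-recognition stage, the sketch below shows one way the linked Teachable Machine model could drive the same label-to-visual mapping. It assumes the ml5.js library (0.x API) is included alongside p5.js, and that the model's class labels match the names used above ('zipper', 'knock', 'rub', 'click'); treat it as a sketch, not my exact code.

```js
// Replace the keyboard handler with a Teachable Machine sound classifier.
let classifier;
let current = 'none';
const modelURL = 'https://teachablemachine.withgoogle.com/models/jWBZBMcaL/';

function preload() {
  // Teachable Machine audio models load via their model.json
  classifier = ml5.soundClassifier(modelURL + 'model.json');
}

function setup() {
  createCanvas(600, 400);
  // Start continuous classification from the microphone
  classifier.classify(gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // Results are sorted by confidence; keep the top label
  current = results[0].label;
}

function draw() {
  background(20);
  // ...same label-to-visual mapping as in the keyboard version;
  // shown here as text for brevity.
  fill(255);
  textAlign(CENTER, CENTER);
  textSize(32);
  text(current, width / 2, height / 2);
}
```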