[Demo video: 3aac5c21ac15c91e17af749df0a18d67.mp4]
After experimenting with several function demos from ml5.js, I found the bounding box tracking feature the most intriguing. I was particularly drawn to how it could extract subtle facial data in real time, such as the width or movement of specific facial parts. After deciding to focus on this feature, I began brainstorming different ways to use these dynamic facial measurements as an input method for generative design.
Among the ideas I explored, I was most drawn to the mouth’s bounding box. I wanted to transform its movement into a visual form that felt organic and calming. I’ve always been fascinated by vase patterns, so I decided to create a project where users can “grow” a vase with their mouth movements.
In this project, the program tracks the width of the mouth for three seconds. Each frame’s mouth width is translated into a white horizontal line, and these lines are stacked on top of one another in real time, mirrored around the vertical center. As the frames accumulate, they form a symmetric, vase-like silhouette: a digital artifact shaped entirely by the user’s subtle facial motion.
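A simplified sketch of this capture-and-stack loop is below. It assumes ml5’s faceMesh model reports a `lips` part with a bounding-box `width` on each detected face (as in the current ml5.js API); the variable names and scale factors are illustrative rather than my exact code.

```javascript
// Requires p5.js and ml5.js loaded via script tags.
let faceMesh, video;
let faces = [];
let mouthWidths = [];          // one recorded width per captured frame
let startTime;
const CAPTURE_MS = 3000;       // the three-second capture window

function preload() {
  faceMesh = ml5.faceMesh({ maxFaces: 1 });
}

function setup() {
  createCanvas(480, 640);
  video = createCapture(VIDEO);
  video.size(480, 640);
  video.hide();
  // Continuously detect faces; the callback receives the faces array.
  faceMesh.detectStart(video, (results) => { faces = results; });
  startTime = millis();        // capture window starts at setup
}

function draw() {
  background(0);
  const elapsed = millis() - startTime;

  // While capturing, record the mouth's bounding-box width each frame.
  if (elapsed < CAPTURE_MS && faces.length > 0) {
    mouthWidths.push(faces[0].lips.width);
  }

  // Each recorded width becomes one white horizontal line, mirrored
  // around the vertical center and stacked downward: a vase profile.
  stroke(255);
  for (let i = 0; i < mouthWidths.length; i++) {
    const w = mouthWidths[i] * 2;          // scale factor is arbitrary
    const y = 50 + i;                      // one row per frame
    line(width / 2 - w / 2, y, width / 2 + w / 2, y);
  }
}
```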
I ultimately chose to reveal the pattern only after the three-second capture finishes, which I found more intriguing. I also added a 2D-to-3D mode, but I had trouble switching back to the 2D drawing canvas, probably because of a conflict between the default canvas renderer and the WebGL renderer. I tried several approaches, but none of them worked.
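One common p5.js workaround for this renderer conflict, sketched below under the assumption that the 3D view is a simple surface of revolution built from the captured widths, is to keep the main canvas in WEBGL the whole time and draw the 2D version into an offscreen `createGraphics()` buffer; switching back to 2D then just means compositing that buffer with `image()`. The names `mouthWidths` and `show3D` are illustrative, not from my original project.

```javascript
let pg;                 // offscreen 2D buffer (default renderer)
let show3D = false;     // toggled by a key press below
let mouthWidths = [];   // placeholder for the captured widths

function setup() {
  createCanvas(480, 640, WEBGL);   // main canvas stays WEBGL throughout
  pg = createGraphics(480, 640);   // separate 2D drawing surface

  // Dummy data standing in for a real three-second capture.
  for (let i = 0; i < 200; i++) {
    mouthWidths.push(40 + 20 * sin(i * 0.05));
  }
}

function draw() {
  background(0);
  if (show3D) {
    // 3D view: stacked horizontal rings whose radii follow the widths.
    rotateY(frameCount * 0.01);
    stroke(255);
    noFill();
    for (let i = 0; i < mouthWidths.length; i += 5) {
      push();
      translate(0, i - mouthWidths.length / 2, 0);
      rotateX(HALF_PI);              // lay each ring flat
      torus(mouthWidths[i], 1);
      pop();
    }
  } else {
    // 2D view: draw into the offscreen buffer, then paste it on.
    pg.background(0);
    pg.stroke(255);
    for (let i = 0; i < mouthWidths.length; i++) {
      const w = mouthWidths[i] * 2;
      pg.line(pg.width / 2 - w / 2, 50 + i, pg.width / 2 + w / 2, 50 + i);
    }
    image(pg, -width / 2, -height / 2);  // WEBGL origin is the center
  }
}

function keyPressed() {
  if (key === ' ') show3D = !show3D;     // space toggles 2D/3D
}
```

Because the canvas is never destroyed and recreated, toggling between the two views is instant, and all the 2D drawing code keeps its familiar top-left coordinate system inside the buffer.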