Overview
For this week’s assignment, I used a KNN model to build an interactive musical program. Users can trigger songs from different musicals with different body gestures. For instance, when they cover half of their face, the music changes to The Phantom of the Opera; when they pose their hands like a cat’s claws, it changes to “Memory” from Cats; and when they wear anything green, it changes to “Somewhere Over the Rainbow” from The Wizard of Oz. The model also allows users to train their own dataset, which improves the accuracy of the output.
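Here is a rough sketch of the core logic, assuming ml5.js’s KNNClassifier with PoseNet keypoints as the input features; the gesture labels, song files, and element IDs are placeholders of my own, not the exact project code:

```typescript
// Minimal sketch: map KNN gesture labels to songs. Assumes ml5.js, which has
// no official TypeScript typings, so it is declared loosely here.
declare const ml5: any;

const video = document.querySelector<HTMLVideoElement>("#webcam")!;
const knn = ml5.KNNClassifier();

// Hypothetical label -> song mapping mirroring the gestures described above.
const songs: Record<string, HTMLAudioElement> = {
  phantom: new Audio("phantom_of_the_opera.mp3"),
  cats: new Audio("memory.mp3"),
  oz: new Audio("somewhere_over_the_rainbow.mp3"),
};

let current: HTMLAudioElement | null = null;

// PoseNet supplies the feature vector (body keypoint positions) for the KNN.
const poseNet = ml5.poseNet(video, () => console.log("model ready"));
let lastPose: number[] = [];
poseNet.on("pose", (results: any[]) => {
  if (results.length > 0) {
    lastPose = results[0].pose.keypoints.flatMap((kp: any) => [
      kp.position.x,
      kp.position.y,
    ]);
  }
});

// Called while the user holds a gesture, so they can train their own dataset.
function addExample(label: string): void {
  if (lastPose.length > 0) knn.addExample(lastPose, label);
}

// Classify the current pose and switch songs whenever the label changes.
function classify(): void {
  if (lastPose.length === 0) return;
  knn.classify(lastPose, (err: Error, result: { label: string }) => {
    if (err || !result) return;
    const next = songs[result.label];
    if (next && next !== current) {
      current?.pause();
      current = next;
      current.play();
    }
  });
}
```

Because a KNN simply matches the live input against whatever examples it has stored, letting each user record their own gesture examples is exactly what makes the recognition more accurate for that user.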
Demo
Technical Problem
At first, I wanted to use both the users’ speech and movement as inputs to trigger different outputs, but I had some difficulty combining the two models. Still, I think it would be even cooler if users could interact with the musicals through both their singing and their dancing.
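As a sketch of what that combination might look like, ml5’s soundClassifier (with the pretrained SpeechCommands18w model) could run alongside the pose KNN, with a spoken keyword gating playback. This fusion rule is one hypothetical design, not what I actually built:

```typescript
// Hypothetical fusion sketch: a speech-command classifier runs alongside the
// pose KNN, and a spoken keyword gates what the gesture selects (assumes ml5.js).
declare const ml5: any;

let spokenWord = "";

// SpeechCommands18w recognizes 18 short words such as "go", "stop", and "up";
// classify() keeps firing the callback as the user speaks.
const sound = ml5.soundClassifier("SpeechCommands18w", () => {
  sound.classify((err: Error, results: { label: string }[]) => {
    if (!err && results.length > 0) spokenWord = results[0].label;
  });
});

// Combine modalities: the gesture label picks the song, but playback only
// starts or stops when the user also says "go" or "stop".
function fuse(gestureLabel: string, song: HTMLAudioElement): void {
  if (spokenWord === "go") song.play();
  if (spokenWord === "stop") song.pause();
}
```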