How can machine learning support people’s existing creative practices? Expand people’s creative capabilities?
- Watching Fiebrink really opened up my scope for machine learning design. Her creation, Wekinator, made the machine learning process seem so easy, especially compared to the complex, mathematical coding that would be needed without a machine learning algorithm. I think machine learning can be much more accessible to users of all kinds of backgrounds: it is more hands-on, with a learn-as-you-play-around style (in my opinion). Hard coding everything, on the other hand, can be a learning block (also in my opinion, haha). I really resonated with what Fiebrink said while doing the Wekinator demo. As she built it up, she pointed out that what she was doing at that moment was cool, but already possible with other types of interfaces. Then she added another layer with Blotar, turning hand gestures into an instrument that makes a fun, bizarre sound no other instrument can make.
- I also liked the example of the tree bark instrument; it reminded me of a performance piece by Spencer (ITP class of 2022) that was shown at the last spring show. I really love how machine learning can help humans set aside the STEM aspects (not entirely) and allow users to emotionally, physically, and mentally engage with their projects.
- I also loved this talk because it made me want to use the software that Fiebrink mentioned for one of my class finals (maybe two!).
Dream up and design the inputs and outputs of a real-time machine learning system for interaction and audio/visual performance. This could be an idea well beyond the scope of what you can do in a weekly exercise.
- As I mentioned before, I think Fiebrink’s programs do a really nice job of making machine learning approachable, and the talk made me want to use her programs for one of my finals (for my ITP class Alter Egos).
- As an input, it would be cool to use my hands as the manipulator. Although this seems super cliché, the staging for the Alter Egos performance will be pretty dark and I will have a costume on, which means I can’t use my face as a source of input. It might be interesting to have body posture/movement as an input as well, but realistically speaking, I’m not sure how accurately the camera could capture those movements in the dark. I also thought about doing eye tracking, but I want to wear white lenses that completely cover my irises, so I’m not sure if that would be a good idea. Maybe I could do open eyes vs. closed eyes?
- As an output, I think audio would be cool and complementary to my class (because we are learning how to do audio/visual manipulation). However, if I could get it to work, I think having the lighting change as an output would be a really cool concept as well.
- Just like a theremin, I also think it’d be fascinating to have each hand control a different output. For example, the left hand could control volume, while the right hand controls pitch (a rough sketch of this idea is below).
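To make the theremin idea concrete, here is a minimal sketch of what I’m imagining, assuming the current ml5.js handPose API and p5.sound. The split-screen rule (left half of the canvas = volume, right half = pitch), the frequency range, and the variable names are all placeholders I made up for illustration, not a finished design:

```js
// Two-hand "theremin" sketch: each hand's height controls a parameter.
// Assumes ml5.js handPose (current API) and p5.sound are loaded.
let handPose, video, osc;
let hands = [];

function preload() {
  handPose = ml5.handPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  handPose.detectStart(video, gotHands);
  osc = new p5.Oscillator('sine');
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture before audio can start
  osc.start();
}

function gotHands(results) {
  hands = results;
}

function draw() {
  image(video, 0, 0, width, height);
  noStroke();
  fill(255, 0, 255);
  for (let hand of hands) {
    let wrist = hand.keypoints[0]; // keypoint 0 is the wrist
    circle(wrist.x, wrist.y, 20);
    if (wrist.x < width / 2) {
      // hand on the left half: height controls volume (higher hand = louder)
      osc.amp(map(wrist.y, 0, height, 1, 0), 0.1);
    } else {
      // hand on the right half: height controls pitch
      osc.freq(map(wrist.y, 0, height, 880, 110), 0.1);
    }
  }
}
```

Using the wrist keypoint keeps things simple; fingertip positions (or trained gesture classes, Wekinator-style) could be swapped in later for finer control.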
Create your own p5+ml5 sketch that trains a model with real-time interactive data. This can be a prototype of the aforementioned idea or a simple exercise where you run this week’s code examples with your own data
‘automatic’ camera that captures a photo when the user makes a ‘V’ sign:
https://editor.p5js.org/jiwonyu/sketches/RB0AYLhUo
- I thought it would be cute to have a camera that detects when a user makes the ‘V’ sign, a common hand gesture in pictures, because it’s annoying to set up a self-timer. A rough sketch of the approach is below.
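Roughly, the idea is to train an ml5 neural network on handPose keypoints (collected with the 1 and 2 keys, like in the class example) and save the canvas when it sees the ‘v’ label. This is only a sketch under the assumption of the current ml5.js handPose and neuralNetwork APIs; the variable names, key bindings, and cooldown are mine:

```js
// "V sign camera": classify hand keypoints and save a photo on a 'v' result.
// Assumes ml5.js handPose + neuralNetwork (current API).
let handPose, video, classifier;
let hands = [];
let isTrained = false;
let isClassifying = false;
let lastShot = 0;

function preload() {
  handPose = ml5.handPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  handPose.detectStart(video, gotHands);
  classifier = ml5.neuralNetwork({ task: 'classification' });
}

function gotHands(results) {
  hands = results;
}

// flatten the 21 hand keypoints into [x0, y0, x1, y1, ...]
function keypointInputs() {
  let inputs = [];
  for (let kp of hands[0].keypoints) {
    inputs.push(kp.x, kp.y);
  }
  return inputs;
}

function keyPressed() {
  if (hands.length === 0) return;
  if (key === '1') classifier.addData(keypointInputs(), ['v']);     // V-sign samples
  if (key === '2') classifier.addData(keypointInputs(), ['other']); // everything else
  if (key === 't') {
    classifier.normalizeData();
    classifier.train({ epochs: 50 }, () => { isTrained = true; });
  }
}

function draw() {
  image(video, 0, 0, width, height);
  if (isTrained && hands.length > 0 && !isClassifying) {
    isClassifying = true;
    classifier.classify(keypointInputs(), gotResults);
  }
}

// note: older ml5 versions pass (error, results) to this callback instead
function gotResults(results) {
  isClassifying = false;
  // take a picture when a V sign is detected, at most once every 3 seconds
  if (results[0].label === 'v' && millis() - lastShot > 3000) {
    saveCanvas('v-sign-photo', 'png');
    lastShot = millis();
  }
}
```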
my attempt to make data collection more hands-free, instead of pressing 1 and 2:
https://editor.p5js.org/jiwonyu/sketches/u6k0FIEP9
- I really couldn’t think of an elegant way to collect data, so I thought I could switch which category I was collecting based on whether there was noise (using p5.sound volume) or not. For example, if the noise level is below 10 (mapped between 0 and 100), the sketch would collect category 1 data (whatever that may be), while if the noise level is above 10, it would collect category 2 data (this can also be anything). In my imagination, category 1 would become my ‘others’ category, while category 2 would be my specific data group (which I haven’t decided on yet).
- However, nothing was being read, and sometimes when I ran the sketch the browser did not even ask to use the mic, so I wasn’t sure how to troubleshoot that. A small sketch of what I was going for (and the likely fix) is below.
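From what I can tell, the mic problem is usually that the AudioContext has to be resumed from a user gesture, and the permission prompt only appears once the mic is actually started; starting the mic inside mousePressed tends to fix both. Here is a minimal sketch of the volume-based labeling idea, assuming p5.sound’s p5.AudioIn (the threshold of 10 and the category names are placeholders from my description above, and the addData line is just where a classifier call would go):

```js
// Switch the data-collection label based on mic volume instead of key presses.
// Assumes p5.sound is loaded.
let mic;
let currentLabel = 'category 1';

function setup() {
  createCanvas(400, 200);
  mic = new p5.AudioIn();
}

function mousePressed() {
  userStartAudio(); // resume the AudioContext on a user gesture
  mic.start();      // this is what triggers the browser's mic permission prompt
}

function draw() {
  background(220);
  // mic.getLevel() returns roughly 0-1; map it to the 0-100 scale I had in mind
  let level = map(mic.getLevel(), 0, 1, 0, 100);
  currentLabel = level < 10 ? 'category 1' : 'category 2';
  // here is where I would call something like:
  // classifier.addData(inputs, [currentLabel]);
  fill(0);
  text('volume: ' + nf(level, 1, 1), 20, 80);
  text('collecting as: ' + currentLabel, 20, 120);
}
```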
Improve the handPose example we built in class https://editor.p5js.org/yining/sketches/dX-aN-8E7
handPose p5 sketch with all five fingers:
https://editor.p5js.org/jiwonyu/sketches/JapMciUty
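For reference, a rough reconstruction of the “all five fingers” idea, assuming the current ml5.js handPose API (the class sketch may use an older version with a different callback style). In the 21-point hand layout, the fingertips are keypoints 4, 8, 12, 16, and 20:

```js
// Mark all five fingertips of each detected hand.
// Assumes ml5.js handPose (current API).
let handPose, video;
let hands = [];
const FINGERTIPS = [4, 8, 12, 16, 20]; // thumb, index, middle, ring, pinky tips

function preload() {
  handPose = ml5.handPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  handPose.detectStart(video, gotHands);
}

function gotHands(results) {
  hands = results;
}

function draw() {
  image(video, 0, 0, width, height);
  noStroke();
  fill(0, 255, 0);
  for (let hand of hands) {
    for (let i of FINGERTIPS) {
      let tip = hand.keypoints[i];
      circle(tip.x, tip.y, 16);
    }
  }
}
```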
