Soft Robotic Final

Deformable light fixture.

I want to use silicone as both the light diffuser and the container. The goal is a deformable light fixture: as I inflate it, the silicone deforms and expands, so the light deforms too, because it is refracting inside the silicone.

My fixture has two layers: one layer holds the air chamber, and the other holds the LED strips. Because my LED strip is not flexible enough, I had trouble keeping the strip flat against the mold. As a result, I had to pour more silicone than I expected. My air chamber was also too big, so it leaked air. However, I still like the result; the fixture looks like an emissive material.

In the future, I will definitely keep exploring silicone and light together. The softness of the silicone and how well it diffuses the light create lots of possibilities.

 

 

 

Energy Final

Most copper wire sold has an enamel coating; to make this train move, it has to be bare copper wire. Winding the copper wire is the trickiest part. The thickness of the wire determines not only how dense the coil can be but also how soft the wire is, which determines how easily the train can pass through the coil.

I first tested copper wire with a 0.05" diameter and an AA battery. I was not able to make the battery move. I thought it was because the wire was too thick, but that was only part of the reason.

Then I bought wire with a 0.02" diameter. This time the battery moved, but because the wire is so soft and flexible, it was hard to make a smooth path. After lots of testing and rewinding, I decided to use tape to constrain the copper coil.

To make the battery move, the magnets have to be neodymium magnets, as they are both conductive and powerful. The direction of the winding also matters: if the coil is wound counter-clockwise, then according to the right-hand grip rule, both ends of the battery need the north pole of the magnet; for a clockwise winding, it is the south pole.
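
A rough way to think about why the winding direction and the magnet polarity have to match (just a sketch of the physics, not a full analysis): the section of bare coil between the two magnets carries the battery's current, so it behaves like a short solenoid. The field inside that section points along the axis, in the direction given by the right-hand grip rule, with a magnitude of roughly

B ≈ μ₀ n I

where μ₀ is the permeability of free space, n is the number of turns per unit length, and I is the current. The magnets on the two ends of the battery have to be oriented so this field pulls the front magnet forward and pushes the rear magnet forward; flip the winding direction (or the magnets) and the net force reverses, so the train stalls or runs backward.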

My original plan was to use a solenoid to hit something and make a sound, so I grabbed a solenoid and also bought five 1.5-3 V mini motors. The plan was to place two separate copper coils very close to each other and connect the motor or solenoid across the gap, so that when the battery passes the connection point, the solenoid or motor gets triggered. However, as I mentioned, even though I tried to make the copper coil as smooth as possible, the friction was still too great for the train to move completely into the next coil. It would just stop at the connection point, although the motor did spin as I expected.

Then I came up with the idea of using the strong magnetic force of the neodymium magnets. I tried hanging a neodymium magnet inside a glass cup. As you can see in the video, the magnetic force is powerful enough to move the hanging magnet. To get a better result, I tested different containers and decided to use glass Coke bottles.

 

 

Then I played around with filling the bottles with different amounts of water to produce different tones.

 

The next step is to extend the length of the copper wire and try to make a loop, so I can have more bottles. I tried many times to build a loop, but because the wire I got was too soft, the electromagnetic train would stop at the corners. The coil above takes about 180 ft of copper wire; Sid and I spent around 4 hours just winding it, and around 200 ft of copper wire was wasted for various reasons.

 

Water Mist Light

We have a 24 V, 1 A power adapter. To integrate our circuit, we used a 7812 voltage regulator to power the two LED strips, which need 12 V DC, and a 7805 voltage regulator to power the Arduino UNO. We also used a TIP120 transistor to control the 24 V ultrasonic fog maker.

Everything works fine. We also added heatsinks to the TIP120 and the 7812 voltage regulator, as they get very hot when operating.
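
To give a sense of how the TIP120 side works in code, here is a minimal sketch; the pin number, base-resistor wiring, and timing are assumptions for illustration, not our exact circuit:

```cpp
// Minimal sketch: switching the 24 V ultrasonic fogger through a TIP120.
// Assumed wiring: Arduino pin 9 -> ~1k resistor -> TIP120 base,
// fogger between +24 V and the collector, emitter to common ground.

const int FOGGER_PIN = 9;  // assumed pin, adjust to your wiring

void setup() {
  pinMode(FOGGER_PIN, OUTPUT);
}

void loop() {
  digitalWrite(FOGGER_PIN, HIGH);  // transistor conducts, fogger runs
  delay(5000);
  digitalWrite(FOGGER_PIN, LOW);   // transistor off, fogger stops
  delay(5000);
}
```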

For fabrication, we bought a clear acrylic tube and laser-cut a base for the container, a base for the ultrasonic fogger, and a lid over the fogger to minimize water splash. To seal the bottom, we used epoxy, and we used silicone and heat-shrink tubing to seal our LED strips as well.

 

After putting everything together, we found that the lid meant to prevent water splash also trapped the fog, so we had to remove it. Also, after about 10 minutes of operation, the 7812 voltage regulator got extremely hot and our LED strips started to flicker. We decided to use two separate adapters and keep only the 7805 to power the Arduino.

 

To avoid water splash without trapping the fog, we designed a different lid. The fog slowly comes out of the center tube and moves gently.

 

We also recorded the lamp without the lid. We actually found the water splash interesting, and the movement of the fog is more obvious too. The Arduino controls the ultrasonic fog maker: if the humidity of the room reaches a certain level, the fogger is turned off, and when the water level drops below a certain point, the fog maker also turns off.
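
Here is a rough sketch of that control logic. It assumes a DHT22 humidity sensor and a float switch for the water level; the actual sensors, pins, and thresholds in our build may differ.

```cpp
// Fogger control: off when the room is humid enough or the water is too low.
#include <DHT.h>

const int FOGGER_PIN = 9;          // drives the TIP120 base through a resistor
const int WATER_LEVEL_PIN = 2;     // float switch (assumed HIGH = enough water)
const int DHT_PIN = 4;             // DHT22 data pin
const float HUMIDITY_LIMIT = 70.0; // % RH, example threshold

DHT dht(DHT_PIN, DHT22);

void setup() {
  pinMode(FOGGER_PIN, OUTPUT);
  pinMode(WATER_LEVEL_PIN, INPUT_PULLUP);
  dht.begin();
}

void loop() {
  float humidity = dht.readHumidity();
  bool enoughWater = (digitalRead(WATER_LEVEL_PIN) == HIGH);
  bool tooHumid = !isnan(humidity) && humidity >= HUMIDITY_LIMIT;

  // Run the fogger only when there is enough water and the room is not too humid.
  digitalWrite(FOGGER_PIN, (enoughWater && !tooHumid) ? HIGH : LOW);

  delay(2000);  // the DHT22 needs roughly 2 s between readings
}
```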

Into the Masterpieces – Nature of Code Final

 

Into the Masterpieces

Into the Masterpieces uses several machine learning models from the ml5.js library. The project uses a KNN classifier with PoseNet to recognize the user's pose, then picks one of our pre-trained style transfer models to turn the live webcam video into the style of that masterpiece.

For this final project, I worked with Xingyue. We both like the idea of style transfer: how a model can learn to turn everything into a very specific painting style. We also noticed that some paintings, like the Mona Lisa by Leonardo da Vinci, have very iconic poses. So we came up with the idea of letting users mimic the pose of a character in a painting, which triggers the corresponding style transfer.

Through this playful interaction, we would like to offer a whole new way for viewers to interact with paintings: they mimic the poses and then become part of the paintings.

We decided to use the ml5 library, with the KNN classifier and PoseNet, to train our model to identify the gesture that triggers each style transfer and to recognize the pose of the user.

 

The Siesta (after Millet), Vincent Van Gogh, 1890

The Dream (Le Rêve), Pablo Picasso, 1932

The Scream, Edvard Munch, 1893

 

Xingyue and I chose these three iconic paintings with recognizable poses. We trained our models on spell.run, and one of the models was pre-trained and downloaded from the ml5 library.

 

Soft Robotic Crayfish/Hairy Crab Claw Machine

Crayfish and hairy crab are really popular in China. However, they are super aggressive. Crayfish are usually not served in fine dining restaurants; crayfish is more of a street food dish, which is really joyful to share with friends and family. So I am thinking of adding more fun to that experience: a crayfish/hairy crab claw machine with a soft robotic claw.

As you can see, these creatures are super aggressive. When picking them up, you need to be very gentle.

I think a claw machine fits perfectly with street food, as many night markets in Asia offer a mix of games and street food.

To break down my project: I would like to design a soft robotic arm that can easily grab a crayfish; I probably will not design the whole control system.

Light and Interactivity #13

 

 

 

I think this spot probably has the best lighting on the 4th floor. This little corner is one of the darkest corners on the floor, and the light casts a pretty shadow on the wall. Since the light is behind the stair, the contrast between the dark and bright areas is really interesting.

 

 

The hallway near the IMA area also has amazing lighting. The glass hallway makes an interesting reflection of the light outside, and the classroom only has a few spotlights on, so the contrast between the darker classroom and the hallway is also interesting. The tone of the light inside is cooler than outside.

 

 

Some process shots from our final project. Our light will have three parts. The top will be the mist created by the ultrasonic fog maker. The middle will hide all the electronic components and LED strips, half covered with water; there will be two LED strips, one facing up toward the fog and one facing down toward the water, and the ultrasonic fog maker will be hidden here too. The bottom will be just water.

Soft Robot #4

 

I chose to use the stencil method. First, I laser-cut a piece of fishbone-shaped acrylic. Then I used cardboard as the base of the mold and hot-glued a wall around my stencil. I poured silicone into the mold and waited for it to cure, then placed my stencil on top of the cured silicone, applied the release agent, and poured silicone again. The problem I ran into is that I forgot which side has the tunnel for the air to go in.

 

 

Soft Robot Crane Machine

Use a soft robotic arm to get your favorite toy

Nature of Code #Final Project Proposal

For my final project, I want to use a machine learning model that classifies people's emotions. I am not entirely sure where I will apply this, but here are some thoughts.

I want to create a creature in the digital world that reacts to people's emotions. The creature can sense a person's emotion and change its appearance. For example, the following weird flower that I created in C4D will change its appearance when the user's emotion changes.

If you are friendly and curious, the creature is also curious about you.

When you are not happy, the flower gets anxious.

 

I will start testing in p5 or Processing. However, I want to push further and use this in an AR or VR experience. I might export the object and animation to Three.js or Unity.

 

Here is a trained model, created by Brendan Sudol, that Dan showed me:

https://brendansudol.com/writing/tfjs-emotions

Light and Interactivity #Final_1

During the light observation assignments this semester, I observed many stunning moments when fog or mist diffused the light. I am obsessed with fog, and I really want to use it for my final assignment.

I did some research and decided to use an ultrasonic fog maker to generate the mist, which is more sustainable and safer. I want to place a glowing geometric shape inside the fog, similar to the pictures above.

One thing I am still experimenting with is the light source and the interaction. I bought some EL panels and wires.

 

With EL wire, I can easily make the shapes I like, and it is waterproof, but I have much less control over it.

For the actual piece, I want a clear rectangular box with the water and ultrasonic fogger at the bottom. Then I will hang my neon shape, like a moon, from above.

 

Soft Robotic #2

I put a bunch of triangles together and created this shape. I want the balloon to twist when inflated with air. I was wondering if the balloon could twist in one direction; I tried adding a few internal constraints, but it did not work. I am curious how the class example we saw works, which turns a flat balloon into a cube.

 

Here is a paragraph from Alexis Noel's blog about frog tongues:

Frog tongues

“The frog’s tongue is able to capture an insect in under 0.07 s, five times faster than a human eye blink. … The frog tongue projects out of the mouth using an inertial projection mechanism: the jaw rapidly opens, the tongue rotates and inertia of the tissue causes the tongue to project toward the prey.”

The speed at which a frog tongue moves really interests me. Surprisingly, the frog takes advantage of inertia when opening its mouth to speed up the process.

I think such a feature could be used to pick defective products off a mass-production assembly line. When a product is detected as defective, the frog-tongue mechanism would shoot out and remove it in a very short time.