# Blog 6: Group Research Project 1
Background
In Newton's Sleep, the selected elites gradually develop illusions of people, nature, and other things from their original planet, Earth. They cannot get rid of these illusions and have to live with them unless they all undergo lobotomies and receive electronic brain implants. Another problem is that if people choose to live with their illusions, we should create a complete setup in which they can have a "shared experience with elements of interactivity," which the illusions alone are currently unable to provide.
Project Idea
With the background above, our group focused on a kind of floor that can project interactive images according to people's brain waves.
(https://www.yimashijie.com/3dqxty/baike/2412.html)
Here is my [Sketch] for our project:
When a person wearing the helmet (see 2 in the picture above) walks on the floor, the helmet detects the person's brain waves and decodes his or her perception of that moment. We designed two different types of helmets to suggest that in that universe there might be two major competing companies making this kind of product, just like Apple and Microsoft.
The information is then sent to the signal receptor (see 3 in the picture above) on the ground, which lets the floor know what people are seeing. Based on each person's illusion, the floor projects images with multiple projectors (see 4 in the picture above) to create a hologram of the illusion.
The system also combines visual, auditory, olfactory, and tactile experiences. The auditory and olfactory experiences are realized through the integration system located in the signal receptor: a speaker and a fragrance diffuser are added at the bottom of the receptor.
The tactile experience is realized through the gloves (see 1 in the picture above): enter your NetID, press the "Connect" button, and you are ready to go. The gloves send their spatial position to the receptor, so when you reach into the hologram, the device knows your hands are there. A micro-current released by the gloves then makes people feel as if they are touching and interacting with the 3D image.
In addition, using AI technology, the hologram can respond the way its real counterpart would. Every interaction with it is tailored and unique.
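Since everything above only exists as a cardboard prototype, here is a purely conceptual sketch in Python of how data might flow through the system: the helmet's decoded perception and the gloves' position arrive at the signal receptor, which then drives the projectors, the speaker, the fragrance diffuser, and the gloves' micro-current. All class names, methods, and numbers below are my own assumptions for illustration, not a real implementation.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """What the helmet decodes from brain waves (all fields are hypothetical)."""
    scene: str   # e.g. "sunflower field"
    sounds: str  # e.g. "wind, birdsong"
    smells: str  # e.g. "grass"

@dataclass
class GlovePosition:
    x: float
    y: float
    z: float

class SignalReceptor:
    """Receives the helmet's decoded perception and the gloves' position,
    then drives the projectors, speaker, fragrance diffuser, and gloves."""

    def render(self, perception: Perception, glove: GlovePosition) -> None:
        self.project_hologram(perception.scene)
        self.play_sound(perception.sounds)
        self.diffuse_scent(perception.smells)
        # If the hand reaches into the hologram volume, send a micro-current
        # so the user feels as if they are touching the 3D image.
        if self.inside_hologram(glove):
            self.send_haptic_pulse(glove)

    def project_hologram(self, scene: str) -> None:
        print(f"[projectors] rendering: {scene}")

    def play_sound(self, sounds: str) -> None:
        print(f"[speaker] playing: {sounds}")

    def diffuse_scent(self, smells: str) -> None:
        print(f"[diffuser] releasing: {smells}")

    def inside_hologram(self, glove: GlovePosition) -> bool:
        # Placeholder: treat a 1 m cube around the receptor as the hologram volume.
        return max(abs(glove.x), abs(glove.y), abs(glove.z)) < 1.0

    def send_haptic_pulse(self, glove: GlovePosition) -> None:
        print(f"[gloves] micro-current at ({glove.x}, {glove.y}, {glove.z})")

# Example: one cycle of the loop described above
receptor = SignalReceptor()
receptor.render(Perception("sunflower field", "wind, birdsong", "grass"),
                GlovePosition(0.3, 0.2, 0.5))
```

In the real device, each of those print statements would of course be replaced by actual drivers for the projectors, speaker, diffuser, and gloves.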
Go Beyond Current Technology
Two technologies in this project are especially interesting: holography and brain-wave reading.
1. Holography
Holographic projection technology exists, but no institution in the world has made it truly practical yet. It is easy to record the intensity of light, but hard to record its phase, which carries the information about where the light comes from and is what real holography actually relies on. If we could record both the intensity and the phase of the light, holographic projection could form images directly in the air and be viewed from all angles. (Zhihu)
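For readers who want the one-line version of why phase matters (this is standard holography background, not something specific to our project): a classical hologram records the interference pattern between the light scattered by an object, the object wave $O$, and a reference wave $R$:

$$ I = |O + R|^{2} = |O|^{2} + |R|^{2} + O R^{*} + O^{*} R $$

The first two terms are plain intensities, but the two cross terms preserve the phase of the object wave, which is exactly the directional information that lets a hologram, unlike an ordinary photograph, be viewed from different angles.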
I also found a project that has succeeded in creating holograms, but the size and complexity of the images are still limited.
Our project aims to create stable, large-scale scenes that have practical uses.
2. Brain-wave Reading
Like holography, brain-wave reading is an existing technology, but it still requires a lot of improvement.
One example is InteraXon. It built a wearable EEG device along with software to classify the brain waves it measures: low-frequency "alpha" waves indicate a relaxed state, while higher-frequency "beta" or "gamma" waves indicate a busy or concentrating mind. However, InteraXon co-founder Ariel Garten said, "Although you could control technology with your brain, the way that you did it was not very effective. Frankly, you could just turn the thing with your hand much more readily."
Another example is Openwater. It uses skull-penetrating infrared light to measure blood flow, a sign of which brain areas are working hardest.
But none of the current technologies can read the exact images people see in their minds, and that is one of the most important features of our project.
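To make concrete how limited this kind of brain-wave reading currently is, here is a minimal sketch in Python of the alpha-versus-beta/gamma classification that an InteraXon-style device performs; the band boundaries, sampling rate, and fake data are all my own assumptions for illustration, not InteraXon's actual software. It can tell "relaxed" from "concentrating," but nothing close to reconstructing the exact image in someone's mind.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` within the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= low) & (freqs <= high)
    return power[band].mean()

def classify_state(eeg_window, fs=256):
    """Label a short EEG window as 'relaxed' (alpha-dominant)
    or 'concentrating' (beta/gamma-dominant)."""
    alpha = band_power(eeg_window, fs, 8, 12)        # alpha band, roughly 8-12 Hz
    beta_gamma = band_power(eeg_window, fs, 13, 45)  # beta/gamma band, roughly 13-45 Hz
    return "relaxed" if alpha > beta_gamma else "concentrating"

# Demo with two seconds of fake data (random noise standing in for a real EEG trace)
fs = 256
fake_eeg = np.random.randn(2 * fs)
print(classify_state(fake_eeg, fs))
```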
Interactivity of the project
According to my previous research (see more in Blog 1), I have defined interaction in general as a cyclic conversation that involves two or more actors, or purposeful creatures (Crawford, p.3). Our project involves at least two participants: the user and the device with its AI programming system, which can receive input and give feedback to the user. The conversation between the two is cyclic because the AI's feedback is tailored to the user's input rather than being preprogrammed, stiff answers, so the user is prompted to really think about that feedback and give meaningful responses. It is neither a one-direction lecture nor a one-round conversation. Also, the degree of interactivity of our project is relatively high because it has intellectual utility: it helps people have near-real connections with their illusions, or, in another sense, with the memory of Earth embedded in their blood.
Successes and Failures
1. Successes
We designed a relatively complete ecosystem for the device. By this I mean we prototyped almost all the concrete objects people would need to use it, and we tried different ways of representing the 3D projection. Also, we began the project quite early, so we had enough time to revise our prototype.
2. Failures
I think the prototype we made was not refined or detailed enough to show all of our ideas. For example, because of the workload, we only made one projector, whereas our original idea was to have multiple projectors so that the 3D image could cover the entire space the user sees.
Also, what is the next step for this project? The project is intended to help users live with their illusions and remind them of their home planet, but the question may arise that people could become so obsessed with the virtual world that they lose track of what they really have.
Personal Contributions
- Provided the initial idea of our project: a floor that can project interactive images according to people's minds
- Planned the whole system of our project
- Designed and made the two gloves that can realize tactile sensation
- Made a base for the brain wave receptor
- Decorated the gloves and the receptor
- Made a sunflower as a prop
- Provided ideas for the performance: the context of two scenes
Process our group used to work together
February 22
We had our first meeting to decide which idea to use. We created a Google folder and everyone put his or her ideas in it. After we went through every idea, we had a heated discussion focusing on three of them (details are included in this Google doc). The main disagreement was that some of us wanted to make something more complex, with a higher level of interactivity, that would fit the universe of the novels better, while others wanted to make something less complex considering the workload and thought that we shouldn't be bound by the novel. Because opinions varied, we weren't able to decide right away, so we ended the meeting, planning to work on all three ideas first and consult with the professors.
February 23
We didn't want to get stuck at the first stage, so we had another round of discussion at night. It felt almost surreal that we made the decision really quickly that day and zoomed in on one idea in particular: the floor.
February 25
On Friday, we went to the workshop to learn some cardboard-making techniques. We drew drafts and did some prototyping during the workshop. After that, we began to really build our project. That day, we finished the basic parts of the project, including two helmets, two gloves, and a brain-wave receptor.
February 27
We all came to school on Sunday. I made a base for the brain-wave receptor and decorated the gloves and the receptor. Sebastian made a projector. After we discussed the play, Phoebe drafted the lines for our performance in Chinese first and Rena translated them into English. Once we had made clear which scenes we were going to use, I made a sunflower as a prop, with help from Rena and Phoebe.
March 3
Today we had a rehearsal for our performance. Because we all have different schedules, we had to rehearse separately. Rena and I rehearsed first. I play the AI, and all I had to do was squeeze myself into a white bed sheet and read my lines in a mechanical voice. It's a bit harder for the others, since they have to remember their lines and use body language and facial expressions. I rehearsed several times with Rena to help her memorize her lines, and during the rehearsal we found several places where the lines needed to change.
After I rehearsed with Rena, I waited for Sebastian and Phoebe to come. Unfortunately, Sebastian's documentation for the previous recitation somehow disappeared, so he had to rewrite all of it before the deadline. After he handled this emergency, the three of us rehearsed outside the piano classroom, making awkward sounds. After this whole process of working with my teammates, I hope everything goes well on Friday!
Assessment of a performance from another group
The Mood Bubble team's project is a machine that can connect people and share their emotions and memories. People can upload their emotions to the main brain through tubes connected to their heads.
It is relevant in a way to the Winter Market since it also conveys the idea of sharing consciousness. It meets the assignment's criteria for interactivity because the machine can help two or more people exchange their thoughts, thus creating different scenarios for different users.
I think this artifact has a good intention: to make it easier for people from different backgrounds to sympathize with others. However, it is not that creative, since it borrows the idea directly from the fiction with few changes. The performance is pretty easy to understand, with the emotions drawn on the heads and the physical tubes.
If there is anything that could be improved, I would suggest that they cut one or two rounds of the repeated performance, focus on each individual experience, and elaborate more deeply on their concepts. They could also add a section reflecting on the possible consequences of this artifact.
March 4, 2022, Century Avenue, Younian Liu