Good Shooting Spot for VR by Ryan

This is the location of the spot I found. It is along the riverside, just opposite the Bund, and around it is the Lujiazui CBD where the JinMao Tower and Shanghai Center stand.

This is how popular the spot is, with many photographers here shooting the view of the Bund.

These two pictures show the surroundings with their high-rises. The following is a group of pictures I took of the Bund from sunset into the evening.

Project example by Ryan

Example One

This is the personal webpage of the photographer Xan Padrón, who has been taking photos at the same spot in different cities to record the behaviour of different people, combining them to tell the story of life and the city.

Example Two

This is another photographer, Richard Silver, who has created a series of works called Time Slice. He photographs places around the globe throughout the day and combines the shots so that the changes of a place over a day can be seen within one picture.

This is also a video I took a year ago at Taihu in Suzhou.

Thoughts on image-based VR by Ryan

The TED Talk given by Blaise Aguera y Arcas impressed me a lot; it is unbelievable to me that already in 2007 there was an application that could connect 2D images to reconstruct a 3D model. It works by gathering images of a certain subject taken all over the world, in all sorts of ways and from different perspectives, and combining the details from all of those perspectives to reconstruct the subject in 3D. But the problem is obvious: the quality of the images directly influences the 3D model. Authenticity, sharpness, depth, there are many factors that can affect the reconstruction. As shown in the presentation, one image of Notre Dame de Paris is actually a poster rather than the real building, yet it is still counted by the application as a perspective on the model, which might hurt the accuracy of the reconstruction. We also need to consider that some images may have been edited, which could cause distortion. I think this is why the process takes such a long time: the collected images have to be classified, which I suppose involves machine learning.
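To make the idea a bit more concrete, here is a minimal sketch of my own (not Photosynth's actual code) of the very first step of such a pipeline: detecting and matching local features between two photographs of the same landmark. The file names are placeholders; a real structure-from-motion system would then estimate camera poses from such matches across hundreds of photos and triangulate the matched points into 3D.

```python
# Minimal sketch: match local features between two photos of the same landmark.
# This is only the first step of an image-based 3D reconstruction pipeline;
# file names are placeholders. Requires opencv-python (pip install opencv-python).
import cv2

img1 = cv2.imread("notre_dame_1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder photo
img2 = cv2.imread("notre_dame_2.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder photo

sift = cv2.SIFT_create()                      # detect scale-invariant keypoints
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep only the clearly best matches (Lowe's ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

print(f"{len(good)} reliable correspondences between the two views")
# A full pipeline would recover camera positions from such correspondences
# and triangulate the matched points into a 3D point cloud of the landmark.
```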

As for how the challenge has evolved, from the publications of the four original developers of Photosynth I can see that they are still, in one way or another, working in the 3D imaging area. Many of the challenges are related to machine learning: better recognition within images, higher resolution, and improved motion- and image-capture technologies. Some of them keep working on image and motion capture, while others work on making the reconstructed environment more realistic, adding stereoscopic depth and geo-localization. I think it will be impressive when this technology matures, since 3D models can be built from related 2D images; for example, even though Notre Dame de Paris has burnt, we can still reconstruct it as a 3D model from past photographs. What's more, we could use imaginary 2D pictures to build 3D models of things from fantasies or myths, or reconstruct relics that have already been destroyed.

Week 4 – Group project research

In The Art of Interactive Design, Crawford gives his definition of interaction: "a cyclic process in which two actors alternately listen, think, and speak" (Crawford, 5). After finishing the group project research and combining it with Crawford's idea, what interaction means to me now is this: something interactive should affect the user while the user is using one of its functions; it is a communication that goes two ways.

The two projects I looked at are "Materiable" and "SoundFORM", both from the Tangible Media Group at the MIT Media Lab. Both projects use a shape-changing interface that can render physical forms. Instead of manipulating objects the usual way through a computer, people manipulate them with their hands in a controllable way, and when users interfere with the interface, it shifts its shape as an outcome of that interference. I consider this a perfect form of interaction: it is like people communicating with the object, and the object returns something in response to your behaviour.

The "Materiable" project can simulate a variety of properties. As the developers put it, "The system can create computationally variable properties of deformable materials that are visually and physically perceivable," so the shifting shape of the project can reflect many things: the force when someone pushes a finger into it, how that force changes, changes of color, and so on. The result of its interaction with users is that users can directly feel, through haptics, the changes of the blocks. I consider this a perfect way of communication between a project and a user.

Materiable

The "SoundFORM" project is also amazing. Similar to "Materiable", it lets users interact with notes by changing the sound waves visualized on the display. As the developers say, "Through the use of a shape-shifting display, synthesized waveforms are projected in three dimensions in real time affording the ability to hear, visualize, and interact with the timbre of the notes." Normally it is impossible to feel a sound wave and change it directly by hand, but visualizing it through the blocks makes this interaction between producers and their compositions possible. Through a gesture vocabulary, the synthesized waveforms can be modified just by hand movement. The project shows the communication between notes, sound waves, and human gestures.

SoundFORM
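To picture how a waveform might be "projected" onto a pin-based shape display, here is a rough sketch of my own (not SoundFORM's actual code) that samples one cycle of a synthesized note and maps its amplitude onto rows of pin heights. The grid size, pin travel, and two-harmonic tone are all assumptions made purely for illustration.

```python
# Minimal sketch: map one cycle of a synthesized waveform onto a pin grid.
# Grid dimensions and pin travel are assumed values, not SoundFORM's specs.
import numpy as np

GRID_COLS, GRID_ROWS = 24, 16      # assumed pin grid size
PIN_MAX_MM = 50.0                  # assumed maximum pin extension

def waveform_to_pins(freq_hz: float, timbre: float = 0.3) -> np.ndarray:
    """Return a (GRID_ROWS, GRID_COLS) array of pin heights in millimetres."""
    t = np.linspace(0.0, 1.0 / freq_hz, GRID_COLS, endpoint=False)
    # Base tone plus one harmonic; 'timbre' controls the harmonic's weight.
    wave = np.sin(2 * np.pi * freq_hz * t) + timbre * np.sin(4 * np.pi * freq_hz * t)
    wave = (wave - wave.min()) / (wave.max() - wave.min())   # normalise to 0..1
    heights = wave * PIN_MAX_MM
    # Repeat the same profile across every row so the wave reads as a 3D ridge.
    return np.tile(heights, (GRID_ROWS, 1))

pins = waveform_to_pins(440.0, timbre=0.5)   # an A4 note with a brighter timbre
print(pins.shape, pins.round(1)[0, :6])      # one row of pin heights in mm
```

Changing the `timbre` parameter reshapes the ridge, which is roughly the kind of hear-and-see coupling the project describes.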

The idea of our group project is a multifunctional life jacket that can handle several emergencies to save the user's life. There is a rocket injector on the back, an oxygen tank, an anchor in the front, and also a fire extinguisher with a high-tech helmet. When there is an earthquake, the helmet and the jacket protect you from falling objects; when there is severe haze, the helmet becomes a gas mask against poisonous gases; when there is a huge fire, the fire extinguisher can put it out; when there is a hurricane or the sea level rises, the anchor keeps you from drowning, and the rocket pack can get you out of almost any situation. From my perspective, the way it is interactive is that for each circumstance it offers a specific function: the user uses that function, and the jacket returns a corresponding outcome to save the user. To fit the requirement that the project should be something from the future, the jacket combines many functions at once, and in our idea it can be packed into a capsule so it is very portable; the idea is totally new and sounds incredible now, so I think it fits the requirement that it is only possible in the future.

Bibliography

Crawford, Chris. "What Exactly Is Interactivity?" The Art of Interactive Design, pp. 1–5.

SoundFORM

https://tangible.media.mit.edu/project/soundform/

Materiable

https://tangible.media.mit.edu/project/materiable/

Week 5 – VR/AR homework on the Oculus Connect 6 developer conference

For the Oculus Connect 6 developer conference, I chose to watch the 12-minute summary and two videos, Creating Spatialized Music for VR/AR and A New Architecture: Unity XR Platform. It is amazing to me that Oculus Quest is going to support hand tracking without hand controllers. This is great progress towards making the VR experience more real, since what we do with our hands in VR will no longer depend on pressing a controller's buttons or swiping its touchpad. Hand tracking can imitate the detailed movement of our hands, such as how our joints bend and how much force we apply, enabling more complicated movements and effects in VR applications. Passthrough+ is also important and convenient: you do not need to take off the headset to see what is happening around you, which makes using a VR headset much safer in any circumstance.

Bringing Facebook's social features into Oculus makes it more interactive, as users can share their experiences and hold events in real time; it is more convenient than in reality to invite someone to play with you, since everyone just needs to put on a headset and join the others online, which feels like being together. The idea of Facebook Horizon is also amazing: it is a highly interactive world which Zuckerberg describes as one where users can do anything they want. I believe this would be a great achievement, because any open world, not only in VR but on any platform, is extremely hard to realize; there are always limits to how users can manipulate it. For example, letting users cut an object into a customized shape places high demands on the physics engine, not to mention in a VR open world. Making it real yet highly controllable is really amazing to me.

CTRL-labs, which is working on a neural control system, fascinates me the most. It is incredible to control movement just by thinking, with only a wristband on. I have no idea how this works in detail, but achieving it with a wristband rather than a device implanted inside the body is far beyond my expectation. Combined with the hand-tracking system, it is possible to foresee that we may not even need a menu when using a VR headset, since we can perform movements with our hands and send commands just by thinking; all of this makes VR a second world to live in. As for the machine perception part, how real the VR scene looks and how precisely human faces and bodies can be captured and displayed are both amazing; I think it is progress towards making the VR world more real, instead of only using cartoon figures to communicate.

The first video, the Unity one, mentions three points: Unity XR integration, API convergence, and the Universal Render Pipeline. For Unity XR integration, many assets and scripts have been bundled into packages so developers can download exactly what they need by choosing the corresponding packages. What interests me most is that developers can do 3D modeling and rendering in VR: instead of dragging objects with a mouse on a PC, it is like being inside Unity to manipulate the objects. If this could also work with the hand-tracking system, it would be amazing to shape objects by hand into whatever form we want and drag them wherever we want. API convergence is about cross-platform APIs, so that an API used on one VR headset can be applied to other devices even though they have different interfaces, which helps eliminate many problems when developers switch between platforms. For the Universal Render Pipeline, I am quite confused about how it works, but one thing I learned is that it targets tile-based renderers, where certain operations are quite expensive, so it is important to do the heaviest rendering effects at the final stage to save unnecessary computing cost.

The second video, on spatialized music, talks about the process of making music spatialized in VR. Unlike traditional head-locked stereo music, which sounds very flat, quad-mixed sound can be heard from every direction and is world-relative, and there is a kind of compass to determine where the sound comes from. Ambisonic-mixed sound follows the user around and is also world-relative. Videos with spatial sound are created in a different sequence than usual: the music is made first and then the visuals. Also, the overall experience is built in Unreal Engine, while Wwise, a game-oriented sound design tool, is used to realize the sound effects, and the music is placed inside the environment. It is totally new for me to learn how music and visuals are combined in VR, because it is totally different from traditional game audio on a PC: even though players can hear where a sound comes from and even how far away it is, it is not an immersive environment, and no matter how real it feels, it cannot compare with VR. But it is obvious that realizing spatial sound is much harder in VR.
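As a rough illustration of what "world-relative" means here, the following is my own minimal sketch (not taken from the talk): a world-fixed source direction is rotated into the listener's head frame every frame, so turning your head changes which ear the sound favours, whereas a head-locked source simply ignores the head rotation. The yaw-only rotation and the simple sine/cosine panning law are simplifying assumptions; real spatializers use HRTFs or ambisonic decoding.

```python
# Minimal sketch: head-locked vs. world-relative panning for one sound source.
# Directions are [right, forward] in the horizontal plane; positive yaw means
# the listener turns to the right. All of this is a simplified illustration.
import numpy as np

def pan_gains(direction_xz: np.ndarray) -> tuple[float, float]:
    """Left/right gains for a unit direction expressed in the listener's head frame."""
    azimuth = np.arctan2(direction_xz[0], direction_xz[1])  # 0 = straight ahead
    left = np.cos(azimuth / 2 + np.pi / 4)
    right = np.sin(azimuth / 2 + np.pi / 4)
    return float(left), float(right)

def world_relative(source_dir_world: np.ndarray, head_yaw_rad: float):
    """Rotate a world-fixed source direction into the head frame, then pan it."""
    c, s = np.cos(head_yaw_rad), np.sin(head_yaw_rad)
    rotated = np.array([c * source_dir_world[0] - s * source_dir_world[1],
                        s * source_dir_world[0] + c * source_dir_world[1]])
    return pan_gains(rotated)

source = np.array([1.0, 0.0])            # a source directly to the listener's right
print(pan_gains(source))                 # head-locked: always favours the right ear
print(world_relative(source, np.pi / 2)) # after turning 90° right, it is now ahead
```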

The following is a photo I took along the Huangpu River while riding along the riverside; the reeds artfully cover the buildings behind them in the beautiful sunset.