Week 5: Challenges in VR Development

From the TED talk, I was particularly intrigued by how Photosynth accomplished accurate image stitching and depth detection using user-contributed photos. This raises questions about user autonomy and how people's data and images are being used. Consider a hypothetical situation where photo data is repurposed for political statements, vandalism, or threats. How would users control what their data is used for?
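
To make the stitching idea concrete, here is a minimal sketch of feature-based image stitching with OpenCV. This is not Photosynth's actual pipeline, which reconstructed scenes from thousands of photos via structure-from-motion; it only illustrates the core step of matching features between two overlapping user photos and warping one onto the other. The filenames are placeholders.

```python
import cv2
import numpy as np

# Load two overlapping photos (placeholder filenames).
img_a = cv2.imread("photo_a.jpg")
img_b = cv2.imread("photo_b.jpg")

# Detect ORB keypoints and descriptors in each image.
orb = cv2.ORB_create(2000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]

# Estimate a homography that maps image B's points onto image A's plane.
src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp image B into image A's frame and paste A on top.
h, w = img_a.shape[:2]
panorama = cv2.warpPerspective(img_b, H, (w * 2, h))
panorama[0:h, 0:w] = img_a
cv2.imwrite("stitched.jpg", panorama)
```

Even this toy version makes the privacy concern tangible: the math works on whatever photos you feed it, with no notion of whether the people who took them consented to the composite.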

Among the scholarly VR articles, I read about Photo Tourism, which pursues a concept similar to what Photosynth was aiming at. Besides the aforementioned issue of user data, I was thinking about the physical limitations of VR. Many aspects of tourism lie beyond sight and sound alone. If you were to create a lifelike tourism simulation, how would you simulate scent and touch? Non-visual feedback has been used in games, such as haptic chairs, but how would you integrate it seamlessly in another context? And how would you automate it?
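
One possible answer to the automation question is to treat non-visual feedback like an event system: designers annotate the scene once, and cues fire automatically during playback. The sketch below is purely hypothetical; the device names, cue values, and dispatch function are all invented for illustration, since real haptic or scent hardware would expose vendor-specific APIs.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SenseCue:
    device: str       # hypothetical device id, e.g. "haptic_chair"
    intensity: float  # 0.0 to 1.0
    duration_s: float

# Registry mapping scene events to cues, so the environment is
# annotated once and playback triggers cues automatically.
CUE_MAP: dict[str, SenseCue] = {
    "ocean_spray": SenseCue("scent_diffuser", 0.4, 3.0),
    "cobblestone_step": SenseCue("haptic_chair", 0.7, 0.2),
}

def on_scene_event(event: str, dispatch: Callable[[SenseCue], None]) -> None:
    """Fire the matching cue when the simulation reports an event."""
    cue = CUE_MAP.get(event)
    if cue is not None:
        dispatch(cue)

# Example: print instead of driving real hardware.
on_scene_event("ocean_spray", lambda cue: print(f"trigger {cue}"))
```

The appeal of this design is that the hard part, deciding which moments deserve a scent or a rumble, stays with the experience designer, while the runtime dispatch is trivial to automate.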
