Reflection on Image-based VR and Photosynth

The TED Talk by Blaise Aguera y Arcas is an inspiring video showing that image-based VR techniques were taking shape as early as 2006. However, it also suggests that the field has encountered challenges, which may explain why we are not experiencing these techniques at scale today. From my perspective, the main obstacles that might have held technologies like Photosynth back are computational cost, data privacy issues, and limited user demand.

In the talk, Blaise Aguera y Arcas focuses on the zooming interface and on how to seamlessly transition from one picture to another, using a system like Seadragon. His more recent patents and research show that he is exploring viewing interfaces that perform better while requiring less computational power. Interestingly, he has also filed a patent to help phone users protect their data, probably including images, from massive mining, which is exactly the kind of data collection required to create a project like Photosynth. That seems to be a sign that privacy concerns will hold back the development of this semantically rich network of photos.

After looking through the work of the other researchers mentioned in the talk, some interesting trends emerged. Richard Szeliski, for example, has moved toward video-related topics and spatial reconstruction. Noah Snavely continues to research depth estimation and scene reconstruction. Steve Seitz is researching texture-related topics. What surprises me is that few of their papers explicitly mention "virtual reality" in the title; only very recently has one of their papers stated that its results are aimed at VR consumption. So I think one reason we are not seeing many examples of Photosynth-like systems is that we do not yet have the ideal viewing interface for them. With VR developing rapidly and ownership of VR devices increasing, technology like Photosynth may have a larger role to play.
