Watching the TED Talk, I was reminded of a presentation I saw at the research center where I interned this summer. In it, my fellow researchers presented a project on creating point-cloud maps from camera data: the cameras were mounted around the car in order to capture data for a 3D map later used for self-navigation. Even together with the other state-of-the-art sensors, the quality of what multiple cameras could help create seemed very volatile, especially when it came to analyzing the depth of the image; the system had a hard time discerning a person biking on the road.

And because, in my head, there is not much difference between what video and still images can capture, I was really surprised by the quality and precision Photosynth offered. Of course, there must be a huge difference between processing data from a moving vehicle, where the map is computed from spatio-temporal relations, and taking many still images and aligning them based on their similarities. Still, just looking at the dates of the earliest publications by the developers of Photosynth, from 2000 and even 1999, one can see how much hard work it takes to build algorithms that take overlapping 2D images and reconstruct them into a very elegant 3D map.
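As far as I understand it, that alignment starts with something conceptually simple: finding the same visual feature points in two overlapping photos. A minimal sketch of that first step using OpenCV's SIFT detector might look like the following (the image paths are just placeholders, and this is only the matching stage, not the full 3D reconstruction):

```python
import cv2

# Load two overlapping photos (file names are placeholders for illustration).
img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect scale-invariant keypoints and compute a descriptor for each one.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors between the two images; Lowe's ratio test discards
# ambiguous matches so only confident correspondences remain.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Given enough correspondences, the relative camera geometry can be
# estimated, and repeating this across many photo pairs is what lets a
# system grow a sparse 3D point cloud out of flat 2D images.
print(f"{len(good)} confident correspondences between the two photos")
```

Chaining thousands of such pairwise matches together and solving for all the camera positions at once is, from what I gather, where the decade of hard work in those publications went.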