The TED Talk dates back to 2007, which is surprising but also reasonable: the idea of building 3D models from 2D images has been around that long. The examples shown, including Grassi Lakes and Notre Dame, were all reconstructions from photos taken by tourists or by the software team itself. However, the application had many limits at the time; in the Notre Dame case, for example, one image came from a poster and distorted the model somewhat. Both the quality of the images and the computational methods for reconstruction were focus areas for further development.
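To make the "2D photos to 3D model" idea concrete, here is a minimal sketch of the geometric core that systems like Photosynth perform at scale after matching photos: triangulating a 3D point from its 2D projections in two views with known cameras. The camera intrinsics, poses, and points below are all synthetic values chosen for illustration, not anything from the talk.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 2D-2D correspondence.

    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coordinates.
    Solves A X = 0 for the homogeneous 3D point via SVD.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null-space vector = homogeneous point
    return X[:3] / X[3]

# Synthetic scene: shared intrinsics, first camera at the origin,
# second camera translated one unit along x (hypothetical values).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

points_3d = np.array([[0.0, 0.0, 5.0],
                      [1.0, -0.5, 6.0],
                      [-0.5, 0.3, 4.0]])

# Project each 3D point into both cameras, then recover it by triangulation.
recovered = []
for X in points_3d:
    Xh = np.append(X, 1.0)
    x1 = (P1 @ Xh); x1 = x1[:2] / x1[2]
    x2 = (P2 @ Xh); x2 = x2[:2] / x2[2]
    recovered.append(triangulate(P1, P2, x1, x2))

print(np.allclose(recovered, points_3d, atol=1e-6))  # True (noise-free data)
```

Real pipelines first have to find those correspondences across thousands of unordered tourist photos and estimate the cameras themselves, which is where the hard problems the talk hints at actually live.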
Then, judging from the four publications by the Photosynth developers, the challenges evolved along similar lines: machine learning problems such as image classification and identification, as well as methods of image capture and motion estimation. Over time, the enormous amount of data and improving computational pipelines have both contributed to progress, but it may take longer to reach a satisfactory level, especially when comparing this technology to other networked applications. All of these directions will require long-term research and experimentation.
As for why some people quit and some stay: one reason, I'm guessing, may be the expanding scope of image-based technology, from simple 3D modeling into VR, AR, and other spaces. That expansion brings in new blood and pushes out those who cannot keep up.