The VR production project was a great way to get hands-on with production techniques and put the theories we learned in class into practice. I had a great time working with VR production and learned a lot of VR-related knowledge along the way. Since I have a certain level of proficiency with video editing in Premiere, I thought the post-production phase of this project would be fairly simple. However, I ended up spending the whole weekend to accomplish what I had in mind.

I learned that when working with newer forms of media such as VR, there are far fewer resources at our disposal, since it is still a developing area. Not only is there a lack of support from video applications, but VR-specific hardware such as the Insta360 camera we used in class can have many problems, since the algorithms behind it are still being perfected. The production process itself was also much more complicated than I expected: I had to build many effects by hand just so that the left and right eye would show the same effect. While it is easy to fool the eyes in a traditional video, keeping post-production effects aligned across both eyes is another story. I ended up manually masking the sky in both eyes to achieve the special effect I wanted, and creating a separate mask for the ground to make the colors look natural, as opposed to simply laying filters over the lens. If I get the chance to do this project again, I will definitely film the day and night footage in one shooting session, without moving the camera.
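The per-eye masking step can be illustrated with a short sketch. This is a minimal numpy illustration, not my actual Premiere workflow: it assumes an over-under stereo layout (left eye in the top half of the frame, right eye in the bottom half), and the frame sizes and color are made up.

```python
import numpy as np

def apply_mask_both_eyes(frame, mask, color):
    """Apply the same mask (e.g. a sky replacement) to both eyes
    of an over-under stereo frame so the effect stays aligned.

    frame: (2h, w, 3) over-under stereo image, left eye on top
    mask:  (h, w) boolean mask defined in one eye's coordinates
    color: RGB value painted into the masked region of each eye
    """
    h = frame.shape[0] // 2
    out = frame.copy()
    out[:h][mask] = color   # left eye (top half)
    out[h:][mask] = color   # right eye (bottom half)
    return out

# Toy demo: 4x4-per-eye frame, mask the top "sky" row of each eye
frame = np.zeros((8, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[0, :] = True
result = apply_mask_both_eyes(frame, mask, (135, 206, 235))
```

Reusing one mask for both eyes only works because the sky is effectively at infinity, where disparity is near zero; a mask on a nearby object would need a per-eye horizontal offset to stay aligned.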
When we first got access to the 2K files from previous classes, I experimented in Premiere to see what I could and could not do:
adding greenscreen effects and avatars
Another problem I had during production was the alignment of the day and night footage, which was filmed at the same spot but at a slightly different angle. I suspected this would not work well, given the degrees-of-freedom talk we had in class about how extremely difficult it is to reproduce the exact pan/tilt angle of a previous shot. Yet through some post-production experiments, it seemed I could manipulate the footage enough to improve it.
One attempt I made was to adjust the footage in Premiere so the two takes would line up. However, changing one section of a VR video throws the rest of the alignment off, making it impossible to get a perfect match unless the original footage is correct, or unless you resort to heavy frame-by-frame retouching like the “911” call in Avatar, which is expensive, time-consuming, and not ideal.
On second thought, I decided to photoshop only one frame of the night footage and lay it over the day footage, instead of using a whole video clip, which is extremely hard to manipulate frame by frame by hand.
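The idea of blending toward a single retouched still instead of matching a whole clip can be sketched like this (a toy numpy crossfade under assumed frame shapes, not my actual Premiere timeline):

```python
import numpy as np

def crossfade_to_still(day_frames, night_still):
    """Blend a sequence of day frames toward one static night frame.
    Only the single still has to be hand-aligned in Photoshop,
    not every frame of a night clip."""
    n = len(day_frames)
    return [(1 - i / (n - 1)) * f + (i / (n - 1)) * night_still
            for i, f in enumerate(day_frames)]

# Toy demo: five gray "day" frames fading into a black "night" still
day = [np.full((1, 1, 3), 100.0) for _ in range(5)]
night = np.zeros((1, 1, 3))
faded = crossfade_to_still(day, night)
```

Because the still never moves, any misalignment is frozen into one image that can be fixed once by hand, rather than drifting across hundreds of frames.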
Aside from all the technical difficulties with video editing itself, the biggest problem I had with this project was handling huge data files. The 8K 30fps footage from the Insta360 Pro is a completely different beast from the 2K files I had been experimenting with. The IMA studio has decent computers (32 GB of RAM, a 1080 GPU), yet the 8K footage was too demanding for them to even function normally. Production went a lot more smoothly after downscaling from 8K to 4K, though that conversion itself took a few hours. I believe that once the hardware catches up, VR production will be much easier, because it is a very processing-intensive form of media.
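Part of why the downscale helps so much is simple arithmetic: halving each dimension quarters the pixel count the editor has to push around. A toy numpy version of that reduction (averaging 2x2 blocks; the real Premiere/transcoder pipeline uses fancier filtering):

```python
import numpy as np

def downscale_half(frame):
    """Halve width and height by averaging each 2x2 pixel block.
    An 8K (7680x4320) frame becomes 4K (3840x2160): 1/4 the pixels."""
    h, w, c = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

# Stand-in for a real frame (tiny, so it runs instantly)
frame_8k_ish = np.random.rand(8, 8, 3)
frame_4k_ish = downscale_half(frame_8k_ish)
```

Going 8K to 4K at the same frame rate cuts the raw per-frame data to a quarter, which is roughly why the studio machines went from unusable to workable.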
video encoding constantly freezing
As for demoing, I was not able to show the piece during the IMA show because I had other projects to demo there. But I did show the video to many of my friends during the production phase to get suggestions and feedback. Most people seemed interested in VR and its possibilities, and I believe the Hyperreal theme I chose is a perfect fit for the VR platform. No other medium offers the same immersion and impact on the viewer, and VR combined with spatialized audio is the best way to convince the human body to truly engage with what it is watching.
creating a hyperrealistic world
I am very grateful that I had the chance to formally learn the theories of VR and AR in an academic setting, since I learned most of my audio and video knowledge on my own. This class was a great way to get started in VR production, but more importantly, it got me thinking with a VR mindset. Before formally learning the VR/AR terminology and theory, I had a rough idea of how the technology works, by aligning the left and right eyes with their corresponding difference in position depending on distance; now I can use the correct terms, such as disparity and parallax.

One exchange I had with Professor Naimark changed the way I think about VR. When I proposed using machine learning to replace humans with solid silhouettes, I assumed the result would look correct as long as the left- and right-eye positions were correctly matched. The response really surprised me: even if the silhouettes are spatially correct, each one would still be a flat figure in the correct spot. An obvious observation in hindsight, yet abstract as a concept. This is why actually trying VR production in this class is such a good way to explore and learn how things work in the world of virtual reality. I will definitely keep thinking about human perception with the concepts I learned in this class, and I am sure these theories will benefit the projects I work on in the future.
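The disparity idea can be made concrete with the standard stereo formula d = f * B / Z (this is textbook stereo geometry, not from the class materials, and the camera numbers below are made up):

```python
def disparity_px(baseline_m, focal_px, depth_m):
    """Stereo disparity in pixels: d = f * B / Z.
    Distant points (large Z) have near-zero disparity, which is why
    the sky needs no per-eye offset. Conversely, a nearby object gets
    a large, correct disparity even if it is rendered as a flat
    silhouette, and it will still read as a flat cutout, which was
    exactly Professor Naimark's point."""
    return focal_px * baseline_m / depth_m

# Human-ish interocular baseline (~6.5 cm), hypothetical 1000 px focal
for z in (0.5, 2.0, 10.0, 100.0):
    print(f"depth {z:6.1f} m -> disparity {disparity_px(0.065, 1000, z):7.3f} px")
```

Correct disparity places a surface at the right depth, but it says nothing about the shape of that surface, so a silhouette is a correctly positioned flat card.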