VR/AR Final Project Documentation – Jonghyun Jee

Title: D.R.E.A.M. 

Subtitle: Data Rules Everything Around Me

Description: D.R.E.A.M. brings you into the middle of Shanghai, where neck-craning skyscrapers and bustling passers-by surround you. Everything seems like just another ordinary day in Shanghai, and you might wonder what's so special about this cityscape. As time goes by, you start to notice something strange about your surroundings. When you look closely, you can spot a number of buildings (including the Oriental Pearl Tower) gradually disintegrating. The background noise sounds a bit different now, as footsteps and voices become slightly distorted. The sky peels away to reveal the black screen beneath, and the people around you begin to disintegrate too. Everything you see now is, quite literally, data.

Location: Lujiazui Skywalk, Shanghai

Goal: Our team tried to make maximum use of glitch effects in a VR experience, in terms of not only the visuals but the audio, too. As its title indicates, our video touches on the idea that we are surrounded by data, or by anything that can be reduced to data. We were partly influenced by The Matrix trilogy and by the thought experiment known as the "Brain in a Vat," in which I might actually be a brain hooked up to a complex computer system that can flawlessly simulate experiences of the outside world. We hope our audience experiences an unreality so realistic that they feel as though they are in a simulation within the simulation. The title is also a subtle, intentional pun, because what rules everything around me is, at least while I am wearing a VR headset, nothing but data.

Filming: Amy, Ryan, and I went to the skywalk twice to shoot 360-degree video. For the first shoot, the weather was fine but a little too bright, so there was a slight lens flare in the footage. We took video in two spots: one in a less crowded, park-like area and the other in the middle of the skywalk. Since we picked a late Tuesday morning, there were fewer people than usual, and the whole process was quite smooth. The Insta360 Pro 2 camera has a straightforward user interface. For some reason, the footage we took in the middle of the skywalk later turned out to be corrupted, which was a bit of a downer. The other footage looked nice, except for the tip of the camera's antenna peeking in at the very top. For the second shoot, we made no mistakes and got full footage of both spots. The weather was quite cloudy, and therefore more suitable, and luckily nobody disturbed our shooting. A few people showed interest, but most passers-by just continued on their way.

Post-Production: The first thing we did after the shoots was stitch the video files together. We watched our videos on the Oculus Go and checked how they looked in stereoscopic 3D. The videos were highly immersive without any additional editing; they were already quite powerful. We decided to use the second video we took on the skywalk, as it had a greater variety of elements that could be manipulated. Since we only used Adobe Premiere Pro for the visual part, the effects and tools we could utilize were somewhat limited: unless labelled "VR," most effects were inapplicable to our files, since they only work on flat footage. At first, we added a glitch effect to the entire video just to see how it looked. With the effect applied everywhere, the features of the stereo 3D image were hardly visible. So we traced a mask layer around each building and confined the effect to those areas; a bit of manual labor, but totally worth it. We modified the parameters separately for each building so that their glitch effects would look distinct from one another, to avoid seeming too uniform. Later in the video, the glitch effects become contagious: the sky and the people also become distorted and gradually disintegrate into particles. The room for improvement, I think, is to make more use of a man who looks the audience straight in the face. In the later part of the video, a guy stands in front of the camera and takes a video of it. We tried to add more effects to him, but realized we would need After Effects to visualize what we wanted (to make him backflip continuously, for instance). Many people who watched our demo gave us the feedback that the video would be more interesting with spatial audio. I tried to add spatial audio multiple times but could not reach a satisfactory result: I positioned sci-fi sound effects on each building as audio cues hinting where to look, yet the spatial mix sounded no different from the plain stereo one. Given more time, we would definitely work more on the spatial audio so we could direct our audience toward what we want them to see. Overall, we successfully visualized what we envisioned, but could not maximize the possibilities of the audio.
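
For anyone retrying the spatial audio step, the underlying idea is simple: each mono sound effect gets panned to the direction of its building. Here is a minimal first-order ambisonics sketch in Python, assuming NumPy, the traditional FuMa W/X/Y/Z channel weights, and made-up source directions; this is not our exact workstation setup:

```python
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg=0.0):
    """Pan a mono signal to a direction as first-order ambisonics (FuMa W/X/Y/Z).

    Azimuth is counterclockwise from straight ahead; elevation is up
    from the horizon.
    """
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono / np.sqrt(2.0)               # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)    # front/back
    y = mono * np.sin(az) * np.cos(el)    # left/right
    z = mono * np.sin(el)                 # up/down
    return np.stack([w, x, y, z])

# e.g. a sci-fi drone pinned to a building 90 degrees to the left and
# slightly above the horizon (angles made up for illustration)
drone = np.random.randn(48000) * 0.1      # placeholder for a real sound effect
bformat = encode_foa(drone, azimuth_deg=90, elevation_deg=15)
```

Rotating the listener's head then amounts to rotating the X/Y/Z channels before decoding, which is what makes a cue feel anchored to the building rather than to the headphones.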

Reflection: Throughout the semester, I definitely enjoyed learning both the theoretical and the practical side of immersive media. The new concepts and terms were a little confusing at first; yet they became clearer and clearer as we began working on our own project. During the show, it felt great to have our efforts noticed when we saw so many "wow" faces on our testers. What I learned most during the course is to think in a 3D way; all of my filming and video editing skills were limited to the 2D flat screen, and so was my visual imagination. This course added a new dimension to the canvas of my mind. Now I have a better understanding of how VR/AR actually works, and I can appreciate the deep consideration and hard effort behind the scenes of virtual reality.

VR project documentation (partners: Kat and Ben)

Topic: an immersive marriage market experience in 1.5 minutes.

VR production

Shooting: The camera is easy to get the hang of, though we later found a problem with the ambient sound recording; it could be an issue with the filming settings (or with stitching/editing/rendering). Both shoots ended with objections from people at the marriage market. That was expected, and we managed to get quality footage out of the two shoots without permission (technically). As the main person responsible for communication and the only team member who speaks Chinese, I conclude that negotiating before shooting may not be necessary, since the people in the marriage market are reluctant to cooperate. A cat-and-mouse game may be the better choice in this scenario (in China). Courage is also strongly needed: you have to take out the weird-looking camera, set it up in front of the crowd, and start filming as fast as possible before people shut you down.

Post-production:

  1. We managed to shoot in 2 different places and from 2 different points of view (third person and first person). We worked out a narrative based on that: at first, the viewer's profile is exhibited in the market and scrutinized by passers-by, and the viewer can watch their comments. Then the viewer comes to the market in person and is examined by the buyers. We settled on this narrative after viewing the video several times and discussing it.
  2. Cutting the clips down from 20 minutes to 1.5 minutes also tortured us. Eventually, we got bored watching it again and again, and the best parts stood out.
  3. Adding the subtitles took me a long time: since the video is 3D, the disparity of the subtitles has to match the disparity of the video. I had to check the anaglyph to gauge the scale of the disparity (see the sketch after this list).
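
To make the disparity idea concrete: a subtitle drawn at identical pixel positions in both eyes sits at the screen plane, while shifting the two copies horizontally in opposite directions pulls it forward. A minimal Python sketch with Pillow, assuming a top-bottom stereo frame; the offsets, file name, and sample line are illustrative, not our exact Premiere workflow:

```python
from PIL import Image, ImageDraw

def burn_stereo_subtitle(frame, text, disparity_px=8):
    """Draw one subtitle into both halves of a top-bottom stereo frame.

    A positive disparity shifts the left-eye copy right and the
    right-eye copy left, so the text floats in front of the screen
    plane instead of cutting into the scene behind it.
    """
    w, h = frame.size
    draw = ImageDraw.Draw(frame)
    x, y = w // 3, h // 2 - 80   # rough spot near the bottom of each eye's view
    draw.text((x + disparity_px, y), text, fill="white")           # left eye: top half
    draw.text((x - disparity_px, y + h // 2), text, fill="white")  # right eye: bottom half
    return frame

frame = Image.open("market_frame.png")   # hypothetical exported frame
burn_stereo_subtitle(frame, "A sample comment from a passer-by")
```

If the subtitle's disparity is smaller than that of the scene behind it, the text appears to sink into the background, which is exactly the artifact that checking the anaglyph catches.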

Demo experience

During the IMA show, almost everybody who watched the video enjoyed it (or found it disturbing, as our video conveys the upsetting feeling one would have in the marriage market). It's important to inform viewers of the video's background before putting the headset on their heads, so that they can grasp what happens in such a short period of time. Moreover, as the VR headset is itself a selling point, it is important to make the video stand out on its own, not relying on the medium to impress people. Perhaps the video could be more intriguing if it were interactive (although that requires time for the user to learn the controller).

Final Reflection by Ryan

It has been quite a short semester. I have really enjoyed the VR class; I wish I could take this course every semester and make my own VR videos every year. The experience of producing a VR video is amazing. The feeling is totally different from making a normal video, and I get a greater sense of achievement. Every time I load my video onto the Oculus Go and check how it works, I am amazed at what I have done, and at how it has turned into something real enough that I feel I am inside it. I am very thankful that I chose to take this course, with such an amazing professor, Naimark, and learning assistant, Dave, and all my classmates, especially Amy and John, my teammates on the final project.

For the production part, shooting comes first. Taking the footage did not cause many problems; people passing by hardly paid attention to the camera during the shoot. The only problem we met, from my perspective, was the antenna: for the first footage, I bent the antenna so low that the camera caught it in the frame, and we could not remove it in Premiere, which was a relatively big problem. One point worth mentioning is the settings in the Insta360 Pro 2 phone app: we need to select 360 3D, or the footage we take cannot be stitched correctly. There were also problems exporting footage from the SD cards; even though we did not run into corrupted footage ourselves, some of the other teams did, which was upsetting. We also need to check the settings in the stitcher, so that the stitched video comes out in two parts, with the left-eye view placed on top and the right-eye view on the bottom (sketched below).
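
For clarity, this is the top-bottom (over/under) layout: one equirectangular image per eye, stacked in a single frame. A minimal Python sketch of splitting such a frame back into its two views, assuming NumPy and imageio and an illustrative file name:

```python
import numpy as np
import imageio.v3 as iio

# One stitched equirectangular frame in top-bottom stereo layout
# (file name is illustrative).
frame = iio.imread("stitched_frame.png")

h = frame.shape[0]
left_eye = frame[: h // 2]     # top half    -> left eye
right_eye = frame[h // 2 :]    # bottom half -> right eye
assert left_eye.shape == right_eye.shape
```

The player also has to be told this layout when loading the file; if it treats the video as mono, each eye sees both stacked views and the stereo effect collapses.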

For the Premiere part, John and I worked for days on the final effect. I found out that if we want to apply effects in Premiere that are not included in the immersive video folder, we need to put each effect on a separate layer over the video. If we stack another effect together with the Plane to Sphere effect, the video gets shrunken and distorted, which troubled us for several hours. I then tried applying Posterize and other effects, and these worked well, except that a bar appeared in the middle of the video saying the effect needs GPU acceleration. That problem is still unsolved; I have checked the GPU acceleration rendering option in the settings, so I suspect it might be a GPU problem, or that I simply applied too many of these effects to the video. Since our theme is data, we wanted an effect similar to the movie The Matrix, and the VR Glitch effect fits our idea perfectly. Then I played with keyframes to animate the masks: we drew masks over all the buildings to make the effects hit certain buildings in sequence. Since the video only lasts one minute, each mask needs to include several buildings at the same time, or there would not be enough time for the buildings to turn glitchy one by one. The hardest mask to draw was the sky: we had to trace the outline of all the buildings, which took a lot of time, and it is very hard to get a detailed outline of them all. Thanks to the camera staying in one spot, we only needed one mask for the sky, since it does not move. After finishing the glitch effect, we tried changing the HSB of the video and found a color combination of purple and green; with all the buildings turned green and combined with the glitch effect, everything looked much like a scene from The Matrix, or an apocalyptic scene. Playing with keyframes and all kinds of effects always brings me surprising results, and VR video is no exception: after applying all the effects and loading the video into the headset, I was amazed at myself and felt proud. But we still needed sound, which would make things even better. In the end, the spatial audio workstation did not work very well, so we failed to make spatial audio for the video and instead did some sound correction and manipulation. That part was mainly done by John, and I appreciate him a lot.
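
A plausible explanation for why non-VR effects shrink and distort the footage: every pixel of an equirectangular frame really stands for a direction on a sphere, so an effect designed for a flat image gets warped once the frame is projected into the headset, worst of all near the poles. A small Python sketch of the standard pixel-to-direction mapping (this is the general equirectangular convention, not anything Premiere exposes):

```python
import numpy as np

def equirect_pixel_to_direction(u, v, width, height):
    """Map a pixel (u, v) of an equirectangular image to a unit 3D direction.

    u runs across 360 degrees of longitude; v runs from +90 (top)
    to -90 (bottom) degrees of latitude.
    """
    lon = (u / width - 0.5) * 2.0 * np.pi    # -pi .. +pi
    lat = (0.5 - v / height) * np.pi         # +pi/2 .. -pi/2
    return np.array([
        np.cos(lat) * np.sin(lon),           # x: right
        np.sin(lat),                         # y: up
        np.cos(lat) * np.cos(lon),           # z: forward
    ])

# Every pixel of the top row maps to the same point (the zenith), which is
# why a uniform 2D effect looks stretched or shrunken near the poles.
print(equirect_pixel_to_direction(0, 0, 3840, 1920))
```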

For the demoing part, since I had another project to show, I only spent a small amount of time on the demo. Still, I tried my best to show the video to as many people as possible and received a lot of good feedback. Everyone I showed it to was amazed and enjoyed the video. I am very proud of myself every time they praise it; it is the sense of achievement a video's producer gets from receiving positive feedback face-to-face from viewers. Even though some viewers had trouble watching the video because of the controller and the interface of the Oculus Go, everyone enjoyed it very much.

In conclusion, I want to say that everything ended so fast. I really wanted to enjoy more of it; it is a pity that I could only make one video, as I would have tried more. I could make something better, something that satisfies me more, but still, what we have made is quite a success. Next time I make a VR video, I will definitely do better, since I now have the experience and will not make the same mistakes. I will also try new things beyond just adding effects. I really wanted to add interaction to the video, but we did not have enough time for that this round. I will definitely explore the use of Unity for the interactive part of VR videos; that will bring more fun and possibilities. I want something that is not just an experience, but something to interact with. I love this course, for all we have achieved, and for all the amazing faculty and classmates.

VR Production and Demo Experience – Alex Wang

The VR production project is a great way to actually get involved with hands-on production techniques and put the theories we learned in class into practice. I had a great time working on VR production and also learned a lot of VR-related knowledge along the way. Since I have a certain level of proficiency with video editing in Premiere, I thought the post-production phase of this project would be fairly simple. However, I ended up spending the whole weekend accomplishing what I had in mind. I learned that when working with new forms of media such as VR, there are not as many resources at our disposal, since it is still a developing area. Not only is there a lack of support across video applications, but VR-specific technology such as the Insta360 camera we used in class can have many problems, since the algorithms for VR are still being perfected. The production process itself was also much more complicated than I thought it would be: I had to do many effects by hand just so the left and right eye would show the same effect. While it is very simple to fool the eyes in a traditional video, making post-production effects align across both eyes is another story. I ended up manually masking the sky in both eyes to achieve a successful special effect, while also creating a separate mask for the ground to make the colors look natural, as opposed to just having filters over the lens. If I get the chance to do this project again, I will definitely film the day and night footage in one shooting session, without moving the camera.

When we first got access to the 2K files from previous classes, I experimented in Premiere with what I could and could not do:

adding greenscreen effects and avatars

Another problem I had during production was aligning the day and night footage, which was filmed at the same spot but at slightly different angles. I had a feeling this would not work well, based on the degrees-of-freedom talk we had in class about how extremely difficult it is to find the exact same pan/tilt angle as a previous shot. Yet through some post-production experiments, it seems I can manipulate the footage to some degree to improve it.

One attempt I made was to adjust the footage in Premiere so that the two shots would line up; however, changing one section of the VR video throws the rest of the alignment off, making a perfect alignment impossible unless the original footage is correct, or unless you do heavy Photoshop editing like the "911" call of Avatar, which is expensive, time-consuming, and not ideal. (Only one component of the misalignment is cheap to fix, as sketched below.)

alignment attempt
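
Worth noting: in an equirectangular frame, a pure yaw (heading) difference between two shots is just a horizontal shift with wraparound, which is cheap to correct; pitch and roll differences warp the whole image, which is what makes full alignment so hard. A minimal Python sketch of the yaw fix, with illustrative file names and an assumed 12-degree offset:

```python
import numpy as np
import imageio.v3 as iio

day = iio.imread("day_equirect.png")      # illustrative file names
night = iio.imread("night_equirect.png")

# A pure yaw difference of `offset_deg` between the two shots shows up
# as a circular horizontal shift of the equirectangular image.
offset_deg = 12.0
shift_px = int(round(night.shape[1] * offset_deg / 360.0))
night_aligned = np.roll(night, shift_px, axis=1)  # wraps around the seam
```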

On second thought, I decided to photoshop only one frame of the night footage and lay it over the day footage, instead of using a whole video, which is extremely hard to manipulate frame by frame by hand.

showing day and night footage

Aside from all the technical difficulties with video editing skill, the biggest problem I had with this project was handling huge data files. The 8K 30fps footage from the 360 Pro is a completely different thing from the 2K files I had been experimenting with. The IMA studio has decent computers (32 GB RAM, 1080 GPU), yet the 8K footage was too much for them to even function normally. However, production was a lot smoother after downgrading from 8K to 4K, though the downgrade itself took a few hours (a sketch of that step follows). I believe that if the hardware catches up, we will have a much easier time with VR production, because it is a very processing-power-intensive form of media.

video encoding constantly freezing
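
For reference, this kind of downgrade can also be scripted with ffmpeg rather than re-exported from an editor. A minimal Python sketch; the file names and quality setting are illustrative assumptions, not the exact studio workflow:

```python
import subprocess

# Halve both dimensions of the stitched 8K file. For top-bottom stereo
# this preserves the layout, since both eye views scale together.
subprocess.run([
    "ffmpeg", "-i", "skywalk_8k.mp4",
    "-vf", "scale=iw/2:ih/2",
    "-c:v", "libx264", "-crf", "18",   # quality setting is illustrative
    "-c:a", "copy",
    "skywalk_4k.mp4",
], check=True)
```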

As for demoing, I was not able to do so during the IMA show because I had other projects to demo there. But I did demo the video to many of my friends during the production phase to get suggestions and feedback. It seemed like most people are interested in VR and its possibilities, and I believe that the hyperreal theme I decided to go with is a perfect fit for the VR platform. Since no other form of media has the same amount of immersiveness and impact on humans, VR combined with spatialized audio is the best way to convince the human body to really engage with what it is watching.

creating a hyperrealistic world

glitch effect and audio

I am very grateful that I had the chance to formally learn the theories of VR and AR in an academic setting, since I learned most of my audio and video knowledge on my own. I think this class is a great way to get started in VR production, but most importantly it got me thinking with a VR mindset. Before formally learning the VR/AR terminology and theory, I had a rough idea of how the technology works, aligning the left and right eyes with a corresponding difference in position depending on distance from the viewer, but now I can use the correct terms, such as disparity and parallax. One exchange I had with Professor Naimark changed the way I think about VR: when I proposed the idea of using machine learning to replace humans with solid silhouettes, I thought the VR would look correct as long as the left- and right-eye positions were correctly matched. However, the response I got really surprised me: I was told that even if a silhouette looks spatially correct, it will still be a flat figure in the correct spot. A very obvious observation once you think about it, yet so abstract as a concept. This is why actually getting to try VR production in this class is such a great way for us to explore and learn how things work in the world of virtual reality. I will definitely continue to think about human perception using the concepts I learned in this class, and I am sure these theories will benefit the projects I work on in the future.