VR Production Experience by Aibike Begali

Being the first IMA class I have ever taken, VR/AR Fundamentals, taught by Michael Naimark with Dave Santiano assisting, gave me a solid foundation and fundamental insight into the area of Interactive Media. Working with VR more specifically taught me how to produce a stereoscopic video, and how to edit and manipulate it into an enjoyable and exciting final product.

First, by constantly keeping an eye on world updates and news in the sphere of VR/AR, it was easy to immerse myself in current trends and to understand the basics of VR/AR as a medium of expression and communication.

Second, doing our own shoots out in the streets of Shanghai with the Insta360 Pro camera provided by the university gave us a great opportunity to film and document our real-life surroundings in 360-degree video. We ran into slight obstacles with lighting and with transferring the footage to PCs, due to the large file sizes and the importance of proper stitching.

Third, the process of post-production required some in-depth understanding of how to manipulate stereoscopic video and spatial audio; however, with Dave's extensive help, all the iterations and work in After Effects and Premiere Pro were remarkably doable and enjoyable. It made me think about how accessible VR projects are becoming to amateur creators and videographers as the user interfaces of such video-editing software improve.

The IMA Show on December 13 showed us how powerful the VR medium is at teleporting people to designated locations and sharing the experience and the message we aimed to convey.

VR/AR Final Project Documentation – Jonghyun Jee

Title: D.R.E.A.M. 

Subtitle: Data Rules Everything Around Me

Description: D.R.E.A.M. brings you into the middle of Shanghai, where neck-craning skyscrapers and bustling passers-by surround you. It seems like just another ordinary day in Shanghai, and you might wonder what’s so special about this cityscape. As time goes by, you notice there is something strange about your surroundings. When you look closely, you can spot a number of buildings (including the Oriental Pearl Tower) gradually disintegrating. The background noise sounds a bit different now, as footsteps and voices get slightly distorted. The sky peels off and the black screen beneath reveals itself; all the people around you begin to disintegrate too. Everything you see now is, quite literally, data.

Location: Lujiazui Skywalk, Shanghai

Goal: Our team tried to make maximum use of the glitch effect in a VR experience, not only visually but in the audio as well. As its title indicates, our video touches on the idea that we are surrounded by data, or by anything that can be reduced to data. We were partly influenced by The Matrix trilogy and by the thought experiment known as the “Brain in a Vat,” in which I might actually be a brain hooked up to a complex computer system that can flawlessly simulate experiences of the outside world. We hoped our audience would experience an unreality so realistic that they feel as though they are in a simulation within the simulation. The title is also a subtle, intended pun, because what rules everything around me, at least while I am wearing a VR headset, is nothing but data.

Filming: Amy, Ryan, and I went to the skywalk twice to shoot 360-degree video. For the first shoot, the weather was fine but a little too sunny, so there was slight lens flare in the footage. We took video in two spots: one in a less crowded, park-like area and the other in the middle of the skywalk. Since we picked a late-morning time on a Tuesday, there were fewer people than usual, and the whole process was quite smooth. The Insta360 Pro 2 camera came with a straightforward user interface. For some reason, the footage we took in the middle of the skywalk later turned out to be corrupted, which was a bit of a downer. The other footage looked nice, but at the very top of it was the tip of the camera's antenna peeking into the frame. For the second shoot, we made no mistakes and got full footage of both spots. The weather was quite cloudy, which was more suitable, and luckily nobody disturbed our shooting. A few people showed interest, but most passers-by just continued on their way.

Post-Production: The first thing we did after the shoots was to stitch the video files together. We watched our videos on the Oculus Go and checked how they looked in 3-D. The videos were highly immersive without any additional editing; they were already quite powerful. We decided to use the second video we took on the skywalk, as it had a greater variety of elements that could be manipulated. Since we only used Adobe Premiere Pro for the visual part, the effects and tools we could utilize were somewhat limited: unless labelled “VR,” most effects were inapplicable to our files, since they only work on flat images. At first, we added a glitch effect to the entire video just to see how it looked. When the effect was applied to the whole frame, the features of the 3-D stereo image were hardly visible. So we traced a mask layer around each building and confined the effect to that area, a bit of manual labor but totally worth it. We modified the parameters separately for each building so that the glitch effects would look distinct from one another rather than all identical. Later in the video, the glitch becomes contagious: the sky and the people also get distorted and gradually disintegrate into particles. One area for improvement, I think, would be to make more use of a man who looks the audience straight in the face. In the later part of the video, there is a guy who stands in front of the camera and films it. We tried to add more effects to him, but realized we would need After Effects to achieve what we wanted (making him continuously backflip, for instance). Many people, after watching our demo, gave us the feedback that our video would be more interesting with spatial audio.
I tried to add spatial audio multiple times but could not reach a satisfactory result. Sci-fi sound effects were positioned on each building as audio indicators hinting where to look; however, there was no significant difference between the spatial-audio version and the plain stereo one. Given more time, we would definitely work more on the spatial audio so that we could direct our audience's gaze to where we want it. Overall, we successfully visualized what we envisioned, but we could not maximize the possibilities of the audio.
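For anyone curious what “positioning a sound on a building” amounts to in its simplest form, below is a minimal sketch of constant-power amplitude panning in Python with NumPy. This is only an illustration of the general idea, not what we actually did in our audio workstation; the function name and the 440 Hz test tone are invented for the example.

```python
import numpy as np

def pan_mono_to_stereo(mono, azimuth_deg):
    """Constant-power pan of a mono signal toward an azimuth.

    azimuth_deg: -90 (hard left) .. 0 (center) .. +90 (hard right).
    Returns an (N, 2) stereo array.
    """
    # Map azimuth to a pan angle in [0, pi/2], so that
    # left_gain**2 + right_gain**2 == 1 at every position.
    theta = (azimuth_deg + 90.0) / 180.0 * (np.pi / 2.0)
    left_gain = np.cos(theta)
    right_gain = np.sin(theta)
    return np.stack([mono * left_gain, mono * right_gain], axis=1)

# A 0.5 s, 440 Hz tone "placed" 45 degrees to the right.
sr = 44100
t = np.arange(int(0.5 * sr)) / sr
tone = 0.2 * np.sin(2 * np.pi * 440 * t)
stereo = pan_mono_to_stereo(tone, 45.0)
```

A head-tracked spatial audio workstation does much more than this (it re-pans continuously as the viewer turns), but this is the basic cue: the ear nearer the source gets the louder copy.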

Reflection: Throughout the semester, I definitely enjoyed learning both theoretical and practical knowledge of immersive media. The new concepts and terms we learned were a little confusing at first, and yet they became clearer and clearer as we began to work on our own project. During the show, it felt great to be recognized for our efforts when we saw so many “wow” faces on our testers. The biggest thing I learned in this course is to think in 3-D; all of my filming and video-editing skills had been limited to the 2-D flat screen, and so had my visual imagination. This course helped me add a new dimension to the canvas of my mind. Now I have a better understanding of how VR/AR actually works, and I can appreciate the deep consideration and hard effort behind the scenes of virtual reality.

VR project documentation (partners: Kat, Ben)

Topic: a 1.5-minute immersive experience of the marriage market.

VR production

Shooting: The camera was easy to get the hang of, although we later found a problem with the ambient sound recording; it could have been the filming settings (or the stitching/editing/rendering). Both shoots ended with objections from people at the marriage market. This was expected, and we managed to get quality footage out of the two shoots without (technically) getting permission. As the main person responsible for communication and the only team member who speaks Chinese, I conclude that negotiating before shooting may not be necessary, since the people at the marriage market are reluctant to cooperate. A cat-and-mouse game may be the better choice in this scenario (in China). Also, courage is strongly needed to take out the weird-looking camera, set it up in front of the crowd, and start filming as soon as possible before people shut us down.

Post-production:

  1. We managed to shoot in two different places and from two different points of view (third person and first person). We worked out a narrative based on them: at first, the viewer's profile is exhibited in the market and scrutinized by passers-by, and the viewer can watch the passers-by's comments. Then the viewer comes to the market in person and is examined by the buyers. The decision to use this narrative came out of our discussions after viewing the footage several times.
  2. Cutting the clips down from 20 minutes to 1.5 minutes also tortured us. Eventually, we got bored of watching the footage again and again, and the best parts stood out.
  3. Adding subtitles took me a long time: since the video is 3D, the disparity of the subtitles has to match that of the video. I had to check the anaglyph view to gauge the scale of the disparity.
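To make the disparity point in step 3 concrete: it boils down to burning each eye's copy of the subtitle into the frame at slightly different horizontal positions, which sets the text's apparent depth. Here is a rough sketch in Python with NumPy of what that means for a top-bottom stereo frame; the function, dimensions, and white-box "subtitle" are invented for illustration, since the actual work was done in a video editor, not in code.

```python
import numpy as np

def burn_in_stereo_subtitle(frame, sub, x, y, disparity):
    """Burn a subtitle bitmap into a top-bottom (left-over-right) stereo frame.

    frame: (2H, W, 3) array; top half = left eye, bottom half = right eye.
    sub:   (h, w, 3) subtitle bitmap.
    disparity: horizontal offset in pixels between the two eyes' copies.
    """
    H = frame.shape[0] // 2
    h, w = sub.shape[:2]
    half = disparity // 2
    # Left-eye copy shifted right, right-eye copy shifted left
    # (crossed disparity, so the text floats in front of the screen plane).
    frame[y:y + h, x + half:x + half + w] = sub            # left eye (top)
    frame[H + y:H + y + h, x - half:x - half + w] = sub    # right eye (bottom)
    return frame

# Hypothetical 100x200-per-eye frame with a 10x50 white subtitle block.
frame = np.zeros((200, 200, 3), dtype=np.uint8)
sub = np.full((10, 50, 3), 255, dtype=np.uint8)
frame = burn_in_stereo_subtitle(frame, sub, x=70, y=80, disparity=8)
```

If the subtitle's disparity does not match the scene behind it, the text appears to float at the wrong depth and becomes uncomfortable to read, which is exactly what checking the anaglyph view guards against.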

Demo experience

During the IMA show, almost everybody who watched the video enjoyed it (or found it disturbing, since our video conveys the upsetting feeling one would have in the marriage market). It is important to brief the viewer on the background of the video before putting the headset on their head, so that they know what is happening in such a short period of time. Moreover, since the VR headset is itself a selling point, it is important that the video stand out on its own rather than relying on the medium to impress people. Perhaps the video could be more intriguing if it were interactive (although that would require time for the user to learn the controller).

Final Reflection by Ryan

It has been quite a short semester. I have really enjoyed the VR class; I wish I could take this course every semester and make my own VR video every year. The experience of producing a VR video is amazing, and the feeling is totally different from making a normal video; I get a greater sense of achievement. Every time I load my video into the Oculus Go and check how it works, I am amazed at what I have done, and at how it has turned into something real enough that I feel I am inside it. I am very thankful for my choice to take this course, and for having such an amazing professor in Naimark and learning assistant in Dave, and for all my classmates, especially Amy and John, my teammates on the final project.

For the production part, shooting comes first. The process of taking footage did not have many problems; people passing by hardly paid attention to the camera during the shooting. The only problem we met, from my perspective, was the antenna. For the first footage, I bent the antenna so low that the camera captured it in the frame, and we could not remove it in Premiere Pro, which was a relatively big problem. One point worth mentioning is the setting in the Insta360 Pro 2 phone app: we needed to select 360 3D, or the footage could not be stitched correctly. There were also problems exporting footage from the SD cards; although we did not run into footage corruption ourselves, some other teams did, which was upsetting. We also had to check the settings in the stitcher so that the stitched video would come out in two parts, with the left-eye image placed on top and the right-eye image on the bottom.
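The top/bottom layout the stitcher produces is easy to picture in code. As a rough sketch (not part of our actual pipeline), assuming the left-over-right output described above, splitting a frame into the two eyes' views in Python with NumPy looks like this:

```python
import numpy as np

def split_top_bottom(frame):
    """Split a top-bottom stereo frame into (left_eye, right_eye).

    The stitcher places the left-eye image on top and the right-eye
    image on the bottom, so an over/under frame splits into two equal
    equirectangular views, one per eye.
    """
    assert frame.shape[0] % 2 == 0, "height must be even for top-bottom stereo"
    H = frame.shape[0] // 2
    return frame[:H], frame[H:]

# Tiny stand-in for a stitched top-bottom video frame.
frame = np.zeros((8, 16, 3), dtype=np.uint8)
frame[:4] = 10   # left eye
frame[4:] = 20   # right eye
left, right = split_top_bottom(frame)
```

A headset player does this split (plus equirectangular projection) for every frame, which is why getting the stitcher's layout setting wrong makes the stereo come out scrambled.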

For the Premiere Pro part, John and I worked for days on the final effect. I found that if we wanted to apply effects not included in the immersive video folder, we needed to apply them on a separate layer. If we stacked another effect on top of the Plane to Sphere effect, the video shrank and distorted, which troubled us for several hours. Then I tried applying Posterize and other effects; these worked well, except that a bar appeared in the middle of the video saying the effect needed GPU acceleration. That problem is still not solved; I have checked the GPU-accelerated rendering option in the settings, so I suspect it might be a GPU problem, or perhaps I simply applied too many effects to the video. Since our theme is data, we wanted an effect similar to The Matrix, and the VR Glitch effect fit our idea perfectly. Then I played with keyframes to animate the masks. We drew masks around all the buildings so that the effects would only apply to certain buildings in sequence; since the video is only one minute long, each mask had to cover several buildings at once, or there would not have been enough time for the buildings to become glitchy one by one. The hardest part was the mask for the sky: we had to trace the outline of all the buildings, which took a lot of time and was very hard to do in detail. Thanks to the camera staying in one spot, we only needed a single mask for the sky, since it does not move.
After finishing the glitch effect, we tried changing the HSB of the video and found a color combination of purple and green; with all the buildings turned green and combined with the glitch effect, everything looked very similar to a scene from The Matrix, or to an apocalyptic scene. Playing with keyframes and all kinds of effects always brings me surprising results, and the same is true in VR video: after applying all the effects and loading the video into the headset, I was amazed by my own work and felt so proud. But we still needed sound to make things even better. In the end, the spatial audio workstation did not work very well, so we failed to make spatial audio for the video and instead just did some sound correction and manipulation. This part was mainly done by John, and I appreciate him a lot.

As for the demo, since I had another project to show, I only spent a small amount of time demoing this one. Still, I tried my best to show the video to as many people as possible and received a lot of good feedback. Everyone I showed it to was amazed and enjoyed the video. I was proud of myself every time they praised it; it is a real sense of achievement for the producer of a video to receive positive feedback face-to-face from viewers. Even though some viewers had trouble watching the video because of the controller and the Oculus Go interface, everyone enjoyed it very much.

In conclusion, I want to say that everything just ended so fast. I really wanted to enjoy more of it; it is a pity that I could only make one video, because I would have tried more and could have made something that satisfies me more. Still, what we made is quite a success. Next time I make a VR video, I will definitely do better, since I now have the experience and will not make the same mistakes. I will also try new things beyond just adding effects. I really wanted to add interaction to the video, but we did not have enough time for that this round. I will definitely explore using Unity for the interactive part of VR videos; it will bring more fun and possibilities. I want to make not just an experience, but something to interact with. I love this course, for all we achieved, and for all the amazing faculty and classmates.