Part 4: 360 and VR Videos

Christiane Snyder
7/4/2016

The main difference between 360 and VR videos is that VR videos are stereoscopic: they have a depth component to the image. 360 videos, also called monoscopic 360 videos, can be captured or rendered using one spherical camera or a single camera on a rig. They are said to give a “spherical” view of a scene.


[Image: Monoscopic 360 Rendering]
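Monoscopic 360 footage is most often stored as an equirectangular image, where each pixel column maps to an azimuth and each row to an elevation (this is, for example, the layout YouTube expects for 360 uploads). As a minimal sketch of that mapping, with an illustrative function name and an assumed y-up, right-handed frame:

    import numpy as np

    def equirect_to_direction(u, v, width, height):
        """Map an equirectangular pixel (u, v) to a unit view direction.

        The full 360 x 180 degree sphere fits in one image, which is why
        a single (real or virtual) camera position is enough for mono 360.
        Frame: y-up, right-handed, -z forward (an assumed convention).
        """
        theta = (u / width) * 2.0 * np.pi - np.pi    # azimuth, [-pi, pi)
        phi = np.pi / 2.0 - (v / height) * np.pi     # elevation, [pi/2, -pi/2]
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(phi),
                         -np.cos(theta) * np.cos(phi)])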


To add depth to VR videos, they have to be captured or rendered with two cameras, one for each eye. In real life, the distance between your eyes causes a slight difference between the two images that each eye individually sees (the view from your left eye is slightly shifted from the view from your right eye). This disparity is what causes the perception of depth when you look at the world around you.


[Image: Stereoscopic 360 Rendering]
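To make the disparity concrete, here is a back-of-the-envelope sketch using a simple pinhole-camera model; the 6.4 cm baseline matches the average eye separation mentioned below, and the focal length is a hypothetical value:

    def disparity_px(depth_m, baseline_m=0.064, focal_px=1000.0):
        """Horizontal shift (in pixels) between the two views of a point.

        baseline_m: eye/camera separation (~6.4 cm on average)
        focal_px: focal length in pixels (a hypothetical camera)
        Near points shift more than far ones; this falloff with depth
        is the cue the brain converts into perceived distance.
        """
        return focal_px * baseline_m / depth_m

    # A point 1 m away shifts 64 px between the views; at 10 m, only 6.4 px.
    print(disparity_px(1.0), disparity_px(10.0))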


The projection technique for capturing video content that is both stereoscopic and spherical is referred to as omnidirectional stereo (ODS) and is specifically designed for viewing with a head-mounted display. In ODS, footage is normally filmed with a distance of about 6.4 cm (the average distance between a person’s eyes) between the left and right cameras.

There are several ways to arrange the two cameras relative to each other; the most popular are the following:

NOTE: These approaches only apply to virtual reality videos. You do not have to worry about this during VR development in Unity or Unreal, because those engines already account for this issue.

Converged

Two cameras a set distance apart, angled inward at mirrored angles so both point at the same spot in the scene. Not ideal: because the two image planes are tilted relative to each other, the views do not line up and are almost painful to look at.

Parallel

Two cameras a set distance apart, facing the same direction at the same angle. Not ideal because the views never converge, so the result is two images of laterally offset parts of the scene.

Off-Axis

Two cameras a set distance apart with mirrored, asymmetric (off-axis) frustums. This technique is commonly used in modern implementations because the two views differ only by a slight horizontal disparity, which the viewer’s eyes and brain can fuse easily without headaches or eye strain (a sketch of this frustum setup follows).
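As a minimal sketch of how an off-axis frustum can be constructed, here is an OpenGL-style glFrustum matrix built from asymmetric bounds; the function and parameter names are illustrative, and each eye additionally needs a matching sideways translation in its view matrix:

    import numpy as np

    def off_axis_projection(eye_offset, near, far, focal_dist,
                            half_width, half_height):
        """Asymmetric (off-axis) frustum for one eye.

        eye_offset: +ipd/2 for the right eye, -ipd/2 for the left eye
        focal_dist: distance to the zero-parallax plane shared by both eyes
        half_width/half_height: half-extents of that plane
        Shifting the frustum instead of rotating the camera keeps the two
        image planes parallel, so the views differ only by horizontal
        disparity.
        """
        # Shift the left/right clip planes by the eye offset, scaled
        # down from the focal plane to the near plane.
        shift = eye_offset * near / focal_dist
        left = -half_width * near / focal_dist - shift
        right = half_width * near / focal_dist - shift
        bottom = -half_height * near / focal_dist
        top = half_height * near / focal_dist
        # Standard glFrustum-style matrix from the asymmetric bounds.
        return np.array([
            [2*near/(right-left), 0, (right+left)/(right-left), 0],
            [0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0],
            [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
            [0, 0, -1, 0]])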

When you first think about it, rendering a stereoscopic and spherical view of a scene may seem straightforward: place two cameras in a scene, real or virtual, and rotate them. The main issue with that technique for real-life video capture is that while rotating, the cameras get in each other’s way and block part of the scene in the final image or video. In rendered scenes the cameras are usually invisible, so the issue is slightly different: if the two cameras are stationary, they stay the same distance from each other, but at some points one camera will be behind the other, and the two will almost always be at different distances from the part of the scene they are trying to view.

For real-life video footage, there is currently no complete solution to this problem. For rendering 3D virtual content, however, there are a few ways around it. The first option is to create your own solution, either by extending a raycaster in code or by stitching perspective images together. Alternatively, you can use one of the newly available plugins for your modeling/rendering software, such as DomeMaster3D (available for 3ds Max, Maya and Softimage). Throughout our tutorials, we’ll use DomeMaster3D because of how accessible and easy to use it is.
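Google’s developer doc linked below describes the ODS projection: every ray origin lies on a circle whose diameter equals the eye separation, and each ray leaves that circle tangentially. As a minimal sketch of how a raycaster could be extended along these lines (the frame convention, names, and resolution here are my own assumptions, not the doc’s exact formulation):

    import numpy as np

    def ods_ray(theta, phi, ipd=0.064, eye=+1):
        """Ray origin and direction for one pixel of an ODS panorama.

        theta: azimuth in [-pi, pi), phi: elevation in [-pi/2, pi/2]
        eye: -1 for the left eye, +1 for the right eye
        Frame: y-up, right-handed, -z forward (an assumed convention).
        """
        r = ipd / 2.0
        # Unit view direction on the sphere for this pixel.
        d = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(phi),
                      -np.cos(theta) * np.cos(phi)])
        # Origin sits on the viewing circle of radius IPD/2, offset
        # sideways (tangent to the view direction), so every azimuth
        # gets the correct stereo baseline without the two cameras
        # ever blocking each other.
        o = eye * r * np.array([np.cos(theta), 0.0, np.sin(theta)])
        return o, d

    # Left-eye panorama: one tangent ray per equirectangular pixel
    # (low resolution for the sketch).
    width, height = 512, 256
    for v in range(height):
        for u in range(width):
            theta = (u / width) * 2.0 * np.pi - np.pi
            phi = np.pi / 2.0 - (v / height) * np.pi
            origin, direction = ods_ray(theta, phi, eye=-1)
            # color = trace(scene, origin, direction)  # your renderer's tracer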

Learn more

  • If you would like to learn more about this topic, check out Google’s developer doc on this technique: https://developers.google.com/vr/jump/rendering-ods-content.pdf

  • If you would like to see this projection model in action, check out our tutorial on rendering a VR video in Maya for YouTube and Google Cardboard.