The Sky

The first consideration was the type of geometry onto which the image should be applied. Using a card, set back deep into Z space, seemed an appropriate initial approach.

However, analysis of the shot from the perspective of the moving shot camera revealed an absence of parallax. This is perhaps understandable: the sky image is clearly made up of clouds at different distances and depths, yet these are projected onto a card that, whilst residing in 3D space, is essentially flat. (Fig 1)

Fig 1: The sky image applied to a planar (card) geometric object

For my second iteration there appeared to be two potential options. The first was to revisit the sky layer in Photoshop, break it into several sub-layers based on perceived distance from camera, and then project these onto separate cards, each placed at a slightly different position in Z space. The problem with this approach was that volumetric elements are extremely complex, and attempts to separate the cloud elements proved unsatisfactory. The second option was to retain the sky as a ‘whole’ image but project it onto a different surface.

Using a curved geometry such as a sphere or cylinder offers more potential, because the curvature allows the geometry to act as a canopy over the other scene elements.

Therefore, when the sky texture is applied, clouds that appear closer to the camera in the 2D image can be aligned to geometry that is also physically closer to camera. This is evident in Fig 2, where the curvature of the sphere brings the upper section of the image closer to camera.

Fig 2: Showing the sky image applied to a spherical geometric object

The effect of this is that sections of the texture on geometry closer to camera will displace more during the camera move than clouds mapped closer to the belly of the sphere.
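The parallax difference between the two geometries can be sketched with a simple pinhole approximation: when the camera translates sideways by a small amount, a point's screen-space shift is roughly proportional to 1/depth. The depth values and focal length below are purely illustrative, not taken from the actual scene.

```python
# Hedged sketch: screen-space shift of a point at depth z when a pinhole
# camera translates sideways by dx is approximately focal * dx / z.
# All numbers here are illustrative, not the shot's real values.

def screen_shift(focal: float, dx: float, depth: float) -> float:
    """Horizontal screen displacement of a point at 'depth' when the
    camera translates by 'dx' (small-move pinhole approximation)."""
    return focal * dx / depth

focal, dx = 50.0, 1.0  # arbitrary focal length and camera move

# Flat card: every cloud sits at the same depth, so every cloud
# shifts by the same amount -- zero relative parallax.
card_depth = 1000.0
card_shifts = [screen_shift(focal, dx, card_depth) for _ in range(3)]

# Sphere: curvature places the 'near' clouds at a shallower depth than
# those toward the belly of the sphere, so their shifts differ.
sphere_depths = [600.0, 800.0, 1000.0]  # near, mid, far (illustrative)
sphere_shifts = [screen_shift(focal, dx, z) for z in sphere_depths]

print(card_shifts)    # identical values: no relative parallax
print(sphere_shifts)  # decreasing values: near clouds move more
```

The card produces identical shifts for every cloud, while the sphere's varying depths produce the graded displacement that reads as parallax.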

Test renders were recorded of just the sky element, but through the full camera animation. Version 1 (Sky_Card.mov) shows the texture applied to a card, and version 2 (Sky_Sphere.mov) has the texture applied to the inside face of a sphere. Both geometric elements are the same distance from camera in Z space, and both use the camera projection method to apply the texture. Even though the camera move is slow and moderate in range, subtle differences in how the clouds closest to camera move are clearly evident. The sphere projection does give the sense that the clouds closest to camera are moving more quickly, thereby reinforcing our perception of depth.

The difference in parallax between near and distant clouds is most evident in the arrowed areas (Fig 3).

Fig 3: contrasting parallax in near and distant clouds.

This is perhaps a contributing factor in why it is conventional, in 2.5D compositing, to apply sky images to spheres and why this is the preferred approach in this project. However, in order to gain a true sense of how the cloud parallax works, further analysis will be needed once the scene is fully constructed and other elements are also displacing during the camera move.

Once the decision is taken to use a sphere as the target geometry, consideration turns to how the texture will be applied. During the parallax testing the image was temporarily applied using the camera projection method, but it is also possible to apply the texture directly to the sphere’s UVs.

The UV Method

This method applies the texture directly to the full extent of the sphere geometry’s UVs.

If the sky texture were a full 360° panoramic image then this would be an advantage but, in this case, the image is stretched to fill the UVs of the sphere (Fig 4), distorting the texture.

Fig 4: The texture stretches to the extents of the UV surface of the sphere

Moreover, because the image is applied to the inside of the sphere, the texture is reversed, although this is quickly resolved by adding a Mirror node and enabling the horizontal (flop) attribute.

The Sphere node has U-extent and V-extent attributes, which allow the texture to be confined to an area of the UVs rather than the entire geometry. Fig 5 shows the U-extent and V-extent pulled in to arbitrary values of 70 and 120 respectively. This goes some way to reinstating the look of the original sky texture, but much more finessing of these values will be needed to replicate the image with complete accuracy.

Fig 5: Image partially recovered using the U and V Extent attributes in the Sphere properties
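The effect of the extents can be sketched as a remapping of the texture's UV coordinates onto a reduced angular patch of the sphere. The extent values (70 and 120) mirror the arbitrary values tried in Fig 5; the centred-patch angle convention below is my own assumption for illustration, not Nuke's internal behaviour.

```python
# Hedged sketch of how U/V extents confine a texture on a sphere.
# The angle conventions here are an assumption for illustration only.

def uv_to_angles(u: float, v: float,
                 u_extent_deg: float = 70.0,
                 v_extent_deg: float = 120.0):
    """Map texture coords u, v in [0, 1] to longitude/latitude (degrees)
    across a reduced, centred patch of the sphere instead of the full
    360 x 180 degree surface."""
    lon = (u - 0.5) * u_extent_deg  # full extent would be 360
    lat = (v - 0.5) * v_extent_deg  # full extent would be 180
    return lon, lat

# With full extents, the image's left and right edges would wrap to
# +/-180 degrees; reduced extents compress the same image into a
# narrower patch, countering the stretching seen in Fig 4.
print(uv_to_angles(0.0, 0.5))  # left edge of the texture
print(uv_to_angles(1.0, 0.5))  # right edge of the texture
```

In other words, shrinking the extents trades angular coverage for texture fidelity, which is why the image partially recovers its original look.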

In the 2D view (Fig 6) we can see that the sky image sits far too low in frame. To correct this, the sphere needs to be repositioned, rescaled and reoriented, which equates to three attributes, each with X, Y and Z parameters, in addition to the U and V extent attributes.

Fig 6: The sphere must be transformed in 3D to assume the correct position

Having spent 20 minutes adjusting the values across this attribute set in order to reinstate the look of the original sky, and not being especially close to achieving this outcome (Fig 7), I conclude that an approximation is the most realistic expectation one can have of this method.

Fig 7: Experimentation with the sphere’s 3D transforms  

The Camera Projection Method

Camera projection is a method which can bridge the gap between the traditional matte painting and a full CGI representation. Zwerman and Okun describe a process where “…by creating a single matte painting and dissecting it into layers, the matte artist can project separate elements onto simple geometry in the computer. This allows the artist to achieve the sense of dimensionality without a tedious model build and long render times.” (2010, p.584)

In this case the projection camera was derived from the ‘hero’ frame of the shot, which I deemed to be frame 200, the final frame of the shot camera’s motion path, as the matte painting fills the frame at that point.

With this method there is no need to reposition, scale or rotate the sphere, or to change the U and V extent attributes. This is because the projection camera only imparts the image from its vantage point and, given that this is taken from the shot camera’s motion path (at frame 200), it is guaranteed to line up correctly (Fig 8).

The image is projected onto the surface of the geometry rather than assigned to its UVs so, other than its position and shape, the construction of the geometry is irrelevant.

Fig 8: The projected sky texture in 3D view
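The principle behind camera projection can be sketched with the same pinhole model: each surface point takes its colour from where it lands in the projector camera's image plane, so a render taken from the projector's own vantage point reproduces the source image exactly. The point coordinates and focal length below are illustrative, not the shot's real values.

```python
# Hedged sketch of camera projection: a surface point's texture lookup
# is its projected position in the projector camera's image plane.
# The pinhole model and numbers are illustrative only.

def project(point, focal: float = 50.0):
    """Pinhole projection of a 3D point (camera space, +Z forward) to
    image-plane coordinates."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

# A surface point on the sphere, expressed in the projector's space.
surface_point = (2.0, 1.0, 200.0)

# At the projector's own vantage point (frame 200) every surface point
# maps straight back to the pixel it was painted from, so the render
# matches the matte painting; geometry only influences the result once
# the shot camera moves away from that vantage point.
print(project(surface_point))
```

This is why the sphere needs no manual transform under this method: alignment is a property of the projection itself, not of the geometry's UV layout.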

In 2D view (Fig 9) the projected image is identical to the 2D layer.

Fig 9: The projected sky texture in 2D view
