Propositional Statement

There is no need to discuss the artistic aspects of the scene further. However, irrespective of how a matte painting will be integrated into the show, there is a range of preparatory steps that the artist must undertake.

Introduction

This study seeks to explore the ‘Photoshop and Nuke only’ matte painting workflow described by Garrett Fry (2016). This workflow allows 2.5D compositing to be used to bring subtle movement into 2D artwork quickly and efficiently by exploiting depth cues such as changes in perspective and parallax.

This workflow does not necessitate the involvement of a 3D application but instead utilises the 3D capability of the compositing software to apply images onto basic Nuke geometry such as planar surfaces (Cards), spheres, cylinders and cubes. The image can be applied directly onto the UVs of the geometry or can be projected from Camera nodes, which Nuke also provides.

Fig 1: Garrett Fry’s Photoshop and Nuke Only Matte Painting Workflow

The diagram (above) illustrates that only Photoshop and Nuke are used. In this case it describes a process in which the images are projected onto geometry using in-built projection cameras. The projection images are exported from Photoshop and imported into Nuke. Geometry, and the other connecting nodes, are then created in Nuke to facilitate the projection process. A shot camera is introduced and its animation is derived either through manual keyframing or by tracking an existing shot (match-moving); this camera is the prism through which the final shot is ultimately seen by the viewing audience.

To facilitate the illusion of depth, each geometric object is transformed along the Z axis, thereby residing at different distances or depths from the shot camera. Therefore, when the camera moves through its animation, there is visible parallax between the areas of the matte painting at different depths relative to the camera.
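To make this process concrete, the sketch below shows how such a projection setup might be assembled with Nuke's Python scripting. It is a minimal sketch only: the node class names (Camera2, Card2, Project3D), knob names, input indices, file paths, layer names and depth values are assumptions for illustration rather than values taken from Fry's material, and class or knob names can vary between Nuke versions.

```python
# Minimal sketch of a projection-based 2.5D setup, run from Nuke's Script Editor.
# Node class names, knob names and input indices are assumptions based on recent
# Nuke versions; paths and depth values are purely illustrative.
import nuke

# Layers exported from Photoshop, with an approximate depth (Z) for each.
layers = [
    ("sky",        "/mattepainting/layers/sky.exr",        -500.0),
    ("hills",      "/mattepainting/layers/hills.exr",      -200.0),
    ("meadow",     "/mattepainting/layers/meadow.exr",      -60.0),
    ("foreground", "/mattepainting/layers/foreground.exr",  -10.0),
]

# Static projection camera, placed at the original painting viewpoint.
proj_cam = nuke.nodes.Camera2(name="ProjectionCam")
proj_cam["translate"].setValue([0, 0, 0])
proj_cam["focal"].setValue(50)

# Animated shot camera that will later push into the scene.
shot_cam = nuke.nodes.Camera2(name="ShotCam")
shot_cam["focal"].setValue(50)

scene = nuke.nodes.Scene()

for i, (name, path, depth) in enumerate(layers):
    read = nuke.nodes.Read(name="Read_%s" % name, file=path)

    # Project the layer from the static projection camera.
    proj = nuke.nodes.Project3D(name="Project_%s" % name)
    proj.setInput(0, read)      # image input (assumed index)
    proj.setInput(1, proj_cam)  # camera input (assumed index)

    # Card displaced along Z so each layer sits at a different depth,
    # producing parallax once the shot camera moves.
    card = nuke.nodes.Card2(name="Card_%s" % name)
    card.setInput(0, proj)
    card["translate"].setValue([0, 0, depth])

    scene.setInput(i, card)

# Render the 3D scene through the animated shot camera.
render = nuke.nodes.ScanlineRender()
render.setInput(1, scene)     # obj/scene input (assumed index)
render.setInput(2, shot_cam)  # camera input (assumed index)
```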

A variation of this would be, rather than using projection cameras, to assign the images directly to the UVs of the geometry. The key difference is that, once each geometric object had been transformed to the appropriate position along the Z axis, it would then need to be positioned, rotated and scaled until the original matte painting is reinstated.
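A minimal sketch of this variation, again assuming Nuke's Python API, is shown below. The knob names and the translate, rotate and scale values are illustrative assumptions; in practice the values would be adjusted by eye until the element lines up with the original painting.

```python
# Variation: map the image directly onto the Card's UVs instead of projecting it,
# then transform the Card back into place. Knob names are assumptions based on
# recent Nuke versions; the path and values are illustrative.
import nuke

read = nuke.nodes.Read(file="/mattepainting/layers/hills.exr")  # illustrative path

card = nuke.nodes.Card2(name="Card_hills")
card.setInput(0, read)  # image applied to the card's UVs

# Push the card back in depth, then reinstate the painting by translating,
# rotating and scaling until it lines up with the original artwork.
card["translate"].setValue([0.0, 0.35, -200.0])
card["rotate"].setValue([0.0, 0.0, 0.0])
card["uniform_scale"].setValue(180.0)
```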

The Card node in Nuke has three built-in optical attributes designed to simplify this process. The ‘Z’ attribute allows the Card to be translated to various depths relative to the shot camera but, unlike the ‘Z translate’ attribute, it scales the Card dynamically so that it retains the correct size relative to world-space zero. The ‘lens-in focal’ and ‘horizontal aperture’ attributes ensure that the Card retains the correct focus and aspect ratio when the Z value is changed. In theory, utilising these optical attributes should make reinstating the matte painting in 3D space quicker and more accurate than manually positioning and scaling the Cards, as it eliminates the risk of human error associated with using transforms to manually reinstate the painting.
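The sketch below illustrates how these optical attributes might be set from Nuke's Script Editor. The knob names ('z', 'lens_in_focal', 'lens_in_haperture') are assumptions based on the Card node reference and may differ between Nuke versions; the camera name 'ShotCam' and the depth value are purely illustrative.

```python
# Sketch of the Card's built-in optical attributes: the card re-scales
# automatically as it is pushed back in depth. Knob names are assumptions
# taken from the Card node reference; values are illustrative. A camera
# named 'ShotCam' is assumed to exist in the script.
import nuke

card = nuke.nodes.Card2(name="Card_hills_optical")

# Match the card's lens-in attributes to the shot camera so focus and aspect
# ratio are preserved when 'z' changes; expression links keep them in sync.
card["lens_in_focal"].setExpression("ShotCam.focal")
card["lens_in_haperture"].setExpression("ShotCam.haperture")

# Push the card back to an illustrative depth; unlike 'translate.z', the 'z'
# knob compensates the card's scale so it keeps the same apparent size in frame.
card["z"].setValue(200.0)
```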

The Matte Painting

The Photoshop only matte painting workflow involves a wide range of artistic factors, which are considered in the development of all artwork for this study. The matte painting created for this project is shown below (Fig 2); it depicts a stylised meadow with atmospheric lighting.

Fig 2: Matte Painting used in this study

Of more significance to this study is that the painting comprises more than 20 individual elements, each of which will need to occupy a specific position in depth relative to the camera. The premise will be to manually animate a camera that gently pushes forwards into the shot, or pulls backwards out of it, resulting in an interpolated magnification (scaling up or down) of the scene. During this move the individual elements will separate, based on their respective distances from the shot camera, thereby producing a parallax effect, which is one of the cues humans use to perceive depth.
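As an illustration of the planned camera move, the sketch below keyframes a gentle push-in on the shot camera using Nuke's Python API. The camera name, frame range and travel distance are illustrative assumptions rather than final project values.

```python
# Sketch of a manually keyframed push-in on the shot camera: a gentle move
# forward along -Z over the shot. Assumes a camera node named 'ShotCam' and a
# frame range of 1001-1100; both are illustrative.
import nuke

shot_cam = nuke.toNode("ShotCam")
translate = shot_cam["translate"]
translate.setAnimated(2)              # animate only the Z component (index 2)

translate.setValueAt(0.0,   1001, 2)  # start position
translate.setValueAt(-40.0, 1100, 2)  # end position: pushed into the scene
```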

Analysis

This study explores the ‘Photoshop and Nuke only’ matte painting workflow and involves the construction of the multi-layered matte painting (previously described) using three variations of the workflow.

Method 1 will utilise one or more static cameras, placed at strategic positions in 3D space, to project the images onto simple geometric shapes created in Nuke.

Method 2 will again utilise Nuke geometry but the images will be assigned directly to the UVs of these shapes. These will be translated to positions in Z space consistent with the depth cues in the matte painting and then positioned, rotated and scaled to reconstruct, or at least approximate, the look of the original scene.

Method 3 will mirror the second method except that, where Card nodes are used, their built-in optical attributes will be used to reconstruct the matte painting from the disparate elements displaced along the Z axis.

In order to ensure parity across all three methods, the following conditions will be maintained across the project (a minimal Nuke-side sketch of these shared settings follows the list):

  • A common shot camera with identical optical settings, transformation attributes and animation
  • The same image set from the layered Photoshop file
  • Identical scene resolution and project settings
  • Identical render format for all exported sequences, or parts thereof
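The sketch below shows how some of these shared conditions might be locked down from Nuke's Script Editor. The format name, frame range and render codec shown are illustrative assumptions, not the project's final settings.

```python
# Sketch of shared project settings enforced across all three methods.
# Format name, frame range and render codec are illustrative assumptions.
import nuke

# Identical scene resolution and project settings.
nuke.root()["format"].setValue("HD_1080")   # built-in 1920x1080 format
nuke.root()["first_frame"].setValue(1001)
nuke.root()["last_frame"].setValue(1100)

# Identical render format for every exported sequence.
write = nuke.nodes.Write(file="/renders/method1/method1.####.exr")
write["file_type"].setValue("exr")
```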

The analysis will consider the opportunities and limitations of each method, such as construction speed, accuracy in reconstructing the matte painting, the proliferation of errors during the camera move, and the complexity and time required to resolve those errors.

The matte painting can, to some degree, be broken into distinct elements, each requiring consideration of the best method for applying imagery to geometry and, indeed, of how to create and place that geometry. This introduces the possibility of testing the different methods on a ‘per element’ basis, and analysing each, without consideration of other areas of the painting.

However, further levels of analysis will need to be undertaken to examine how the multiple parts align to reconstruct the whole painting, and also how these parts interact with each other when seen through the prism of the moving shot camera.

Juxtaposition with a Photoshop Only Matte Painting Workflow

Garrett Fry describes a ‘Photoshop Only’ matte painting workflow as one in which the matte painter receives an element, or gathers it themselves, and paints on top of that element. This could be a piece of concept art, a photographic image (or frame from a video sequence) or even a render from 3D.

The matte painting would then be delivered to the compositor, typically as a set of images in which the layers are reduced into logical sets, a process known as flattening, with all colour corrections and filters applied to their respective layers. Each layer is then saved out as an individual file, with its own alpha channel, using a lossless RGBA format.

Fig 3: Garrett Fry’s Photoshop Only Matte Painting Workflow

Fry suggests that this workflow continues to be relevant within modern pipelines and is suitable for quick patches, paint-overs, sky domes/sky replacements or arbitrary texturing tasks (2016). There is also scope for a painting in this format to be used in its entirety in situations where the shot does not necessitate a camera movement. It would simply go through a process of compositing before being delivered to the editor for inclusion in the show.

Once the matte painting built using the ‘Photoshop and Nuke’ workflow is complete, it will be possible to examine its scope in contrast with the same painting presented in a ‘Photoshop only’ format. This can be done by subjecting the flat 2D version of the painting to the same camera movement and examining the impact on depth cues such as changes in perspective and parallax.
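A minimal sketch of this comparison setup is shown below: the flattened painting is applied to a single Card and rendered through the same animated shot camera used for the 2.5D versions. The file path, node names and input indices are illustrative assumptions.

```python
# Sketch of the comparison setup: the flattened 2D painting on a single card,
# rendered through the same animated shot camera. A camera named 'ShotCam' is
# assumed to exist; the path is illustrative.
import nuke

flat_read = nuke.nodes.Read(file="/mattepainting/flat/meadow_flat.exr")

flat_card = nuke.nodes.Card2(name="Card_flat")
flat_card.setInput(0, flat_read)            # whole painting on one card, one depth

scene = nuke.nodes.Scene()
scene.setInput(0, flat_card)

render = nuke.nodes.ScanlineRender()
render.setInput(1, scene)                   # obj/scene input (assumed index)
render.setInput(2, nuke.toNode("ShotCam"))  # same shot camera as the 2.5D versions
```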

Finishing

Once the final solution has been achieved, additional work will be undertaken to add realism and dynamism to the sequence. These are likely to include, but are not limited to:

  • Animation and parallax in the clouds
  • Subtle displacement of the branches in the foreground tree to emulate the effect of wind
  • A small number of leaves falling to the ground and settling
  • Gentle animated movement of the foreground sunflower and tyre swing
  • Lens flare to accentuate the halo on the bell tower of the church

These effects are purely aesthetic touches to finish the shot. As they do not contribute to the study, they will not be subject to any commentary or analysis.

Parallax

In their study of motion parallax as a determinant of perceived depth, Gibson et al. (1959, p.40) describe ‘apparent motions’ of stationary objects which arise during locomotion. Such dynamic and extreme camera movement exposes the limitations of the matte painting, which is essentially a two-dimensional representation of an environment and is therefore incapable, at least in its traditional form, of presenting as three-dimensional by delivering parallax over the duration of the shot.

One solution would be to build a full CGI representation of the environment in which everything is fully modelled, textured, lit and rendered using 3D software. Indeed there are circumstances where this approach is used in filmmaking, particularly if the environment will feature multiple times in the production, and possibly from different perspectives and under varied climatic or lighting conditions. There is no doubt that the technology already exists to follow this path, and there is no shortage of talented artists capable of delivering totally believable environments in this format. However the creation of such an asset is time-intensive, which translates into high production costs. It is also a disproportionate response in most cases and therefore difficult to justify.

Camera projection is a method which can bridge the gap between the traditional matte painting and a full CGI representation. Zwerman and Okun describe a process where “…by creating a single matte painting and dissecting it into layers, the matte artist can project separate elements onto simple geometry in the computer. This allows the artist to achieve the sense of dimensionality without a tedious model build and long render times.” (2010, p.584)

A number of these cues are visualised by Anandh Ramesh in his online subscription-based course entitled “Stereoscopy Basics: Entering the Third Dimension”. In video 2, Explanation of Stereoscopic 3D [timecode 03:48 – 07:00], he explains how humans perceive 3D in a 2D image. In the image below (Fig 4) he explains that, if we assume the trees are roughly the same size, the relative reduction in size places some of them further back. On ‘interposition’, he refers to occlusion and the depth cues derived from the branches of some trees obscuring, or occluding, the branches of other trees. On ‘heights in picture plane’ he refers to cues arising from the vertical position of some objects relative to others: the base of the leftmost tree sits slightly higher than that of the second tree, so the brain subconsciously places the ‘higher’ tree further back. On ‘texture gradient’ he refers to colours becoming cooler and less saturated as they approach the horizon line, which can be seen in the trees along the horizon when compared with those in the foreground.

Fig 4: Visual Depth Cues, courtesy of Digital Tutors

In this second image (Fig 5, below) Ramesh makes further reference to depth cues arising from ‘interposition’, particularly in how the upright poles occlude, and are occluded by, the deer, and from ‘heights in picture plane’ in the vertical positions of the various tree bases.

Fig 5: Visual Depth Cues, courtesy of Digital Tutors

Reichelt et al. describe motion-based cues in depth perception as involving shifts in the retinal image induced by relative movements between observer and objects; among them are motion parallax, kinetic depth and dynamic occlusion. Motion-based cues play an important role in depth perception; in static scenery in particular, motion parallax provides a fast and reliable depth estimate (p.2).

Perspective Distortion

To follow…

Bibliography

Abels, H. (2015) Matte Painting Basics and the Static Camera Shot. Video 20: Photoshop File Prep. Pluralsight online training. Accessed 5th November 2020.

Burman, M. (2016) Intro to Matte Painting. Learn Squared, subscription-based online training. Accessed 4th November 2020.

Fry, G. (2016) Projection Elements for Matte Painters. https://www.youtube.com/watch?v=mmsTMbyfAW8. Accessed 5th November 2020.

Gibson, E.J., Gibson, J.J., Smith, O.W. & Flock, H. (1959) Motion parallax as a determinant of perceived depth. Journal of Experimental Psychology, 58, 40-51.

Hawk, L. Creating a 3D Scene with a 2D Image in NUKEX. Pluralsight. https://www.pluralsight.com/courses/threed-scene-2d-image-nukex-1237. Accessed 4th November 2020.

Hay, J.C. (1966) Optical motions and space perception: An extension of Gibson’s analysis. Psychological Review, 73(6), 550–565.

The Foundry. Nuke Reference Guide: Card node. In-application help documentation (reference_guide/3d_nodes/card.html). Accessed 12th November 2020.

