Projection Strategy

The primary structures are taken from the Kitbash Steampunk range and have been configured and assembled for use in this scene. The ground has been modelled and textured specifically for this project, and the distant areas (currently black) will be created using a matte-painted cyclorama.

The camera movement creates a number of challenges that will be addressed separately in this document. However, the primary objective for this project was to devise a projection methodology that can efficiently accommodate the extreme change in resolution caused by an expansive ‘dolly-in’ camera move.

To address these expansive changes in resolution, the goal is to explore the use of several projections, all taken from positions along the motion path of the shot camera and hierarchically overlapped to form a composite, with the display area becoming incrementally higher in resolution as the camera moves forwards.

Layered Projections

The dimensions and resolution are set by the boundaries of the frame (aspect ratio) and the display resolution for the final projected animation, which will be 2 × Full HD (i.e. 4K). The furthest point in the camera’s motion path shows the most confined section of the environment, but at the highest resolution. Matte painting would therefore be undertaken on this frame at double the final display resolution of 1920 × 1080, a convention used to minimise the risk of anti-aliasing artefacts and unwanted painterly effects such as visible brush strokes. The resulting resolution would therefore be 3840 × 2160 (4K), but this is rounded up to 4000 × 2250 to simplify the mathematical calculations arising from overscanning later in the project.
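The arithmetic above can be sanity-checked with a few lines (a sketch; the variable names are mine, not part of the project):

```python
# Final display resolution (Full HD).
display_w, display_h = 1920, 1080

# The matte painting is built at 2x the display resolution to minimise
# anti-aliasing artefacts and visible brush strokes.
paint_w, paint_h = display_w * 2, display_h * 2          # 3840 x 2160 (4K)

# Rounded up to simplify the overscan arithmetic later in the project.
rounded_w, rounded_h = 4000, 2250

# All three sizes share the same 16:9 aspect ratio, so reframing between
# them introduces no letterboxing or stretching.
assert display_w / display_h == paint_w / paint_h == rounded_w / rounded_h
print(paint_w, paint_h)  # 3840 2160
```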

If the camera is withdrawn (pulled back) along its motion path (in this case by 50%), we see a wider section of the environment, including the most confined area. However, the camera view is now further away, so it is not necessary to present this part of the image at the highest resolution. This new ‘composite’ image should now be 4K, and the confined (painted) area is reduced accordingly.

The premise is that, by rendering from this frame at 4K, the central part of the matte painting is complete, albeit now at a reduced resolution. We can then iterate on the original artwork by ‘painting outwards’ from the central area until the reframed display area is fully textured.

This principle is repeated until the beginning of the camera’s motion path is reached. Note that the tiles in the temporary UV checkerboard pattern do not change scale, indicating that the resolution is maintained through the full motion path of the shot camera.

By layering the projections hierarchically, the images are incrementally overlapped to form a composite with the central area becoming higher in resolution as the camera moves forwards. Each image will align perfectly, provided each of the projections is captured from positions along the camera’s motion path and the same optical camera settings are maintained. The transitions between each projection can be managed using the alphas in each of the respective images.
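The alpha-managed layering described above is essentially a standard premultiplied ‘over’ composite, with the finest (most confined) projection sitting over the wider ones behind it. A minimal sketch with NumPy; the array shapes and the simple over operator are assumptions for illustration, not the project’s actual compositing setup:

```python
import numpy as np

def over(front, back):
    """Premultiplied 'over' composite: front covers back where its alpha is set.
    Images are float arrays of shape (H, W, 4) with RGBA channels."""
    a = front[..., 3:4]
    return front + back * (1.0 - a)

def composite_projections(layers):
    """Layer projections from widest (first) to most confined (last):
    each higher-resolution projection sits over the wider ones behind it."""
    result = layers[0]
    for layer in layers[1:]:
        result = over(layer, result)
    return result
```

The transition between projections is then controlled entirely by how each layer’s alpha is painted, as the text describes.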

Resolution

This form of layering and nesting projections within a hierarchical framework allows for an application of the Eames principle, albeit with projection cameras that are spaced along the motion path at incremental points that ‘make sense’ in terms of the projection, rather than in fixed powers of ten.

One consideration in this regard is the maintenance of good resolution. As previously discussed, the sections of the matte painting to be projected will be built at 2 × the final intended display resolution of 1920×1080. Therefore, during testing, it is permissible for the checkerboard squares to increase in size, from the perspective of the shot camera, by up to double without any softening of the image; exceeding this would lead to softening and even potential pixelation.

Calculating this could be achieved in two ways. A less precise but quicker approach would be a simple visual comparison of the size of the squares against corresponding squares, in the same area, applied with the previous projection camera. The more precise method would be to use an ‘eyedropper’ to sample the X and Y pixel addresses at each edge of a square and use a simple calculation to establish its size. This allows for a more accurate comparison with adjacent squares, but is more time-consuming and, in most cases, excessive. By performing this task, using either method, it is possible to determine the spacing between each projection camera along the motion path.
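Once the pixel addresses of a square’s edges have been sampled, the calculation itself is trivial. A sketch of the eyedropper arithmetic; the sampled coordinates below are invented purely for illustration:

```python
def square_size(left_x, right_x, top_y, bottom_y):
    """Width and height (in pixels) of a checkerboard square,
    from eyedropper-sampled pixel addresses of its edges."""
    return (right_x - left_x, bottom_y - top_y)

def growth_factor(current, previous):
    """Ratio of square widths between the current and previous projection.
    A factor above 2.0 means the 2x painting resolution is exhausted
    and the image will begin to soften."""
    return current[0] / previous[0]

# Hypothetical samples of the same square from two projection positions:
prev = square_size(100, 140, 200, 240)   # 40 px wide
curr = square_size(100, 178, 200, 278)   # 78 px wide
print(growth_factor(curr, prev))         # 1.95, still inside the safe 2x limit
```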

In this case the shot camera animation starts slowly and quickly accelerates to its optimum speed by around frame 50. This speed is maintained until around frame 170, after which it decelerates and eventually stops at frame 200. Therefore, for the first and last 60 frames, the camera could travel further (around 30 frames) before the patterns came close to doubling in size. However, in the frames in between, when the camera was travelling at its optimal speed, the greatest safe spacing was limited to a maximum of 25 frames.
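The spacing rule that follows (wider gaps while the camera accelerates and decelerates, tighter gaps at full speed) can be sketched as a simple walk along the motion path, using the frame ranges quoted above. The function names and the exact placement logic are my own simplification:

```python
def projection_spacing(frame):
    """Maximum safe gap (in frames) to the next projection camera,
    based on the shot camera's speed profile described in the text."""
    if frame < 50 or frame > 170:
        return 30        # accelerating or decelerating: squares grow slowly
    return 25            # optimal speed: squares near-double within 25 frames

def place_projection_cameras(start=0, end=200):
    """Walk the motion path, dropping a projection camera at each safe interval,
    with a final camera at the end of the move."""
    frames, f = [], start
    while f < end:
        frames.append(f)
        f += projection_spacing(f)
    frames.append(end)
    return frames

print(place_projection_cameras())
```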

The comparative image pair (below) shows the checkerboard texture projected onto building 8 from frames 90 and 115 of the shot camera motion path. At this point in the sequence, the shot camera is moving at its optimal speed and the squares are close to doubling in size within this 25-frame spacing.

Frame 090 / Frame 115

Later in the sequence, when the camera is decelerating, greater spacing between projections is possible. In this case, projections from frames 170 and 200 show minimal size change over these 30 frames.

Frame 170 / Frame 200

Indeed, a spacing of 60 frames between projections was possible before the squares came close to doubling in size. The conclusion is that projections should be spaced more closely when the camera is moving quickly and further apart when it is moving slowly.

A second consideration arose from the amount of time each building was in shot, which required identifying logical frames at which projections could deliver meaningful texture data to the geometry. The image pair (below) shows buildings 4 and 5, which are in shot from frame 0 to frame 120. To maintain resolution, it is logical to attempt a final projection just before the buildings disappear from the shot camera’s view but, as can be seen from frame 115, there is almost no meaningful image at that point.

Frame 000 / Frame 115

The approach is therefore to move the projection camera back to an earlier frame (where a good section of the buildings is visible), but not so far back that the squares in the pattern halve in size; effectively a reversal of the process.

Frame 090 

In previous projects, coverage could be extended by changing the focal length of the projection camera rather than changing the projection frame, a technique known as overscanning. However, this technique introduces a degree of perspective distortion to the image, which could create misalignment between corresponding images within a layered hierarchical projection setup.
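The trade-off overscanning exploits is that a shorter focal length widens the field of view, and therefore the coverage of the projection, at the cost of a perspective that no longer matches the shot camera. A small sketch of that relationship, assuming a standard 36 mm filmback width (an assumption, not a value from the project):

```python
import math

def horizontal_fov(focal_mm, aperture_mm=36.0):
    """Horizontal field of view (degrees) for a given focal length,
    assuming a 36 mm horizontal filmback."""
    return math.degrees(2 * math.atan(aperture_mm / (2 * focal_mm)))

# Shortening the focal length widens coverage (the basis of overscanning),
# but the projection's perspective then diverges from the shot camera's.
print(round(horizontal_fov(35), 1))  # about 54.4 degrees
print(round(horizontal_fov(24), 1))  # about 73.7 degrees
```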

Texture Doubling and Smearing

The shot has been deliberately assembled so that multiple overlapping structures parallax against each other from the perspective of the moving shot camera. Whilst this is unremarkable in itself, the immediate challenge it presents to a 2.5D camera projection is that texture doubling will inevitably occur: firstly from one geometric object onto any overlapping geometry behind it, and secondly between overlapping faces within the same geometric object.

Doubling on Overlapping Adjacent Geometry

The image (below) shows the checkerboard texture projected onto all surfaces from the first frame of the shot camera. The texture is clearly doubling from the buildings in the immediate foreground (left and right side) onto the buildings immediately behind.

This issue can be addressed by separating the buildings that overlap and/or parallax against others and projecting onto them using separate shaders, which suggests that a projection system would need to be constructed for each building.

However, this is only partially correct, because only objects that overlap and parallax against others need to be separated. In this case, there is no overlap between the adjacent buildings on the left and right, so these can be projected in pairs. For example, the image below illustrates how adjacent buildings can share the same shader to receive the projected texture.

This diagrammatic representation (below) is based on the projections, with boxes defining those buildings that can be projected ‘in pairs’ or, more specifically, can receive their texture from the same shader.

Doubling on Adjacent Surfaces on the Same Building

Whilst the previously described application of projection shaders will prevent texture doubling between different overlapping buildings, various surfaces on each individual geometric object also overlap and parallax against each other during playback. It is therefore inevitable that some texture doubling will occur within each building that cannot practically be resolved, using separate shaders, within the layered hierarchical projection setup. An example is shown in the image (below). This is a common projection artefact that can be resolved with patch projections once the primary system is implemented.

Moreover, there may also be areas where changes of perspective result in a smearing effect: a distortion artefact caused by the mismatch in perspective between the active projection camera and the moving shot camera. This can also be resolved with patch projections later in the process.
