Base Composite
DPX image sequences were rendered for all the main projections so, for the most part, compositing was limited to A over B Merge operations. Nuke was selected as the preferred software for this stage due to its extensive and advanced compositing features, including its 3D tools and workflow, which I knew would be needed from the outset.
The section of node graph (below) shows the base composite of buildings over the ground. This is achieved using a simple A over B merging process to create a logical stacking order.
Image sets that are further from camera are lowest in the stack; so image set D is merged over image set E, image set C is merged over image set D, and so on. All elements sit on the ground, so this is the lowest element in the stack.
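The stacking described above can be sketched with the standard premultiplied "over" operation. This is a minimal illustration of the maths a Merge node performs, not Nuke's API; the function name, pixel layout and values are all illustrative.

```python
# Toy Porter-Duff "A over B" on single (r, g, b, a) premultiplied pixels.

def over(a_pixel, b_pixel):
    """A's premultiplied colour plus B attenuated by A's alpha."""
    ar, ag, ab, aa = a_pixel
    br, bg, bb, ba = b_pixel
    k = 1.0 - aa  # how much of B shows through A
    return (ar + br * k, ag + bg * k, ab + bb * k, aa + ba * k)

# Stack back-to-front: the ground is lowest, then set E, then set D.
ground = (0.2, 0.2, 0.2, 1.0)   # opaque ground pixel
set_e  = (0.0, 0.0, 0.0, 0.0)   # transparent at this pixel
set_d  = (0.5, 0.4, 0.3, 1.0)   # opaque building pixel

result = ground
for layer in (set_e, set_d):
    result = over(layer, result)
```

Because set D is opaque at this pixel, it fully obscures everything below it, which is exactly the stacking behaviour described above.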

Image sets closest to camera obscure those further back and we begin to see the basis of the scene.

Note the node setup applied to each image set render before it is added to the composite via the Merge. Fringing along the matte line is a common artefact associated with rendering premultiplied images from 3D applications.

The fix in Nuke is quick and simple but worth describing. First the rendered image is unpremultiplied. A Dilate node with a very small negative (erode) value then trims off the edge artefacts. Finally the image is returned to its premultiplied state before it is merged over.
Note how a small erode value in the dilate node fixes the render artefact.
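The unpremult / erode / premult round trip can be sketched on a toy 1-D row of pixels. This is only an illustration of the principle; Nuke's Unpremult, Dilate and Premult nodes operate on full 2-D images, and the pixel values here are arbitrary.

```python
# Fringe fix sketch: unpremultiply, erode the alpha by one pixel,
# premultiply again. Pixels are (r, g, b, a) tuples.

def unpremult(p):
    r, g, b, a = p
    return (r / a, g / a, b / a, a) if a > 0 else p

def premult(p):
    r, g, b, a = p
    return (r * a, g * a, b * a, a)

def erode_alpha(row):
    """1-pixel erode: each alpha becomes the minimum of itself and its
    neighbours, trimming the semi-transparent fringe on the matte line."""
    out = []
    for i, (r, g, b, a) in enumerate(row):
        left  = row[i - 1][3] if i > 0 else a
        right = row[i + 1][3] if i < len(row) - 1 else a
        out.append((r, g, b, min(a, left, right)))
    return out

# Solid pixel, half-transparent fringe pixel, empty pixel:
row = [(0.8, 0.8, 0.8, 1.0), (0.4, 0.4, 0.4, 0.5), (0.0, 0.0, 0.0, 0.0)]
fixed = [premult(p) for p in erode_alpha([unpremult(p) for p in row])]
```

After the round trip the fringe pixel's alpha has been eroded to zero, so it no longer contributes a dark halo when merged over the background.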

This identical approach was used on all the image sets and the bike, as these sequences were all rendered from 3D.
Backdrop Cyclorama
What is evident in the base composite is that distant elements such as the sky and hills are missing and therefore need to be added. The primary reason for using Nuke as the compositing tool is the ability to quickly and easily create background elements like this: elements that are built from 2D images but behave as if they were 3D.
This section of the node graph (below) shows the compositing network to build the sky, distant hills and also a couple of hot air balloons, placed for visual interest.

The sky texture uses the same image used to create the HDRI map in 3D, so its tone and brightness are consistent with the light, reflections and shadows seen on the buildings.
A simple sphere object is added to the scene and scaled up so it is much larger than the general scene. A camera and shader are then used to project the image onto the sphere, ensuring the scale is correct and there is no loss of coverage through the entire sequence.
A transform is applied directly to the image to rotate the dominant light source (the sun) so it sits on the upper right side of the scene.
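The reason a simple 2D transform can "rotate the sun" is that, for an image wrapped around a sphere, a rotation about the vertical axis is equivalent to a horizontal shift with wrap-around. A toy sketch on a single row of pixels, with illustrative values:

```python
# Rotating a sphere-wrapped sky image = horizontal wrap-shift of the image.

def rotate_latlong_row(row, degrees):
    """Shift the row by the requested fraction of 360 degrees, wrapping."""
    width = len(row)
    shift = int(round(width * (degrees % 360) / 360.0))
    return row[-shift:] + row[:-shift] if shift else row

sky_row = ["sun", "cloud", "blue", "blue"]   # 4 "pixels" = 90 degrees each
rotated = rotate_latlong_row(sky_row, 90)    # sun moves one pixel round
```

A full 360-degree shift returns the image unchanged, which is why there is no loss of coverage however far the sun is repositioned.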

The ‘hills’ image is applied in an almost identical way except a cylinder is used as the geometry rather than a sphere. The hills are closer to the core scene so the cylinder is smaller than the sphere, albeit still much larger than the core area.

When seen in the 2D view, the backdrop begins to take shape. Moreover, when the shot camera animation is played, parallax between the hills and the sky becomes evident.

The balloons, sourced under a Creative Commons licence on Flickr, are added using the same approach but projected onto flat cards.

Returning to the 2D view, the position of the cards and their respective projection cameras can be adjusted until a pleasing composition is achieved.

The structure of the archway (image set B) provides natural framing for the balloons during the early frames.

Midground Trees
Again, cards are used to apply the images for the midground trees.

However, note that the images are applied directly to the card UVs; there is no use of shaders or projection cameras. Alignment can therefore be achieved simply by transforming each of the cards.

The cards all have different X, Y and Z values to disperse the trees over the paved area.
I took this approach for speed and simplicity given that the elements are peripheral: they are small, distant and partially obscured by buildings. Moreover, because the trees do not overlap each other and are not subject to any significant change of perspective, they exhibit very few projection artefacts.

Steam Elements
I wanted to add steam elements to reinforce the theme of the scene and add more dynamic objects of interest.
I sourced two steam effect videos from Video Copilot (https://www.videocopilot.net/products/action2/). These are pre-matted elements, provided in 2K resolution and .MOV format.
The first effect was applied to image set B to imply a leak in one of the vertical pipes.

The image (below) shows a planar geometric element (known in Nuke as a Card) added to the scene and aligned in 3D world space to the pipe.

A section of the node graph (below) shows the steps taken to composite this element into the scene. The steam element is assigned to a shader (Project3D) and a new camera is placed close to the Card to project the video.
The Card is assigned to a ScanlineRender node, which converts the 3D scene to 2D from the perspective of the shot camera. This is then piped into the main node graph via a Merge node.
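The projection step above can be sketched with a simplified pinhole camera model: the projector camera decides which texture pixel lands on each point of the Card, and the shot camera decides where that point renders in the frame. All camera positions and the focal value below are made up for illustration and are not the actual scene values.

```python
# Two pinhole projections: one assigns texture UVs (projector camera),
# one renders the point to screen (shot camera).

def project(point, cam_pos, focal):
    """Project a world-space point (x, y, z) for a camera at cam_pos
    looking down -Z; returns an image-plane (u, v)."""
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    return (focal * x / -z, focal * y / -z)

card_point = (0.5, 1.0, -10.0)          # a point on the card, world space
projector  = ((0.0, 1.0, -8.0), 50.0)   # camera placed close to the card
shot_cam   = ((0.0, 1.5, 0.0), 50.0)    # the main shot camera

uv     = project(card_point, *projector)  # texture lookup for the card
screen = project(card_point, *shot_cam)   # where it lands in the frame
```

Because the projector sits close to the Card, the steam texture stays pinned to the pipe regardless of where the shot camera moves.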
Note the Mirror and TimeOffset nodes just below the steam clip.

The TimeOffset simply determines on what frame the video starts. I used this to start the clip 20 frames before the camera begins to move so that it would be in full flow at the very beginning of the shot.
The Mirror node is used because I needed the steam to flow from left to right, whereas in the default clip it flowed right to left. The node simply flips the clip about its Y axis, reversing the direction.
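The effect of these two nodes can be sketched on a toy clip: TimeOffset remaps which source frame is sampled, and Mirror reverses each scanline. The clip and offset values here are illustrative only (the real offset was 20 frames).

```python
# Toy TimeOffset + Mirror: a "clip" is a list of frames, a "frame" a list
# of pixels along one scanline.

def time_offset(clip, frame, offset):
    """Sample the clip 'offset' frames early, clamped to the clip's range,
    so it is already in full flow when the shot starts."""
    return clip[max(0, min(len(clip) - 1, frame + offset))]

def mirror(frame_row):
    """Flip a scanline horizontally, reversing the steam's direction."""
    return frame_row[::-1]

clip = [["wisp", "", ""], ["steam", "wisp", ""], ["plume", "steam", "wisp"]]
# Start the clip 2 frames early, then mirror it:
frame0 = mirror(time_offset(clip, 0, 2))
```

At frame 0 the composite already shows the fully developed plume, flowing in the opposite direction to the source footage.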

An identical workflow is used for the two steam elements billowing out of the floor vents on the leftmost building of image set D. The only difference was the values in the TimeOffset nodes. Because this building does not come into view until later in the shot, the clips were advanced by 120 frames. The second clip was then offset by a further 60 frames to create a staggered effect, which I feel is more dynamic and also disguises the fact that the same steam element is used for both emissions.

Atmospheric Fog
When producing a camera-projected matte painting I have found it helpful to render a depth pass from 3D for the full sequence, from the perspective of the moving shot camera. This is an AOV that ordinarily is not intended to contribute directly to the look of the shot but rather to drive other effects such as depth of field. It is a greyscale pass with the brightness value of each pixel determined by its distance from the camera in 3D space. Therefore, in this case, darker pixels correspond to areas of the image that are closer to camera and lighter pixels to those furthest away.

I have found that compositing this pass over the final image with an additive blend mode such as ‘Plus’ or ‘Screen’ creates a layer of diffusion that emulates the effect of precipitation in the atmosphere: a sense that the tonality and colour of distant elements bleed into the background.

The effect can then be dialled back until a more realistic value is achieved. In the image (below), note how the trees and distant buildings are lighter, implying that they are diffused by precipitation in the atmosphere.
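The fog pass can be sketched on single greyscale values: the depth value is screen-blended over the beauty pixel, and a mix amount then dials the effect back towards the original. The function names and the mix value are illustrative, assuming the dark-near / bright-far depth convention described above.

```python
# Depth-driven atmospheric fog: Screen blend plus a dial-back mix.

def screen(base, blend):
    """'Screen' blend: brightens without clipping above 1.0."""
    return 1.0 - (1.0 - base) * (1.0 - blend)

def fog(beauty, depth, mix=0.5):
    """Depth is bright for distant pixels, so distant areas are lifted
    the most; 'mix' dials the effect back towards the original beauty."""
    fogged = screen(beauty, depth)
    return beauty + (fogged - beauty) * mix

near = fog(0.5, 0.0)   # close pixel: depth is black, so it is unchanged
far  = fog(0.5, 1.0)   # distant pixel: lifted towards white
```

Near pixels are untouched while far pixels are lightened, giving exactly the depth-graded haze described above.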

Lens Distortion
The human eye expects to see lens distortion in screen-based media, but renders from 3D do not generate this artefact natively.
The shot camera in all my 3D work was configured to match a Sigma 50mm prime lens for the Arri Alexa, so I downloaded a distortion grid for this lens and used it to drive the LensDistortion node in Nuke.

The image (below) shows the lens distortion applied. Note, however, how the barrel distortion creates a black boundary around the edge of the display region.

To correct this, a Transform node is added below and a small scale increase applied to extend the image to fill the frame.
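The distort-then-overscale step can be sketched with a simple one-parameter radial model: barrel distortion pulls the frame edges inwards (leaving the black boundary), and a uniform scale pushes the image back out to fill the frame. The k1 and scale values are illustrative, not the real Sigma 50mm figures from the distortion grid.

```python
# One-parameter radial distortion plus a fill scale, on normalised
# coordinates with the frame centre at (0, 0) and edges at +/- 1.

def barrel(x, y, k1):
    """Radial distortion: points move along the radius by 1 + k1 * r^2.
    A negative k1 pulls edge pixels inwards, producing barrel distortion."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2
    return (x * f, y * f)

def distort_and_fill(x, y, k1=-0.05, scale=1.1):
    """Distort, then overscale slightly so the image fills the frame."""
    dx, dy = barrel(x, y, k1)
    return (dx * scale, dy * scale)
```

The centre of frame is untouched while edge pixels are drawn inwards and then scaled back out, mirroring the LensDistortion-plus-Transform setup described above.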

A Crop node concludes the node graph, simply trimming off any pixels outside the display area so Nuke does not have to process unnecessary data.

