For over fifty years, James Bond has brought us beautiful women, “shaken not stirred,” awesome spy gadgets and some of the most impressive opening sequences you will ever see on film. Skyfall is no exception as we are taken on a visual journey deep into the psyche of the famous 007.
For Skyfall, London’s Framestore worked on all of the digital effects needed for this impressive sequence. To learn more about how Houdini was used, Side Effects talked to Diarmid Harrison-Murray, Head of 3D Commercials at Framestore and CG supervisor for the title sequence.
An Interview With Diarmid Harrison-Murray, CG Supervisor At Framestore London
Could you please tell us a bit about Framestore's involvement in the production of Skyfall?
Daniel Kleinman, the director of the title sequence, who worked with us on previous Bond title sequences such as Casino Royale, approached us early on to collaborate on the new film’s iconic opening sequence.
Kleinman started the process by giving us a number of ideas, themes and images that he was interested in exploring. In turn, we started running tests to help us develop those broad concepts into moving images. Around that same time our partner company, The Third Floor, had taken early storyboards and worked on a 3D previz. We also started technical previz of some of the shots that required careful planning before the shoot, such as the mirror sequence.
The shoot took place over three days at Pinewood Studios. Working on an underwater stage, we spent one day shooting the girls and two days shooting Daniel Craig, along with a variety of VFX elements such as the burning targets.
From that point on we had about 3 months to develop all the 2D and 3D elements. Our team was led by William Bartlett (VFX Supervisor) and myself. It was a fantastic project to be involved in as we had a lot of creative freedom. The core Houdini team, including Martin Aufinger, Charlie Bayliss and myself, handled a big chunk of the 3D work.
Can you please give us a breakdown of all of the elements within the opening sequence where you utilized Houdini?
Houdini was our primary tool, with the exception of some of our work on the vault sequence, the gun barrel, and some of the modeling and texturing. The most notable Houdini shots involve a blood-heart-skull, a dragon and a graveyard.
For the blood-heart-skull scene we built a procedural system that allowed us to lay out the complex geometry quickly and simulate the fluid traveling along the blood vessels efficiently. The setup used a little bit of every part of the software. The blood vessels were created using L-systems as a source for our fluid simulation. Using the animation of the heart as a starting point, velocity and temperature were pushed down the branching structures to create the flow and form of the network of fluid veins. The core elements of the shot had bespoke simulations, but we also created a set of standalone veins that we could use to expand the world out from the central animation. Additionally, there was the transformation simulation as the heart becomes a skull. Without much trouble we were able to create custom velocity fields based on volume gradients to manipulate the fluid without killing its inherent fluid feel.
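For readers unfamiliar with L-systems, the branching idea can be sketched in a few lines of standalone Python. This is purely illustrative: the rewrite rule, angles and function names here are invented, and the production setup used Houdini's L-system tools with far more detail before feeding the result into the fluid simulation.

```python
import math

# Minimal L-system sketch: rewrite an axiom with a branching rule,
# then interpret the string as 2D turtle segments that could seed a
# vein-like skeleton. (Illustrative only; not Framestore's setup.)

RULES = {"F": "F[+F]F[-F]F"}  # a classic branching rewrite rule

def rewrite(axiom, rules, iterations):
    """Apply the rewrite rules to the axiom a number of times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def interpret(s, step=1.0, angle=math.radians(25)):
    """Turn the L-system string into line segments via turtle graphics."""
    x, y, heading = 0.0, 0.0, math.pi / 2  # start pointing 'up'
    stack, segments = [], []
    for ch in s:
        if ch == "F":                       # draw forward
            nx = x + step * math.cos(heading)
            ny = y + step * math.sin(heading)
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif ch == "+":                     # turn left
            heading += angle
        elif ch == "-":                     # turn right
            heading -= angle
        elif ch == "[":                     # start a branch
            stack.append((x, y, heading))
        elif ch == "]":                     # return to the branch point
            x, y, heading = stack.pop()
    return segments

veins = interpret(rewrite("F", RULES, 3))  # 5^3 = 125 segments
```

Each rewrite multiplies the detail, which is what makes L-systems convenient for growing a dense, organic vein network from a tiny rule set.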
For the dragon sequence we used Houdini not only for procedural modeling, volumetric effects, lighting and rendering, but also for rigging and animation. Essentially, the entire dragon sequence was created in Houdini.
The dragons could be positioned by laying out simple paths. Once the basic layout was approved, we added detail to the animation by procedurally adding imperfections to the movement as well as keyframing some parts such as the head. We then had to run dynamic simulations for all the body elements.
The setup, developed by Martin Aufinger, was reasonably complex. It was essential to add many dynamic elements such as the beard, fins and additional floppy pieces of cloth in order to give the creature interest and scale. What complicated matters was the need to layer these dynamic effects. The whole build was approached in a procedural way, as we were not sure about the size of the dragons when we set out to build it. Up to the very end we were able to change the length and shape of the dragons' bodies dynamically; the rig, all the additional body elements and the textures adjusted automatically.
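The idea of laying a body out along a path, layering procedural imperfections on top, and keeping the whole build parametric so the length can change late on can be sketched in plain Python. Everything here (the curve, the wobble function, the segment counts) is invented for illustration; the production rig was built in Houdini with simulation layered on top.

```python
import math

# Sketch: position a body along a simple path, layer a procedural
# 'imperfection' over the base motion, and keep the segment count a
# parameter so the body's length can change at any time.

def path(t):
    """Base layout path: a gentle serpentine curve (t in [0, 1])."""
    return (t * 10.0, 0.0, 2.0 * math.sin(t * math.pi * 2))

def wobble(t, time, amp=0.3, freq=3.0):
    """Cheap periodic imperfection layered over the base motion."""
    return amp * math.sin(freq * t * math.tau + time)

def body_points(n_segments, time):
    """Sample the path and stack the imperfection layer on top."""
    pts = []
    for i in range(n_segments + 1):
        t = i / n_segments
        x, y, z = path(t)
        # A production setup would stack several such layers, plus
        # keyframed animation and dynamics simulations.
        pts.append((x, y + wobble(t, time), z))
    return pts

# Because everything is driven by n_segments and a normalized t,
# the body length can change late in production and the layout,
# like the dragon rig described above, simply adapts.
long_dragon = body_points(200, time=0.0)
short_dragon = body_points(50, time=0.0)
```

The key design point is that nothing downstream hard-codes a point count: every layer reads the normalized parameter `t`, so resizing the creature is a one-number change.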
In order to achieve the even interior lighting effect of such a long but thin creature we had to add a large number of geometry lights to each dragon. Mantra handled the challenge very well.
I worked on the graveyard scene, which was another major fully-CG scene created using Houdini. As with many other scenes, it was not clear what the sequence was supposed to look like when we started creating it. Starting with models of gnarly trees, knives and gravestones, we were able to lay out the shot and experiment with the mood. We used volumes and creative lighting to illustrate what the sequence could look like and were able to show atmospheric-looking tests to Daniel Kleinman.
Once he had seen our tests, he approved the mood and gave comments about how he saw the sequence evolving. It was quite an enjoyable look-development task, as we could really craft the scene by adding more volumes, adding details to the environment (for example, using fur and wire simulations for the weeds on the trees) and tweaking the lighting.
Other scenes created in Houdini include the vortex sinkhole and seabed environment in the opening shots, the red Skyfall lodge scenes, the underwater bleeding target scene as well as some 2D fluid simulations for the kaleidoscope sequence.
What were some of the challenges that you faced over the course of this production and how did Houdini help you overcome them?
Houdini played a big part in overcoming our production issues. Thanks to Houdini's procedural nature we were able to keep various elements flexible right up to the late stages of post-production and react to client comments as well as bigger changes in the edit. This was especially true for the dragon sequence, which changed length a number of times, and the blood vessel scene, which needed to be synced to the music.
In general Houdini allows for the management of complex scenes in a comfortable fashion. With easy access to the data at all levels of the scene and with nothing being too black box, changes are easy to accommodate, debugging is quicker, and developments from one scene can be redeployed into another painlessly. With a lot of geometry and volumetric data it is important to stay on top of the flow of data. Ultimately when your scenes are behaving themselves you can really focus on the important things, like making the end images look great.
With the dragon sequence we had a good experience with the wire and cloth simulation tools. The dragon was set up in a way that let us adjust the animation and then kick off a chain of about eight dynamics simulations that either depended on or influenced each other. All of this had to be render-ready for review the next morning. The whole process was fully automated by the end.
Another point worth mentioning is that we had an experienced Maya TD, Charlie Bayliss, manage the bleeding-target fluid simulations and renders. He was able to pick up Houdini and use it successfully in production in a very short time.
You mentioned earlier on that you used Mantra in the Dragon sequence. Can you tell us a bit more about how you utilized Mantra throughout the project as a whole?
Mantra was the main renderer on this project and almost all the CG was rendered using it. We needed a strong volume rendering solution and also a renderer that could handle the challenges of the fully CG scenes.
Volume rendering was probably one of the biggest technical challenges of the sequence. Mantra handled the task amazingly well. Many of the key sequences really benefited from casting shadows through several layers of volumes to create an atmospheric look. The volume SOPs offer a great toolset for managing and optimizing scenes with a lot of volumes. Additionally, the volumes look pretty good in the viewport, which meant we could do a lot of useful work without having to render anything. The majority of the scenes were rendered using Physically Based Rendering (PBR). This included all the volumes, as we found it produced beautiful shading and more definition in the volumes. It also turned out to be more efficient to render with PBR instead of micropolygon rendering because its stochastic sampling offered great quality-versus-time control.
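The "quality-versus-time control" of stochastic sampling comes down to a basic Monte Carlo property: noise in an estimate falls off as one over the square root of the sample count, so render time maps predictably onto grain. A toy Python sketch (the "scene" is just a random integrand, nothing from Mantra) makes the relationship visible:

```python
import random
import statistics

# Toy Monte Carlo sketch: estimate a 'pixel' value from N random
# samples and measure how the noise (grain) shrinks as N grows.
# The integrand is invented for illustration; a renderer would be
# averaging traced light paths instead.

def sample_radiance(rng):
    """Stand-in for tracing one random light path (true mean = 0.5)."""
    return rng.random()

def estimate_pixel(n_samples, seed=0):
    """Average n_samples random samples, like one pixel's estimate."""
    rng = random.Random(seed)
    return sum(sample_radiance(rng) for _ in range(n_samples)) / n_samples

def noise(n_samples, trials=200):
    """Spread of the estimate across many trials: the visible grain."""
    estimates = [estimate_pixel(n_samples, seed=s) for s in range(trials)]
    return statistics.pstdev(estimates)

# 16x the samples should cut the noise by about 4x (sqrt scaling),
# which is exactly the quality-vs-time dial stochastic sampling gives.
low_quality_noise = noise(16)
high_quality_noise = noise(256)
```

That square-root scaling is why throwing more samples (and time) at a stochastic render gives smoothly diminishing, but predictable, returns.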
Mantra's interactive preview rendering (IPR) was another helpful tool on the job. It allowed us to develop complex shots efficiently by producing low-resolution preview renders to get a feel for the lighting and mood of a scene. Once we liked our tests, we were confident that by increasing the resolution and quality settings we would get great results.
In Skyfall, did you find it easier at all to make creative decisions for a shot because of your Houdini pipeline?
Early in the process, Houdini helped us prototype ideas quickly. This is really helpful for creative decision making because we could quickly produce tests that helped suggest a creative direction. The procedural nature of Houdini lends itself to this approach: you can literally branch off a network to explore another thought or idea. The other major benefit Houdini offers is a degree of flexibility that allows changes to a scene late in its production, which makes it much easier to accommodate inevitable last-minute creative changes.
How does Houdini fit into your overall VFX / Animation pipeline at Framestore? Does it integrate well with other toolsets?
Houdini is used by both the film and the commercial teams at Framestore. On the film side, Houdini is used specifically for VFX work. They have done a lot of great work integrating it into the wider film pipeline and toolsets. Proprietary file formats and tools allow for easy exchange of volume, point and geometry data. Within commercials, Houdini is used for a broader range of work than just VFX.
We have successfully used it for VFX, creature and crowd jobs, but we also like using it for more creative look-development projects such as Skyfall. Framestore traditionally works on a lot of creature jobs, and some of those have been done using Houdini/Mantra in the past. We have also done a lot of fur and feather work with Houdini and Mantra. Houdini Digital Assets (HDAs) mean that we have to put less time into proprietary asset management, as they offer a starting point straight out of the box.
In general, we often use Houdini to solve whatever technical challenges come along, so the terrain is constantly changing. We certainly use VOPs in all their different contexts a great deal. When you work in the same small team, you start to develop a shared philosophy of how to approach a technical challenge and how best to skin a cat! Working on Skyfall we spent a lot of time in DOPs playing with fluids, but we never really know what the next challenge will be.