Press reports put attendance at about 10,000, which is extremely low, of course. But, as I previously noted, it’s resulted in a very appropriate quaintness. You can tell that the people who have come here are very passionate. While one exhibitor lamented the lack of scientific research on display and the commercialization of the event (no need to wait for SIGGRAPH to publish your tech paper), Rob Cook (the recipient of the Steven A. Coons Award) gave what Chris Landreth described as a “spiritual, practical and visionary” speech about “why there still needs to be SIGGRAPH.”
“He put in a ‘Fool’s Hypothesis’ that SIGGRAPH is no longer relevant: the only problems left to be solved are minor and not that interesting. Then he proceeded to bash it apart. I think in the world he was talking about even GPU-based rendering would be a minor problem: the content coming out the other end isn’t different; you’re using different hardware to do it.”
Speaking of the very hot topic of GPU rendering for film, Dominick Spina, technology production manager of NVIDIA’s Digital Film Professional Solutions Group, briefed me on the new approach ILM/Lucasfilm took to the fire sim work on Harry Potter and the Half-Blood Prince by integrating NVIDIA GPUs into the process, enabling up to 50x speedups over traditional CPU methods.
“The rapid turnaround time means that visual effects supervisors can have their artists efficiently create multiple iterations or ‘takes’ of large photorealistic simulations to achieve the director’s vision under tight deadlines. Not only are these final renditions more believable, but these digital assets can now be fine-tuned to blend seamlessly with the live-action characters in the story, while leaving remaining CPU resources for less complex processes. We are so excited by the momentum and interest we are getting from all major vfx and animation studios. They are realizing that they can tap into the power of their Quadro FX graphics cards already in their workstations to accelerate their pipelines. This is definitely a trend and we will be seeing more proprietary studio tools, as well as commercial software products, moving in this direction, thus changing the way films are being made.”
Indeed, Prime Focus (formerly Frantic Films) is interested in porting fluid technology to the GPU with NVIDIA as well. “The cards have caught up with memory,” suggested President Chris Bond. Ian Fraser, head of R&D, said there are several options. “You can use 10 GPUs to run simulation. You can use Krakatoa or Spray. You can use NVIDIA cards to drive stereo testing. The big challenge is to use all of that parallel power effectively. Competitively, it’s what you need to do. The algorithms are getting better. It’s helpful for all simulations.”
In G.I. Joe, Prime Focus helps Cobra attack Paris.
Meanwhile, Prime Focus just finished G.I. Joe. They were initially hired to do the previs for the action and were awarded a portion of the aerial finale sequence. They did R&D on the matte painting pipeline involving clouds and a virtual Washington, D.C. They did particle work on Cobra’s weapon, the Nanomites, which eat metal and required hundreds of millions of particles (rendered with Krakatoa).
Visualization was also a hot topic: mental images introduced its new iray interactive lighting tool with global illumination, which was, arguably, the coolest new tool (more on that later). NVIDIA introduced OptiX, a programmable ray tracing engine that lets you build your own renderer or other ray tracing applications. Studio GPU demo’d the latest version of Mach Studio with such new features as displacement mapping and stereo cameras.
There was a Visualization panel featuring, among others, Steve Sullivan of ILM, Rick Sayre of Pixar and Justin Denton of Halon. I spoke with Sayre and Denton. “Because of the downturn, pitchvis is being used more and more,” Denton offered. He recently returned to Halon as a previs artist after heading cinematics at Midway, where he worked on Mortal Kombat vs. DC Universe. However, the guerrilla-style previs done on Eagle Eye taught him how to balance the pure creative and technical facets of the craft. “Now I always think about what a camera can do in a 3D scene.” He said virtual lighting will be the next big step in previs.
Sayre, a supervising TD, offered an interesting historical footnote: CG previs from 1983’s The Boy Who Could Fly. “It was fascinating to look at that in contrast to today and how many similarities there are. And that was just part of a historical digression about pre what? Because the notion of lumping previs into animation production is very problematic because it’s a process that went along fine by itself without any of this so-called previs. And I think answering the question, ‘What is it pre?’ ‘And when historically did it used to be pre?’ is helpful and even essential to get the right benefit of where it goes in production. So I also just talked about the different scenarios of how you go about using the techniques of previs in animation, and we’ve tried a whole lot of them and haven’t found the total right place. Not that there is one… every film is different and every director and creative team is different. One size fits all is definitely not the case. Previs is pornography: everybody knows what it is but nobody really knows how to define it exactly. So one of the points I made is that if you dropped an alien observer on Earth who knew all about Earth culture but not necessarily about animation workflow, if you dropped him into a layout review (certainly at Pixar), he probably would say that’s previs right there because you’re moving the camera around, you’re doing blocking, you’ve got your assets, you’re constrained by those assets… And yet in animation we don’t see layout as previs. Why is that? So it’s something I’ve been digging into because I think there’s a notion that a process of previs can be extremely helpful in animation, and helping to define it and find it and how to do it is a [useful] exercise.
“There’s something else about previs, certainly compared to production, which is you make a whole lot of mistakes; we do a whole lot of stuff we throw away, not because it isn’t any good, but because it doesn’t work and you learn something. So you can imagine a process where you do a whole bunch of work in previs, you throw 90% of it away and then you go into production. Now in production, you can say look at all that stuff that you threw away, but in fact, you don’t have to try that again in production because you’ve already tried it and you know it isn’t going to work or you’ve decided to do something better. I think there’s a really valuable point there. People learned really early on with the introduction of computer tools in all sorts of areas… that the introduction of undo meant you could make many more mistakes per second and that was actually really good. It could lead to better work.”
And what is Sayre learning so far prepping Brad Bird’s live-action 1906? “I’ll loop it back to animation: There are aspects of shooting boards in live-action where the board contains both emotion and camera direction that I believe is really valuable. And there is a flipside to it in animation: There is a view in animation that storyboards are not about camera direction. It’s almost the opposite of live-action. You wouldn’t have a storyboard in live-action, which is a stick figure that’s got a nice facial expression on it — that’s what actors are for. They will bring that reading to it on the day or through rehearsals and table readings and that will come out of working with the actor. In animation, the animators are the actors. But early on, because you don’t tend to shoot tons of coverage, you are exploring emotional arcs of performances through the drawings. So there’s an argument that says I’m just going to show you a bunch of drawings of faces with dialogue and that will give you the emotional read of the story and we’ll all be able to get on the same page with the emotional arc of the story. So if you do that completely unconstrained by camera and say, ‘I’m going to tell the board artist what to draw and it has nothing to do with camera framing,’ there is a role for that in animation and there are directors who really like it that way. Live-action brings all those constraints on top of it where it’s very natural to say, ‘No, this can’t be divorced from the set, it can’t be divorced from the environment and from the reality of the cameras. Is it a two-shot or is it not a two-shot?’ One of the most interesting things that we’re trying on our various projects is: Are there elements of the animation story development process that are applicable to live-action?”