All Aboard the CG Polar Express

The Polar Express makes a technological breakthrough by using performance capture. All images © 2004 Warner Bros. Ent. All rights reserved.

When Robert Zemeckis launched his digital center at USC four years ago, he sat at a computer monitor and confided that his dream was to one day shoot an entire movie virtually. Well, he soon got his chance with Warner Bros.' The Polar Express, the all-CGI Christmas fantasy starring Tom Hanks, based on Chris Van Allsburg's popular illustrated children's book, which opens today [November 10, 2004] in both standard 35mm and 70mm IMAX 3D. Along with Sky Captain and the World of Tomorrow and The Incredibles, The Polar Express represents a daring technological leap in CGI moviemaking. Wanting to capture the spirit of Van Allsburg's painterly look, Zemeckis chose to experiment with a new form of performance capture developed by Sony Pictures Imageworks and Vicon called ImageMotion. VFXWorld recently spoke with senior visual effects supervisor Jerome Chen about the unique challenges of making The Polar Express and what its significance means to the 3D community, which has already begun debating its technical and artistic merits.

Senior visual effects supervisor Jerome Chen found that motion capture didn't match the movie's needs. Instead, performance capture allowed the actors to interact.

Bill Desowitz: How did this process start?

Jerome Chen: [Senior visual effects supervisor Ken Ralston and I] were the creative and technical supervisors charged with coming up with a way of achieving this movie after a preliminary meeting with Bob, who basically said he didn't want keyframe animation. He was pretty sure it would have to be CG, since it would be hard to do this live action. He also wanted to preserve the visual spirit of the book, the pastel drawings. So we even toyed with the notion of shooting live action and treating it in post à la What Dreams May Come. You still have all of the effects plus, on top of that, an artistic process, which is really daunting.

BD: Were you already experimenting with performance capture?

JC: We had done motion capture for movies like Spider-Man and other movies at Imageworks. But let me make a distinction between motion capture and performance capture. We started calling it performance capture because we were grabbing the entire performance at once, meaning facial and body. The other stuff we had done for stunt sequences was for body performance. When I started looking into the state of motion capture at the beginning of the show in June 2002, it felt pretty primitive to us, meaning when they did motion capture for games and other action sequences, it was about the stunt, not about the facial performance, so you could either keyframe animate the face or grab a separate motion capture session where the actor sits still and mimes a facial performance, and a technical animator would glue the two pieces together. But in our picture we have these four children interacting with each other on this adventure, so it didn't make sense to capture everyone separately. So conceptually, what we needed was to create a place where you can get four actors together, they can look in any direction at each other, and you can record the performance.

That was the design spec. At that point, we contacted a number of motion capture equipment makers and talked to them about what we wanted to do. One of them told us it couldn't be done because there was too much data to capture, considering we were going to use a full marker set on the face at this point. Nobody had done what we were talking about, which was really odd. Also a little frightening. One of the main problems we had to overcome was how the cameras could take in so many facial markers. Our system alone has 152 facial markers. What we ended up doing was working with Vicon to develop their software so that it could take in the amount of data we're talking about at a quality you could reconstruct without a lot of noise in the markers.

Chen needed 72 cameras to provide enough coverage for four actors and their facial and body markers.

BD: So what was the breakthrough here?

JC: The breakthrough was coming up with the number of cameras, the configuration of the cameras and the pipeline, after you've gathered the data, to apply it to the characters. It turned out that we needed 72 cameras to provide coverage in this capture zone so we could grab four actors and their facial and body markers together. So that's 152+48 markers per person x 4. I think that's 80 gigabytes per minute. There's a lot of other technology that had to be created to manage it, bookkeeping things for the data to be processed and visualized.
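As a back-of-the-envelope check, the figures Chen quotes can be combined like this. The marker counts, camera count, and 80 GB/minute rate come from the interview itself; the per-camera breakdown is purely an illustrative calculation, not something the article describes:

```python
# Figures quoted in the interview:
FACIAL_MARKERS = 152   # markers per actor's face
BODY_MARKERS = 48      # markers per actor's body
ACTORS = 4
CAMERAS = 72
GB_PER_MINUTE = 80

# Total markers the system must track simultaneously: (152 + 48) x 4.
total_markers = (FACIAL_MARKERS + BODY_MARKERS) * ACTORS
print(total_markers)  # 800

# Illustrative only: the raw data rate the quoted 80 GB/minute implies
# per camera per second, if spread evenly across all 72 cameras.
mb_per_camera_per_second = GB_PER_MINUTE * 1024 / 60 / CAMERAS
print(round(mb_per_camera_per_second, 1))  # ~19.0 MB/s per camera
```

Eight hundred simultaneously tracked markers is the figure that, per Chen, no existing motion capture vendor believed was possible at the time.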

BD: What about the performance challenge?

JC: Part of what you're doing is capturing Tom playing an eight-year-old boy [along with four other adults]. So already you have differences in how much a muscle moves on Tom's face in relation to what's happening on a child's face. We wanted to get a character that looked like Tom when he was younger. We started scanning his son, who actually looked more like Rita [Wilson, his mom]. But we also realized that we didn't want to make him into an actor. So we came up with a design that Bob liked, and as we started to apply motion to it, we made a couple of tweaks so that the kid's eyebrows and mouth looked a little more like Tom's, because the performance actually translated a lot better; Tom has these really arched eyebrows and does a lot of acting with his brow; he doesn't move his face that much. It's interesting how we were able to analyze his acting in that manner.

Chen created a new smoke and snow renderer called Splat.

BD: Talk about how the production process was split into different phases.

JC: You have the performance capture first, then the integration process. After a particular performance take is selected by Bob and his editor [Jeremiah O'Driscoll], it is sent to Imageworks and is ordered up. We then go through the process of applying the performance data to the digital character in a medium resolution. The digital characters are then placed into the virtual set and the props are put in. And at that point it goes into layout, which is similar to a traditional keyframe movie. So this is where we begin to talk about the point of view of the movie. What does he want the camera to be showing us? One of Bob's trademarks is visual storytelling, so the camera is very important to him. What was liberating about this digital process was that he was able to concentrate totally on camerawork as a whole separate phase from performance capture. And you can't even begin editing yet because all you have at this point is video reference. So we created this process called Wheels, where we brought in a real cameraman to teach computer animators how to act like cinematographers, and it would feel like operating a remote camera on a gearhead as if they were on a Technocrane. The wheels basically let you pan and tilt, and all we're doing is recording the input from those wheels that will drive this virtual camera later, so you get all the nuances of Bob's camerawork played back in realtime. We didn't want to keyframe the camera, which gives you a different look.
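The core idea of Wheels, as Chen describes it, is to record the operator's raw pan/tilt wheel input and replay it onto a virtual camera, rather than keyframing the move. A minimal sketch of that record-then-replay pattern follows; this is an illustration of the concept only, with invented names and a simple linear interpolation, not Imageworks' actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class WheelSample:
    time: float   # seconds since the take started
    pan: float    # degrees
    tilt: float   # degrees

def record_take(wheel_readings):
    """Store the operator's raw input verbatim -- no smoothing, no keyframes,
    so every hand-made nuance of the camera move is preserved."""
    return [WheelSample(t, p, tl) for t, p, tl in wheel_readings]

def camera_at(take, t):
    """Replay: interpolate the recorded samples at playback time t to
    drive the virtual camera's pan/tilt."""
    for a, b in zip(take, take[1:]):
        if a.time <= t <= b.time:
            u = (t - a.time) / (b.time - a.time)
            return (a.pan + u * (b.pan - a.pan), a.tilt + u * (b.tilt - a.tilt))
    return (take[-1].pan, take[-1].tilt)  # hold the last recorded pose

# A tiny hypothetical take: the operator pans right while tilting up.
take = record_take([(0.0, 0.0, 0.0), (1.0, 10.0, 2.0), (2.0, 30.0, 5.0)])
print(camera_at(take, 1.5))  # (20.0, 3.5)
```

The design choice Chen highlights is exactly this: driving the camera from recorded human input yields a different, more natural look than a keyframed move.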

BD: What other new technology did you have to create?

JC: We had to create smoke and snow and water and all of the effects animation. I think this was one of the largest effects animation crews that we've had at Imageworks. Traditionally smoke and water effects take so long to look right in the computer, but because we were going for a more stylized look, we decided to create a new renderer called Splat. This was our smoke and snow renderer, and it was very fast. We used old technology to create some tests of smoke to be composited in, and those passes took 16-20 hours a frame to render. The simulation to get the smoke movement was done pretty quickly, but to render it you really had to like the movement because you only had one chance, and we had hundreds of shots that required smoke. This new renderer would take 20 seconds, which was huge. That meant the effects artists could do a lot of iterations of movement and lighting until we really liked it. I thought the smoke and all those effects, those subtle interactions, turned out great; it was one of my favorite parts of the movie.
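The render times Chen quotes imply a dramatic speedup; working out the factor (from the article's own numbers, nothing assumed beyond unit conversion):

```python
# Old renderer: 16-20 hours per frame. Splat: about 20 seconds per frame.
old_hours_low, old_hours_high = 16, 20
splat_seconds = 20

speedup_low = old_hours_low * 3600 / splat_seconds    # 57600 s / 20 s
speedup_high = old_hours_high * 3600 / splat_seconds  # 72000 s / 20 s
print(f"{speedup_low:.0f}x to {speedup_high:.0f}x faster")  # 2880x to 3600x faster
```

A roughly 3000x speedup is what turned smoke rendering from a one-shot gamble into something artists could iterate on freely.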

Is it animation or a hybrid? Zemeckis directed live actors, although the movie is rendered in CGI.

BD: What was different about your role here?

JC: It was hard, but Ken and I got to do a lot of fun things because we weren't just relegated to fitting our imagery into the movie; we got to make the entire image. We got to light it, we came up with a great look, we got to make decisions about character design and colors, and Bob was a great collaborator. In terms of the lighting, that's a touchy subject because the DP, Don Burgess, didn't design the lighting and a different DP [Robert Presley] shot it. So it's interesting how everyone's role becomes fractured. I don't know how to even define what a visual effects supervisor does anymore. There are so many different aspects.

BD: Particularly with the larger role that previs now plays.

JC: The interesting thing about previs in a CG movie is that the camerawork can become the shot. I love previs in live action because I know what lens I want to use and what kinds of rigs I need to build, but when I shoot it, it's always different because you have to deal with reality and you have all these different compromises. But here I only had to worry about getting it done on time.

BD: So how would you define what Polar Express is? Is it animation? Is it a hybrid?

JC: I don't have a clear answer. We didn't know what it was when we were doing it because everything is so different about it. I don't know how to categorize it. I mean, I was asked to cut the visual effects Academy reel yesterday. My reaction was, Oh, really? This is visual effects? Then I have to think about it. I guess I have to cut more of a storytelling piece. You can't say it's an animated movie because, from Bob's point of view, he directed actors, not a whole crew of animators.

BD: But if you look at the end result, it's CG animation.

JC: True, it's rendered in CGI.

The actors' eyes and mouths were animated and were not part of the performance capture process.

BD: And there's lots of animation, including keyframed animals, and the eyes and mouths of the humans are keyframed and not part of the performance capture.

JC: Yes, that's another area of debate. Ken and I used all of the bag of tricks from visual effects that we've learned to create illusion. What's interesting is that all the different skill sets that are put into the movie break down the traditional barriers. But it's not like this was a photorealistic endeavor. It's not going to supersede the way we do movies. It's creating a new genre: movies that are not keyframed à la Pixar or DreamWorks, that have a different texture of movement that is more compelling in one sense.

BD: Where do you go from here on the next ImageMotion picture, Monster House?

JC: We have the next generation Vicon system. The volume and all the techniques are bigger, which is better. We are no longer confined to a 10 x 10 area. I think Monster House is an intriguing example of this technology because the human characters are a little more caricatured, so artistically they have to find out how they move when their performances are driven by real people.

BD: What are some of the other improvements?

JC: It's technology; it's how long you can record for.

BD: And how much keyframe embellishment is there?

JC: I don't know. But we'll always have to do tongues, because there isn't enough volume on the lips when you purse them, and eyeballs, until we have a way of tracking them better. We have better ways of doing the eyeballs already, because you can't put a marker on them. Contact lenses don't work because they make you look like aliens and they swim over your eyeballs anyway, and you're getting incomplete information, so you might as well have an animator work on the eyeballs using a video reference. What's interesting is that the performance capture data had enough fidelity that you could actually see where the eyeball was looking just from the bulge in the eyelids. It's weird. And very complicated.

Bill Desowitz is editor of VFXWorld.

Bill Desowitz, former editor of VFXWorld, is currently the Crafts Editor of IndieWire.

Source: https://www.awn.com/vfxworld/all-aboard-cg-polar-express
