Customer Success Story :: Pixomondo
Interview with Mohsen Mousavi
22 October 2008
Please tell us about Yourself?
My name is Mohsen Mousavi; I am the lead FX/crowd technical director and head of technology at Pixomondo. I started in computer graphics years ago on the Amiga 500, also known as the A500, and made my way through the very first release of Kinetix 3D Studio under DOS. I studied art and music at the Music Academy of Odessa, but VFX, crowd simulation, logic, procedural animation and complex FX scenarios have always been my passion. After graduating from the Academy and developing a number of different tools in the field of crowd simulation and FX, I joined Pixomondo on the Red Baron project, where I was responsible for a number of major effects. Following that success, Pixomondo pushed me towards a new challenge: building an advanced crowd pipeline with a highly customizable workflow, in which rendering is one of the most important aspects.
Could You please tell us a little bit about the Company You work at?
At our VFX department in Ludwigsburg, a highly specialized and international team of up to 74 digital artists, supervisors and coordinators works on creating invisible effects elements for feature films.
The whole production process is carefully and deliberately planned, as well as professionally built and monitored. This ensures that a very large number of shots can be worked on at the highest quality settings in an extremely economical fashion.
What is the Red Baron, what is the story behind the project?
A story about the legendary Manfred von Richthofen, who became the most successful fighter pilot of World War I by shooting down 80 Allied planes.
Germany's most expensive feature film in 2008, the $22.6 million (€18 million) English-language production “The Red Baron”, began principal photography on July 3rd 2006 in Prague.
Could You please describe the scale and complexity in creating such a project?
With VFX shots composing more than 30% of the film's total playing time, Pixomondo carried a huge responsibility in making this movie work. Therefore, it was absolutely necessary for us to work hand-in-hand with every department of the film's production from the very beginning.
How much time does it take to design and refine the visual effects?
Over 14 months of production. Broad experience, combined with the rigid discipline of all artists and staff members involved, made it possible to finish all 430 shots exactly on schedule on April 30th 2007.
Since February 2006, Pixomondo has been continually recruiting artists from all over the world in order to help transfer the international flair of the set over to the postproduction office in Ludwigsburg.
Due to the varied origins of those involved, very little German was spoken during the course of the project. Our diverse team consisted of matte painting, animation and SFX specialists from the US and South Africa, lighting/shading- and CG-experts from Spain and Israel, as well as compositors from Italy, Scotland, Ireland, Denmark, Poland, and many other backgrounds.
Which part of the project was the most difficult to fulfil?
Communication, a stable pipeline structure and keeping all the different individuals working together towards the big picture are the most challenging aspects of a project of that scale.
During the actual shooting from July to October 2006 in Prague and Hohenstadt (Baden-Wuerttemberg, Germany), there were three to four VFX set-supervisors on hand at all times, working closely with the director. If any idea seemed at first to be too unrealistic, it was possible for them to intervene or discuss the issue right on the set. This resulted in several worthwhile compromises that met the satisfaction of everyone and saved plenty of rework in postproduction.
How many people and what specialists by type of team were involved in the making of VFX?
Pixomondo’s pipeline is a set of different departments that are working together as a team and are responsible for different stages of a shot.
How is coordination of the working process achieved between the different teams – does each team do its own work in completing the task and passes it on to the next team or are all teams simultaneously in communication with each other and details are ironed out during the work process?
As I mentioned before, Pixomondo is a combination of different departments with different responsibilities. Every department has a lead, who organizes the teamwork and reviews the results before they go out in the regular dailies to the rest of the team.
In terms of live action shots, the Matchmove Department would take care of tracking the shot and give the rest of the team the basic setup of the scene, which would grow throughout the pipeline. After the matchmove is done, the department would publish the final scene and this is where the next department would continue with the shot.
For the Red Baron we were dealing with a lot of full CG shots that were entirely done at the studio. For those shots, the animation department would take over the scenes from the previz department and reuse as much as they could as a basis to work with. Fortunately, the previz stage of the Red Baron was very well planned, so it could be used as a basic setup for the final shot, without reblocking the whole camera or the basic animation of the planes. In both cases, live action or full CG, animation supervisor Daniel Loeb would make sure that the storytelling flows from shot to shot as a big picture. It is what we call a "layout animation", which is more about continuity within the whole sequence than about the individual hero animation of a certain airplane. The layout animation is put together in the edit and reviewed in the dailies, to make sure that everything works together as one. For example, scenarios of dog-fight sequences and relationships between different corners of the story are discussed at the layout stage. Once the layout stage is approved, the animators start to fine-tune the individual motions and work on the so-called secondary animation, which in the case of airplanes would be all kinds of minor motions that are part of the airplane - wind shake, propellers, engine, wires and so on.
Depending on the type of the shot, at the same time, the digital environment department would take care of the blocking of the landscape and the digital set as well as the sky and the digital clouds. Environment supervisor Bjoern Mayer would organize the look and design of the elements with his team and would publish the result down the pipeline.
When the shot is approved by the animation department, the published file flows to the FX department.
With Red Baron we had all kinds of FX scenarios, from a tiny smoke trail in the background to hero destructions or explosions right in front of the camera. In a two-plane crash scenario, the animators would animate the airplanes all the way to the moment of crash, then hand it over to the FX department, and we would do the rest of the shot from the moment of impact, including the body and all the extra detail within the simulation. For geometrically-oriented effects like rigid body simulation, cloth, crowds or debris, the FX department would publish the shot for the render department, so they could render the whole thing together. For volumetric-oriented effects like smoke trails, explosions, fog and digital fire, the FX department would be responsible for shading and rendering the necessary passes for the compositors.
The rendering and shading department would take the published scene from either the animation or the FX department (based on the type of shot) and render the necessary passes for the compositors. On Red Baron we had 4 major air battles, each with a locked lighting condition. So the render artist would open the scene and run a series of tools developed by the shading TD team and shading supervisor Michael Langrebe, which would apply the desired lighting condition to the scene, automatically recognize the type of the plane, and apply the right shader to it. So we could have a good start with just a few clicks, and from there we would continue tweaking on a shot-by-shot basis. After the very first render of the necessary passes, an automated comp process would put the rendered elements together, and with a bit of tweaking we could have a slap comp to discuss in the dailies. So the render and comp departments would work together all the way to the final look.
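The automated setup described above - recognizing the plane type and applying a sequence-level locked lighting condition - can be sketched roughly as follows. All names here (plane prefixes, shader identifiers, the sequence table) are invented for illustration; the actual tools were in-house scripts built around 3ds Max and V-Ray.

```python
# Hypothetical sketch of an automated shader/lighting assignment pass.
# Node names, shader identifiers and the sequence table are invented;
# the real tools were in-house 3ds Max/V-Ray scripts.

# Shader lookup keyed by plane type, recognized from the node name prefix.
PLANE_SHADERS = {
    "fokker":   "shd_fokker_red",
    "albatros": "shd_albatros_green",
    "sopwith":  "shd_sopwith_camo",
}

# Locked lighting conditions, one per major air battle sequence.
SEQUENCE_LIGHTING = {
    "battle_01": "rig_overcast_noon",
    "battle_02": "rig_sunset_backlit",
}

def auto_setup(scene_nodes, sequence):
    """Pick the sequence's locked lighting rig and assign the right
    shader to every recognized plane. Returns the assignments made."""
    lighting = SEQUENCE_LIGHTING[sequence]
    assignments = {}
    for node in scene_nodes:
        plane_type = node.split("_")[0].lower()
        shader = PLANE_SHADERS.get(plane_type)
        if shader is not None:   # unrecognized nodes are left for manual tweaking
            assignments[node] = shader
    return lighting, assignments

lighting, shaders = auto_setup(
    ["Fokker_hero", "Sopwith_02", "cloud_layer"], "battle_01")
print(lighting)   # rig_overcast_noon
print(shaders)    # {'Fokker_hero': 'shd_fokker_red', 'Sopwith_02': 'shd_sopwith_camo'}
```

The point of such a pass is only to give the artist a good starting point "with a few clicks"; everything is still tweaked per shot afterwards.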
Which of the scenes has been the greatest challenge to You personally and to the whole team of VFX specialists?
We had a number of complicated and challenging shots in the show that required extra attention and energy. Every department had its own challenging shots that were a bit more difficult than the others. Sometimes a shot would be a big challenge for the animators but an average shot for the render or comp department, and sometimes the animators would just block the shot out and the rendering and shading department would deal with extra close-up renders that had to be addressed on a shot-by-shot level. Of course, the same applies to the other departments.
To me personally, the crash of the observation balloon and the crash of the two airplanes in the night air battle were fair challenges.
After long research we realized that there was not a single piece of reference footage we could use to get an idea of how the whole scene should look, which made it much harder to develop the right look. I did a lot of different tests on the characteristics of the cloth simulation, and we discussed them all the way with animation supervisor Daniel Loeb to make sure we were heading in the right direction.
I developed a simulation of an observation balloon which could be pressurized (to some degree)! The balloon simulation had to reproduce the fact that the material stretches as it is pressurized.
The simulation had to recreate the elasticity of the balloon, which may not be constant over the entire surface, and it had to model the event when the surface develops a tear and the balloon bursts. The bursting event had to model the deformation of the surface of the balloon over time.
As the hole got bigger, more air escaped and the balloon started to shrink. On the one hand, the basket (which is connected to the balloon via the ropes) pulled the balloon down, while at the same time the hot air lifted it up.
We had to take care of every single part of the rope structure and constrain them together in a dynamic fashion. We developed a tool to analyze the rope structure and create dynamic knots between the ropes that could simulate pressure and initialize the basket and the observation balloon (cloth simulation).
A procedural displacement map was added on top of the basic simulation to enhance the surface deformation and the wrinkles.
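The competing forces described above - internal pressure stretching the envelope, air escaping through a growing tear, and buoyancy fighting the weight of the basket - can be illustrated with a toy, lumped-parameter model. All constants and the one-variable pressure law are purely illustrative assumptions; the production setup was a full per-vertex cloth simulation with dynamic rope constraints.

```python
# Toy model of a pressurized balloon with a tear. All constants are
# illustrative; the production version was a full cloth simulation.

def simulate_burst(steps=100, dt=0.1):
    volume = 1.0    # normalized envelope volume (1.0 = fully inflated)
    hole = 0.01     # normalized tear size; grows as the envelope rips
    history = []
    for _ in range(steps):
        pressure = volume             # stretched envelope -> higher pressure (toy law)
        outflow = hole * pressure     # air escaping through the tear
        volume = max(0.0, volume - outflow * dt)
        hole = min(1.0, hole * 1.05)  # tear widens as stress concentrates at its edge
        # net lift: hot-air buoyancy scales with volume, basket weight is constant
        lift = 1.2 * volume - 0.8
        history.append((volume, lift))
    return history

history = simulate_burst()
# The balloon shrinks monotonically, and eventually the basket's weight wins:
# net lift starts positive and ends negative.
assert all(a[0] >= b[0] for a, b in zip(history, history[1:]))
```

Even this crude sketch reproduces the qualitative behaviour of the shot: the balloon holds itself up at first, then sags faster and faster as the tear widens.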
Together with FX animator Pieter Mentz we worked on the volumetric effects and the extra debris to finalize the shot.
Which historical battles were the most exciting air combat scenes reproduced from?
During the preproduction stage we looked at a lot of documentaries and motion pictures set around the First World War. We watched movies like The Blue Max over and over and extracted a lot of different clips from them for our category-based reference library. We put a lot of effort into the previz stage to design dramatic dog-fight and air battle sequences, which could support different aspects of the director's vision and storytelling.
What special software and VFX tools did you use for that?
For Red Baron we used 3ds Max © as our core 3D platform, V-Ray for rendering all the way through, a combination of FX-oriented plugins like FumeFX, Afterburn and Particle Flow Box#3, and Shake for compositing - and of course a lot of in-house development around every single tool, application and department, which is the key for a production of that scale.
In what way has the historical authenticity in the design of the planes been reproduced? What did you use as a prototype?
During the production design and previz stage, a team of researchers collected all kinds of materials that could be used as references throughout the actual production. A big library of still images from a number of different sources, all having something to do with the First World War or other aspects of the production, was gathered together in a category-based structure. We also collected all kinds of historical documentaries and classical motion pictures about the First World War, which were discussed, cut into small clips and fed into our category-based library, so that later on, for example, the animators could study the motion of a particular kind of airplane, or the FX team could study different aspects of crashes and explosions, by looking through hundreds of clips in the right category.
The animators had a lot of meetings with real pilots, at which the test results were discussed. The pilots would explain the nature of aerodynamics and flight strategies on a daily basis, bringing the animators closer to the character of the combat sequences and driving their creativity towards natural-looking scenarios.
We learned that during the process of reproducing the clouds you have created your own tool for doing that. Tell us a few words about this tool. What were the more interesting moments?
From the beginning of the production, it was clear that one of the most challenging tasks would be dealing with photo-real yet manageable clouds! During the preproduction phase, we went through all the commercial solutions out there and studied their advantages and disadvantages.
Soon we realized that there was nothing out of the box that could help us achieve what we were after. Some of the tools were simply not controllable, not optimized, or needed an extra amount of work to integrate into our pipeline. So we decided to develop our own engine. CG supervisor Boris Schmidt had many discussions with the environment department and studied their needs for an artist-friendly tool that would be as efficient as possible. After a few weeks of development the first results were amazing. Compared to other solutions it was way more optimized and practical, so we decided to concentrate on that, and we ended up doing all the clouds in the whole show using our in-house engine.
How often does it happen that during the work process new technologies are created within the search for more refined solutions?
At Pixomondo development is the core of the company. We have a complete custom pipeline, which is a set of different modules and tools that build the relation between different departments and production phases. The core controls and automates a lot of aspects of production from the artist to the supervision level. Besides the pipeline development, we have a lot of technology development, which is task-oriented. For example, we have very strong developments in the field of Crowd Simulation with a lot of in-house tools that I have been working on directly, which address different types of crowd scenarios and workflows.
BehaveIT, an in-house multiagent-oriented crowd tool, which works seamlessly with Massive software,
AirIT, a custom-built bridge between Massive, 3ds Max © and Renderman,
CardIT, a set of tools for card-based crowd scenarios
and a lot more that are confidential.
Whenever we hit a wall using the standard workflow and packages, we are confident to develop a solution that challenges the problem.
What is the importance of rendering in such a production?
Red Baron was a true story with a historical background. We had over 30 minutes of full CG shots that had to look 100% photo-real. The accuracy and correct physical behaviour of the renderer, and its perfect integration into the existing structure, were very important for the production.
Do you use different kinds of renderers?
As I mentioned earlier, our pipeline is structured around V-Ray. We have also integrated Renderman seamlessly into our pipeline to take advantage of the procedural dynamic load algorithm of Massive (the crowd simulation tool), which is available in Renderman.
How do you define the applicability of the different renderers in the various technological situations (if you do use different renderers)?
I think the key is to take advantage of every possible solution. V-Ray, on the one hand, is extremely accurate and artist-friendly, and at the same time quick to set up. On the other hand, we take advantage of the Renderman open structure and shading language, which makes it possible to pipe the renderer into almost any application.
When discussing a prospective crowd simulation, we always end up with a scene consisting of some thousands or hundreds of thousands of simulated agents. Rendering and managing that much information is a very critical task for crowd software. Crowd rendering cannot rely on standard rendering algorithms; therefore the rendering process needs to be optimized. Techniques such as procedural geometry or hardware rendering can be used to improve the rendering pipeline without adding extra complexity to the whole procedure.
Massive generates Renderman data. Using a procedural dynamic load structure, it creates the agents out of a data set collection at render time, which solves a lot of the typical memory and optimization problems one could face in any heavy crowd scenario. At the same time, V-Ray is armed with animated proxy meshes, which gave us the possibility to build a bridge in between. V-Ray provides a combination of advanced passes that one can set up fairly quickly, and the fact that it is perfectly integrated into 3ds Max means one can build a whole automation and artist pipeline around it. On the other hand, the Renderman shading language gives us the ultimate freedom for designing new possibilities and handling critical situations. So every renderer has a fair playground for us, which we respect and follow!
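The render-time idea behind such a procedural dynamic load - expanding agents from compact records only when the renderer asks for them, and instancing shared geometry instead of duplicating it - can be illustrated with a small sketch. The record layout and names below are invented for illustration; the real mechanism is Renderman's delayed-load procedural on the Massive side, and animated proxy meshes on the V-Ray side.

```python
# Sketch of render-time procedural expansion of crowd agents.
# Instead of building geometry for every agent up front, the renderer
# holds only compact agent records and expands each one on demand.
# Names and the record layout are invented for illustration.

def make_agent_record(agent_id):
    """Compact per-agent data: id, position, and a shared-geometry key."""
    return {"id": agent_id,
            "pos": (agent_id % 100, 0, agent_id // 100),
            "variant": agent_id % 3}      # 3 shared geometry variants

def expand_agent(record, geometry_cache):
    """Called by the renderer only when the agent's bounding box is hit.
    Shared variants are built once and instanced, not duplicated."""
    variant = record["variant"]
    if variant not in geometry_cache:
        geometry_cache[variant] = f"mesh_variant_{variant}"  # expensive build, done once
    return (geometry_cache[variant], record["pos"])

# 10,000 agents exist only as lightweight records...
records = [make_agent_record(i) for i in range(10_000)]

# ...but only the agents the camera actually sees get expanded.
cache = {}
visible = [expand_agent(r, cache) for r in records[:250]]
print(len(visible), len(cache))   # 250 agents expanded, only 3 meshes built
```

Memory scales with the number of visible agents and distinct variants rather than the total crowd size, which is exactly why heavy crowd scenarios become tractable at render time.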
As a V-Ray user, could You please tell us what made You select our renderer the first time? Any specific feature? How would You compare V-Ray’s performance to that of other renderers?
Accuracy, speed, an artist-friendly workflow, perfect integration into 3ds Max and the huge community of users make V-Ray a very good choice. V-Ray is incredibly fast, optimized, and its state-of-the-art algorithms are reliable. Nowadays, the physically correct behaviour of the renderer plays a big role in the growing VFX industry. Shots need another level of realism to convince the viewer, and deadlines are getting shorter and shorter with more challenging tasks. I think V-Ray addresses a lot of those issues.
V-Ray was one of the very first renderers that offered true GI and championed the idea of a physically correct workflow. When it comes to render passes, V-Ray offers an incredible amount of useful information out of the box, which is key to the success of a VFX pipeline. At the same time, the advanced SDK offers great potential for adding extra features that are necessary for a custom pipeline.
How do You decide when to use V-Ray, for which projects? In what way does switching from another renderer to V-Ray affect the working process?
Our strategy is to get the most out of any tool out there. V-Ray is our preferred renderer in a lot of different scenarios. As I mentioned before, we also integrated Renderman into our pipeline, because it is the preferred render engine for Massive and it offers a very open structure. V-Ray is growing very fast and Chaos Group is putting a lot of effort on synchronizing the features with the production needs.
How easy is it to switch from the standard built-in renderers to V-Ray?
V-Ray is very artist-friendly. The workflow and the user interface are really well designed. The documentation is straightforward and there are a lot of resources and useful communities that offer problem-solving and workflow methodologies.
What new features would You like to see in V-Ray? Is there a process/feature, which You think might help Your work if it is integrated within the renderer?
For us, as an advanced VFX company, a more open structure would be the best. Nowadays, the variety of tasks in the VFX industry is getting so vast. A lot of developers and applications are trying to give a more open-structured framework, where users could have access to the low level functions and could create their own tools and pipelines. So instead of waiting for the next release to get a small feature, which actually already exists in the core, users could design their own workflow within the framework.
For example, in Renderman there are the shading language and the universal Renderman interface, which give the user a lot of flexibility in designing new features using the core functionality. Another example is Particle Flow in 3ds Max. For a while the developers were adding some new operators in every new release, which could address some issues. Rather than adding more operators, they developed the so-called Box#3 extension, where the user can develop new operators in a very creative way, which opened a lot of new doors. The new ICE workflow in XSI is another good example. So a V-Ray standalone, with a universal interface and a well-designed shading language, which could also be node-based (at the artist level), would satisfy a lot of production pipelines.
Point-based ambient occlusion, faster 3D motion blur, and a better interface and workflow methodology for the new animated proxies are some other areas worth mentioning.
How do You see the future in what you do? Is there an aspect of the working process, which in Your opinion can be really innovated or rationalized?
We would like to work closely with Chaos Group to develop a bridge between Renderman-oriented applications like Massive software so one could plug them into V-Ray as well.
- Standalone version, Shading language or a built-in shading network with more access to the basic functions
- Open structure on volumetric and geometrical shaders
- More particle-friendly workflow in terms of instances and motion blur
- Point rendering
- A built-in ISO surface interface that could be plugged into different workflows and diagrams
Would You like to make any recommendations or give some advice to the visitors of our site? Is there anything else You would like to add?
I know a lot of people out there in the industry who have tied themselves to one and only one specific package or renderer. V-Ray has great perspective and potential. Give it a chance.
In the end, I would like to thank the whole Red Baron team for their incredible work and for the honor of being their representative.
We thank Mohsen Mousavi for the great insights and we wish him and the whole Pixomondo team good luck in the future!