Sunday, October 12, 2008

Behind the real-time scenes

Scalability is clearly, now more than ever, the key to real-time 3D rendering.

The important thing to keep in mind is that perspective projection lets us see a great deal of things. With perspective projection, an object's on-screen dimensions will often change drastically over the course of an animation.
Space-limited games like Tekken and Virtua Fighter take place in a confined range with only two detailed characters, making it easy for the developers to divide the few characters and objects in active use from the static background.
This, however, severely limits both the player and the developers themselves, as some things have to be created precisely with those limitations in mind.

Computer Graphics is all about pre-calculating something. Nobody really designs using actual volumes (aside from medical imaging!). Characters don't normally have actual guts. Bricks aren't solids.. and usually aren't even defined as individual objects.

When developing a game, it's important to give maximum freedom to artists while somehow still being able to deliver an interactive experience.
The challenge for developers is to decide what to pre-calculate, or "bake", and what's left to be manipulated in real-time.

Ideally one would develop a very advanced off-line rendering engine which could handle, for example, a RenderMan or a MentalRay scene straight out of Maya (note: this does not mean that one has to implement REYES).
This rendering engine would still take hours to process the more advanced scenes, but it would also be tightly integrated (possibly sharing the same code-base) with its real-time counterpart, which would use the "baked" data at the desired level(s).
It's a bit like doing Radiosity pre-calculation.. except that this pre-processing concept should be extended to more complex structures.
For example, the off-line "renderer" would output a structure optimized for implementing Radiosity in real-time.. off-line takes the heavy bit and real-time picks up the data where the off-line process left it.
Tight integration would mean that one could cut the rendering pipeline at any point and start outputting data that is light enough to be processed in real-time.
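As a minimal sketch of that baked-Radiosity split (the function names and the Lambert-only shading model here are my illustrative assumptions, not any actual pipeline): the off-line pass stores an indirect irradiance term per vertex, and the run-time just adds it on top of the dynamic direct lighting.

```python
# Sketch: split lighting into a baked (off-line) term and a live (run-time)
# term. Names and the simple Lambert model are illustrative assumptions.

def bake_indirect(vertices, sample_indirect):
    """Off-line: store one RGB indirect-irradiance value per vertex.
    sample_indirect is the expensive global-illumination evaluator."""
    return [sample_indirect(v) for v in vertices]

def shade(vertex_idx, n_dot_l, albedo, baked):
    """Run-time: cheap direct Lambert term plus the pre-baked indirect term."""
    direct = max(n_dot_l, 0.0)
    r, g, b = baked[vertex_idx]
    return tuple(albedo * (direct + c) for c in (r, g, b))
```

The point is only the division of labour: everything inside `sample_indirect` can take hours, while `shade` stays trivially cheap per frame.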

A rendered 2D background is easily usable and it worked great in games like FF7 or FF8. Nowadays what's baked is more in terms of lighting and texture, while tangible physical objects have a polygonal shell made of simpler polygons.

Smaller polygons can be semi-automatically converted into texture maps (normal maps, bump maps, etc.). However, the process of converting polygons into textures hasn't been fully automated, and it's not going to solve other, more global problems.

Should a bolt, belonging to a giant train, be an actual 3D object with its own individual position in space and a named entry in the scene database?
Not in a game for sure, but it can happen in the assets for a CG movie.

The real challenge then is not to render polygons, but to re-build the geometry, reorganize the scene, optimize the scene graph, compress it all and deliver it in a format that is also, to some degree, scalable at run-time (now this bolt is right in front of my nose and I wish it was made of polygons!).

If an artist needs to treat a bolt as a single object, then it's up to the programmers to make sure that the bolt turns into 16x16 texels in a texture, keeps its polygonal structure, or becomes a bunch of voxels, depending on the circumstances. Some data can be optimized off-line once and for all; other data needs to be re-organized, re-built and pre-sorted for the run-time to present in real-time.
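That per-circumstance choice is essentially a decision driven by projected screen size. A sketch, where the thresholds and representation names are made-up illustrations rather than anything from a real engine:

```python
import math

# Sketch: pick a representation for an object (e.g. the bolt) from its
# projected screen size. Thresholds and names are illustrative only.

def projected_size_px(radius, distance, fov_y, screen_h):
    """Approximate on-screen diameter, in pixels, of a bounding sphere."""
    if distance <= 0.0:
        return float("inf")
    angular = 2.0 * math.atan(radius / distance)  # angular diameter, radians
    return angular / fov_y * screen_h

def pick_representation(size_px):
    if size_px < 2.0:
        return "skip"      # sub-pixel: not worth drawing at all
    if size_px < 16.0:
        return "texture"   # flatten into a few texels of the parent's map
    if size_px < 200.0:
        return "mesh"      # the regular polygonal shell
    return "detailed"      # right in front of my nose: full geometry/voxels
```

For example, a 1cm bolt 100 meters away lands in the "skip" bucket, while the same bolt 10cm from the camera demands full detail.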

Treating data is what really matters. Given a scalable enough structure, one could reduce per-frame calculations to such a minimum that actually rendering and shading polygons could be done without any more effort than we've already put in (basically, polygons are alive, but not as important).

I've heard of more than one person looking into the REYES architecture for real-time. REYES scales well for large parametric surfaces that need to be very smooth or displaced... but when it comes to very complex objects that need to be rendered at any screen size, custom solutions kick in.. because REYES can dish out micropolys but can't take 'em ;)
Pixar itself has had to come up with special solutions to keep scenes manageable at all. Without some optimizations, some scenes would simply require too much memory!

..Speaking of memory. The other day I broke the 3GB barrier (the maximum allowed on 32-bit Vista).
We currently have a system that will unpack textures in real-time at different resolutions (using my progressive-JPEG-like format) within a given memory limit... I just wish we could do the same with geometry, scene graph nodes and materials. It will come, but from the real-time graphics community (for games) I don't see much happening; Id Software seems the most inspired, from what I can see of their public presentations. See Next Generation Parallelism in Games (the PDF includes current-gen stuff, too). The voxel-based model in the image on the left is from that presentation.
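The core of such a memory-limited system can be sketched as a greedy budget fit: since a progressive format lets each texture be decoded at any resolution level (each level halving width and height, so costing a quarter of the previous one), you keep dropping a level on the most expensive texture until everything fits. This is my own toy illustration of the idea, not the actual system described above:

```python
# Sketch: greedily drop resolution levels until the textures fit a memory
# budget. Assumes a progressive format where level N costs base_size / 4^N.

def fit_to_budget(base_sizes, budget_bytes):
    """Return one resolution level per texture (0 = full size)."""
    levels = [0] * len(base_sizes)
    cost = lambda i: base_sizes[i] >> (2 * levels[i])  # quarter per level
    total = sum(cost(i) for i in range(len(base_sizes)))
    while total > budget_bytes:
        # drop a level on whichever texture currently costs the most
        i = max(range(len(base_sizes)), key=cost)
        saved = cost(i)
        levels[i] += 1
        total -= saved - cost(i)
    return levels
```

E.g. a 4MB and a 1MB texture under a 2MB budget would see the 4MB one decoded one level down (1MB), leaving the smaller one at full resolution. The same budget-fitting idea is exactly what one would want for geometry and scene graph nodes, too.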


  1. The problem with doing a lot of pre-calculation is the total turn-around time.

    Artists really want real-time feedback.

  2. That's true. One could still use a progressive approach: recalculate the changes only (some dynamic changes are needed in real-time too, anyway).
    In our specific case we have a distributed build system, which is still kind of young and meant for final builds only, but it could eventually be used directly in a dev tool to iterate on changes using everybody's spare CPU cycles (and dedicated clusters).

    ... typed with iPhone !
    wooooo !