Sunday, November 23, 2008

Brain Dump.pptx

The time has almost come to pitch future projects to R&D at my company.
I have a fairly clear idea of the direction we should take, but I find it hard to explain.
I fired up PowerPoint and tried to give a more graphic representation of my thinking, but I've had little success so far.

It's frustrating, but I realize that it's also my fault if I can't explain things nicely with some neat graphics.
I think it's important to be able to scribble on a whiteboard or to make an effective PPT presentation. There are books for learning that... I see managers going about those manager-like explanations, and I realize that those don't come out of nothing (cough cough). There must be a "how to scribble on a whiteboard and leave people in awe" kind of book.

My proposed direction is based on simple concepts, but I've had a hard time explaining it even to other graphics programmers in the game industry.
When I try to explain, I never really get a good reaction... which is both good and bad.
It's bad because it would be nice to share thoughts and experiences. It's good because it could make a case for a salary raise 8)

Generally, I think that real-time 3D graphics as we know it is dead, or it should be.
It's still alive because those that do real-time 3D are stuck in that world, and those that are doing production rendering don't even want to think about real-time.

I once dreamed of a RenderMan vs MentalRay war as a way to get a foot in the door of real-time. But as far as I know, Pixar doesn't have any interest in that... and NVidia (which now owns Mental Images) appears to be very conservative: pushing for DX 11 already... yawnnnnn.

But rendering is bullshit... I mean, it's more complex than ever before, but it's not where the big productivity problems are. It takes 20 GB of textures and geometry (mostly textures) to render a frame of the "good stuff", the pre-rendered sequences that make you want to buy the game. Real-time rendering, instead, works with much smaller footprints, about 1/100th of that.

Oh, and please, stop saying that real-time is catching up... you could put 20 GB of VRAM in every new video card, and some artist would still break that barrier the next day. This is not about beefing the system up, it's about being smart with data access.

DXT compression will shrink textures to about a quarter of their size, but that's not nearly enough. Assets need to be structured in a way that makes it extremely efficient to randomly access data.
It's very unlikely that an 8k x 8k texture needs to be fully accessed to render a 2k x 1k frame, yet many of those giant textures are likely to contribute to the rendering of a single frame.
A tiled structure allows accessing only a portion of a texture. This helps in some cases, but it doesn't help when the whole texture is mapped onto a tiny 10x10-pixel polygon. For that task, one needs to access the texture at a given, pre-filtered resolution (mip-map). This is something that, for example, PhotoRealistic RenderMan has been doing for a while, but that the real-time world completely ignores... because it's assumed that all assets will be pre-loaded in VRAM for that frame, and most likely for the whole stage of that game.
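To make the idea concrete, here's a minimal sketch of the two selection steps (function names, the tile size, and the "coarsest useful mip" heuristic are all illustrative assumptions of mine, not taken from any real renderer): pick the coarsest mip level that still covers a polygon's on-screen footprint, then count how many fixed-size tiles of that mip actually need to be touched.

```cpp
#include <cassert>
#include <cmath>

// Pick the coarsest mip level that still gives at least one texel per
// screen pixel for a polygon covering 'screenPixels' pixels on screen.
// Level 0 is the full-resolution texture; each level halves the size.
int mipLevelFor(int textureSize, int screenPixels)
{
    int level = 0;
    while (textureSize > screenPixels && textureSize > 1) {
        textureSize /= 2;
        ++level;
    }
    return level;
}

// Given a UV rectangle used by some geometry, return how many
// tileSize x tileSize tiles must actually be touched at that mip
// level -- everything else can stay on disk.
int tilesTouched(double u0, double v0, double u1, double v1,
                 int textureSize, int mipLevel, int tileSize = 64)
{
    int mipSize = textureSize >> mipLevel;  // resolution at this mip
    int tx0 = static_cast<int>(u0 * mipSize) / tileSize;
    int ty0 = static_cast<int>(v0 * mipSize) / tileSize;
    int tx1 = static_cast<int>(std::ceil(u1 * mipSize / double(tileSize)));
    int ty1 = static_cast<int>(std::ceil(v1 * mipSize / double(tileSize)));
    return (tx1 - tx0) * (ty1 - ty0);
}
```

With these numbers, the 8k x 8k texture on a 10x10-pixel polygon resolves to mip level 10 (an 8x8 image), so only one tile's worth of data ever needs to leave the disk; the same texture filling a 2k-wide frame only needs mip level 2.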

Movies keep relying on production renderers to deal with huge assets, while games keep assuming that artists will stay within a memory budget, or that some build process will resize textures to a workable resolution.

Optimized asset build systems are definitely a better solution than asking artists to resize textures... however, build systems are too detached from real-time. A build system will resize your assets so that they fit nicely in your VRAM, but only at a single, fixed size.

A better idea would be to fuse the build system and the run-time, so that assets are "cooked" to different levels depending on the digesting abilities of the target platform. "Very well done" for the Wii, "Medium Rare" for the PS5, and so on.
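As a sketch of what that cooking step could look like (the platform names and size budgets below are made-up illustrations matching the joke above, not real hardware specs), the fused cooker could clamp each source asset to the target's digestion budget:

```cpp
#include <cassert>
#include <map>
#include <string>

// Cook a source texture down to the doneness level a target platform
// can digest. Budgets are illustrative assumptions, not real specs.
int cookedTextureSize(int sourceSize, const std::string& platform)
{
    // Largest texture dimension each target comfortably handles.
    static const std::map<std::string, int> maxSize = {
        { "wii", 512  },   // "very well done"
        { "ps5", 2048 },   // "medium rare"
        { "pc",  4096 },   // "rare"
    };
    auto it = maxSize.find(platform);
    int cap = (it != maxSize.end()) ? it->second : 1024;  // default budget

    int size = sourceSize;
    while (size > cap)   // halve until it fits, keeping power-of-two sizes
        size /= 2;
    return size;
}
```

The same 4096x4096 source would cook down to 512 for the hypothetical Wii budget and to 2048 for the "PS5" one; because the run-time and the cooker share this logic, re-targeting a stage is just a matter of calling it again with a different budget.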

I talked mostly about textures, but the concept should be extended to geometry and possibly physics as well.