For years there has been much talk about automatic level of detail (or LOD) for geometry in games.
The idea is to automatically build lower-detail meshes, giving modelers greater artistic freedom while a conversion system deals with reducing complexity to a level manageable for real-time applications. This also makes it possible to target platforms with different hardware capabilities without any additional work from the artists/designers.
However, it's not all about decimating polygons. Along with complex meshes come "dirty" meshes: flipped normals, duplicated materials and uneven distribution of geometry.
Recently I've been working with some geometry and textures meant for off-line rendering. I'm at a good point with the textures (see my earlier post). Geometry, however, is a bit trickier. The irregular distribution of the data (compared to a bitmap) makes geometry harder to resample, but it's certainly possible, and recently a coworker came up with a pretty solid edge-collapse mesh reduction implementation.
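To make the idea concrete, here's a heavily simplified sketch of edge-collapse reduction (not my coworker's implementation; the `Mesh` and `reduce` names are mine, for illustration): repeatedly merge the two endpoints of the shortest edge into their midpoint, then drop the triangles that became degenerate. A real reducer would pick edges with a proper error metric (quadric error, for instance) rather than raw edge length, but the mechanics are the same.

```cpp
#include <algorithm>
#include <array>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };

static float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

struct Mesh {
    std::vector<Vec3> vertices;
    std::vector<std::array<unsigned, 3>> triangles;
};

// Collapse edges until the triangle count reaches the target:
// merge vertex b into vertex a (placed at the edge midpoint),
// then remove triangles that now reference the same vertex twice.
void reduce(Mesh& m, size_t targetTris) {
    while (m.triangles.size() > targetTris) {
        // find the shortest edge over all triangles
        unsigned a = 0, b = 0;
        float best = std::numeric_limits<float>::max();
        for (const auto& t : m.triangles)
            for (int i = 0; i < 3; ++i) {
                unsigned u = t[i], v = t[(i + 1) % 3];
                float d = dist2(m.vertices[u], m.vertices[v]);
                if (u != v && d < best) { best = d; a = u; b = v; }
            }
        if (a == b) break; // no collapsible edge left

        // move a to the midpoint, redirect every use of b to a
        m.vertices[a] = { (m.vertices[a].x + m.vertices[b].x) * 0.5f,
                          (m.vertices[a].y + m.vertices[b].y) * 0.5f,
                          (m.vertices[a].z + m.vertices[b].z) * 0.5f };
        for (auto& t : m.triangles)
            for (auto& idx : t)
                if (idx == b) idx = a;

        // drop degenerate triangles (two or more identical indices)
        m.triangles.erase(
            std::remove_if(m.triangles.begin(), m.triangles.end(),
                [](const std::array<unsigned, 3>& t) {
                    return t[0] == t[1] || t[1] == t[2] || t[0] == t[2];
                }),
            m.triangles.end());
    }
}
```

Note that this sketch never compacts the vertex array; orphaned vertices would be swept out in a later pass.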
The next issue, however, is dealing with plain careless modeling. For example, I found a geometry file that has literally thousands of meshes. Each of these meshes is made of one quad (4 vertices) which is used for billboard vegetation.
The overhead of creating thousands of vertex buffers in DX10, and the consequent draw calls, is pretty heavy. DX10 and current-generation hardware were surely not meant to deal with this volume of vertex buffers.
Of course one could ask an artist to merge those polygons into a single mesh, and so on. But it's always preferable to solve issues programmatically instead of having to bother the less technically inclined.
It's also important for me to build a system that will at least warn of broken or inefficient data.
For one thing, I've already built a function that compares all loaded meshes and finds duplicates that can be eliminated. This kind of operation is better suited to pre-processing, and it will be moved there; however, it's easier to try it at run-time first and eventually port the process to the build phase. (I see the rendering and build phases as tightly connected, with the amount of work shifting between them depending on the needs of the application and the hardware: what kind of data you "bake" depends on your target.)
For the "one mesh per billboard" case, first of all I think I can safely make those meshes index-less. There is really no point in allocating an index-buffer of 6 items (1 quad = 2 triangles = 6 indices) when I can draw a triangle strip with the 4 vertices as they are.
As a second step, though, I should really merge those vertices into a common vertex buffer, or even merge all (or at least multiple) meshes into one, provided those meshes aren't going to be addressed individually by some animation!
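That merge could look something like this (types and names are hypothetical): append each billboard quad's four vertices to one big vertex buffer and emit six indices per quad, so the whole set draws as a single indexed triangle list in one draw call. The caveat from above applies: once merged, the quads can no longer be transformed or culled individually.

```cpp
#include <array>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// One billboard: four corners stored in strip order.
struct Quad { std::array<Vec3, 4> v; };

// One vertex buffer + one index buffer for the whole batch.
struct MergedMesh {
    std::vector<Vec3>     vertices;
    std::vector<uint32_t> indices;
};

// Merge all quads into a single indexed triangle list: two triangles
// per quad, indices offset by each quad's base vertex.
MergedMesh merge(const std::vector<Quad>& quads) {
    MergedMesh out;
    for (const auto& q : quads) {
        uint32_t base = static_cast<uint32_t>(out.vertices.size());
        out.vertices.insert(out.vertices.end(), q.v.begin(), q.v.end());
        uint32_t tri[6] = { base, base + 1, base + 2,
                            base + 2, base + 1, base + 3 };
        out.indices.insert(out.indices.end(), tri, tri + 6);
    }
    return out;
}
```

The result is two buffers and one draw call in place of thousands, at the cost of re-introducing the indices I just removed per-quad; for a static batch that trade is clearly worth it.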
In conclusion, there is a lot more than meets the eye when dealing with real-time graphics. Going by the book will only get you so far. When the data gets complex, that's when doing real-time 3D becomes interesting (at least for me!)