Thursday, February 12, 2009

Getting it on with the Grids

I've been spending more time trying to implement the REYES pipeline as described in Production Rendering.
It's a great book, but it takes some page flipping to get the code and data structures right from the short code snippets (I've already found a small error) and from the plain-English explanations, which are a bit scattered (I wish I had a PDF version for those times when I need to look up things explained on other pages).

It's funny how the REYES approach is very much like the "forward texturing" used by the first NVidia card, the NV1.
I remember deriding that approach back then, and now it may actually be coming back to the real-time rendering world 8)

The image in the post shows the usual test image composed of quadrics.. this time around without any polygons. I use OpenGL only to blit my offscreen buffer onto the display.
I went through the steps of actually turning each primitive into a "grid", and the visible points on screen are the grid vertices.
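
For the curious, the dicing step boils down to something like the sketch below: sample the primitive's parametric surface on a regular lattice of (u, v) values. This is only an illustration (a unit sphere stands in for my quadrics, and all the names are made up), not the actual code from the book or from my renderer:

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static const float kPI = 3.14159265f;

    // Dicing sketch: sample a parametric surface (a unit sphere here) on a
    // uSize x vSize lattice of (u, v) values. The resulting vertices are the
    // "grid" that later gets shaded and split into micro-polygons.
    std::vector<Vec3> diceSphereToGrid(int uSize, int vSize)
    {
        std::vector<Vec3> verts(uSize * vSize);

        for (int vi = 0; vi < vSize; ++vi)
        {
            float v     = vi / (float)(vSize - 1);
            float theta = v * kPI;                  // latitude, 0..pi
            for (int ui = 0; ui < uSize; ++ui)
            {
                float u   = ui / (float)(uSize - 1);
                float phi = u * 2.0f * kPI;         // longitude, 0..2*pi

                Vec3 &p = verts[vi * uSize + ui];
                p.x = std::sin(theta) * std::cos(phi);
                p.y = std::cos(theta);
                p.z = std::sin(theta) * std::sin(phi);
            }
        }
        return verts;
    }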

Two important things are still missing before I can call that proper geometry:
- Grid size should be dynamic.. so that there wouldn't be any empty spaces between vertices
- The grid needs to be converted into micro-polygons

Right now I just output vertices as pixels, but I should instead think of those vertices as part of a mesh of quads (a filled grid).
Those quads would then not be rendered directly on screen; instead, the rasterizer would have to go through every pixel and sample all the quads (or micro-polygons) intersecting that pixel to produce the final color.
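
A very naive version of that sampling loop could look like the sketch below. It takes a single sample at each pixel center, does no depth sorting or filtering, and approximates each micro-quad by its screen-space bound; all the names are placeholders of mine, not the final code:

    #include <cstddef>
    #include <vector>

    struct Color { float r, g, b; };

    struct MicroQuad
    {
        float minX, minY, maxX, maxY; // screen-space bound (a quad this small
                                      // can be approximated by its bound)
        Color col;
    };

    // For every pixel, average the colors of all micro-quads whose
    // screen-space bound contains the pixel center.
    void samplePixels(const std::vector<MicroQuad> &quads,
                      Color *framebuffer, int width, int height)
    {
        for (int y = 0; y < height; ++y)
        {
            for (int x = 0; x < width; ++x)
            {
                float px = x + 0.5f;
                float py = y + 0.5f;

                float r = 0, g = 0, b = 0;
                int   hits = 0;

                for (size_t i = 0; i < quads.size(); ++i)
                {
                    const MicroQuad &mq = quads[i];
                    if (px >= mq.minX && px < mq.maxX &&
                        py >= mq.minY && py < mq.maxY)
                    {
                        r += mq.col.r;  g += mq.col.g;  b += mq.col.b;
                        ++hits;
                    }
                }

                if (hits > 0)
                {
                    Color &out = framebuffer[y * width + x];
                    out.r = r / hits;  out.g = g / hits;  out.b = b / hits;
                }
            }
        }
    }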

The dynamic grid resolution thing is already a powerful concept if one thinks a bit ahead: high-quality rendering is filled with motion blur and depth of field (DOF) blurring effects.
With the current real-time rasterization approaches, one renders at full resolution (with multi-sampling or super-sampling anti-aliasing.. which is the non-REYES equivalent of having more than one micro-polygon contributing to one pixel) and only afterwards applies some sort of blurring.
With the REYES approach instead, one can take the DOF blurring into account right there and potentially use a much coarser grid.. perhaps even with micro-polygons that are bigger than one pixel?
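
To make that a bit more concrete, the kind of heuristic I have in mind looks like the sketch below; the formula and its inputs (projected area, circle-of-confusion radius) are just my assumptions, not something taken from the book:

    #include <algorithm>
    #include <cmath>

    // Pick the grid resolution from the primitive's projected size, then relax
    // it when the surface is out of focus. screenArea is the projected area in
    // pixels, cocRadius the DOF circle of confusion in pixels, shadingRate the
    // target area (in pixels) covered by one micro-polygon when in focus.
    int chooseGridSize(float screenArea, float cocRadius, float shadingRate)
    {
        // Out-of-focus surfaces get blurred anyway, so one micro-polygon can
        // cover more pixels: widen the target area by the blur radius.
        float areaPerMicroPoly = shadingRate * (1.0f + cocRadius);

        int gridSize = (int)std::ceil(std::sqrt(screenArea / areaPerMicroPoly));

        return std::max(2, gridSize); // at least 2x2 vertices to form a quad
    }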

Unfortunately REYES does not take frame-to-frame coherence into account.. so the grids are rebuilt every frame... but I can see some potential to optimize there..

9 comments:

  1. I have read "Getting it on with the GIRLS" and I was hoping to see a post with pics of cute girls.

    AKA who gives a shit about the grids, where are the girls...

    (Who cares about the grids, show us the local fauna... :) )

  2. Reading your comment I had to double check the title myself.. you never know when the subconscious may kick in ;)

    In fact, I'm generally doubtful about my actual ability to write what I think I want to write!

  3. Don't worry: stick with the food-related posts and automagically any title will fit. :)

  4. I am also having a hard time following this grid explanation =)

    Are you saying that each pixel is a quad made of 2 micro-triangles?

    And then you somehow project those micro-triangles onto the rendered geometry and calculate the (micro-triangle) vertex colors?

  5. There are no triangles involved. Micro-polygon really means micro-quad (though supposedly there are implementations that use triangles (?)).

    A grid is a 2D array of vertices, so that a micro-quad can be built from the vertices:

    verts[vi+0][ui+0]
    verts[vi+1][ui+0]
    verts[vi+0][ui+1]
    verts[vi+1][ui+1]

    (ui, vi = indices into a grid sized uSize x vSize)

    (I think that the quads don't need to be planar, but I'm not sure)

    A shaded grid (a grid of vertices after shading) has both a position and the corresponding color (and opacity) for each vertex.

    The actual on-buffer rendering goes pixel by pixel, sees which micro-quads (or micro-polygons) intersect the area of that pixel, and averages them together to get the pixel's final color.

    Micro-quads are so small that they are not rasterized one at a time, but are instead considered wherever they touch each pixel.
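
    In pseudo-C++, assembling the micro-quads from such a grid would look something like this (MicroQuad, verts and quads are placeholder names, just to illustrate the walk):

    // Emit one micro-quad per cell of neighbouring grid vertices.
    for (int vi = 0; vi < vSize - 1; ++vi)
    {
        for (int ui = 0; ui < uSize - 1; ++ui)
        {
            MicroQuad mq;
            mq.p[0] = verts[vi + 0][ui + 0];
            mq.p[1] = verts[vi + 0][ui + 1];
            mq.p[2] = verts[vi + 1][ui + 1];
            mq.p[3] = verts[vi + 1][ui + 0];
            quads.push_back(mq);
        }
    }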

  6. Hmm so then this is the same as 4x super sampling?

  7. It is a form of super sampling, meaning that multiple samples are considered when coloring a pixel.

    However, the micro-polygon approach is more flexible. The sampling rate changes as necessary. For example, primitives that are out of focus can produce fewer micro-polys.

    Hair should normally produce more.. but not as many for fast-moving (motion-blurred) objects like the rats in Ratatouille (I can't find the full paper that describes that right now).

  8. ..more like adaptive sampling.. because it doesn't have to be "super" when things are blurred.

    Anyhow, there is no single way of doing this with REYES, I guess.
    It's really about getting enough samples and then coming up with pixels from them 8)
