Gamefest 2007: GPU Data Structures and Advanced Lighting: State-of-the-Art Techniques from Microsoft Research

Hugues Hoppe and John Snyder
Principal Researchers
Graphics Group at Microsoft Research

It's always a pleasure to listen to a talk by Hugues Hoppe. Every time I read a paper of his, I learn something new and interesting about graphics!

This talk is basically a presentation of material from his recent papers: Perfect Spatial Hashing, Texel Programs for Random-Access Antialiased Vector Graphics, and Compressed Random-Access Trees for Spatially Coherent Data. (I've just linked the PDFs for the papers; go to Hugues's home page to see the accompanying videos and errata.) The motivation for the class of algorithms described by these papers is to have a random access pattern in order to exploit SIMD parallelism. "Random access pattern" here refers to how the algorithm works: if the input data can be processed in any order, in parallel, then you can map that algorithm onto the pixel shader on a GPU. What's exciting about Hugues's research is that he shows you ways of exploiting the GPU by bringing together a broad knowledge of fundamental computer science data structures: trees, compression, encoding, sorting, recursive subdivision and so on. It's when you get into heavy computer graphics that all that computer science "theory" suddenly puts a lot of shoe leather on the ground. Of these three papers, Perfect Spatial Hashing was only mentioned at the beginning. The majority of the time was spent on the Texel Programs paper and the Compressed Random-Access Trees.

The Texel Programs paper is very cool! The idea is to come up with a mechanism for rendering high-quality vector graphics using the pixel shader. Ideally you'd like to have your own scanline renderer running in the pixel shader, feed it the vector data for the graphic, and have it resolve out as a texture applied to a primitive. If you just want to see the vector graphic, you slap it on a quadrilateral and fill the screen with it; but you could just as easily apply it as a high-quality texture to an object. That sounds great, but the resources of a pixel shader are pretty meagre, even on shader model 4.0 hardware. So what do you do? Well, you apply the tried and true computer science technique of dividing the input into smaller bite-sized chunks, processing each chunk, and combining the results later.

So first the texture space is carved up into equal-sized squares, the vector graphics that intersect each square are clipped against it, and a list of vectors is built up for each square. The vector graphic could have multiple layers, in which case you clip each layer individually and build a list of vectors for each layer intersecting a square. Now you've reduced the data load to a reasonably small pile of vectors covering a chunk of the texture space. This is reasonably straightforward.
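If it helps to see it spelled out, here's a rough CPU-side sketch of that binning step, using plain line segments as the vector primitives and Liang-Barsky clipping against each grid cell. The cell layout, segment representation and function names are just my illustration, not the paper's actual pipeline:

```python
def clip_segment_to_box(p0, p1, xmin, ymin, xmax, ymax):
    """Liang-Barsky clip of the segment p0 -> p1 against an axis-aligned box.
    Returns the clipped segment, or None if it misses the box entirely."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:
                return None              # parallel to this edge and outside it
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t)          # entering the box
            else:
                t1 = min(t1, t)          # leaving the box
    if t0 > t1:
        return None
    return ((x0 + t0 * dx, y0 + t0 * dy), (x0 + t1 * dx, y0 + t1 * dy))


def bin_segments(segments, grid_n):
    """Clip every segment against each cell of a grid_n x grid_n grid over the
    unit square; return {(ix, iy): [clipped segments]} for non-empty cells."""
    cell = 1.0 / grid_n
    cells = {}
    for p0, p1 in segments:
        # only visit the cells that the segment's bounding box overlaps
        ix0 = max(0, int(min(p0[0], p1[0]) / cell))
        ix1 = min(grid_n - 1, int(max(p0[0], p1[0]) / cell))
        iy0 = max(0, int(min(p0[1], p1[1]) / cell))
        iy1 = min(grid_n - 1, int(max(p0[1], p1[1]) / cell))
        for iy in range(iy0, iy1 + 1):
            for ix in range(ix0, ix1 + 1):
                piece = clip_segment_to_box(p0, p1, ix * cell, iy * cell,
                                            (ix + 1) * cell, (iy + 1) * cell)
                if piece is not None:
                    cells.setdefault((ix, iy), []).append(piece)
    return cells
```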

What happens next was the real mind-bender for me. Each vector list is encoded as a sequence of instructions in a domain-specific language that is interpreted by the pixel shader program. So we've introduced a level of indirection by writing an "engine" in our pixel shader and feeding it arbitrary instructions that encode the vector graphics for each square. It's sort of like the way printing to a PostScript printer doesn't send rendered pixels; it sends the data to be printed along with instructions for how to print it, in the form of a PostScript program.
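To make the idea concrete, here's a toy flavor of what such an interpreter might look like if you ran it once per pixel. The opcodes, the even-odd inside test and the complete lack of antialiasing are my own simplification; this is not the actual instruction set or evaluation scheme from the paper:

```python
def eval_texel_program(program, u, v):
    """Interpret a toy texel program at local texture coordinate (u, v).
    Layers are evaluated in order; each layer is a closed polygon given as
    EDGE instructions and tested with an even-odd ray cast."""
    color = (1.0, 1.0, 1.0)                       # background color
    layer_color, inside = None, False
    for op in program:
        if op[0] == "LAYER":                      # ("LAYER", (r, g, b))
            layer_color, inside = op[1], False
        elif op[0] == "EDGE":                     # ("EDGE", x0, y0, x1, y1)
            _, x0, y0, x1, y1 = op
            # does this edge cross the horizontal ray going right from (u, v)?
            if (y0 > v) != (y1 > v):
                x_cross = x0 + (v - y0) / (y1 - y0) * (x1 - x0)
                if x_cross > u:
                    inside = not inside
        elif op[0] == "END":                      # finish the current layer
            if inside:
                color = layer_color               # composite this layer on top
    return color


# a one-cell program: white background with a red triangle on top
program = [("LAYER", (1.0, 0.0, 0.0)),
           ("EDGE", 0.2, 0.2, 0.8, 0.2),
           ("EDGE", 0.8, 0.2, 0.5, 0.8),
           ("EDGE", 0.5, 0.8, 0.2, 0.2),
           ("END",)]
print(eval_texel_program(program, 0.5, 0.4))      # inside the triangle -> red
```

The real texel programs are of course far more careful than this, both about antialiasing the edges and about packing the instructions compactly.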

OK, so we have this engine in our pixel shader and a carved-up vector graphic to lighten the load a little. But these vector descriptions are going to be variable length per cell, so how do we communicate that down to the GPU? The answer is another level of indirection: a texture that is looked up per pixel, whose value gives the address of that cell's texel program in the program texture along with the program's length. Here's a picture from his talk showing this relationship:

[Figure from the talk: an indirection texture whose entries point to variable-length texel programs stored in a separate program texture.]
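Continuing the toy sketch from above (and reusing eval_texel_program from the previous snippet), the per-pixel logic might look roughly like this, assuming the texel programs are packed end to end into one flat program stream and the indirection grid holds an (offset, length) pair per cell. Again, the layout and names are mine, not the paper's encoding:

```python
def shade_pixel(u, v, grid_n, indirection, program_texture):
    """Per-pixel lookup: fetch the covering cell's (offset, length) from the
    indirection grid, slice its texel program out of the flat program stream,
    and interpret it at the pixel's cell-local coordinates."""
    ix = min(int(u * grid_n), grid_n - 1)
    iy = min(int(v * grid_n), grid_n - 1)
    offset, length = indirection[iy][ix]          # per-cell pointer and size
    program = program_texture[offset:offset + length]
    local_u, local_v = u * grid_n - ix, v * grid_n - iy
    return eval_texel_program(program, local_u, local_v)
```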

It's kinda hard to believe how much stuff they crammed into this one presentation, because we're only about a third of the way through the content. The next paper, "Compressed Random-Access Trees for Spatially Coherent Data", is about using the programmability of the pixel shader to process a compressed representation of a texture in a manner consistent with high-quality output and random access patterns. The idea is to take texture data that has smooth variations in it and encode it in a way that exploits the spatial coherence in the underlying data. This sort of data comes from things like depth maps, luminance maps, gloss maps, etc., that are often used as auxiliary inputs to texturing and lighting. With a pixel shader, you can encode this smoothly varying data into an extremely compact form and decompress it right in the pixel shader. The more compact an asset is on the card, the more of them you can have, and isn't that what artists always like to hear from their programmers? This is another paper that I'd recommend reading for the details. Hugues shows how knowledge from other, disparate areas of computer science (in this case compression) really pays off in computer graphics, even for non-esoteric applications like games.
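To give a flavor of what random access into a compressed representation means, here's a deliberately stripped-down illustration: a per-pixel query walks only the root-to-leaf path of a tree covering its coordinate, so smooth regions terminate early and cost almost nothing to store or decode. The node layout below, and the absence of any residual quantization or entropy coding, are my simplifications and not the actual scheme from the paper:

```python
def decode(node, u, v):
    """Return the value at (u, v) in [0,1)^2 from a residual quadtree.
    Each node is {"value": v, "children": [4 children in ix + 2*iy order] or
    None}; children store residuals relative to their parent."""
    value = node["value"]
    while node["children"] is not None:
        ix, iy = int(u * 2), int(v * 2)           # quadrant containing (u, v)
        u, v = u * 2 - ix, v * 2 - iy             # rescale into that quadrant
        node = node["children"][iy * 2 + ix]
        value += node["value"]                    # accumulate the residual
    return value


def leaf(residual):
    return {"value": residual, "children": None}


# a texture that is flat (value 0.5) everywhere except one refined corner
root = {"value": 0.5,
        "children": [leaf(0.0), leaf(0.0), leaf(0.0),
                     {"value": 0.2,
                      "children": [leaf(0.0), leaf(0.1), leaf(0.0), leaf(0.0)]}]}
print(decode(root, 0.1, 0.1))                     # 0.5: coarse leaf, one step
print(decode(root, 0.9, 0.6))                     # 0.8 = 0.5 + 0.2 + 0.1
```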

The third and final part of this presentation was by John Snyder, another principal researcher in the graphics group at Microsoft Research. This portion of the talk was about Fogshop, a system for interactively specifying fog as a participating medium and not just as a color summed onto the final output. By modelling the fog explicitly, you get lots of the cool effects that come from real fog. The end result looked really nice, but I wonder how long it will be before this sort of fog effect shows up in games. It still takes quite a chunk of the GPU, so you'd be spending a significant part of your rendering budget just on fog. That means it will probably be a while before this sort of thing becomes commonplace, although it might show up in the demoscene first, because they control all of the modelling load in the application. At any rate, I really like how this applies the GPU to a more general rendering question, one that has been nearly impossible to achieve in hardware at interactive rates. Until recently, fog was either one fast, fixed piece of functionality, or you had to resort to software rendering to capture the more subtle lighting effects. Now that the hardware can run larger pixel shaders, we're starting to see more differentiated rendering functionality make its way into the interactive hardware space. Again, go to John Snyder's home page, linked at the top of this blog post, to watch the videos on Fogshop.
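For context, the baseline that Fogshop moves beyond is the textbook homogeneous fog model: attenuate the surface radiance exponentially with distance and add an airlight term that grows with it. The snippet below is only that baseline, with made-up parameter values; it's nowhere near what Fogshop actually computes:

```python
import math

def fog_along_ray(surface_color, distance, sigma_t=0.35,
                  airlight_color=(0.7, 0.75, 0.8)):
    """Blend a surface color with homogeneous fog over a given path length:
    Beer-Lambert attenuation of the surface plus an airlight term."""
    transmittance = math.exp(-sigma_t * distance)
    return tuple(s * transmittance + a * (1.0 - transmittance)
                 for s, a in zip(surface_color, airlight_color))

print(fog_along_ray((1.0, 0.2, 0.2), distance=5.0))   # distant red object, mostly fogged out
```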
