How are subdivision surfaces implemented on the GPU?
I'm reading a few articles on implementing subdivision surfaces on the GPU, including the one in GPU Gems 2. I can't fully understand those articles yet, but all I really want to know is one thing: how do you create additional vertices in shaders?
The article in GPU Gems 2 presents a way to implement subdivision on the GPU using only the vertex and fragment shaders (geometry shaders were not available at that time). However, each vertex shader invocation can only produce one vertex, while subdivision has to add vertices. How is that done on the GPU without using a GS?
10-01-2007, 03:23 AM
In GPU Gems 2, the author renders the vertex data into a texture scaled by a factor of 2, then reads the framebuffer back into a pixel buffer object (bound as GL_PIXEL_PACK_BUFFER for the readback) and uses that data as vertex data for rendering.
This way, new vertices are introduced into the vertex buffer. The indices to use can easily be deduced from the topology of the generated patches.
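As a CPU-side illustration (not the GPU Gems 2 shader code itself), here is a sketch of both ideas, assuming simple linear midpoint refinement of a regular quad patch: one pass doubles the grid resolution, and the index buffer falls straight out of the grid topology. (Catmull-Clark would use different averaging masks; midpoints are used here only to keep the sketch short.)

```python
# CPU emulation of one subdivision pass. On the GPU this runs in a
# fragment shader writing into a texture scaled by 2x; here it is
# plain Python operating on an n x n grid of positions.

def subdivide_grid(grid):
    """n x n grid of (x, y, z) tuples -> (2n-1) x (2n-1) refined grid."""
    n = len(grid)
    m = 2 * n - 1

    def avg(*pts):
        return tuple(sum(c) / len(pts) for c in zip(*pts))

    out = [[None] * m for _ in range(m)]
    for i in range(n):
        for j in range(n):
            out[2 * i][2 * j] = grid[i][j]          # keep original vertices
    for i in range(m):
        for j in range(m):
            if out[i][j] is not None:
                continue
            if i % 2 and j % 2:                     # face point: 4 corners
                out[i][j] = avg(out[i - 1][j - 1], out[i - 1][j + 1],
                                out[i + 1][j - 1], out[i + 1][j + 1])
            elif i % 2:                             # vertical edge midpoint
                out[i][j] = avg(out[i - 1][j], out[i + 1][j])
            else:                                   # horizontal edge midpoint
                out[i][j] = avg(out[i][j - 1], out[i][j + 1])
    return out

def grid_indices(m):
    """Index buffer (two triangles per cell), deduced purely from the
    grid topology -- no per-vertex data needed."""
    idx = []
    for i in range(m - 1):
        for j in range(m - 1):
            a = i * m + j
            idx += [a, a + 1, a + m, a + 1, a + m + 1, a + m]
    return idx

patch = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
         [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]]
fine = subdivide_grid(patch)        # 3 x 3 grid; center is (0.5, 0.5, 0.0)
indices = grid_indices(len(fine))   # 4 cells -> 24 indices
```

The key point mirrored here is that the output grid is a fixed function of the input grid's size, so the index buffer never has to be read back; it can be generated once on the CPU for each subdivision level.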
I remember this article was a pain to understand..
A more modern approach would be to use geometry shaders of course!
So, correct me if I'm wrong: GLSL cannot access a vertex buffer directly, right? So all the subdivision vertices are generated on the CPU?
10-01-2007, 07:30 AM
No. The subdivision is done in an extra pass before rendering the real image.
The result of this pass is an image in the framebuffer. This image is read back into a PBO. Then this PBO is bound as a VBO, reinterpreting the data as vertex data for the actual "drawing" pass.
Basically, it's "render to vertex array", although it's a bit of a hack. A modern solution would probably use geometry shaders plus transform feedback...
Keith Z. Leonard
10-05-2007, 01:15 PM
Rendering to the vertex array would still probably be a good option, depending on your subdivision level. Geometry shaders are best when they spit out a limited number of verts; if you are highly tessellating an object, I could see the rendering method outperforming the geometry shader method.
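To see why high tessellation strains a geometry shader, consider the output amplification: uniformly subdividing one quad patch k times yields a (2^k + 1) x (2^k + 1) vertex grid, and a GS must emit all of a primitive's output itself, subject to a hard per-invocation cap (the 256 used below is an assumed, illustrative value; the real GL_MAX_GEOMETRY_OUTPUT_VERTICES is implementation-dependent).

```python
# Output amplification of uniform quad subdivision vs. an assumed
# geometry shader output cap. 256 is a placeholder figure, not a
# value from any particular GPU.
ASSUMED_GS_OUTPUT_LIMIT = 256

def verts_per_quad(levels):
    """Vertices in one quad patch after `levels` uniform subdivisions."""
    side = 2 ** levels + 1
    return side * side

for k in range(6):
    n = verts_per_quad(k)
    status = "fits" if n <= ASSUMED_GS_OUTPUT_LIMIT else "exceeds limit"
    print(k, n, status)
```

With numbers like these, a few subdivision levels already overflow the per-invocation budget, while the render-to-vertex-array approach scales with the render target size instead.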
10-06-2007, 07:26 AM
I don't get this part:
Then this PBO is bound as VBO
You mean it is possible to do so, without any round trip to the CPU? :confused:
10-08-2007, 03:20 AM
In theory, yes. Of course it depends on the hardware if it is actually possible, but I think this is widely supported. If not, the driver is supposed to take care of it transparently.
Originally posted by Keith Z. Leonard:
Geometry shaders are best when they spit out a limited number of verts. If you are highly tessellating an object, I could see the rendering method outperforming the geometry shader method.
Hi Keith, is that because the GS is not efficient at handling heavy tessellation work at this stage? And why?
10-12-2007, 07:03 PM
Originally posted by ZbuffeR:
I don't get this part:
Then this PBO is bound as VBO
You mean it is possible to do so, without any round trip to the CPU? :confused:
Absolutely. I didn't have any problems doing that on NVIDIA 7- or 8-series hardware, anyway.
They're both just buffers, after all. The PBO spec was an extension of the VBO spec, so it's not unexpected that they're using pretty much the same underlying notion.
It might not work with GL_PIXEL_PACK_BUFFER; that memory is optimized for transfers back to main memory. But both GL_ARRAY_BUFFER and GL_PIXEL_UNPACK_BUFFER are bindings OpenGL sources draw data from, so it makes sense that they'd be pretty much interchangeable.
Powered by vBulletin® Version 4.2.2 Copyright © 2015 vBulletin Solutions, Inc. All rights reserved.