GPU vertex modification - R2VB?

I’m a bit confused about R2VB, and googling only added to my confusion. If I want to modify the vertices of a mesh on the GPU, is R2VB the way to go? The “easy” way seems to be VTF, but that appears to only be available on Nvidia? I’d like the code to run on older cards (say the last 3 years) if that makes any difference.

If R2VB is the way to go, then is the basic idea the following (rough code sketch after the list)?


1. Create a float texture (inTex) with GL_RGBA32F_ARB and fill it with the X,Y,Z,0 of the input points
2. Create a float texture (outTex) with GL_RGBA32F_ARB to hold the output points.
3. Create an FBO with outTex attached.
4. Render a quad with the same dimensions as inTex into the FBO, with inTex bound as a texture. The fragment shader can use the texture coordinates to read the X,Y,Z from inTex, modify them, and output them (to outTex).
5. Create a PBO with size float*4*number_of_points (i.e. the same size as inTex)
6. Copy the transformed points from the FBO to the PBO with glReadPixels.
7. Render the transformed points using that PBO.
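
In code I imagine that looks roughly like this (uncompiled sketch; `width`/`height` are the point-grid dimensions and `processShader`/`drawFullscreenQuad()` are placeholders for my own shader program and quad-drawing code):

```cpp
// Uncompiled sketch of steps 1-7 (GL 2.x + ARB_texture_float + EXT_framebuffer_object).

// 1. & 2. float textures for the input and output points (X,Y,Z,0 per texel)
GLuint inTex, outTex;
glGenTextures(1, &inTex);
glBindTexture(GL_TEXTURE_2D, inTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height, 0,
             GL_RGBA, GL_FLOAT, inputPoints);           // X,Y,Z,0 per point

glGenTextures(1, &outTex);
glBindTexture(GL_TEXTURE_2D, outTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height, 0,
             GL_RGBA, GL_FLOAT, NULL);

// 3. FBO with outTex as the colour attachment
GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, outTex, 0);

// 4. draw a width x height quad; the fragment shader samples inTex at the
//    interpolated texcoord, modifies the point and writes it out (into outTex)
glViewport(0, 0, width, height);
glUseProgram(processShader);
glBindTexture(GL_TEXTURE_2D, inTex);                     // sampler uniform set to unit 0
drawFullscreenQuad();                                    // quad with 0..1 texcoords

// 5. & 6. read the result back into a PBO
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4 * sizeof(float),
             NULL, GL_STREAM_COPY);
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, 0); // writes into the bound PBO
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

// 7. reuse the same buffer object as a vertex buffer and draw the points
glUseProgram(0);
glBindBuffer(GL_ARRAY_BUFFER, pbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 4 * sizeof(float), 0);      // X,Y,Z, skip the padding float
glDrawArrays(GL_POINTS, 0, width * height);
```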

If the above is correct, are there any major gotchas (pretty open question, I know)?

Thanks.

VTF should be available on all hardware within your target group - it’s been specified by GLSL since GL2.0 and is available with SM3.0 or better hardware under D3D - you probably have an out-of-date source on that.
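
The fetch itself is just a texture sample in the vertex shader - a minimal sketch (GLSL 1.20-style; `displaceTex` and the one-texel-per-vertex layout are assumptions based on your description):

```cpp
// Minimal vertex-texture-fetch sketch (GLSL 1.20). Assumes a GL_RGBA32F_ARB texture
// of per-vertex offsets ("displaceTex", a made-up name) addressed by texcoord 0,
// with nearest filtering (required for VTF on SM3.0-era hardware).
// At runtime it's worth checking glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, ...) > 0.
const char* vtfVertexShader =
    "#version 120\n"
    "uniform sampler2D displaceTex;\n"
    "void main()\n"
    "{\n"
    "    // one texel per vertex; the Lod variant because there are no derivatives here\n"
    "    vec3 offset = texture2DLod(displaceTex, gl_MultiTexCoord0.xy, 0.0).xyz;\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * (gl_Vertex + vec4(offset, 0.0));\n"
    "}\n";
```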

As to the best way to do this, the answer depends on what kind of modification you want to do. Your description reads a lot like you’re doing skeletal animation, but could you confirm?

Thanks for that. I was planning on using it for effects. I’m working in 2D and can render an image using a tri-strip grid with a vertex for each pixel (something like the sketch below). I modify the grid to get weird effects, but on the CPU that’s a lot of processing. Worst case I’m rendering the entire screen as a tri-strip with a vertex for each pixel.
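
For reference, the grid is built something like this (CPU-side sketch only; `w`/`h` are the image dimensions and texcoords are omitted):

```cpp
#include <vector>

// One vertex per pixel of a w x h image, drawn as one triangle strip
// with degenerate triangles joining the rows. Positions only shown here.
std::vector<float>        verts;    // x, y per vertex
std::vector<unsigned int> indices;

for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x) {
        verts.push_back(float(x));
        verts.push_back(float(y));
    }

for (int y = 0; y < h - 1; ++y) {
    for (int x = 0; x < w; ++x) {
        indices.push_back( y      * w + x);   // vertex on this row
        indices.push_back((y + 1) * w + x);   // vertex on the next row
    }
    if (y < h - 2) {                          // degenerate pair to jump to the next row
        indices.push_back((y + 1) * w + (w - 1));
        indices.push_back((y + 1) * w);
    }
}
// drawn with glDrawElements(GL_TRIANGLE_STRIP, indices.size(), GL_UNSIGNED_INT, ...)
```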

Would there be any reason to use R2VB over VTF? e.g. faster on ATI or available on older cards? Just curious really, since if VTF is suitable I’ll go with that.

I’d be inclined to do this kind of effect in the fragment shader instead and encode the effect function into a texture (which you may not even need to do if the effect function can be derived from a formula) - see the sketch below. With one vertex per pixel in your current setup you’re getting the same degree of shader processing anyway, and it will give you a LOT fewer vertices. :wink:
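
To make that concrete, something along these lines - the sine-ripple formula and the `sourceImage`/`time` names are just made up to illustrate; your real effect function could come from a lookup texture instead:

```cpp
// Per-fragment version of the grid-warp idea: draw one quad over the image and
// perturb the texture coordinate used to sample it, instead of moving vertices.
const char* warpFragmentShader =
    "#version 120\n"
    "uniform sampler2D sourceImage;\n"
    "uniform float time;\n"
    "void main()\n"
    "{\n"
    "    vec2 uv = gl_TexCoord[0].xy;\n"
    "    // the 'effect function': here a formula, but it could be read from a texture\n"
    "    uv += 0.01 * vec2(sin(uv.y * 40.0 + time), cos(uv.x * 40.0 + time));\n"
    "    gl_FragColor = texture2D(sourceImage, uv);\n"
    "}\n";
```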

I’d never even think of the possibility of doing this per-vertex so I can’t comment on R2VB vs VTF - per-fragment is clearly the way to go.

First, please use fewer acronyms like R2VB and VTF; these things are so old that it took me a minute to figure out what you’re talking about.

Second, you want this to run on hardware 3 years old. So we’re talking DX10-class hardware. So why are you bothering with esoteric things like rendering to an FBO and then copying it to a buffer? If you want to render to a buffer object, then just use transform feedback. Or, if you really want to “render” to it from a fragment shader, you can create a buffer texture and write to it with image load/store (though that needs newer hardware than DX10-class). Either way, the data goes straight to where you want it: your buffer object. There’s no intermediate texture copy or anything; just your buffer.
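
Roughly (uncompiled GL 3.0-style sketch; `prog` and `numPoints` are placeholders, and your vertex shader is assumed to write the modified point to `out vec4 outPosition`):

```cpp
// Tell GL which varying to capture, then (re)link the program.
const GLchar* varyings[] = { "outPosition" };
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);

// Buffer object that will receive the transformed points.
GLuint tfBuffer;
glGenBuffers(1, &tfBuffer);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfBuffer);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER,
             numPoints * 4 * sizeof(float), NULL, GL_DYNAMIC_COPY);

// Run the vertex shader over the source points, capturing straight into tfBuffer.
glUseProgram(prog);
glEnable(GL_RASTERIZER_DISCARD);                 // we only want the captured vertices
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, numPoints);           // source vertices from your usual VBO/VAO
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

// tfBuffer now holds the transformed points; bind it as GL_ARRAY_BUFFER and draw.
glBindBuffer(GL_ARRAY_BUFFER, tfBuffer);
```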

BTW, there’s no such thing as VTF anymore. All modern hardware (5 years old or so) has unified shaders; every shader stage has more or less the same capabilities. So there’s nothing special about fetching from textures in your vertex shader (outside of the lack of derivatives, of course).
