Resize a buffer object after transform feedback
07-04-2007, 02:25 AM
I have a transform feedback running. The problem is that I have to allocate a chunk of memory large enough for the (unknown number of) values that I get back from the feedback.
Afterwards I would like to shrink this buffer to the required size. How can I do that? Will glBufferSubData reallocate the memory?
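For reference, glBufferSubData only updates bytes inside the buffer's existing storage; it never changes the allocation size. A CPU-side analogy in plain C (no GL; the function name here is made up for illustration):

```c
#include <stdlib.h>
#include <string.h>

/* CPU-side analogy, not GL: glBufferSubData acts like memcpy into an
 * existing allocation, so the storage size stays fixed.  The missing
 * operation is the equivalent of realloc: keep the first `used`
 * bytes and release the rest. */
unsigned char *shrink_to_fit(unsigned char *buf, size_t used)
{
    unsigned char *exact = realloc(buf, used);  /* no buffer-object counterpart */
    return exact ? exact : buf;                 /* on failure, keep the old block */
}
```

realloc is exactly the server-side operation buffer objects are missing: release the tail of an allocation while preserving the prefix in place.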
07-04-2007, 08:53 AM
I don't think it is possible... You could create another buffer object and copy the data, but that would be very inefficient. We need a server-side buffer copy :)
07-05-2007, 01:58 AM
We sure do. I'm very surprised that it is not possible. That leaves me with a buffer which is way too big, but I can't guess the amount of geometry the transform feedback will create.
So really, there is no reasonable solution for now?!
07-05-2007, 11:42 AM
If you could copy the interesting data subset to a new buffer object efficiently, would that solve this problem ?
07-06-2007, 12:47 AM
Well, yes, that would be a solution. But actually I want to reduce the size in order to gain efficiency, so if the copying takes too much time/memory, then it wouldn't be worth the hassle.
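The only portable workaround at this point is a round trip through client memory: read the used range back (glGetBufferSubData) and upload it into a fresh, exactly-sized buffer (glBufferData). Modeled here with plain memory to make the cost visible; the helper name is made up for illustration:

```c
#include <stdlib.h>
#include <string.h>

/* Plain-memory model of the copy-to-a-new-buffer workaround.  The GL
 * version would be glGetBufferSubData out of the oversized buffer
 * followed by glBufferData into a new one, i.e. two transfers over
 * the bus instead of a single server-side copy. */
unsigned char *copy_used_prefix(const unsigned char *big, size_t used)
{
    unsigned char *exact = malloc(used);
    if (exact)
        memcpy(exact, big, used);   /* only the bytes actually written */
    return exact;
}
```

Whether this beats simply keeping the oversized buffer depends on how often the shrink happens versus how long the buffer lives.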
07-06-2007, 11:01 AM
"But I can't guess the amount of geometry the transform feedback will create."
You can't? Not at all?
I mean, it is your shader doing the transforms. You're sending it vertex data. Unless the algorithm is non-deterministic, I don't see how you couldn't make at least a reasonable estimate of the space it will take.
07-06-2007, 06:50 PM
Yeah, you should definitely have some reasonable upper bound on this thing. And I would guess even feedback has some fairly hefty limitations on the size of the output anyway (I haven't looked at the spec on it yet).
07-06-2007, 08:56 PM
I don't see any limitations on the total size of the output with transform feedback. Whether or not a reasonable limit could be estimated in advance for this particular problem, a mechanism for direct copies between buffer objects would be nice to have.
07-06-2007, 09:03 PM
You're right, just had a peek. There is a limitation on the size of each vertex though, so if you have a bound on the number of verts, the total size probably isn't much of a stretch.
What I didn't see is something similar to DrawAuto in d3d10, where you can turn around and render from a buffer without having to query the number of primitives written. Don't know if that's worth considering, but it seems pretty handy.
AFAIK Geometry Shaders have an output limit (though it is pretty high).
Haven't worked with GSes yet; one question I was asking myself: is it possible to "recursively" emit triangles? I mean, do the vertices that are generated in the GS pass through the VS, and the emitted triangles through the GS again, so that I could emit another triangle from a triangle that was generated earlier?
Or does only "original" geometry pass through the GS?
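On the point about querying the number of primitives written: once that query result is back, the exact buffer size follows directly. A hypothetical helper (the parameter names are assumptions for illustration, not GL API):

```c
#include <stddef.h>

/* Bytes actually produced by feedback, given the primitives-written
 * query result.  verts_per_prim is 3 for triangles, 2 for lines, 1
 * for points; stride_bytes is the size of one captured vertex. */
size_t feedback_bytes(size_t prims_written,
                      size_t verts_per_prim,
                      size_t stride_bytes)
{
    return prims_written * verts_per_prim * stride_bytes;
}
```

This is the size the shrunk buffer would need, and also the count a DrawAuto-style call would spare you from fetching back to the CPU.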
07-07-2007, 12:52 PM
You would have to issue a second draw call on the results of transform feedback to use the Geometry Shader recursively. I think the new ATI cards don't have a limit on the total number of output vertices per input primitive, and even the G80 can output at most 1024 floats per input so you can get quite a bit of data expansion.
Buffer object copies would be good! I'm sure all GPUs are quite capable of copying memory around quickly, and there's a hole in that we can do this for PIXEL_PACK and PIXEL_UNPACK but not for just copying between buffers.
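The 1024-floats-per-input figure also yields a worst-case allocation bound before the draw call even runs. A sketch, assuming that per-primitive cap (the constant is illustrative; real code should query the implementation limit rather than hard-coding it):

```c
#include <stddef.h>

/* Worst-case feedback allocation: assume every input primitive
 * expands to the full 1024-float output cap mentioned for G80.
 * Loose, but guaranteed sufficient. */
#define MAX_OUTPUT_FLOATS_PER_PRIM 1024

size_t worst_case_bytes(size_t input_prims)
{
    return input_prims * MAX_OUTPUT_FLOATS_PER_PRIM * sizeof(float);
}
```

Allocating this up front and shrinking afterwards is exactly the pattern the original poster is after; the gap between this bound and the typical output is what makes the shrink worthwhile.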
07-08-2007, 04:33 PM
"is it possible to "recursively" emit triangles?"
Nope. And it's probably a good idea to keep it that way, for the sake of hardware efficiency (only Intel right now would have the hardware to put a stack in the geometry shader). Fortunately, recursive algorithms can be rewritten iteratively, so it's not that big of a deal.