PDA

View Full Version : Read and write from/to the same texture



Java Cool Dude
06-26-2008, 11:30 AM
Hey all,
Is there any way on the PC to read and write from the same texture at the same time?
I know for sure that on the PS3 it is possible, and I use it quite often, but what about OpenGL on the PC?
Thanks
PS: This is not a topic to discuss the caveats of reading/writing at the same time from the same texture <3

Komat
06-26-2008, 01:52 PM
The specification states (section 4.4.3 of the EXT_framebuffer_object (http://oss.sgi.com/projects/ogl-sample/registry/EXT/framebuffer_object.txt) extension) that when a texture is used as both source and target (with a few exceptions relating to different mipmap levels), the values of fragments rendered in that state are undefined. Unless your program wants to rely on undefined behavior, or you have a different extension or vendor statement defining the behavior in that case, it is not possible to read and write using the same texture.
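[Editorial aside: one common reason this is undefined can be illustrated in plain C. This is a hypothetical simulation, not GL code: a read-through cache that is never invalidated by writes, roughly how a texture cache can behave mid-draw while ROP writes go straight to memory.]

```c
/* Hypothetical model of a texture-cache feedback loop (not GL code).
 * Reads go through a small cache; writes bypass it, so a reader can
 * keep seeing stale data after the write has landed in "VRAM". */
#define TEX_SIZE 4

int vram[TEX_SIZE];      /* the "texture" as stored in memory      */
int tex_cache[TEX_SIZE]; /* per-texture-unit read cache            */
int tex_cached[TEX_SIZE];/* 1 if the cache holds this texel        */

/* Texture fetch: fills the cache on first use, then serves from it. */
int sample(int i) {
    if (!tex_cached[i]) {
        tex_cache[i] = vram[i];
        tex_cached[i] = 1;
    }
    return tex_cache[i];
}

/* Fragment write: goes directly to memory, cache is NOT invalidated. */
void write_fragment(int i, int v) {
    vram[i] = v;
}
```

With this model, `write_fragment(0, 99)` after a `sample(0)` leaves later samples of texel 0 returning the old value: exactly the kind of result the spec declines to define, since real hardware may or may not have such a cache in the path.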

Java Cool Dude
06-26-2008, 01:56 PM
Thanks, I put together a quick sample using an FBO and it worked like a charm.
Keep in mind that I am reading from and writing to the same pixel, which works out fine on the PS3 and on the PC as well :)
JCD

Zengar
06-26-2008, 02:18 PM
Still, this is something the specification clearly does not define. It may work (and does work), but I would advise against it, as a future driver revision or future hardware may behave in a different way!

Ilian Dinev
06-26-2008, 03:11 PM
Well, from JCD's homepage, it seems he's working at nVidia, so it's a bit reassuring :) . [I suppose they test drivers against their demos and popular games before releasing.]
Anyway, if you read from and write to the same coordinate, maybe there won't be problems now or in the future. And in any case it's a minor VRAM-requirements optimization that can be fixed with a few lines of code if it breaks.

Java Cool Dude
06-26-2008, 04:08 PM
Hey,
I don't work at NVIDIA anymore, otherwise I could have just asked one of their employees directly for an answer.
Time to modify my profile, it's been a while since I have posted something on these boards :)
JCD

Komat
06-26-2008, 04:31 PM
Thanks, I put together a quick sample using an FBO and it worked like a charm.

It might work now, however you have no guarantee that it will work on future hardware (e.g. hardware with bigger caches, fewer forced stalls, or a different architecture) or in a different usage scenario. Actually, I think there already was PC hardware where this would fail in some cases: the Kyro II chip, which used tile-based rendering. Some integrated Intel chips also support a tile-based rendering mode, so they might be sensitive to this as well.

I would prefer for the specification to define all undefined behaviors as errors requiring failure. Unfortunately, that is impossible to do. The result is that people may rely on undefined behavior which "just works", which in turn can cause problems for driver writers when they wish to avoid breaking older applications.



Keep in mind that I am reading from and writing to the same pixel, which works out fine on the PS3 and on the PC as well :)

On the PS3 it might be perfectly fine to do, because the hardware is fixed. The PC does not have that luxury.

Java Cool Dude
06-26-2008, 07:08 PM
You guys are definitely right; I shouldn't base my applications on undefined behavior, so I went back to the drawing board and decided to go with the classic ping-pong approach :)
Without giving too many details away, I am writing a new algorithm for volumetric light scattering + occlusion.
Thanks for your input.
JCD
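
[Editorial aside: the ping-pong approach JCD settled on can be sketched in a few lines of C. The handles below are plain ints standing in for GL texture/FBO ids; in real code you would attach tex[write] as the FBO color attachment and bind tex[read] as the sampler source, then swap after each pass so the freshly written texture becomes the next source.]

```c
/* Minimal ping-pong sketch (hypothetical helper, not a GL API):
 * two same-sized textures, one read and one written per pass,
 * roles swapped between passes so no texture is ever both. */
typedef struct {
    unsigned tex[2];  /* two textures of identical size/format */
    int read;         /* index of the texture currently sampled */
} PingPong;

void pingpong_init(PingPong *p, unsigned tex_a, unsigned tex_b) {
    p->tex[0] = tex_a;
    p->tex[1] = tex_b;
    p->read = 0;
}

/* Texture to bind as the sampler source for this pass. */
unsigned pingpong_src(const PingPong *p) { return p->tex[p->read]; }

/* Texture to attach as the render target for this pass. */
unsigned pingpong_dst(const PingPong *p) { return p->tex[1 - p->read]; }

/* Call after each pass: the freshly written texture becomes the source. */
void pingpong_swap(PingPong *p) { p->read = 1 - p->read; }
```

Because `pingpong_src` and `pingpong_dst` always return different handles, every pass stays within the behavior the FBO spec defines, at the cost of one extra texture's worth of VRAM.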