
TexSubImage into a Sparse Texture on Nvidia



JoeSSU
02-23-2015, 04:35 PM
I am following the github apitest sparse bindless texture array sample and implementing it into my own renderer.

While admittedly I haven't run apitest on my machine yet (compile errors), I am noticing very strange behavior in my own renderer when using glTexSubImage2D to upload to the sparse texture.

It seems like the Virtual Pages are not receiving the correct image data.

Example:

Original Image:

[image attachment 1636]

Next: the full-size image, rendered at 500x311 (texStorage is 512x384 so the dimensions are compatible with the sparse texture's virtual page size)

[image attachment 1637]

As you can see, it looks like a portion of the subimage (a sub-subimage) was loaded into each individual virtual page.

To test this, I cropped the image to the size of just one virtual page (256x128). Here is the result:

[image attachment 1638]

As expected, the single virtual page was filled with the exact, correct, cropped image.

Lastly, I increased the crop size to two virtual pages' worth (256x256), one stacked on top of the other. Here is the result:

[image attachment 1639]

This suggests that calling glTexSubImage2D with more texel data than one virtual page's worth goes wrong.
So do I need to call SubImage manually per page, with offsets, in order to write to the correct virtual pages? No such logic exists in the example code.

MaxDaten
02-23-2015, 07:39 PM
I am experiencing a similar problem with my sparse 3D texture. I create a sparse 3D texture with dimensions 256x256x256, set GL_TEXTURE_SPARSE_ARB to GL_TRUE, and do not commit any pages. When I run my shader, which writes data with imageStore into the completely uncommitted texture, it seems every write op puts something into the z=0 and/or z=255 layer, and reading from the texture at any position fetches from the z=0/z=255 layer. It seems like all writes and reads are clamped to the z border layers. Explicitly decommitting the complete texture gives the same result.

For me the sparse texture implementation does not behave as expected; at least I'm not the only one. ;)

My current problem:
[image attachment 1640]

These are spheres hovering over a plane enclosed by two walls. Remember: nothing is committed, and according to the spec, writes should have no effect. Yes, reads are undefined according to the spec. I implemented a page mask to track pages to be committed or decommitted, and to mask out reads from uncommitted pages, yet my shader is still reading garbage from the supposedly safe zones.

JoeSSU
02-23-2015, 08:38 PM
I just ran apitest and the image on the quads seems off as well. Since the spec says nothing about managing texture data on the actual virtual pages, I'm going to assume this is a driver bug.

wendaddy
03-24-2015, 04:04 PM
The spec says about accessing uncommitted pages:

* Reads from such regions produce undefined data, but otherwise have
no adverse effect. This includes samples required for the
implementation of texture filtering, mipmap generation and so on.

* Writes to such regions are ignored. The GL may attempt to write to
uncommitted regions but the effect of doing so will be benign.

I don't think that this is a driver bug because reads are undefined which means that rendering using a totally uncommitted texture is also undefined.

JoeSSU
03-27-2015, 11:46 AM
The spec says about accessing uncommitted pages:

* Reads from such regions produce undefined data, but otherwise have
no adverse effect. This includes samples required for the
implementation of texture filtering, mipmap generation and so on.

* Writes to such regions are ignored. The GL may attempt to write to
uncommitted regions but the effect of doing so will be benign.

I don't think that this is a driver bug because reads are undefined which means that rendering using a totally uncommitted texture is also undefined.

So one must commit the pages before calling SubImage, even if the texture declared sparse is bound? I'm getting an invalid-value error when I commit before calling SubImage, which seems to be a known issue. I'll update if I find something.

JoeSSU
03-27-2015, 11:53 AM
You were right!

This error exists in Nvidia's own apitest, hahaha.