Samplers in vertex shaders not possible?

i’ve never wanted to put a texture lookup in a vertex shader until today.

the cg compiler doesn’t reject samplers in the argument list… but it seems to reject the use of ‘tex2D’ by declaring that there are no overloads available.

a cg manual seems to imply that samplers can only be used in fragment shaders, and i can’t find any examples of samplers in vertex programs.

i’ve always assumed this is possible.

if not, it really cuts into the imagination.

assuming it isn’t possible, i’m interested in discussing viable alternatives, if any exist.

sincerely,

michael

PS: if using cg terminology is problematic here, i will try to review the glsl specifications… and if this forum isn’t about gpu programming in general, but rather glsl quirks, then please let me know.

EDIT: i’m pretty sure this has to be possible. don’t people use texture lookups to project heightfields on the gpu all the time?
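
what i have in mind is roughly this, a minimal untested sketch in glsl (wrapped in a C string; the ‘heightMap’ sampler name is my own invention):

  /* sketch only: a glsl vertex shader that displaces a grid vertex by a
     height sampled from a texture. vertex shaders have no derivatives, so
     the lod-explicit lookup texture2DLod() would have to be used. */
  const char *heightfield_vs =
      "uniform sampler2D heightMap;\n"
      "void main() {\n"
      "    float h = texture2DLod(heightMap, gl_MultiTexCoord0.st, 0.0).x;\n"
      "    vec4 displaced = gl_Vertex + vec4(0.0, h, 0.0, 0.0);\n"
      "    gl_Position = gl_ModelViewProjectionMatrix * displaced;\n"
      "}\n";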

oiii, i found this bit of info here:

http://www.gamedev.net/community/forums/topic.asp?topic_id=272158&whichpage=1&#1679862

looks like i need to go out and find a geforce6 model.

$$$$$$$$$$$$$$$$…

You need VS3.0-capable hardware. At this time the only solution is NV4x. Textures must be a float type, filtering is not supported, you can use at most 4 texture samplers in the VS, and sampling inside the VS costs a lot of GPU clocks.
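
For example, texture setup must look something like this (just a sketch; I assume the ATI_texture_float enum for the internal format):

  /* sketch: create a float texture the vertex shader can sample.
     filtering must be NEAREST, since VS texture fetch is unfiltered. */
  GLuint tex;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI, 512, 512, 0,
               GL_RGBA, GL_FLOAT, NULL); /* storage only, upload later */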

yooyo

Another solution would be to convert the vertex program to a fragment program and do the following:

  1. Copy untransformed vertices to floating point texture(s)
  2. Use a fragment program + render to texture to transform the vertices.
  3. Copy the resulting texture to a vertex buffer.

This should work on most moderately high-end GPUs.
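
A rough sketch of the fragment program in step 2 (GLSL in a C string; the uniform names are made up):

  /* sketch: one texel in = one untransformed vertex, one texel out = one
     transformed vertex, drawn as a full-screen quad into a float target. */
  const char *transform_fs =
      "uniform sampler2D vertices; /* RGBA32F, one vertex per texel */\n"
      "uniform mat4 transform;     /* whatever the vertex program did */\n"
      "void main() {\n"
      "    vec4 v = texture2D(vertices, gl_TexCoord[0].st);\n"
      "    gl_FragColor = transform * v;\n"
      "}\n";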

/A.B.

Originally posted by yooyo:
You need VS3.0-capable hardware. At this time the only solution is NV4x. Textures must be a float type, filtering is not supported, you can use at most 4 texture samplers in the VS, and sampling inside the VS costs a lot of GPU clocks.

yooyo
ouch… i’m not happy to hear that, but i’m damn glad i asked.

i was thinking about going with this getup:

http://www.xpcgear.com/gigabytegv3d1.html

but with no filtering, float textures required, and unrealistic overhead… what’s the point, really?

the fact that vertex shaders can’t sample textures takes all the fun out of them for me. shaders would be so much more powerful and interesting if vertex shaders could sample textures as fluidly as pixel shaders… why is this not possible?

Originally posted by brinck:
Another solution would be to convert the vertex program to a fragment program and do the following:

  1. Copy untransformed vertices to floating point texture(s)
  2. Use a fragment program + render to texture to transform the vertices.
  3. Copy the resulting texture to a vertex buffer.

This should work on most moderately high-end GPUs.

/A.B.
that would work, though it would require 3 times as much video memory in my case.

the main problem i personally have with this approach is that it is too ‘back door’ for me to implement into the system without hardcoding it… and personally i never hardcode anything unless i plan on getting around to doing it properly.

i’ve been wanting to ask an off-topic question lately, though i haven’t really considered it realistic.

i’m basically just curious if there is any efficient way to go about copying video memory to another location in video memory. is there any plan to allow for this? is there a dispatch sequence which could facilitate this reasonably well, that could be set up as a sort of general-purpose function? i’m assuming at this time a render-to-texture technique would have to be used to make the transfer, whether you want to modulate the data or not. i’m just curious if it would be worth doing? i should probably actually set up a general-purpose system for managing this sort of transfer and integrate it with the gpu programming interface. with enough work i might be able to reasonably set up your solution, brinck, without hardcoding anything.
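
the closest thing i know of today is copying out of the framebuffer into a texture, something like this sketch (no modulation; ‘dst_tex’ is just a texture assumed to have been created earlier):

  /* sketch: copy a framebuffer region into an existing texture without a
     round trip through system memory. */
  void copy_framebuffer_to_texture(GLuint dst_tex)
  {
      glBindTexture(GL_TEXTURE_2D, dst_tex);
      glCopyTexSubImage2D(GL_TEXTURE_2D, 0, /* target, mip level     */
                          0, 0,             /* offset inside dst_tex */
                          0, 0, 512, 512);  /* framebuffer region    */
  }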

and just before i go… i don’t think i made it clear enough before… but you really saved my day, yooyo… i’ve been meaning to go 64bit pcie cold turkey for a long time now, and this NV40 development appeared to be the last straw. but i know i would’ve been sickly disappointed if i had invested in a geforce6 card and not had filtering etc. in the vertex shader. definitely the most consequential input i’ve received in these forums yet :)

thanks a lot.

sincerely,

michael

shaders would be so much more powerful and interesting if vertex shaders could sample textures as fluidly as pixel shaders… why is this not possible?
For the same reason that the initial vertex and fragment shaders couldn’t handle conditional branching, or only had a few instructions, or had limited opcodes, or why we don’t have primitive programs: because the hardware isn’t done yet.

Texture units were never meant to be accessed from vertex programs, so building that hardware is non-trivial. And the initial implementations of it will be slow. Just accept that what you’re trying to do won’t be possible for the time being. Spend the time using the functionality that is there rather than pining for functionality that will eventually be there.

Originally posted by brinck:
Another solution would be to convert the vertex program to a fragment program and do the following:

  1. Copy untransformed vertices to floating point texture(s)
  2. Use a fragment program + render to texture to transform the vertices.
  3. Copy the resulting texture to a vertex buffer.

This should work on most moderately high-end GPUs.

/A.B.
Well… this solution might work… but ATI has 24-bit internal precision in the fragment shader, so it may produce noticeable numerical errors.

This solution works on NV with PBO & VBO. Just replace step 3 with:
3. bind the buffer as a PBO
4. readpixels into the PBO buffer
5. rebind the PBO buffer as a VBO
6. set up vertex pointers and do the rest…
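
In code the whole thing can look like this (just a sketch; I assume the EXT_pixel_buffer_object enums and a 512x512 RGBA32F render target):

  /* sketch: read the transformed vertices into a buffer object, then reuse
     that same buffer as the vertex array source. */
  GLuint buf;
  glGenBuffersARB(1, &buf);

  /* 3. bind buffer as PBO, sized for 512x512 RGBA float texels */
  glBindBufferARB(GL_PIXEL_PACK_BUFFER_EXT, buf);
  glBufferDataARB(GL_PIXEL_PACK_BUFFER_EXT,
                  512 * 512 * 4 * sizeof(GLfloat), NULL, GL_STREAM_COPY_ARB);

  /* 4. readpixels lands in the PBO (the pointer is an offset now) */
  glReadPixels(0, 0, 512, 512, GL_RGBA, GL_FLOAT, (GLvoid *)0);
  glBindBufferARB(GL_PIXEL_PACK_BUFFER_EXT, 0);

  /* 5. rebind the same buffer as a VBO */
  glBindBufferARB(GL_ARRAY_BUFFER_ARB, buf);

  /* 6. set up vertex pointers and draw */
  glEnableClientState(GL_VERTEX_ARRAY);
  glVertexPointer(4, GL_FLOAT, 0, (GLvoid *)0);
  /* glDrawElements(GL_TRIANGLES, ...) with indices from another VBO */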

I did some tests on NV40 using this approach with a 512x512 RGBA32F texture and got ~95 fps on a 6800U. The test transforms a grid of 512x512 vertices and renders half a million triangles with a glDrawElements(GL_TRIANGLES, …) call. The triangles are in a VBO too.

yooyo

Originally posted by Korval:
shaders would be so much more powerful and interesting if vertex shaders could sample textures as fluidly as pixel shaders… why is this not possible?
For the same reason that the initial vertex and fragment shaders couldn’t handle conditional branching, or only had a few instructions, or had limited opcodes, or why we don’t have primitive programs: because the hardware isn’t done yet.

Texture units were never meant to be accessed from vertex programs, so building that hardware is non-trivial. And the initial implementations of it will be slow. Just accept that what you’re trying to do won’t be possible for the time being. Spend the time using the functionality that is there rather than pining for functionality that will eventually be there.

yes naturally… i’m still interested in any best-guess projections though.

edit: all i was really saying is that modern cards appear to have 8-plus pixel shader units running in parallel and 3 vertex units, as i read somewhere. if 8 pixel units can be poking at the same texture at the same time, it seems like the vertex units should be able to as well, relatively easily. i wouldn’t even care if the texture was duplicated so that it could sit closer to the vertex units, assuming of course the vertex and pixel shaders are poking at the same textures.

the thing for me though is i always assumed this was possible… so even though i make it a point not to get bogged down working with things like shaders… and certainly not depending on them, i have given some thought to different shaders in the sense of ‘i could do this if i wanted to’… and i never ruled out sampling textures from vertex shaders. but anyhow, silver lining: that functionality, it seems, will get here soon enough, so all is not lost.

i’m actually fairly relieved. i was looking at shelling out around $800~1000… now i will probably wait until this functionality is up to snuff so i can get everything i want in one swoop rather than thinking about cannibalizing perfectly good setups. i wasn’t ready to make the jump, and now i don’t have a good enough reason to. so i’m happy with that in the end.

for the application i have immediately in mind, i will probably just settle for a slightly cheesier effect, or lower performance, and do the filtering myself, settling on one float per vertex which i will probably try to find some way to stick in the w component. i will have to do the rest in the pixel shader, which is not good for my overall plans, but should produce cleaner images.

i was basically planning on doing a two-pass cloud rendering algorithm in the immediate time frame. the idea is basically a central tessellated plane. on the first pass the vertex program would grab a couple of displacement offsets from a texture, then take another couple of textures and get the width of the cloud at that vertex along the positive and negative world axes, and interpolate between them relative to the camera to get the alpha blending component of the vertex.

now i will probably do the first displacement on the cpu, then settle for clouds that are symmetrically fluffed up and down (though to a different scale), look up the densities in a pixel program, and do the alpha per-pixel there, which will probably look a lot better than vertex colouring anyhow.

the vertex program might also be responsible for looping the texture coordinates slowly to create a cheesy effect of clouds crawling through the sky.
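
something as trivial as this in the vertex program would do it (sketch; ‘time’ and ‘windVelocity’ are made-up uniforms):

  /* sketch: scroll the cloud texture coordinates over time so the clouds
     appear to crawl across the sky (GL_REPEAT does the looping). */
  const char *cloud_scroll_vs =
      "uniform float time;        /* seconds, fed in each frame */\n"
      "uniform vec2 windVelocity; /* uv units per second */\n"
      "void main() {\n"
      "    gl_TexCoord[0] = gl_MultiTexCoord0\n"
      "                   + vec4(windVelocity * time, 0.0, 0.0);\n"
      "    gl_Position = ftransform();\n"
      "}\n";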

if i could do the texture lookup in the vertex shader though, i could potentially animate the actual form of the clouds slowly by uploading a few lines at a time to the cloud textures. presumably the animation would be subtle enough that the scan line would not be noticeable. can’t do that yet though, because the offsets are done on the cpu for now. not that i know of any great way of producing cloud animations (offline more than likely).

Originally posted by yooyo:
Well… this solution might work… but ATI has 24-bit internal precision in the fragment shader, so it may produce noticeable numerical errors.

This solution works on NV with PBO & VBO. Just replace step 3 with:
3. bind the buffer as a PBO
4. readpixels into the PBO buffer
5. rebind the PBO buffer as a VBO
6. set up vertex pointers and do the rest…
thanks again for the 24-bit precision tip-off. does nvidia use 32 bits or not?

as for your steps 3~6, for what it’s worth, that is how i personally read brinck’s suggestion, after our discussion earlier regarding mixing pbo and vbo bindings.

i’m seriously thinking about setting up an all-purpose high-level memory transfer interface to fluidly manage video memory transfers with optional modulation. i’m not entirely sure what opportunities it would create for run-time programming, but it sounds very useful. it just needs to be reduced to as few steps as possible.

i meant to ask before, yooyo: is there any central place in particular where you find these fine-grained hardware caveats? you certainly can’t find this stuff on nvidia’s internet presence.

is there anywhere i can look before ponying up for a card that would say something like:

“before you shell out… the fragment shader uses 24-bit internal precision. texture sampling in the vertex shader is not filtered and requires floating-point formats…” i would just like to be able to know, straight up, something as simple as the number of bits in the depth buffer. there is nowhere i can find that has this kind of breakdown for cards. nothing even close.

thanks again for the 24-bit precision tip-off. does nvidia use 32 bits or not?

ATI always works at 24-bit precision. NV can work in 16-bit (using ‘half’ in shaders) or 32-bit (full precision) mode.

i meant to ask before, yooyo: is there any central place in particular where you find these fine-grained hardware caveats? you certainly can’t find this stuff on nvidia’s internet presence.

Everything is in NVidia’s docs. Download all the docs from the developer conferences and read them carefully.

before you shell out… the fragment shader uses 24-bit internal precision.

ATI = 24-bit
NV = 16-bit or 32-bit

texture sampling in the vertex shader is not filtered and requires floating-point formats

Yes. And only on NV4x.

i would just like to be able to know, straight up, something as simple as the number of bits in the depth buffer

Usually 16 or 24 bits.

And one more thing… Assuming you use a pbuffer and NV4x, step 3 can be changed to:
3. Bind the pbuffer as a float texture
4. Use a texture fetch from the vertex shader
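
In code it can look like this (Windows/WGL sketch; I assume WGL_ARB_render_texture, with HPBUFFERARB coming from wglext.h):

  /* sketch: skip the readpixels copy entirely. bind the pbuffer's color
     buffer as a float texture and fetch it straight from the vertex shader. */
  void draw_with_vertex_fetch(HPBUFFERARB pbuf, GLuint pbuf_tex)
  {
      glBindTexture(GL_TEXTURE_2D, pbuf_tex);
      wglBindTexImageARB(pbuf, WGL_FRONT_LEFT_ARB);

      /* ...draw here; the vertex shader samples with texture2DLod(),
         since vertex fetches are unfiltered and need an explicit lod... */

      wglReleaseTexImageARB(pbuf, WGL_FRONT_LEFT_ARB);
  }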

yooyo

thanks again yooyo, but i’m afraid you misunderstood my point this time.

i was basically trying to say it would be nice if there were somewhere to go where you could get all of this information laid out in tabular form, ie: a big table of hardware specifications and caveats on a chipset basis.

i was also asking: if something like this exists, then where? you shouldn’t have to read through nvidia docs to get this information; it should be in their ‘products’ section within 3 jumps from their consumer ‘home page’.

i was also asking: if something like this exists, then where? you shouldn’t have to read through nvidia docs to get this information; it should be in their ‘products’ section within 3 jumps from their consumer ‘home page’.
That makes no sense. The “Products” section doesn’t need detailed information about the inner workings of the hardware. If you’re someone who plays games and doesn’t know C++ from Visual Basic, there is no reason for you to know that your video card can use 16 or 24-bit depth buffers. That’s game developer information. As such, it lives in the nVidia developer documentation section, not the “Products” section.
