VS/PS 3.0, etc.

I was under the impression that GLSL didn’t have restrictions as to program length, register usage, etc, the way DirectX does. If this is indeed true, then why are people excited to see new demos using vs/ps 3.0?

No offense meant to anyone, of course, but I’m thinking particularly of a certain water demo featured on the main OpenGL.org page. I wrote a water demo that did true perturbation on my FX 5200. Why is vertex perturbation such a big deal when it has been supported for so long, and is readily available via GLSL?

Since I’m the author of said demo, here’s my input:
VS 3.0 is only supported on a limited set of cards, namely the latest from NVIDIA and 3Dlabs, meaning that no cards other than those can fetch texture data in a vertex program (or shader, for that matter).
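For reference, a vertex texture fetch looks something like this in GLSL. This is just a sketch (the sampler and uniform names are made up, not from the demo), and it only compiles on SM 3.0-class hardware:

```glsl
// Vertex shader: displace each vertex by a height value stored in a
// texture. Requires hardware that supports texture fetches in the
// vertex stage (VS 3.0-class cards).
uniform sampler2D heightMap;   // hypothetical: height field rendered earlier
uniform float heightScale;     // hypothetical scale factor

void main()
{
    // The mip level must be given explicitly in the vertex stage,
    // hence texture2DLod rather than texture2D.
    float h = texture2DLod(heightMap, gl_MultiTexCoord0.xy, 0.0).r;
    vec4 pos = gl_Vertex;
    pos.y += h * heightScale;
    gl_Position = gl_ModelViewProjectionMatrix * pos;
}
```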
One of the common tricks to achieve the same effect on older-generation cards is to render the desired effect into a texture, read the data back with an expensive (read: utterly slow) glReadPixels or glGetTexImage, and finally upload it to the GPU through a dynamic VBO or a VA (or whatever… doh).
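In host code, that round trip looks roughly like this. A C sketch only, assuming a current GL context, a VBO created elsewhere, and that the heights were just rendered; sizes and names are illustrative (the 2004-era calls would carry an ARB suffix):

```c
/* Sketch: read a height field back from the framebuffer and upload it
 * to a dynamic VBO. The trip through client memory is the slow part. */
GLfloat heights[256 * 256];          /* illustrative size */
glReadPixels(0, 0, 256, 256, GL_RED, GL_FLOAT, heights);

glBindBuffer(GL_ARRAY_BUFFER, heightVBO);   /* heightVBO created elsewhere */
glBufferData(GL_ARRAY_BUFFER, sizeof(heights), heights, GL_DYNAMIC_DRAW);
```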
Another approach would be to render your logic into a PBO and then bind it as a VBO, which would bypass texture fetching in the vertex pipe; but for NOW, the only cards with some beta support for FBOs and PBOs are the ones likely to have native VS 3.0 support anyway.
Besides, I can’t speak for everyone, but I for one get excited when I see demos (or games, for that matter) that exploit a particular hardware feature of my graphics card, and it’s been that way for me ever since my GF2 MX (sigh).
Take care

I was under the impression that GLSL didn’t have restrictions as to program length, register usage, etc, the way DirectX does.
You’ve been misinformed. There’s a lot of misinformation about glslang, particularly due to what 3DLabs (and John Carmack) wanted it to do.

Glslang as it stands today has two kinds of limits. First, there are the limits that are reasonably easy to quantify: the number of vertex attributes, the number of varying floats, the number of uniforms available. These are things the user works with directly and that can easily be counted, and there are glGet calls to query their values on a per-implementation basis.
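For example, a C sketch assuming a current GL context (these are the modern enum and function names; the ARB_shader_objects-era equivalents carry an ARB suffix):

```c
/* Query the quantifiable GLSL limits for this implementation. */
GLint maxAttribs, maxVaryings, maxVertexUniforms;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxAttribs);
glGetIntegerv(GL_MAX_VARYING_FLOATS, &maxVaryings);
glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &maxVertexUniforms);
printf("attribs %d, varying floats %d, vertex uniform components %d\n",
       maxAttribs, maxVaryings, maxVertexUniforms);
```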

Then there’s the other kind of limit: the kind that glslang abstracts away. These are things that you can’t adequately describe in a high-level-language context. Things like instruction count (determined by compilation, so it’s hard to stay within the restriction), the number of temporary registers (the compiler decides how these are laid out), texture access dependencies (for R300/R420-class hardware only), and other hardware-dependent restrictions that simply can’t be effectively communicated to the user in any meaningful way.

Glslang does not expose these limits, but it does allow compilers to live freely within them: a glslang compiler is allowed to fail for any implementation-dependent reason, including violating instruction limits, texture dependency limits, and so forth.
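In practice, the only portable thing an application can do is compile, check the status, and read the info log. A C sketch with the modern names (the 2004-era equivalents are glGetObjectParameterivARB and glGetInfoLogARB):

```c
/* Compile a shader and report implementation-dependent failures.
 * 'shader' is assumed to already hold the GLSL source. */
glCompileShader(shader);

GLint ok = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
if (ok != GL_TRUE) {
    char log[1024];
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    /* The log is the only clue as to which hidden limit was hit. */
    fprintf(stderr, "compile failed: %s\n", log);
}
```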

This doesn’t let glslang get around those limitations; it just makes it that much harder for the user to know whether or not he has violated them. The theory is that, as hardware improves, we will stop running into those limits, so they won’t be much of a problem.

However, glslang was conceived with the possibility of transparent multipassing in mind. That is, if a shader cannot run in one pass on the hardware, the compiler could break it up into multiple passes, rendered to internal render targets, to produce the desired effect. By not exposing those limitations, glslang allows an implementation to accomplish this.

OK, I guess I didn’t understand a couple of things, and I apologize about the way my previous post sounded.

First off, I didn’t know about the use of texture reads from a vertex shader. When I did water perturbations, I passed certain values as uniforms, and the VS calculated the height each vertex should be at based on those values and the vertex’s initial location. Quite easy on an FX, but limited in interaction (ripples could only come from fixed locations, for example).

Second, it seems the excitement isn’t really over VS 3.0 itself (as it isn’t used in GL), but over the underlying ability of the graphics card that VS 3.0 represents. As cards support longer programs, more temporary registers, more uniforms and varyings, and so on, obviously people will be excited to put them to use.

Originally posted by Korval:
[quote]I was under the impression that GLSL didn’t have restrictions as to program length, register usage, etc, the way DirectX does.
You’ve been misinformed. There’s a lot of misinformation about glslang, particularly due to what 3DLabs (and John Carmack) wanted it to do.
[/quote]
Actually, it seems I was right: GLSL doesn’t have these limitations; the graphics cards do. It’s rather like programming for 16-bit real mode in C. While C itself has no limits on memory access, you couldn’t create a 2-megabyte array in it. When programming for 32-bit protected mode, however, multi-megabyte arrays were a very exciting thing.

So, the real excitement isn’t over GLSL being able to do something (since it always was), but the fact that the cards can now run it :slight_smile:

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.