Very useful features

The ability:

  1. To specify the entry point of a shader. Why is it
    restricted to main()?

  2. To compile a shader into ARB assembly in a string
    buffer, for debugging purposes.

  3. To read pixel values from a render buffer target
    or depth buffer using a position relative to the current
    fragment position. This is good for blending.

  4. To change the position of the incoming fragment.

Thanks.

  1. To specify the entry point of a shader. Why is it
    restricted to main()?

What would that matter? The entry point will always be a function that takes no parameters and returns nothing. In the long history of C/C++, the fixed name “main” has been far from the most important limitation in the language. Indeed, most C-derived languages, from D to Java to C#, use it.

Makes sense. Maybe it will matter once we have effect files that contain more than one shader…

… In which case you simply prepend a “#define” and get the same functionality, as sketched below …
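For illustration, a minimal sketch of that trick. The names myEntry and userShaderSource are hypothetical, and the source is assumed to carry no #version directive; since #version must be the first line, a real loader would splice the define in after it:

```c
/* Sketch: fake a custom entry point by #define-ing it to main.
   glShaderSource takes an array of strings, so nothing is copied. */
const char *prelude = "#define myEntry main\n";
const char *strings[2] = { prelude, userShaderSource }; /* hypothetical source */

GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(shader, 2, strings, NULL);
glCompileShader(shader);
```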

  1. I think it’s too late to bring in this one.

  2. There’s AMD ShaderAnalyzer, use that. You could also use the Cg compiler to translate your GLSL shaders into assembly, see man cgc; something like cgc -oglsl -profile arbfp1 shader.frag should emit ARB fragment-program assembly. With cgc (or cg.dll), you could even use GLSL in Direct3D. Unfortunately, too few people know about that.

  3. See GL_NV_texture_barrier. This extension can be used to accomplish a limited form of programmable blending; a sketch follows at the end of this post.

You generally shouldn’t read from a random pixel; random memory access is highly inefficient on a GPU. There are not many caches, and some kinds of GPU memory are not cached at all, so you have to stick to a very specific access pattern to maintain reasonable performance (don’t get me wrong, but the memory controller on some GPUs is quite stupid at this job).

  4. An even crazier example of random memory access. Better to use a CPU for this one, trust me.
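For what it’s worth, a minimal sketch of how that limited blending is usually set up. Every name here (fbo, colorTex, blendProg, the draw helpers) is hypothetical; only the GL calls are real:

```c
/* Sketch: limited programmable blending via GL_NV_texture_barrier.
   colorTex is BOTH the FBO's color attachment and bound for sampling. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glBindTexture(GL_TEXTURE_2D, colorTex);

drawFirstPass();      /* hypothetical: writes the destination colors */
glTextureBarrierNV(); /* makes those writes visible to texel reads   */

/* Second pass: the fragment shader may fetch its OWN texel only,
   e.g.  vec4 dst = texelFetch(colorTex, ivec2(gl_FragCoord.xy), 0);
   reading any other position is what the extension does not allow. */
glUseProgram(blendProg);
drawSecondPass();     /* hypothetical */
```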
  1. I’m not sure there is a real need for this. I would find it quite confusing not to have a single main entry point; it’s a bit of a cumbersome idea to me.

  2. ARB assembly would be of no use for debugging. It’s not related to the GPU’s internal instructions; it’s just another language.

  3. GPUs and CPUs share a common behavior these days: they read whole cache lines, 64 bytes being a common size. Chances are the data for the pixels around the one we read are loaded into the cache anyway. By “around” I mean in 2D, because 2D texture data in graphics memory is twiddled to reduce cache misses, reduce the number of cache-line reads, and consequently make the best use of the cache (a toy illustration of such a layout follows at the end of this post).

This way, multiple “position relative” reads could actually be efficient, as long as they stay in the neighborhood. However, there is no way GL_NV_texture_barrier works with random access, or at least not if you want to read anywhere other than your own fragment’s position.

  4. Crazy idea; it would complicate GPU design so much that you can just forget about it. Maybe with a cache the size of the framebuffer…
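To make “twiddled” concrete: one classic scheme is a Z-order (Morton) layout, where the bits of the x and y texel coordinates are interleaved so that 2D-adjacent texels land close together in linear memory. A toy sketch only; real GPU tiling schemes differ in the details:

```c
/* Toy Morton/Z-order index for a texel at (x, y): interleave the bits
   of x and y. Neighbouring texels in 2D end up at nearby addresses,
   so a 64-byte cache line covers a small 2D block, not a long row. */
unsigned morton2d(unsigned x, unsigned y)
{
    unsigned m = 0;
    for (unsigned b = 0; b < 16; ++b) {
        m |= ((x >> b) & 1u) << (2 * b);
        m |= ((y >> b) & 1u) << (2 * b + 1);
    }
    return m;
}
```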
If you view GLSL as I have, as a weird HLSL, you’re really setting yourself up for disappointment. Using GLSL effectively requires a slight shift in perspective, methinks.

Well, I guess we have run into a roadblock. We keep suggesting the same things over and over again.

#1? Are you kidding me?

Now I think whatever we suggest does not matter to the ARB.

But can’t we get thick line drawing back? :wink:

But can’t we get thick line drawing back?

Check the GL 3.2 specs :wink:

AFAIK the ARB actually relaxed the line-drawing limitations again. glLineWidth is definitely no longer deprecated, and line widths greater than 1 have been allowed since 3.2, if I remember correctly.
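For reference, a hedged sketch of the call in question (whether widths above 1.0 are accepted can still depend on the context and profile, so it is worth checking glGetError):

```c
/* Wide lines: support depends on the GL version/profile in use;
   some forward-compatible contexts reject widths greater than 1.0. */
glLineWidth(4.0f);
glDrawArrays(GL_LINES, 0, lineVertexCount); /* hypothetical vertex count */
```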

Although I am not really convinced that the ARB has actually decided how it wants to handle lines in the future.

Jan.

>> Now I think whatever we suggest does not matter to the ARB.

Sure it matters. I’m sure it matters very much.

I think the ARB has much, MUCH better things to do than go through freaky ideas.

Half of these “freaky” ideas are available in a competing API :slight_smile:

The rest of the freakiness is down to hardware limitations :frowning:

Open your mind, dude! :wink:

Half of these “freaky” ideas are available in a competing API

I think the point they’re making is that it doesn’t matter. At best, they’re just syntactic sugar.

The rest of the freakiness is down to hardware limitations

Limitations that are not likely to be relaxed anytime soon.