To specify the entry point of a shader. Why is it
restricted to main() only?
To compile a shader into ARB assembly in a string
buffer, for debugging purposes.
To read pixel values from a render buffer target
or depth buffer at a position relative to the current
fragment position. This would be good for blending.
To specify the entry point of a shader. Why is it
restricted to main() only?
What would that matter? The entry point will always be a function that takes no parameters and returns nothing. In the long history of C/C++, the need for “main” has been far from the most important limitation in the language. Indeed, most C-derived languages, from D to Java to C#, use it.
There’s AMD ShaderAnalyzer; use that. You could also use the Cg compiler to translate your GLSL shaders into assembly; see man cgc. With cgc (or cg.dll), you could even use GLSL in Direct3D. Unfortunately, too few people know about that.
See GL_NV_texture_barrier. This extension can be used to accomplish a limited form of programmable blending.
You generally shouldn’t read from arbitrary pixels; random memory access is highly inefficient on a GPU. There are not many caches, and some kinds of GPU memory are not cached at all, so you have to stick to a very specific access pattern to maintain reasonable performance (don’t get me wrong, but the memory controller of some GPUs is quite stupid at this job).
An even crazier example of random memory access. Better to use a CPU for this one, trust me.
I’m not sure there is a real need for this. I find it quite confusing not to have a main entry point. It is a bit of a cumbersome idea to me.
ARB assembly would be of no use for debugging. It’s not related to the internal GPU instructions; it’s just another language.
GPUs and CPUs nowadays share a common behavior: they read whole cache lines, and 64 bytes is often the size used. Chances are that the data from pixels around the pixel we read are loaded into the cache anyway. When I say “around”, I mean in 2D for a 2D texture in graphics memory, because the data are twiddled to reduce cache misses and the number of cache-line reads, and consequently to reduce the required cache size. In short, to make the best use of the cache.
This way, multiple “position relative” reads would be efficient as long as they stay near the current fragment. However, GL_NV_texture_barrier does not work with random access, or at least not if you want to read anywhere other than the fragment’s own position.
A crazy idea; it would complicate GPU design so much that you can just forget about it. Maybe with a cache the size of the framebuffer…
If you view GLSL as I have, as a weird HLSL, you’re really setting yourself up for disappointment. Using GLSL effectively requires a slight shift in perspective, methinks.
AFAIK the ARB actually relaxed the line-drawing limitations again. glLineWidth is definitely no longer deprecated, and line widths greater than 1 are allowed since 3.2, if I remember correctly.
Although I am not really convinced that the ARB has decided how to handle lines in the future.