Global persistent vars

If I have a global variable declared like

int state;

and then in main

  
void main()
{
    if (state != 99)
    {
        // blah
    }
    else
    {
        state = 99;
    }
}

Will the value of state be preserved the next time I run the vertex shader?

Does this force a software implementation?

I can see from the OSL API by Rost that there is no preservation of variables between shader executions, but it works somehow in software on my HW…

Will the value of state be preserved the next time I run the vertex shader?

No, there’s currently no way to share variables between separate vertex/fragment shader invocations.

Yeah, I believe the actual hardware will maintain the value in a certain register across individual shader executions (assuming it wasn’t overwritten by a subsequent one), but definitely not on a per-shader basis (after others were executed in between). Even then, I don’t know whether the GL driver will do that, as it’s only guaranteed for the assembly version of DX. That’s why, I’m guessing, their instancing API was a fairly easy add (or maybe I’m trivializing it :stuck_out_tongue: ).

Shaders are an input-output paradigm only (until something crazy changes :stuck_out_tongue: ).

The instance of a shader running on one vertex cannot share any information with a theoretical instance running on another vertex. Same with fragment shaders.
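If you do need per-draw state, the usual workaround is to keep it on the CPU side and feed it in as a uniform each frame. A minimal sketch (the variable name and usage are illustrative, not from the original posts):

```glsl
// Hypothetical sketch: the application owns the state and uploads it
// before each draw (e.g. with glUniform1i); the shader only reads it.
uniform int state;

void main()
{
    if (state != 99)
    {
        // blah
    }
    // Note: the shader cannot write back to 'state'; any update has
    // to happen on the CPU between draw calls.
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```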

Originally posted by ToolTech:
I can see from the OSL API by Rost that there is no preservation of variables between shader executions, but it works somehow in software on my HW…
“It works somehow in software on my HW” because you are relying on the undefined behavior of an uninitialized variable in your shader. Example:

void main( void ) {
   vec4 garbage;
   gl_FragColor = garbage * 0.5 + 0.5;   // UNDEFINED
   garbage = vec4( 0.0, 0.0, 1.0, 1.0 ); // BLUE
}

If the above fragment shader runs in hardware, whatever garbage happens to be in garbage will be output as gl_FragColor.

If you force the above fragment to run in software, it’s possible that the garbage left over in garbage is BLUE. That UNDEFINED behavior is subject to change.

-mr. bill

So what you imply is that it might change in the future to be a defined value :wink:

Dare I ask what you’re hoping to achieve with this? If you’re just being curious then okay, but otherwise, forget about it.

For one thing, remember that most hardware has multiple parallel vertex/fragment processors, and that their numbers vary from chip to chip. Therefore, saying something like “the next time I run the vertex shader” is already rather ambiguous by itself. With that in mind, how can you expect a value to persist between one “run” and the next?

Furthermore, looking at both your example and Mr. Bill’s, I would expect the last statement to get optimized away. Persistence of values between executions is not required by the spec, so those assignments don’t contribute anything to the result of your shader. If Mr. Bill’s example produces blue pixels on your implementation, your implementation isn’t doing a good job. In fact, I would also hope to see a warning about the uninitialized variable in the log.
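To make that concrete, here is a sketch of what a spec-conforming optimizer could legally reduce mr. bill’s shader to. The trailing assignment is a dead store: nothing reads the variable afterwards, and values don’t persist between executions, so removing it cannot change any defined output.

```glsl
void main(void)
{
    vec4 garbage;                         // never initialized
    gl_FragColor = garbage * 0.5 + 0.5;   // result is UNDEFINED
    // garbage = vec4(0.0, 0.0, 1.0, 1.0);  <-- dead store, eliminated:
    // the value is never read again and does not survive to the next
    // shader execution, so the assignment contributes nothing.
}
```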

– Tom

Originally posted by Tom Nuydens:
[b]Furthermore, looking at both your example and Mr. Bill’s, I would expect the last statement to get optimized away. Persistence of values between executions is not required by the spec, so those assignments don’t contribute anything to the result of your shader. If Mr. Bill’s example produces blue pixels on your implementation, your implementation isn’t doing a good job. In fact, I would also hope to see a warning about the uninitialized variable in the log.

– Tom[/b]
mr. bill’s example is optimized quite nicely when the fragment shader runs in hardware.

There are optimization opportunities that are quite rightly missed when the fragment shader runs in software. It’s just not smart to spend a lot of effort making something that is painfully slow run slightly less painfully slow. Sometimes not doing a good job is doing a great job. (The good job needs to be done where it matters.)

BTW, in general, detecting uninitialized variables is not trivial with vectors and write masks and component selectors. There are even open issues about how pedantic you need or do not need to be.

Example (warn or not?):

void foo( inout vec4 s ) { s.y = 1.0; }
void main ( void ) {
   vec4 t;
   t.rgb = vec3( 1.0 );
   foo( t );
   gl_FragColor = vec4( t );
}

First priorities are error when you must error and don’t error when you must not error. Warnings are nice to have, but that’s a second priority.

-mr. bill

There’s no arguing about warnings getting lower priority than correctness, of course. It’s just that I like pedantic compilers :slight_smile:

Originally posted by mrbill:
mr. bill’s example is optimized quite nicely when the fragment shader runs in hardware.
That’s interesting – I just assumed that the optimization process would be pretty much the same either way. What is it that stops you from reusing the same optimizer for both HW and SW shaders?

– Tom

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.