Glslang problems

Hello,

I’m having problems with the simplest glslang shaders:

//vertex shader:
varying vec3 Normal;

void main(void) {
    //Normal = gl_Normal;
    Normal = vec3(1.0, 1.0, 1.0);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

//fragment shader:
varying vec3 Normal;

void main(void) {
    gl_FragColor = vec4(Normal.x, Normal.y, Normal.z, 1.0);
}

with this result.

If I uncomment the first line in the vertex shader to change Normal to gl_Normal, the screen goes black.
I check for OpenGL errors, check the info log and validate the program object, but there are no error messages.

What can interfere with the shader in this way? I’ve tested this shader with the 3Dlabs ogl2example and there were no problems, so I guess it’s a problem in my framework, but I have no idea where. With fixed-pipeline shading I had no problems.

Any ideas?

How do you pass the normals to GL: with glNormal3f() or with vertex arrays? I had a similar problem using vertex arrays. Try to define your own attribute:

//vertex shader:
attribute vec3 Normal;
varying vec3 v2f_Normal;

void main(void)
{
    v2f_Normal = Normal;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
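
To feed that attribute from the application side, you bind its name to a location before linking and point that location at your normal data. A rough sketch with the ARB entry points (program and normals are placeholders for your own handle and data):

//application side:
glBindAttribLocationARB(program, 1, "Normal"); //location 0 may alias gl_Vertex
glLinkProgramARB(program);

//per frame:
glEnableVertexAttribArrayARB(1);
glVertexAttribPointerARB(1, 3, GL_FLOAT, GL_FALSE, 0, normals);
//...draw as usual...
glDisableVertexAttribArrayARB(1);

(You can also query the location after linking with glGetAttribLocationARB instead of binding it yourself.)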

Maybe this will work.

Originally posted by Corrail:
How do you pass the normals to GL: with glNormal3f() or with vertex arrays? I had a similar problem using vertex arrays. Try to define your own attribute:

Why are there also problems when I don’t use the vertex normal but a constant value for the varying? If the vertex normals were invalid, it shouldn’t matter as long as I don’t use them. I tried both vertex arrays and glNormal3f; no difference, still the same problem.

I can test it with my own attribute, but I doubt this will change anything…

Maybe a parser error?
Try a name other than “Normal”.
Or a cast error.
Try vec4(myNormal, 1.0) instead of the three ‘.’ references.
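
That is, something like:

//fragment shader:
varying vec3 myNormal;

void main(void) {
    gl_FragColor = vec4(myNormal, 1.0); //one constructor instead of three '.' selects
}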

Relic: I checked for compilation/linking errors. A different name or a different cast: no difference.
I’ve also checked with my own vertex attribute. No difference.

If I use glNormal/glVertex and

//vertex shader:
varying vec3 Normal;
void main(void) {
    Normal = gl_Normal;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

//fragment shader:
varying vec3 Normal;
void main(void) {
    gl_FragColor = vec4(Normal.x, Normal.y, Normal.z, 1.0);
}

the screen goes black. With vertex/normal arrays or the attribute I get this result (with the same scene/models as in the first screenshot).

I have absolutely no idea why the mesh gets distorted, although in the shader gl_Position depends only on gl_Vertex and the ModelViewProjection matrix.

btw: I’m using an ATI Radeon 9800 np, Catalyst 3.10 and GLEW 1.1.4.

I’m sure that your problem lies somewhere else, because I’ve used normals in glSlang and never had any problems.
I’ve just tested your shader in my sample app, and I got the following (correct-looking) shots: http://www.delphigl.de/normglslang.jpg http://www.delphigl.de/normglslang2.jpg

(note that I replaced Normal = gl_Normal with Normal = gl_Normal * gl_NormalMatrix)

Edit: Using a Radeon 9700 with Catalyst 3.10 on WinXP.

Originally posted by PanzerSchreck:
I’m sure that your problem lies somewhere else, because I’ve used normals in glSlang and never had any problems.

Yeah, that’s also my suspicion, but I’m out of ideas about where to look for the problem.

The question is: what can interfere with glslang shaders in such a way that they compile and link without problems, glUseProgramObjectARB raises no OpenGL errors and validation succeeds, yet the rendering result is rubbish?

For my shaders I use:

//construct program
_program_ID = glCreateProgramObjectARB();
_vertex_shader_ID = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
_fragment_shader_ID = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
glAttachObjectARB(_program_ID,_vertex_shader_ID);
glAttachObjectARB(_program_ID,_fragment_shader_ID);
glShaderSourceARB(_vertex_shader_ID,1,&v_src,0);
glShaderSourceARB(_fragment_shader_ID,1,&f_src,0);

//…

//build and bind
glCompileShaderARB(_fragment_shader_ID);
glCompileShaderARB(_vertex_shader_ID);
glLinkProgramARB(_program_ID);
glUseProgramObjectARB(_program_ID);
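
The error checking I mentioned is roughly this (status queries plus info log):

//check status and info log
GLint status;
glGetObjectParameterivARB(_vertex_shader_ID, GL_OBJECT_COMPILE_STATUS_ARB, &status);
glGetObjectParameterivARB(_program_ID, GL_OBJECT_LINK_STATUS_ARB, &status);

char log[4096];
GLsizei length;
glGetInfoLogARB(_program_ID, sizeof(log), &length, log);

glValidateProgramARB(_program_ID);
glGetObjectParameterivARB(_program_ID, GL_OBJECT_VALIDATE_STATUS_ARB, &status);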

Anything forgotten?

I had glEnable(GL_LIGHTING) in my render setup. If I remove it, the screen goes black; otherwise I get the problems described. Shouldn’t shading with glslang be independent of GL_LIGHTING?

Installing Catalyst 4.1 changed nothing. (Well, I can’t reproduce the mesh distortion right now, but the rendering is still incorrect.)

Now I have made a simple test and found something interesting:

The first frame is rendered correctly; all following frames are rendered incorrectly. I tried destroying, recreating, re-attaching, recompiling and relinking the shader during every frame, but the renderings are still incorrect.
What can cause the first frame to be rendered correctly but all following frames not?
Fixed-pipeline rendering is correct, every frame.

btw: I’m using .NET (Windows Forms) with Managed C++. Could this cause problems?

Hello, I compiled your shader and this is the result: http://www.typhoonlabs.com/~ffelagund/slang.jpg
As you can see, the spheres are right; your problem is, with 99% probability, in the normals of the mesh. Your spheres aren’t distorted (geometrically), they only have colors based on wrong normals.

Just did a quick check of your code, and found these things:

OpenGLApp(HDC hwnd);

HDC hwnd?? Are you confusing the HWND and HDC types?

void OpenGLApp::draw() {
    if (!wglMakeCurrent(_hDC, _hRC)) // Try To Activate The Rendering Context
    {
        throw "error";
    }

Why are you calling wglMakeCurrent() every frame?

Originally posted by Humus:
Just did a quick check of your code, and found these things:

OpenGLApp(HDC hwnd);

HDC hwnd?? Are you confusing the HWND and HDC types?

void OpenGLApp::draw() {
    if (!wglMakeCurrent(_hDC, _hRC)) // Try To Activate The Rendering Context
    {
        throw "error";
    }

Why are you calling wglMakeCurrent() every frame?

Ahhhh, thanks Humus, those were the right questions!

The hwnd is only a badly named variable; I do use the HDC type (see Form1::Form1_Load).

BUT: the wglMakeCurrent caused the problem. If I remove it, everything works fine.

The test was a quick hack based on copy-paste-remove from my framework. There it is possible to have several OpenGL windows, which is why I had a wglMakeCurrent before starting to render.
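
If the redundant call really is what triggers it, a workaround that keeps the multi-window case working might be to skip wglMakeCurrent when the right context is already current (untested sketch, using the _hDC/_hRC members from above):

//only switch if this window's context isn't current already
if (wglGetCurrentContext() != _hRC || wglGetCurrentDC() != _hDC)
{
    if (!wglMakeCurrent(_hDC, _hRC))
        throw "error";
}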

So is it a bug or a feature that several calls to wglMakeCurrent (with the same HDC/HGLRC) destroy glslang functionality?

But then you would also have problems with pbuffers. Has anybody used glslang with pbuffers?

There was this 1%

Originally posted by valoh:
So is it a bug or a feature that several calls to wglMakeCurrent (with the same HDC/HGLRC) destroy glslang functionality?

Sounds like a bug to me. Maybe you should email ATI about it.

I have, a week ago; no reply so far. But I also think this is a driver issue.

Has anyone tried the combination of pbuffers and glslang yet?

Another shortcoming of the current (Catalyst 4.1) glslang implementation: loops are not supported. Even with a constant termination condition they are not implemented.

Does anyone have info on when to expect ATI drivers with a mature (non-beta) glslang implementation?

Has anyone tried the combination of pbuffers and glslang yet?

Yes, I have implemented some post-scene effects using pbuffers and had no problems with that.
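
The setup is just the usual WGL pbuffer dance; a condensed sketch (drawScene, drawFullscreenQuad, sceneTex and postEffectProgram are placeholders, and I rely on wglShareLists to make the texture and shader objects visible in both contexts, which I haven't verified for shader objects):

//one-time setup: a pbuffer sharing objects with the main context
int attribs[] = { WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
                  WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
                  WGL_COLOR_BITS_ARB, 24, WGL_DEPTH_BITS_ARB, 24, 0 };
int format; UINT count;
wglChoosePixelFormatARB(hDC, attribs, 0, 1, &format, &count);

int pbAttribs[] = { 0 };
HPBUFFERARB pbuffer = wglCreatePbufferARB(hDC, format, 512, 512, pbAttribs);
HDC pbDC = wglGetPbufferDCARB(pbuffer);
HGLRC pbRC = wglCreateContext(pbDC);
wglShareLists(hRC, pbRC);

//per frame: render the scene into the pbuffer, grab it, post-process
wglMakeCurrent(pbDC, pbRC);
drawScene();
glBindTexture(GL_TEXTURE_2D, sceneTex);   //sceneTex created with glTexImage2D beforehand
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);

wglMakeCurrent(hDC, hRC);
glUseProgramObjectARB(postEffectProgram); //the glslang post-effect
drawFullscreenQuad();

Note that this makes a context current again every frame, so at least here the wglMakeCurrent/glslang combination doesn’t break anything.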

Another shortcoming of the current (Catalyst 4.1) glslang implementation: loops are not supported. Even with a constant termination condition they are not implemented.

I think that’s more of a hardware problem. On current hardware you need to unroll loops, but as the Radeon only supports 96 instructions in fragment programs, unrolling a loop would most likely make your shader too big. (But maybe I’m totally wrong about that.)

I think PanzerSchreck is right about loops being a hardware problem. But even for-loops with pre-defined loop counts are not supported:

for (int i = 0; i < 5; i++)
    //do something
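
Until the compiler can unroll that itself, the workaround is to unroll by hand:

//do something (i == 0)
//do something (i == 1)
//do something (i == 2)
//do something (i == 3)
//do something (i == 4)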

And yes, ATI supports only 96 instructions for fragment programs, but I’ve heard that ATI’s Radeon 9500 and up are supposed to be able to run programs with an “infinite” instruction count using floating-point buffers. But until now there’s no support for that.

Are you referring to the F-Buffer? If so, that’s only on the Radeon 9800 and better, but it’s not yet supported in drivers.

Yes, but isn’t the F-Buffer some kind of floating-point buffer?

No, the F-Buffer is a kind of accumulation buffer for shaders. If your shader is longer than the maximum instruction count, it is written into that buffer and then (at least I think so) executed in several passes. And AFAIK it’s all done in software, not as a hardware feature. And since the R350 path was enabled for the R300 in one of the older Catalyst drivers, the F-Buffer should be usable on all Radeons from the 9500 up. I don’t know, though, when and how ATI will expose it in OpenGL or DX.

But it’s nothing you can’t do yourself. It’s just a feature that makes life a bit easier.
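
For example, a too-long shader can be split into two passes by hand, with the intermediate term parked in a texture (rough sketch; the [0,1] packing and the sampler name are made up for illustration):

//pass 1 fragment shader: write the intermediate term, then copy the
//framebuffer into a texture (e.g. with glCopyTexSubImage2D)
varying vec3 Normal;
void main(void) {
    gl_FragColor = vec4(0.5 * normalize(Normal) + 0.5, 1.0); //packed into [0,1]
}

//pass 2 fragment shader: read the term back and continue the computation
//(assumes the vertex shader writes gl_TexCoord[0])
uniform sampler2D intermediate;
void main(void) {
    vec3 n = 2.0 * texture2D(intermediate, gl_TexCoord[0].xy).rgb - 1.0;
    gl_FragColor = vec4(n, 1.0); //...the rest of the long shader goes here
}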