
glslang problems



valoh
01-18-2004, 04:48 PM
Hello,

I'm having problems with even the most simple glslang shaders :(




//vertex shader:
varying vec3 Normal;

void main(void) {
    //Normal = gl_Normal;
    Normal = vec3(1.0, 1.0, 1.0);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

//fragment shader:
varying vec3 Normal;

void main(void) {
    gl_FragColor = vec4(Normal.x, Normal.y, Normal.z, 1.0);
}


with this result (http://www.krautgames.de/valoh/opengl/glslang.jpg).

If I uncomment the first line in the vertex shader so that Normal is set from gl_Normal instead of the constant, the screen goes black.
I check for OpenGL errors, read the info log and validate the program object, but there are no error messages.

What can interfere with the shader in this way? I've tested this shader with 3Dlabs' ogl2example and there were no problems, so I guess it's a problem in my framework, but I have no idea where. With fixed-pipeline shading I had no problems.

Any ideas?

Corrail
01-18-2004, 10:27 PM
How do you pass the normals to GL? With glNormal3f() or with vertex arrays? I had a similar problem using vertex arrays. Try defining your own attribute:

//vertex shader:
attribute vec3 Normal;
varying vec3 v2f_Normal;

void main(void)
{
    v2f_Normal = Normal;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

Maybe this will work.
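For completeness, the application side for such an attribute could look roughly like this (just a sketch; program_ID, normals, vertices and numVerts are placeholder names, and I assume the ARB_shader_objects/ARB_vertex_shader entry points are already loaded, e.g. via GLEW):

// bind the attribute name to a fixed index BEFORE linking the program
// (index 0 is often aliased to gl_Vertex on some drivers, so index 1 is used here)
glBindAttribLocationARB(program_ID, 1, "Normal");
glLinkProgramARB(program_ID);

// per draw call: feed the attribute from a float array (3 floats per vertex)
glEnableVertexAttribArrayARB(1);
glVertexAttribPointerARB(1, 3, GL_FLOAT, GL_FALSE, 0, normals);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glDrawArrays(GL_TRIANGLES, 0, numVerts);
glDisableVertexAttribArrayARB(1);
glDisableClientState(GL_VERTEX_ARRAY);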

valoh
01-19-2004, 02:18 AM
Originally posted by Corrail:
How do you pass the normals to GL? With glNormal3f() or with vertex arrays? I had a similar problem using vertex arrays. Try defining your own attribute:


Why are there also problems when I don't use the vertex normal but a constant value for the varying? If the vertex normals were invalid, it shouldn't matter as long as I don't use them. I tried both vertex arrays and glNormal3f; no difference, still the same problem.

I can test it with my own attribute, but I doubt that will change anything...

Relic
01-19-2004, 04:46 AM
Maybe a parser error?
Try a name other than "Normal".
Or a cast error:
try vec4(myNormal, 1.0) instead of the three '.' component references.

valoh
01-19-2004, 05:57 AM
Relic: I checked for compilation/linking errors. A different name or a different cast: no difference.
I've also tried it with my own vertex attribute. No difference.

If I use glNormal/glVertex and



//vertex shader:
varying vec3 Normal;
void main(void) {
    Normal = gl_Normal;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

//fragment shader:
varying vec3 Normal;
void main(void) {
    gl_FragColor = vec4(Normal.x, Normal.y, Normal.z, 1.0);
}


the screen goes black. With a vertex/normal array or a custom attribute I get this result (http://www.krautgames.de/valoh/opengl/glslang2.jpg) (with the same scene/models as in the first screenshot).

I have absolutely no idea why the mesh gets distorted, although in the shader gl_Position depends only on gl_Vertex and the ModelViewProjection matrix.

Btw: I'm using an ATI Radeon 9800 np, Catalyst 3.10 and GLEW 1.1.4.
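For reference, the vertex/normal array path I mean is essentially the standard setup below (just a sketch; the array names are placeholders, not the actual code from my framework):

// classic client-side arrays; the shader then reads the normal via gl_Normal
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices); // float[3 * numVerts]
glNormalPointer(GL_FLOAT, 0, normals);     // float[3 * numVerts]
glDrawArrays(GL_TRIANGLES, 0, numVerts);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);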

[This message has been edited by valoh (edited 01-19-2004).]

PanzerSchreck
01-19-2004, 06:29 AM
I'm sure that your problem lies somewhere else, because I've used normals in glslang and never had any problems.
I've just tested your shader in my sample app, and I got the following (correct-looking) shots: http://www.delphigl.de/normglslang.jpg http://www.delphigl.de/normglslang2.jpg

(Note that I replaced Normal = gl_Normal with Normal = gl_Normal * gl_NormalMatrix.)

Edit: Using a Radeon 9700 with Catalyst 3.10 on WinXP.

[This message has been edited by PanzerSchreck (edited 01-19-2004).]

valoh
01-19-2004, 06:46 AM
Originally posted by PanzerSchreck:
I'm sure that your problem lies somewhere else, cause I've used normals in glSlang and never had any problems.


Yeah, that's my suspicion too, but I'm out of ideas about where to look for the problem.

The question is: what can interfere with glslang shaders in such a way that they compile and link without problems and glUseProgramObjectARB and validation report no OpenGL errors, yet the rendering result is pure rubbish?

valoh
01-19-2004, 09:07 AM
For my shaders I use:



//construct program object
_program_ID = glCreateProgramObjectARB();
_vertex_shader_ID = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
_fragment_shader_ID = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
glAttachObjectARB(_program_ID, _vertex_shader_ID);
glAttachObjectARB(_program_ID, _fragment_shader_ID);
glShaderSourceARB(_vertex_shader_ID, 1, &v_src, 0);
glShaderSourceARB(_fragment_shader_ID, 1, &f_src, 0);

//....

//build and bind
glCompileShaderARB(_fragment_shader_ID);
glCompileShaderARB(_vertex_shader_ID);
glLinkProgramARB(_program_ID);
glUseProgramObjectARB(_program_ID);
Anything forgotten?

I had glEnable(GL_LIGHTING) in my render setup. If I remove it, the screen goes black; otherwise I get the problems described above. Shouldn't glslang shading be independent of GL_LIGHTING?
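For completeness, this is roughly how I query the compile/link status and the info log (just a sketch with error handling stripped; the handles are the ones from the snippet above):

GLint ok = 0, len = 0;
char log[1024];

// after glCompileShaderARB():
glGetObjectParameterivARB(_vertex_shader_ID, GL_OBJECT_COMPILE_STATUS_ARB, &ok);
glGetObjectParameterivARB(_vertex_shader_ID, GL_OBJECT_INFO_LOG_LENGTH_ARB, &len);
if (len > 1) {
    glGetInfoLogARB(_vertex_shader_ID, sizeof(log), 0, log);
    printf("vertex shader info log: %s\n", log);
}

// after glLinkProgramARB():
glGetObjectParameterivARB(_program_ID, GL_OBJECT_LINK_STATUS_ARB, &ok);
glValidateProgramARB(_program_ID);
glGetObjectParameterivARB(_program_ID, GL_OBJECT_VALIDATE_STATUS_ARB, &ok);
glGetInfoLogARB(_program_ID, sizeof(log), 0, log);

All of these report success in my case, which is why I'm puzzled.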

valoh
01-20-2004, 02:46 AM
Installing Catalyst 4.1 hasn't changed anything. (Well, I can't reproduce the mesh distortion right now, but the rendering is still incorrect.)

Now I have made a simple test (http://www.krautgames.de/valoh/opengl/simpletest.zip) and found something interesting:

The first frame is rendered correctly; all following frames are rendered incorrectly. I tried destroying, creating, attaching, recompiling and linking the shader every frame, but the renderings are still incorrect.
What can cause the first frame to be rendered correctly but all following ones not?
Fixed-pipeline rendering is correct, every frame.

Btw: I'm using .NET (Windows Forms) with Managed C++. Can this cause problems?

Ffelagund
01-20-2004, 03:25 AM
Hello, I compiled your shader and this is the result: http://www.typhoonlabs.com/~ffelagund/slang.jpg
As you can see, the spheres are right. Your problem lies, with 99% probability, in the normals of the mesh. Your spheres aren't distorted (geometrically); they just have colors based on wrong normals.

[This message has been edited by Ffelagund (edited 01-20-2004).]

Humus
01-20-2004, 03:29 AM
Just did a quick check of your code, and found these things:



OpenGLApp(HDC hwnd);

HDC hwnd?? Are you confusing the HWND and HDC types?




void OpenGLApp::draw() {
if(!wglMakeCurrent(_hDC,_hRC)) // Try To Activate The Rendering Context
{
throw "error";
}


Why are you calling wglMakeCurrent() every frame?

valoh
01-20-2004, 04:56 AM
Originally posted by Humus:
Just did a quick check of your code, and found these things:



OpenGLApp(HDC hwnd);

HDC hwnd?? Are you confusing the HWND and HDC types?




void OpenGLApp::draw() {
if(!wglMakeCurrent(_hDC,_hRC)) // Try To Activate The Rendering Context
{
throw "error";
}


Why are you calling wglMakeCurrent() every frame?

Ahhhh, thanks Humus, those were the right questions :)

The hwnd is just a badly named variable; I used HDC types (see Form1::Form1_Load).

BUT: the wglMakeCurrent call caused the problem. If I remove it, everything works fine.

The test was a quick hack, copy-pasted and stripped down from my framework. The framework may have several OpenGL windows, which is why there was a wglMakeCurrent call before rendering starts.

So is it a bug or a feature that several calls to wglMakeCurrent (with the same HDC/HGLRC) destroy glslang functionality?

But then you would also have problems with pbuffers. Has anybody used glslang with pbuffers?
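In case anyone else runs into this: a possible workaround for the multi-window case (untested sketch, using the same _hDC/_hRC members as in the test code) is to only switch contexts when the current one actually differs:

// skip the redundant wglMakeCurrent call if this context is already current
if (wglGetCurrentContext() != _hRC || wglGetCurrentDC() != _hDC)
{
    if (!wglMakeCurrent(_hDC, _hRC))
        throw "wglMakeCurrent failed";
}
// ... render as usual ...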

Ffelagund
01-20-2004, 06:36 AM
So that was the remaining 1% :)

Humus
01-21-2004, 06:25 AM
Originally posted by valoh:
So is it a bug or a feature that several calls to wglMakeCurrent (with the same HDC/HGLRC) destroy glslang functionality?

Sounds like a bug to me. Maybe you should email ATI about it.

valoh
01-29-2004, 11:46 AM
I have, a week ago; no reply so far. But I also think this is a driver issue.

Has anyone tried the combination of pbuffers and glslang yet?

Another shortcoming of the current (Catalyst 4.1) glslang implementation: loops are not supported. Even with a constant end condition they are not implemented :(

Does anyone have information on when to expect ATI drivers with a mature (non-beta) glslang implementation?

PanzerSchreck
01-31-2004, 05:44 AM
Has anyone tried the combination of pbuffers and glslang yet?
Yes, I have implemented some post-scene effects using pbuffers and had no problems with that.


Another shortcoming of the current (Catalyst 4.1) glslang implementation: loops are not supported. Even with a constant end condition they are not implemented :(
I think that's more of a hardware problem. On current hardware, loops need to be unrolled. But since the Radeon only supports 96 instructions in fragment programs, unrolling a loop would most likely make your shader too big. (But maybe I'm totally wrong about that ;) )

Corrail
01-31-2004, 09:51 AM
I think PanzerSchreck is right about loops being a HW limitation. But even for-loops with pre-defined loop counts are not supported:

for (int i = 0; i < 5; i++)
    //do something

And yes, ATI supports only 96 instructions for fragment programs, but I've heard that ATI's Radeon 9500+ is able to run programs with an "infinite" instruction count using floating-point buffers. But so far there's no driver support for that.

Ostsol
01-31-2004, 01:48 PM
Are you referring to the F-Buffer? If so, that's only on the Radeon 9800 and better -- but not yet supported in drivers.

Corrail
01-31-2004, 02:46 PM
Yes, but isn't the F-Buffer some kind of floating-point buffer?

PanzerSchreck
01-31-2004, 02:55 PM
No, the F-Buffer is a kind of accumulation buffer for shaders. If your shader is longer than the maximum instruction count, it is written into that buffer and then (at least I think so) executed in several passes. And AFAIK it's all done in software, not a hardware feature. And since the R350 path was enabled for the R300 in one of the older Catalyst drivers, the F-Buffer should be usable on all Radeons from the 9500 up. I don't know, though, when and how ATI will expose it in OpenGL or DX.

But it's nothing you can't do yourself. It's just a feature that makes life a bit easier.

Corrail
01-31-2004, 02:59 PM
Yes, I thought so. Because on ATI hardware you can split up a shader and use ATI_draw_buffers to implement that feature yourself, so it should be possible on the Radeon 9500+. That's why I guessed that F-Buffer just stands for floating-point buffer or something like that.

Ostsol
01-31-2004, 03:23 PM
AFAIK, it's a hardware feature. F-Buffer stands for "fragment-stream buffer". Here's the doc:
http://graphics.stanford.edu/projects/shading/pubs/hwws2001-fbuffer/

EDIT: The reason why it has to be a hardware feature is that the memory requirements for the buffer do not increase with screen resolution. For normal multipassing one would need one or more buffers of the same dimensions as the framebuffer. With the F-Buffer the intermediate data for only a single pixel (the pixel being worked on) is stored. Also, normal multipassing can only store one Vec4 per pixel. If you want to store more intermediate data, you need multiple buffers -- which gets expensive. Think of the F-Buffer as multi-passing on a per-pixel basis, rather than on a per-screen basis.

[This message has been edited by Ostsol (edited 01-31-2004).]

Korval
01-31-2004, 09:40 PM
And AFAIK it's all done in software, not a hardware feature.

The entire point of F-buffers is that they are hardware, not software. Writing code to break fragment shaders into multiple passes isn't too tough; it's the "making it fast" part that is.

PanzerSchreck
02-01-2004, 01:50 AM
Thanks for correcting me; that link really clears things up. Until now I thought that the R350 path, which has been used on R300 hardware for a while now, also included a kind of software F-Buffer, but that doesn't seem to be the case.

But to be honest: since ATI isn't exposing that hardware F-Buffer, it's kind of pointless to dig too deep into it.

V-man
02-02-2004, 11:15 AM
Which means that the R300 and R350 are not really different.

Ostsol
02-02-2004, 01:42 PM
Without the F-Buffer exposed in drivers, they're exactly the same in terms of fragment-shading capabilities. There are other differences, though, such as the R350's ability to use some z-buffer optimizations that break on the R300 when the stencil buffer is in use. There are also apparently some other tweaks to the memory controller.