Vertex and fragment shaders in the same file

Hello
I have a suggestion regarding GLSL shaders that, from my point of view, would speed up, facilitate, and improve GLSL shader development:
have a single common shader file instead of two separate ones (one for the vertex shader and another for the fragment/pixel shader), as it is now.

Then, instead of the one obligatory main() function, this common shader file would have two obligatory functions: vmain(), executing the vertex program, and fmain(), executing the fragment program.

Then you would not have to switch between two files to answer questions like “Did I declare that varying variable in the other file?”, “Was it of the proper type?”, and so on.

This would make it easier to find errors, (slightly) reduce total code size, and reduce the number of files in the development environment’s file tree, so it would be quicker to spot the file you want to work in next.

This might also be more natural with the coming unified-shader GPUs.

Could you please consider this, unless there is some reason I don’t know about that makes it an unfeasible idea.

regards

For example, unified shader vs non-unified
http://www.theinquirer.net/default.aspx?article=32769

So unified shaders are a hardware matter; it doesn’t mean putting your VS and FS in the same file.

You can do it yourself.

/* common declarations */

#ifdef COMPILING_VS

/* vertex shader code */

#elif defined(COMPILING_FS)

/* fragment shader code */

#endif

Then, if you load it from a file into a “myshader” buffer, you can do the following:

const char *myshader;    /* the code above, loaded from file */
const char *vsdefines = "#define COMPILING_VS\n";
const char *fsdefines = "#define COMPILING_FS\n";
const char *vshader[2] = { vsdefines, myshader };
const char *fshader[2] = { fsdefines, myshader };

/* pass vshader and fshader to glShaderSource(shader, 2, strings, NULL) and compile */

Cheers

The OpenGL spec has nothing to do with how you store your shaders in files. It just wants a string from you.

For example, I store my shaders embedded in XML files. My shaders happen to be one shader per file, but nothing prevents me from storing multiple shaders in a single file, or even embedding the shader source directly in the material definition.

How the shaders are stored is entirely up to you, the GL spec does not mandate a file format. Just write a loading function that can load multiple shaders from one file.

sgrsd

Here are some naive observations…

I think a multi-pass kind of deal like CgFX would be cool. So, instead of ping-ponging between different FBOs and/or buffers, for instance, you would just use the multipass functionality in the GLSL script, and OpenGL handles it.

Maybe if OpenGL came with some default scripts, every user would not have to rewrite OpenGL all over again! You know, rebuilding the wheel over and over again – good for college kids, I suppose. Thank god for ShaderGen, RenderMonkey, Shader Designer, etc. – oops, none of it runs on a Mac. Before, in OpenGL 1.x, I’d have to write a few lines to have a light … now I have to write an entire program and do math, etc. :wink:

There are scene graphs and other libraries for that kind of thing.

If you don’t want to reinvent the wheel, use some library. But what can be done on top of OpenGL should not go into OpenGL core. Especially when it’s something as complex as an FX framework. The drivers have enough bugs as it is now :wink:

now I have to write an entire program and do math etc.
You have to write a program AND do math? Now that’s hard. Are you sure you’re in the right profession? :stuck_out_tongue:

Thank you guys. After realizing that glShaderSource(…) seems to ‘clear’ the code-string pointer arguments after each call, I got it to work.

Originally posted by nib:
Maybe if OpenGL came with some default scripts, every user would not have to rewrite OpenGL all over again! You know, rebuilding the wheel over and over again – good for college kids, I suppose. Thank god for ShaderGen, RenderMonkey, Shader Designer, etc. – oops, none of it runs on a Mac. Before, in OpenGL 1.x, I’d have to write a few lines to have a light … now I have to write an entire program and do math, etc. :wink:
I’m behind you 100%. Shaders are a great thing but IMO harken back to the RISC vs. CISC debates of the ’80s and early ’90s. Shaders let you do things really fast, but to do simple things (e.g., OpenGL vertex shading) you have to code up the entire lighting model. I think the ARB should publish/post/support/maintain open source “stock” shaders that implement the current fixed-function pipeline (and application code fragments for loading and communicating with them).
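For illustration, a “stock” pair for a single directional diffuse light might look roughly like the sketch below, written against the GLSL 1.10 built-in state and reusing the #ifdef convention from earlier in the thread. This is only a sketch of the idea, not an official reference implementation (specular, attenuation, multiple lights, etc. are all omitted):

```glsl
#ifdef COMPILING_VS
varying vec4 color;
void main()
{
    /* Eye-space normal and directional light vector. */
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vec3 l = normalize(vec3(gl_LightSource[0].position));
    float diffuse = max(dot(n, l), 0.0);

    /* Simplified fixed-function lighting: emission + ambient + diffuse. */
    color = gl_FrontMaterial.emission
          + gl_FrontMaterial.ambient * gl_LightSource[0].ambient
          + gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse * diffuse;

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
#else
varying vec4 color;
void main()
{
    gl_FragColor = color;
}
#endif
```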

This is insane.
I think what I’ve read over the last few months on these forums with regards the next major GL release has just gone to show how far OpenGL drifted from its original purpose - to abstract a rendering pipeline. A pipeline that has gone from Submit-Transform-Light-Rasterize, to Submit-UserGeometryProgram-UserVertexProgram-UserFragmentProgram-Blend.

OpenGL is not a general purpose graphics library (even though the G and L may lead you to think so…they should change the name). It has to be about one thing - allowing you to communicate with a GPU, any GPU, in a unified conformant way. It’s the first layer exposed to the user - why do you think stuff like this belongs in your lowest level of communication?
Anything else should be a library that uses OpenGL to perform what you want.

Ooooo, it makes me mad.

Originally posted by knackered:
This is insane.
I think what I’ve read over the last few months on these forums with regards the next major GL release has just gone to show how far OpenGL drifted from its original purpose - to abstract a rendering pipeline. A pipeline that has gone from Submit-Transform-Light-Rasterize, to Submit-UserGeometryProgram-UserVertexProgram-UserFragmentProgram-Blend.

OpenGL is not a general purpose graphics library (even though the G and L may lead you to think so…they should change the name). It has to be about one thing - allowing you to communicate with a GPU, any GPU, in a unified conformant way. It’s the first layer exposed to the user - why do you think stuff like this belongs in your lowest level of communication?
Anything else should be a library that uses OpenGL to perform what you want.

Ooooo, it makes me mad.
There is nothing (I repeat, NOTHING) insane about wanting the ARB to provide reference shaders that can be used to move legacy applications forward to the new API.

I agree with you on one point though – OpenGL is no longer what it used to be and probably should be renamed out of respect for what it is not – an “Open Graphics Library”. Maybe OpenGC for Open GPU Compiler would be more appropriate. But please stop telling people to use a library that is most certainly NOT Open to replace functionality that was specifically part of the original design of OpenGL.

OpenGL is not a general purpose graphics library (even though the G and L may lead you to think so…they should change the name). It has to be about one thing - allowing you to communicate with a GPU, any GPU, in a unified conformant way.
I disagree. Not on the general thrust of, “Keep stuff that can be layered out of GL.” I’m with you there.

What I disagree with you on is this idea that GL should not be a graphics library, but a GPU interface library. It should not.

Its purpose is rendering; that is what it was created to do. If you want a GPU library, try AMD/ATi’s CTM or nVidia’s CUDA, or petition the Khronos Group to start a WG to make some cross-platform GPGPU API. OpenGL existed before programmable GPUs and it should not change its fundamental purpose just because the hardware it’s using happens to be able to be used for other purposes too.

BTW, geometry programs go between vertex and fragment programs. And rasterization still happens between geometry programs and fragment programs.

Korval, I meant allowing you to communicate rendering commands with any GPU - whether this in itself is layered on some kind of GPGPU API is neither here nor there.
tranders, I’m asking you to focus your thoughts not on supporting legacy GL functionality but on thinking about the bare minimum you need to draw stuff with - then accept that the ARB will layer higher-level functionality more analogous to GL2 on top of this low-level API. Then we all have a choice. I should not have to pay the price in performance and stability for someone wanting to quickly get something up and running using the level of flexibility present in the current OpenGL. I would rather be able to control my resources at the app level and use well defined clear interfaces to synchronise with the rendering pipeline.

Originally posted by knackered:
This is insane.
I think what I’ve read over the last few months on these forums with regards the next major GL release has just gone to show how far OpenGL drifted from its original purpose - to abstract a rendering pipeline. A pipeline that has gone from Submit-Transform-Light-Rasterize, to Submit-UserGeometryProgram-UserVertexProgram-UserFragmentProgram-Blend.

OpenGL’s purpose is to abstract the rendering hardware, and modern rendering hardware is fully programmable (aside from a few remaining areas like primitive assembly and framebuffer blending). When you use the fixed-function legacy stuff in OpenGL 2.1 today, the driver is synthesizing shaders on the fly in response to state changes.

So having reference / sample shaders is definitely useful, but something that belongs in a layered utility library in the legacy-free Longs Peak design.

Originally posted by knackered:
tranders, I’m asking you to focus your thoughts not on supporting legacy GL functionality but on thinking about the bare minimum you need to draw stuff with - then accept that the ARB will layer higher-level functionality more analogous to GL2 on top of this low-level API. Then we all have a choice. I should not have to pay the price in performance and stability for someone wanting to quickly get something up and running using the level of flexibility present in the current OpenGL. I would rather be able to control my resources at the app level and use well defined clear interfaces to synchronise with the rendering pipeline.
I have not said in this thread anything about penalizing the core with support of the fixed-function pipeline - please don’t infer that from my posts on other topics.

What I HAVE said here is that the ARB should provide reference shaders and application source that implements the fixed-function algorithms so every developer doesn’t have to reinvent the wheel just to draw a single pixel on the screen.

BTW, I’m not trying to quickly get something up and running – I’m actually trying to KEEP something running that has been running reliably for over 12 years. I have to focus on replicating my current algorithms and that goes far beyond the “bare minimum” of drawing “stuff”.

One minute you’re demanding full state display lists remain in the core, the next you’re shouting at me for assuming you’ve got your priorities wrong…funny old world.
I think you know we all agree there should be a layered mode emulating all that old gubbins. Maybe the OP’s suggestion was meant for the next release of the GL utility libraries, but I’m always assuming people are talking about core OpenGL when they post in “suggestions for the next release of opengl”.
More forums will be needed.