Generally + lights

Hi,
if I use a shader which calculates lighting:

  1. Can I use more than 8 light sources (the OpenGL limit)?
  2. Do I have to write my own fog/texture/… code to work with that? I heard that you have to rewrite ALL of it…
  3. How can I delete a shader that is compiled and used (without deleting everything)?
  4. Could you give me a sample of a lighting shader which can be configured at runtime?

Thanks in advance for your help!

http://www.lighthouse3d.com/opengl/glsl/
:slight_smile:

  1. yeah, you can use 10,000 lights if you want
  2. “rewrite ALL” - yes, but it’s easy and compact.
  3. glDeleteProgram()
  4. you just upload different values via glUniformXXX() (see the sketch below)
  1. If you plan to use the fixed-function pipeline (FFP), then yes, AFAIK you don’t get more than 8 light slots.
  2. The shader is responsible for producing the final pixel color. You can use the data passed to the FFP, but the calculations are completely up to you.
  3. Deleting a shader object that is in use by a program doesn’t affect the program. It isn’t actually deleted from the GL context until no program uses it.
  4. http://www.lighthouse3d.com/opengl/glsl/index.php?pointlight

BTW,

  1. I’d not recommend using the FFP. OpenGL-3.2 + GLSL-1.5 is the right way to go.
  2. I apply the light contribution separately for each light (Unified Lighting and Shadowing), so there is no limit on the number of lights and the shaders become simpler.
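
Regarding point 4, here is a minimal host-side sketch of configuring such a lighting shader at runtime. The uniform names, the light count, and the use of GLEW are my own assumptions, not from any particular tutorial:

// Sketch: reconfigure a lighting shader at runtime purely via uniforms.
// Assumes the GLSL program declares something like:
//   uniform int  numLights;
//   uniform vec3 lightPos[8];
//   uniform vec3 lightColor[8];
#include <GL/glew.h>
#include <cstdio>

void setLight(GLuint program, int index, const float pos[3], const float color[3])
{
    char name[32];
    std::snprintf(name, sizeof(name), "lightPos[%d]", index);
    glUniform3fv(glGetUniformLocation(program, name), 1, pos);
    std::snprintf(name, sizeof(name), "lightColor[%d]", index);
    glUniform3fv(glGetUniformLocation(program, name), 1, color);
}

void configureLighting(GLuint program, int activeLights)
{
    glUseProgram(program);   // uniforms are set on the currently bound program
    glUniform1i(glGetUniformLocation(program, "numLights"), activeLights);
    for (int i = 0; i < activeLights; ++i)
    {
        const float pos[3]   = { 2.0f * i, 5.0f, 0.0f };   // whatever your scene needs
        const float color[3] = { 1.0f, 1.0f, 1.0f };
        setLight(program, i, pos, color);
    }
}

Change the values between frames (or between draw calls) and the same shader produces different lighting, which is all "configurable at runtime" really means here.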

So how can I get all the code for fog/texture…?

And would it be possible to load heightmaps (with textures and collisions) with shaders ?
And volumetric shadows ?
Or deform objects ?
Particles ?

Just want to know it ^^

Yes, yes, yes. There’s no reason to stay in FFP land, so start studying the new pipeline already :slight_smile: .

Are anti-aliasing, anisotropic filtering, and linear/bilinear/trilinear filtering also part of the FFP?

AND why should I use different shaders, if I can put everything into ONE shader file? There can only be one “main”, or not?

Do I have to implement:
* Scissor test
* Alpha test
* Stencil test
* Depth test

??

And do you know another site besides lighthouse3d.com? Many examples just don’t work like they should or throw exceptions…

I would like a tutorial on how to implement textures/lighting ^^

Look in the Red Book. It has the formulas. Also see ShaderGen. It should be able to spit out shaders that give you the code verbatim.

And would it be possible to load heightmaps… with shaders ?

Sure. Why not?

And volumetric shadows ?

Yep.

Or deform objects ?

Yep.

Particles ?

Easily. Many different ways.

They’re available through either the FFP or a custom shader pipeline. All of this is built-in, behind-the-scenes functionality. The texture filtering is dedicated hardware on the GPU.

AND why should I use different shaders, if I can put everything into ONE shader file?

Because the more complex the shader, the more branches are needed in the shader logic and the more inputs and values need to be passed between stages. All of these make shaders run slower.

There can only be one “main”, or not?

Right.
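
To make that concrete: you keep one small program per effect (each linked from its own vertex and fragment shader, each with its own main) and switch between them per pass or per object. A rough sketch, with purely illustrative program names:

// Sketch: several small programs instead of one giant branchy shader.
// Each program was compiled and linked elsewhere from its own shaders.
#include <GL/glew.h>

extern GLuint shadowPassProgram;    // depth-only shadow pass
extern GLuint litTexturedProgram;   // lighting + texturing
extern GLuint particleProgram;      // particles

void drawFrame()
{
    glUseProgram(shadowPassProgram);    // pass 1: render occluders into the shadow map
    // ... draw occluders ...

    glUseProgram(litTexturedProgram);   // pass 2: the normal scene
    // ... draw scene geometry ...

    glUseProgram(particleProgram);      // pass 3: particles
    // ... draw particle batch ...

    glUseProgram(0);                    // back to the FFP (compatibility profile only)
}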

Do I have to implement:
* Scissor test
* Alpha test
* Stencil test
* Depth test
??

Nope.
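
Those tests are still plain GL state that you switch on; they never moved into GLSL. A small sketch, assuming a compatibility context (in core GL 3.2 the alpha test is gone and you would use discard in the fragment shader instead):

// Sketch: the per-fragment tests are fixed hardware state, not shader code.
#include <GL/gl.h>

void enableFragmentTests()
{
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);

    glEnable(GL_SCISSOR_TEST);
    glScissor(0, 0, 256, 256);          // x, y, width, height in window coordinates

    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_EQUAL, 1, 0xFF);

    glEnable(GL_ALPHA_TEST);            // compatibility profile only
    glAlphaFunc(GL_GREATER, 0.5f);      // core GL: "if (alpha < 0.5) discard;" in the shader
}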

I would like a tutorial on how to implement textures/lighting

Google it. “OpenGL lighting tutorial” “OpenGL texturing tutorial”:

http://www.lmgtfy.com/?q=OpenGL+lighting+tutorial

I mean, if I have to re-implement anisotropic and other texture filtering or anti-aliasing…

No, filtering is still done by silicon.
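
You only pick the filtering mode through texture parameters; the sampling itself happens in the texture units. For example (the anisotropy part assumes the EXT_texture_filter_anisotropic extension is present):

// Sketch: filtering is configured, not implemented; the GPU does the sampling.
#include <GL/glew.h>

void setupFiltering(GLuint texture)
{
    glBindTexture(GL_TEXTURE_2D, texture);

    // trilinear filtering (the texture needs mipmaps, e.g. via glGenerateMipmap)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // anisotropic filtering, if EXT_texture_filter_anisotropic is available
    GLfloat maxAniso = 1.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);
}

Anti-aliasing (multisampling) is likewise requested when you create the framebuffer, not written in a shader.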

very Good ^^

But why does it crash if I call “glClear()” or “glBegin()” ?

Code:

	// TEXTURE-UNIT #0		
	glActiveTextureARB(GL_TEXTURE0_ARB);
	glEnable(GL_TEXTURE_2D);
	glBindTexture(GL_TEXTURE_2D, bump[filter]);
	glTexEnvf (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
	glTexEnvf (GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT, GL_REPLACE);	
	// TEXTURE-UNIT #1:
	glActiveTextureARB(GL_TEXTURE1_ARB);
	glEnable(GL_TEXTURE_2D);
	glBindTexture(GL_TEXTURE_2D, invbump[filter]);
	glTexEnvf (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
	glTexEnvf (GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT, GL_ADD);
	// General Switches:
	glDisable(GL_BLEND);
	glDisable(GL_LIGHTING);	
	glBegin(GL_QUADS);

Fragment shader:

uniform sampler2D Texture0;
uniform int ActiveLights;
 
varying vec3 position;
varying vec3 normal; 

varying vec3 lightDir;

 
void main(void) 
{
  vec3 lightDir;
  float  attenFactor;
  vec3 eyeDir 			= normalize(-position); // camera is at (0,0,0) in ModelView space
  vec4 lightAmbientDiffuse 	= vec4(0.0,0.0,0.0,0.0);
  vec4 lightSpecular 		= vec4(0.0,0.0,0.0,0.0); 	
 
  // iterate all lights
  for (int i=0; i<ActiveLights; ++i)
  {
	// attenuation and light direction
	if (gl_LightSource[i].position.w != 0.0)
	{
		// positional light source
		float dist	= distance(gl_LightSource[i].position.xyz, position);
		attenFactor	= 1.0/(	gl_LightSource[i].constantAttenuation + 
					gl_LightSource[i].linearAttenuation * dist +
					gl_LightSource[i].quadraticAttenuation * dist * dist );
		lightDir	= normalize(gl_LightSource[i].position.xyz - position);
	}		
	else 
	{			
		// directional light source			
		attenFactor	= 1.0;			
		lightDir	= gl_LightSource[i].position.xyz;		
	} 		
	// ambient + diffuse		
	lightAmbientDiffuse 	+= gl_FrontLightProduct[i].ambient*attenFactor;		
	lightAmbientDiffuse 	+= gl_FrontLightProduct[i].diffuse * max(dot(normal, lightDir), 0.0) * attenFactor; 
	// specular		
	vec3 r 		= normalize(reflect(-lightDir, normal));
	lightSpecular 	+= gl_FrontLightProduct[i].specular * 
			      pow(max(dot(r, eyeDir), 0.0), gl_FrontMaterial.shininess) *
			      attenFactor;	
  } 	
  // compute final color	
  vec4 texColor = gl_Color * texture2D(Texture0, gl_TexCoord[0].xy);	
  gl_FragColor 	= texColor * (gl_FrontLightModelProduct.sceneColor + lightAmbientDiffuse) + lightSpecular;
 
  float fog	= (gl_Fog.end - gl_FogFragCoord) * gl_Fog.scale;	// compute fog intensity
  fog		= clamp(fog, 0.0, 1.0);  				// clamp to [0,1]
  gl_FragColor 	= mix(gl_Fog.color, gl_FragColor, fog);  		// blend in the fog color
}

Vertex shader:

varying vec3 position;
varying vec3 normal; 
varying vec3 lightDir;

void main()
{

	normal = normalize(gl_NormalMatrix * gl_Normal);
	lightDir = normalize(vec3(gl_LightSource[0].position));
	
	gl_Position		= gl_ModelViewProjectionMatrix * gl_Vertex;
	
	  gl_FrontColor		= gl_Color;
	gl_TexCoord[0]	= gl_MultiTexCoord0; 
  
	gl_Position = ftransform();
} 

And how can I move a texture from OpenGL to a shader?

I see calls to glTexEnvf there, and give up replying with hints.
All you’ve been asking has been answered in specs, tutorials, and forum posts (findable with simple searches).

and give up replying with hints.
:confused:

But it doesn’t work without “glTexEnvf” either :sorrow:

And HOW can I combine different shaders? Does each of them have a “main”?

Check your code. Check for GL errors like this.
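
For instance, a minimal check you can drop in after each suspect call (the checkGLError name is just illustrative):

// Sketch: drain and print the GL error queue while debugging.
#include <GL/gl.h>
#include <cstdio>

void checkGLError(const char* where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)     // there can be several queued errors
        std::fprintf(stderr, "GL error 0x%04X at %s\n", err, where);
}

// usage:
//   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//   checkGLError("glClear");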

If you don’t find any, run a memory debugging tool.

You seem to be learning shaders, yet you have a bunch of GL stuff in your sample code that gets disabled when you plug in shaders. I’d suggest, for learning purposes, you start with a small, fresh, working program, and add things slowly to it, making iterative changes. Rather than toss a bunch of stuff into an existing program and then wonder why it doesn’t work.

And surely you do know that you can’t just throw a glBegin in the program without a glEnd after it, right?

And how can I move a texture from OpenGL to a shader?

Bind to a texture unit just like normal (you’ve got the concept), and then connect the texture unit to a sampler uniform in the shader by setting the texture unit number on the uniform using glUniform1i.
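
Roughly like this (using the Texture0 sampler name from your fragment shader; the rest is illustrative):

// Sketch: bind the texture to unit 0, then point the sampler uniform at that unit.
#include <GL/glew.h>

void bindTextureToShader(GLuint program, GLuint texture)
{
    glActiveTexture(GL_TEXTURE0);               // select texture unit 0
    glBindTexture(GL_TEXTURE_2D, texture);      // bind your texture object to it

    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "Texture0");
    glUniform1i(loc, 0);                        // 0 = the unit number, NOT the texture id
}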

Run through a few GLSL tutorials on the net, and make it a point to understand every line. If you don’t, ask those specific questions here. You’ll get a lot more help than if you just post a pile of code and ask everyone to debug it for you.

And surely you do know that you can't just throw a glBegin in the program without a glEnd after it, right?

Sure. But I didn’t post the code that wasn’t reached…

I’m now working with a textured triangle to test textures…

And now my shaders work (without nvemulate :mad: :mad: :mad: )

But I could also use GL_TEXTURE0/GL_TEXTURE1/…, couldn’t I?

And would it be possible to load heightmaps (with textures and collisions) with shaders ?

And you replied yes. I have several questions:

The only way this could be useful, as far as I can see, would be to load the height field into a geometry shader, which would then generate tris, possibly taking LOD into account?

How would collision data help? I’ve read that you can detect collisions with a fragment shader?

Last, if you do terrain with height fields, how can you implement features like caves sticking into the terrain?

How would collision data help? I’ve read that you can detect collisions with a fragment shader?

Shaders are programming languages. You can do whatever you want, so long as you can fit it into the input/output/uniform data model.

Last, if you do terrain with height fields, how can you implement features like caves sticking into the terrain?

That’s for you to decide.

If I want to use a shader for different lights, how could I do this?
Do I have to set the uniforms, draw model 1, then set the uniforms for the new position, draw model 2, …
or how can I do this?

Your question already contains the answer.
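
In other words, something along these lines (the uniform name and the Model type are only illustrative):

// Sketch: update the uniforms, draw one model, update again, draw the next.
#include <GL/glew.h>
#include <vector>

struct Model
{
    float lightPos[3];          // per-model light position, as in the question
    void draw() const;          // issues the actual draw calls
};

void drawModels(GLuint program, const std::vector<Model>& models)
{
    glUseProgram(program);
    GLint lightLoc = glGetUniformLocation(program, "lightPos");

    for (const Model& m : models)
    {
        glUniform3fv(lightLoc, 1, m.lightPos);  // set this model's values
        m.draw();                               // then draw it
    }
}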
