Wireframe render appears with if-logic

My terrain fragment shader stores the terrain normal in a texture. Per-vertex normals will not work because the terrain uses LOD with real-time lighting, and they would not light well. For texture layers that have the vertical mapping option enabled, the terrain normal from the normal map determines which axes are used for the diffuse texture texcoords: if the major axis of the normal is X, the texture layer is mapped with the ZY axes.

A very strange artifact appears when using this technique on my 8800 GTS (I have not tested on any other cards). It looks as if a wireframe render is being drawn over the terrain. This seems to be a result of the if logic; it works fine if I force it to always use any one of the three possible mapping axes.

This code is used to determine the mapping axes:

  if (abs(worldNormal.z) > abs(worldNormal.x)) {
    coord = texcoord0.xy;
  }
  else {
    coord = texcoord0.zy;
  }

I also tried rendering the major axis of the normal as red, green, or blue for x, y, and z, and it appeared as I would expect.

Here is my fragment shader:

uniform sampler2D texture0;// normal map
uniform sampler2D texture1;// alpha map
uniform sampler2D texture2;// texture0
uniform sampler2D texture3;// bumpmap0
uniform sampler2D texture4;// texture1
uniform sampler2D texture5;// bumpmap1
uniform sampler2D texture6;// texture2
uniform sampler2D texture7;// bumpmap2
//uniform sampler2D texture8;// texture3
//uniform sampler2D texture9;// bumpmap3

uniform sampler2DShadow texture8;
uniform sampler2DShadow texture9;
uniform sampler2DShadow texture10;
uniform sampler2DShadow texture11;
uniform sampler2DShadow texture12;
uniform sampler2DShadow texture13;
uniform sampler2DShadow texture14;
uniform sampler2DShadow texture15;
uniform sampler2DShadow texture16;

varying vec3 VertexPosition;

uniform vec3 CameraPosition;

uniform vec2 LayerPosition[ LW_TERRAINLAYERS ];
uniform vec2 LayerScale[ LW_TERRAINLAYERS ];
uniform float LayerRotation[ LW_TERRAINLAYERS ];

varying vec3 texcoord0;
varying vec2 texcoord1;

uniform mat4 LightMatrix[ LW_LIGHTMATRIXARRAYSIZE ];

varying vec4 ModelVertex;

varying vec3 T,B;

Include "light.txt"
Include "DirectionalShadow.txt"
Include "PointShadow.txt"
Include "SpotShadow.txt"

void main(void) {

	vec4 AmbientLight = gl_LightSource[0].ambient;
	
	float dirshadowoffset = 0.0001;
	vec4 lightcolor = vec4(0.0,0.0,0.0,0.0);
	vec4 albedo;
	vec4 alpha = texture2D(texture1,texcoord1);
	vec4 bumpcolor;
	vec3 normal;
	vec3 worldNormal = ((texture2D(texture0,texcoord1).xyz - 0.5) * 2.0).xyz;
	vec3 N = normalize( gl_NormalMatrix * worldNormal);
	vec3 bumpnormal = vec3(1.0,1.0,1.0);
	float shininess = 0.0;
	normal=N;
	vec2 coord;	

	albedo = vec4(1.0);
	
	#ifdef LW_LAYER0
  #ifndef LW_LAYER0_VERTICAL
  	coord=texcoord0.xz;
  #endif
  #ifdef LW_LAYER0_VERTICAL
  	if (abs(worldNormal.z)>abs(worldNormal.x)) {
    coord = texcoord0.xy;
  	}
  	else {
    coord = texcoord0.zy;
  	}
  #endif
  albedo = texture2D(LW_LAYER0,coord / gl_LightSource[1].ambient.x);
	#endif

	#ifdef LW_LAYER1
  #ifndef LW_LAYER1_VERTICAL
  	coord=texcoord0.xz;
  #endif
  #ifdef LW_LAYER1_VERTICAL
  	if (abs(worldNormal.z)>abs(worldNormal.x)) {
    coord = texcoord0.xy;
  	}
  	else {
    coord = texcoord0.zy;
  	}
  #endif
  albedo = (1.0 - alpha.x) * albedo + (alpha.x * texture2D(LW_LAYER1,coord / gl_LightSource[1].ambient.y));
	#endif
	
	#ifdef LW_LAYER2
  #ifndef LW_LAYER2_VERTICAL
  	coord=texcoord0.xz;
  #endif
  #ifdef LW_LAYER2_VERTICAL
  	if (abs(worldNormal.z)>abs(worldNormal.x)) {
    coord = texcoord0.xy;
  	}
  	else {
    coord = texcoord0.zy;
  	}
  #endif
  albedo = (1.0 - alpha.y) * albedo + (alpha.y * texture2D(LW_LAYER2,coord / gl_LightSource[1].ambient.z));
	#endif
	
	#ifdef LW_LAYER3
  #ifndef LW_LAYER3_VERTICAL
  	coord=texcoord0.xz;
  #endif
  #ifdef LW_LAYER3_VERTICAL
  	if (abs(worldNormal.z)>abs(worldNormal.x)) {
    coord = texcoord0.xy;
  	}
  	else {
    coord = texcoord0.zy;
  	}
  #endif
  albedo = (1.0 - alpha.z) * albedo + (alpha.z * texture2D(LW_LAYER3,coord / gl_LightSource[1].ambient.w));
	#endif

	//bumpcolor = texture2D(texture3,texcoord0);
	//bumpnormal = normalize(bumpcolor.xyz - 0.5);
	//#ifdef LW_LAYER1
	//	if (alpha.x>0.01) {
	//  vec2 coord;
	//  coord.y=texcoord0.y;
	//  if (abs(worldNormal.z)>abs(worldNormal.x)) {
	//  	coord.x = texcoord0.x;
	//  }
	//  else {
	//  	coord.x = texcoord0.z;
	//  }
	//  
	//  albedo = (1.0 - alpha.x) * albedo + (alpha.x * texture2D(texture3,coord / gl_LightSource[1].ambient.y));
  //	bumpcolor = texture2D(texture5,texcoord0);
  //	bumpnormal = bumpnormal * (1.0 - alpha) + normalize(bumpcolor.xyz - 0.5) * alpha;
	//	}
	//#endif
	
	normal = N;
	//normal = T * bumpnormal.x + B * bumpnormal.y + N * bumpnormal.z;

	//#ifdef LW_LAYER2
	//	albedo = (1.0 - alpha.y) * albedo + (alpha.y * texture2D(texture6,texcoord0*0.5));
	//	bumpcolor = texture2D(texture7,texcoord0);
	//	normal = normalize(bumpcolor.xyz - 0.5);
	//	normal = normal * (1.0 - alpha.y) + (alpha.y * (T * normal.x + B * normal.y + N * normal.z));	
	//#endif

	//#ifdef LW_LAYER3
	//	albedo = (1.0 - alpha.z) * albedo + (alpha.z * texture2D(texture8,texcoord0*0.5));
	//	bumpcolor = texture2D(texture9,texcoord0);
	//	normal = normalize(bumpcolor.xyz - 0.5);
	//	normal = normal * (1.0 - alpha.z) + (alpha.z * (T * normal.x + B * normal.y + N * normal.z));	
	//#endif

	
	
	
	Include "ProcessLights.txt"
	
	gl_FragColor = albedo * AmbientLight + albedo * lightcolor;
	//gl_FragColor = AmbientLight * 0.5 + lightcolor * 0.5;	

  	//if (abs(worldNormal.x)>abs(worldNormal.z)) {
  	//	gl_FragColor = vec4(1.0,0.0,0.0,1.0);
  	//}
  	//else {
  	//	gl_FragColor = vec4(0.0,0.0,1.0,1.0);
  	//}

	Include "Fog.txt"
	//gl_FragColor=alpha;
}

What you’re seeing is a low mip level being selected, caused by large derivatives between pixels.

Video cards use derivatives between pixels to determine which mip to select. Typically, it’s something like:

mipmap = Log2( max( deltaU * textureWidth, deltaV * textureHeight ) )

Since some video cards calculate derivatives in 2x2 quads, you’re seeing blocks of 2x2 pixels where a low mip level was selected. Based on what I see there, I’m guessing the difference between xy and zy is large enough that it causes the derivatives to be large, and subsequently a low mip level is selected.
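
To make the selection rule above concrete, here is a rough sketch in GLSL of how the hardware’s choice can be approximated (illustrative only; real hardware differs in the details):

  // Approximation of mip selection from screen-space derivatives
  // of the texture coordinate (texSize = texture dimensions in texels).
  float mipLevel(vec2 dx, vec2 dy, vec2 texSize)
  {
    vec2 d = max(abs(dx), abs(dy)) * texSize;
    return log2(max(d.x, d.y));
  }

If neighboring pixels in a quad switch between texcoord0.xy and texcoord0.zy, the derivative can approach the full texture size: with a 1024-texel texture and a coordinate jump of about 0.5, log2(0.5 * 1024.0) = 9.0, so a very low mip is chosen for that whole quad.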

The way around this is to use the texture2DGrad() function and supply your own derivatives. You can use the dFdx() and dFdy() GLSL functions to calculate the derivatives to pass in. Note, however, that GeForce 8 series cards seem to have problems with this particular function, so much so that it may be more performance-friendly to simply sample the texture with both sets of coordinates and blend between the results using the outcome of the if-check.

For example:

  float result = (abs(worldNormal.z) > abs(worldNormal.x)) ? 0.0 : 1.0;
  vec4 c0 = texture2D( texture, texcoord0.xy );
  vec4 c1 = texture2D( texture, texcoord0.zy );
  vec4 color = mix( c0, c1, result );
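
If you do want to try the texture2DGrad() route instead, a sketch of the idea follows (names are illustrative, and on some drivers the function is exposed as texture2DGradARB via GL_ARB_shader_texture_lod). The key is that the derivatives are computed for both candidate coordinates outside the branch, so every pixel in a 2x2 quad contributes valid values:

  // Compute derivatives for both mappings before any divergent branch.
  vec2 uvA = texcoord0.xy;
  vec2 uvB = texcoord0.zy;
  vec2 dxA = dFdx(uvA);
  vec2 dyA = dFdy(uvA);
  vec2 dxB = dFdx(uvB);
  vec2 dyB = dFdy(uvB);
  vec4 color;
  if (abs(worldNormal.z) > abs(worldNormal.x)) {
    color = texture2DGrad(texture, uvA, dxA, dyA);
  }
  else {
    color = texture2DGrad(texture, uvB, dxB, dyB);
  }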

Hope this helps!

Kevin B

I sampled with both coords and used the if statement to choose the resulting color. I thought the texture2DGrad function worked ONLY on the 8 and 9 series, since it is an SM 4.0 feature?

The terrain frag shader already does about 12 texture lookups per fragment, so one extra isn’t a big deal.

NVidia’s GeForce 6 and better support it, as well as ATI’s X1K and better. In Direct3D, the tex2Dgrad instruction is mandatory in pixel shader 3.0, so it would be safe to assume that all D3D ps3.0 compatible hardware supports this functionality. Whether or not the drivers expose it is another issue entirely.

There has been an extension in the works for a while now that is supposed to introduce the gradient functions to GLSL, but as far as I know, nothing has gone on with that for about the last year. I think it’s called GL_EXT_texture_lod (which happens to be supported on NVidia GeForce 6 and better) but it can’t be found in the OpenGL extension registry. Perhaps the extension is done but just isn’t posted online for some reason? Either way, I’ve used texture2Dgrad() on GeForce 6 and better and it does in fact work. However, I would bet this functionality isn’t available on AMD cards just yet.

Kevin B

Well, I decided to take two samples anyways and blend them together based on the normal, so it’s a moot point.

An easy way to do awesome texture mapping on caves and 3D objects is to take three samples, one on each axis pair, and blend them together based on the normal.
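
A minimal sketch of that idea (often called triplanar mapping), assuming a sampler `tex`, a world-space position `worldPos`, and the world normal `worldNormal`; all of these names are illustrative:

  // Blend weights from the normal's axis alignment, normalized to sum to 1.
  vec3 w = abs(normalize(worldNormal));
  w /= (w.x + w.y + w.z);
  vec4 cx = texture2D(tex, worldPos.zy);   // projection along the X axis
  vec4 cy = texture2D(tex, worldPos.xz);   // projection along the Y axis
  vec4 cz = texture2D(tex, worldPos.xy);   // projection along the Z axis
  vec4 albedo = cx * w.x + cy * w.y + cz * w.z;

Because every pixel takes all three samples with continuous coordinates, the derivatives, and therefore the mip selection, stay well-behaved.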

I’m currently having the same issue with a parallax occlusion shader. What values do I have to pass to dFdx() and dFdy() to get correct results?

Would it be better if I determine the Lod level for my pixel once and then use texture2DLod instead?

You pass the texture coordinate that you’ll use in the texture lookup. texture2DLod is generally faster; however, you won’t get anisotropic filtering with it, and you have to factor in that it takes a bunch of ALU instructions to compute the lod.
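
A sketch of both options, with illustrative names (`uv` being the coordinate used in the lookup, `texSize` the texture dimensions in texels); note that explicit-lod fetches in a fragment shader may require an extension such as GL_ARB_shader_texture_lod depending on the driver:

  // Option 1: explicit derivatives of the coordinate used in the lookup.
  vec2 dx = dFdx(uv);
  vec2 dy = dFdy(uv);
  vec4 c0 = texture2DGrad(tex, uv, dx, dy);

  // Option 2: compute the lod once yourself (this is the ALU cost
  // mentioned above), then use an explicit-lod fetch.
  vec2 d = max(abs(dx), abs(dy)) * texSize;
  float lod = log2(max(d.x, d.y));
  vec4 c1 = texture2DLod(tex, uv, lod);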
