Texture cannot keep a negative value?



apapaxionga
11-03-2011, 09:27 PM
I want to store a negative value in a texture, but I cannot read that negative value back in the vertex shader: the negative value I stored turns out to be 0 there. Why?
And how can I store a negative value in a texture?

My texture definition is:


GLfloat edge[4][4] =
{
    { -1.0, 0, 1, 0 },
    {  0,   1, 0, 0 },
    {  0,   0, 1, 0 },
    {  1,   0, 1, 0 },
};
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 4, 4, 0, GL_LUMINANCE, GL_FLOAT, edge);


Cg code:


uniform float4x4 WorldViewProj : state.matrix.mvp;

void main_v(float4 pos : POSITION,
            out float4 opos : POSITION,
            out float4 color : COLOR,
            uniform sampler2D tex)
{
    opos = mul(WorldViewProj, pos);
    float a = tex2D(tex, float2(0.5, 0.0)).r;

    if (a == 0)
        color = float4(1, 0, 0, 1);
    else
        color = float4(0, 1, 0, 1);
}

Alfonse Reinheart
11-03-2011, 10:17 PM
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 4, 4, 0, GL_LUMINANCE, GL_FLOAT, edge);

Why are you mismatching your image format (http://www.opengl.org/wiki/Image_Format) with your pixel transfer parameters (http://www.opengl.org/wiki/Pixel_Transfer)? If you want to actually store floating-point data, you have to use a floating-point image format.

aqnuep
11-04-2011, 01:26 AM
What Alfonse is suggesting is to replace GL_RGB with a floating-point texture internal format. As I understand it, you want a single-channel floating-point texture, so you should use GL_R32F or GL_R16F, for example.
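Concretely, the fix amounts to changing the internal-format argument of the original call. A sketch, using the same data as the first post (GL_R32F and the GL_RED transfer format come from GL 3.0 / ARB_texture_rg; on older compatibility drivers GL_LUMINANCE would also work as the transfer format):

```c
/* Same data as the original post; only the internal format changes.
 * GL_R32F stores a full 32-bit float per texel, so negative values
 * survive the upload instead of being clamped to 0. */
GLfloat edge[4][4] = {
    { -1.0f, 0.0f, 1.0f, 0.0f },
    {  0.0f, 1.0f, 0.0f, 0.0f },
    {  0.0f, 0.0f, 1.0f, 0.0f },
    {  1.0f, 0.0f, 1.0f, 0.0f },
};
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F,   /* sized float internal format */
             4, 4, 0,
             GL_RED, GL_FLOAT, edge);     /* one channel of float data in */
```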

apapaxionga
11-04-2011, 01:37 AM
Thanks a lot. The information you provided helps me a lot.

apapaxionga
11-04-2011, 01:38 AM
It works the way you suggested. Thanks a lot.

apapaxionga
11-04-2011, 02:37 AM
Another question: if I use GL_LUMINANCE instead of GL_R32F, it does not work anymore. Why?


glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 4, 4, 0, GL_LUMINANCE, GL_FLOAT, edge);

aqnuep
11-04-2011, 03:22 AM
Because the driver may choose a different internal format in that case (e.g. GL_R8, which is an unsigned normalized texture format: values are clamped to the range [0,1] on upload).

Bear in mind that using constants like GL_RGB or GL_LUMINANCE when specifying the texture internal format is not good practice in general, because in such cases the driver can choose almost any actual internal format (usually a limited one) that is compatible with the one you requested. You should always use a sized and typed internal format if you want to ensure that a certain precision and/or value range will be available in your texture.

arekkusu
11-04-2011, 09:16 AM
And you can introspect this with glGetTexLevelParameter(... GL_TEXTURE_INTERNAL_FORMAT ...).
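For example, a sketch of that query (assumes a GL context and a 2D texture already bound; the comparison against GL_R32F is just one illustrative check):

```c
/* Ask the driver which internal format it actually picked for
 * mip level 0 of the currently bound 2D texture. */
GLint fmt = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &fmt);
if (fmt == GL_R32F)
    printf("got a 32-bit float format\n");
else
    printf("driver chose 0x%x instead\n", fmt);
```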

Alfonse Reinheart
11-04-2011, 10:43 AM
Another question: if I use GL_LUMINANCE instead of GL_R32F, it does not work anymore. Why?

Because you didn't ask for a floating-point luminance format.

There's a reason the Wiki article I linked you to did not talk about image formats that don't have sizes. That's because you should never use unsized formats. Ask for what you want, and you are far more likely to get it.