"texRECT" replace function in GLSL?

I have a problem: which GLSL function can replace the Cg function "texRECT"? I need to translate the following code from Cg to GLSL.
I know deviceCoord can be replaced by gl_FragCoord, but I could not find a function similar to texRECT in GLSL. I tried texture2D, but it did not work.

Can anyone give me some advice?
Thanks in advance!

void main(in float4 deviceCoord : WPOS,
          in float3 exitPoint : COLOR,
          out float4 ray : COLOR,
          uniform samplerRECT entryMap)
{
    float3 entryPoint = texRECT(entryMap, deviceCoord.xy).xyz;
    float3 dir = exitPoint - entryPoint;
    ray.rgb = dir;
}

texture2DRect

Note that texture2DRect is part of the GL_ARB_texture_rectangle extension. This extension is not reported by my friend's Radeon X850, but I cannot confirm that it's unsupported on these GPUs. The Radeon X850 does not report ARB_texture_non_power_of_two either, but you may still use NPOT textures (with a few limitations).
It may be better to use NPOT textures (as long as you do not want to run your code on a GeForce FX).
In my opinion it's best to always use NPOT textures, except on the GeForce FX, where you use POT textures (and waste some memory).

Samplers: sampler2DRect or sampler2DRectShadow
Functions: texture2DRect or shadow2DRect
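
For example, the Cg shader above could map to something like this (just a sketch; I'm assuming the exit point arrives through the fixed-function color interpolant, i.e. gl_Color):

#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect entryMap;

void main()
{
    // texture2DRect takes unnormalized texel coordinates,
    // so gl_FragCoord.xy can stand in for Cg's WPOS directly
    vec3 entryPoint = texture2DRect(entryMap, gl_FragCoord.xy).xyz;
    vec3 dir = gl_Color.rgb - entryPoint; // exitPoint - entryPoint
    gl_FragColor.rgb = dir;               // like ray.rgb in the Cg version
}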

Originally posted by k_szczech:
Note that texture2DRect is part of the GL_ARB_texture_rectangle extension. This extension is not reported by my friend's Radeon X850, but I cannot confirm that it's unsupported on these GPUs.
Texture rectangles have been available on all ATI cards since the Radeon 7000. However, they are often reported as GL_EXT_texture_rectangle. nVidia cards have supported them since the GeForce2 MX.

The main problem is that when texture rectangles are not reported as ARB_texture_rectangle, they cannot be used from GLSL. It's especially an issue on laptops, where OpenGL drivers are seldom updated…

Unfortunately, texture rectangles work differently on the two vendors' cards (NVidia and ATI).
In GLSL on NVidia, texture2DRect is typically used to work with rectangular float textures (GL_FLOAT_RGB32_NV), and only when necessary. Otherwise texture2D can be used: OpenGL 2.0 core supports NPOT textures (GL_TEXTURE_2D), so rectangle textures are not necessary, and this path is supported by both NVidia and ATI.
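
To illustrate the texture2D path: a minimal sketch, assuming the application passes the texture size in a uniform (entryMapSize is a hypothetical name, not something from the posts above):

uniform sampler2D entryMap;   // ordinary 2D texture; NPOT is fine on GL 2.0
uniform vec2 entryMapSize;    // hypothetical uniform holding width/height

void main()
{
    // texture2D expects normalized [0,1] coordinates
    vec2 uv = gl_FragCoord.xy / entryMapSize;
    gl_FragColor = vec4(texture2D(entryMap, uv).xyz, 1.0);
}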

Following your advice, I wrote this GLSL code:

uniform sampler2DRect entryMap;
void main()
{
    float xx, yy;
    xx = gl_FragCoord.x / 512.0; // make xx = [0,1]
    yy = gl_FragCoord.y / 512.0; // make yy = [0,1]
    vec3 entryPoint = texture2DRect(entryMap, vec2(xx, yy)).xyz;
    //vec3 dir = gl_FragColor.rgb - entryPoint;
    //gl_FragColor.a = length(dir);
    gl_FragColor.rgb = entryPoint;
}

If this were correct, I would see a texture (a color cube), but I get nothing, only a black screen!
I'm sure the shader has no errors!
I think the problem is how I specify the uniform sampler2DRect entryMap. I use the following code to set it:

glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID[0]);
glUseProgram(programShader_Ray);

int texLoc;
texLoc   = glGetUniformLocationARB(programShader_Ray, "entryMap");

glUniform1iARB(texLoc, 0);

textureID[0] is a 512x512 GL_FLOAT texture!

I think I should not have to specify the uniform sampler2DRect myself. The reason for the failure may be that the texture is not passed to the fragment shader?
I guess I should use glGetActiveUniformARB to do it (Orange Book, chapter 7.7)!
Can anyone give me some example code or a program?

Thanks a lot in advance!

Basically, if you use GL_TEXTURE_2D, then you should use just sampler2D / texture2D in the GLSL shader. If your texture is 512x512, then you do not need to worry about rectangular or NPOT texture support.

In sampler2DRect/texture2DRect the image dimensions are width x height (see http://www.mathematik.uni-dortmund.de/~goeddeke/gpgpu/tutorial.html#arrays3 ) and you can access the texels directly with [i, j]. On the other hand, in sampler2D/texture2D you have to convert to [(i+0.5)/width, (j+0.5)/height], since texture coordinates are parameterized over [0…1, 0…1].
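
Side by side, the two addressing schemes look roughly like this (a sketch; size is a hypothetical uniform carrying the texture dimensions, and both samplers are assumed to hold the same image):

#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect rectMap; // rectangle texture: texel-space coords
uniform sampler2D map2D;       // ordinary texture: normalized coords
uniform vec2 size;             // hypothetical uniform: width and height

void main()
{
    vec2 ij = floor(gl_FragCoord.xy); // integer texel index (i, j)

    // rectangle texture: address texel (i, j) directly, at its center
    vec4 a = texture2DRect(rectMap, ij + 0.5);

    // 2D texture: same texel via [(i+0.5)/width, (j+0.5)/height]
    vec4 b = texture2D(map2D, (ij + 0.5) / size);

    gl_FragColor = a - b; // roughly zero when both hold the same image
}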

If you really are using texture rectangles in your GLSL shader, your GL code should read:

  
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, textureID[0]);
glUseProgram(programShader_Ray); 

not

glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID[0]);
glUseProgram(programShader_Ray);

If you really are using texture rectangles in your GLSL shader, your GL code should read:
That's correct, except that you don't need glEnable (enabling a texture target only affects the fixed-function pipeline, not shaders).
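
For completeness, here is a sketch of the whole setup path under those assumptions. textureID and programShader_Ray are taken from the poster's code; pixels is a hypothetical buffer of 512x512 RGB floats:

/* create the rectangle float texture (rectangle textures have no
   mipmaps, so the min filter must be GL_NEAREST or GL_LINEAR) */
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, textureID[0]);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB32F_ARB,
             512, 512, 0, GL_RGB, GL_FLOAT, pixels);

/* bind it to unit 0 and point the sampler2DRect uniform at that unit */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, textureID[0]);
glUseProgram(programShader_Ray);
glUniform1i(glGetUniformLocation(programShader_Ray, "entryMap"), 0);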

What the other guys were saying is: if you have GL 2.0, then you can just use GL_TEXTURE_2D, 3D, and cubemap textures, because it supports non_power_of_two.
On older cards, as long as you don't use mipmapping, certain clamp modes, and so on, it will be hw accelerated, because it is basically ARB_texture_rectangle but with normalized tex coords.
Of course, NPOT 3D and cubemap textures won't be hw accelerated.
For that reason, the extension won't appear in the extension string, even though the GL version is 2.0.
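
If you want to pick the target at startup, a minimal sketch of that decision (the extension check is a simplified substring test, and pickTarget is a hypothetical helper name):

#include <string.h>
#include <GL/gl.h>

#ifndef GL_TEXTURE_RECTANGLE_ARB
#define GL_TEXTURE_RECTANGLE_ARB 0x84F5
#endif

GLenum pickTarget(void)
{
    const char *ver = (const char *)glGetString(GL_VERSION);
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);

    if (ver[0] >= '2')      /* GL 2.0 or later: NPOT 2D textures are core */
        return GL_TEXTURE_2D;
    if (strstr(ext, "GL_ARB_texture_rectangle"))
        return GL_TEXTURE_RECTANGLE_ARB;
    return GL_TEXTURE_2D;   /* fall back to power-of-two textures */
}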

It would be a good idea to put this in the wiki.
