View Full Version : Anyone tried projective textures on VPs?

09-08-2003, 09:26 AM
In the last few days I wanted to make some experiments with projective textures.
I got to the point where they seem to work quite nicely, but there's one problem which is way too ugly.

Here's a screenshot. (http://digilander.libero.it/krohm/images/projective_bug.jpg) .

I wouldn't mind the back-reflection, I know it's standard, I just don't want that yellow strip here in the middle.
As you will have guessed, this strip is at planar distance 0 from the projector, which involves the 'w' texture coordinate. I tried to get rid of it (mainly by modulating or clamping w) but without success.

For the rest, it looks quite ok to me...

Here's another load of screenshots, in case someone asks for them:
Projector far away, pointing at geometry. (http://digilander.libero.it/krohm/images/far_away.jpg) .
Projector moving closer in. (http://digilander.libero.it/krohm/images/approaching.jpg) .
Even closer. (http://digilander.libero.it/krohm/images/approaching2.jpg) .
Projector passed through the plane: back-projection shows up. (http://digilander.libero.it/krohm/images/inverted_near.jpg) .
Projector goes further down. (http://digilander.libero.it/krohm/images/inverted_far.jpg)

Ugh, there's also this ugly triangular thing... http://www.opengl.org/discussion_boards/ubb/frown.gif

EDITS: wrong ubb code.

[This message has been edited by Obli (edited 09-08-2003).]

09-08-2003, 09:58 AM
Most people will add a second layer of masking on the texture, where things close to the projection origin get masked down to 0 contribution. Often, this is also used for back-lobe removal.

You could also try clamping the w to some small value greater than 0.

09-08-2003, 09:58 AM
I have and managed to get it working after a bunch of headaches. My code's at home, though. . .

09-08-2003, 10:14 AM
You should post the vertex program, especially if it's ARB_vp. Hint hint http://www.opengl.org/discussion_boards/ubb/smile.gif

09-08-2003, 10:39 AM
I'm always flattered when I see that
grinning/smiling face that I painted
back in 1999 in people's projective
texturing demos. http://www.opengl.org/discussion_boards/ubb/wink.gif

Masking is one important way to get
rid of projections where you don't want
them. Blacking out the upper levels
of your mipmap pyramid is another useful trick.

Thanks -

09-09-2003, 01:47 AM
Originally posted by jwatte:
Most people will add a second layer of masking on the texture, where things close to the projection origin get masked down to 0 contribution. Often, this is also used for back-lobe removal.

You could also try clamping the w to some small value greater than 0.
Well, the back-projection is no problem: I know it should be there, so I consider it a todo rather than a bug. I will get rid of it as soon as I have the "real thing" actually working.

I tried to clamp w in the VP and the results were not really good... maybe I should clamp to another value. I will keep playing with clamping since you suggest it can work around the problem.

About masking down to 0 contribution near the projection origin, I will try it... as soon as I figure out how to do that safely. ^_^! Isn't w=0 the near clip plane?

Originally posted by cass:
Blacking out the upper levels of your mipmap pyramid is another useful trick.
This may be a nice idea. I will consider it carefully; it requires some additions here and there in my texture management API. Another drawback is giving up automatic mipmap generation, though I don't think that's a serious issue - I've never seen animated projective textures so far, but maybe in the future...
It is not a texturing demo, it's a texturing experiment (almost the same thing) http://www.opengl.org/discussion_boards/ubb/wink.gif

Originally posted by CatAtWork:
You should post the vertex program, especially if it's ARB_vp.
I didn't have it at hand when I posted. Here it is, with a little comment about how I plan to use it.

I plan to "place the observer in the right place". From there, I will draw pre-transformed, static things using multitextured projected lighting. The first pass will have no blending; subsequent passes will use additive blending.
Once the lighting is there, I will draw the "real world" with textures and complex shading (well, I'm not really sure about that, it's just an idea).
Since the "real world" may add rotations/translations, usually relative to the world origin (identity matrix), I need to keep track of the observer position. This is similar to what texGen does anyway.
So, in pseudocode:
// The observer matrix goes in matrix.program[0].
while(renderedLights < visibleLights) {
    for(i = 0; i < MINI(texUnits, programMatrices); i++) {
        // Bind projective texture to texture unit 'i' and put its
        // projector matrix in matrix.program[i + 1];
        // matrix.program[0] is the observer matrix, which may not be mvp.
    }
    if(!renderedLights) RenderSimple();
    else RenderSimpleAdditive();
    renderedLights += MINI(texUnits, programMatrices);
}
// Now the lights are in place, blended together.
Leaving aside the correctness/efficiency of the algorithm (I am far from done), I hope this explains the ideas and reasoning behind how I built the VP.

From what I've understood from the whitepapers, I need various matrices.
MVP: this may not be the "observer MVP" (e.g. for complex rotating "entity" objects). This is the vertex program's MVP matrix.
Observer MVP: the above without any transformations added by "entities". Its inverse is "prevInverse", which really should have been called "observerInverse".
Projector matrix[i]: the name says it all. Some day my vertex program will be built automatically by a dedicated subsystem, and these will be pulled out of state.matrix.program[1..MINI(texUnits, programMatrices)-1] and put in lightProj_0, lightProj_1, ...

TEMP tc, disp, posout;
PARAM mvp[] = { state.matrix.mvp };
PARAM prevInverse[] = { state.matrix.program[0].inverse };
PARAM lightProj[] = { state.matrix.program[1] };

# Put primary color in place.
MOV result.color, vertex.attrib[3];

# Compute clip coordinates
DP4 posout.x, vertex.attrib[0], mvp[0];
DP4 posout.y, vertex.attrib[0], mvp[1];
DP4 posout.z, vertex.attrib[0], mvp[2];
DP4 posout.w, vertex.attrib[0], mvp[3];
MOV result.position, posout;
# Compute some kind of object linear texGen for mesh, base texture.
# Ignore this, actually does not matter.
DPH result.texcoord[0].x, vertex.attrib[0], state.texgen[0].object.s;
DPH result.texcoord[0].y, vertex.attrib[0], state.texgen[0].object.t;
# Just to be sure it works on every hw, I set texCoord[0].rq = {.0 .0 .0 1.0};

# Compute texCoord[1]
# This is the important thing.
DP4 tc.x, posout, prevInverse[0];
DP4 tc.y, posout, prevInverse[1];
DP4 tc.z, posout, prevInverse[2];
DP4 tc.w, posout, prevInverse[3];
# tc may be different from v[OPOS].
DP4 posout.x, tc, lightProj[0];
DP4 posout.y, tc, lightProj[1];
DP4 posout.z, tc, lightProj[2];
DP4 posout.w, tc, lightProj[3];

# After the perspective divide, coordinates have (0,0) at the screen center
# and (1,1) at the upper right corner.
# I need to translate and scale: this is equivalent to the matrix full of "1/2"
# in the papers (slightly different for 2D textures).
MUL disp.w, posout.w, .5;
MAD posout.xy, posout, .5, disp.w;
MOV result.texcoord[1], posout;

I tested it with static and some dynamic geometry and it seems to work correctly (except for the artifact shown in my first post, of course).

By the way, I plan to use an FP to KIL back-projected fragments... once I have FPs and the related subsystems in place. For now I'll go with TexEnv(MODULATE), as conventional wisdom suggests.

The problem with that line in the middle at planar distance zero is still there; as jwatte suggested, I will play with w clamping.

Thank you!

======================================= UPDATE =======================================
All right, I took some time to experiment with fragment programs, and I have to apologize to everyone who tried to solve the problem. The problem, really, was not there - I should have tested with fragment programs from the beginning, given that I planned everything around the programmable pipe.

About the 'w' value
Before using fragment programs, I tried to clamp the w value as jwatte suggested. I actually think clamping is wrong; it gives a much worse artifact.

About the projector
Why does the projective demo I have here use a projector zNear > 0? I set my projector zNear = 0 and it seems to work just fine.

Here are the screenshots. The lighting equation is now "more correct" than before. The "cage-like cube" is the "entity" which moves relative to the origin. It appears to be lit correctly.

Of course, the framerate is *so* ugly because I am using NVemulate (just so no one runs away screaming http://www.opengl.org/discussion_boards/ubb/wink.gif).
The projective texture, without black borders (http://digilander.libero.it/krohm/images/spotlight-projection-lighter.jpg)
Just put the projector in place. (http://digilander.libero.it/krohm/images/placing_projector.jpg)
From another point of view (http://digilander.libero.it/krohm/images/point_of_view.jpg)
Projector pointing away (that white line at the top right), no back-projection (http://digilander.libero.it/krohm/images/no_back.jpg)

The fragment program is responsible for clipping the back-projection and texels outside the frustum. By using CLAMP_TO_BORDER with a border color of transparent or black (depending on whether it is the first applied light or not), I should only need to clip the back-projection manually.
There's really nothing interesting in this FP, but here it is in case someone asks (more efficient implementations are **surely** possible, starting with the CLAMP_TO_BORDER trick)...

TEMP adder, lookup, illuminated, projected; # I will move light color here and then modulate by base texture color

# I begin by applying the first light.
# Everything outside the projector frustum should not be illuminated by it.
# This means I cannot simply do a projective texture lookup; I need to know
# whether the projected texcoords fall outside the [0,1] range.
RCP adder.w, fragment.texcoord[1].w;
MUL projected, fragment.texcoord[1], adder.w;
TEX adder, projected, texture[1], 2D;
# Add other lights, possibly with shadowmapping... How to do shadowMapping in FPs?
# <other lights and shadowmaps here>
# Get rid of the back-projection.
SGE illuminated, fragment.texcoord[1].w, .0;
MUL adder.xyz, adder, illuminated;
# Now clip this texture if it goes outside projector's frustum.
# 's'
SGE illuminated, projected.x, .0;
MUL adder.xyz, adder, illuminated;
SGE illuminated, 1.0, projected.x;
MUL adder.xyz, adder, illuminated;
# 't'
SGE illuminated, projected.y, .0;
MUL adder.xyz, adder, illuminated;
SGE illuminated, 1.0, projected.y;
MUL adder.xyz, adder, illuminated;

# Now modulate by base texture color, non-projective
TEX lookup, fragment.texcoord[0], texture[0], 2D;

MUL result.color, adder, lookup;

Now a frightening question is growing in me: how do I do shadow mapping with the programmable pipe?
Going to do some research on this...

[This message has been edited by Obli (edited 09-10-2003).]

UPDATE EDIT: wrong link, stronger "UPDATE" separation.
