Perspective-Correct Texturing with an Ortho Projection

I’m using an orthographic projection matrix to display pre-transformed vertex data. Every vertex has a Z value (distance from the viewer). I know the Z values are right because z-buffering works correctly.

I have enabled texture perspective correction with the glHint function (set to GL_NICEST). But perspective correction is not happening!

Is there some test in the bowels of OpenGL that short-circuits perspective correction when the projection matrix is orthographic?

Originally posted by Joshua Smith:
I’m using an orthographic projection matrix to display pre-transformed vertex data. … Is there some test in the bowels of OpenGL that short-circuits perspective correction when the projection matrix is orthographic?
You want perspective correction when NOT using perspective? How is OpenGL supposed to know how far into the scene things are in order to do this correction? To get it, you are probably going to have to supply a calculated “w” coordinate with your position data. (This is usually generated when a vertex is multiplied by a perspective matrix.)
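For what it’s worth, a minimal sketch of that idea, assuming JOGL-style immediate-mode calls, identity PROJECTION and MODELVIEW matrices, and that the software pipeline can hand back the w it divided by (the method and variable names here are mine):

    // Multiply the divided-out w back into each pre-transformed vertex and let
    // GL redo the divide; that also makes attribute interpolation
    // perspective-correct again.
    void emitHomogeneousVertex(GL gl, float xNdc, float yNdc, float zNdc,
                               float wClip) { // wClip: w the software stage divided by
        gl.glVertex4f(xNdc * wClip, yNdc * wClip, zNdc * wClip, wClip);
    }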

Of course I have not tried this (or thought it through) but what you are asking is really bizarre…

You need to specify projective texture coordinates to get non-linear interpolation, since the hardware has lost any way to compute its own homogeneous data. So you need a q coordinate in addition to s & t; glTexCoord4*() or an equivalent entry point is the way in.
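A minimal sketch of the q-coordinate trick, assuming JOGL-style calls and that the software pipeline can supply the w a perspective projection would have produced for each vertex (wEye and the other names are my own, not from the original code):

    // Projective texture coordinates: the hardware interpolates s*q, t*q and q
    // linearly in screen space, then divides per fragment, which reconstructs
    // perspective-correct s and t even under an orthographic projection.
    void emitVertex(GL gl, float s, float t, float wEye,
                    float xScreen, float yScreen, float zDepth) {
        float q = 1.0f / wEye;
        gl.glTexCoord4f(s * q, t * q, 0.0f, q);
        gl.glVertex3f(xScreen, yScreen, zDepth); // pre-transformed position, as before
    }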

The way you are abusing the pipeline is quite nasty.

You really should try to at least use the hardware for projection.

Next you’ll be asking for a zbuffer and we’ll all just laugh at you.

That you’re already subdividing for “Phong” probably means you’ll end up with hardware deceleration. It makes for fun reading, but it’s no way to use graphics hardware.

P.S. Instead of trying to work around a bad early decision, why don’t you go back and reevaluate that decision? Worst case, you should be able to get eye-space transformed data for use by OpenGL and set up a projection matrix trivially.

It might be a “bad” decision, but it was an incredibly quick way to get hardware acceleration going. I may end up using perspective projection in the end (because of this texture problem, although I haven’t given up hope yet!), but I know that’s going to be tough: our software renderer allows translation of the triangles after perspective projection (analogous to sliding the film around in the back of the camera), which I’m pretty sure isn’t possible in OpenGL (unless I do something profoundly weird with the viewport and scissor; then again, I’m doing profoundly weird stuff now…).

Thanks for the pointer about glTexCoord4. That might be the key.

FYI: Z-buffering works just fine in orthographic projection.

but I know that’s going to be tough: our software renderer allows translation of the triangles after perspective projection (analogous to sliding the film around in the back of the camera), which I’m pretty sure isn’t possible in OpenGL
Before you decide what is and isn’t possible in OpenGL, you should probably investigate it. Or, at the very least, ask us.

Such things are mind-blowingly simple with any kind of vertex shader paradigm. ARB_vertex_program could easily handle it, let alone glslang (ARB_vertex_shader). Pick whichever one is appropriate for you.
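To make that concrete, the post-projection translation is one line in a vertex shader. A hedged sketch, written out as the Java string it would be fed from; the uniform name (u_filmShift) is my invention, not anything from the engine being discussed:

    // "Film shift" as a glslang vertex shader: translate in clip space, scaled
    // by w, which is exactly a pure translation in NDC after the perspective
    // divide -- i.e. sliding the film behind the lens.
    String filmShiftVS =
        "uniform vec2 u_filmShift;               // shift in NDC units\n" +
        "void main() {\n" +
        "    vec4 p = gl_ModelViewProjectionMatrix * gl_Vertex;\n" +
        "    p.xy += u_filmShift * p.w;\n" +
        "    gl_Position = p;\n" +
        "}\n";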

Such things are mind-blowingly simple with any kind of vertex shader paradigm. ARB_vertex_program could easily handle it, let alone glslang (ARB_vertex_shader). Pick whichever one is appropriate for you.
I’ll have to take your word for it. There is nothing about ARB_vertex_program that looks “mind-blowingly simple” to me. It’s 65 pages of dense specification in the OpenGL Extensions Guide. And it’s been 25 years since I wrote any assembly language code.

Rest assured that if I end up having to go this route, I’ll come begging to this list for LOTS of help.

There is nothing about ARB_vertex_program that looks “mind-blowingly simple” to me. It’s 65 pages of dense specification in the OpenGL Extensions Guide. And it’s been 25 years since I wrote any assembly language code.
It’s not that bad; it’s nothing like the assembly programming you may recall. There are lots of cheat-sheets and demos floating around the IHV websites. I think you’ll find it relatively easy to make the transition.

Besides, as Korval pointed out, GLSL makes it all a walk in the park.

… as Korval pointed out, GLSL makes it all a walk in the park.
I’m using JOGL and from what I’ve been able to find, GLSL support doesn’t exist in JOGL. (Although there are some demos of ARB_vertex_program on Sun’s web site, so that must be possible.)

You have a strange definition of “hardware acceleration”. And you may find that your z-buffer, while appearing superficially correct, is not perspective correct and will show gross errors later; it depends on whether you’re feeding in unnormalized post-projection homogeneous coordinates or not.

Dollars to donuts you’re not.

P.S. You’re at the stage where the problems, workarounds, and hacks are starting to outweigh the apparent savings of getting something up fast. Turn back before it’s too late. At the very minimum, it should be simple to apply an identity projection matrix to your software transformation stage and send the results to OpenGL in eye space; that would eliminate all the nasty stuff you’re doing now. You could then elegantly move the model matrix stuff to hardware as well. What you’re doing now makes doing the “right thing” later more difficult.
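Concretely, that minimum looks something like this (JOGL-style calls; the frustum numbers are placeholders, not values from the engine):

    // Stop the software pipeline at eye space and let the hardware project.
    void setupHardwareProjection(GL gl) {
        gl.glMatrixMode(GL.GL_PROJECTION);
        gl.glLoadIdentity();
        gl.glFrustum(-1.0, 1.0, -0.75, 0.75, 1.0, 1000.0); // placeholder values
        gl.glMatrixMode(GL.GL_MODELVIEW);
        gl.glLoadIdentity(); // vertices arrive already transformed to eye space
    }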

Heed the advice, do the projection in hardware.

JOGL?

So you’re doing all that software transformation in Java :(

OK, OK, OK. I yield to your collective experience. (Besides, I cannot get my head around 4D texture coordinates.) I’ll let OpenGL do the perspective transform. Wish me luck!

And yes, I’m doing all this stuff in Java. You can see the engine in action here:
http://www.kaon.com/software/swmeson2.html

With modern compilers and CPUs, Java is just as fast as C for tight loops like graphics pipelines. Regardless of language, you spend all your time waiting for data caches to fill anyway.

Well, that rather depends, doesn’t it? Real Java compiles to bytecode running in a VM, and not all VMs have a good JIT. In addition, there is array bounds checking, etc., that grinds you to a halt. People have been saying for years that Java is just as fast given good compilers; the trouble is, it never quite materializes, or is so inconsistent as to undermine the point of Java in the first place.
:)

http://www.armadilloaerospace.com/n.x/johnc/Recent%20Updates

Glad to hear you’re going with HW projection, you won’t regret it.

To not get a good JIT compiler, you’d have to scrape up a copy of Netscape from 1998. Basically, JITs have been doing a really good job on tight loops since the MS JVM and JDK 1.3. In this case, we’re developing an application that will be deployed with a fixed hardware/software configuration, so we can easily ensure an excellent JIT.

Array bounds checking is a red herring. CPUs spend ALL their time waiting for memory reads and writes in a software graphics pipeline. You could compute RSA ciphers in the time wasted waiting for data to become available and not have any impact on performance. (Besides, most of the time the bounds check gets hoisted out of the loop.)

I understand the issues of pipelining code and data stalls, but I don’t entirely agree with your assertion. Efficient pipelining, instruction reordering, register allocation, prefetching, etc. are the meat of good compilers. That data stalls are a problem is a justification for having good compilers, NOT a reason why compilers don’t matter and are all equal. OK, that’s not exactly what you claimed, but the cost of trivial data accesses is not entirely mitigated by the fact that fetching data can stall your code.

FYI, JOGL has support for GLSL in that it exposes all of the associated entry points like glShaderSource.
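For anyone searching later, a hedged sketch of what that looks like through JOGL. Core-style GL 2.0 names are assumed, and the exact interfaces and overloads vary by JOGL release; older drivers and bindings want the ARB_shader_objects variants (glCreateShaderObjectARB and friends) instead:

    // Compile and link a GLSL program through JOGL's entry points.
    String[] src = { filmShiftVS };            // e.g. the film-shift sketch upthread
    int[] lengths = { src[0].length() };
    int vs = gl.glCreateShader(GL.GL_VERTEX_SHADER);
    gl.glShaderSource(vs, 1, src, lengths, 0); // String[] overload with int[] lengths
    gl.glCompileShader(vs);
    int prog = gl.glCreateProgram();
    gl.glAttachShader(prog, vs);
    gl.glLinkProgram(prog);
    gl.glUseProgram(prog);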

Nice demo of your engine. Please post on the JOGL forum if you run into JOGL-specific (as opposed to OpenGL-specific) problems.

If you haven’t looked at Java or the currently available set of JVMs in a while you might be surprised at the levels of performance attainable (in a portable, platform-independent fashion) with Java on the desktop. There are some poor demos here which may or may not be of interest.

The glass looks definitely cool, good job!

The glass looks definitely cool, good job!
The glass uses our “chrome” shading algorithm (a very peculiar specular lighting equation I dreamed up), which would require a pixel shader program to reproduce in OpenGL. So perhaps I’ll end up having to learn that stuff before all is said and done.
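To be clear about what a fragment-program replacement would even look like: this is NOT the “chrome” equation (that lives only in our engine); it’s just the skeleton such a shader would hang off, with a bog-standard specular power term as a stand-in, and varying names of my own choosing:

    String chromeStandInFS =
        "varying vec3 v_normal;\n" +
        "varying vec3 v_toEye;\n" +
        "void main() {\n" +
        "    vec3 n = normalize(v_normal);\n" +
        "    vec3 l = normalize(gl_LightSource[0].position.xyz); // directional light\n" +
        "    vec3 h = normalize(l + normalize(v_toEye));         // half vector\n" +
        "    float spec = pow(max(dot(n, h), 0.0), 64.0);\n" +
        "    gl_FragColor = vec4(vec3(spec), 1.0);\n" +
        "}\n";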

I’ve managed to set up the perspective transform as you all suggested (vertex data in eye coords). I’m faking the film-shift for now by playing games with the frustum call. Works OK.
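In case it helps someone else, the frustum game is just an off-axis window; a sketch with made-up numbers and names:

    // "Film shift" via an off-axis frustum: sliding the frustum window in the
    // near plane slides the image without moving the eye point.
    void filmShiftFrustum(GL gl, double shiftX, double shiftY) {
        double near = 1.0, far = 1000.0;  // placeholder clip planes
        double halfW = 1.0, halfH = 0.75; // symmetric window at the near plane
        gl.glMatrixMode(GL.GL_PROJECTION);
        gl.glLoadIdentity();
        gl.glFrustum(-halfW - shiftX, halfW - shiftX,
                     -halfH - shiftY, halfH - shiftY, near, far);
        gl.glMatrixMode(GL.GL_MODELVIEW);
    }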

Feeding in model space coordinates and letting GL do the eye transform is next (that’ll make specular lighting work).

You all were right: this approach is proving easier than getting my Ortho hack working correctly with perspective correction.

You all were right: this approach is proving easier than getting my Ortho hack working correctly with perspective correction.
I would have expected so. :)

Still, should you ever find yourself in the situation you really need to do what you initially asked about, I think one of Cass' old pages might help.

Glad to hear it.

FYI, it’s object space coordinates with modelview transformation to eye space in OpenGL parlance.