
projecting textures



blender
08-21-2002, 09:41 AM
How can I project a texture onto the environment, to get for example spotlights and bullet holes? Do you know of any good tutorials or sample implementations?

-thanks

SirKnight
08-21-2002, 10:42 AM
Cass made a good paper about this. It's on the NVIDIA developer web site.

-SirKnight

blender
08-21-2002, 11:32 AM
NVIDIA's papers and demos suck, if you ask me. I think the papers are too brief and the demos are ~60% crap.

dorbie
08-21-2002, 05:30 PM
Try this: http://www.dorbie.com/uav.html#_cch3_887222955

I think NVIDIA's white papers and demos are excellent. If you don't like them, try to do better yourself. You really sound like an undeserving ingrate when you complain that you only get 40% great code in their demos.

SirKnight
08-21-2002, 05:31 PM
Well, if you're looking for a 'hold my hand through every single line of code like I'm a kindergartner' style of paper, then I think you're out of luck, unless NeHe has a tutorial on this subject. The papers and demos do not suck, though. I find them good enough to learn from and understand. To me, and most other people I know, they are great and top notch. Learning from them, and from most other graphics papers and demos for that matter, requires a bit of thinking on your part, which really is the best way to learn graphics or any other programming topic. If you're told exactly how to do every single little thing without any thinking and reasoning of your own, you won't be as good at it. But if that is how you feel about the papers/demos, then I really don't know what to tell you. :P Sorry.

EDIT: Silly grammar error. :)

-SirKnight


[This message has been edited by SirKnight (edited 08-21-2002).]

zed
08-21-2002, 08:09 PM
>>How can I project a texture onto the environment, to get for example spotlights and bullet holes<<

Why would you want to project a bullet hole?

There's an example that comes with GLUT, but I wouldn't look at it because it sucks.

Robbo
08-21-2002, 10:32 PM
Originally posted by SirKnight:
Well, if you're looking for a 'hold my hand through every single line of code like I'm a kindergartner' style of paper, then I think you're out of luck, unless NeHe has a tutorial on this subject. The papers and demos do not suck, though. I find them good enough to learn from and understand. To me, and most other people I know, they are great and top notch. Learning from them, and from most other graphics papers and demos for that matter, requires a bit of thinking on your part, which really is the best way to learn graphics or any other programming topic. If you're told exactly how to do every single little thing without any thinking and reasoning of your own, you won't be as good at it. But if that is how you feel about the papers/demos, then I really don't know what to tell you. :P Sorry.


I think the problem with NVIDIA's demos is that you have to hunt around for the interesting bits of code, because they hide everything behind their SDK libs. If you aren't familiar with those, you'll wonder what exactly is going on ;)

blender
08-22-2002, 06:42 AM
I think NVIDIA's demos (mostly the HW stuff) are aimed at programmers more advanced than me, and I don't think the sources work well as tutorials, because of how they are commented and organized.
For example, I once downloaded a demo about shadow maps just to see what was going on. It was a single file of over 4500 lines, all lumped together and very poorly commented.
If reading something like that isn't painful, what is?

I agree that NVIDIA's demos must be great if you are a very good programmer and already know a lot about graphics programming.

blender
08-24-2002, 05:52 AM
I managed to project a texture onto my scene by manipulating the texture matrix, but I obviously can't project multiple textures in one pass this way, because I have to bind the texture for every polygon in my scene. Not very practical, but that's how it was done in many tutorials (including NVIDIA's).
It must be possible to project multiple textures in a single pass, but how?

dorbie
08-24-2002, 06:04 AM
You can use multitexture, but that's not how bullet holes are done. Games tend to compute where bullet holes should land on the surface and draw a small polygon there. Similar things are done for weapon lighting effects. Projective texturing is used for stuff like spotlights and shadows under limited circumstances, and even then not many games use it.

PH
08-24-2002, 06:19 AM
Yes, bullet holes are probably best done with small polygons. I'm even starting to really like glPolygonOffset ( maybe even as far as using dorbie's previous suggestion for multipass effects...maybe :) ). Ahem, it's great for shadow volumes btw, especially after chopping them up.
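
(For illustration, a minimal sketch of the small-polygon decal approach described above, assuming the collision test has already produced a hit point and two tangent vectors spanning the decal; hit, u, v and bulletHoleTex are hypothetical names, not from this thread:)

// Draw a small textured quad at the hit point, pushed toward the viewer
// with polygon offset to avoid z-fighting with the surface underneath.
glEnable( GL_POLYGON_OFFSET_FILL );
glPolygonOffset( -1.0f, -1.0f );
glBindTexture( GL_TEXTURE_2D, bulletHoleTex );
glBegin( GL_QUADS );
    glTexCoord2f( 0, 0 ); glVertex3f( hit[0]-u[0]-v[0], hit[1]-u[1]-v[1], hit[2]-u[2]-v[2] );
    glTexCoord2f( 1, 0 ); glVertex3f( hit[0]+u[0]-v[0], hit[1]+u[1]-v[1], hit[2]+u[2]-v[2] );
    glTexCoord2f( 1, 1 ); glVertex3f( hit[0]+u[0]+v[0], hit[1]+u[1]+v[1], hit[2]+u[2]+v[2] );
    glTexCoord2f( 0, 1 ); glVertex3f( hit[0]-u[0]+v[0], hit[1]-u[1]+v[1], hit[2]-u[2]+v[2] );
glEnd();
glDisable( GL_POLYGON_OFFSET_FILL );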

blender
08-24-2002, 07:07 AM
Sorry that I mentioned that bullet hole thing, it just crossed my mind.
But anyway, how can I project all the textures in one pass?

PH
08-24-2002, 07:21 AM
Like dorbie said, use multitexture and some form of combiners ( register combiners, TexEnv, fragment shaders, etc. ). How you combine the textures depends on what you want to achieve.

For example, something like the following will add all projected textures using glTexEnv:

glMatrixMode( GL_TEXTURE );

glActiveTextureARB( GL_TEXTURE0_ARB );
// **Load texture projection matrix for Tex0
glTexEnvi( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE );

glActiveTextureARB( GL_TEXTURE1_ARB );
// **Load texture projection matrix for Tex1
glTexEnvi( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD );

// etc

// Draw surfaces

Note that GL_ADD requires the EXT_texture_env_add extension ( or a similar one ), but most/all cards handle it easily.

With GeForce1-level hardware you can project 2 textures per pass, a GeForce3 can do 4, a Radeon 8500 can do 6, etc. If I remember correctly, there are two-texture cards that only have one hyperbolic interpolator and can thus only do 1 projected texture per pass ( the TNT, I think ).

jwatte
08-24-2002, 07:26 AM
Projecting a texture is all in your texgen and your texture matrix. There's one texgen and one texture matrix per multitexture unit, so you just ActiveTexture() through the units and set up each projection in turn.

I'm assuming you're doing the "slide projector" kind of projection; use CLAMP_TO_BORDER with a black border color and each texture in ADD texture env mode. Or make the border color white and use MODULATE.
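
(For illustration, a minimal sketch of the per-unit setup jwatte describes, for one unit; projTex is a hypothetical texture ID. With identity eye planes, EYE_LINEAR texgen outputs the eye-space position, which the texture matrix then maps into the projector's clip space. GL_CLAMP_TO_BORDER comes from ARB_texture_border_clamp and may be suffixed _ARB on older headers:)

glActiveTextureARB( GL_TEXTURE0_ARB );
glBindTexture( GL_TEXTURE_2D, projTex );

// Identity eye planes: generated (s,t,r,q) = eye-space vertex position.
// Note: EYE_LINEAR planes are transformed by the inverse of the current
// modelview at specification time, so set them while modelview is identity.
static const GLfloat sPlane[] = { 1, 0, 0, 0 };
static const GLfloat tPlane[] = { 0, 1, 0, 0 };
static const GLfloat rPlane[] = { 0, 0, 1, 0 };
static const GLfloat qPlane[] = { 0, 0, 0, 1 };
glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR );
glTexGeni( GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR );
glTexGeni( GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR );
glTexGeni( GL_Q, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR );
glTexGenfv( GL_S, GL_EYE_PLANE, sPlane );
glTexGenfv( GL_T, GL_EYE_PLANE, tPlane );
glTexGenfv( GL_R, GL_EYE_PLANE, rPlane );
glTexGenfv( GL_Q, GL_EYE_PLANE, qPlane );
glEnable( GL_TEXTURE_GEN_S );
glEnable( GL_TEXTURE_GEN_T );
glEnable( GL_TEXTURE_GEN_R );
glEnable( GL_TEXTURE_GEN_Q );

// Keep the projection from repeating outside the projector's frustum.
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER );
glTexEnvi( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD );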

blender
08-24-2002, 09:32 AM
How on earth do you think I can make dynamic lights if, in the worst case, I must render the scene once per light?
I assume the games out there do NOT project like this. How are Hitman's shadows projected, or the dynamic lights in a Quake-type game? I'm pretty sure not like this.

blender
08-24-2002, 10:22 AM
I remember Nate had a tutorial about dynamic lighting where he checked the polygons within a fixed distance of a light and rendered only those to project the light onto.

This would work for lights, but what about shadows? It would be quite difficult to check which polygons the shadow texture should be projected onto, especially when the shadow stretches far away into the scene.

zed
08-24-2002, 11:57 AM
I'm drawing my bullet holes straight into the base texture (actually using dot3 so they look more like holes).
You will need unique texturing for this, though (you should have this anyway, right?).

blender
08-24-2002, 01:00 PM
When I move the camera it affects the texture coordinate generation; what would be the best way to prevent this?

dorbie
08-24-2002, 02:02 PM
You must have your texgen in eye space. You should either texgen in object space or load the inverse view matrix onto the texture matrix stack. That's what _OBJECT and _EYE mean.
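
(A sketch of the texture matrix dorbie means, assuming the EYE_LINEAR setup sketched above so the texgen output is the eye-space position; projectorProjection, projectorView and inverseCameraView are hypothetical 4x4 arrays the app computes itself:)

glMatrixMode( GL_TEXTURE );
glLoadIdentity();
glTranslatef( 0.5f, 0.5f, 0.5f );     // bias: clip space [-1,1] -> [0,1]
glScalef( 0.5f, 0.5f, 0.5f );
glMultMatrixf( projectorProjection ); // projector frustum
glMultMatrixf( projectorView );       // world -> projector space
glMultMatrixf( inverseCameraView );   // eye -> world, so camera motion no longer moves the projection
glMatrixMode( GL_MODELVIEW );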

blender
08-24-2002, 02:46 PM
Thanks. It works pretty well now.

PH
08-25-2002, 06:40 AM
Originally posted by zed:
I'm drawing my bullet holes straight into the base texture (actually using dot3 so they look more like holes).
You will need unique texturing for this, though (you should have this anyway, right?).

You might be able to "dot3 light" your bullet quads instead of using unique texturing. It could be problematic for bullet holes on complex meshes, but it'll save a lot of memory.

blender
08-25-2002, 07:57 AM
I have implemented shadow mapping now by first rendering a model to a texture and then projecting it onto the environment.
Are there any extensions/tricks to speed up the render-to-texture process?
When the model is too close to the light source, it can't fit into the texture; how could this be fixed?
Any ideas on how to find only those polygons the shadow is projected onto?

zed
08-25-2002, 10:51 AM
>>You might be able to "dot3 light" your bullet quads instead of using unique texturing. It could be problematic for bullet holes on complex meshes, but it'll save a lot of memory.<<

Yes, I can see problems with that. I like to keep things simple.
I've been using unique base textures for years for everything (normally at a lower resolution, so this was viable even on my Riva128 4MB). Unique textures help with certain things (less repetition in the scene, object-space bumpmapping, plus a couple of other neat things). True, memory is the biggest drawback, but I get around that by generating textures (and meshes) on the fly.

>>Any ideas on how to find only those polygons the shadow is projected onto?<<

You project the texture through a frustum, be it orthogonal or perspective (i.e. like the view frustum), so you can use the same code: check the 6 (or fewer) planes of the projecting texture's frustum against your scenegraph to see what's in and what's not.
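
(A minimal sketch of zed's suggestion: test each object's bounding sphere against the six inward-facing planes of the projector frustum. Extracting the planes from the projector's combined matrix is assumed to happen elsewhere:)

typedef struct { float a, b, c, d; } Plane;  /* ax + by + cz + d = 0, normal pointing inward */

int sphereInProjectorFrustum( const Plane planes[6], const float center[3], float radius )
{
    int i;
    for ( i = 0; i < 6; ++i )
    {
        float dist = planes[i].a * center[0] + planes[i].b * center[1]
                   + planes[i].c * center[2] + planes[i].d;
        if ( dist < -radius )
            return 0;   /* entirely outside this plane: skip these polygons */
    }
    return 1;           /* inside or intersecting: receives the projection */
}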

blender
08-26-2002, 09:48 AM
Can you tell me if there are any faster ways to render to a texture than using glCopyTexImage2D?

Has anyone checked Nate's tutorial about dynamic lighting? There's another way to project a texture there; can anyone tell me more specifically about it?

SirKnight
08-26-2002, 10:08 AM
Well, there are some extensions you can try: WGL_ARB_render_texture, WGL_NV_render_depth_texture (which can be good for shadow mapping), and WGL_NV_render_texture_rectangle. Actually, the last two require WGL_ARB_render_texture, but they add some extra cool stuff. Maybe something there will work well for you.

Have you also tried glCopyTexSubImage2D(...)? That could save some time by letting you copy only the part you need.
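
(A sketch of the copy-to-texture path, assuming shadowTex was created once with glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); winWidth/winHeight are hypothetical names:)

// Render the shadow caster into a corner of the framebuffer...
glViewport( 0, 0, 512, 512 );
// ... draw the occluder from the light's point of view here ...

// ...then copy it into the existing texture (no reallocation, unlike glCopyTexImage2D).
glBindTexture( GL_TEXTURE_2D, shadowTex );
glCopyTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512 );

// Restore the viewport and draw the visible frame.
glViewport( 0, 0, winWidth, winHeight );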

-SirKnight

PH
08-26-2002, 10:17 AM
glCopyTexSubImage2D(...) is faster than glCopyTexImage2D even if you copy to the entire texture. Currently, I copy an 800x600 screen to a 1K x 1K texture and still run at full speed ( the performance is not far from WGL_render_texture on my machine ).

SirKnight
08-26-2002, 02:38 PM
Originally posted by PH:
glCopyTexSubImage2D(...) is faster than glCopyTexImage2D even if you copy to the entire texture.

I kind of thought so, but I've actually never used it (I know, shame on me), so I didn't know first hand; I thought I remembered someone on this message board saying it was faster even for the whole texture.

So PH, since you've obviously done this stuff quite a bit, what are your thoughts on WGL_render_texture compared to glCopyTexSubImage2D?

-SirKnight

zed
08-26-2002, 07:25 PM
Speed-wise:
ATI likes ARB_render_texture better.
NVIDIA likes CopyTexSubImage better (or have they fixed render_texture yet?).

PH
08-27-2002, 03:13 AM
I think zed's right. I remember Matt saying something about ARB_render_texture not necessarily being faster than copy-to-texture on NV hardware. Poor Matt almost got flamed for saying that :). I haven't compared the performance of CTT and RTT on a GeForce3 with the newer drivers, so I can't say if it's been fixed yet ( if that's even possible ).

On my 8500, RTT is faster than CTT. Right now I just use CTT, since it saves memory, it's much easier to use, and it's still very fast. RTT and contexts can be a real pain to manage. Using shared contexts is possible, but that means making the pixel format of your pbuffer identical to your main context's ( meaning you'll need depth, stencil, double buffering, etc. in your pbuffer ).
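
(A rough sketch of the RTT path PH is describing, via WGL_ARB_pbuffer / WGL_ARB_render_texture, assuming the entry points have already been fetched with wglGetProcAddress; hdc, mainRC and shadowTex are hypothetical handles from the app:)

// One-time setup: a 512x512 RGBA pbuffer that can be bound as a texture.
int fmtAttribs[] = { WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
                     WGL_BIND_TO_TEXTURE_RGBA_ARB, GL_TRUE,
                     WGL_COLOR_BITS_ARB, 32, WGL_DEPTH_BITS_ARB, 24, 0 };
int format; UINT count;
wglChoosePixelFormatARB( hdc, fmtAttribs, NULL, 1, &format, &count );

int pbAttribs[] = { WGL_TEXTURE_FORMAT_ARB, WGL_TEXTURE_RGBA_ARB,
                    WGL_TEXTURE_TARGET_ARB, WGL_TEXTURE_2D_ARB, 0 };
HPBUFFERARB pbuffer = wglCreatePbufferARB( hdc, format, 512, 512, pbAttribs );
HDC   pbufDC = wglGetPbufferDCARB( pbuffer );
HGLRC pbufRC = wglCreateContext( pbufDC );
wglShareLists( mainRC, pbufRC );   // share texture objects with the main context

// Per frame: render into the pbuffer, then bind it as a texture.
wglMakeCurrent( pbufDC, pbufRC );
// ... render the shadow caster ...
wglMakeCurrent( hdc, mainRC );

glBindTexture( GL_TEXTURE_2D, shadowTex );
wglBindTexImageARB( pbuffer, WGL_FRONT_LEFT_ARB );
// ... draw the scene with the projected texture ...
wglReleaseTexImageARB( pbuffer, WGL_FRONT_LEFT_ARB );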

PH
08-27-2002, 04:23 AM
Some numbers for identical scenes on a Radeon 8500 ( per-pixel lighting without shadows, approx. 8000 triangles rendered twice, 800x600 screen ):

(close-up view)
RTT: 190fps
CTT: 150fps

(far-away)
RTT: 300fps
CTT: 192fps


[This message has been edited by PH (edited 08-27-2002).]

PH
08-27-2002, 04:50 AM
Some screenshots (http://www.geocities.com/SiliconValley/Pines/8553/RTT.html) of the above ( with shadows though, so it's more fill-limited, but it still shows an increase ).

blender
08-27-2002, 06:06 AM
The fps is very poor, about 3 fps at 1024x768 in 32-bit, and that's with a scene consisting of just two boxes, an animated model of about 900 polygons, and a skybox.
I cast two shadows of size 512x512 from the model onto the scene, so the model and the scene are rendered three times:
first the model is rendered to the textures (both of them), then I render the scene and the model with base textures, and finally I render the scene twice to project both shadow textures. The fps is far too poor to consider implementing this in a larger scene.

Add:
If you want to test my app, just post your e-mail address and I'll send it to you.

[This message has been edited by blender (edited 08-27-2002).]

PH
08-27-2002, 06:10 AM
What graphics card are you using? Did you try glCopyTexSubImage2D? Also, you should specify GL_RGBA8 as your internal format ( for a 32-bit framebuffer ). If you use GL_RGBA as the internal texture format, you might get a 16-bit texture, forcing the driver to do a conversion.

SirKnight
08-27-2002, 07:24 AM
Gosh, what a big difference in performance between those two render-to-texture methods. I'm going to play around with both on my GeForce4 Ti with the latest drivers and see what happens. I'm very curious now. :)

I'd like to test your app. My email is in my profile.

-SirKnight

blender
08-27-2002, 08:44 AM
I sent my app to you in a zip file; it's something like 1.8MB.
I did try glCopyTexSubImage2D, but I didn't notice any speed difference. I'm using the
GL_RGB texture format for the shadow textures, but doesn't that include 3 channels? If I use the texture for shadows, I really only need one (grayscale). Can this be done, and if so, does it speed things up?

PH
08-27-2002, 08:51 AM
The key point to getting good performance is using a texture internal format that matches the framebuffer exactly. So if your framebuffer is 32-bit, you _have_ to use GL_RGBA8 regardless of what you're using the texture for ( note the 8 in the above ). Change it to that and your performance will improve.

Using GL_RGB with a 32-bit framebuffer will in most cases not be very fast ( it might be on future hardware, but not in general ).

blender
08-27-2002, 09:18 AM
PH, do you mean this:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGP8, texture->width,texture->height,0,RGP8, GL_UNSIGNED_BYTE,texture->data);

or what? Because that turns everything white.

PH
08-27-2002, 09:26 AM
No, like this

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texture->width,texture->height,0,GL_RGBA, GL_UNSIGNED_BYTE,texture->data);



[This message has been edited by PH (edited 08-27-2002).]

blender
08-27-2002, 09:34 AM
It worked, and I think the fps increased by 10 or so at close range. I have a problem viewing in fullscreen mode (I have a topic about it, because it's a weird one), so it might be a little faster in fullscreen.

blender
08-27-2002, 09:37 AM
Damn, wrong guess. It didn't speed up :(
At 1024x768x32 the fps at close range is 45, rendering one skybox, a scene of two boxes, the model, and one shadow, with no lighting.

PH
08-27-2002, 09:51 AM
Originally posted by blender:
Damn, wrong guess. It didn't speed up :(

Let me guess...you are using some form of Radeon? On NV hardware ( GeForce1, GeForce3 ) there's usually a noticeable performance increase when going fullscreen. On a Radeon 8500, windowed mode is equally fast ( which is what I use most of the time ).

blender
08-27-2002, 10:22 AM
No, I have a GF2MX, but I can't go fullscreen in my app.

zed
08-27-2002, 07:17 PM
Found this on my HD (from a while ago). I think these are from my TNT2; one thing's certain, it's a 32-bit colour window:

glCopyTexSubImage2D: LUMINANCE -- UNSIGNED_BYTE 11.882 Mpixels/sec
glCopyTexSubImage2D: ALPHA -- UNSIGNED_BYTE 11.882 Mpixels/sec
glCopyTexSubImage2D: INTENSITY -- UNSIGNED_BYTE 11.967 Mpixels/sec
glCopyTexSubImage2D: RGB -- UNSIGNED_BYTE 69.453 Mpixels/sec
glCopyTexSubImage2D: BGR -- UNSIGNED_BYTE 128.808 Mpixels/sec
glCopyTexSubImage2D: RGBA -- UNSIGNED_BYTE 69.435 Mpixels/sec
glCopyTexSubImage2D: BGRA -- UNSIGNED_BYTE 128.808 Mpixels/sec
glCopyTexSubImage2D: RGB -- UNSIGNED_SHORT_5_6_5 69.615 Mpixels/sec
glCopyTexSubImage2D: RGBA -- UNSIGNED_SHORT_5_5_5_1 9.625 Mpixels/sec
glCopyTexSubImage2D: BGRA -- UNSIGNED_SHORT_5_5_5_1 9.625 Mpixels/sec
glCopyTexSubImage2D: RGBA -- UNSIGNED_SHORT_1_5_5_5_REV 9.625 Mpixels/sec
glCopyTexSubImage2D: BGRA -- UNSIGNED_SHORT_1_5_5_5_REV 9.625 Mpixels/sec
glCopyTexSubImage2D: RGBA -- UNSIGNED_SHORT_4_4_4_4 9.571 Mpixels/sec
glCopyTexSubImage2D: BGRA -- UNSIGNED_SHORT_4_4_4_4 9.625 Mpixels/sec
glCopyTexSubImage2D: RGBA -- UNSIGNED_SHORT_4_4_4_4_REV 9.625 Mpixels/sec
glCopyTexSubImage2D: BGRA -- UNSIGNED_SHORT_4_4_4_4_REV 9.576 Mpixels/sec
glCopyTexSubImage2D: RGBA -- UNSIGNED_INT_8_8_8_8 69.453 Mpixels/sec
glCopyTexSubImage2D: BGRA -- UNSIGNED_INT_8_8_8_8 127.583 Mpixels/sec
glCopyTexSubImage2D: RGBA -- UNSIGNED_INT_8_8_8_8_REV 127.705 Mpixels/sec
glCopyTexSubImage2D: BGRA -- UNSIGNED_INT_8_8_8_8_REV 130.056 Mpixels/sec

PH
08-28-2002, 06:09 AM
Originally posted by SirKnight:
Gosh, what a big difference in performance between those two render-to-texture methods. I'm going to play around with both on my GeForce4 Ti with the latest drivers and see what happens. I'm very curious now. :)


Good luck :). I just ran a test on my GeForce3 and RTT was dog slow ( RTT: 42fps, CTT: 200fps ). In addition, there were some strange artifacts all over the image ( I'm looking into this right now ).

All is not well with ATI's drivers either; I'm certain there's a viewport bug there ( 1Kx1K pbuffer, 800x600 viewport at (0,0) ). NVIDIA's drivers handle this case correctly.

Well, I'm sticking with CTT for now, as it's fast, easy, and works with both NV and ATI drivers.

PH
08-28-2002, 06:26 AM
Looks like NVIDIA still has some sort of viewport bug in their drivers too ( not the same type as ATI's ). A shot of the artifacts (http://www.geocities.com/SiliconValley/Pines/8553/Artifacts.html).
The artifacts are only there when rendering to a viewport that's smaller than the pbuffer.

blender
08-28-2002, 06:42 AM
Did you mention some NV_copy_to_texture, or is there an extension like that?
If there is, how can I use it?

PH
08-28-2002, 06:53 AM
OK, found the exact issue with NVIDIA's drivers. It's related to using glScissor: disabling the scissor test ( which I had set to the 800x600 region ) removed the artifacts.

blender
08-28-2002, 09:58 AM
PH, how can I use CTT? Is there some extension for it, or is it the same as glCopyTexSubImage() (copy to texture)?
Because on my GF2MX the fps is around that 42, and I'm thinking I'm using the slower option.

PH
08-28-2002, 10:39 AM
CTT is just short for copy-to-texture using glCopyTexSubImage2D, and what you are doing is the fastest approach on NV hardware.

LaBasX2
08-28-2002, 11:19 AM
From zed's list one can see that using BGR instead of RGB is much faster. But how can you set BGR as the internal format? I haven't found anything like BGR8_EXT or similar that could be used in combination with glCopyTexImage2D....

Thanks!

blender
08-28-2002, 11:25 AM
Shouldn't the internal format just be left as GL_RGB8 when running in 32-bit mode, even if I use GL_BGR as the source format?

I have an NVIDIA card, and fps is ~45 using CTT.

PH
08-28-2002, 12:06 PM
GL_EXT_bgra provides additional texture formats ( GL_BGR_EXT, GL_BGRA_EXT ). These are not internal formats but source formats ( the 'format' parameter in glTexImage2D(...) ). Yes, this should increase performance on NV hardware, but I haven't tried it.
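
(For example, a sketch assuming a 32-bit framebuffer; creating the texture with a BGRA source layout may help the driver pick an internal layout that hits the fast copy path from zed's numbers above, though, as PH says, this is untested here:)

glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
              GL_BGRA_EXT, GL_UNSIGNED_BYTE, NULL );
// Then update it per frame with glCopyTexSubImage2D as before.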


[This message has been edited by PH (edited 08-28-2002).]

PH
08-29-2002, 07:39 AM
Just wanted to let you know that the bug(s) in NVIDIA's drivers have been fixed. And you know what...RTT performance is now just as fast as CTT :). As soon as ATI fixes their bug, RTT will become preferable to CTT.
Second, using separate contexts seems to be just as fast as using a shared one ( with both NV and ATI drivers ).

LaBasX2
08-29-2002, 07:55 AM
I just downloaded the newest drivers, and my tests show that calling wglMakeCurrent is still very slow on a GF3, even if the rendering context is shared :(

BTW, the new 40.41 drivers crash if I try to access the graphics card options tab. Does anyone else have that problem?

Endymio
08-29-2002, 04:28 PM
I had the desk.cpl crash too, on my Windows 2000 box, when accessing the adapter tab in the advanced settings.

The fix was relatively simple: first I downloaded the latest service pack (SP3) and installed it. After that I re-installed the new NVIDIA drivers and the problem was fixed, for me at least.

The reason desk.cpl crashed is that it used newer shell controls than were installed with the previous service pack, and I guess they simply didn't provide a fallback for older shell versions but chose to let desk.cpl crash instead. Oh well, it is a beta driver, but still... :)

SirKnight
08-29-2002, 04:50 PM
I also noticed with these new drivers that the Cg Effects Browser crashes. :P It's good to hear they fixed the whole RTT vs. CTT thing, though. :)

-SirKnight

blender
08-30-2002, 03:57 AM
I have now tested my app on an AMD 1400MHz Thunderbird with a GeForce4 Ti, and the fps is about 200 in windowed mode at high resolution, with quite a simple scene (a ~700-polygon model rendered to a texture and projected onto simple ground).
Before implementing shadows, fps was around 380. Is this good or poor?

On my GF2MX the fps is much worse.

PH
08-30-2002, 04:03 AM
That sounds reasonable. Don't expect too much from the GeForce2 MX; it's slower than a GeForce1 DDR at just about everything. There's a reason it's cheap :).

blender
08-30-2002, 04:47 AM
But some games that use this kind of shadowing technique run well on my MX.

And the problem here is not the 200 fps, which is fine for a tiny scene like this, but whether a full-scale game done this way would hold up in terms of speed.