support for procedural textures!

probably someone else has already posted this suggestion… i’ll do it too.

DX has the ability to return a pointer to the texture pixels: you can modify them directly.

OpenGL, instead, does not, because textures there were designed to be black boxes: you initialize them, then you use them.

it would be VERY good if some new opengl functions were designed to allow low-level access to texture memory.

OpenGL caches textures in system ram and video ram, and if it were possible to change the pixels directly… hmmm…

maybe some new kind of texture object target could be designed: instead of GL_TEXTURE_2D, a GL_TEXTURE_MEMORY_2D.

this would create a texture like the ones we all know, but instead of letting OpenGL cache it, we tell OpenGL to put it into video ram (or agp ram) and leave it there.

this kind of texture would be a sort of “absolute priority” proxy scheme, which would be treated differently from other texture objects.

the location of the texture pixels in memory could then be obtained with a glGetMemoryPointer()-like function (a new function entry point), which would behave differently on each platform but always return a valid pointer.
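just to make the idea concrete, here is a rough sketch of how it might look (GL_TEXTURE_MEMORY_2D and glGetMemoryPointer don’t exist, they are just the hypothetical names from above):

GLubyte pixels[256 * 256 * 4];   /* RGBA8 source data */
GLuint tex;

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_MEMORY_2D, tex);            /* hypothetical target */
glTexImage2D(GL_TEXTURE_MEMORY_2D, 0, GL_RGBA, 256, 256,
             0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);  /* stays resident in video ram */

/* hypothetical new entry point: returns where the pixels actually live */
void *ptr = glGetMemoryPointer(GL_TEXTURE_MEMORY_2D);
/* ...write new pixel data through ptr whenever we like... */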

would it be possible? what do you think, guys?

Dolo//\ightY

> DX has the ability to return a pointer to the texture pixels: you can modify them directly.

As I understand things, when you write through such a pointer you are NOT necessarily writing directly into video memory. More likely, when you lock a texture, it gets copied from vram into system-ram, and when you unlock it it gets copied back.

Yes, there are reasons for it; I’m no expert, but I understand that some accelerators will rearrange a texture to improve memory access patterns. The result is that you can’t simply access VRAM and expect your texture to be there in a nice linear array because it won’t be.
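For reference, the lock/unlock pattern being discussed looks roughly like this in DirectDraw (a sketch from memory, error checking omitted; I’m using the C helper macros from ddraw.h, and lpTexSurf stands for whatever IDirectDrawSurface7 pointer you happen to have):

DDSURFACEDESC2 ddsd;
ZeroMemory(&ddsd, sizeof(ddsd));
ddsd.dwSize = sizeof(ddsd);

/* Lock may well copy the texture into a sysmem buffer behind your back */
IDirectDrawSurface7_Lock(lpTexSurf, NULL, &ddsd, DDLOCK_WAIT, NULL);

/* ddsd.lpSurface points at the pixels; note that ddsd.lPitch need not
   equal width * bytes-per-pixel, so step through the rows using lPitch */
BYTE *row = (BYTE *)ddsd.lpSurface;
/* ...modify pixels row by row... */

IDirectDrawSurface7_Unlock(lpTexSurf, NULL);  /* driver may copy back / re-swizzle here */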

DMY is right!!

The DX texture management works fine for procedural textures, but it’s terribly slow under GL because of the indirection (i.e. the use of a temporary buffer).
Thus, please give the texture ptr back to the user!!!
I’d also suggest giving more direct access to the framebuffer and zbuffer ptrs!!
hey! what for??? that’s my problem, ok!!
If i need to do weird software things i can only do that with DDraw, and i hate that!! :)

mike, you’re right: things work this way now, and i hope they keep working this way.

i think having a generalized, high-level interface to textures is much more important than a complex low-level one.

only the opengl implementation knows how to lay things out well for itself.
also, an engine normally uses many “static” textures (so to speak) and very few procedural textures, if any at all.

but what would be the technical reasons not to have a specialized texture object which overrides the current behaviour of standard texture objects?

is it impossible to obtain? imo i don’t think so, but i’m not an engineer, so…

i’d really like to know whether it would be possible or impossible, and why.

the revenge of UBB code
Dolo[b]//[/b]ightY

that’s exactly right.

pointers to texture mem would hurt opengl somewhat, in the same way that they hurt C. Now, before everyone gets TOO upset (my fave language is C, so don’t get me wrong… i’m not a language bigot yet), pointers really are quite bad. A brief explanation:

suppose you had the func

void goat(int *foobar)
{
    (*foobar)++;  /* parenthesised: *foobar++ would advance the pointer, not the value */
}

and then you did:

int turtle = 1;
goat(&turtle);
if (turtle == 1) {
    /* something */
}

ok, that’s a fairly poxy example, but the point is that the C compiler can’t optimise it as well as if it knew what the func body was doing with the parameter. I mean, if the language KNEW that the parameter to goat was NEVER going to change, it wouldn’t need to perform the test at all, because turtle would still be 1 and the condition would always eval to true. But because the compiler doesn’t know what goat does through foobar, it has to perform the if regardless. Now, you could argue it could do flow analysis and work these things out at compile time. Yes, it might, but not if the func goat is embedded in some library, or if you have pointers to pointers or otherwise alias the parameter.
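to see the contrast, here’s the same thing with pass-by-value, where the compiler genuinely knows the caller’s variable can’t change (goat2 is just a made-up variant for illustration):

void goat2(int foobar)   /* by value: only a local copy is modified */
{
    foobar++;
}

int turtle = 1;
goat2(turtle);
if (turtle == 1) {       /* always true: the compiler may drop the test entirely */
    /* something */
}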

ultimately, pointers restrict the designer’s ability to optimise code. the same argument, I claim, extends to opengl. If the programmer is given unrestricted access to texture memory, then the opengl drivers can’t make assumptions about its state, and thus lose chances to optimise texture copying.

An alternative, perhaps, might be a “volatile” texture. In that case opengl would be warned that the texture might change at any time, and it couldn’t assume the contents are valid. perhaps the programmer could give hints when something has changed? <shrugs>

my 2c worth :wink:

cheers
John

volatile… yes. the new texture object could be called GL_VOLATILE_TEXTURE_[1|2|3]D
that’s a much better name!

but i think the goal should be to make volatile textures directly accessible: only that way could optimal performance be obtained.

Dolo//\ightY

[This message has been edited by dmy (edited 05-04-2000).]

I like the general idea, but a couple of caveats:

  1. Direct texture access probably won’t be “optimal”. It’s a tradeoff; forcing the texture to stay cached in gfxmem in linear format reduces the driver’s freedom in texture management and will probably hurt performance. For this reason, rather than declaring a texture to be VOLATILE when you create it, it might be better to provide functions to lock and unlock a normal texture (see the sketch after this list). While locked, the texture is volatile and can be accessed directly, at the cost of performance.

  2. Under Windows, certain situations (GPF, BSOD etc) can invalidate the contents of gfx memory without warning. (This is why most Win32 drivers keep a copy of texture data in sysmem.) Giving the application a pointer to the gfxmem is therefore rather dangerous. Sure, it’s a corner case, but I imagine it’s the sort of thing that gives driver writers nightmares.
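A sketch of what the lock/unlock idea in point 1 might look like; glLockTexture and glUnlockTexture are made-up names, not real entry points:

glBindTexture(GL_TEXTURE_2D, tex);

/* hypothetical: driver pins the texture and hands back a linear view */
void *pixels = glLockTexture(GL_TEXTURE_2D);

/* ...write procedural data; while locked, GL makes no assumptions
   about the texture's contents... */

glUnlockTexture(GL_TEXTURE_2D);   /* hypothetical: driver may now re-swizzle */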

I do think direct-access textures would be useful, but before that I’d like to see the existing pixel ops done properly. We’re stuck in a vicious circle at the moment; programmers don’t use pixel transfer ops because driver support is so dire, and driver writers don’t optimize the ops because nobody uses them.

Render-to-texture would also be very nice; you could do a lot of interesting procedural stuff in hardware, particularly with the 1.2 imaging extensions, and it avoids some of the messier issues with direct access.

Originally posted by MikeC:
Render-to-texture would also be very nice; you could do a lot of interesting procedural stuff in hardware, particularly with the 1.2 imaging extensions, and it avoids some of the messier issues with direct access.

Definitely.
Are there any fundamental reasons that glCopyTexSubImage2D can’t be nearly as fast as true render-to-texture for smallish areas (say 128*128)? Would any driver guys care to enlighten us about what’s involved?
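Just so we’re talking about the same thing, the pattern I mean is plain GL 1.1; drawProceduralStuff is a stand-in for whatever you render, and tex is an existing texture:

/* render the procedural image into a corner of the back buffer... */
drawProceduralStuff();

/* ...then copy that framebuffer region into the bound texture */
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D,
                    0,          /* mip level */
                    0, 0,       /* offset within the texture */
                    0, 0,       /* lower-left corner of the framebuffer region */
                    128, 128);  /* size of the region */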

Mike F

[This message has been edited by Mike F (edited 05-05-2000).]

I’m no programmer or engineer, so I know things are complicated and difficult… But as a simple-minded end user, I see that DirectX is starting to surpass OpenGL in terms of new features. I hope OpenGL can turn things around…!

Wait, I read the messages in this thread and something didn’t seem right. Finally I think I understand what you’re talking about. You are talking about a CPU program writing procedural textures, correct? When I was reading the messages, I thought you were asking for support for the GPU writing procedural textures. Well, as far as I understand it, the GPU can already write to textures, and can therefore create procedural textures (well, create the contents of the textures, not create/allocate the textures in the first place).

My OpenGL program has procedural textures in CPU memory, and there’s no problem as far as I can see. It’s just a big pile of RAM that my procedural-texture code writes into. Once the procedural texture contents have been created (or modified), my code updates the copy in GPU memory.
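In code, the pattern is just this (sizes and format assumed for the example; tex is an existing 256x256 RGBA8 texture, and make_procedural_texture is a stand-in for your generator):

/* CPU side: a plain buffer the procedural code writes into */
static unsigned char pixels[256 * 256 * 4];

make_procedural_texture(pixels);

/* upload the new contents into the existing GL texture */
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0,        /* level 0 */
                0, 0, 256, 256,          /* whole texture */
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);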

So I’m not quite clear what you want to happen. Surely you don’t want your writes to CPU memory to immediately be transferred to GPU memory, right? That’s just too insanely inefficient.

I’m missing something. What am I missing?

You are missing more than 12 years and a minimum sense of logic. Sorry for being rude, but reviving such an old post and asking questions of people who haven’t been active for a decade is nonsense. Please don’t revive threads older than a few months!

This is a 12 year old thread. Closing it. If you have a new question, please start a new thread. Thank you.