ATI_render_texture_rectangle

Does anybody know how to use the WGL_ATI_render_texture_rectangle extension that is present in Catalyst 4.12?

I suspect this extension would do wonders for my media player.

From the name, I’d guess it does the same as NV’s extension and allows you to render to a texture which isn’t a power of two.

Interestingly, glView isn’t showing that extension on my 4.12 beta drivers, yet a site I just found mentions it being in the drivers…

Originally posted by bobvodka:
Interestingly, glView isn’t showing that extension on my 4.12 beta drivers, yet a site I just found mentions it being in the drivers…
WGL extensions don’t show up in the GL extension string, but in the – surprise! – WGL extension string. glView doesn’t work particularly well. I always get the feeling that the author(s) don’t know a heck of a lot about OpenGL. Anyway, it wouldn’t surprise me if this was yet another issue with the program.

Use Tom's GLinfo if you want something that works well. And don’t forget to submit your results.

Originally posted by bobvodka:
From the name, I’d guess it does the same as NV’s extension and allows you to render to a texture which isn’t a power of two.
Yes, definitely, but I think I recall reading in another thread that it didn’t work the same, and since we don’t have a public specification I was wondering if anybody has tried it or has any additional information.

It seems kind of silly to add another extension if the semantics are the same as the NV one, but I’ll try that tomorrow unless somebody has more information.

I already tried using nvidia’s extension tokens; it doesn’t work. It’s probably just a testing extension anyway; just give them a little time.

A while ago I posted a workaround for this whole render-to-texture-rectangle mess.

Basically, what I do is render to a target that is the next power of two greater than the size I want to render to, then use glViewport to render only to a small portion of that buffer. For instance, a 640x480 target is really 1024x512.

Sure, it wastes some memory, but I am pretty sure that these NV/ATI extensions do the exact same thing under the hood. (They do require that you move your texture coords to the [0, width]x[0, height] range instead of [0, 1].)
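
A minimal sketch of that padded-target trick (helper names are made up, target allocation and error handling omitted):

// Sketch of the padded render-target workaround described above;
// NextPow2 and SetupPaddedViewport are illustrative names only.
#include <GL/gl.h>

static int NextPow2(int v)
{
    int p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

// For a desired 640x480 target, allocate a 1024x512 texture...
void SetupPaddedViewport(int width, int height, int* texW, int* texH)
{
    *texW = NextPow2(width);
    *texH = NextPow2(height);
    // ...but restrict rendering to the region actually used.
    glViewport(0, 0, width, height);
}

// When sampling the result, stay inside the used sub-rectangle:
// s in [0, width/texW], t in [0, height/texH].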

Originally posted by James Dolan:
Basically, what I do is render to a target that is the next power of two greater than the size I want to render to, then use glViewport to render only to a small portion of that buffer. For instance, a 640x480 target is really 1024x512.

Sure, it wastes some memory, but I am pretty sure that these NV/ATI extensions do the exact same thing under the hood. (They do require that you move your texture coords to the [0, width]x[0, height] range instead of [0, 1].)
Unfortunately it doesn’t just waste “some” memory when a 1920x1080 frame gets expanded to a 2048x2048 texture.

I assure you that is NOT what they are doing under the hood for rect textures.

EDIT: Thanks NitroGL.

Daniel, it seems like you are one of the few (like me) using OpenGL for video?

Originally posted by rexguo:
Daniel, it seems like you are one of the few (like me) using OpenGL for video?

Yes, you can find a version of my player at mediocre.wesslen.org, though it’s pretty old by now. I’m in the middle of a major rewrite of the UI back-end and am holding off updates until it’s ready for testing. (Also, spare time has been scarce lately.)

The version available on that page has YUY2 decoding implemented in GLSL; my development version has two different YUY2 decoders (a memory vs. fragment-processing tradeoff) and a YV12 decoder.
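
For reference, a rough sketch of what a GLSL YUY2 decoder can look like (BT.601 coefficients; this is not the shader from the player, and the uniform names are invented). It assumes the frame was uploaded as an RGBA texture of width/2 texels, each packing (Y0, U, Y1, V) for a pair of horizontal pixels:

// Rough YUY2 fragment decoder sketch (BT.601), not the player's shader.
const char* yuy2FragmentSource =
    "uniform sampler2D yuy2Tex;  // width/2 x height RGBA texture\n"
    "uniform float frameWidth;   // decoded frame width in pixels\n"
    "void main()\n"
    "{\n"
    "    vec4 texel = texture2D(yuy2Tex, gl_TexCoord[0].st);\n"
    "    // Even pixels take Y0 (.r), odd pixels Y1 (.b); U/V are shared.\n"
    "    float px = gl_TexCoord[0].s * frameWidth;\n"
    "    float y = (mod(floor(px), 2.0) < 1.0) ? texel.r : texel.b;\n"
    "    float u = texel.g - 0.5;\n"
    "    float v = texel.a - 0.5;\n"
    "    y = 1.164 * (y - 0.0625);\n"
    "    gl_FragColor = vec4(y + 1.596 * v,\n"
    "                        y - 0.813 * v - 0.391 * u,\n"
    "                        y + 2.018 * u,\n"
    "                        1.0);\n"
    "}\n";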

Daniel,

Coolio, I’m checking it out right now.

I assume you’re grabbing the frames from DirectShow using stuff like ISampleGrabber? Currently I’m stuck figuring out how to play video with an alpha channel in DirectShow so I can do real-time compositing…

.rex

Originally posted by rexguo:
I assume you’re grabbing the frames from DirectShow using stuff like ISampleGrabber? Currently I’m stuck figuring out how to play video with an alpha channel in DirectShow so I can do real-time compositing…

No, ISampleGrabber has some serious limitations when it comes to video, so I use a custom renderer instead.

I’ve never encountered a video with alpha, but theoretically it should be a piece of cake: just make sure you use one of the subtypes with alpha; ARGB32 is trivial.
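
For what it’s worth, with a custom renderer built on the DirectShow base classes the alpha subtype just has to be accepted in CheckMediaType. A sketch (error handling trimmed, class name borrowed from the discussion above):

// Accepting 32-bit RGB with alpha in a renderer derived from
// CBaseVideoRenderer (DirectShow base classes).
#include <streams.h>

HRESULT CTextureRenderer::CheckMediaType(const CMediaType* pmt)
{
    if (*pmt->Type() != MEDIATYPE_Video ||
        *pmt->FormatType() != FORMAT_VideoInfo)
        return E_FAIL;

    // Insist on a subtype that carries alpha.
    if (*pmt->Subtype() != MEDIASUBTYPE_ARGB32)
        return E_FAIL;

    return S_OK;   // 32bpp BGRA samples, easy to hand to glTexSubImage2D
}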

I checked out your player and the code. It performed very well! I could play WMV-HD files on it with no problem.

So I poked around the code and managed to compile it. One thing I noticed is that you’re using a separate thread to update the OpenGL texture after receiving a sample in your TextureRenderer. I don’t quite understand the reason behind it. I know about OpenGL’s contexts and their threading issues, so I was wondering if updating the texture (storing the sample’s pointer) in the main render loop is also feasible?

Originally posted by rexguo:
I checked out your player and the code. It performed very well! I could play WMV-HD files on it with no problem.

Excellent.

Originally posted by rexguo:
So I poked around the code and managed to compile it. One thing I noticed is that you’re using a separate thread to update the OpenGL texture after receiving a sample in your TextureRenderer. I don’t quite understand the reason behind it. I know about OpenGL’s contexts and their threading issues, so I was wondering if updating the texture (storing the sample’s pointer) in the main render loop is also feasible?

The main problem with contexts is (in my mind) that switching between them is painfully slow. By handling decoding in a separate thread I can keep the context current all the time - no switching.

The frame texture is a LockableTexture (which contains a mutex) and is synchronized by the application. The decoder thread flushes its context before releasing the mutex so if the driver is well-behaved there should be no problems (and it indeed works as it should with ATI and nVidia drivers).
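
As a rough illustration of that pattern (LockableTexture here is a made-up stand-in for the real class; initialization and error handling omitted):

// Decoder-thread upload guarded by a mutex, flushed before release.
#include <windows.h>
#include <GL/gl.h>

struct LockableTexture
{
    CRITICAL_SECTION lock;   // synchronizes decoder and main thread
    GLuint           id;
};

// Runs on the decoder thread; its GL context stays current throughout.
void UploadFrame(LockableTexture& tex, const void* pixels, int w, int h)
{
    EnterCriticalSection(&tex.lock);
    glBindTexture(GL_TEXTURE_2D, tex.id);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels);
    glFlush();   // flush before unlocking so the other context sees the data
    LeaveCriticalSection(&tex.lock);
}

// The main thread takes tex.lock around any draw that samples tex.id.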

To upload in the main loop one would have to either (a) copy the sample or (b) handle synchronization so that DoRenderSample does not return until the main loop is finished with the sample. Just passing the sample pointer could cause problems since upstream filters may decide to reuse the sample while it is in use.

Thinking about it, (b) is really the correct thing to do, since the timing of DoRenderSample calls is determined by the time it takes for the frame to be displayed, not just decoded. (And I have actually been having some problems with skipping immediately after seeks, so I’ll try to implement this.)
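
A sketch of what (b) could look like with a pair of Win32 events (m_pPending, m_hFrameReady and m_hFrameDone are invented members, not from the player’s source):

// Hand the sample pointer to the main loop and block until drawn.
HRESULT CTextureRenderer::DoRenderSample(IMediaSample* pSample)
{
    m_pPending = pSample;        // no copy; upstream still owns the buffer
    SetEvent(m_hFrameReady);     // wake the main render loop

    // Block until the main loop signals completion, so upstream filters
    // cannot reuse the sample while it is still being read. As a side
    // effect, graph timing now reflects display time, not decode time.
    WaitForSingleObject(m_hFrameDone, INFINITE);
    m_pPending = NULL;
    return S_OK;
}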

Anyway… I think I prefer handling the decoding in a separate thread, since the main thread can go about its business in the meantime, but it would certainly be possible to do everything in the main thread.

I guess this thread went off-topic a while ago so perhaps we should continue by e-mail if there is anything else?

Hi Daniel,

In my player I’m using the following code path:

In my texture renderer class (image_buffer is a piece of system memory):

CTextureRenderer::DoRenderSample(...)
{
    // Get sample pointer; the sample is in RGB.
    image_buffer_lock.Lock();
    if (sample is interlaced)
    {
        copy even lines to upper half of image_buffer
        copy odd lines to lower half of image_buffer
    }
    else
    {
        copy image to image_buffer
    }
    image_buffer_lock.Unlock();
    frame_event.Set();
}

In the main rendering thread:

if ((interlaced && ((currentframe & 1)==1))
Originally posted by yooyo:
In my player Im using following codepath:
[snipped]

I’m not sure what you want to say by that. If it is that you are copying the sample and doing the update in the main thread, then yes, that is one of the methods I mentioned.

Sorry about my post… the forum software screwed it up… but you got the point!

yooyo