View Full Version : Any word on ANY render to texture spec?!



KRONOS
09-01-2004, 04:25 AM
I am trying to do some stuff with shadow maps and I'm tired of the pbuffer crap. :mad:

Where the heck are super buffers and EXT_render_target?! We haven't heard anything about EXT_render_target in almost five months, and then I read that super buffers may be left out because there is almost no hardware support.

EXT_render_target started well, with a lot of discussion, but then nothing more came of it. Is it dead? Is it ready? Is it out on vacation? What?

I am really tired of waiting! :mad:
We don't even have any news or info on either extension, so I could be waiting 1, 2, 3 or 9 more months (nobody knows) for a simple thing like this?

If there is no solution by the end of the year, tell us (so I can start porting to D3D).

To ATI and NVIDIA people out here: any news?!

(and sorry for creating a thread mostly to complain, but anyone can join in... :D )

davepermen
09-01-2004, 05:01 AM
joined...

ZbuffeR
09-01-2004, 05:15 AM
Me too !

Roderic (Ingenu)
09-01-2004, 06:06 AM
I'm in, not just for news but for at the very least an EXT; I'd prefer an ARB though.

nystep
09-01-2004, 06:07 AM
Same here, I'm definitely waiting for this extension, some news would be good to hear ;)

yooyo
09-01-2004, 06:23 AM
I want it too. Today's engines use an image-compositing rendering approach, and good RTT is necessary for that. PBuffers are good but not enough. For example, I need to share a depth buffer between the main backbuffer and some pbuffer. MRT can help, but MRT is just a set of backbuffers (not textures!).

@KRONOS

I hear some rumors about RTT in D3D: RTT in D3D is slower than pbuffers! So stick to GL. :)

yooyo

idr
09-01-2004, 06:54 AM
We're working on it. Just like with a Carmack game, it will be ready when it's ready. :) If you think about it, we're making a pretty significant change to the way OpenGL works. It would really be a shame to botch it up, don't you think?

As was said at the OpenGL BoF at SIGGRAPH, pretty much all of the major issues with the current spec have been worked out (finally!), and we're polishing off the last details. There are still a few things left, but I can see a light at the end of the tunnel. I'm pretty sure it's not a train, too.

KRONOS
09-01-2004, 08:48 AM
Originally posted by idr:
As was said at the OpenGL BoF at SIGGRAPH, pretty much all of the major issues with the current spec have been worked out (finally!), and we're polishing off the last details. There are still a few things left, but I can see a light at the end of the tunnel. I'm pretty sure it's not a train, too.

Working on what? Is it the super buffers after all? What about EXT_render_target?! What can we expect?

Can we see a spec? Please... :D

yooyo: are you sure?! :D

Korval
09-01-2004, 09:08 AM
They basically said that ARB_superbuffers is dead and that they're going with render_target (which will now be ARB_render_target).


Can we see a spec? Please...

I imagine it isn't too different from what we saw in the preliminary EXT_render_target spec, but obviously a bit more mature. My real question is this.

During the EXT_RT discussions, a lot was made of making this extension very restrictive (buffers needing to be the same size, etc.) and building on top of it to allow for hardware that can loosen the restrictions. A lot was also made of having another extension, based on EXT_RT, to allow for render-to-vertex-array-style functionality. Is either of those fronts progressing, and will they be available when ARB_RT comes out?

knackered
09-01-2004, 09:26 AM
Well, seeing as you can now sample a texture in a vertex shader, a render-to-vertex-array extension is kind of redundant, don't you think?

zeckensack
09-01-2004, 09:33 AM
Seconded. I'd prefer an {EXT|ARB}_render_target. Superbuffers would be amazingly cool, but they're not as urgent, and I understand that they're a very complex thing to tackle - complete overkill for the real issue at hand (which is, of course, elegant R2T).

Originally posted by yooyo:
I hear some rumors about RTT in D3D: RTT in D3D is slower than pbuffers! So stick to GL. :)

I doubt it.
Whatever you saw may have been the usual batching overhead disadvantage of DirectX Graphics, but I very much doubt that R2T itself is inefficient compared to the song-and-dance that OpenGL requires.

PBuffers for R2T are a complete mess. Because they live in a different rendering context, you get all the context management overhead, namely context switching and object sharing.

Now, this obviously works, but it's neither easy to do nor efficient. That's a lot of driver complexity, and it also presents developers with plenty of opportunities to shoot themselves in the foot, especially the more inexperienced folk.

PBuffers work for R2T at all because they piggy-back on infrastructure that was designed to support multi-threaded, multi-view modeler/editor-type applications. This infrastructure was certainly never meant to be used in this way.

OpenGL is quite easy to get started with in a number of ways. But that just doesn't apply to developers who want R2T functionality.
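
For anyone who hasn't suffered through it, the song-and-dance looks roughly like this (a from-memory sketch; attribute lists abbreviated, error handling omitted, hMainDC/hMainRC/tex assumed from your setup, and all the wgl*ARB entry points must first be fetched with wglGetProcAddress):

// Sketch of render-to-texture via WGL pbuffers. Needs <windows.h>,
// <GL/gl.h> and "wglext.h", plus the fetched wgl*ARB function pointers.
int fmtAttribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB,      GL_TRUE,
    WGL_BIND_TO_TEXTURE_RGBA_ARB, GL_TRUE,
    0
};
int  format;
UINT count;
wglChoosePixelFormatARB(hMainDC, fmtAttribs, NULL, 1, &format, &count);

int pbAttribs[] = {
    WGL_TEXTURE_FORMAT_ARB, WGL_TEXTURE_RGBA_ARB,
    WGL_TEXTURE_TARGET_ARB, WGL_TEXTURE_2D_ARB,
    0
};
HPBUFFERARB pbuffer = wglCreatePbufferARB(hMainDC, format, 256, 256, pbAttribs);
HDC   pbDC = wglGetPbufferDCARB(pbuffer);
HGLRC pbRC = wglCreateContext(pbDC);
wglShareLists(hMainRC, pbRC);       // share texture objects across contexts

wglMakeCurrent(pbDC, pbRC);         // context switch number one
// ... render the texture contents here ...
wglMakeCurrent(hMainDC, hMainRC);   // and switch back

glBindTexture(GL_TEXTURE_2D, tex);
wglBindTexImageARB(pbuffer, WGL_FRONT_LEFT_ARB);
// ... draw using tex ...
wglReleaseTexImageARB(pbuffer, WGL_FRONT_LEFT_ARB);

And that's the happy path, before you handle pbuffer loss on display mode changes.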

zeckensack
09-01-2004, 09:42 AM
Originally posted by knackered:
Well, seeing as you can now sample a texture in a vertex shader, a render-to-vertex-array extension is kind of redundant, don't you think?

Not quite.
a) Pulling a vertex attribute is an order of magnitude faster than sampling a vertex texture.

b) Vertex textures are not (yet) widely supported, and aren't cross-vendor portable. Rendering to floating-point buffers is much better in these respects.

A lousy and near-obsolete Radeon 9600SE could do render-to-vertex-array, if you could just rebind a render target as a VBO ...

Corrail
09-01-2004, 09:53 AM
Here too! I'm waiting all day for news about this topic... ;)

yooyo
09-01-2004, 12:54 PM
Originally posted by zeckensack:
Originally posted by yooyo:
I hear some rumors about RTT in D3D: RTT in D3D is slower than pbuffers! So stick to GL. :)

I doubt it.
Whatever you saw may have been the usual batching overhead disadvantage of DirectX Graphics, but I very much doubt that R2T itself is inefficient compared to the song-and-dance that OpenGL requires.
I have to explain my post a bit better...
I read somewhere (I'm not sure where) that setting a render target in D3D is slower than switching contexts between a pbuffer and the main RC.

yooyo

lgrosshennig
09-01-2004, 02:34 PM
@Yooyo

Can you elaborate on this topic in a bit more detail? Maybe even find the post you are referring to? Are you sure that it's the general case and not just a vendor limitation?

@KRONOS

Sign me up for the list too.

Jan
09-01-2004, 03:21 PM
There's absolutely nothing I am waiting for more impatiently.

mogumbo
09-01-2004, 03:41 PM
Like everyone else, I'd like some news too.

At the Siggraph OpenGL BOF I remember hearing about 2 things: EXT_framebuffer_object, which they said is intended to simplify RTT, and uber buffers. Some news or comments on either of these would be nice....

ffish
09-01-2004, 10:44 PM
Add me to the list. I'm desperate for this feature too.

MattS
09-02-2004, 03:36 AM
According to John Carmack (in his QuakeCon video), ATI and NVIDIA are arguing over "stupid, petty little things" when it comes to render to texture, which I assume refers to the EXT_render_target spec.

He also describes the pbuffer API as "God awful" and using it was the closest he came to dropping OpenGL and switching to D3D.

Matt

LarsMiddendorf
09-02-2004, 06:37 AM
PBuffers are ugly. EXT_render_target is a very elegant extension and I hope it is implemented soon.

AdrianD
09-02-2004, 07:31 AM
I am also waiting for this extension; it's at the top of my most-wanted list.
Right now I am simulating the behaviour of this extension using pbuffers/render-to-texture (it's hidden inside a pixel-buffer class, because I couldn't wait any longer),
and I can't wait for the moment when I can get rid of those ugly context switches...
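
In case it helps anyone, the interface of that class is roughly this (a stripped-down sketch with my own names; pixel format selection, error handling and pbuffer-loss recovery left out):

// Stripped-down sketch of a pbuffer wrapper that fakes a
// render-target API; names are mine, internals omitted.
class PixelBuffer {
public:
    PixelBuffer(int width, int height, bool withDepth); // creates pbuffer + shared context
    ~PixelBuffer();

    void beginRender();    // wglMakeCurrent to the pbuffer's context
    void endRender();      // switch back to the main context
    void bindAsTexture();  // wglBindTexImageARB on the color buffer
    void releaseTexture(); // wglReleaseTexImageARB when done sampling
};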

Christian Schüler
09-02-2004, 08:51 AM
Hey, sign me to that list please!

That said, I'm gravitating more towards *_render_target than super buffers, because the super buffer extension looks like it makes things more complicated than they need to be.

KRONOS
09-02-2004, 09:54 AM
Someone should tell Carmack to register on the forums so he could sign up too! :D

And to imagine I had a similar idea in late 2003! http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=7;t=000412
:p

MZ
09-02-2004, 12:47 PM
Originally posted by KRONOS:
And to imagine I had a similar idea in late 2003!

And to imagine the original GL 2.0 had an idea for a render-to-texture solution similar to the one in D3D, in late 2001! ;)

In case some of you have forgotten: GL 2.0 introduced Image Objects. This type of object played exactly the same role as Surfaces do in Direct3D. An Image/Surface is a component of a framebuffer (color buffer, depth buffer) or of a texture (mipmap level, cube face). Therefore it is more logical to have render-to-image/surface rather than render-to-texture, and D3D's SetRenderTarget() works exactly this way.

The related thing is that even with EXT/ARB_render_target there will still be no civilised way in GL to simply copy a rectangular region of pixel data from one image/surface to another - a sort of glCopyTexSubImage generalized to all kinds of sources/destinations. Compare the following:

- D3D way:
IDirect3DDevice9::UpdateSurface() (Surface->Surface copy, exactly what you'd expect)

- original GL 2.0 way:
glCopyImageData1D/2D/3D() (Image->Image copy - as above, but generalized to all dimensionalities)

- current GL way (depending on what's the source and the destination):
. (framebuffer -> texture) glCopyTexImage
. (framebuffer -> framebuffer) glReadPixels + glDrawPixels
. (texture -> texture) glGetTexImage + glTexSubImage
. (texture -> framebuffer) Loads-of-state-changes + render-quad + Loads-of-state-changes again

- future GL way: (almost identical to above, actually)
. glRead/DrawPixels or glGetTexImage/glTexSubImage, but with a PBO as intermediate memory
. Loads-of-state-changes + render-quad, but with use of EXT_render_target

IMHO the recent, post-GL2.0's-death solutions are more like sweeping problems under the carpet than solving them for the long term.
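
To make the asymmetry concrete, compare the friendliest and the ugliest of those paths (a sketch for an RGBA8 2D texture; srcTex/dstTex, the copy rectangle and the level sizes are assumed; note glGetTexImage has no "Sub" variant, so the whole level has to come back):

// (1) framebuffer -> texture: the one reasonably civilised case.
glBindTexture(GL_TEXTURE_2D, dstTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, x, y, w, h);

// (2) texture -> texture: no direct path; round-trip through client memory.
std::vector<GLubyte> scratch(levelW * levelH * 4);
glBindTexture(GL_TEXTURE_2D, srcTex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, &scratch[0]);
glBindTexture(GL_TEXTURE_2D, dstTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, levelW, levelH,
                GL_RGBA, GL_UNSIGNED_BYTE, &scratch[0]);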

</beating dead horse>

Jan
09-02-2004, 01:51 PM
MZ is right. At the moment, cross-platform capability is the last real reason to use OpenGL.

It's sad.

V-man
09-02-2004, 02:15 PM
Originally posted by MZ:
- current GL way (depending on what's the source and the destination):
. (framebuffer -> texture) glCopyTexImage
. (framebuffer -> framebuffer) glReadPixels + glDrawPixels
. (texture -> texture) glGetTexImage + glTexSubImage
. (texture -> framebuffer) Loads-of-state-changes + render-quad + Loads-of-state-changes again

Correction:

. (framebuffer -> framebuffer) glCopyPixels

And with D3D there are limitations, if I remember the documents right.

Anyway, what I really wanted way back then was a nice, clean render-to-texture. It never appeared, so I ended up building my code around pbuffers. I even have a function that works like ChoosePixelFormat, trying to make a close match.

D3D has the same issue. It does not have an easy-to-use function like ChoosePixelFormat. <secret>although it has D3DX</secret>
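
My function works something like this, in spirit (a sketch; the real thing scores far more attributes than these two, and hDC plus the fetched wgl*ARB function pointers are assumed):

// Ask wglChoosePixelFormatARB for candidates, then pick the
// closest match by hand, ChoosePixelFormat-style.
int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,      32,
    WGL_DEPTH_BITS_ARB,      24,
    0
};
int  formats[64];
UINT numFormats = 0;
wglChoosePixelFormatARB(hDC, attribs, NULL, 64, formats, &numFormats);

// No exact match is guaranteed, so score the candidates.
int best = -1, bestScore = INT_MAX;
for (UINT i = 0; i < numFormats; ++i) {
    const int query[2] = { WGL_COLOR_BITS_ARB, WGL_DEPTH_BITS_ARB };
    int value[2];
    wglGetPixelFormatAttribivARB(hDC, formats[i], 0, 2, query, value);
    int score = abs(value[0] - 32) + abs(value[1] - 24);
    if (score < bestScore) { bestScore = score; best = formats[i]; }
}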

speedy
09-02-2004, 03:09 PM
C'mon IHV guys, make EXT_render_target finally happen!! :D

Korval
09-02-2004, 05:00 PM
ATI and NVIDIA are arguing over "stupid, petty little things" when it comes to render to texture, which I assume refers to the EXT_render_target spec.

Oh, this is good to hear :mad: :mad: :mad:

See, this is why nVidia/Apple/3DLabs should have just released it months ago as an EXT extension, rather than going the "play nice" route of the ARB machine, thus delaying vital functionality for a good four months or so. If we see a finished ARB_rt before the year is out, I'd be surprised at this point.

Did I mention :mad: :mad: :mad: !

More ARB members should just do an end-run around the ARB by forming their own workgroups and dropping EXT extensions as they see fit. With the ATI/NVIDIA market share divided as it is, it is not unreasonable for the side with the better EXT extensions to force the other to implement them.


Therefore it is more logical to have render-to-image/surface rather than render-to-texture, and D3D's SetRenderTarget() works exactly this way.

EXT_render_target does something similar. You build framebuffer objects by binding textures to various "targets" on the framebuffer (color, depth/stencil, etc.). It really is a nice extension.
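
From the preliminary spec, usage should look something along these lines (my guess at the eventual entry points, borrowing the framebuffer_object naming mentioned at the BoF; the shipped names may well differ, and colorTex/depthTex are ordinary texture objects):

// Guessed usage sketch; do not take the names literally.
GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depthTex, 0);
// ... render here: same context, no pbuffer, no wglMakeCurrent ...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); // back to the window
glBindTexture(GL_TEXTURE_2D, colorTex);      // the result is just a texture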


The related thing is that even with EXT/ARB_render_target there will still be no civilised way in GL to simply copy a rectangular region of pixel data from one image/surface to another - a sort of glCopyTexSubImage generalized to all kinds of sources/destinations.

Really, once we have RTT, I couldn't care less about merely copying texture data from one texture to another. I've never had the need to (non-render-target textures tend to be quite static), and I don't see it as a great API failing that this is non-trivial.

The other thing is that it is non-trivial to copy between texture formats. Unless the actual hardware can do it for you (highly doubtful, or it will stall rendering), this will force the system to download both textures, copy, and re-upload the destination. It's better not to have such brutally expensive operations (which can even stall if the destination happens to be in use) disguised behind simple, innocuous-seeming functions like glCopyImageData1D/2D/3D().

idr
09-03-2004, 08:25 AM
It's really interesting to listen to people. On the one hand you have people screaming for companies to do an "end run" around the ARB process and make their own extensions. On the other hand you have people complaining about there being so many different, vendor-specific extensions that do the same thing. Sigh...

After being involved for two years, I don't think it's fair to say "ATI and NVIDIA are arguing over 'stupid, petty little things'". NV, and other companies, had pretty significant issues with the original "uberbuffers" spec. Their response was the EXT_render_target spec. Of course, ATI, and others, had significant issues with that spec. Neither was a case of "stupid, petty little things." What we have now is a child of both specs. IMO, the result is much better for OpenGL.

There was always the possibility of getting things completed much, much faster by having the two camps split and make two EXT / vendor extensions. The only reason that didn't happen is that everyone agreed it would have been bad. For something so significant, how many ISVs would have supported two paths? Probably almost none.

Part of the problem was that a couple of the companies didn't have the extra man-power to devote to the WG early on, so the WG was allowed to go down a false path for a very, very long time. When all the right people finally did get to the same table there was...uh...some "intense" debate. :) This really dragged the process out, but I don't think it has gone on any longer than it needed to. We've learned two very important things from the process:

1. If you're going to bring a spec to the table that makes significant changes to OpenGL, either bring it to the ARB early (i.e., before you have a nearly complete spec written) or bring it to the ARB after you ship it.
2. If you're an IHV and you see a WG operating that is proposing significant changes to OpenGL, make damn sure that you have all the right people participating from the beginning.
You can only get pissed at the ARB if the same mistakes are repeated. :)

Gorg
09-03-2004, 08:58 AM
I feel like I have a naive view of the necessary changes, because I don't understand how a feature that is already well supported in Direct3D needs major changes to OpenGL to be implemented.

Roderic (Ingenu)
09-03-2004, 08:58 AM
I get your point, yet OpenGL is behind D3D in this area; they have had a nice render-target capability for a while, while we have an OS-dependent, crappy set of extensions to compete...

I prefer an ARB over an EXT over vendor-specific extensions, but I want things done too.

An EXT extension should have been made available in the meantime, or information about the extensions should have been made public to explain what's going on and why it's so "late".

I'm just *very* unhappy with the current render_target/render_texture capabilities right now, and I don't like that. :mad:

Korval
09-03-2004, 09:44 AM
On the one hand you have people screaming for companies to do an "end run" around the ARB process and make their own extensions. On the other hand you have people complaining about there being so many different, vendor-specific extensions that do the same thing. Sigh.

Most of those complaints have fallen by the wayside as things like ARB_fp/vp/glslang have appeared to unify an increasingly divergent OpenGL. However, now that this problem is (mostly) settled, new extensions are being tied so closely to the ARB that the ARB itself is becoming the bottleneck. Vital missing functionality can't just stay missing forever, and if the ARB was failing to do something about it, someone else should have stepped in, even if it meant there'd be multiple extensions. Better to have the functionality available in some fashion than not at all.


NV, and other companies, had pretty significant issues with the original "uberbuffers" spec. Their response was the EXT_render_target spec. Of course, ATI, and others, had significant issues with that spec. Neither was a case of "stupid, petty little things."

There's the thing. The ARB is a closed process, so all we really know is what gets posted in the meeting notes (which are scant on details at best). It would be great if the community could be told why such vital functionality was being held up. Granted, I understand that IP is involved in many of these discussions (indirectly, if not directly), but knowing the reason behind the discussions is kind of important. Otherwise, we have nothing more than rumor and conjecture to work from.

I, for one, would really like to know what ATi's problem was with EXT_render_target originally (besides them not having thought it up, and it having supplanted the superbuffers extension that they championed).

For high-profile specs like RT or Superbuffers, the public should be kept more in the loop.


I feel like I have a naive view of the necessary changes, because I don't understand how a feature that is already well supported in Direct3D needs major changes to OpenGL to be implemented.

Because OpenGL and D3D aren't the same thing. D3D has a number of significant differences from OpenGL. OpenGL does a lot to protect the driver from the user (while also doing a lot for the user), while D3D is a bit more open. The point is that, when you start trying to create some piece of functionality in OpenGL, you have to take into account some of the differences between GL and D3D.

Look at VBOs vs. D3D vertex buffers. VBOs are structured somewhat differently from D3D vertex buffers. While the implementations likely share some code on the driver side, there's a lot that is different from the VBO end. nVidia's implementation seems to treat video memory like a cache, where VBOs are stored in some fixed-size portion of memory. Meanwhile, ATI's implementation seems to be a literal interpretation of the spec, where the three different hints correspond to where memory is allocated. Both are valid VBO implementations, but both reflect differences in their hardware; differences that ATI's implementation wouldn't be able to handle without the hint mechanism. Assuming that wasn't in the original spec, it would be something that ATI would argue for in the spec.

Just because the hardware can do something doesn't mean that it is reasonable to expose it in the same way for one API vs. another.
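
To make the hint business above concrete (a trivial sketch; dataSize/vertexData assumed, and the creation-time usage hint is the only placement information the driver ever gets):

GLuint vbo;
glGenBuffersARB(1, &vbo);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, dataSize, vertexData,
                GL_STATIC_DRAW_ARB); // or GL_DYNAMIC_DRAW_ARB / GL_STREAM_DRAW_ARB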

zeckensack
09-03-2004, 11:15 AM
Originally posted by idr:
It's really interesting to listen to people. On the one hand you have people screaming for companies to do an "end run" around the ARB process and make their own extensions. On the other hand you have people complaining about there being so many different, vendor-specific extensions that do the same thing. Sigh...

You've forgotten something vital:
Development takes time. If I have a feature early on, even if it's not in its "final" form, it gives me a head start. I can play around with it, start restructuring my code around it, build an appropriate abstraction layer and a fallback path for drivers that don't have it, and so on. I might even find out that I don't really need it in the process of playing around with it.

In any case, one year later, or whenever the overall application is reaching completion, there will be a vendor-neutral replacement, and I can port from vendor-specific functionality to vendor-neutral functionality. The cases where this won't happen are obvious early on. If you keep your eyes open, you'll certainly be able to make good enough judgement calls.

texture_rectangle
point_sprite

This is one of the many significant advantages of the OpenGL extension mechanism. It allows developers to start experimenting with functionality instantly, instead of twiddling their thumbs in hope for a Grand Unified Version.

Please remember this.

OTOH, I do appreciate well-thought-out ARB extensions when the time is right. On reflection, I believe that the "best" ARB specifications are in fact those where multiple vendor-specific extensions existed beforehand. ARB_vertex_program is a good example, or perhaps ARB_vertex_buffer_object (even though in my opinion the "map" functionality should not have been included).

As far as I remember, the only truly great ARB extension spec that just came out of nowhere, basically perfect, and with good timing, was ARB_fragment_program.

knackered
09-03-2004, 11:28 AM
I think the VBO mechanism was certainly worth the wait - it really makes the D3D vertex-buffer/stream-source/vertex-declaration farce look like a serious hack, even by Microsoft's standards. I recommend you take some time to compare the two mechanisms, just to get a feel for what an ill-thought-out, rushed, fundamental change to the pipeline API can look like, especially when designed by retards.
Just bide your time, and make do with the more than sufficient functionality you've got with pbuffers for the time being.
That's my advice.
Take your time, ARB.

Adruab
09-03-2004, 01:30 PM
I haven't dealt much with VBO stuff. How is it different from/better than D3D vertex buffers? From what little I've looked at the spec, it seems to have similar restrictions... Perhaps it's a little more free-form, but that's an advantage that GL has always had.

I don't think the stream system is that bad. It was definitely crappy when it was attached to the vertex declaration, and it is still perhaps a little too restrictive (e.g. creating declaration objects), but it gets the job done.

Humus
09-03-2004, 02:48 PM
The vertex buffer as such doesn't work all that differently, but in OpenGL you specify different arrays and offsets. This is usually very convenient and flexible. Another way would have been to tell the API what format the vertex buffer is at creation time. This wouldn't be as flexible, but would allow the driver to remap formats that aren't supported natively to native formats, which would help performance in such cases. This idea was proposed for VBOs, but was scrapped.
The D3D model, on the other hand, combines the worst of both models. You have to create objects specifying the vertex format (called vertex declarations), but since these aren't connected in any way to the vertex buffers, the driver cannot know the format of the buffer until you make a draw call, which makes it impossible (or at least very impractical) to remap the format to a native one. I don't know what happens in D3D when you use an unsupported format, but I would guess the vertex declaration creation simply fails.
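
That is, in GL the format only shows up at draw time, when you set up the pointers (a sketch; Vertex is an arbitrary example layout, vbo an existing buffer object):

struct Vertex { float pos[3]; float uv[2]; };

glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo); // the buffer itself is just bytes
glVertexPointer(3, GL_FLOAT, sizeof(Vertex),
                (const GLvoid*)offsetof(Vertex, pos));
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex),
                  (const GLvoid*)offsetof(Vertex, uv));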

Adruab
09-03-2004, 05:05 PM
The problem with directly embedding the declaration in the vertex buffer is that it doesn't let you reuse the buffer for other formats. The other issue (at least in the way they implemented it in D3D8) is that it doesn't allow multiple streams of data. I suppose you could have a vertex declaration associated with each buffer (changeable) and then combine them when you set them on streams (though conflicts could arise).

You're right about the remapping thing with buffers, though, and it isn't terribly convenient. As far as I can tell, the only benefit is that you can set the declaration in one call, which probably isn't that big of a benefit.

In any case, getting back to the render target extension... I've been wanting something like this too. Having vertex texturing is nice, but after floating-point textures were introduced, I never understood why they didn't just generalize the concept of memory layout for different things (a giant array of 4 floats, for instance...). I'm sure there are specific things drivers do with memory for different purposes. But man... how much bandwidth would you save on a GPU cloth simulation (for instance) if you could just read the texture back in as vertex positions rather than doing a texture fetch for every vertex? It simplifies things sooooo much. I want to see this extension too, since it would be a welcome jump up to and beyond D3D's current capabilities. Go ARB Go!!!

V-man
09-03-2004, 08:29 PM
Originally posted by zeckensack:
As far as I remember, the only truly great ARB extension spec that just came out of nowhere, basically perfect, and with good timing, was ARB_fragment_program.

Nah, it is a derivative of NV_fragment_program, just like vertex_program.

The people who worked on the ARB version are the same as on the NV version, not counting the non-NVIDIA people.

The same people were considering making ARB versions of the subsequent NV versions, and as you can see, they have extended ARB_vp/fp for now.

As for render-to-texture, I'm quite sure the ARB knows the community's opinion, since they post comments sometimes. On one occasion, I made a comment similar to Korval's about the meeting notes, and the ARB secretary posted a reply.

I find that this thread is not needed.

Korval
09-03-2004, 08:57 PM
Nah, it is a derivative of NV_fragment_program

I am virtually certain that ARB_fp was first.


they have extended ARB_vp/fp for now.

No, they haven't.

knackered
09-04-2004, 01:20 AM
Originally posted by Adruab:
but it gets the job done.

Well, pbuffers get the job done, but that doesn't mean they don't need replacing with something more sensible.

-NiCo-
09-04-2004, 04:11 AM
Originally posted by Adruab:
I'm sure there are specific things drivers do with memory for different purposes. But man... how much bandwidth would you save on a GPU cloth simulation (for instance) if you could just read the texture back in as vertex positions rather than doing a texture fetch for every vertex? It simplifies things sooooo much. I want to see this extension too, since it would be a welcome jump up to and beyond D3D's current capabilities. Go ARB Go!!!

The combination of GL_ARB_vertex_buffer_object and GL_ARB_pixel_buffer_object does exactly what you mention. I've tested it with an RGBA8 backbuffer and fp16 and fp32 RGBA pbuffer formats; it works like a charm (GeForce 6800 GT).
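
For reference, the copy itself boils down to this (a sketch; buffer creation and the float render target setup are omitted, and buf/width/height are assumed):

// Read the framebuffer straight into a buffer object...
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, buf);
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, 0); // 0 = byte offset into buf
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);

// ...then source vertex data from the very same buffer.
glBindBufferARB(GL_ARRAY_BUFFER_ARB, buf);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_POINTS, 0, width * height);

No readback to system memory anywhere; the data never leaves the card.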

Nico

V-man
09-04-2004, 06:17 AM
Originally posted by Korval:
Nah, it is a derivative of NV_fragment_program
I am virtually certain that ARB_fp was first.

they have extended ARB_vp/fp for now.
No, they haven't.

I don't claim to have perfect memory, but referring to the dates in the docs:

ARB_fp : 5/10/2002 is the earliest
NV_fp : 10/12/2001 is the earliest

As for appearance in drivers:
I think when the 9700 was fresh out, some people made demos using the ARB extensions, and I remember complaining that I could not run them on my NVIDIA card, because it only had the NV versions.

Here's an old post that hints at which came first:

http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=007333

Korval
09-05-2004, 10:46 AM
Oh SirKnight's priceless, isn't he? So pre-occupied with trying to maintain that pretence of knowledge, he doesn't care what conversations he hijacks. Unlike me of course.

SirKnight didn't even post in this thread, yet you continue to attack him. Your behavior is becoming disruptive to the forum as a whole, and your one-man vendetta against SirKnight (who contributes something other than bile to this forum, btw) is both disruptive and annoying.

knackered
09-05-2004, 12:45 PM
Ok, I retract all I have said. It was churlish and distracting. I suggest you two troll-feeders do the same. I love SirKnight - he's an example to us all.

idr
09-09-2004, 07:56 AM
There are problems with getting the community involved, and IP isn't the big one, AFAIK. Think about it like this: had we posted all the meeting minutes to the forums, you still wouldn't have a spec.

Moreover, people would post comments and make suggestions, but the folks in the WG would likely not have enough time to filter through them all. The end result would be that not only would people be upset that the spec is taking a long time, people would also be upset that their suggestions/comments were largely ignored. For superbuffers, a lot of the discussion has been about the implementability (is that a word???) of the extension, so random folks in the community wouldn't have much to contribute. That, of course, isn't true for all extensions (http://dri.sourceforge.net/cgi-bin/moin.cgi/MESAX_array_element_base).

Community involvement is always tricky in mostly-open things like the ARB. I don't know what the new process is, but it used to be that pretty much any random Joe could sign the participants' agreements and start coming to meetings. I know the process has changed recently, but I think it is still possible for people to participate directly.

Like I said before, the biggest reason for the delay in this much-needed extension is that we spent about eight months going down a long, windy, false path.

SirKnight
09-09-2004, 09:32 AM
OK, this is the first time I've read this thread (I know it's been going on for a while). Wow, I didn't even contribute to the thread and I'm being talked about. Boy, I tell you what, I've made the big time now. :D Tears rolling down my eyeballs. I feel so loved. :)

;)

Anyway, I guess I'd like to say count me in on wanting a better RTT scheme. Sure, I can get my job done with pbuffers, but as already said, all that context mess is just, well... a mess. I'm assuming, though, that when one of these better RTT extensions is finally done, it will work on a broad range of hardware and not just the latest and greatest. Sure, I currently have a 'latest and greatest', but I kind of like people who don't to be able to run my stuff. Speaking of that, I need a freaking webpage. Like... a real one and stuff, ya know?

-SirKnight

zom
10-09-2004, 01:56 AM
Does anyone have an idea why there is no render_target-capable driver (even a buggy beta!) :confused: ?

Neither :mad: NV nor :mad: ATI !!!

I'm really sick of the :mad: ARB.

As for me, there is no sense in waiting for them to finally release this extension. The same thing may happen to it as happened to superbuffers :mad: (originally "uber buffers").

Korval
10-09-2004, 10:19 AM
See This (http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=011406) thread.

Alonso Schaich
10-12-2004, 08:21 AM
I also join!

tfpsly
10-12-2004, 11:14 PM
Indeed... why don't we have a very simple temporary extension that would make it possible to create a texture with a "renderable" flag, and a format that could be either four uchars or floats?

We wouldn't even need 16-bit texture support, depth or stencil textures, or the like at first.
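
Something as dumb as this would already help (completely made-up names, of course - none of these functions exist anywhere):

// Wish-list API, not a real extension:
GLuint tex = glCreateRenderableTextureEXT(GL_RGBA8, 512, 512); // made-up call
glBindRenderTargetEXT(tex);   // made-up: rendering now goes into the texture
// ... draw ...
glBindRenderTargetEXT(0);     // made-up: back to the window framebuffer
glBindTexture(GL_TEXTURE_2D, tex); // and use it like any other texture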

jonasmr
10-13-2004, 12:58 AM
I agree too.

This was one of the reasons I turned to DirectX.

I was tired of all the trouble of using the different methods.

Why does it have to take this long?