PDA

View Full Version : OpenGL 1.4



knackered
07-18-2002, 12:44 AM
- Depth textures and shadow textures, enabling real-time shadows and related image-based rendering techniques
- Vertex programming framework, setting the stage for user-defined geometry, lighting and shading programs and enabling high-level general-purpose shading languages
- Automatic texture mipmap generation, providing rapid updates and high-quality texture filtering for dynamic textures
- Numerous smaller enhancements, including:
  - Multiple draw arrays, for higher geometry throughput
  - Window raster position, for precise 2D and image rendering
  - User-defined fog coordinate, for advanced fog effects
  - User-defined secondary color, point parameters, texture level-of-detail bias, texture crossbar, and new frame buffer blending modes and stenciling functions, for more flexible shading and rendering effects


"new frame buffer blending modes"
Which ones? Blend-subtract?

It all looks a little dated, don't you think?

bpeers
07-18-2002, 01:45 AM
I like it, it's a good update. Stuff like auto-mipmap should help get rid of quick box-filter hacks; depth textures and the like kill off a bit of the extension jungle; etc. Yeah, most of it gives a bit of an "about time" feeling, but at least it is there now. Should be a while until 2.0 comes along, so 1.4 = kewl.
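The "quick box-filter hacks" that automatic mipmap generation replaces amount to repeatedly averaging 2x2 texel blocks. A minimal sketch of that idea (plain Python with made-up helper names, assuming single-channel, square, power-of-two images):

```python
def box_filter_downsample(img):
    """Halve a square power-of-two image by averaging 2x2 blocks.

    img is a list of rows of single-channel values; this is the kind
    of hand-rolled box filter that driver-side mipmap generation
    (GL_GENERATE_MIPMAP) makes unnecessary for dynamic textures.
    """
    n = len(img) // 2
    return [
        [(img[2*y][2*x] + img[2*y][2*x+1] +
          img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
         for x in range(n)]
        for y in range(n)
    ]

def build_mipmap_chain(img):
    """All levels down to 1x1, base level first."""
    chain = [img]
    while len(chain[-1]) > 1:
        chain.append(box_filter_downsample(chain[-1]))
    return chain
```

A 2x2 base level `[[0, 4], [8, 4]]` reduces to a single 1x1 level holding the average, 4.0.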

ScottManDeath
07-18-2002, 02:28 AM
Originally posted by knackered:
"new frame buffer blending modes"
Which ones? Blend-subtract?


Hi
I think the NV_blend_square extension is meant, because there you can use SRC_COLOR/ONE_MINUS_SRC_COLOR as a blending factor for the source and DST_COLOR/ONE_MINUS_DST_COLOR for the destination.

AFAIK the blend-subtract mode is part of the imaging subset.
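For readers wondering what using the source color as its own blend factor buys you, here is a toy sketch (plain Python, not GL code; the function and constant names are made up for illustration) of the per-channel blend equation and how "blend square" falls out of it:

```python
# Sketch of per-channel framebuffer blending: out = src*sf + dst*df.
# With NV_blend_square semantics, the source factor may itself be the
# source color, so the source term becomes src*src (handy for sharper
# specular falloff). Channel values are assumed clamped to [0, 1].

def blend(src, dst, src_factor, dst_factor):
    """One-channel blend; factors are functions of (src, dst)."""
    out = src * src_factor(src, dst) + dst * dst_factor(src, dst)
    return min(max(out, 0.0), 1.0)

# GL_SRC_COLOR used as the *source* factor -> the source is squared:
SRC_COLOR = lambda s, d: s
ONE_MINUS_SRC_COLOR = lambda s, d: 1.0 - s

result = blend(0.5, 0.25, SRC_COLOR, ONE_MINUS_SRC_COLOR)
# 0.5*0.5 + 0.25*(1 - 0.5) = 0.25 + 0.125 = 0.375
```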

BTW when will GL 1.4 be official, with a spec and an implementation?

Bye
ScottManDeath

knackered
07-18-2002, 04:17 AM
bpeers, so the lack of *any* kind of pixel shading language (even a simple one, so Cg might suddenly be useful) doesn't bother you?

bpeers
07-18-2002, 04:29 AM
Nope, I'm part of the group that is often overlooked (see especially the thread about the patents) -- the guys using OpenGL for engineering/CAD/CAM stuff :) I'm happy when an upgrade offers some features that previously required extensions and their relatively low-level programming style -- which I usually can't sell to management except for really important cases (eg VAR).

So.. of course it will only be The Bomb with 2.0, but till then, 1.4 is definitely an improvement.. I hope it will find its way to end-user machines.

Now don't kill me for being short-sighted :) But I think it's important to remember there are other uses besides games.. (the comment about dropping immediate mode in the patents thread nearly gave me a heart attack :] )

knackered
07-18-2002, 04:38 AM
If you managed to persuade your management to use VAR, then you can easily persuade them to use pixel shaders - register combiners are supported on tnt upwards (and you must be using nvidia hardware, as you're using VAR).
I'm surprised there's no standardised vertex object extension. The VAR extension is really nice, using the write/read frequency/priority hints is very "OpenGL" anyway - why don't the ARB standardise that?

vincoof
07-18-2002, 05:03 AM
ScottManDeath: according to SGI, first implementations should be ready at the end of this year.

davepermen
07-18-2002, 05:04 AM
Originally posted by knackered:
register combiners are supported on tnt upwards
gf1+, tnt doesn't support it.

oh, and.. CAD software normally doesn't need fancy pixel shaders. sweet renderings will be done offline anyway, due to complex lighting stuff like radiosity that has to be precalculated, and for the rest you need tons of lines and flat triangles..

bpeers
07-18-2002, 05:05 AM
Okay, but the thing is I don't _need_ pixel shaders :) VAR otoh is very useful for large, static engineering models. So what I meant is -- I can see where the ARB is coming from when they hold off a bit on "exotic" (i.e. gaming) features like pixel shaders when some of the more basic stuff (mipmapping) isn't taken care of properly yet..
imho etc :]

ScottManDeath
07-18-2002, 05:45 AM
Originally posted by vincoof:
ScottManDeath: according to SGI, first implementations should be ready at the end of this year.
Hi

:D new toys for Christmas :D

I hope that consumer hw will also get gl1.4 ASAP

Bye
ScottManDeath

[This message has been edited by ScottManDeath (edited 07-18-2002).]


pocketmoon
07-18-2002, 06:11 AM
and another new toy to go with it, hopefully OpenGL 1.4 compliant :)

http://www.anandtech.com/video/showdoc.html?i=1656

NitroGL
07-18-2002, 07:38 AM
Originally posted by pocketmoon:
and another new toy to go with it, hopefully OpenGL 1.4 compliant :)

http://www.anandtech.com/video/showdoc.html?i=1656



Supposed to be OpenGL 2.0 compliant, AFAIK...

Julien Cayzac
07-18-2002, 08:50 AM
Originally posted by knackered:
bpeers, so the lack of *any* kind of pixel shading language (even a simple one, so Cg might suddenly be useful) doesn't bother you?

There is no fragment shading standard, so I don't see how it could be part of GL1.4.

Just wait for ARB_fragment_program, and if you can't wait then do as you always did: use extensions (register combiners or Cg on NVidia hardware, pixel shaders on ATI, etc.).

I can't understand why that single omission bothers you so much: can't you see that the light is coming? In only a few months we'll have a generic vertex/fragment programmable API!! :)

Just be patient! :)

Julien.

Nakoruru
07-18-2002, 09:35 AM
OpenGL 2.0 compliant? There is no OpenGL 2.0 to be compliant with yet.

DJ_GL
07-18-2002, 11:11 AM
when will 1.4 be available for download, i'd really like to play with it now. :-D

knackered
07-18-2002, 01:18 PM
Originally posted by DJ_GL:
when will 1.4 be available for download, i'd really like to play with it now. :-D

eh? download? they're extensions which will become available in your drivers and on the registry site.


I can't understand why that single lack bothers you so much: can't you see that the light is coming? In only few monthes we'll have a generic vertex/fragment programmable API!!

LOL! It bothers me so much because a standard interface has been available in D3D for some considerable amount of time, deepmind!
Tell me why a similar interface isn't available in OpenGL 1.4... please, I need to know.

DJ_GL
07-18-2002, 01:23 PM
ah, thanks, I didn't know that... just sounded so hyped, and i'm here using 1.2 (i believe) and decided, wow, this upgrade looks interesting. Thanks for the insight!

Julien Cayzac
07-18-2002, 02:29 PM
Originally posted by knackered:
LOL! It bothers me so much because a standard interface has been available in D3D for some considerable amount of time, deepmind!
Tell me why a similar interface isn't available in OpenGL 1.4... please, I need to know.

AFAIK, there is no such thing in D3D. No NVidia card supports Pixel Shaders 1.4 in D3D, just as no ATI card supports NV_register_combiners in GL.

I'm sure something like NVParse exists for ATI extensions, so you can play with some obsolete ps1.0 shaders on both cards. But what's the point anyway?

The truth is, ARB_fragment_program will be the first unified API ever (assuming it's released before its D3D alter ego). Period.

Julien.

knackered
07-19-2002, 12:55 AM
None of what you have just said is true. Not one single statement. Well maybe one, assuming your name really is Julien.

davepermen
07-19-2002, 01:39 AM
Originally posted by deepmind:
AFAIK, there is no such thing in D3D. No NVidia card supports Pixel Shaders 1.4 in D3D, just as no ATI card supports NV_register_combiners in GL.

I'm sure something like NVParse exists for ATI extensions, so you can play with some obsolete ps1.0 shaders on both cards. But what's the point anyway?

The truth is, ARB_fragment_program will be the first unified API ever (assuming it's released before its D3D alter ego). Period.

Julien.

hm.. ps1.3 works on every dx8-compliant gpu: gf3, gf4, radeon8500, the new matrox, and all following ones anyway (as most of them already support dx9).
oh, and they have one unified interface for it, yes. that does not mean every card can provide pixel shaders, but that's not the point. dx always has a unified interface for each of its features.

that's actually what gets bitched about all the time with opengl, and that's why a fully refactored gl (gl2) is really needed.

zeckensack
07-19-2002, 01:51 AM
What's going on here? The news media are trumpeting the 'availability of the OpenGL1.4 spec'. I'd like to have a look. But when I look here: http://www.opengl.org/developers/documentation/specs.html

... well. Nothing!
Where is it?

vincoof
07-19-2002, 02:31 AM
It's true that the specifications are ready, but that doesn't mean they are published yet. It's *just* a matter of time; please be patient. Anyway, you won't be able to use those specifications immediately, since you will have to wait for manufacturers to write drivers for them.

zeckensack
07-19-2002, 04:04 AM
Originally posted by vincoof:
It's true that the specifications are ready, but that doesn't mean they are published yet. It's *just* a matter of time; please be patient. Anyway, you won't be able to use those specifications immediately, since you will have to wait for manufacturers to write drivers for them.
Sure, I know how that works. I want to see it regardless, just to know what's in store :)

And what about people like Brian Paul, who'd want to start work on new implementations right away? Or even IHVs? Do these people get some kind of private access? I always thought the spec was public for everyone, once it's final.

PH
07-19-2002, 04:18 AM
The ARB ( as you probably know ) consists of several of the leading IHVs. So, I would assume they all have the various specs ( otherwise, how could they vote on them :) ? ). I'm pretty sure most of it has already been implemented and just needs to be enabled.

I'm looking forward to the ARB_vertex_program spec. I'm guessing it's as powerful as DX's VS2.0 but in a way that allows it to be implemented on older generation hardware ( likely via an OPTION mechanism, like NV_vertex_program1_1 ). The rest of GL1.4 is pretty much available right now ( ARB_shadow, etc ).

zeckensack
07-19-2002, 04:29 AM
Originally posted by PH:
The ARB ( as you probably know ) consists of several of the leading IHVs. So, I would assume they all have the various specs ( otherwise, how could they vote on them :) ? ). I'm pretty sure most of it has already been implemented and just needs to be enabled.

I'm pretty sure that SiS, Trident and the Mesa guys are not currently ARB members ;)

Originally posted by PH:
I'm looking forward to the ARB_vertex_program spec. I'm guessing it's as powerful as DX's VS2.0 but in a way that allows it to be implemented on older generation hardware ( likely via an OPTION mechanism, like NV_vertex_program1_1 ). The rest of GL1.4 is pretty much available right now ( ARB_shadow, etc ).

And I'm looking forward to the whole 1.4 thing :)
I'd just like to know what exactly it is. So gimme the spec. Hurry! :D

PH
07-19-2002, 04:45 AM
Originally posted by zeckensack:
I'm pretty sure that SiS, Trident and the Mesa guys are not currently ARB members ;)


I don't know about those, but they are hardly the leading IHVs :)

I don't think the specs are big secrets and if you sign some sort of ARB agreement you'll have access to a lot more ( ARB meetings, mailing lists, etc ).

V-man
07-19-2002, 06:15 PM
Originally posted by pocketmoon:
and another new toy to go with it, hopefully OpenGL1.4 compliant http://www.opengl.org/discussion_boards/ubb/smile.gif

http://www.anandtech.com/video/showdoc.html?i=1656




Wow, 110 million transistors on a .15 micron process. They don't say how much power the card will consume, but it's gonna blow the AGP port! Look at the section that does 2D operations. It is tiny compared to the 3D sections.

V-man

PH
07-22-2002, 11:41 PM
ARB_vertex_program spec has been approved and posted :),
http://oss.sgi.com/projects/ogl-sample/registry/ARB/vertex_program.txt

Hmm, quickly browsing the spec indicates that it's similar to my own VP language ( which currently compiles to EXT_vertex_shader ).

My library will likely be open source when complete - it provides a unified vertex programming API and a unified interface to vertex arrays. Guess I've been wasting my time, given the release of ARB_vertex_program :)

My vertex programs look like this,




/*

Vertex Program Test

*/


//-------------------------------------------------------

//
// Varying data ( input to fragment program )
//

varying vec4 TangentSpaceLightVector(oTex0);
varying vec4 NormalmapCoords(oTex1);
varying vec4 AttenuationMapCoords(oTex2);
varying vec4 DiffusemapCoords(oTex3);
varying vec4 TangentSpaceEyeVector(oTex4);

//
// Per-vertex data ( input from application )
//

attribute vec4 VertexPosition(vPos);
attribute vec4 BaseTexCoords(vTex1);
attribute vec4 Tangent;
attribute vec4 Binormal;
attribute vec4 Normal;

//
// Per-primitive data
//

uniform vec4 LightPosition;
uniform vec4 EyePosition;
uniform vec4 RangeScale;

//
// Local Constants
//

const vec4 Constants = { 0.5 0.5 0.5 0.5 };

//
// Temps
//

vec4 LightVector;
vec4 EyeVector;


//-------------------------------------------------------


SUB LightVector.xyz, LightPosition, VertexPosition;
SUB EyeVector.xyz, EyePosition, VertexPosition;

//
// 3D Attenuation
//

MAD AttenuationMapCoords.xyz, -LightVector, RangeScale.x, Constants.x;

MOV NormalmapCoords.xy, BaseTexCoords;
MOV DiffusemapCoords.xy, BaseTexCoords;


//
// Transform light vector into tangent space
//

DP3 TangentSpaceLightVector.x, Tangent, LightVector;
DP3 TangentSpaceLightVector.y, Binormal, LightVector;
DP3 TangentSpaceLightVector.z, Normal, LightVector;

//
// Transform eye vector into tangent space
//

DP3 TangentSpaceEyeVector.x, Tangent, EyeVector;
DP3 TangentSpaceEyeVector.y, Binormal, EyeVector;
DP3 TangentSpaceEyeVector.z, Normal, EyeVector;

//-------------------------------------------------------
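The three DP3 blocks at the end of the listing above are a matrix multiply in disguise: each output component is the input vector dotted with one row of the tangent/binormal/normal basis. The same arithmetic as a small sketch (plain Python, illustrative names):

```python
def dot3(a, b):
    """DP3: 3-component dot product."""
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def to_tangent_space(v, tangent, binormal, normal):
    """What the three DP3 instructions compute: each output component
    is the input vector dotted with one row of the TBN basis, i.e. a
    multiply by the 3x3 matrix whose rows are T, B and N."""
    return (dot3(tangent, v), dot3(binormal, v), dot3(normal, v))

# With an axis-aligned basis the transform is the identity:
T, B, N = (1, 0, 0), (0, 1, 0), (0, 0, 1)
light_vec = (0.3, -0.2, 0.9)
assert to_tangent_space(light_vec, T, B, N) == light_vec
```

When T, B and N are orthonormal, this is the inverse of the tangent-to-world rotation, which is why it carries the light and eye vectors into tangent space for per-pixel lighting.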

davepermen
07-23-2002, 12:30 AM
From the ARB_vertex_program spec:
Conceptually, what the extension defines is an application-defined program (admittedly limited by its sequential execution model) for processing vertices, so the "vertex program" term is more accurate.

actually it's a user-defined inlined vertex-processing callback function. not a program, not a shader :) at least they finally got it together, happy :)

Nutty
07-23-2002, 12:54 AM
Man, that's the biggest extension spec I've ever seen! :)

knackered
07-23-2002, 01:45 AM
Christ on a bike!
How big's the pixel shader spec gonna be?!

Cab
07-23-2002, 06:55 AM
I'm glad to see the large number of people that have contributed to it. This is what I think the old spirit of OpenGL was: clever people working together, using their minds to do the right things.
Have you noticed that this release also includes EXT_stencil_two_side? Maybe IHVs have realized that the point is not having their own extensions for more or less the same thing, but implementing it efficiently in their HW?
It is good news to see that the spec exists. I hope that the MS claims are nothing more than fear mongering and that we can see it implemented, soon, in current HW.
Personally, I have no problem if the fragment shader spec is too long ;). I just want to have it sooner rather than later.
And what about a common vertex array extension, to remove the NV_VAR and ATI_VAO code paths?
I think that the board (ARB) is doing a good job. It would be better if they were faster (there were already vertex programs nearly two years ago but...)
Congratulations.

PH
07-23-2002, 07:06 AM
You may be right Cab - since VAO isn't compatible with ARB_vertex_program ( unless the spec is updated ), perhaps a common extension will follow.

davepermen
07-23-2002, 07:08 AM
for the common vertex array extensions i'd suggest diving directly into the GL2 exts.. i think there aren't big problems in implementing those on today's hw. it's just that the vertex and pixel shaders of today are crap. but the r300 and nv30 vertex shaders could already do a great job for most of the GL2 vertex shaders i think.. possibly not complete full support, but at least they could already fit the interface.. it would just be too sweet...

oh, and btw, i really like the named registers.. much more handy than r1 //something, r2 //something else etc..

especially for big shaders.. (and they are coming ;))

i just hope they expose fragment shaders only for r300/nv30+ hw, else the ext gets quite old (sort of ps1.3) and we have to wait for another ext for the new hw.. (ps2.0)..

oh, and yes, that thing is HUGE :) (107 pages to print out ;))

Cab
07-23-2002, 07:52 AM
Originally posted by PH:
You may be right Cab - since VAO isn't compatible with ARB_vertex_program ( unless the spec is updated ), perhaps a common extension will follow.

I hope you are right. After some years with GPUs in the market and a functionality that is exactly the same for all IHVs, and with OGL 1.4 defined, I think it is time to have one.
I like GL2 Vertex Array Objects, with direct pointers (this is something that the ATI extension should have), and I think that all IHVs can implement it easily, as it is nearly the same as D3D Vertex Buffers, so everyone already has it implemented in their drivers.
Personally, I think NV_VAR is more flexible, but GL2 VAO will be enough.
Maybe a poll on www.opengl.org can help ;)


[This message has been edited by Cab (edited 07-23-2002).]

folker
07-23-2002, 08:12 AM
Originally posted by Cab:


I agree with Matt who pointed out that direct pointers are a bad idea:

If the driver wants to hide implementation details (e.g. internal formats), the driver has to copy on a lock, in addition to the application's copy when writing the data to the pointer -> one additional copy operation -> slower.

Cab
07-23-2002, 08:54 AM
Originally posted by folker:
I agree with Matt who pointed out that direct pointers are a bad idea:

If the driver wants to hide implementation details (e.g. internal formats), the driver has to copy on a lock, in addition to the application's copy when writing the data to the pointer -> one additional copy operation -> slower.

I don't agree with that. The driver writer can implement it as he wants, and it will be more or less efficient depending on his skills and his hw's possibilities. I like the way you can do it in D3D. I wrote an application when D3D8 was released to test it, and the speed was more or less the same as with VAR. But since then, it seems nVidia has optimized their drivers and it now runs at more than double the speed of the NV VAR version. Why? Because it was a single demo where most of the geometry was static objects, and it seems that the D3D driver now stores most of them in card memory and uses AGP for the dynamic VBs.
This is something you can't do with VAR, as you can only have one buffer and you will probably allocate it in AGP memory (unless you don't have any dynamic objects...).
It seems that Matt comments negatively on any idea that comes from OGL2. I suppose that it is because it comes from 3DLabs :) Maybe, as an ARB meetings attendee, he can suggest leading a common approach to this problem instead of complaining about others' ideas.
Anyway, I don't want polemics. I just want a common extension to submit the vertex data efficiently to the GPU. Don't you?

folker
07-23-2002, 09:08 AM
Originally posted by Cab:
...

When using the pointer semantics, there is exactly one copy operation, and the driver can perform all required format conversions (including swizzling in the case of textures) during this copy operation. When using lock semantics, the application usually first has to copy the data into the locked buffer, and then the driver often has to copy again due to format conversions.

For example, when implementing video textures with DirectShow, each video frame is copied twice in d3d and once in OpenGL.

And to really get best performance, you never should touch your data again anyway. For special cases you cannot avoid dynamic (e.g. video textures). But especially for vertex data, you shouldn't touch vertex data. And then the pointer mechanism is very elegant and abstracts the hardware.

folker
07-23-2002, 09:10 AM
Originally posted by folker:
When using the pointer semantics, there is exactly one copy operation, and the driver can perform all required format conversions (including swizzling in the case of textures) during this copy operation. When using lock semantics, the application usually first has to copy the data into the locked buffer, and then the driver often has to copy again due to format conversions.

For example, when implementing video textures with DirectShow, each video frame is copied twice in d3d (except if you don't use texture swizzling, and if the video frame format is supported directly by the hardware, then you also get only one copy) and always only once in OpenGL.

And to really get best performance, you never should touch your data again anyway. For special cases you cannot avoid dynamic (e.g. video textures). But especially for vertex data, you shouldn't touch vertex data. And then the pointer mechanism is very elegant and abstracts the hardware.

Cab
07-23-2002, 09:51 AM
Originally posted by folker:
...


That is the case where you read from disk and want OGL to transfer the data to the card/AGP as-is. But when you are creating or modifying your geometry dynamically from the mesh you have read, the lack of an AGP pointer where you can store your data sequentially means that you have to store it in main memory first and then let the driver copy it to AGP memory. That means 1 copy (you transforming it and storing it directly to AGP memory) vs 2 copies (transforming it and storing it in sys memory, and then the driver copying it to AGP memory).
The problem you describe usually applies to static models that are loaded just once at the beginning of the program/game level/... so the double copy is not a big problem. But the case I'm describing is dynamic geometry that is copied every frame.

Anyway, without the VAR or VAO extensions, the current OGL model is the worst, as it keeps all the models in sys mem and has to copy the vertices every time you want to draw a model. This is something you probably know, and it is why the VAR and VAO extensions are here. So I think we need a common extension/mechanism to solve it instead of different extensions for the same thing. Don't you think so?

PH
07-24-2002, 07:34 AM
OpenGL1.4 spec online,
http://www.opengl.org/developers/documentation/OpenGL14.html

Looks like ARB_vertex_program is not core functionality ( not that I really care what's core or not as long as it's available ).

Nakoruru
07-24-2002, 07:58 AM
I was under the impression that you could have more than one buffer with VAR. Just reset the vertex array buffer pointers to use a different buffer.

Is this wrong?

folker
07-25-2002, 02:31 AM
Originally posted by Cab:
....

For dynamic geometry, you assume that the application knows and writes exactly the internal data format of the hw. If the format is different, the lock semantics requires an additional copy anyway. But it should be the job of the driver to hide that, so as not to impose restrictions on hardware design.

You could convince me if you gave good arguments that all of today's and future hardware has to use the same architecture for some principled reason. Maybe this is the case, but I am not sure yet.

Example: texture swizzling. I don't think it is a good idea for the application to have to deal with hardware-dependent texture swizzling aspects. And I think it would be restrictive to demand the same swizzling algorithms from all hardware. Maybe, due to caching aspects, different swizzling algorithms are better than the standard swizzling used in today's hardware (e.g. swizzling also between mipmaps etc.).
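As an illustration of the swizzling folker mentions, here is a sketch of one common scheme, Morton (Z-order) swizzling; actual hardware layouts vary, which is exactly why drivers want to hide them (the helper names are made up):

```python
def morton_index(x, y, bits=16):
    """Interleave the bits of x and y into a Z-order (Morton) offset,
    one common texture swizzle; real hardware layouts differ and are
    precisely what the driver wants to keep private."""
    idx = 0
    for i in range(bits):
        idx |= ((x >> i) & 1) << (2 * i)
        idx |= ((y >> i) & 1) << (2 * i + 1)
    return idx

def swizzle(texels, width):
    """Reorder a row-major single-channel image into Morton order
    (assumes a square power-of-two texture)."""
    out = [0] * len(texels)
    for y in range(len(texels) // width):
        for x in range(width):
            out[morton_index(x, y)] = texels[y * width + x]
    return out
```

The point of the layout is that texels close in 2D stay close in memory, improving cache behavior for filtered lookups, and the conversion happens naturally during the driver's one copy under pointer semantics.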

Jurjen Katsman
07-25-2002, 04:23 AM
Folker: All current vertex shader capable hardware which can read geometry directly from memory (either AGP or videomemory) is capable of supporting a decent amount of input types, which are clearly defined. Some might support a few more or a few less, but there is a decent amount of base types which could be exposed, and which DX does expose.

These are all just really 'standard' formats: IEEE floats, a few integer formats, with clearly defined unpacking rules. Hardware is fine exposing this, and applications can take advantage of it. This capability is almost required for efficient dynamic geometry.

Are we limiting possibly 'clever' hardware designs by doing this? Sure. But those designs might be clever from a hardware engineering standpoint, but they are certainly not clever as far as interfacing with the application is concerned.

You need to demand certain designs from hardware, or you could just as well give up. We're also forcing all hardware to be polygon based... that's not a bad thing either, now is it?

[This message has been edited by Jurjen Katsman (edited 07-25-2002).]

davepermen
07-25-2002, 04:49 AM
yes it is very bad. i hate polygons..

Nutty
07-25-2002, 05:01 AM
Is there anything you do like about the current state of graphics dave?

Julien Cayzac
07-25-2002, 05:12 AM
GL 1.4 is here, great!!
Now, let's speak about what's still missing:
- ARB_fragment_program, but it's coming "soon"...
- ARB/GL2_vertex_array_object: that kind of extension is needed quickly.
- EXT/ARB_bezier_surfaces ? dealing with both NVidia's evaluators and ATI's truform is a mess.
- EXT/ARB_occlusion_query ? From PH's comments it seems ATI supports NV_occlusion_query, and 3DLabs is ok with exposing occlusion culling capabilities if there's a need. Maybe NV_occlusion_query should be promoted?

Julien.

davepermen
07-25-2002, 05:21 AM
Originally posted by Nutty:
Is there anything you do like about the current state of graphics dave?

:) since you're asking.. i would prefer other techniques.
but with the next hw gen i'm quite happy. the current state we've been sitting in since gf1 is sort of pointless, while cool to see. i've seen a lot of fancy things that are possible today or soon will be. the moment they get put under a good standard interface so that everyone can use them, it will get quite fun. nothing against graphics, that's just math.. you know what i really dislike about the current situation..

davepermen
07-25-2002, 05:41 AM
Originally posted by deepmind:
GL 1.4 is here, great!!

now let's wait for drivers.. i don't even really have 1.3 on my gf2mx, but oh well :)

Originally posted by deepmind:
- ARB_fragment_program, but it's coming "soon"...

sometime.. but i hope they don't write one that helps out for gf3 and 4 as well.. i would prefer a fragment_program one for r300/nv30+ hw.. by the time it's promoted and done, that's at least half a year from here..

Originally posted by deepmind:
- ARB/GL2_vertex_array_object: that kind of extension is needed quickly.

yeah. hope they start implementing the GL2 exts as much and as fast as possible..

Originally posted by deepmind:
- EXT/ARB_bezier_surfaces ? dealing with both NVidia's evaluators and ATI's truform is a mess.

anyone seen yet how the exts for displacement-mapped higher order surfaces will look? Parhelia? r300? nv30 even? don't really want some ext for the current higher order surfaces, they aren't really useful anyway..

Originally posted by deepmind:
- EXT/ARB_occlusion_query ? Maybe NV_occlusion_query should be promoted?

yeah, put this in. combined with the async stuff of the GL2 exts.. would be best..

:)

Cab
07-25-2002, 06:18 AM
Originally posted by deepmind:
GL 1.4 is here, great!!

Yes.

- ARB_fragment_program, but it's coming "soon"...

It seems that lately the ARB is doing good work. I hope it will be sooner rather than later.

- ARB/GL2_vertex_array_object: that kind of extension is needed quickly.

This, or something similar, should have been here a long time ago. Every OGL developer I talk with mentions the lack of it.

- EXT/ARB_occlusion_query ? Maybe NV_occlusion_query should be promoted?

Yes, but please release DrawIf-style functions at the same time. If not, for things other than lens flares ;), the current model forces you to create a very artificial architecture to deal with the result of an occlusion query for a model that you drew two frames ago, check whether the camera has changed its position, ...
With DrawIf functions the CPU doesn't need to wait until the occlusion query has completed, so you don't have to break the parallelism between CPU and GPU.
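The "query issued N frames ago" bookkeeping Cab describes can be sketched roughly like this (a hypothetical wrapper; real code would issue and read NV_occlusion_query objects instead of the callback placeholders used here):

```python
from collections import deque

class LatentQuery:
    """Sketch of the pattern above: consume occlusion-query results a
    few frames late, so the CPU never stalls waiting for the GPU."""

    def __init__(self, latency=2):
        self.pending = deque()   # query handles in flight
        self.latency = latency   # frames before a result is assumed ready
        self.visible = True      # assume visible until proven hidden

    def begin_frame(self, issue_query):
        # issue_query() returns a handle whose result arrives later
        self.pending.append(issue_query())

    def end_frame(self, read_result):
        # only read results old enough to be ready without blocking
        while len(self.pending) > self.latency:
            self.visible = read_result(self.pending.popleft())
```

The object is then drawn (or skipped) based on `visible`, which lags the GPU by `latency` frames; a DrawIf-style conditional render would remove both the lag and the bookkeeping.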


[This message has been edited by Cab (edited 07-25-2002).]

PH
07-25-2002, 06:37 AM
Promoting NV_vertex_array_range + NV_fence to EXT or ARB seems like a good idea. It's already compatible with all extensions that use array pointers. I would definitely like to see these NV extensions implemented by ATI, 3Dlabs, etc.

I like ATI_vertex_array_object / ATI_element_array too ( especially with ATI_map_object_buffer ), but it's not immediately compatible with all extensions. We need a function like glAttributeArrayObjectATI(...) for it to work with ARB_vertex_program ( or something similar ). The benefit of VAO is not having to worry about synchronization issues and memory management.

folker
07-25-2002, 08:15 AM
Originally posted by Jurjen Katsman:
...

Hm...

Since you get the best performance with static geometry, the hardware should be able to take advantage of "cleverly" optimized internal formats. For example, maybe some alignment padding of attribute data improves performance. Different texture swizzling techniques, for example including mipmaps, may also help texture lookup performance.

So if you don't allow such internal optimizations, you decrease performance for static data, which is the fastest and most important case.
Note especially that, thanks to vertex programs and pixel programs, you will need less dynamic geometry and fewer dynamic textures in the future, so static data is becoming more and more important. Because of this, I think hardware, driver and API design should primarily optimize for static data.

Maybe the best approach is to use internal optimized representation only for static data, and use standard formats for dynamic data (similar to the famous DYNAMIC flag in d3d).

Hm, maybe the OpenGL 2.0 object management draft is the right approach, providing both the pointer semantics (for static data) and the lock semantics (for dynamic data).
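Morton (Z-order) addressing is one concrete example of the kind of swizzled internal layout folker alludes to: it keeps 2D-neighboring texels close in memory, which helps cache locality for filtered lookups. Whether any particular hardware uses exactly this layout is an assumption; the address computation itself is a minimal sketch:

```c
#include <assert.h>

/* Morton (Z-order) address of texel (x, y): interleave the bits of x
 * and y, x on even bit positions, y on odd ones. A driver might pick
 * such a layout for *static* textures, while keeping dynamic textures
 * linear so CPU writes stay cheap -- folker's static/dynamic split. */
unsigned morton(unsigned x, unsigned y)
{
    unsigned addr = 0;
    for (int b = 0; b < 16; b++) {
        addr |= ((x >> b) & 1u) << (2 * b);       /* even bits: x */
        addr |= ((y >> b) & 1u) << (2 * b + 1);   /* odd bits:  y */
    }
    return addr;
}
```

The point of the thread's argument is that an application writing through a raw pointer could never be expected to produce this layout itself, so the driver must stay free to convert behind the scenes.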

V-man
07-25-2002, 11:27 AM
Originally posted by folker:
For dynamic geometry, you assume that the application knows and writes exactly the internal data format of the hw. If the format is different, the lock semantics requires an additional copy anyway. But it should be the driver's job to hide that, so as not to impose restrictions on hardware design.


Not sure if this would work out well, but you could have a GL function made specially for this. Instead of writing the data yourself, you call the GL function, which has the CPU copy to AGP memory, dword after dword (or whatever), while in parallel the GPU accesses the portions the CPU has already written and translates them to hw-specific formats. It would be like having two processors.

If that is possible, it would reduce the performance impact of this double copy to almost nothing.
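V-man's idea is a classic two-stage pipeline: the CPU copies chunk i while the GPU converts chunk i-1. A tiny cost model (arbitrary tick units, purely illustrative) shows why the overlap makes the second copy nearly free:

```c
#include <assert.h>

/* Two-stage pipeline model: with copy and convert overlapped, total
 * time is fill + drain plus one "slower stage" step per remaining
 * chunk, instead of the sum of both stages for every chunk. */
long pipelined_time(long chunks, long copy_cost, long convert_cost)
{
    long slow = copy_cost > convert_cost ? copy_cost : convert_cost;
    return copy_cost + convert_cost + (chunks - 1) * slow;
}

/* Baseline: copy everything, then convert everything, serially. */
long serial_time(long chunks, long copy_cost, long convert_cost)
{
    return chunks * (copy_cost + convert_cost);
}
```

For many chunks of equal cost, the pipelined total approaches half the serial total, i.e. the conversion pass hides almost entirely behind the copy, which is the "almost nothing" claim above.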

V-man

MZ
07-25-2002, 05:19 PM
Originally posted by PH:
Promoting NV_vertex_array_range + NV_fence to EXT or ARB seems like a good idea.
I disagree; there is at least one feature that disqualifies VAR as a candidate for a general solution: all vertex data of the entire application must lie in one memory pool. You can't store static geometry in video memory *and* dynamic geometry in AGP memory at the same time. To switch from one pool to another you must call glVertexArrayRangeNV(), and according to the spec that causes a flush.

I really enjoyed VAR, as it was a quick fix that helped the GeForce in a way compatible with old vertex arrays. But it is not a good way to advance OpenGL in the long term.


Originally posted by Cab:
Personally, I think NV_VAR is more flexible but GL2 VAO will be enough.
I'd like to point out that the GL2 Array Object is currently defined as a typeless memory block (just like VAO or VAR), so you can store multiple, differently formatted vertex arrays inside it (again, exactly like in VAO or VAR).

GL2 AO is flexible enough to easily mimic any existing technique:

1. Nvidia's VAR
create 1 large Array Object, apply Direct Access to it, and implement your own memory management within the object's memory range

2. Ati's VAO
create many Array Objects, and use glLoadVertexArrayData() to fill them (dynamically or statically)

3. Ati's VAO + MOB
create many Array Objects, and apply Direct Access to them

4. DirectX Vertex Buffer
create many Array Objects, apply Direct Access to them, restrict yourself to filling each object with only one vertex format, and implement your own buffer renaming + sync just before locking.

In fact, GL2 AO + Direct Access + GLsync is the *most* flexible and effective of all current vertex array solutions.
GL2 nicely unifies multiple techniques, some of which are maximally effective, and others which are simple to set up and use.

To people who don't like Direct Access: to me, VAR is nothing more than one big, exclusive and permanent Direct Access.

[This message has been edited by MZ (edited 07-25-2002).]
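Technique 1 in MZ's list (one large object plus your own memory management) amounts to a sub-allocator over the object's memory range. A trivial aligned bump allocator is enough to show the shape of it; the GL2 Array Object itself never shipped in this form, so only the application-side arithmetic is sketched here:

```c
#include <assert.h>
#include <stddef.h>

/* App-side sub-allocation inside one large, directly-accessed array
 * object: hand out aligned offsets from a bump pointer. (A real
 * allocator would also support freeing; this is the minimal idea.) */
typedef struct {
    size_t base;     /* offset where the object's range begins */
    size_t size;     /* total size of the range                */
    size_t next;     /* first free offset within the range     */
} ArenaAO;

/* Returns an offset into the object, or (size_t)-1 when full.
 * 'align' must be a power of two (e.g. a vertex stride of 16/32). */
size_t ao_alloc(ArenaAO *a, size_t bytes, size_t align)
{
    size_t off = (a->next + align - 1) & ~(align - 1);
    if (off + bytes > a->size) return (size_t)-1;
    a->next = off + bytes;
    return a->base + off;
}
```

This is exactly how VAR users already managed their one big range by hand, which supports MZ's point that VAR is a special case of the more general object + Direct Access model.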

folker
07-25-2002, 11:19 PM
Originally posted by MZ:
...

After thinking about it again, I think your analysis is the right one.

davepermen
07-25-2002, 11:35 PM
i agree with MZ and folker as well.

i think people are a bit scared of direct access because, for them, it's like direct access to framebuffers in dx, for example: you get back a description of the color mode and have to code solutions for each case.

VAR shows that it doesn't have to mean hacking around depending on the format; instead, you get direct access to memory and you specify the data layout yourself.

now, for textures, with their swizzled, compressed, implementation-defined formats:

i think there could be a texture object that is speed_optimized: one you can't access directly, only with gl(Copy)Tex(Sub)Image{123}D, and that's it. that one can be swizzled and compressed etc. however it wants to be.
then there could be a write_optimized one, where you specify the format but it's stored in write-only memory, so you can do old unreal-style procedural textures in there, for example.
then there could be a readwrite-optimized one as well, if you need to read back from it. you can fit all needs, no problem.

but i am for direct access. with vertex arrays we have direct access as well, and var gives it too. doing it in a nice structured way is handy.