Does OpenGL 1.5 support this?

Hey
All I want to know is whether GLSL or OpenGL 1.5 supports the new features of a GeForce 6800, such as full 32-bit floating-point precision, shader programs of over 65,000 lines, shader antialiasing, and a total of ten texture coordinate inputs per pixel.

And an extra question: who defines the next standard in graphics? Is it a consensus between graphics vendors and API developers (Microsoft and the ARB), or the initiative of one of these entities?

Thanks for everything, and sorry for my English. :slight_smile:

GLSL supports:

shader programs of over 65,000 lines, shader antialiasing, and a total of ten texture coordinate inputs per pixel.
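For illustration, here is a minimal GLSL 1.10 fragment shader sketch that reads several texture coordinate sets per fragment (the sampler names are made up; how many sets are available is an implementation limit, gl_MaxTextureCoords, not something the language fixes):

```glsl
// Minimal sketch: blend three textures, each sampled with its own
// per-fragment texture coordinate set from the built-in gl_TexCoord array.
uniform sampler2D tex0;   // assumed sampler uniforms, bound by the app
uniform sampler2D tex1;
uniform sampler2D tex2;

void main()
{
    vec4 a = texture2D(tex0, gl_TexCoord[0].st);
    vec4 b = texture2D(tex1, gl_TexCoord[1].st);
    vec4 c = texture2D(tex2, gl_TexCoord[2].st);
    gl_FragColor = (a + b + c) / 3.0;  // simple average of the three lookups
}
```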

But right now you are not able to declare how many bits a floating-point variable has. GLSL does not care which floating-point precision the graphics card uses. You would need extensions to declare whether floating-point data types are 16, 24, or 32 bits wide. No such extension is available yet, but I'm quite sure there will eventually be a GLSL extension that lets you use half floats alongside "full" floats.
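To make that concrete, in GLSL today you only ever write `float`, and the driver decides what hardware precision backs it. A sketch:

```glsl
// There is no way to request a bit width in core GLSL; the
// implementation picks the hardware precision behind 'float'.
float x = 0.5;     // precision is whatever the driver provides
// half y = 0.5;   // not valid GLSL -- the core language has no half type
```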

GLSL does not care which floating-point precision the graphics card uses.
Actually, it does care. The specification effectively requires at least 24 bits of floating-point precision (it doesn't state it as a number of bits). Since NV30/NV40 don't support 24-bit floats, they have to use 32-bit ones.

I'm quite sure there will eventually be a GLSL extension that lets you use half floats alongside "full" floats.
Not if ATi has anything to say about it. Such an extension only helps nVidia; it does nothing for ATi hardware, which only supports 24-bit floats. As such, supporting such an extension would be pretty silly for them. It's one of the reasons glslang didn't have this to begin with.

Oh, nVidia can create an extension of their own, but it won’t be cross-platform.
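If such a vendor extension did appear, I'd imagine it looking something like this; the extension name and the `half` keyword below are entirely made up for illustration, not a real GLSL extension:

```glsl
// Hypothetical sketch only -- no such GLSL extension exists.
#extension GL_NV_half_float_glsl : enable  // invented name, for illustration

half  intensity;   // hypothetically, a 16-bit float
float position;    // full-precision float as usual
```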

Originally posted by Korval:
Actually, it does care. The specification effectively requires at least 24 bits of floating-point precision (it doesn't state it as a number of bits). Since NV30/NV40 don't support 24-bit floats, they have to use 32-bit ones.

Oh, thanks, I didn't know that. But GLSL doesn't care whether the graphics card uses 24-bit or 32-bit floating-point data types.

Not if ATi has anything to say about it. Such an extension only helps nVidia; it does nothing for ATi hardware, which only supports 24-bit floats. As such, supporting such an extension would be pretty silly for them. It's one of the reasons glslang didn't have this to begin with.

Oh, nVidia can create an extension of their own, but it won't be cross-platform.
Yes, I was thinking of an NV extension, not an ARB extension.
