View Full Version : need to pick a GPU...
04-13-2006, 10:26 AM
Does anyone know the value of the following constants for the latest ATI and NVIDIA hardware. Specifically, Radeon x1800, Radeon x1900, FireGL V7300, GeForce 7800, GeForce 7900, and Quadro 4500:
All of these cards support vertex texture fetches, right? Please correct me if I'm wrong.
Also, does anyone know if the FireGL card supports long vertex and fragment programs (e.g., 65536 instructions)?
Please chime in if you know of any GLSL limitations or bugs with these cards.
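The constants in question can be read back at run time with glGetIntegerv, which is usually the most reliable way to settle this per card/driver. A minimal sketch, assuming a current OpenGL context has already been created by your windowing toolkit:

```c
/* Sketch: query the GLSL texture-unit limits discussed above.
   Assumes a current OpenGL 2.0 context already exists (GLUT, SDL, etc.). */
#include <stdio.h>
#include <GL/gl.h>

void print_glsl_limits(void)
{
    GLint tex_units = 0, vtx_tex_units = 0, combined = 0;

    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &tex_units);
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &vtx_tex_units);
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &combined);

    printf("Max fragment texture image units: %d\n", tex_units);
    /* 0 here means the driver exposes no hardware vertex texture fetch. */
    printf("Max vertex texture image units:   %d\n", vtx_tex_units);
    printf("Max combined texture image units: %d\n", combined);
}
```

Note this reports what the driver exposes, not necessarily whether the feature is hardware accelerated.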
04-13-2006, 12:00 PM
According to this:
geforce 7800 :
Max. tex. image units 16
Max. vertex tex. image units 4
04-13-2006, 12:28 PM
Thanks for pointing out that list. That answers my questions about the NVidia cards...
Too bad the list doesn't include the Radeon x1800 and x1900. I've read a lot of conflicting statements about whether those two GPUs support vertex texture fetches. Some websites claim that they do, others claim that they don't. Does anyone know the specifics about these cards?
04-13-2006, 12:34 PM
To clarify, I'm really asking about hardware-accelerated vertex texture fetches. I know the last generation of Radeon cards supported texture fetches through software emulation, which is not what I'm looking for...
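For reference, this is the kind of shader the question is about: sampling a texture from the vertex stage, here to displace a heightfield. The uniform name is just an example. On hardware without VTF support this shader will either fail to link or push the whole pipeline into software:

```glsl
// Sketch: GLSL vertex texture fetch, displacing vertices by a heightmap.
uniform sampler2D heightMap;   // assumed name; any 2D sampler works

void main()
{
    // Vertex shaders have no derivatives, so an explicit LOD is required.
    float h = texture2DLod(heightMap, gl_MultiTexCoord0.st, 0.0).r;
    vec4 displaced = gl_Vertex + vec4(0.0, h, 0.0, 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * displaced;
}
```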
04-13-2006, 01:34 PM
There's no VTF support on any ATI hardware, including the X1K series. You can do R2VB though, which solves about the same set of problems, but that's only available in D3D at this point.
04-13-2006, 07:36 PM
In place of vertex texture fetches, does ATI support VBOs for per-vertex shader attributes? I think I remember trying VBOs on a Radeon 9800 with a vertex shader and the driver put the rendering in software emulation.
04-13-2006, 08:05 PM
In place of vertex texture fetches, does ATI support VBOs for per-vertex shader attributes? I think I remember trying VBOs on a Radeon 9800 with a vertex shader and the driver put the rendering in software emulation.
VBOs have been working on ATi's cards longer than they have on nVidia's.
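For the record, feeding a generic per-vertex attribute from a VBO looks like this under ARB_vertex_buffer_object. A rough sketch; the function names and the assumption of one tightly packed vec3 per vertex are illustrative, and on Windows the ARB entry points must first be loaded via wglGetProcAddress:

```c
/* Sketch: sourcing a generic vertex shader attribute from a VBO.
   "attribLoc" is assumed to come from glGetAttribLocationARB on a
   linked program object. */
#include <GL/gl.h>
#include <GL/glext.h>

void bind_attrib_vbo(GLuint vbo, GLint attribLoc,
                     const GLfloat *data, GLsizei bytes)
{
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, bytes, data, GL_STATIC_DRAW_ARB);

    /* One vec3 per vertex, tightly packed, offset 0 into the buffer. */
    glEnableVertexAttribArrayARB(attribLoc);
    glVertexAttribPointerARB(attribLoc, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
}
```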
04-14-2006, 12:19 AM
For the X1800 XL (I've checked the values on my own card) GL_MAX_TEXTURE_IMAGE_UNITS is 16 and GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS is 0 (which is consistent with VTF not being supported).
One limitation of the X1K cards (I encountered it recently) is the lack of stencil buffer attachment support when using framebuffer objects.
On the other hand, X1K cards have fewer restrictions on looping: nesting is still limited to 4 levels, as with NVidia, but the number of iterations available is at least 1024 (1024 being the biggest number I've tested), and there is no restriction on the number of dynamically executed instructions. Beware, though: if your shader falls into an "infinite" (or at least very, very long) loop, you're likely to have to restart Windows :) (which has happened to me several times).
ATI's drivers, however, show less stability when compiling particularly long extreme-case shaders. I've been able to crash RenderMonkey (and my application too) several times with such shaders. I can only guess at the actual reason, though I think it was the instruction count limit (512) being exceeded.
One more piece of information: X1K cards have 128 4-component temporary registers (see the ATI SDK, March 2006) compared to 32 on the NVidia 6xxx series (this has been tested). However, I've got no information on whether the NVidia 7xxx series has more or the same number of temporary registers.
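To illustrate the looping limits described above, here is a sketch of a fragment-shader loop that stays comfortably inside both vendors' constraints (compile-time bound, no nesting, far below the ~1024 iterations observed on X1K). The uniform names are assumptions:

```glsl
// Sketch: a bounded fragment-shader loop (simple box filter along one axis).
uniform sampler2D src;
uniform vec2 texelStep;   // assumed uniform: 1.0 / texture dimensions

void main()
{
    vec4 sum = vec4(0.0);
    for (int i = 0; i < 16; ++i)   // fixed bound, well under observed caps
        sum += texture2D(src, gl_TexCoord[0].st + float(i) * texelStep);
    gl_FragColor = sum / 16.0;
}
```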