Which graphics card to get started doing math on GPUs

Hi there,

I’m in need of some help.

I’m an old hand at software in various fields, but the latest graphics hardware and OpenGL have not been too visible on my radar screen.

Now I’m involved in a research project in which we need to do some heavy number crunching. At present our calculations take about two weeks with a GNU Octave/Matlab script. We need to cut this down to a fraction of that.

The math itself is rather simple: it involves (all in floating point) some convolutions, transformations, and projections, all of which I think could easily be expressed in terms of graphics operations, which would allow us to tap into the horsepower of graphics cards.
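
To make this concrete, here’s roughly what I imagine a convolution would look like on the GPU: the input array is uploaded as a texture, a fragment shader sums weighted neighbour texels, and drawing a single quad runs the filter over the whole array. This is only a sketch of what I have in mind, assuming an OpenGL 2.0/GLSL setup with GLEW; the box-filter weights and the omitted texture/quad setup are placeholders:

    // Sketch only: a 3x3 convolution written as a GLSL fragment shader.
    // Assumes an OpenGL 2.0 driver; GLEW resolves the entry points.
    #include <GL/glew.h>
    #include <GL/glut.h>

    static const char* kConvolveSrc =
        "uniform sampler2D src;                                      \n"
        "uniform vec2 texel;  // (1/width, 1/height) of the texture  \n"
        "void main() {                                               \n"
        "    vec2 p = gl_TexCoord[0].st;                             \n"
        "    vec4 sum = vec4(0.0);                                   \n"
        "    // Placeholder 3x3 box filter; real weights go here.    \n"
        "    for (int j = -1; j <= 1; ++j)                           \n"
        "        for (int i = -1; i <= 1; ++i)                       \n"
        "            sum += texture2D(src, p + vec2(i, j) * texel);  \n"
        "    gl_FragColor = sum / 9.0;                               \n"
        "}                                                           \n";

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutCreateWindow("convolve");   // just to get a GL context
        glewInit();

        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, &kConvolveSrc, 0);
        glCompileShader(fs);
        GLuint prog = glCreateProgram();
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        glUseProgram(prog);
        // From here: upload the input array with glTexImage2D, draw one
        // textured quad sized to the output, and read the result back
        // with glReadPixels (or keep it on the card for the next pass).
        return 0;
    }

Floating-point output would additionally need a float buffer or float texture target (the NV_float_buffer / ARB_texture_float sort of thing), which is exactly the part I can’t yet confirm across vendors.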

Obviously I’ve been searching the net for some weeks, and I think I’m on the right track, but I still feel I could do with some help.

I need to catch up quickly and I think that the best way is to get my hands dirty on some actual hardware and start coding.

So my questions are:

Is it likely that this type of operation (floating-point convolutions, transformations, and projections) can be done through OpenGL?

Are there better alternatives to OpenGL? Obviously, in the long run a solution that does not depend on a specific card would be great, but I need to start somewhere, so I guess I could go with vendor-specific extensions; at least NVIDIA seems to support floating-point frame buffers.

Which graphics card should I purchase?

How can I find out which graphics cards support which features (floating-point frame buffers, etc.)?

(I know I can query a card for its features, but I would feel silly purchasing something just to run the query and find out that it doesn’t support an essential feature.)

One essential feature is the ability to project an array of values (which I plan to represent as a texture rectangle) onto another array (a frame buffer) through a (perspective) transformation. What is essential is that antialiasing is done properly. One thing that seems to be supported differently by different cards is just how this is done, and it seems to be difficult to find out the details.
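
To show what I mean, the pass itself would be something like the following fixed-function sketch (using a plain 2-D texture for simplicity; an actual texture rectangle target would be GL_TEXTURE_RECTANGLE_ARB with unnormalised coordinates). The part I can’t verify from the specs is the antialiasing: as far as I can tell, multisampling may not be combinable with floating-point buffers on current cards, and that is precisely the kind of detail I need to pin down per card:

    // Sketch only: project one array (bound as a texture) onto another
    // (the frame buffer) through a perspective transform.  Assumes a GL
    // context is current and the data is already on the card as `tex`.
    #include <GL/gl.h>
    #include <GL/glu.h>

    void projectArray(GLuint tex)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(45.0, 1.0, 0.1, 100.0);   // fov, aspect, near, far

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(0.0f, 0.0f, -3.0f);         // position the source plane

        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, tex);
        // Linear filtering handles samples falling between texels; edge
        // antialiasing needs a multisampled buffer (glEnable(GL_MULTISAMPLE)),
        // which may not be available for floating-point formats.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glBegin(GL_QUADS);                       // one quad carries the array
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
        glEnd();
        // Read the result back with glReadPixels, or keep it on the card
        // as the input to the next pass.
    }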

Money is no object; anything below five grand I can sign off on myself. What I need (I think) is a card with 512 MB of RAM (I might be able to get away with 128/256 MB) and some heavy-duty GPUs.

I’ve studied the web sites of NVIDIA, ATI, and 3DLABS, but I’m confused. From what I’ve gathered, the 3DLABS Wildcat Realizm 800 seems like a good choice in terms of memory and gigaflops, but my biggest concern is the software interface. I’ve not been able to find out whether the Wildcat supports floating-point frame buffers, textures, and convolutions. Some co-workers I talked to said they thought that ‘sure, OpenGL 2.0 guarantees that’, but I’ve skimmed through the 2.0 spec and was not able to convince myself that this is the case.

Of course I’ve contacted the various card manufacturers, but so far no response; maybe I’m just too impatient. I’ve also ordered a few books, but meanwhile I’ve posted this here.

Any suggestions, opinions, pointers would be highly appreciated.

Reg Kusti

There were several sessions on this topic at the SIGGRAPH conference this summer. The conference proceedings would give a good overview of what’s going on in the field. Check out siggraph.org

In regard to graphics cards, most of the latest high-end cards would be adequate, such as the FireGL, Quadro, or Wildcat. It sounds like the OpenGL drivers are better on the NVIDIA cards. To check out what’s on a card before purchasing, try http://www.delphi3d.net/hardware/listreports.php or another site like it.
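
And once a card is actually in the machine, the query you mentioned is only a few lines; something like this (GLUT used just to get a context) dumps the strings you’d grep for things like ARB_texture_float or NV_float_buffer:

    // Dump what the installed card/driver actually exposes.
    #include <GL/glut.h>
    #include <cstdio>

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutCreateWindow("caps");   // a context must exist before querying
        printf("Vendor:     %s\n", (const char*)glGetString(GL_VENDOR));
        printf("Renderer:   %s\n", (const char*)glGetString(GL_RENDERER));
        printf("Version:    %s\n", (const char*)glGetString(GL_VERSION));
        printf("Extensions: %s\n", (const char*)glGetString(GL_EXTENSIONS));
        return 0;
    }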

If it were me, I’d go with a (not yet released?) NVIDIA SLI solution: two NV40-based cards. NV40 supports the largest feature set and is the most future-proof at the moment. I’d get, e.g., two 6800 GTs and an SLI-capable motherboard.

I’d give it a try in C/C++ first. I do a lot of image processing / computer vision work and see a great speedup when I convert a Matlab prototype to C++, especially when a lot of RAM is in use.
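
As a rough idea of what such a port looks like: a 3x3 ‘valid’-region filter that Octave/Matlab would express with conv2 becomes a plain loop nest like this (sizes illustrative; the kernel flip needed for a true convolution is omitted for brevity):

    // Straight C++ version of a 3x3 'valid' filter over an h x w image
    // stored row-major in a flat vector.
    #include <vector>

    std::vector<float> filter3x3(const std::vector<float>& img,
                                 int w, int h, const float k[9])
    {
        std::vector<float> out((w - 2) * (h - 2), 0.0f);
        for (int y = 1; y < h - 1; ++y)
            for (int x = 1; x < w - 1; ++x) {
                float s = 0.0f;
                for (int j = -1; j <= 1; ++j)
                    for (int i = -1; i <= 1; ++i)
                        s += k[(j + 1) * 3 + (i + 1)]
                           * img[(y + j) * w + (x + i)];
                out[(y - 1) * (w - 2) + (x - 1)] = s;
            }
        return out;
    }

No interpreter overhead and no image-sized temporaries; that alone is often worth an order of magnitude before the GPU even enters the picture.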