Part of the Khronos Group
OpenGL.org


Thread: Using hardware transformations for matrix operations

  1. #1
    Intern Contributor
    Join Date
    Feb 2000
    Posts
    89

    Using hardware transformations for matrix operations

    As is often the case when I post here, I really should just try this out myself, but I'm too busy/lazy to bother with it before getting a little feedback. So on to the question...

    If I have a graphics card that does geometry acceleration in hardware, is it worth using that card for some of my fundamental matrix operations (ones that aren't necessarily related to graphics)? For example, could I get a performance gain by passing all my 4x4 matrix multiplications to the card with glMultMatrix and then reading back the result with glGet*? I suspect that any gains would be killed by the transfer to and from the card. Plus I've heard that the hardware transforms on the card aren't necessarily much faster than doing the math on the CPU; they mainly help by freeing the CPU for other work.
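    For reference, the round trip described above amounts to computing a 4x4 product that is trivial to do on the CPU. Here's a minimal sketch in plain C of that CPU-side multiply, using OpenGL's column-major storage convention; the GL calls shown in the comment are the hypothetical equivalent and would need a valid context to run.

    ```c
    #include <stdio.h>

    /* Plain-C equivalent of the product glMultMatrixf computes:
       result = a * b, with all matrices in OpenGL's column-major
       order, i.e. element (row r, col c) lives at index c*4 + r. */
    static void mat4_mult(const float a[16], const float b[16], float result[16])
    {
        for (int c = 0; c < 4; ++c)
            for (int r = 0; r < 4; ++r) {
                float sum = 0.0f;
                for (int k = 0; k < 4; ++k)
                    sum += a[k * 4 + r] * b[c * 4 + k];
                result[c * 4 + r] = sum;
            }
    }

    int main(void)
    {
        /* The GL round trip being asked about (needs a context; shown
           only for comparison):
             glMatrixMode(GL_MODELVIEW);
             glLoadMatrixf(a);
             glMultMatrixf(b);
             glGetFloatv(GL_MODELVIEW_MATRIX, result);
           The CPU version below computes the same product with no
           driver round trip or readback. */
        float identity[16]  = { 1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,0,1 };
        float translate[16] = { 1,0,0,0,  0,1,0,0,  0,0,1,0,  2,3,4,1 }; /* translate by (2,3,4) */
        float out[16];

        mat4_mult(identity, translate, out);
        /* Identity * T == T, so the translation column survives intact. */
        printf("out[12..14] = %g %g %g\n", out[12], out[13], out[14]);
        return 0;
    }
    ```

    The glGetFloatv at the end is the expensive part: it forces the driver to synchronize before handing the matrix back, which is exactly the transfer cost the question worries about.
    
    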

    Still, I'd be interested to hear results if anyone has tried this, particularly if it's been compared against optimized CPU matrix code like the Intel MKL, or against systems using shared memory for graphics. Otherwise I guess I'll have to get off my butt and try it myself.

    -Rob

  2. #2
    Advanced Member Frequent Contributor
    Join Date
    Feb 2000
    Location
    London
    Posts
    503

    Re: Using hardware transformations for matrix operations

    Short answer - no. Using the hardware for this sort of thing is a major performance killer: reading the result back with glGet* forces the driver to flush and synchronize, which stalls the whole pipeline.

    I've posted the long answer several times in other threads; a search should turn it up.
