
Type: Posts; User: Nikki_k



  1. So, what kinds of graphics hardware do you have...

    So, what kinds of graphics hardware do you have in these systems? This sounds like a driver problem.
  2. Two questions: 1. Does the window have the...

    Two questions:

    1. Does the window have the TOPMOST style? If I remember correctly this is an extended style and not part of dwStyle.
    2. Does the message box have the main window as owner or just...
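
    Not from the original post, but a minimal Win32 sketch of the distinction being asked about (class and window names are made up): the topmost attribute is an extended style passed in dwExStyle, and a message box only reliably stays in front of a topmost window if that window is its owner.

    #include <windows.h>

    /* Sketch only: WS_EX_TOPMOST goes into the dwExStyle parameter of
       CreateWindowEx, not into dwStyle. */
    static HWND CreateTopmostWindow(HINSTANCE hInst)
    {
        return CreateWindowExW(
            WS_EX_TOPMOST,               /* dwExStyle: extended styles    */
            L"MyGLWindowClass",          /* hypothetical registered class */
            L"GL window",
            WS_OVERLAPPEDWINDOW,         /* dwStyle: regular styles       */
            CW_USEDEFAULT, CW_USEDEFAULT, 800, 600,
            NULL, NULL, hInst, NULL);
    }

    /* Passing the main window as owner keeps the message box from ending
       up behind it when the main window is topmost. */
    static void ShowOwnedMessageBox(HWND owner)
    {
        MessageBoxW(owner, L"Something went wrong.", L"Error", MB_OK);
    }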
  3. Is there any place I can report bugs to Intel? ...

    Is there any place I can report bugs to Intel?

    I also ran into another one recently: a GLSL 1.3 shader refused to accept the GL_ARB_uniform_buffer_object extension, although it was reported as...
  4. GLSL compiler problem with shader storage blocks in latest Intel driver

    I encountered this while running GZDoom after updating my machine to the latest Intel driver.

    That program shows an error saying an unsized array is not allowed at the end of a shader storage buffer,...
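
    For reference (not the actual GZDoom shader; the block and member names are made up), this is the kind of declaration the error message refers to. A run-time sized array is only legal as the last member of a shader storage block, and the driver in question rejects even that.

    /* Hypothetical minimal example of the construct, written as a C string
       the way it would be passed to glShaderSource. */
    static const char *lightBlockSrc =
        "layout(std430, binding = 0) buffer LightSSBO\n"
        "{\n"
        "    vec4 lights[];   // unsized array, must be the last member\n"
        "};\n";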
  5. Problem with uniform buffer array and older AMD hardware

    I am using a uniform buffer to pass a larger number of light sources to a shader.

    I define the buffer as follows:



    layout(std140) uniform LightBuffer
    {
        vec4 lights[NUM_LIGHTS];
    };
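
    Not part of the original post, but a hedged sketch of how such a std140 block is typically filled and attached on the C side (GLEW as the loader and the value of NUM_LIGHTS are assumptions):

    #include <GL/glew.h>

    #define NUM_LIGHTS 64   /* assumption: matches the GLSL definition */

    /* Upload the light data into a uniform buffer, attach it to binding
       point 0, and point the LightBuffer block at that binding point. */
    static void SetupLightBuffer(GLuint program,
                                 const float lights[NUM_LIGHTS][4])
    {
        GLuint ubo;
        glGenBuffers(1, &ubo);
        glBindBuffer(GL_UNIFORM_BUFFER, ubo);
        glBufferData(GL_UNIFORM_BUFFER, sizeof(float) * 4 * NUM_LIGHTS,
                     lights, GL_DYNAMIC_DRAW);
        glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);

        GLuint blockIndex = glGetUniformBlockIndex(program, "LightBuffer");
        glUniformBlockBinding(program, blockIndex, 0);
    }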
  6. Not really. You can pass the index through the...

    Not really. You can pass the index through the buffer pointer; the same rules apply here as for setting an index with glVertexAttribPointer. For example, I am doing this to iterate through an index...
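
    As an illustration of the pointer-as-offset rule mentioned above (the vertex layout is made up): with a buffer object bound, the last argument of glVertexAttribPointer is interpreted as a byte offset into that buffer rather than a client-memory pointer.

    #include <stddef.h>
    #include <GL/glew.h>

    typedef struct { float pos[3]; float uv[2]; } Vertex;  /* made-up layout */

    /* With a VBO bound to GL_ARRAY_BUFFER, the "pointer" arguments below
       are really byte offsets into that buffer. */
    static void SetupAttribs(GLuint vbo)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (const void *)offsetof(Vertex, pos));
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (const void *)offsetof(Vertex, uv));
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);
    }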
  7. That context still isn't proper. For example,...

    That context still isn't proper.

    For example, when I create a GL 3.1 context on my GeForce 550 Ti I still have all the extensions up to GL 4.5, which means that despite the low version number all...
  8. It looks like you didn't add the library file for...

    It looks like you didn't add the library file for glfw as all missing symbols relate to that.
  9. From looking at the code to bind the textures and...

    From looking at the code to bind the textures and set the samplers, I think that at validation time both samplers do indeed point to the same texture unit. You should make these GL.Uniform calls...
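
    Roughly what that advice amounts to, sketched in plain C rather than the original bindings (the uniform names are assumptions): give each sampler uniform its own texture unit before the program is validated.

    #include <GL/glew.h>

    /* Bind each texture to its own unit and point the corresponding sampler
       uniform at that unit; two samplers of different types left pointing at
       the same unit is one thing glValidateProgram rejects. */
    static void BindTextures(GLuint program, GLuint colorTex, GLuint normalTex)
    {
        glUseProgram(program);

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glUniform1i(glGetUniformLocation(program, "uColorTex"), 0);

        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, normalTex);
        glUniform1i(glGetUniformLocation(program, "uNormalTex"), 1);
    }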
  10. Correct, but let's not forget that this only is...

    Correct, but let's not forget that this is only true on Windows and Linux, where you are mostly guaranteed to have GL 3.0 with most modern stuff available through extensions.

    On MacOSX, that's not...
  11. Replies: 3 | Views: 500

    I had this once, too, turned out that all the...

    I had this once, too; it turned out that all the fans were full of dust and no longer able to do their work. Cleaning everything fixed the problem.
  12. Thread: C or C++

    by Nikki_k
    Replies: 8 | Views: 1,023

    'Stupid' doesn't even begin to describe your post...

    'Stupid' doesn't even begin to describe your post - because - well - nobody forces you to USE that bloat in the first place. The naked language without this shit is actually quite nice.
  13. Thread: C or C++

    by Nikki_k
    Replies: 8 | Views: 1,023

    That entirely depends on the code you write. ...

    That entirely depends on the code you write.

    C++ makes it ridiculously easy to write bloated, poorly performing code, while in C you need to think more to make this mistake - but it's not impossible.
    In...
  14. You unbind the GL_ELEMENT_ARRAY_BUFFER while your...

    You unbind the GL_ELEMENT_ARRAY_BUFFER while your VAO is still bound. You need to keep this binding in your VAO or glDrawElements does not know what buffer to draw from.
    Remember: Unlike...
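
    A minimal sketch of the ordering being described (handles and layout are placeholders): the GL_ELEMENT_ARRAY_BUFFER binding is stored inside the VAO, so unbind the VAO first if you want to unbind the index buffer afterwards.

    #include <GL/glew.h>

    static void SetupVao(GLuint vao, GLuint vbo, GLuint ibo)
    {
        glBindVertexArray(vao);

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
        glEnableVertexAttribArray(0);

        /* This binding is captured by the VAO; glDrawElements sources its
           indices from it. */
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);

        glBindVertexArray(0);                      /* unbind the VAO first... */
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);  /* ...then this is safe    */
    }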
  15. Replies: 4 | Views: 1,038

    I got the time before glDraw* and after it, using...

    I got the time before glDraw* and after it, using the CPU's RDTSC, then added up the intervals for all draw calls I made.
    Of course it only measures the time this needs in my main thread, but...
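
    Not the poster's actual code, but roughly how such a measurement could look in C, using the compiler's RDTSC intrinsic; as noted above, it only captures time spent in the calling thread, not what the GPU does later.

    #include <stdint.h>
    #if defined(_MSC_VER)
    #include <intrin.h>      /* __rdtsc on MSVC      */
    #else
    #include <x86intrin.h>   /* __rdtsc on GCC/Clang */
    #endif
    #include <GL/glew.h>

    static uint64_t g_drawCycles;  /* CPU cycles accumulated over all draws */

    /* Read the time stamp counter before and after the draw call and add up
       the intervals for the frame. */
    static void TimedDrawElements(GLenum mode, GLsizei count, GLenum type,
                                  const void *indices)
    {
        uint64_t t0 = __rdtsc();
        glDrawElements(mode, count, type, indices);
        g_drawCycles += __rdtsc() - t0;
    }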
  16. Replies: 4 | Views: 1,038

    Even worse, draw call overhead can differ greatly...

    Even worse, draw call overhead can differ greatly between drivers.

    I recently ran a benchmark to analyze this exact problem, and to my never-ending surprise the test shows that on...
  17. Replies: 5 | Views: 1,036

    Running out of memory can also mean running out...

    Running out of memory can also mean running out of address space. In a 32-bit process under Windows, for example, your application's entire address space is 2 GB, and once that's used up anything can produce...
  18. Replies: 3 | Views: 787

    I once saw that, too, on an older AMD driver. The...

    I once saw that, too, on an older AMD driver. The only remedy was to add the precision qualifier, redundant as it was...
  19. Replies: 0 | Views: 325

    GLU tessellator and core profiles

    Having to convert some older GL code that uses the GLU tessellation functions, what's the state of this on various platforms?

    Can those parts of GLU which do not access OpenGL itself still be used with...
  20. Apple's OpenGL support is a bit odd. They do...

    Apple's OpenGL support is a bit odd.

    They do support GL 4.1 - but only as core profile. If you need ANY compatibility features you will be stuck with 2.1.
    And since the core profile was...
  21. Sadly, you have to find a way to work around the...

    Sadly, you have to find a way to work around the buffer upload bottleneck. I had a similar issue, namely that frequent buffer uploads offer utterly terrible performance across the board with all...
  22. Just for the record: On NVidia I get a report...

    Just for the record:

    On NVidia I get a report of approx. 250 handles being used when starting an application that opens a GL context. It seems to be an issue with AMD's driver. But as long as all...
  23. Yeah, I saw that and on NVidia it's certainly...

    Yeah, I saw that, and on NVidia it's certainly true. I still wonder what's up with Intel here: does it just silently fail if the uniform block gets too large, or is the returned info bogus?
  24. What is GL_MAX_UNIFORM_BLOCK_SIZE being measured in?

    I'm not sure based on my observations. Is it in bytes or in uniforms in the buffer?

    On NVidia it looks like bytes: I get a return value of 65536, and I can't define a vec4 array with 4096 entries...
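
    For what it's worth, the limit can be queried like any other implementation constant; the GL spec defines it in basic machine units, i.e. bytes:

    #include <stdio.h>
    #include <GL/glew.h>

    /* GL_MAX_UNIFORM_BLOCK_SIZE is reported in bytes, so dividing by the
       16-byte size of a vec4 gives the largest vec4 array one block holds. */
    static void PrintUniformBlockLimit(void)
    {
        GLint maxBlockBytes = 0;
        glGetIntegerv(GL_MAX_UNIFORM_BLOCK_SIZE, &maxBlockBytes);
        printf("max uniform block size: %d bytes (%d vec4s)\n",
               maxBlockBytes, maxBlockBytes / 16);
    }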
  25. Replies: 1 | Views: 753

    That card is still based on the older Fermi...

    That card is still based on the older Fermi architecture, which does not support this extension.
Results 1 to 25 of 65