Real GL_MAX_VIEWPORT_DIMS

My GL_MAX_VIEWPORT_DIMS is 8192 x 8192, but whenever I set the viewport larger than 6500 x 6500 I get a garbage image. If I render at 7000 x 7000 I even get an empty image (totally transparent). Sometimes I get a crash. And I am rendering an empty scene, just a black background.

How can I determine the real GL_MAX_VIEWPORT_DIMS and avoid the crash?
I need to know that real value so that, when I need to render a larger image, I can render it as a series of smaller viewport tiles.

I noticed that if the pixel format has NSOpenGLPFASampleBuffers 1 and NSOpenGLPFASamples 2, I cannot get a good image larger than 4000 x 4000. So the calculation presumably involves several variables.

My graphics card is an NVIDIA GeForce GT 330M with 512 MB VRAM, OpenGL version 2.1 NVIDIA-1.6.24. My display size is 1900 x 1200. I run Mac OS X 10.6.5.

Why do you need a viewport larger than your screen? I think viewports are connected to display screens.

If you need to render to large images, try using an FBO.

The viewport size is independent of the window size (though in most cases you couple them to get a “normal” image).

You may want a viewport transformation (which is controlled by glViewport()) that either enlarges or shrinks the output independently of the window… for instance, you might implement some kind of picture-in-picture effect this way, as in the sketch below.
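For instance, a picture-in-picture pass might look roughly like this (winWidth, winHeight and the two draw calls are placeholders, not anything from this thread):

// main view: the whole window
glViewport(0, 0, winWidth, winHeight);
drawScene();

// inset view: a small rectangle in the lower-left corner of the same framebuffer
glViewport(10, 10, winWidth / 4, winHeight / 4);
drawSceneFromOtherCamera();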

One clarification is needed, though.
LeoGL: do you actually refer to a call to glViewport(0, 0, 6500, 6500), or do you try to create an FBO that is that large?

Actually, if you are using such a large viewport for rendering to a window, then it is not a surprise that it doesn’t work, as the pixel ownership test should discard anything larger than the screen.

If you use FBOs to render to a texture, then it should in fact work; if not, it is most probably a driver bug.

He said,

…I get a garbage image… …Sometimes I get a crash.

so it definitely sounds like a driver bug.

Thank you.
I really need to save the OpenGL scene to a big image.
Actually I “hide” the window (which sounds like a dirty trick, I know), enlarge the window (together with its glView) to the maxDim (8192 x 8192), draw the glView once only, then get the result with glReadPixels and save the image. I get garbage starting at 6500 x 6500. But if I turn on supersampling with NSOpenGLPFASamples, I get garbage starting at 4000 x 4000.

Anyway, I have already coded the render-to-tiles method, but I don’t really know when I have to apply it, since I can’t really know the real max viewport size. I can’t just say, e.g., “if the final image is > maxDim/2, divide the viewport into tiles”… Got it?
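For reference, the queryable upper bounds are GL_MAX_VIEWPORT_DIMS, GL_MAX_TEXTURE_SIZE and GL_MAX_RENDERBUFFER_SIZE_EXT. A rough sketch of using them to decide when to fall back to tiling (imageWidth/imageHeight are placeholders, and the idea of leaving a safety margin is only a workaround for driver issues, not something the spec defines):

GLint maxViewport[2] = { 0, 0 };
GLint maxTexture = 0, maxRenderbuffer = 0;
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, maxViewport);
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexture);
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE_EXT, &maxRenderbuffer);

// the usable single-pass size is at most the smallest of the three limits
GLint maxDim = maxViewport[0];
if (maxViewport[1]  < maxDim) maxDim = maxViewport[1];
if (maxTexture      < maxDim) maxDim = maxTexture;
if (maxRenderbuffer < maxDim) maxDim = maxRenderbuffer;

if (imageWidth > maxDim || imageHeight > maxDim) {
    // fall back to the render-to-tiles path
}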

I am going to try the FBO. I have some questions.
a) As I understand it, I still have to set the viewport, and its maximum size GL_MAX_VIEWPORT_DIMS is the same. Right?
b) I will close the window and discard the current glView, so, without a glView, when I bind the FBO, which OpenGL context should I make current?
c) Should I use a different pixelFormat than the one I use for the glView?

I really need to save the OpenGL scene to a big image.
Actually I “hide” the window (which sounds like a dirty trick, I know), enlarge the window (together with its glView) to the maxDim (8192 x 8192), draw the glView once only, then get the result with glReadPixels and save the image

This is indeed a dirty, evil hack that deserves not to work! :smiley:

I doubt that you can “just” enlarge a window beyond the screen size and then expect it to return valid pixel data from the invisible out-of-screen parts. Also, for hidden windows GL’s ownership test might just return “false” for every pixel you try to render.

If you want a clean, working solution, always use tiled rendering into an offscreen FBO whenever you want to exceed the maximum viewport/FBO dimensions.

Apart from that, you should create a bug report with a small demonstration and forward it to your graphics card vendor. Maybe they have a bug in their implementation because nobody has ever tested such a large viewport setting.

Thank you skynet,
do you have an answer to the three questions I posed above?
I have used an FBO once (so I am not very skilled with FBOs), but always within a glView’s drawRect: method, to draw the result to a texture. So the OpenGL context was set, and the pixelFormat too… Here instead I have no glView, so can I simply define and draw within the FBO? No need to define a context and pixelFormat? Also, should I attach it with glFramebufferTexture2DEXT or glFramebufferRenderbufferEXT?

b) I will close the window and discard the current glView, so, without a glView, when I bind the FBO, which OpenGL context should I make current?

I don’t know what a “glView” is, but if that’s anything like an OpenGL context, then you don’t have OpenGL without one. If you actually close the window, I’m fairly sure your rendering context goes with it. And without a valid rendering context, you can’t talk to OpenGL.
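For what it’s worth, on the Cocoa side an OpenGL context does not strictly need a view in order to be made current; a rough sketch (the attribute list is only an example):

NSOpenGLPixelFormatAttribute attrs[] = {
    NSOpenGLPFAAccelerated,
    NSOpenGLPFAColorSize, 32,
    NSOpenGLPFADepthSize, 24,
    0
};
NSOpenGLPixelFormat *pf = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];
NSOpenGLContext *ctx = [[NSOpenGLContext alloc] initWithFormat:pf shareContext:nil];
[ctx makeCurrentContext]; // FBO calls now have a current context, with no view attached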

Thank you Alfonse.
Yes, the glView is the OpenGL view with the OpenGL context.
Here they suggest not drawing in the glView but in the FBO.
So, as I understand it, to save the hi-res image, I should do the following. Please confirm.

a) I close the window.
b) I do not discard the glView.
c) I create the FBO.
d) I keep calling the glView’s drawRect method, but now, first I bind the FBO, then I set the viewport to the hi-res size (no matter the size of the glView), then I draw the scene.

To create the FBO

I create the FBO and attach to it a texture with the hi-res size:

glGenFramebuffersEXT(1, &mGLFrameBuffer);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, mGLFrameBuffer);

glGenTextures(1, &mTexBind);
glBindTexture(GL_TEXTURE_2D, mTexBind);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, hr_width, hr_height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, mTexBind, 0);

// verify completeness, then unbind
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) {
    // handle the incomplete-framebuffer case
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

To draw

// This is the glView’s drawRect method, now modified
// to draw in the FBO when I need to save the scene to a file

- (void)drawRect:(NSRect)rect
{
    // I bind the FBO, then I set the viewport to the hi-res size
    if (saveImage) {
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, mGLFrameBuffer);
        glViewport(0, 0, hr_width, hr_height); // I set the hi-res size, no matter the glView’s size, right?
    }
    else {
        // if I don’t save the scene to an image, I just draw with the glView size
        glViewport(0, 0, width, height);
    }

    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // I set the projection and modelView matrices and I draw the scene.
    // Then I unbind the FBO and get the image with glTexSubImage2D. Correct?

    if (saveImage) {
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

        // should I bring the scene to my image buffer this way?
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, hr_width, hr_height, GL_BGRA_EXT, ARGB_IMAGE_TYPE, mImageBufferPtr);
    }

    [[NSOpenGLContext currentContext] flushBuffer];
}
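One note on the last step: glTexSubImage2D copies data from client memory into a texture, not out of it. A rough sketch of reading the rendered pixels back instead, done while the FBO is still bound and assuming mImageBufferPtr is large enough (the GL_BGRA / GL_UNSIGNED_BYTE combination is only an example, not necessarily what ARGB_IMAGE_TYPE stands for):

// before glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0): read straight from the FBO
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);   // source is the FBO’s color attachment
glPixelStorei(GL_PACK_ALIGNMENT, 1);      // avoid row-padding surprises
glReadPixels(0, 0, hr_width, hr_height, GL_BGRA, GL_UNSIGNED_BYTE, mImageBufferPtr);

// or, after unbinding the FBO, pull the pixels out of the attached texture
glBindTexture(GL_TEXTURE_2D, mTexBind);
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_BYTE, mImageBufferPtr);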

Ok, I succeeded. That’s great!!!
Now I have to deal with a new problem: the FBO has no antialiasing.
I am trying to make it multisampled. Sometimes it works, sometimes it crashes.
I will deal with it, and if needed, I will open a new thread. This forum is great! Thanks.
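For reference, the usual multisampled-FBO pattern on a GL 2.1 / EXT_framebuffer_object stack is to render into multisampled renderbuffers and then resolve into the single-sample FBO with a blit. A rough sketch, assuming EXT_framebuffer_multisample and EXT_framebuffer_blit are available and reusing mGLFrameBuffer, hr_width and hr_height from above:

GLint maxSamples = 0;
glGetIntegerv(GL_MAX_SAMPLES_EXT, &maxSamples);
GLsizei samples = (maxSamples < 4) ? maxSamples : 4;

// multisampled FBO with color + depth renderbuffers (also limited by GL_MAX_RENDERBUFFER_SIZE_EXT)
GLuint msFBO, msColor, msDepth;
glGenFramebuffersEXT(1, &msFBO);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, msFBO);

glGenRenderbuffersEXT(1, &msColor);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msColor);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, samples, GL_RGBA8, hr_width, hr_height);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, msColor);

glGenRenderbuffersEXT(1, &msDepth);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msDepth);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, samples, GL_DEPTH_COMPONENT24, hr_width, hr_height);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, msDepth);

// ... render the scene into msFBO ...

// resolve the multisampled result into the single-sample FBO that has the texture attachment
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, msFBO);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, mGLFrameBuffer);
glBlitFramebufferEXT(0, 0, hr_width, hr_height, 0, 0, hr_width, hr_height, GL_COLOR_BUFFER_BIT, GL_NEAREST);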

Hello, I tried to do something like this, but when the viewport size is larger than 8192 x 8192 the rendering is just “empty” (I should check the error code, I know…). I am rendering offscreen, just scaling an image to a huge size like 20000 x 20000, but the limit of 8192 x 8192 is quite odd… maybe I am doing something wrong. This is part of my code:

glBindFramebuffer(GL_FRAMEBUFFER, fb);
// Set the list of draw buffers.
GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, DrawBuffers); // "1" is the size of DrawBuffers

glViewport(0, 0, neww, newh);

float erase_color[4] = { 1, 0, 0, 1 }; // glClearBufferfv(GL_COLOR, ...) reads four components
glClearBufferfv(GL_COLOR, 0, erase_color);
prog.use();
{
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, texture);

I check the error… and the error comes from glTexImage2D:

void updateBuffers(int neww, int newh)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fb);
    // “Bind” the newly created texture: all future texture functions will modify this texture
    glBindTexture(GL_TEXTURE_2D, renderedTexture);

    err = glGetError();
    // at this point err is "OK" (GL_NO_ERROR)

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, neww, newh, 0, GL_RGB, GL_FLOAT, 0);

    err = glGetError();
    // … (rest of the function)
}

At this point, with neww = 10000, the value of err in decimal is 1281.

1281 = 0x0501 = GL_INVALID_VALUE.

You’re probably exceeding the maximum size of a texture, which can be queried by calling glGetIntegerv with the parameter GL_MAX_TEXTURE_SIZE. The limit is required to be at least 16384 (OpenGL 4.1 and later), 1024 (3.0 and later), or 64 (up to 2.1). It’s possible that a larger limit exists for renderbuffers (queried using GL_MAX_RENDERBUFFER_SIZE), but unlikely.

If you want to render images larger than the largest supported texture or renderbuffer, you’ll need to render in tiles, setting the appropriate projection matrix for each tile.
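To make that concrete, here is a rough sketch of tiled rendering with a per-tile orthographic projection into a tile-sized FBO (finalW, finalH, the left/right/bottom/top view bounds and drawScene() are placeholders; with shaders you would build the equivalent tile projection matrix and pass it as a uniform instead of calling glOrtho):

int tileMax = 2048; // stay well inside GL_MAX_VIEWPORT_DIMS and GL_MAX_TEXTURE_SIZE
unsigned char *image = malloc((size_t)finalW * finalH * 4); // needs <stdlib.h>

for (int ty = 0; ty < finalH; ty += tileMax) {
    for (int tx = 0; tx < finalW; tx += tileMax) {
        int tw = (finalW - tx < tileMax) ? finalW - tx : tileMax;
        int th = (finalH - ty < tileMax) ? finalH - ty : tileMax;

        // projection covering only this tile’s slice of the full view volume
        double l = left   + (right - left) * tx        / (double)finalW;
        double r = left   + (right - left) * (tx + tw) / (double)finalW;
        double b = bottom + (top - bottom) * ty        / (double)finalH;
        double t = bottom + (top - bottom) * (ty + th) / (double)finalH;

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(l, r, b, t, -1.0, 1.0);

        glViewport(0, 0, tw, th); // tile-sized viewport, well below the limit
        drawScene();              // placeholder for the actual drawing code

        // copy the tile into the right spot of the big image
        glPixelStorei(GL_PACK_ROW_LENGTH, finalW);
        glReadPixels(0, 0, tw, th, GL_BGRA, GL_UNSIGNED_BYTE,
                     image + ((size_t)ty * finalW + tx) * 4);
    }
}
glPixelStorei(GL_PACK_ROW_LENGTH, 0);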

Thanks for replying. The strange thing is that the maximum texture size seems to be 65536 x 65536… but passing 10000 generates the error. Hmmm…

I am going to check the other parameters… by the way, the maximum viewport size is 8192 x 8192, but I think that applies to “real” framebuffers, not to offscreen rendering.

Best Wishes!

Odd. If there isn’t enough memory for a texture, the call should generate GL_OUT_OF_MEMORY (0x0505), not GL_INVALID_VALUE (0x0501).

It applies regardless of whether you’re using the default framebuffer or an FBO. Note that the current viewport is context state; switching between the default framebuffer and an FBO doesn’t change the viewport. But that doesn’t affect the behaviour of glTexImage2D; at the point you create a texture, the implementation doesn’t know whether you’re planning to use it as a render target. Also, setting too large a viewport doesn’t generate an error; viewport dimensions are silently clamped to the maximum values.
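One way to test whether a particular glTexImage2D call will actually be accepted, rather than relying on GL_MAX_TEXTURE_SIZE alone, is a proxy texture; a small sketch using the same parameters as the code above:

// ask the driver whether a GL_RGB texture of neww x newh would be accepted, without allocating it
glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGB, neww, newh, 0, GL_RGB, GL_FLOAT, NULL);

GLint testWidth = 0;
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &testWidth);
if (testWidth == 0) {
    // this size/format combination is not supported; fall back to tiling
}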

Thank you

But it is strange… because the error occurs when I pass a texture size larger than the maximum viewport size (8192). There are no other errors in the code; I check the error on almost every line of the OpenGL code.

Are you resizing the texture while it is bound to an FBO? Do the texture’s dimensions exceed the limits associated with GL_MAX_FRAMEBUFFER_{WIDTH,HEIGHT}?

If both are true, that will result in the framebuffer being incomplete. Although it shouldn’t result in an error, AFAICT.
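A quick way to catch that case is to re-check completeness right after resizing the attachment; a small sketch using the names from the code above:

glBindFramebuffer(GL_FRAMEBUFFER, fb);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, neww, newh, 0, GL_RGB, GL_FLOAT, 0);

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    // e.g. GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT or GL_FRAMEBUFFER_UNSUPPORTED:
    // the attachment is too large (or otherwise unusable) for this implementation
}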

Ok, I am going to check it out

I solved the problem by using GL_MAX_RENDERBUFFER_SIZE… on my machine it is 4096 because my GPU is very basic and old. So I resize the images in blocks of 4096 x 4096 pixels. Thank you!