GL_MAX_VIEWPORT_DIMS?

Can someone explain the ins and outs of glViewport() implementations? Why is there a max viewport size? Does anyone know of any extensions or alternative supplements?

In short, glViewport seemed ideal as a means of setting up a rendering context for a virtual-terminal-like setup. Where I ran afoul is that my environment permits the user to zoom in and out at will, so the viewport grows along with the zoom factor. The problem arises when the size of the viewport in screen space (mostly outside the actual display-mode pixels) exceeds the GL_MAX_VIEWPORT_DIMS allotment. Is there a reason viewports should be limited in OpenGL? Why not just treat them as virtual mappings and discard the bits outside the physical display space?
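For reference, the limit itself is easy to query. A minimal sketch (assumes a current GL context; error handling omitted):

```c
#include <stdio.h>
#include <GL/gl.h>

/* GL_MAX_VIEWPORT_DIMS returns two values: the implementation's
   maximum viewport width and height. Requires a current GL context. */
static void print_max_viewport_dims(void)
{
    GLint dims[2];
    glGetIntegerv(GL_MAX_VIEWPORT_DIMS, dims);
    printf("max viewport: %d x %d\n", dims[0], dims[1]);
}
```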

I’d also like my viewport to be quite large, like a backdrop in a potentially limitless virtual desktop of user-defined dimensions.

Viewport is nice for this application because it doesn’t depend on the matrix states. Unless there is a virtual-viewport-type extension, however, it seems I’ll just have to make do without it.

But as long as I’m present: any OpenGL spec maintainers out there, please do away with GL_MAX_VIEWPORT_DIMS!

Sounds like what you’re trying to do is what you should use the projection matrix for. Instead of growing the viewport, you shrink the view volume being mapped to the viewport.

The purpose of the viewport is to define what area of the window the view volume is mapped to. It doesn’t make much sense to me to make it much bigger than the window itself for the purpose of zooming. The purpose of the projection matrix is to define what area of the scene is mapped to the viewport, which is where you do the zooming.
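Something along these lines, for the orthographic case (a rough sketch; the window size, zoom factor, and zoom centre are made-up parameters, not anything from your code):

```c
#include <GL/gl.h>

/* Zoom by shrinking the view volume instead of growing the viewport.
   winW/winH: window size in pixels; zoom > 1 magnifies; (cx, cy):
   the scene point being zoomed toward. All hypothetical parameters. */
static void set_zoomed_projection(int winW, int winH,
                                  double zoom, double cx, double cy)
{
    double halfW = (winW / 2.0) / zoom;
    double halfH = (winH / 2.0) / zoom;

    glViewport(0, 0, winW, winH);   /* the viewport stays window-sized */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(cx - halfW, cx + halfW, cy - halfH, cy + halfH, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
}
```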

I thought better of you, michagl.

knackered, I love you :slight_smile:

The viewport interface in question doesn’t anticipate the state of the projection matrix desired by the underlying rendering entry point (that is the application’s perspective, not OpenGL’s).

So in the best of all possible worlds, the cleanest way to manage this is glViewport, as it least invasively defines the bounding box of the projection space.

Superficially, if not for the max-dims limitation, glViewport would be perfect. Implementation-wise I can imagine scenarios where the driver might go overboard; however, it seems unlikely, for instance, that there’d be any connection between defining a viewport and the framebuffer size once a rendering context is set up.

For a little background, the idea is to hand off rendering to a sort of dumb rendering function. If that function wants to redefine the viewport or scissor region, it is easy to pass a callback which can transparently intersect the desired regions with the current states. This way the underlying graphics environment is not exposed, and everything happens in local screen space without involving the matrix states. It’s kind of a high-level mixed-media sort of thing.
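Roughly what the callback does, if it helps to see it; a sketch with invented names, not the actual code:

```c
#include <GL/gl.h>

/* Hypothetical callback: the dumb renderer requests a sub-rectangle in
   its own local coordinates; we translate that into the environment's
   screen space and intersect it with the region the renderer was
   granted. Assumes GL_SCISSOR_TEST is enabled. */
typedef struct { int x, y, w, h; } Rect;

static void set_clipped_scissor(Rect granted, Rect requested)
{
    /* requested is relative to the granted region's origin */
    int x0 = granted.x + requested.x;
    int y0 = granted.y + requested.y;
    int x1 = x0 + requested.w;
    int y1 = y0 + requested.h;

    /* clamp against the granted extents */
    if (x0 < granted.x) x0 = granted.x;
    if (y0 < granted.y) y0 = granted.y;
    if (x1 > granted.x + granted.w) x1 = granted.x + granted.w;
    if (y1 > granted.y + granted.h) y1 = granted.y + granted.h;

    glScissor(x0, y0, x1 > x0 ? x1 - x0 : 0, y1 > y0 ? y1 - y0 : 0);
}
```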

“as it least invasively defines the bounding box of the projection space”
If I understand you correctly, no it doesn’t. It simply maps the normalised device coordinates (x, y in [-1,+1]) to window coordinates. Same as glDepthRange maps the normalised depth coordinate to a window depth value. Making a bigger viewport doesn’t give you a bigger window into your world; it just stretches the output of the clipper. It makes no sense whatsoever to make the GL_MAX_VIEWPORT_DIMS greater than the maximum window/FBO/texture size.
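Spelled out, the whole transform is just this (a sketch of the spec’s viewport mapping; the function is mine, not GL’s):

```c
/* The viewport transform: maps normalised device coordinates
   (xd, yd in [-1,+1]) to window coordinates, given a viewport with
   origin (vx, vy) and size (vw, vh). That is all glViewport controls. */
static void ndc_to_window(double xd, double yd,
                          int vx, int vy, int vw, int vh,
                          double *xw, double *yw)
{
    *xw = (xd + 1.0) * (vw / 2.0) + vx;
    *yw = (yd + 1.0) * (vh / 2.0) + vy;
}
```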

Yes, that is exactly what it does. It maps the canonical view volume to “window” coordinates.

Same as glDepthRange maps the normalised depth coordinate to a window depth value. Making a bigger viewport doesn’t give you a bigger window into your world; it just stretches the output of the clipper.

I don’t think I ever suggested glViewport was being used as a scaling mechanism. The fact that scaling fouls the approach is just an unwelcome and unavoidable side effect.

It makes no sense whatsoever to make the GL_MAX_VIEWPORT_DIMS greater than the maximum window/FBO/texture size.

Of course it does! On the other hand, it makes no sense to limit the viewport coordinates (or, if someone can find such an argument, you’d fulfill one of the primary purposes I had in mind when posting this business).

The glViewport spec permits the origin of the viewport and the opposite corner to be defined outside the visible framebuffer, so why on earth should the size of the mapping be limited? If there is a reason an unlimited virtual viewport would throw a wrench into any given implementation, OK, fine. But it seems like glViewport is just a state parameter for mapping the canonical view volume’s unit boundaries to discrete screen space, so why shouldn’t it be left at that? If you must get pithy about it…

Basically I posted to ask why this should/must be so, and whether anyone knew offhand of an extension to work around it. This approach works really splendidly as long as the viewport doesn’t exceed the maximum, and any alternative approach will be horribly cumbersome. Not that it would be the first time anyone was forced to work around the quirks of an API.

The rasterizer needs to convert the projected geometry to the matching pixels. With a limit on the dimensions of the viewport and on the subpixel accuracy, you can construct a fixed-point representation for the numbers used during that operation which has a sufficient number of bits to represent every coordinate that can be generated within the viewport.

Because a fixed-point implementation is simpler and faster than a floating-point one (especially in the old days of the first HW implementations), it is reasonable that the API allows the IHV to limit the number of bits required in the implementation by specifying a maximal allowed size of the viewport.
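As a back-of-the-envelope illustration (the numbers here are made up for illustration, not taken from any particular hardware):

```c
#include <stdio.h>

/* With a maximum viewport dimension of 4096 (12 bits) and, say, 4 bits
   of subpixel precision, every window coordinate the rasterizer can
   encounter fits in a 16-bit fixed-point value. Illustrative only. */
int main(void)
{
    int coord_bits    = 12;   /* log2(4096) */
    int subpixel_bits = 4;
    printf("fixed-point coordinate width: %d bits\n",
           coord_bits + subpixel_bits);
    return 0;
}
```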

michagl, you say it does make sense to have an unlimited viewport size, but you don’t give a reason why.

An answer similar to Komat’s there is what I’d expect to hear, though for contemporary hardware it seems a bit quaint. Also, a driver could still clip the viewport to device coordinates for a fixed mapping while preserving the aspect ratio of the viewport.

The fact that the spec clips the viewport to the max dims is the main problem, because that boogers up the aspect ratio. Of course, if you declare the viewport origin and offset outside the max-dims zone, presumably everything works fine (the spec doesn’t say either way; this is just what I’m observing).

It would be nice if, at the least, there were an option to disable or extend this via the card’s feature set from the OS-environment end, or via an extension (perhaps this can be done; I haven’t thought much about workarounds at this point).

@knackered, the obvious reason this should be unlimited:

Just imagine an environment like MS Windows, with rectangular regions acting as conduits into canvases subject to any kind of custom filling. Now pretend this variety of windows has a universal zooming tool, and might even let you arbitrarily extend the size of your desktop outside the boundaries of your physical display, so you can pan around or navigate however you prefer. Naturally, when you zoom, the window canvases are enlarged with the rest of the desktop.

The natural way to implement these floating canvas regions is with a viewport-type mechanism. In fact, I’m not even sure it would be possible to jerry-rig the projection matrix to the same end, and obviously you want a single logic to the way this environment is implemented. So if the user decides to zoom into a particular viewport to get an up-close view of something, this forces the extents of the viewport outside the physical display, potentially to the point where its size exceeds the driver’s max dims, at which point the driver starts clipping and screwing up the mapping.

You can also imagine a custom application running in the desktop background rather than a wallpaper. The size of the virtual desktop can exceed max dims, so you run into the same problem. In fact, this is pretty much where we’re at at the moment.

I’m happy with the implementation for now. At this stage of development I’d just as soon blame OpenGL for the bug and see how DX handles it. This is an OpenGL short-sightedness issue, I think. If there isn’t a workaround, something should be lobbied into existence.

That’s such a lame reason. This is exactly what the projection matrix is for (no need to jerry-rig anything). It’s exactly the sort of thing I do all the time when I cluster my renderer across multiple displays. Believe me, if I can do it without painful memories then it’s not rocket science.
Now, say your problem is that you haven’t got access to the projection matrix because a source-less lib is doing that work; well, then you also wouldn’t have access to the parameters it would inevitably be feeding into glViewport without your consent either.
Next you’ll be asking “why oh why were the designers of OpenGL so short-sighted as to not allow me to define an n-sided polygon as a viewport”.

I really don’t follow what you’re getting at.

It’s natural for a windowing interface to define the window coordinates. It is not natural for it to set up a projection matrix (and scissor box?) in advance and then expect the piggy-backing renderer to transform that matrix while complying with the original extents.

I haven’t a ton of experience transforming/preserving the projection matrix myself; as far as I can recall, I’ve only ever set it and forgotten it. I honestly wouldn’t bet on the projection matrix being up to the task, given the scenario outlined in my last post. By all means demonstrate how you’d go about it, at least, before taking the side of the nazigl spec.

Obviously glViewport is designed for setting up viewports, such as a modeling suite might use to grant multiple camera POVs on a project. Why should OpenGL discriminate against virtual viewports which might float offscreen, be scaled by end users for whatever reason, or even be defined larger than the physical raster space? This is obviously indefensible in principle. glViewport is the natural way to approach this, and arguing otherwise seems inherently nonsensical, does it not? Surely we can agree there is at least a theoretical shortcoming (as if it would be the first).

It’s the application that defines the viewport, just as it is the application that defines the projection matrix or the vertex shader. Everything is in the application’s control, so from the point of view of OpenGL there is no need for anything more.

The fact that there are maximum viewport dimensions is simply a reflection of the hardware reality, and using DX will not get you around that.

In DX the viewport must be set entirely within the target surface, so it is even more limited than the OGL one.

Set up the viewport to cover the intersection between the screen and the virtual window. Then update the projection matrix so that the part of the virtual window corresponding to this intersection maps to the <-1,1> range. You can get an idea from this article.
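In code, the idea is roughly this (a sketch for the orthographic case; the names and parameters are invented for illustration):

```c
#include <GL/gl.h>

/* Clamp the virtual window against the screen for the viewport, then
   set a projection that maps only the visible part of the virtual
   window (in the window's own 0..vwinW / 0..vwinH coordinates) onto
   that viewport. */
static void set_virtual_window(int screenW, int screenH,
                               int vwinX, int vwinY, int vwinW, int vwinH)
{
    /* intersection of the virtual window with the screen */
    int x0 = vwinX > 0 ? vwinX : 0;
    int y0 = vwinY > 0 ? vwinY : 0;
    int x1 = vwinX + vwinW < screenW ? vwinX + vwinW : screenW;
    int y1 = vwinY + vwinH < screenH ? vwinY + vwinH : screenH;
    if (x1 <= x0 || y1 <= y0) return;   /* entirely offscreen */

    glViewport(x0, y0, x1 - x0, y1 - y0);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(x0 - vwinX, x1 - vwinX, y0 - vwinY, y1 - vwinY, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
}
```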

[quote=“Komat”]
Set up the viewport to cover the intersection between the screen and the virtual window. Then update the projection matrix so that the part of the virtual window corresponding to this intersection maps to the <-1,1> range. You can get an idea from this article.
[/quote]

I plan on trying a number of approaches in time, including playing with the projection matrix. But for now I think we will force the renderer inside the conduit viewport to deal with quadrant subdivision of the viewport whenever it exceeds the size of the rendering context’s window. This is more in line with the philosophy established so far, and it can also be thought of as a potential optimization in cases where large viewports could theoretically cause performance bottlenecks, depending on the driver implementation.
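The quadrant idea, roughly sketched (the callback signature is invented; the renderer is told which piece of the virtual viewport it is filling and is expected to draw only that portion):

```c
#include <GL/gl.h>

/* If the requested viewport exceeds the max dims, split it in four and
   recurse, invoking the renderer once per piece. */
static void draw_subdivided(int x, int y, int w, int h,
                            int maxW, int maxH,
                            void (*draw)(int x, int y, int w, int h))
{
    if (w <= maxW && h <= maxH) {
        glViewport(x, y, w, h);
        draw(x, y, w, h);
        return;
    }
    int hw = w / 2, hh = h / 2;
    draw_subdivided(x,      y,      hw,     hh,     maxW, maxH, draw);
    draw_subdivided(x + hw, y,      w - hw, hh,     maxW, maxH, draw);
    draw_subdivided(x,      y + hh, hw,     h - hh, maxW, maxH, draw);
    draw_subdivided(x + hw, y + hh, w - hw, h - hh, maxW, maxH, draw);
}
```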

As for DX, I’m more of an OpenGL dude, so I don’t know its ins and outs so well personally. Typically DX implementations come way down the line, often via second-party APIs, after everything is established. DX tends to have fewer hang-ups than OpenGL, but it’s too much of a headache to rapid-prototype against.

Of course it’s not indefensible. You can achieve everything you ask for using the projection matrix. It’s a non-issue. It’s not outrageous, it’s not short-sighted; it’s just another limit in a long line of other limits. Same as the FFP limit of 8 lights: the hardware could support more, but other application-side approaches are more suitable for anything beyond that number.
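For the record, the zoom in question amounts to one extra scale on the projection stack (a sketch; assumes the legacy matrix stack and a uniform 2D zoom):

```c
#include <GL/gl.h>

/* Apply a zoom on top of whatever projection is already current.
   Two calls; nothing cumbersome about it. */
static void apply_zoom(float zoom)
{
    glMatrixMode(GL_PROJECTION);
    glScalef(zoom, zoom, 1.0f);
    glMatrixMode(GL_MODELVIEW);
}
```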

I agree, clearly; however, the choice of glViewport as a call name implies these limitations should not exist by definition. Some other nomenclature could have been chosen if these limitations were ever intended to be fixed in the spec (which doesn’t really seem to be the case).

I don’t think the projection matrix maps to this application as described. Were it adopted as the operative approach, the implementation would be overly cumbersome and unintelligible at best. I would only consider it after demonstrating to myself that a non-a-priori projection matrix can be intelligibly transformed in place.

Subdividing the glViewport calls seems like a much better idea at the moment nonetheless. It does put a considerable burden on the sub-routine, but it’s a level of sophistication we can probably live with in this case.

All said and done, I think a clear state which defines the bounds of the canonical view volume in virtual raster space is very vital, and arguing otherwise is counter-productive.

Well said, michagl. I don’t think anyone in their right mind would argue with that.

In which language does “glViewport” imply that?

I don’t think the projection matrix maps to this application as described. Were it adopted as the operative approach, the implementation would be overly cumbersome and unintelligible at best. I would only consider it after demonstrating to myself that a non-a-priori projection matrix can be intelligibly transformed in place.

Scaling a matrix is “overly cumbersome”?

Subdividing the glViewport calls

How do you subdivide function calls?

All said and done, I think a clear state which defines the bounds of the canonical view volume in virtual raster space is very vital, and arguing otherwise is counter-productive.

I think arguing that arguing otherwise is counter-productive is itself counter-productive. Something that can easily be achieved by other means is clearly not “very vital”.