Rendering without Data

Currently, in order to render, the specification requires that you have some vertex attribute active and bound to an array (whether a CPU-side array or a buffer object). And while that is generally the way rendering gets done, there are times when it is not necessary.

If your vertex shader can generate all of its outputs directly from gl_VertexID (the index of the current vertex) and gl_InstanceID, then you don’t need attributes at all. It would be good if you could use the glDraw(Instanced)Arrays functions to render in these cases.
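Roughly, what I have in mind is a sketch like this (assuming the relaxed wording being asked for here; the 10×10 point grid and its layout are just made-up examples):

/* Hypothetical attribute-less vertex shader: positions come only from
 * gl_VertexID and gl_InstanceID, so no attribute arrays need to be
 * enabled and no buffer objects need to exist. */
static const char *vs_source =
    "#version 150\n"
    "void main()\n"
    "{\n"
    "    /* 10 points per instance along X, one row per instance along Y */\n"
    "    float x = float(gl_VertexID)   / 10.0 * 2.0 - 1.0;\n"
    "    float y = float(gl_InstanceID) / 10.0 * 2.0 - 1.0;\n"
    "    gl_Position = vec4(x, y, 0.0, 1.0);\n"
    "}\n";

With a program built from that shader bound, glDrawArraysInstanced(GL_POINTS, 0, 10, 10) would give you a 10×10 grid of points without a single buffer object in sight.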

Why not! In that case I would want to rasterize without data too.
For image-space processing, the vertex data is just useless.

In that case I would want to rasterize without data too.

Rasterize what? You have to know where you are, which requires some interface to designate a part of the screen. Since the rasterizer only works with triangles, you may as well just use them.

Besides, it’s not like executing the vertex shader 4 times is going to impact performance in the slightest.

I can give the same reply to your idea: it’s not like one vertex is going to impact performance in the slightest :stuck_out_tongue:

Rasterising without data would generate glMinSampleShading samples per pixel. Then it’s more a matter of where to start the sample emission in the pipeline, but it would pass the scissor test at least.

I can give the same reply to your idea: it’s not like one vertex is going to impact performance in the slightest

Performance isn’t why I asked for it (though I don’t necessarily concede that performance will not be impacted. It depends on how much stuff you’re “rendering”). I asked for it because having to create buffer objects that contain “data” that won’t be used is not a good thing. It takes up memory that could be doing other things. It also looks strange from an API perspective to attach a buffer that you don’t use data from. It’s just much cleaner overall to be able to run the vertex shader X times.

If you’re just doing screen-space processing, just draw a triangle that’s really big. And, with the ability to generate all data from the vertex shader, you don’t even need buffers and such.
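The usual sketch of that “really big triangle” (again assuming the attribute-less drawing described above): three clip-space positions at (-1,-1), (3,-1) and (-1,3) cover the whole viewport after clipping, all three can be generated from gl_VertexID alone, and you draw them with glDrawArrays(GL_TRIANGLES, 0, 3).

/* Fullscreen-triangle vertex shader: no attributes read; the position and
 * a texture coordinate are derived purely from gl_VertexID. */
static const char *fullscreen_vs =
    "#version 150\n"
    "out vec2 texCoord;\n"
    "void main()\n"
    "{\n"
    "    vec2 p = vec2(float((gl_VertexID << 1) & 2), float(gl_VertexID & 2));\n"
    "    texCoord   = p;                     /* spans 0..1 across the visible viewport */\n"
    "    gl_Position = vec4(p * 2.0 - 1.0, 0.0, 1.0);\n"
    "}\n";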

Rasterising without data would generate glMinSampleShading samples per pixel.

So would rasterizing from triangles. I don’t see the point you’re making.

What you’re asking for is a “glDrawScreenSpaceQuad” function. That’s new functionality, and it has to be specified in some way. What I’m asking for is a minor change to the specification’s wording; that requires neither new entrypoints nor new enumerators.

Erm… to draw a triangle, you need a buffer with 3 vertices…
I don’t see why you want a triangle in such a case.

Anyway, I am quite up for rendering without data. Maybe the only issue I see is that OpenGL would not generate errors if buffers are missing, and binding a buffer is 99% of the time what the OpenGL programmer wants to do. Based on that, I don’t think the ARB would go for a solution without a new entry point.

Maybe the only issue I see is that OpenGL would not generate errors if buffers are missing, and binding a buffer is 99% of the time what the OpenGL programmer wants to do

The best way to go for that would be to check the vertex shader to see if it has any attributes defined (gl_VertexID and gl_InstanceID don’t use attribute indices). If it does, then error out.
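A sketch of that check (hypothetical validation code, nothing the spec currently mandates; it assumes a core-profile shader and an already-initialised loader such as GLEW): query GL_ACTIVE_ATTRIBUTES at draw time and only accept the attribute-less path when it reports zero, since gl_VertexID and gl_InstanceID are built-ins and never appear in that list.

#include <GL/glew.h>   /* assumes GLEW (or a similar loader) supplies the entry points */

/* Hypothetical check: does this program read any user-defined vertex attributes? */
static int program_uses_attributes(GLuint program)
{
    GLint active_attribs = 0;
    glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &active_attribs);
    return active_attribs > 0;
}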

Where exactly did you read that it’s required to have an array enabled in order to glDrawElements? Can you quote?

As far as I can remember, this was an implicit requirement in the compatibility (and pre-3.x) specs that stemmed from the fact that all draw calls were defined in terms of glBegin/glEnd (so generic attribute 0 or glVertex had to be specified to get anything to show) - in the core specs there is nothing that would require it, even implicitly.

Where exactly did you read that it’s required to have an array enabled in order to glDrawElements?

glDrawElements needs a buffer bound to GL_ELEMENT_ARRAY_BUFFER. But you’re right that I don’t see any specific language that says you need an array enabled for any attribute in order to render with glDrawArrays.

There is also always glDrawArrays and its friends too, no need for an index buffer at all then.

I asked for it because having to create buffer objects that contain “data” that won’t be used is not a good thing. It takes up memory that could be doing other things.

You could use glVertexAttrib() to assign a constant value to the attribute to avoid this (see the sketch below). But…

It also looks strange from an API perspective to attach a buffer that you don’t use data from.

…I agree with this.
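For reference, the glVertexAttrib route looks something like this sketch (it assumes a current GL context with a suitable program already bound, and uses attribute 1 since attribute 0 is special in the compatibility profile, as discussed further down):

/* No array is enabled for attribute 1, so every vertex reads this constant
 * value instead of fetching per-vertex data from a buffer. */
glDisableVertexAttribArray(1);
glVertexAttrib4f(1, 1.0f, 0.0f, 0.0f, 1.0f);   /* e.g. a constant red color */
glDrawArrays(GL_TRIANGLES, 0, 3);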

Looking at the spec, it looks perfectly reasonable in the core profile to have a vertex shader with no "in"s, one that uses the vertex and instance IDs to generate its "out"s, and to use glDrawArrays to draw. If this is fine, then no buffer objects are needed.

Anyone see something in the spec that says this will not work?
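Concretely, I mean something like this sketch (it assumes a 3.2 core context; note that the core profile still wants some vertex array object bound for drawing, even an empty one, and program_with_no_inputs is just a stand-in name for a program whose vertex shader has no "in"s):

/* Core profile requires a bound VAO to draw, but it can be completely empty:
 * no attribute arrays enabled, no buffers attached. */
GLuint empty_vao;
glGenVertexArrays(1, &empty_vao);
glBindVertexArray(empty_vao);

glUseProgram(program_with_no_inputs);   /* hypothetical: vertex shader has no "in"s */
glDrawArrays(GL_TRIANGLES, 0, 3);       /* all three vertices generated in the shader */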

It should work in Core Profile, similar to the “Using the Input-Assembler Stage without Buffers” D3D10 example.

It can’t work in Compatibility Profile (DrawArrays -> ArrayElement -> glVertex.)

It should work in Core Profile, similar to the “Using the Input-Assembler Stage without Buffers” D3D10 example.

It can’t work in Compatibility Profile (DrawArrays -> ArrayElement -> glVertex.)

If that’s true, then this is something that should be fixed. Hence the suggestion. Just define glDrawArrays the way that core GL does instead of with glArrayElement.

If that’s true, then this is something that should be fixed. Hence the suggestion. Just define glDrawArrays the way that core GL does instead of with glArrayElement.

The core profile lets one draw without needing any buffer data (index or attribute). As for the compatibility profile, there the 0’th attribute MUST be used. This is (I think) the only place where the core profile has something the compatibility profile does not. Additionally, because the compatibility profile needs to be, well, compatible with GL2.x and before, it is not likely that one can get out of this.

So, use the core profile.

Additionally, because the compatibility profile needs to be, well, compatible with GL2.x and before, it is not likely that one can get out of this.

That doesn’t make sense. What we’re talking about is just relaxing a limitation; it doesn’t invalidate any currently existing code. It just means that certain code that would have gotten an error before now will function fine. Just like any of the GL 3.0 features that use APIs defined in 2.x with different enumerators or numbers.

It could even be an extension. ARB_null_rendering or whatever.

So, use the core profile.

Unless you want to render quads without using data, of course.

That doesn’t make sense. What we’re talking about is just relaxing a limitation; it doesn’t invalidate any currently existing code. It just means that certain code that would have gotten an error before now will function fine. Just like any of the GL 3.0 features that use APIs defined in 2.x with different enumerators or numbers.

There is a can of worms here. Firstly, if a vertex shader does not use attribute 0, then glBegin/glEnd does nothing. Each of the glDraw* commands is worded in terms of glBegin/glEnd (to which one can simply say: reword the spec). It does not invalidate code, but it changes the behavior of existing applications. Admittedly this is moderately obtuse, but if a shader does not use attribute 0 and the application is using a compatibility profile, then the current behavior is to draw nothing, whereas under the proposal it would draw stuff. I freely admit that this case is crazy, but then this change loses compatibility in the compatibility profile. The easy way out is then to create an enable that allows for the new behavior. So… make an extension for just the compatibility profile? I cannot see that happening.

Unless you want to render quads without using data, of course.

Maybe a request to put QUADS, QUAD_STRIPS, etc back into core would be better?

It does not invalidate code, but it changes the behavior of existing applications.

How? You can’t use glDraw with glBegin/glEnd. For any particular primitive, you either used glDraw or glBegin. There is no way an application could be written such that it could tell the difference between a glDraw that was defined in terms of glBegin and a glDraw that was not.

Admittedly this is moderately obtuse, but if a shader does not use attribute 0 and the application is using a compatibility profile, then the current behavior is to draw nothing, whereas under the proposal it would draw stuff.

You could just as easily suggest that somewhere, someone might have written “glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_GREEN)” into their code too. In 2.x, it throws an error and has no effect, while in 3.2+, it does something.

Every extension that uses existing entrypoints does this.

Maybe a request to put QUADS, QUAD_STRIPS, etc back into core would be better?

That’s not going to happen.

You could just as easily suggest that somewhere, someone might have written “glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_GREEN)” into their code too. In 2.x, it throws an error and has no effect, while in 3.2+, it does something.

Continuing in the pedanticness here: feeding GL_TEXTURE_SWIZZLE_R will make GL throw an error in pre-GL3.2, AND that enumeration was not defined until the extension/GL3.2 came out… so a legacy application would not pass those values; in contrast, a legacy application expecting nothing to be drawn would have a little surprise :whistle: . I freely admit this line of thought is silly; after all, who in their right mind would issue a draw call and expect it to draw nothing? I guess some highly layered stuff might, as a kind of clever hack for whatever system was there, but that just smells bad.

Maybe a request to put QUADS, QUAD_STRIPS, etc back into core would be better?

That’s not going to happen.

Considering that rendering without any buffers works in the GL core profile (at least it looks that way in the spec), what you are asking for is to add something to the compatibility profile. Um.

As for whether or not QUADS or QUAD_STRIPS come back (and maybe add QUADS_ADJACENCY and QUAD_STRIP_ADJACENCY, mu-ha-ha), that seems like a more likely candidate to make it: there is a use for QUADS, and the hardware already has it. One feature that was marked deprecated but remained in GL core was glLineWidth, so maybe the QUAD jazz could get unmarked. If there is a legitimate demand for it, why not, eh? Hmm… makes me think of an idea.

If there is a legitimate demand for it, why not, eh?

Well, consider this. Line width is still in GL core: it is marked deprecated, but unlike GL_QUADS, it was not removed.

Someone obviously thought, and still thinks, that line width is important enough to keep around in the spec. Which means that there must have been some kind of threshold that the ARB considered when finalizing what was removed from 3.1. A threshold that was higher than mere deprecation. A threshold that GL_QUADS met, but line width didn’t meet.

That means that, on some level, the ARB thought twice about removing GL_QUADS, as they did about all the other functionality. And yet they still did so. Over the objections of pretty much everyone.

So if they can deprecate something, then remove it even though people clearly objected to that removal, then I would suspect that they’re not going to be very willing to bring it back.