
View Full Version : Hey NVidia guys, help me out!



Siwko
11-03-2000, 09:13 AM
Yeah, this is directed at you Nvidia guys that live on this forum. (IE: Matt and Sebastian)

I've got a question on OpenGL implementation, and I'm sure you guys will at least be able to shed light on what the "default" behavior is supposed to be.

A while back (a long while), I asked about the behavior of display lists. The question just came up again, so I'll ask once more and maybe you can help.

I am wondering what the default behavior of display lists is with respect to the following cycle:

...
x = glGenLists(1);           // allocate one list ID
glNewList(x, GL_COMPILE);
/* ... record commands ... */
glEndList();
... glCallList(x);
glNewList(x, GL_COMPILE);    // same ID again, no glDeleteLists() in between
/* ... record new commands ... */
glEndList();
... glCallList(x);
... // etc., until we delete the list ...

Where 'x' is a particular display list ID.

What I'm looking for is what happens after you generate a display list, then "New" it and fill it up with your data. Then, WITHOUT calling glDeleteLists() on that display list, you call glNewList on it again. What is the default behavior in this case? Am I going to end up possibly overwriting other data in memory, or causing a video memory leak of sorts?

Oh yeah... I'd like to know the default OpenGL implementation of this operation. What is OpenGL "supposed" to do in this case?

I'm extremely curious. Hook me up!

Siwko

[This message has been edited by Siwko (edited 11-03-2000).]

Michael Steinberg
11-03-2000, 10:23 AM
Oh, well. Shouldn't the implementation then simply return some kind of error state like BAD_OPERATION or the like?

mcraighead
11-03-2000, 12:08 PM
You should never be able to cause a memory leak of any type through the OGL API. It's a safe API as those things go.

The spec seems to imply that it is legal to compile into a display list with the same ID as an existing display list. I don't see any explicit language to this effect, though.

The new list should simply replace the old list.

- Matt

Michael Steinberg
11-03-2000, 02:06 PM
Oh, it might be a bit late, but what was the deal with 2048x2048 texel texture support on the Riva TNT? I never got it working in OpenGL. Thanks.

Oh, and the GL_LINES draw style hurts performance badly. Am I using the wrong parameters or states, or is the TNT just slow with GL_LINES?

[This message has been edited by Michael Steinberg (edited 11-03-2000).]

mcraighead
11-03-2000, 05:14 PM
What, is this thread going to become the omnibus feature request thread now? :)

Hmmm, now that I look at it, we never did add 2Kx2K support on TNT... if you query your maximum texture size, you'll get 1024, whereas on GeForce you'll get 2048. No inherent reason we can't support it. I will look into it, but don't hold your breath.

Do you mean glBegin(GL_LINES), or do you mean glPolygonMode(GL_FRONT_AND_BACK, GL_LINE)? The former should run pretty fast. The latter is not a good way to draw lines anyway, since you end up with every line being drawn twice; it won't be nearly as fast, for a number of reasons.

I think if you look at our Viewperf scores with TNT/TNT2, you'll see that our line performance, both regular and smooth lines, is actually quite competitive with low-end workstation parts that are much more expensive, and much faster than most other consumer-level cards. Obviously GeForce and Quadro will be faster once again, but you get what you pay for...

- Matt

Michael Steinberg
11-03-2000, 06:12 PM
Hey, thanks. Umm, about that 2k*2k thing: the box (it's a Diamond Viper 550) told me the TNT would support 2k*2k. When I read the maximum texture size from OpenGL (glGet...), it told me, as you said, 1024. I wondered about this discrepancy (is that an English word?).

Yeah, thanks. I'm gonna rewrite my wireframe routine.

Uhmmm, that "you get what you pay for" thing hit me pretty hard. When I bought that card at Christmas '98, I scraped together my last money. Maybe my parents will be nice enough to give me a GeForce 2 (or a Radeon :)).

Thanks anyway, it's cool to have an NVIDIA guy here. On the old board, someone like you was missing...

[This message has been edited by Michael Steinberg (edited 11-03-2000).]

Michael Steinberg
11-03-2000, 06:27 PM
****. Sorry to bother you.
Uhmmm, I started a topic named "pixel precise" a while ago.
My question was: if I use an orthographic projection and set up the frustum planes to match the dimensions of my window's client rect, then where, for example, is the center of the upper-left pixel? I played around with the values, but it kept coming out imprecise. I mean, is the center an integer, or a float ending in .5?
Hope you understand what I mean.

mcraighead
11-03-2000, 06:51 PM
Obsolescence in technology is inevitable... not much I can do about that. Would you prefer that the new products be slower so that the old ones wouldn't get obsolete so fast? :)

I believe our D3D driver supports 2Kx2K. I'm hoping that I can enable them by changing one number in the driver and they'll just work, but if it turns out there are other things that need to be changed as well, it starts to get less likely.

Note that 2Kx2K textures start to get really big in video memory. For example, if the texture was a 32-bit texture and had mipmaps, that's a 21-megabyte texture... AGP texturing comes in handy here, but there are many systems that don't have 21 megabytes of AGP memory free for us to allocate.

OpenGL primitives are sampled at the center of the square corresponding to a pixel (note: a pixel is not a square, it is a sample).

So, if you set up a 100x100 viewport and a 100x100 Ortho2D transform, (0.5,0.5) is the correct coordinate for a point at the bottom left pixel on the screen, and in general, (x+0.5,y+0.5) is the correct coordinate for any pixel.

You should be able to verify this by drawing small quads. For example, if you draw a quad with corners (0.4,0.4),(0.6,0.6), it should draw a pixel, but if you draw (0.6,0.6),(1.4,1.4), it shouldn't. If you make the numbers too small, there are other precision issues that can hit you (all chips have a limit as to how far their subpixel precision goes), but I think with those specific numbers you'd probably be safe.

- Matt

Michael Steinberg
11-03-2000, 07:04 PM
Thanks, gonna try that tomorrow. I need some sleep.

Siwko
11-06-2000, 05:24 AM
Well, heh, all I can say is at least I'm not requesting features! :)

Thanks for answering that, though, Matt. I was very curious as to what the stock implementation was supposed to do with the display lists. But one more thing, maybe to further my knowledge here:

I'm assuming then, from what you say, that the OGL driver/API itself is designed to recognize such an operation, and effectively reorganizes the memory allocation when glNewList is called on an ID that already contains a list?

If so, all I can say is "damn, these guys must be doing a hell of a lot of work". :D

Let me know, I've gotten a couple emails asking me more about it. Obviously, I haven't a clue!

Siwko

mcraighead
11-06-2000, 06:37 AM
No, all we should have to do is free up the memory for the old list...

- Matt