Antialiasing

Does anyone know how I can enable full-scene antialiasing on a Kyro 2 using the GL_ARB_multisample extension, and how to set the level (e.g. 2x or 4x)?

Someone must know.
It’s not rocket science… well, it wasn’t before I started looking into it, but now I’m not so sure.

http://developer.nvidia.com/view.asp?IO=ogl_multisample

The paper
“OpenGL Pixel Formats and Multisample Antialiasing”
is very good for understanding this.
wedge

Looks good, but I’m not too sure how to use it. I’m currently doing this:

static PIXELFORMATDESCRIPTOR pfd = {
    sizeof(PIXELFORMATDESCRIPTOR), 1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
    PFD_TYPE_RGBA, bits,
    0, 0, 0, 0, 0, 0, 0, 0,   // color/alpha bits and shifts (ignored)
    0, 0, 0, 0, 0,            // no accumulation buffer
    16, 0, 0,                 // 16-bit depth, no stencil/aux buffers
    PFD_MAIN_PLANE, 0, 0, 0, 0
};

if (!(hDC = GetDC(hWnd)))
    return FALSE;   // couldn't get a device context
if (!(PixelFormat = ChoosePixelFormat(hDC, &pfd)))
    return FALSE;   // no matching pixel format found
if (!SetPixelFormat(hDC, PixelFormat, &pfd))
    return FALSE;   // couldn't set the pixel format

How would I initialize the sample buffers and set up the number of samples?

The article posted above goes through the process step by step and includes all the necessary information. I have only tried it out on an nVidia-based video card, but the article was my sole source for adding FSAA support to a program I wrote, and it works fine.

Unfortunately, there are a significant number of changes from the standard initialization method, since you can’t use the PIXELFORMATDESCRIPTOR method of setting up your pixel format. (As I’m sure you noticed, it doesn’t provide any mechanism for requesting extra samples.) Please read the article carefully; skimming will likely result in missing something important.

–Travis Cobbs

I tried the code but got nowhere. <- Why does it make that look angry? I’m only fed up.

I don’t think the Kyro 2 supports WGL_ARB_pixel_format. If it’s not in the extensions list, it doesn’t support it, does it? Or is it part of the standard OpenGL stuff?

[This message has been edited by Mr_Smith (edited 02-12-2002).]

Look at the name… it’s a WGL extension, not a GL extension…

Originally posted by richardve:
Look at the name… it’s a WGL extension, not a GL extension…

To be a little more explicit, you have to use wglGetProcAddress to get the address of the wglGetExtensionsStringARB function, and then examine the WGL extension string returned by THAT function to determine whether or not your hardware supports the WGL_ARB_pixel_format extension. This is also in that document, along with the fact that you have to do all this AFTER creating your initial OpenGL window in order to get an OpenGL-compatible HDC to pass into the wglGetExtensionsStringARB function.
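
In C, that check looks something like this (a minimal sketch, needing <windows.h>, <GL/gl.h> and <string.h>; the typedef matches the one in wglext.h, and note that a plain strstr test can in theory match one extension name inside another, so a real check should be more careful):

typedef const char * (WINAPI * PFNWGLGETEXTENSIONSSTRINGARBPROC)(HDC hdc);

PFNWGLGETEXTENSIONSSTRINGARBPROC wglGetExtensionsStringARB =
    (PFNWGLGETEXTENSIONSSTRINGARBPROC)wglGetProcAddress(
        "wglGetExtensionsStringARB");

if (wglGetExtensionsStringARB != NULL)
{
    // hDC must belong to a window that already has a GL context.
    const char *wglExtensions = wglGetExtensionsStringARB(hDC);

    if (wglExtensions != NULL &&
        strstr(wglExtensions, "WGL_ARB_pixel_format") != NULL)
    {
        // Supported: safe to wglGetProcAddress the
        // WGL_ARB_pixel_format entry points.
    }
}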

Also, while the document is fairly long, it’s that long for a reason. WGL_ARB_pixel_format is just not a trivial thing to throw into your program. If it were trivial, I’d just list the steps here. Page 14 lists all the steps you have to perform (filling the page), but you have to refer back to the previous 13 pages for detailed instructions on how to perform each step.

–Travis Cobbs

Like the paper tells you, you have to create a dummy window to get the ProcAddress of the extension. This is essential. I don’t like it, but it is the best way, isn’t it?
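
The flow is roughly this (a bare sketch with all error handling omitted; "DummyClass" and hInstance are placeholders for a window class and instance you already have):

HWND tempWnd = CreateWindow("DummyClass", "", WS_POPUP,
    0, 0, 1, 1, NULL, NULL, hInstance, NULL);
HDC tempDC = GetDC(tempWnd);
PIXELFORMATDESCRIPTOR tempPfd = { sizeof(PIXELFORMATDESCRIPTOR), 1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL, PFD_TYPE_RGBA, 32 };

// An ordinary pixel format is good enough for the throwaway context.
SetPixelFormat(tempDC, ChoosePixelFormat(tempDC, &tempPfd), &tempPfd);
HGLRC tempRC = wglCreateContext(tempDC);
wglMakeCurrent(tempDC, tempRC);

// wglGetProcAddress works now; fetch wglGetExtensionsStringARB,
// wglChoosePixelFormatARB etc. here, then tear it all down.
wglMakeCurrent(NULL, NULL);
wglDeleteContext(tempRC);
ReleaseDC(tempWnd, tempDC);
DestroyWindow(tempWnd);

// Only now create the real window: SetPixelFormat can be called
// only once per window, so the dummy can't be reused.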

I did read it fully.
It’s just that when I tried the command they specified, it didn’t work, and I’m not too good with extensions. Does anyone know a good tutorial?

I am in the same boat as Mr_Smith, and I am just as frustrated. I read the Pixel Format paper and it leaves out a LOT of information. I would REALLY like a response from you guys out there who got this to work.

I have several specific points and questions:

(1) For the record, I’m using W2K with an Nvidia GeForce2/MX400 (32MB RAM).

(2) If I use the FSAA mode in Display | Settings and manually turn antialiasing off/on, it works from the Control Panel. Then I set it to “Allow programs to control AA” and the fun begins…

(3) I got hold of wglGetExtensionsStringARB and queried it, and I do get WGL_ARB_extensions_string, WGL_ARB_multisample, WGL_ARB_pbuffer, WGL_ARB_pixel_format, etc. in the extension string. That indicates everything should work.

(4) I set up a temp window to init the proc addresses and I do everything the paper says, but my scenes don’t get rendered with antialiasing. And I get no AVs (access violations) or errors.

(5) But I have a question about these crazy tokens and their values. There are two very similarly named token pairs:

GL_SAMPLE_BUFFERS_ARB = $80A8;
GL_SAMPLES_ARB = $80A9;

WGL_SAMPLE_BUFFERS_ARB = $2041;
WGL_SAMPLES_ARB = $2042;

If I use the “GL_” versions, I get a PixelFormat, but there is no AA active.
If I use the “WGL_” values I get ZERO PixelFormats returned from the wglChoosePixelFormatARB() call. I am simply setting …SAMPLE_BUFFERS_ARB=1 and …SAMPLES_ARB=4 in both cases.

(6) Another question: what are these tokens used for? They are not mentioned anywhere. Should these be involved somehow?

MULTISAMPLE_ARB = $809D;
MULTISAMPLE_BIT_ARB = $20000000;

(BTW, is “Multisample” talking about the same thing as “Sample_Buffers”??? I assume it is.)

(7) Is it required to set the coverage params? I took this to be optional.

(8) Another thing: I tried out the Nvidia PixelFormat listing program (PxlFmt.Exe), and all 57 formats that pop out have ‘0’ in the AA and AAS columns. Doesn’t that mean that there are no multisample buffers available?

(9) I ran my own listing with wglGetPixelFormatAttribivARB(), just like the paper describes, and I get results identical to the Nvidia program’s. I’m beginning to think the Nvidia guys couldn’t get this working in their program either.

What do these results mean? Are there multisample PixelFormats or aren’t there? These listings say NO. And yet the Control Panel FSAA works, and the WGL extension string says the card has multisample modes. This makes NO sense!!!

IMHO, this multisample area of OpenGL is an outright mess: extremely confusing, with thin documentation.

I hope someone out there can really help.

Thanks, Chris.

I tried to read the NVIDIA doc mentioned above, but I have some questions too.

WGL_ARB_extensions_string -> needed, and it’s clear to me how to use it

WGL_ARB_pixel_format -> needed, and I know how to initialize it, but my question is: if I choose a pixel format with that extension, I don’t need the good old PFD anymore, do I?

WGL_ARB_multisample -> Am I right in assuming that this WGL extension consists only of 2 new tokens, or are there new functions as well?

WGL_SAMPLE_BUFFERS_ARB 0x2041
WGL_SAMPLES_ARB 0x2042

GL_ARB_multisample -> I guess I have to use glEnable(GL_MULTISAMPLE_ARB); to enable the AA after I’ve chosen the right pixel format, is that right?
Or is the AA active once I’ve chosen an AA-capable pixel format, so that I don’t need GL_ARB_multisample at all?
(By the way, the latest GLEXT.h in the NVSDK beta contains an obsolete function called glSamplePassARB that is not listed in the paper I got from SGI’s extension registry … time for a brand new one, Cass?)

Another thing that really annoys me is that the extension registry seems a bit out of date (many new NV extensions are missing and some docs are obsolete).

Diapolo

[This message has been edited by Diapolo (edited 02-13-2002).]

>(4) I set up a temp window to init the proc addresses and I do everything the paper says, but my scenes don’t get rendered with antialiasing. And I get no AVs or errors.

I guess you have to enable the AA, but I’m not really sure about that.
Try: glEnable(GL_MULTISAMPLE_ARB);
You have to init the whole extension and the tokens first.
It would be cool if you could report back after you’ve tried it (I still have to code the stuff for WGL_ARB_pixel_format).
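
In case your headers don’t have the token yet, something like this should do (the value matches the one in the spec, and the $809D Chris listed above):

#ifndef GL_MULTISAMPLE_ARB
#define GL_MULTISAMPLE_ARB 0x809D // same value as the $809D above
#endif

glEnable(GL_MULTISAMPLE_ARB); // the spec says this is on by default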

>(5) But I have a question about these crazy tokens and their values. There are two very similarly named token pairs:

>GL_SAMPLE_BUFFERS_ARB = $80A8;
>GL_SAMPLES_ARB = $80A9;

>WGL_SAMPLE_BUFFERS_ARB = $2041;
>WGL_SAMPLES_ARB = $2042;

>If I use the “GL_” versions, I get a PixelFormat, but there is no AA active.
>If I use the “WGL_” values, I get ZERO PixelFormats returned from the wglChoosePixelFormatARB() call. I am simply setting …SAMPLE_BUFFERS_ARB=1 and …SAMPLES_ARB=4 in both cases.

I think the GL versions are not for the WGL_ARB_pixel_format functions, but for the glGet functions (see the sketch after the spec quote below).


Accepted by the <pname> parameter of GetBooleanv, GetDoublev, GetIntegerv, and GetFloatv:

SAMPLE_BUFFERS_ARB 0x80A8
SAMPLES_ARB 0x80A9
SAMPLE_COVERAGE_VALUE_ARB 0x80AA
SAMPLE_COVERAGE_INVERT_ARB 0x80AB


Accepted by the <piAttributes> parameter of
wglGetPixelFormatAttribivEXT, wglGetPixelFormatAttribfvEXT, and the <piAttribIList> and <pfAttribIList> of wglChoosePixelFormatEXT:

WGL_SAMPLE_BUFFERS_ARB 0x2041
WGL_SAMPLES_ARB 0x2042


I got that from the SGI doc about multisample (note that the function names there still carry the EXT suffix, not the ARB one -> obsolete). http://oss.sgi.com/projects/ogl-sample/registry/ARB/multisample.txt
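
So, if I read that right, the usage splits roughly like this (just a sketch, assuming the tokens are defined and a multisampled context is already current):

// The GL_ tokens query the context you already have...
GLint sampleBuffers = 0, samples = 0;
glGetIntegerv(GL_SAMPLE_BUFFERS_ARB, &sampleBuffers); // 1 if multisampled
glGetIntegerv(GL_SAMPLES_ARB, &samples);              // samples per pixel

// ...while the WGL_ tokens belong only in the attribute lists passed to
// wglChoosePixelFormatARB / wglGetPixelFormatAttribivARB.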

>(6) Another question: what are these tokens used for? They are not mentioned anywhere. Should these be involved somehow?

>MULTISAMPLE_ARB = $809D;
>MULTISAMPLE_BIT_ARB = $20000000;

The first one is for glEnable(GL_MULTISAMPLE_ARB); and the second one should be for glPushAttrib(GL_MULTISAMPLE_BIT_ARB);
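
If I understand the spec correctly, that would let you do something like this (untested sketch on my part):

glPushAttrib(GL_MULTISAMPLE_BIT_ARB); // save the multisample enables
glDisable(GL_MULTISAMPLE_ARB);        // e.g. draw an aliased overlay
// ... draw ...
glPopAttrib();                        // multisample state restored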

Hope I didn’t write too many wrong things, but I’m still in the middle of understanding this whole AA / multisample stuff.

Diapolo

[This message has been edited by Diapolo (edited 02-13-2002).]

You have to set WGL_SAMPLE_BUFFERS_ARB to GL_TRUE (1) to indicate that you want sample buffers. You have to set WGL_SAMPLES_ARB to the number of samples per pixel that you want. 2x AA uses 2, 4x uses 4, etc.

So, once you’ve performed your extensions check, and set everything up using the temp OpenGL window, your code should look something like this (note that this only supports 2x and 4x):

int myChoosePixelFormat(HDC hdc, BOOL use2xAA)
{
    int retValue = -1;
    // Attribute/value pairs, terminated by a zero pair.  WGL_SAMPLES_ARB
    // comes first so the 2x case can patch its value below.
    GLint intValues[] = {
        WGL_SAMPLES_ARB, 4,
        WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
        WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
        WGL_SAMPLE_BUFFERS_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
        WGL_ACCELERATION_ARB, WGL_FULL_ACCELERATION_ARB,
        WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
        WGL_COLOR_BITS_ARB, 16,
        WGL_DEPTH_BITS_ARB, 16,
        0, 0
    };
    GLfloat floatValues[] = { 0.0f, 0.0f };
    int index;
    UINT count; // wglChoosePixelFormatARB wants a UINT here

    if (use2xAA)
    {
        intValues[1] = 2; // 2x instead of 4x
    }
    if (wglChoosePixelFormatARB(hdc, intValues, floatValues, 1, &index,
        &count) && count)
    {
        retValue = index;
    }
    return retValue;
}

Once you have used the above to get your index, you do the rest using the standard WGL functions (unless you want to examine the extended attributes of the pixel format you got).
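
In other words, the follow-up is the usual thing, something like this (sketch):

int pf = myChoosePixelFormat(hDC, FALSE); // FALSE -> ask for 4x

if (pf != -1)
{
    PIXELFORMATDESCRIPTOR pfd;

    // SetPixelFormat still wants a PFD even though the interesting
    // attributes were chosen through wglChoosePixelFormatARB.
    DescribePixelFormat(hDC, pf, sizeof(pfd), &pfd);
    SetPixelFormat(hDC, pf, &pfd);
    // ...then wglCreateContext / wglMakeCurrent as usual.
}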

If you have a GeForce 3 or above and want Quincunx, you do the following AFTER your OpenGL rendering context has been created and made current:

glHint(GL_MULTISAMPLE_FILTER_HINT_NV, GL_NICEST);

Note that this will work in 4x mode as well, giving you a Quincunx-style sample pattern in 4x mode.
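
If your headers don’t define that hint token, the value from the NV_multisample_filter_hint spec is:

#ifndef GL_MULTISAMPLE_FILTER_HINT_NV
#define GL_MULTISAMPLE_FILTER_HINT_NV 0x8534
#endif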

–Travis Cobbs

[This message has been edited by tcobbs (edited 02-13-2002).]

Thanks for the reply, Travis. Your code is basically the same as what I was already doing. In any case, I pasted your exact attribute code into my module and ran it. I got the same thing as before: ZERO PIXEL FORMATS returned.

This is the same thing that the listings from wglGetPixelFormatAttribivARB() are saying: there is no AA PixelFormat available. When I run the Nvidia PxlFmt.Exe program I get the same thing again – none of the 57 formats has TRUE in the AA column.

Here’s the list if you want to see it:

OpenGL Pixel Formats, Version 1.3.0
Nvidia GeForce2/MX400

Indx Type Accl Colr Dpth Sten AA AAS

   1 RGBA Full   32   16    0  F   0
   2 RGBA Full   32   24    0  F   0
   3 RGBA Full   32   24    8  F   0
   4 RGBA Full   32   16    0  F   0
   5 RGBA Full   32   16    0  F   0
   6 RGBA Full   32   24    0  F   0
   7 RGBA Full   32   24    0  F   0
   8 RGBA Full   32   24    8  F   0
   9 RGBA Full   32   24    8  F   0
  10 RGBA Full   32   16    0  F   0
  11 RGBA Full   32   16    0  F   0
  12 RGBA Full   32   24    0  F   0
  13 RGBA Full   32   24    0  F   0
  14 RGBA Full   32   24    8  F   0
  15 RGBA Full   32   24    8  F   0
  16 RGBA None   32   32    8  F   0
  17 RGBA None   32   16    8  F   0
  18 RGBA None   32   32    8  F   0
  19 RGBA None   32   16    8  F   0
  20 RGBA None   32   32    8  F   0
  21 RGBA None   32   16    8  F   0
  22 RGBA None   32   32    8  F   0
  23 RGBA None   32   16    8  F   0
  24 CIDX None   32   32    8  F   0
  25 CIDX None   32   16    8  F   0
  26 CIDX None   32   32    8  F   0
  27 CIDX None   32   16    8  F   0
  28 RGBA None   24   32    8  F   0
  29 RGBA None   24   16    8  F   0
  30 RGBA None   24   32    8  F   0
  31 RGBA None   24   16    8  F   0
  32 CIDX None   24   32    8  F   0
  33 CIDX None   24   16    8  F   0
  34 RGBA None   16   32    8  F   0
  35 RGBA None   16   16    8  F   0
  36 RGBA None   16   32    8  F   0
  37 RGBA None   16   16    8  F   0
  38 CIDX None   16   32    8  F   0
  39 CIDX None   16   16    8  F   0
  40 RGBA None    8   32    8  F   0
  41 RGBA None    8   16    8  F   0
  42 RGBA None    8   32    8  F   0
  43 RGBA None    8   16    8  F   0
  44 CIDX None    8   32    8  F   0
  45 CIDX None    8   16    8  F   0
  46 RGBA None    4   32    8  F   0
  47 RGBA None    4   16    8  F   0
  48 RGBA None    4   32    8  F   0
  49 RGBA None    4   16    8  F   0
  50 CIDX None    4   32    8  F   0
  51 CIDX None    4   16    8  F   0
  52 RGBA Full   16   16    0  F   0
  53 RGBA Full   16   16    0  F   0
  54 RGBA Full   16   16    0  F   0
  55 RGBA Full    0   16    0  F   0
  56 RGBA Full    0   24    0  F   0
  57 RGBA Full    0   24    8  F   0

That’s all the PixelFormats available, and none have AA=True. Obviously wglChoosePixelFormatARB() is going to return ZERO when I ask for a format with AA. That’s exactly what’s happening.

I could easily believe that perhaps my card does not support this, but if so then why does the WGL extension string include all the proper tokens? This is what makes no sense to me.

What video card do you have? I’d really like it if you could run the Nvidia test program and tell me if you see any AA columns in your listing with TRUE. Here’s a link so you can go right to it:
http://developer.nvidia.com/view.asp?IO=ogl_multisample

I’m pretty convinced now that there is nothing wrong with my code for setting up the PixelFormat. The problem seems to be that there just aren’t any formats with AA available. Why? I guess that’s the big question.

I am wondering if there is some other “flag” or bit I need to set to enable multisample modes. I can’t find any info or doc on that.

I’m at a loss to know what to try next when the Nvidia listing program itself says there are no AA formats.

HELP!

Thanks, Chris.

Thanks, Diapolo, for your response. You mentioned glEnable(GL_MULTISAMPLE_ARB). That does enable/disable AA once it’s working, but you have to get the context set up and working first. There’s no way I can call that until I have at least one PixelFormat that I can set up. (That’s supposed to be ON by default anyway.)

You raised another interesting point about glPushAttrib(GL_MULTISAMPLE_BIT_ARB). I have not used glPushAttrib() before. Now that sounds like the kind of thing that might be the “flag enable” bit I think is needed. Can I make that call before a rendering context is active?

Something has to be changed before I make the call to wglChoosePixelFormatARB() or there still won’t be any PixelFormats available. What calls can be made prior to an active rendering context? Could it be an environment variable or something like that?

Thanks, Chris.

I think we had the same problem on the GeForce2/MX. I don’t think it works on that card!
You have to check for two extensions:
GL_ARB_multisample AND! WGL_ARB_multisample
I think one of them is missing on the GF2/MX.
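
A quick way to check both is something like this (a sketch; assumes a current GL context, that wglGetExtensionsStringARB was fetched as described earlier, and the simple strstr test has the usual substring caveat):

// The two extension lists come from different calls, so check both.
const char *glExt = (const char *)glGetString(GL_EXTENSIONS);
const char *wglExt = wglGetExtensionsStringARB(hDC);

BOOL haveMultisample =
    strstr(glExt, "GL_ARB_multisample") != NULL &&
    strstr(wglExt, "WGL_ARB_multisample") != NULL;
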
wedge

Thanks for the response. You may have just hit the nail on the head. Here are the actual extension strings for the regular and WGL extensions:

GL_ARB_imaging GL_ARB_multitexture GL_ARB_texture_compression GL_ARB_texture_cube_map GL_ARB_texture_env_add GL_ARB_texture_env_combine GL_ARB_texture_env_dot3 GL_ARB_transpose_matrix GL_S3_s3tc GL_EXT_abgr GL_EXT_bgra GL_EXT_blend_color GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_clip_volume_hint GL_EXT_compiled_vertex_array GL_EXT_draw_range_elements GL_EXT_fog_coord GL_EXT_packed_pixels GL_EXT_paletted_texture GL_EXT_point_parameters GL_EXT_rescale_normal GL_EXT_secondary_color GL_EXT_separate_specular_color GL_EXT_shared_texture_palette GL_EXT_stencil_wrap GL_EXT_texture_compression_s3tc GL_EXT_texture_edge_clamp GL_EXT_texture_env_add GL_EXT_texture_env_combine GL_EXT_texture_env_dot3 GL_EXT_texture_cube_map GL_EXT_texture_filter_anisotropic GL_EXT_texture_lod GL_EXT_texture_lod_bias GL_EXT_texture_object GL_EXT_vertex_array GL_EXT_vertex_weighting GL_IBM_texture_mirrored_repeat GL_KTX_buffer_region GL_NV_blend_square GL_NV_evaluators GL_NV_fence GL_NV_fog_distance GL_NV_light_max_exponent GL_NV_packed_depth_stencil GL_NV_register_combiners GL_NV_texgen_emboss GL_NV_texgen_reflection GL_NV_texture_env_combine4 GL_NV_texture_rectangle GL_NV_vertex_array_range GL_NV_vertex_array_range2 GL_NV_vertex_program GL_SGIS_generate_mipmap GL_SGIS_multitexture GL_SGIS_texture_lod GL_WIN_swap_hint WGL_EXT_swap_control

WGL_ARB_buffer_region WGL_ARB_extensions_string WGL_ARB_multisample WGL_ARB_pbuffer WGL_ARB_pixel_format WGL_EXT_extensions_string WGL_EXT_swap_control

The WGL string is there, but the GL string is NOT. Now I know why I feel like I’ve been beating my head against the wall!

What still does not make sense is that the AA mode works in the Control Panel, and they also show the option for “Control AA by Program”. I guess the programming interface is broken on this model. (Thanks a lot, Nvidia.)

Wedge, since you seem to be on top of this problem, can you give me the names of a couple models that you know actually work? I’ve got to use something to finish my development with.

Thanks, Chris.

“What still does not make sense is that the AA mode works in the Control Panel, and they also show the option for ‘Control AA by Program’. I guess the programming interface is broken on this model. (Thanks a lot, Nvidia.)”

In my opinion, the GF2 does its AA with supersampling(??)

We work on GF3 cards; I don’t know which ones exactly. Look at NVIDIA’s site to see which cards support this. To use Quincunx you need a new driver version!
It works with the newest one.

Has anyone got a sample program I can try? I’m having trouble getting this to work on the Kyro 2, and I’m not sure if it’s me or the card.