Is glHint(GL_FOG_HINT, GL_NICEST) not implemented in Red Hat?

Trying to port a large flight simulation program from an SGI machine to a Dell Linux box. All the coding is done and working, but I am unsatisfied with the fog implementation. I am trying to confirm whether GL_NICEST is not implemented in Red Hat.

Are there any solutions to improve the fog? It gets better if I break up polygons, but adding vertices is a mess on a runway.

Thank you all for your consideration.

The simulator can be seen (with the historic SGI machine) at:
http://www.stanford.edu/~yesavage/AIR.html

A hint is just, as the name says, a hint. The implementation is free to ignore it. So per-vertex fogging with the fog hint set to GL_NICEST is perfectly legal according to the specification, and what you have may very well be an ignored hint rather than a missing or broken feature.

Do you use linear fog or exponential? I found that linear fog looks terrible with large triangles but exponential fog looks very good.
My impression is that linear fog takes the z-distance for its computations (the value that is also stored in the z-buffer afterwards), whereas exponential fog seems to take the real distance from the viewer to the vertex.
The result was that even when I only turned around, the (linear) fog changed heavily and looked very ugly. The exponential fog only changed when I moved.
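
For reference, switching between the modes is only a couple of glFog* calls. A minimal sketch (the color, start/end and density numbers are just placeholders, not recommendations):

#include <GL/gl.h>

/* Minimal fog setup sketch -- the numbers here are placeholders. */
static void setup_fog(int use_exp2)
{
    GLfloat fogColor[4] = { 0.8f, 0.8f, 0.8f, 1.0f };

    glEnable(GL_FOG);
    glFogfv(GL_FOG_COLOR, fogColor);
    glHint(GL_FOG_HINT, GL_NICEST);      /* request per-pixel fog; may be ignored */

    if (use_exp2) {
        /* Exponential: a single density value controls the falloff. */
        glFogi(GL_FOG_MODE, GL_EXP2);
        glFogf(GL_FOG_DENSITY, 0.002f);
    } else {
        /* Linear: blends between start and end distances in eye space. */
        glFogi(GL_FOG_MODE, GL_LINEAR);
        glFogf(GL_FOG_START, 10.0f);
        glFogf(GL_FOG_END, 1000.0f);
    }
}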

Jan.

Hey, I went to Stanford

Anyway, exponential fog frequently looks better and more atmospheric, but it is not as useful for culling distant geometry. You can still do it, but you have to make sure the fog is opaque enough at that distance that blending will yield the fog color, or you will get popping. I am also pretty sure that whether exponential fog is done per-pixel or per-vertex is implementation-dependent, just like linear fog.
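
One way to pick a density that guarantees this, just as a sketch (the 1/256 threshold is my own assumption for "within one 8-bit color step of the fog color"):

#include <math.h>

/* For GL_EXP2, the fog factor is f = exp(-(density * z)^2) and fragments
 * are blended as f * frag + (1 - f) * fog_color.  To make geometry fully
 * fogged (to within 'threshold' of the fog color) at cull_dist, solve
 * for the density: */
static float exp2_density_for_cull(float cull_dist, float threshold)
{
    return sqrtf(-logf(threshold)) / cull_dist;
}

/* e.g. fully fogged to within one 8-bit color step at 5000 units:
 *   glFogf(GL_FOG_DENSITY, exp2_density_for_cull(5000.0f, 1.0f/256.0f));  */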

Red Hat doesn’t implement anything in OpenGL; it depends on the GL implementation, not on the Linux distribution.

Which GL implementation do you use? If it is Mesa, it will probably always calculate fog per vertex, because per-pixel fog would be too slow in software, but I am not absolutely sure.

Check the GL_VENDOR and GL_RENDERER strings to find out which GL implementation you are using. If it is Mesa, you should try to get a hardware driver for your card.
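
If you would rather check from inside the program, something like this works once a context is current (a minimal sketch):

#include <stdio.h>
#include <GL/gl.h>

/* Call after a GL context has been made current (e.g. from your init routine). */
static void print_gl_strings(void)
{
    printf("GL_VENDOR:   %s\n", (const char *) glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *) glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *) glGetString(GL_VERSION));
}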

Each graphics vendor implements their own GL under Linux (unless you use a software renderer). Thus, you should qualify your question with what brand of hardware you’re using.

You can do lots of things per-pixel using nVIDIA register combiners, which is currently available on shipping nVIDIA hardware. You can do even more things using ARB_fragment_program, which is available on shipping ATI hardware (and soon will be on nVIDIA, too).

Originally posted by jerryyyyy:
Trying to port a large flight simulation program from an SGI machine to a Dell Linux box. All the coding is done and working, but I am unsatisfied with the fog implementation.

A Dell box usually means a good CPU and a crap GPU. What kind of video card do you have? What drivers are you using?

Originally posted by Bob:
A hint is just, as the name says, a hint. The implementation is free to ignore it. So per-vertex fogging with the fog hint set to GL_NICEST is perfectly legal according to the specification, and what you have may very well be an ignored hint rather than a missing or broken feature.

I am forewarned… with SGI they sometimes said things were there that were not…

Originally posted by Jan2000:
Do you use linear fog or exponential? I found that linear fog looks terrible with large triangles but exponential fog looks very good.
My impression is that linear fog takes the z-distance for its computations (the value that is also stored in the z-buffer afterwards), whereas exponential fog seems to take the real distance from the viewer to the vertex.
The result was that even when I only turned around, the (linear) fog changed heavily and looked very ugly. The exponential fog only changed when I moved.

Jan.

I am using the EXP2 version, which looks better to me than EXP or LINEAR.

My processor seems so fast that I have been able to break up the runway into 5’x5’ pieces, which reduces the triangles to a manageable level of distortion.

We are doing an experiment on aging and flying, and I need to control the fog precisely, since the task faced by our pilots will be a go/no-go decision at decision height on an ILS approach.
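
In case it helps anyone else, this is the kind of subdivision I mean; just a rough sketch, not the simulator’s actual code (the function name and the immediate-mode style are only for illustration):

#include <GL/gl.h>

/* Subdivide a flat runway rectangle into an nx-by-nz grid of quads so
 * per-vertex fog has enough samples to interpolate smoothly. */
static void draw_runway_patch(float x0, float z0, float x1, float z1,
                              int nx, int nz)
{
    int i, j;
    for (j = 0; j < nz; ++j) {
        float za = z0 + (z1 - z0) * j       / nz;
        float zb = z0 + (z1 - z0) * (j + 1) / nz;
        glBegin(GL_QUAD_STRIP);
        for (i = 0; i <= nx; ++i) {
            float x = x0 + (x1 - x0) * i / nx;
            glVertex3f(x, 0.0f, zb);
            glVertex3f(x, 0.0f, za);
        }
        glEnd();
    }
}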

Originally posted by Coriolis:
Hey, I went to Stanford

Anyway, exponential fog frequently looks better and more atmospheric, but it is not as useful for culling distant geometry. You can still do it, but you have to make sure the fog is opaque enough at that distance that blending will yield the fog color, or you will get popping. I am also pretty sure that whether exponential fog is done per-pixel or per-vertex is implementation-dependent, just like linear fog.

Hey, the farm is still here. I graduated from Med School in '74.

Do you have a recommended setting for fog density? I cannot make it fully white, can I?

Originally posted by Overmind:
Red Hat doesn’t implement anything in OpenGL; it depends on the GL implementation, not on the Linux distribution.

Which GL implementation do you use? If it is Mesa, it will probably always calculate fog per vertex, because per-pixel fog would be too slow in software, but I am not absolutely sure.

Check the GL_VENDOR and GL_RENDERER strings to find out which GL implementation you are using. If it is Mesa, you should try to get a hardware driver for your card.

I do not know how to find GL_VENDOR or GL_RENDERER; I am using a Dell Precision 340 workstation with the latest and greatest NVIDIA card.

I do not think it is using Mesa, since I went to the FlightGear project pages and tried to install that software and ran into a problem with Mesa. I was hoping to use some of the code; I wrote to them but got no response. Anyhow, when I tried to install FlightGear it said the version of Mesa was not the latest, but it was exactly the one called for by the installation. After wasting an hour on that I gave up.

The Linux version is Red Hat 8.0.

Originally posted by jwatte:
Each graphics vendor implements their own GL under Linux (unless you use a software renderer). Thus, you should qualify your question with what brand of hardware you’re using.

You can do lots of things per-pixel using nVIDIA register combiners, which is currently available on shipping nVIDIA hardware. You can do even more things using ARB_fragment_program, which is available on shipping ATI hardware (and soon will be on nVIDIA, too).

Can you give me some more references on what the NVIDIA hardware can do? I am used to SGI and this is my first time with this approach. I can tell fog was easier on SGI, but this is the first problem I have had porting the software.

The card is: Quadro 04900 128MB

The machine itself has 1024 MB.

Originally posted by jerryyyyy:
I do not know how to find GL_VENDOR or GL_RENDERER; I am using a Dell Precision 340 workstation with the latest and greatest NVIDIA card.

I do not think it is using Mesa since…

The best way to find out is:
$ glxinfo

Btw, that’s spelt NVIDIA.

Originally posted by PK:
The best way to find out is:
$ glxinfo

Btw, that’s spelt NVIDIA.

Thanks for the info, here is the output:

[root@linux1 root]# glxinfo
name of display: :0.0
display: :0 screen: 0
direct rendering: No
server glx vendor string: SGI
server glx version string: 1.2
server glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context
client glx vendor string: SGI
client glx version string: 1.2
client glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context
GLX extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context
OpenGL vendor string: VA Linux Systems, Inc.
OpenGL renderer string: Mesa GLX Indirect
OpenGL version string: 1.2 Mesa 3.4.2
OpenGL extensions:
GL_ARB_multitexture, GL_EXT_abgr, GL_EXT_blend_color,
GL_EXT_blend_minmax, GL_EXT_blend_subtract
glu version: 1.3
glu extensions:
GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat

0x22 16 tc 0 16 0 r y . 5 6 5 0 0 16 0 0 0 0 0 0 0 None
0x23 16 tc 0 16 0 r y . 5 6 5 0 0 16 8 16 16 16 0 0 0 None

Your OpenGL apps are using Mesa (at least that’s what glxinfo is saying). You should have a huge list of extensions being reported. I use an ATI card, and here’s what mine looks like (just as an example):

name of display: :0.0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.2
server glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context
client glx vendor string: SGI
client glx version string: 1.2
client glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context,
GLX_ATI_pixel_format_float
GLX extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: Radeon 8500 DDR Pentium III (SSE)
OpenGL version string: 1.3.3477 (X4.2.0-2.5.1)
OpenGL extensions:
GL_ARB_multitexture, GL_EXT_texture_env_add, GL_EXT_compiled_vertex_array,
GL_S3_s3tc, GL_ARB_point_parameters, GL_ARB_texture_border_clamp,
GL_ARB_texture_compression, GL_ARB_texture_cube_map,
GL_ARB_texture_env_add, GL_ARB_texture_env_combine,
GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3,
GL_ARB_texture_mirrored_repeat, GL_ARB_transpose_matrix,
GL_ARB_vertex_blend, GL_ARB_vertex_program, GL_ARB_window_pos,
GL_ATI_element_array, GL_ATI_envmap_bumpmap, GL_ATI_fragment_shader,
GL_ATI_map_object_buffer, GL_ATI_texture_env_combine3,
GL_ATI_texture_mirror_once, GL_ATI_vertex_array_object,
GL_ATI_vertex_attrib_array_object, GL_ATI_vertex_streams,
GL_ATIX_texture_env_combine3, GL_ATIX_texture_env_route,
GL_ATIX_vertex_shader_output_point_size, GL_EXT_abgr, GL_EXT_bgra,
GL_EXT_blend_color, GL_EXT_blend_func_separate, GL_EXT_blend_minmax,
GL_EXT_blend_subtract, GL_EXT_clip_volume_hint,
GL_EXT_draw_range_elements, GL_EXT_fog_coord, GL_EXT_multi_draw_arrays,
GL_EXT_packed_pixels, GL_EXT_point_parameters, GL_EXT_rescale_normal,
GL_EXT_polygon_offset, GL_EXT_secondary_color,
GL_EXT_separate_specular_color, GL_EXT_stencil_wrap,
GL_EXT_texgen_reflection, GL_EXT_texture3D,
GL_EXT_texture_compression_s3tc, GL_EXT_texture_cube_map,
GL_EXT_texture_edge_clamp, GL_EXT_texture_env_combine,
GL_EXT_texture_env_dot3, GL_EXT_texture_filter_anisotropic,
GL_EXT_texture_lod_bias, GL_EXT_texture_object, GL_EXT_texture_rectangle,
GL_EXT_vertex_array, GL_EXT_vertex_shader, GL_HP_occlusion_test,
GL_NV_texgen_reflection, GL_NV_blend_square, GL_NV_occlusion_query,
GL_SGI_texture_edge_clamp, GL_SGIS_texture_border_clamp,
GL_SGIS_texture_lod, GL_SGIS_generate_mipmap, GL_SGIS_multitexture,
GL_SUN_multi_draw_arrays
glu version: 1.3
glu extensions:
GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat

0x25 24 tc 0 24 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 Slow
0x26 24 tc 0 24 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 Slow
0x27 24 tc 0 24 0 r y . 8 8 8 8 0 24 0 16 16 16 16 0 0 Slow
0x28 24 tc 0 24 0 r . . 8 8 8 8 0 24 0 16 16 16 16 0 0 Slow
0x29 24 tc 0 24 0 r y . 8 8 8 8 0 24 8 0 0 0 0 0 0 None
0x2a 24 tc 0 24 0 r . . 8 8 8 8 0 24 8 0 0 0 0 0 0 None
0x2b 24 tc 0 24 0 r y . 8 8 8 8 0 24 0 0 0 0 0 0 0 None
0x2c 24 tc 0 24 0 r . . 8 8 8 8 0 24 0 0 0 0 0 0 0 None
0x2d 24 dc 0 24 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 Slow
0x2e 24 dc 0 24 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 Slow
0x2f 24 dc 0 24 0 r y . 8 8 8 8 0 24 0 16 16 16 16 0 0 Slow
0x30 24 dc 0 24 0 r . . 8 8 8 8 0 24 0 16 16 16 16 0 0 Slow
0x31 24 dc 0 24 0 r y . 8 8 8 8 0 24 8 0 0 0 0 0 0 None
0x32 24 dc 0 24 0 r . . 8 8 8 8 0 24 8 0 0 0 0 0 0 None
0x33 24 dc 0 24 0 r y . 8 8 8 8 0 24 0 0 0 0 0 0 0 None
0x34 24 dc 0 24 0 r . . 8 8 8 8 0 24 0 0 0 0 0 0 0 None

Dan

Check out the Linux Discussion Forum. There’s a bunch of posts of people installing the new driver.

Originally posted by PK:
Check out the Linux Discussion Forum. There’s a bunch of posts of people installing the new driver.

I appreciate your information. This looks very similar to the output I get from my Latitude 640 laptop, which has an ATI card. I have a dual boot on that system and do development on it when at home. Here is that output; does it look like I have the right Mesa and other libraries?

In any case, I hear that if you use per-pixel fog it runs VERY slowly. It may be easier for me to break the scene into small polygons and do vertex fog on those polygons; this will be my approach unless I hear of a better one. I would have liked to see what FlightGear looks like, but I cannot get it to install because it says I have the wrong Mesa - but there it is?

[root@localhost democomm]# glxinfo
name of display: :0.0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.2
server glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context
client glx vendor string: SGI
client glx version string: 1.2
client glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context
GLX extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context
OpenGL vendor string: VA Linux Systems, Inc.
OpenGL renderer string: Mesa DRI Radeon 20010402 AGP 1x x86/MMX/SSE
OpenGL version string: 1.2 Mesa 3.4.2
OpenGL extensions:
GL_ARB_multitexture, GL_ARB_transpose_matrix, GL_EXT_abgr,
GL_EXT_blend_func_separate, GL_EXT_clip_volume_hint,
GL_EXT_compiled_vertex_array, GL_EXT_histogram, GL_EXT_packed_pixels,
GL_EXT_polygon_offset, GL_EXT_rescale_normal, GL_EXT_stencil_wrap,
GL_EXT_texture3D, GL_EXT_texture_env_add, GL_EXT_texture_env_combine,
GL_EXT_texture_env_dot3, GL_EXT_texture_object, GL_EXT_texture_lod_bias,
GL_EXT_vertex_array, GL_MESA_window_pos, GL_MESA_resize_buffers,
GL_NV_texgen_reflection, GL_PGI_misc_hints, GL_SGIS_pixel_texture,
GL_SGIS_texture_edge_clamp
glu version: 1.3
glu extensions:
GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat

0x23 24 tc 0 24 0 r y . 8 8 8 8 0 24 0 0 0 0 0 0 0 None
0x24 24 tc 0 24 0 r y . 8 8 8 8 0 24 8 0 0 0 0 0 0 Slow
0x25 24 tc 0 24 0 r y . 8 8 8 8 0 24 0 16 16 16 16 0 0 Slow
0x26 24 tc 0 24 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 Slow
0x27 24 dc 0 24 0 r y . 8 8 8 8 0 24 0 0 0 0 0 0 0 None
0x28 24 dc 0 24 0 r y . 8 8 8 8 0 24 8 0 0 0 0 0 0 Slow
0x29 24 dc 0 24 0 r y . 8 8 8 8 0 24 0 16 16 16 16 0 0 Slow
0x2a 24 dc 0 24 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 Slow

While not the most elegant solution by a long shot, you might be able to implement it using a second texture unit. To do this, you would set up a 1D texture with coordinate generation mapped to eye-space z, so it scales into the distance. You then use the texture matrix to scale it so that the end of the texture is at the furthest range.

The nice thing about this is that if you don’t already use multitexturing, you can set it up and it will just work.

You also get very precise control over the fog, as the texture is now basically a table lookup for the function fog(dist).

Btw, if you use no textures at all, just read “texture” wherever this post says “multitexture”.
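
Here is a rough sketch of the setup I mean. The texture unit, table size, far distance and fog curve are all just example assumptions; it needs GL_ARB_multitexture, and the eye plane should be set while the modelview matrix is identity:

#define GL_GLEXT_PROTOTYPES     /* for glActiveTextureARB in <GL/glext.h> */
#include <GL/gl.h>
#include <GL/glext.h>
#include <math.h>

#define FOG_TEX_SIZE 256
#define FOG_FAR      5000.0f    /* distance mapped to the last texel (assumption) */

/* Example fog curve: EXP2-style blend amount; the density is an assumption. */
static float my_fog_curve(float dist)
{
    float d = dist * 0.0007f;
    return 1.0f - expf(-d * d);
}

/* Call once with a current context and an identity modelview matrix. */
static void setup_fog_texture(void)
{
    GLubyte table[FOG_TEX_SIZE];
    GLuint  fog_tex;
    GLfloat plane[4]    = { 0.0f, 0.0f, -1.0f / FOG_FAR, 0.0f };
    GLfloat fogColor[4] = { 0.8f, 0.8f, 0.8f, 1.0f };
    int     i;

    /* Build the fog(dist) lookup table. */
    for (i = 0; i < FOG_TEX_SIZE; ++i) {
        float dist = FOG_FAR * i / (FOG_TEX_SIZE - 1);
        table[i] = (GLubyte)(255.0f * my_fog_curve(dist));
    }

    glActiveTextureARB(GL_TEXTURE1_ARB);
    glGenTextures(1, &fog_tex);
    glBindTexture(GL_TEXTURE_1D, fog_tex);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_LUMINANCE, FOG_TEX_SIZE, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, table);

    /* Generate s from eye-space depth: with an identity modelview here,
     * the plane (0, 0, -1/far, 0) maps distance [0, far] to s in [0, 1]. */
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGenfv(GL_S, GL_EYE_PLANE, plane);
    glEnable(GL_TEXTURE_GEN_S);

    /* Blend each fragment toward the fog color by the looked-up amount. */
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_BLEND);
    glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, fogColor);

    glEnable(GL_TEXTURE_1D);
    glActiveTextureARB(GL_TEXTURE0_ARB);    /* back to the base texture unit */
}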

HTH
nich

Originally posted by jerryyyyy:
… it says I have the wrong Mesa- but there it is?

That’s the point: you should NOT be using Mesa, you should be using the OpenGL implementation from your hardware vendor.

You said you have an NVIDIA card, so go to www.nvidia.com and download the latest Linux driver. Then your application will use hardware rendering, so per-pixel work will run a lot faster.

Unfortunately, I tried to install that driver and have run into a host of problems. These are detailed on the Linux programming side of the forum, but after finding the right driver and installing it carefully I now get no graphics and the following:

GLUT: Fatal error in XXXXXprogram: visual with necessary capabilities not found.

Ideas anyone?