rom "beginner forum:" glGenBuffers causes exit

I am testing some CUDA code, running on a remote Linux machine and displaying back to my desktop Macintosh or desktop Linux machine. The code is the basic_interop.cu example from the book CUDA By Example, by Sanders et al. When my code calls glGenBuffers(), it exits with the following error. Other GPU code from the book has been working fine, but then again, none of it calls glGenBuffers(). Can anyone help me figure out why this is occurring? Thanks!

Xlib: extension "NV-GLX" missing on display "edge83:13.0".
X Error of failed request: BadRequest (invalid request code or no such operation)
Major opcode of failed request: 150 (GLX)
Minor opcode of failed request: 187 ()
Serial number of failed request: 34
Current serial number in output stream: 34

Here is the body of main up until the fatal error:

int main( int argc, char **argv ) {
    cudaDeviceProp prop;
    int dev;

    memset( &prop, 0, sizeof( cudaDeviceProp ) );
    prop.major = 1;
    prop.minor = 0;
    HANDLE_ERROR( cudaChooseDevice( &dev, &prop ) );

    // tell CUDA which dev we will be using for graphics interop
    // from the programming guide: interoperability with OpenGL
    // requires that the CUDA device be specified by
    // cudaGLSetGLDevice() before any other runtime calls

    HANDLE_ERROR( cudaGLSetGLDevice( dev ) );

    // these GLUT calls need to be made before the other OpenGL
    // calls, else we get a seg fault
    glutInit( &argc, argv );
    glutInitDisplayMode( GLUT_DOUBLE | GLUT_RGBA );
    glutInitWindowSize( DIM, DIM );
    glutCreateWindow( "bitmap" );

    glBindBuffer    = (PFNGLBINDBUFFERARBPROC)GET_PROC_ADDRESS( "glBindBuffer" );
    glDeleteBuffers = (PFNGLDELETEBUFFERSARBPROC)GET_PROC_ADDRESS( "glDeleteBuffers" );
    glGenBuffers    = (PFNGLGENBUFFERSARBPROC)GET_PROC_ADDRESS( "glGenBuffers" );
    glBufferData    = (PFNGLBUFFERDATAARBPROC)GET_PROC_ADDRESS( "glBufferData" );

    // the first three are standard OpenGL, the 4th is the CUDA registration
    // of the bitmap; these calls exist starting in OpenGL 1.5
    glGenBuffers( 1, &bufferObj );   // <------ ERROR OCCURS RIGHT HERE ************
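For anyone reproducing this: a minimal guard just before that call might make the failure mode clearer. This sketch is an addition (not part of the book's code) and assumes <stdio.h>, <stdlib.h>, <string.h> plus the fetched pointers above:

    // sketch, not from the book: verify buffer objects are actually
    // available on this context before calling glGenBuffers()
    const char *version    = (const char *)glGetString( GL_VERSION );
    const char *extensions = (const char *)glGetString( GL_EXTENSIONS );

    if (glGenBuffers == NULL ||
        (atof( version ) < 1.5 &&
         strstr( extensions, "GL_ARB_vertex_buffer_object" ) == NULL)) {
        fprintf( stderr, "buffer objects not available on this context\n" );
        exit( 1 );
    }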

More information:
There is no console on the remote machine, unfortunately; it's the backend of a 200-node visualization cluster.

rcook@edge83 (chapter08): ./a.out
X Error of failed request: BadRequest (invalid request code or no such operation)
Major opcode of failed request: 128 (GLX)
Minor opcode of failed request: 187 ()
Serial number of failed request: 38
Current serial number in output stream: 38

Here is the output from glxinfo displaying to that machine:

name of display: edge83:18.0
display: edge83:18 screen: 0
direct rendering: No
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
server glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SGI_video_sync, GLX_SGI_swap_control,
GLX_EXT_swap_control, GLX_EXT_texture_from_pixmap, GLX_ARB_create_context,
GLX_ARB_create_context_profile, GLX_ARB_multisample, GLX_NV_float_buffer,
GLX_ARB_fbconfig_float, GLX_EXT_framebuffer_sRGB
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
client glx extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_EXT_import_context, GLX_SGI_video_sync,
GLX_NV_swap_group, GLX_NV_video_out, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer,
GLX_SGI_swap_control, GLX_EXT_swap_control, GLX_ARB_create_context,
GLX_ARB_create_context_profile, GLX_NV_float_buffer,
GLX_ARB_fbconfig_float, GLX_EXT_fbconfig_packed_float,
GLX_EXT_texture_from_pixmap, GLX_EXT_framebuffer_sRGB,
GLX_NV_present_video, GLX_NV_copy_image, GLX_NV_multisample_coverage,
GLX_NV_video_capture, GLX_EXT_create_context_es2_profile,
GLX_ARB_create_context_robustness
GLX version: 1.4
GLX extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SGI_video_sync, GLX_SGI_swap_control,
GLX_EXT_swap_control, GLX_EXT_texture_from_pixmap, GLX_ARB_create_context,
GLX_ARB_create_context_profile, GLX_ARB_multisample, GLX_NV_float_buffer,
GLX_ARB_fbconfig_float, GLX_EXT_framebuffer_sRGB,
GLX_ARB_get_proc_address
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: Quadro NVS 135M/PCI/SSE2
OpenGL version string: 3.2.0 NVIDIA 195.36.24
OpenGL extensions:
GL_ARB_color_buffer_float, GL_ARB_compatibility,
GL_ARB_depth_buffer_float, GL_ARB_depth_clamp, GL_ARB_depth_texture,
GL_ARB_draw_buffers, GL_ARB_fragment_coord_conventions,
GL_ARB_fragment_program, GL_ARB_fragment_program_shadow,
GL_ARB_fragment_shader, GL_ARB_framebuffer_object,
GL_ARB_framebuffer_sRGB, GL_ARB_half_float_vertex, GL_ARB_imaging,
GL_ARB_multitexture, GL_ARB_occlusion_query, GL_ARB_point_parameters,
GL_ARB_point_sprite, GL_ARB_provoking_vertex, GL_ARB_shading_language_100,
GL_ARB_shadow, GL_ARB_texture_border_clamp, GL_ARB_texture_compression,
GL_ARB_texture_compression_rgtc, GL_ARB_texture_cube_map,
GL_ARB_texture_env_add, GL_ARB_texture_env_combine,
GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3,
GL_ARB_texture_float, GL_ARB_texture_mirrored_repeat,
GL_ARB_texture_non_power_of_two, GL_ARB_texture_rectangle,
GL_ARB_texture_rg, GL_ARB_vertex_program, GL_ARB_window_pos,
GL_ATI_draw_buffers, GL_ATI_texture_float, GL_ATI_texture_mirror_once,
GL_S3_s3tc, GL_EXT_texture_env_add, GL_EXT_abgr, GL_EXT_bgra,
GL_EXT_blend_color, GL_EXT_blend_func_separate, GL_EXT_blend_minmax,
GL_EXT_blend_subtract, GL_EXT_Cg_shader, GL_EXT_draw_range_elements,
GL_EXT_fog_coord, GL_EXTX_framebuffer_mixed_formats,
GL_EXT_framebuffer_object, GL_EXT_framebuffer_sRGB,
GL_EXT_gpu_program_parameters, GL_EXT_multi_draw_arrays,
GL_EXT_packed_float, GL_EXT_packed_pixels, GL_EXT_provoking_vertex,
GL_EXT_rescale_normal, GL_EXT_secondary_color,
GL_EXT_separate_specular_color, GL_EXT_shadow_funcs, GL_EXT_stencil_wrap,
GL_EXT_texture3D, GL_EXT_texture_array, GL_EXT_texture_compression_latc,
GL_EXT_texture_compression_rgtc, GL_EXT_texture_compression_s3tc,
GL_EXT_texture_cube_map, GL_EXT_texture_edge_clamp,
GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3,
GL_EXT_texture_filter_anisotropic, GL_EXT_texture_lod,
GL_EXT_texture_lod_bias, GL_EXT_texture_mirror_clamp,
GL_EXT_texture_object, GL_EXT_texture_shared_exponent,
GL_EXT_texture_sRGB, GL_EXT_texture_swizzle, GL_EXT_timer_query,
GL_EXT_vertex_array, GL_EXT_vertex_array_bgra, GL_IBM_rasterpos_clip,
GL_IBM_texture_mirrored_repeat, GL_KTX_buffer_region, GL_NV_blend_square,
GL_NV_copy_depth_to_color, GL_NV_depth_clamp, GL_NV_float_buffer,
GL_NV_fog_distance, GL_NV_fragment_program, GL_NV_fragment_program_option,
GL_NV_fragment_program2, GL_NV_geometry_shader4, GL_NV_gpu_program4,
GL_NV_light_max_exponent, GL_NV_multisample_coverage,
GL_NV_multisample_filter_hint, GL_NV_packed_depth_stencil,
GL_NV_parameter_buffer_object2, GL_NV_register_combiners,
GL_NV_texgen_reflection, GL_NV_texture_compression_vtc,
GL_NV_texture_env_combine4, GL_NV_texture_expand_normal,
GL_NV_texture_rectangle, GL_NV_texture_shader, GL_NV_texture_shader2,
GL_NV_texture_shader3, GL_NV_vertex_program1_1, GL_NV_vertex_program2,
GL_NV_vertex_program2_option, GL_NV_vertex_program3,
GL_NVX_gpu_memory_info, GL_SGIS_generate_mipmap, GL_SGIS_texture_lod,
GL_SGIX_depth_texture, GL_SGIX_shadow, GL_SUN_slice_accum

More information here: http://www.opengl.org/discussion_boards/…amp;#Post287916

After thinking about it, and from what you said on the beginner forum: if you're not sure whether it works on a local display, you should check that first.

Second, and hopefully a more accurate remark: I think the function addresses are not the same between your server and client. So you may be getting the address for one card while the call is made on another card. I'm not sure it's coherent, but this is one of the things I suspect about your issue.
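One quick way to test that is to compare the client- and server-side GLX strings. A standalone sketch (glXGetClientString() and glXQueryServerString() are standard GLX 1.1 queries; build with something like gcc glxcheck.c -lGL -lX11):

    #include <stdio.h>
    #include <GL/glx.h>

    int main( void ) {
        Display *dpy = XOpenDisplay( NULL );
        if (dpy == NULL) {
            fprintf( stderr, "cannot open display\n" );
            return 1;
        }
        int screen = DefaultScreen( dpy );

        // client-side libGL versus the X server's GLX implementation
        printf( "client: %s %s\n",
                glXGetClientString( dpy, GLX_VENDOR ),
                glXGetClientString( dpy, GLX_VERSION ) );
        printf( "server: %s %s\n",
                glXQueryServerString( dpy, screen, GLX_VENDOR ),
                glXQueryServerString( dpy, screen, GLX_VERSION ) );

        XCloseDisplay( dpy );
        return 0;
    }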

I never said everything works fine without remote display.

You should really check it on local display.

This sounds like a job for… VirtualGL!
http://www.virtualgl.org/About/Background

There is no console on this machine. It’s a compute cluster. So local display is not an option!

As far as the addresses of the functions, I'm not sure I understand. Are you saying the OpenGL libraries on the remote client must match the OpenGL libraries of the X server you are displaying to for these calls to work?
One of the experiments I have done is to call the functions directly, without fetching the addresses into variables. The behavior does not change; the program still exits when it calls glGenBuffers():

void glinit( void ) {
    glGenBuffers( 1, &bufferObj );
    glBindBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, bufferObj );
    glBufferData( GL_PIXEL_UNPACK_BUFFER_ARB, DIM * DIM * 4,
                  NULL, GL_DYNAMIC_DRAW_ARB );
}

static void key_func( unsigned char key, int x, int y ) {
    if (first) {
        glinit();
        first = 0;
    }
    switch (key) {
        case 27:
            // clean up OpenGL and CUDA
            glBindBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, 0 );
            glDeleteBuffers( 1, &bufferObj );
    }
}

static void draw_func( void ) {
    if (first) {
        glinit();
        first = 0;
    }
    // we pass zero as the last parameter, because our bufferObj is now
    // the source, and the field switches from being a pointer to a
    // bitmap to now mean an offset into a bitmap object
    glDrawPixels( DIM, DIM, GL_RGBA, GL_UNSIGNED_BYTE, 0 );
    glutSwapBuffers();
}

int main( int argc, char **argv ) {
    int dev;

    // these GLUT calls need to be made before the other OpenGL
    // calls, else we get a seg fault
    glutInit( &argc, argv );
    glutInitDisplayMode( GLUT_DOUBLE | GLUT_RGBA );
    glutInitWindowSize( DIM, DIM );
    glutCreateWindow( "bitmap" );

    // set up GLUT and kick off main loop
    glutKeyboardFunc( key_func );
    glutDisplayFunc( draw_func );
    glutMainLoop();
}

As far as the addresses of the functions, I'm not sure I understand. Are you saying the OpenGL libraries on the remote client must match the OpenGL libraries of the X server you are displaying to for these calls to work?

When you get a function pointer, you get the address of the library's implementation of that function, which in turn calls the driver, which talks directly to the hardware; it's all done through addresses.
So, I think I'm right here.

Anyway, as already said, I'm almost sure it's impossible to get hardware acceleration with NVIDIA cards under Linux without direct rendering enabled.
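You can check that from inside the GLUT app itself; glXIsDirect(), glXGetCurrentDisplay() and glXGetCurrentContext() are standard GLX entry points, and this helper is only a sketch:

    #include <stdio.h>
    #include <GL/glx.h>

    // sketch: call after glutCreateWindow(), once a context is current,
    // to see whether GLX gave you direct or indirect rendering
    static void report_rendering_mode( void ) {
        Display    *dpy = glXGetCurrentDisplay();
        GLXContext  ctx = glXGetCurrentContext();
        printf( "direct rendering: %s\n",
                glXIsDirect( dpy, ctx ) ? "Yes" : "No" );
    }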

It sure might! Thanks.

I have reported this as a bug to NVIDIA and they changed the status to "Dev - Open - To fix", so I think it's a real bug. I have not heard from anyone that glGenBuffers() is not appropriate for indirect rendering. Do you know where I could find a list of OpenGL calls that are OK for direct but not indirect rendering? Thanks!

OK. Let me know about the result of the bug report.

Will do. I'm really curious about the answer to my question, too. Is there a list somewhere of OpenGL calls that are not OK to use with indirect rendering? My understanding is that libGL "GLX-izes" calls when needed, but it certainly makes sense that some might not be GLX-izable…

OK, it turns out that glGenBuffers() is not supported by the NVIDIA drivers via indirect rendering. It's a bug that it crashes applications, so they will change their code to issue an error instead and keep going. I have found a workaround: I can just launch an X server on the backend nodes, and then the code runs as designed. I cannot see what it's doing, but it does not crash, and for purposes of mixed-mode OpenGL and CUDA this is a satisfactory result, I think.
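Once the driver is changed to report an error instead of killing the app, a defensive init could check glGetError() and fall back. A sketch (use_pbo is a hypothetical flag the drawing code would consult; whether the failure actually surfaces through glGetError() depends on the fix NVIDIA ships):

    glGetError();                    // clear any stale error first
    glGenBuffers( 1, &bufferObj );
    if (glGetError() != GL_NO_ERROR) {
        use_pbo = 0;                 // fall back to drawing from client memory
    }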