OpenGL Performance Benchmarks

OpenGL is the industry's most widely used, supported, and best documented 2D/3D graphics API, which makes it inexpensive and easy to obtain information on implementing OpenGL in hardware and software. In addition to extensive documentation, there are coding examples for every platform and topic, FAQs, open source libraries, Java bindings, and developer tools and benchmarks.

The OpenGL Performance Characterization (OPC) Project

The non-profit OPC project provides unambiguous methods for comparing the performance of OpenGL implementations across vendor platforms, operating systems, and windowing environments. Operating systems covered by the OPC group include, but are not limited to, OS/2, UNIX, Windows NT, and Windows 95. Windowing environments include, but are not limited to, Presentation Manager, Windows, and X. The intention of the OPC group is to characterize the graphics performance of computer systems running applications, not overall graphics performance.

The OPC group has established the following goals to define OpenGL performance:

  • To permit standardized OpenGL performance measurement and evaluation by creating unambiguous, vendor-neutral measures for product evaluation and comparison.
  • To provide mechanisms that enable vendors, customers and others to perform their own performance measurements and evaluations.
  • To provide formal beta and final software releases to the public in a timely fashion.
  • To develop a list of all those directly and materially affected by the benchmark specifications, and to offer the specifications for public review by all such parties with the possibility that, if there is consensus, the specifications might be offered for consideration by ANSI and other standards bodies.
  • To contribute to the coherence of the field of OpenGL performance measurement and evaluation so that vendors will be better able to present well-defined measures, and customers will be better able to compare and evaluate vendors' products and environments.

The Principal Benchmarks

  1. SPECViewperf 8 Benchmark
  2. SPECapc Benchmark
  3. SPECglperf - Retired

The SPECViewperf 8 Benchmark

The first benchmark released by the OpenGL Performance Characterization (OPC) group is SPECViewperf, which measures the 3D rendering performance of systems running OpenGL.

Viewperf is a portable OpenGL performance benchmark program written in C. It was originally developed by IBM. Viewperf provides a great deal of flexibility in benchmarking OpenGL performance. Currently, the program runs on most implementations of UNIX, Windows NT, Windows 95, and OS/2.

The OpenGL Performance Characterization (OPC) project group has endorsed Viewperf as its first OpenGL benchmark. Performance numbers based on Viewperf were first published in the Q4 1994 issue of The GPC Quarterly. OPC group member companies have ported the Viewperf code to their operating systems and window environments. The OPC project group maintains a single source code version of the Viewperf code that is available to the public.

Benchmarking with SPECViewperf

SPECViewperf parses command lines and data files, sets the rendering state, and converts data sets to a format that can be traversed using OpenGL rendering calls. It renders the data set for a pre-specified amount of time or number of frames, animating between frames, and then outputs the results.

Viewperf reports performance in frames per second. Other information about the system under test -- all the rendering states, the time to build display lists (if applicable), and the data set used -- is also output in a standardized report.

A "benchmark" using Viewperf is really a single invocation of Viewperf with command line options telling the program which data set to read in, which texture file to use, which OpenGL primitive to use to render the data set, which attributes to apply and how frequently, whether or not to use display lists, and so on. One quickly realizes there is an effectively unlimited number of Viewperf "benchmarks": an unbounded number of data sets multiplied by an almost unbounded number of command line states.
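
The measurement loop this describes can be sketched in a few lines of C. The following is illustrative only, not the actual Viewperf source: draw_data_set() is a hypothetical stand-in for whatever OpenGL traversal the command line options select, and a current OpenGL context is assumed.

    #include <GL/gl.h>
    #include <stdio.h>
    #include <time.h>

    extern void draw_data_set(void);  /* hypothetical: renders one frame */

    void benchmark(int frames)
    {
        clock_t start;
        double  seconds;
        int     i;

        start = clock();
        for (i = 0; i < frames; i++) {
            draw_data_set();                   /* issue the rendering calls */
            glRotatef(1.0f, 0.0f, 1.0f, 0.0f); /* animate between frames    */
        }
        glFinish();          /* wait for rendering to complete before timing */
        seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("%.2f frames/sec\n", frames / seconds);
    }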

Real-World Benchmarking with Viewsets

OPC project group members recognize the importance of real-world benchmarks. From the beginning, the group has sought benchmarks representative of the OpenGL rendering portion of independent software vendor (ISV) applications. Along these lines, the project group has come up with what it calls a viewset. A viewset is a group of individual runs of Viewperf that attempt to characterize the graphics rendering portion of an ISV's application.

Viewsets are not developed by the OPC project group; they come from the ISVs themselves. Members of the OPC project group often "sponsor" the ISV. Sponsorship entails helping the ISV in several areas, including how to obtain the Viewperf code, how to convert data sets to a Viewperf format, how to use Viewperf, how to create Viewperf tests that characterize the application, how to determine weights for each of the individual Viewperf tests based on application usage, and finally how to offer the viewset to the OPC project group for consideration as a standard OPC viewset. Any ISV wishing to develop a viewset should contact an OPC representative listed on the GPC Organization page.

Currently, there are 8 OPC viewsets:

  • Discreet's 3ds Max, which contains 14 different tests.
  • Dassault Systemes's CATIA, which has 11 different tests, is a visualization application.
  • CEI's EnSight, which has 9 different tests, replaces Data Explorer.
  • Alias's Maya V5, with 9 tests, is an animation application.
  • Lightscape Technology's Lightscape Visualization System, with 5 tests, is a radiosity visualization application.
  • PTC's Pro/ENGINEER, with 7 tests.
  • Dassault Systemes's SolidWorks, with 9 tests.
  • UGS's Unigraphics V17, with 8 tests, covers shaded, shaded-with-transparency, and wireframe rendering.
All viewsets represent relatively high-end applications. These applications typically render large data sets, and they almost always include lighting, smooth shading, blending, line antialiasing, z-buffering, and some texture mapping (see the sketch below).
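
That combination of state maps onto a handful of OpenGL calls. Here is a minimal sketch in C, illustrative only and not taken from any actual viewset:

    #include <GL/gl.h>

    /* Enable the state a typical high-end viewset exercises. */
    void enable_typical_viewset_state(void)
    {
        glEnable(GL_LIGHTING);                 /* lighting                 */
        glEnable(GL_LIGHT0);                   /* one enabled light source */
        glShadeModel(GL_SMOOTH);               /* smooth (Gouraud) shading */
        glEnable(GL_BLEND);                    /* blending                 */
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_LINE_SMOOTH);              /* line antialiasing        */
        glEnable(GL_DEPTH_TEST);               /* z-buffering              */
        glEnable(GL_TEXTURE_2D);               /* texture mapping          */
    }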

For more information and to download the Viewperf Code, please visit the OPC web site.

The SPECapc Benchmark

Within SPEC's Graphics Performance Characterization (GPC) Group there was a strong belief that it is important to benchmark graphics performance using actual applications. The SPECapc benchmarks are a broad set of standardized benchmarks for graphics-intensive applications. They are not OpenGL-only tests; they also measure interface and CPU performance.

Currently, there are six SPECapc Benchmarks:

  • 3ds max 6 measures performance based on the workload of a typical user, including functions such as wireframe modeling, shading, texturing, lighting, blending, inverse kinematics, object creation and manipulation, editing, scene creation, particle tracing, animation and rendering.
  • Pro/ENGINEER 2000i uses a complex model of a race car assembly to exercise all areas of system performance relevant to Pro/E users. Eight tests are run to measure performance in five categories: CPU, I/O, wireframe graphics, shaded graphics, and file time. Scores are compiled for individual tests, then calculated as weighted composites for each of the five categories and as an overall composite.
  • SolidWorks 2005 comprises eight tests: I/O-intensive operations, CPU-intensive operations, and six different graphics tests.
  • Solid Edge V14 represents typical user operations for evaluating the performance of systems running Solid Edge V14. Using four typical types of models, it measures graphics, file I/O, and CPU performance.
  • Maya 5 tests four models: a werewolf, a human hand, an insect, and a squid. The models are rendered and displayed in the five different modes used in Maya 5: wireframe, Gouraud-shaded, texture, texture highlighted with a wireframe mesh, and texture selected (texture with wireframe mesh and control points). The benchmark is unique in its ability to test performance for large texture sizes and multiple viewports.
  • Maya 6 includes four scenarios created in Maya 6 that enable users to evaluate scene drawing and playback performance, CPU-intensive operations, and standard I/O performance.

For more information and to download the SPECapc Code, please visit the SPECapc area of the GPC web site.

The SPECglperf Benchmark - Retired

The SPECglperf benchmark has been retired and results are no longer published. However, the benchmark is still available for download from the mirror FTP server.

The OPC project group has approved a set of 13 GLperf scripts for reporting results: 10 RGB scripts and 3 color-index scripts. The scripts are further divided by functionality. The OPClist scripts (RGB and color index) contain a number of tests for a variety of graphics primitives and other operations (such as window clears); these tests are probably the closest parallel to the primitive-level results available from most vendors today. Other scripts exercise specific graphics operations: CopyPixl.rgb, DrawPixl.rgb, ReadPixl.rgb, and TexImage.rgb measure glCopyPixels, glDrawPixels, glReadPixels, and glTexImage2D RGB operations, respectively. DrawPixl.ndx and ReadPixl.ndx are the color-index analogs of DrawPixl.rgb and ReadPixl.rgb.
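
As an example of what a pixel-operation script exercises, here is a sketch in C of timing glDrawPixels throughput in the spirit of DrawPixl.rgb. It is illustrative only, not the GLperf source, and assumes a current OpenGL context.

    #include <GL/gl.h>
    #include <stdio.h>
    #include <time.h>

    /* Time 'reps' glDrawPixels calls on a w-by-h RGB image and
       report throughput in pixels per second. */
    void time_draw_pixels(const GLubyte *image, int w, int h, int reps)
    {
        clock_t start = clock();
        double  seconds;
        int     i;

        for (i = 0; i < reps; i++)
            glDrawPixels(w, h, GL_RGB, GL_UNSIGNED_BYTE, image);
        glFinish();              /* drain the pipeline before stopping */

        seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("%.0f pixels/sec\n", (double)reps * w * h / seconds);
    }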

The remaining scripts address underlying graphics concepts that affect OpenGL performance: BgnEnd.rgb measures how performance varies with the number of primitives batched together in a glBegin/glEnd pair, FillRate.rgb measures how fast rasterization operations are performed (how many pixels are drawn per second), Light.rgb measures the effect of the number of enabled light sources on drawing a particular primitive, and LineFill.rgb and TriFill.rgb measure the effect of increasing primitive size on the drawing rates of lines and triangles, respectively.
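
The batching effect that BgnEnd.rgb explores comes down to how many primitives are issued per glBegin/glEnd pair. A sketch in C of the two extremes (again illustrative, not GLperf code):

    #include <GL/gl.h>

    /* 'v' points to 3*ntris vertices, three per triangle. */
    void draw_batched(const GLfloat (*v)[3], int ntris)
    {
        int i;

        glBegin(GL_TRIANGLES);            /* one begin/end for the batch */
        for (i = 0; i < 3 * ntris; i++)
            glVertex3fv(v[i]);
        glEnd();
    }

    void draw_unbatched(const GLfloat (*v)[3], int ntris)
    {
        int i;

        for (i = 0; i < ntris; i++) {     /* one begin/end per triangle  */
            glBegin(GL_TRIANGLES);
            glVertex3fv(v[3 * i]);
            glVertex3fv(v[3 * i + 1]);
            glVertex3fv(v[3 * i + 2]);
            glEnd();
        }
    }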

SPECglperf vs. SPECViewperf

Both Viewperf and GLperf measure the graphics performance of a system through the OpenGL API. They were designed, however, with different goals in mind. While Viewperf draws an entire model with differing sizes of primitives (as you would see in an actual application), GLperf artificially assigns a specific size to every primitive drawn within a test. While Viewperf attempts to emulate what an application would do graphically and measure it, GLperf makes no such attempt. Instead, GLperf provides a more controlled environment within which to extract and measure the highest performance or "upper bound" of a particular system.

Another difference is that Viewperf reports results in frames drawn per second, whereas GLperf measures its results in primitives drawn per second, whether the primitive is pixels, points, lines, triangles or some other object. To give an analogy to the automotive world, GLperf would be the equivalent of a speedometer measuring top speed, while Viewperf would be a stopwatch measuring the average speed through a slalom course.
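
To make the difference concrete with a hypothetical example: a Viewperf test that draws a 100,000-triangle model at 20 frames per second is implicitly drawing 2 million triangles per second, averaged over primitives of many sizes; a GLperf test would report a triangles-per-second figure directly, but for a stream of identically sized triangles.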

