OpenGL and SDL

Foreword: the vast majority of this information and advice comes from the various members of the SDL mailing list. We hope that this thematic gathering will help SDL users. Many thanks to the numerous SDL contributors!

Overview

This section focuses on helping developers to use OpenGL with SDL.

Due to the amount of relevant information, this is still a work in progress and any help would be appreciated.

Table of contents

General OpenGL information

GL stands for Graphics Library. OpenGL (Open Graphics Library) is the open, vendor-neutral successor to SGI's proprietary IRIS GL.

OpenGL 1.1 is somewhat outdated. The problem is that the default Windows OpenGL library does not expose the 2.0 functions: it only supports up to 1.1.

GLU is the OpenGL Utility Library. This library is associated with OpenGL (it is usually distributed with it), and provides higher-level constructs built from basic OpenGL operations.

What OpenGL does not do

OpenGL by itself does not provide input event handling (ex: mouse), sound, window management, etc.

GLUT (the OpenGL Utility Toolkit, not to be confused with the GLU mentioned above) provides, exactly as SDL does, window management and input handling, because it is designed as a cross-platform utility toolkit for developing OpenGL programs without operating system-specific code.

For various reasons, including performance and completeness, one might prefer SDL and not use GLUT at all. Stick to GL and GLU, and you will be fine, especially if your multimedia application is a game.

Why use OpenGL?

One should estimate whether, for a given application, OpenGL is really needed, since one of its side effects is that such applications either will not work at all, or will "run" at slide-show frame rates, on machines without accelerated OpenGL. A first point is that if your application indeed relies on real 3D rendering, then OpenGL should be a good choice from the start. Other cases are less clear-cut.

The problem is that with the resolutions and frame rates people expect these days, you need hardware acceleration for many applications. OpenGL is a convenient way of accessing it since, unfortunately, most platforms either lack accelerated 2D APIs that the SDL 2D API could use, or lack 2D acceleration altogether.

So, supporting only the SDL 2D API means your high bandwidth application will only run well on the (very few) platforms that provide decent 2D acceleration. If the application is scalable enough (supports low resolutions, simplified effects etc), this may be acceptable - but a 1024x768-only full screen scroller with loads of translucent sprites will probably not be playable at all on most systems, regardless of video hardware.

But supporting only OpenGL means your application will not run at all on platforms without OpenGL. However, if this is that 1024x768 side scroller, that is probably just as well: a platform without OpenGL probably does not have enough resources for it anyway.

Of course, the best option, if you can afford it in terms of development cost and/or time, is to support both SDL 2D and OpenGL natively. The SDL 2D backend (with appropriate scalability options) will make it run pretty much anywhere, and the OpenGL backend will add insane frame rates and practically unlimited special effect capabilities.

Or you go the easy way: code for SDL 2D and use glSDL (an OpenGL-based backend for SDL) for extra speed where OpenGL is available, since it is exactly its purpose. And, to close the circle, this is why you should not use glSDL and native OpenGL at the same time: using (as in, "relying on") OpenGL directly basically eliminates the whole point of using glSDL instead of the much more powerful OpenGL API.

Finally, we see that OpenGL should be used only after careful examination of the available alternatives, depending on the needs of your application, notably when it does not rely on full-blown 3D rendering.

Using DirectX and its associated libraries could be an alternative to OpenGL, but it would remove most of the portability of your application (only targeting PCs running Windows, or the Xbox).

State machine

OpenGL is a state machine, which means you can set states and they will remain in the current context until you change them. For example, if you enable 2D texturing, it will stay enabled until, say, you want to draw solid filled triangles and call glDisable(GL_TEXTURE_2D), or until you want to switch to 1D/3D textures and call glEnable(GL_TEXTURE_1D) or glEnable(GL_TEXTURE_3D).

The same is true for all other OpenGL attributes: colors, materials, lights, matrices, etc.
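As a small illustrative sketch (the two drawing helpers are hypothetical, standing for your own rendering code):

glEnable( GL_TEXTURE_2D ) ;      /* from now on, primitives are textured...       */
drawTexturedQuads() ;            /* (hypothetical helper)                         */
glDisable( GL_TEXTURE_2D ) ;     /* ...until the texturing state is changed again */
drawFlatShadedTriangles() ;      /* (hypothetical helper)                         */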


[Back to the table of contents]


Availability of OpenGL on a specific device

OpenGL is not available (at least not in accelerated form) on all platforms (ex: PDAs, 16 bit computers, non-ps2gl Playstation2, pre-3D-boom consoles and some laptops), and not even on all systems based on platforms that do support OpenGL.

The SDL back-ends that support OpenGL are the ones for Windows, X11, Quartz (Mac OS X), maybe whatever MacOS 9 uses; no more.

Many old Windows machines have video cards and/or drivers without OpenGL support, and the situation is even worse with Linux. For example, on some old Linux workstations, OpenGL cannot be accelerated unless the secondary head is disabled.

Learning what is locally available under the X environment

Run glxinfo (see below) and look at the visuals listed at the bottom of all the data it dumps out. All the major information about the supported GLX extensions and the OpenGL renderer should be displayed. If you run glxinfo | grep direct and it prints direct rendering: Yes, then your OpenGL drivers are set up correctly. On the contrary, if glxinfo | grep 'OpenGL vendor string' returns something like OpenGL vendor string: Mesa project: www.mesa3d.org, then you only have software OpenGL.

Enabling proper hardware acceleration on GNU/Linux

More precisely, the goal is usually to activate direct (GPU-based) rendering, i.e. to get rid of

> glxinfo
Xlib:  extension "XFree86-DRI" missing on display ":0.0".
name of display::0.0  screen: 0
[...]
OpenGL vendor string: Mesa project: www.mesa3d.org
[...]
OpenGL renderer string: Mesa GLX Indirect
[...]
OpenGL version string: 1.2 (1.5 Mesa 6.2.1)
[...]
and of
> glxgears -printfps
Xlib:  extension "XFree86-DRI" missing on display ":0.0".
2501 frames in 5.2 seconds = 481.485 FPS
so that it becomes something like:
> glxinfo
name of display::0.0
display::0  screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.2
[...]
client glx vendor string: ATI
client glx version string: 1.3
[...]
GLX version: 1.2
[...]
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: MOBILITY RADEON X700 Generic
OpenGL version string: 2.0.6286 (8.33.6)
OpenGL extensions:                                                               [...]                                                                      
and
> glxgears -printfps
23879 frames in 5.0 seconds = 4775.796 FPS

The first step is to identify your video card, for example with lspci | grep VGA; it should output something like 0000:01:00.0 VGA compatible controller: ATI Technologies Inc: Unknown device 5653 or, if your OS database is up-to-date, 0000:01:00.0 VGA compatible controller: ATI Technologies Inc Radeon Mobility X700 (PCIE)

The procedure is then to refer to your distribution guide to set up the proper drivers, libraries, kernel modules, xorg.conf etc., which is not always a piece of cake.

Note that using OpenGL under most Unix systems (except OS X) usually implies using GLX contexts under the X windowing system. Other solutions are to:

GLX

Run glxinfo to know which OpenGL pixel formats are supported. Getting Couldn't find matching GLX visual means that GLX cannot find, among the listed video modes, a mode matching the required pixel format. To specify the pixel mode you want, use the SDL_GL_SetAttribute function.

Video cards, drivers & extensions

The OpenGL Hardware Registry is an online database that lists the OpenGL capabilities of a wide range of 3D accelerators. You can select an extension and see which cards/drivers support it.

Windows

Unfortunately, it is still not guaranteed on all Windows computers that OpenGL drivers are installed and that OpenGL hardware acceleration is available. The user might have to download and install the appropriate drivers.

Developing an OpenGL-based application

On most platforms, the OpenGL headers are often not installed by default (check with locate gl.h for example).

On Linux, install the correct *-dev packages through your distribution; the Mesa ones are a good choice for that.

On Windows, the OpenGL headers (gl.h and glu.h) and libraries (opengl32.lib and glu32.lib) for Visual C++ are provided with the Microsoft Platform SDK.

In order to compile a program using an OpenGL library, the related headers must be included. They can be pulled in with #include <GL/gl.h> or, if SDL is used, the SDL_opengl.h header takes care of it.
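For example (a minimal sketch; the exact include paths may vary with the installation):

/* Direct, platform-dependent includes: */
#include <GL/gl.h>
#include <GL/glu.h>

/* Or, portably, through SDL: */
#include "SDL_opengl.h"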

Autoconf

There exist autoconf scripts made to detect OpenGL [see also autoconf macro archive, GEM macro]: for OpenGL, for GLU, and for GLUT.


[Back to the table of contents]


Loading the OpenGL library

There are two ways to work with OpenGL and SDL: either load the OpenGL library dynamically at run time, or link against it (statically or dynamically) at build time.

One cannot mix the two approaches.

Dynamic library loading

The preferred approach is usually to load the OpenGL library "manually", i.e. at run time. That allows your application to load and start even if there is no OpenGL support on the system, so you can use some other API, or give the user a sensible error message: there may not even be an OpenGL library at all!

This approach also makes it possible to add an option to let the user decide what OpenGL library/driver to use.

If you are loading OpenGL at run time, all you need at build time is the OpenGL header. The rest is handled if/when your application actually tries to fire up OpenGL.

SDL has portable support for loading OpenGL function pointers: SDL_GL_LoadLibrary and SDL_GL_GetProcAddress.

Note that SDL_SetVideoMode should be called after SDL_GL_LoadLibrary and before SDL_GL_GetProcAddress.
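A minimal sketch of that sequence, assuming SDL 1.2 (the function pointer typedef and the error handling here are illustrative, not part of SDL):

#include "SDL.h"
#include "SDL_opengl.h"

typedef void ( APIENTRY * PFN_MY_GLCLEAR ) ( GLbitfield ) ;

int main( int argc, char ** argv )
{
	PFN_MY_GLCLEAR my_glClear ;

	if ( SDL_Init( SDL_INIT_VIDEO ) != 0 )
		return 1 ;

	/* NULL selects the platform default OpenGL library: */
	if ( SDL_GL_LoadLibrary( NULL ) != 0 )
		return 1 ;

	if ( SDL_SetVideoMode( 640, 480, 0, SDL_OPENGL ) == NULL )
		return 1 ;

	my_glClear = (PFN_MY_GLCLEAR) SDL_GL_GetProcAddress( "glClear" ) ;
	if ( my_glClear == NULL )
		return 1 ;

	my_glClear( GL_COLOR_BUFFER_BIT ) ;
	SDL_GL_SwapBuffers() ;

	SDL_Quit() ;
	return 0 ;
}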

See also: extensions for extension loading.

Static or dynamic library linking

Static linking

Static linking is pretty much out, unless you count on all users building from source, as some (most?) platforms have driver-specific GL libraries. OpenGL should never be linked at compile-time, because you have to check if the appropriate OpenGL version or extensions are available before using any function that depends on them. This is impossible to do prior to execution. SDL provides the system-independent SDL_GL_GetProcAddress() function to load OpenGL functions.

Dynamic linking

There is a portable way to detect the correct path of the GL library on a system: unless you want to load a specific, nonstandard OpenGL driver, just pass NULL (0) to SDL_GL_LoadLibrary and it will load the standard library on whatever platform you are on.

On X11, a NULL pointer means loading pointers from the application if linked with libGL, otherwise use libGL.so(.1).

On Win32, a NULL pointer means loading opengl32.dll. Keep in mind that some Windows machines do not have any traces of OpenGL whatsoever.

A release could provide the user with two binaries: one with OpenGL support (which will not even load without the OpenGL library, say opengl32.dll on Windows), and one fall-back binary that does not use OpenGL at all.

On UNIX environments, a wrapper script could be used to select the appropriate executable, based on the OpenGL libraries found on the user system.

With MinGW, the link library order could be:

-Lmingw32/lib -lmingw32 -lSDLmain -lSDL -lSDL_sound -lSDL_image -lSDL_net -lglu32 -lopengl32 -mwindows


[Back to the table of contents]


Managing OpenGL extensions

To know which extensions are locally available, check the string returned by glGetString( GL_EXTENSIONS ). To load extensions (functions and constants that are only available in OpenGL 1.2 or higher), one can call SDL_GL_GetProcAddress directly or use one of the extension managers.

Loading the relevant extensions yourself

You will have to brew your own extension code using a header file and SDL_GL_GetProcAddress. Just grabbing the functions is not that much work anyway.

Grabbing only the ones you actually use could be nice if you are using functions that may not be provided by all drivers - but then again, a helper library could just wire the stubs to some function that grabs the real function if/when it's hit the first time.

The OpenGL registry lists, among other things, the function prototypes and defines you need to load the appropriate extensions, in case you need a bleeding-edge one that is not in SDL_opengl.h. One can get glext.h and either glxext.h or wglext.h, depending on the OS used.

You may ease the work by using define-directives:

#define INIT_ENTRY_POINT( funcname , type ) \
	funcname = (type) SDL_GL_GetProcAddress( #funcname ) ; \
	if ( ! funcname ) cerr << #funcname << "() not initialized" << endl ;

This works like this:

INIT_ENTRY_POINT( glMultiTexCoord1dARB , PFNGLMULTITEXCOORD1DARBPROC ) ;

Another solution would be to declare GL_GLEXT_PROTOTYPES:

#define GL_GLEXT_PROTOTYPES 1
#include <GL/glext.h>

This will declare typedefs for all GL extension functions. For example, PFNGLACTIVETEXTUREARBPROC is a typedef for the glActiveTextureARB() function type. So, to bind your functions, you may use:

PFNGLACTIVETEXTUREARBPROC glActiveTextureARB ;
*(void**) & glActiveTextureARB = SDL_GL_GetProcAddress( "glActiveTextureARB" ) ;

In all cases, calls to SDL_GL_GetProcAddress must occur after the call to SDL_SetVideoMode for the extension to work: SDL_GL_GetProcAddress can be used only when an OpenGL context exists, and such a context is created by SDL_SetVideoMode( SDL_OPENGL | ... ).

Extension loaders

They do the work for you: bloat or useful help?


[Back to the table of contents]


SDL OpenGL flags

SDL_OPENGL & SDL_INTERNALOPENGL

A surface can, in effect, be in one of three major modes: SDL_SWSURFACE, SDL_HWSURFACE or SDL_OPENGL. When using OpenGL, a lot of the SDL graphics functionality makes no sense. This means SDL_HWSURFACE and SDL_SWSURFACE are meaningless, and so is SDL_HWPALETTE. SDL_BlitSurface cannot be used with OpenGL.

There has been an internal change to the semantics of the SDL_OPENGL flag: backends now use SDL_INTERNALOPENGL to tell the difference between an OpenGL mode and a normal mode. This incurs no change for applications, which still use and query the SDL_OPENGL flag. This flag previously meant two things:

  1. the window is handled by OpenGL
  2. the application uses an OpenGL window

For the glSDL backend, it had to be split. Now SDL_OPENGL simply means "the application uses an OpenGL window" and SDL_INTERNALOPENGL means "the window is handled by OpenGL".

SDL_OPENGLBLIT

Do not use the deprecated SDL_OPENGLBLIT mode, which used to allow both blitting and using OpenGL; it has been deprecated for quite a few reasons. Under numerous circumstances, using SDL_OPENGLBLIT can corrupt your OpenGL state.

One can look at the source of SDL (the opengl_blit code is still in there), or at glSDL for sample code. Otherwise one can read the OpenGL documentation.


[Back to the table of contents]


Managing OpenGL attributes

Setting them

The point is you set some GL attributes before creating the window with SDL_SetVideoMode, and it Just Works in SDL.

For example, to get a multisample (FSAA) context, do this:

SDL_GL_SetAttribute( SDL_GL_MULTISAMPLEBUFFERS, 1 ) ;
SDL_GL_SetAttribute( SDL_GL_MULTISAMPLESAMPLES, 2 ) ;
SDL_Surface * screen = SDL_SetVideoMode( 640, 480, 0, SDL_OPENGL ) ;

Checking whether they have been accepted

After the window is created, you can see if you got what you wanted:

   int buffers, samples ;

   SDL_GL_GetAttribute( SDL_GL_MULTISAMPLEBUFFERS, & buffers ) ;
   SDL_GL_GetAttribute( SDL_GL_MULTISAMPLESAMPLES, & samples ) ;

   if ( buffers == 0 || samples == 0 )
   {
	 /*
	  * You did not get an FSAA context: probably older hardware, or you
	  * asked for more than one buffer, or you asked for some insane
	  * number of samples (2, 4, or 8 is about it).
	  */
	 [..]
   }
   else
   {
	 // FSAA was enabled, success!
	 [..]
   }


[Back to the table of contents]


Blits between SDL & OpenGL

You cannot mix the SDL 2D API (notably standard blitting with SDL_BlitSurface and updaterects routines) with OpenGL. When you use OpenGL, you have to use OpenGL for everything that touches a visible buffer. OpenGL thinks it owns the window, and there is just no reasonable way to convince it otherwise.

Modern hardware wants you to put everything to the video card once and then let the card work with it every frame. In OpenGL, all textures are stored in video memory, whereas in SDL they can be either in system or video memory.

SDL surface to OpenGL framebuffer

Blitting directly to a screen that has SDL_OPENGL set is not possible, for example in order to render an overlay help screen: as mentioned earlier, SDL 2D API and OpenGL should not be used at the same time.

Thus if you initialize OpenGL, you are supposed to subsequently draw using OpenGL primitives, i.e. gl* calls. You can however use the SDL 2D API to manipulate images in memory before you hand them off to OpenGL as textures: instead of blitting as usual, upload the surface as a texture [more info], and draw a quad (GL_QUADS) with this texture.

A less efficient and somewhat different method would be to use glRasterPos and glDrawPixels to draw the image directly into the framebuffer. However, even if it is card- and data-size-dependent, glDrawPixels is deemed slower: creating a texture has been measured to be way faster than glDrawPixels on ATI cards, and a bit slower on NVidia cards. So the texture approach looks like a good compromise [more info].


[Back to blits between SDL & OpenGL]

SDL surface to OpenGL texture

Getting a texture from an SDL surface is trickier than it sounds. Check out these two excellent posts about it: [1, 2]. An RGBA8 SDL surface (eight bits per channel) is perfectly suitable as an OpenGL texture source.

When you first create the texture with glGenTextures, it is a blank slate. To link it to its content, you first have to tell OpenGL which texture is to be defined. To do that, you use glBindTexture, which simply tells OpenGL: "All the current texture commands should be applied to this (and only this) texture". Then you specify all the details (width, height, type, etc.).

The glTexImage2D function copies your image data from CPU-memory to the GPU-memory dedicated to textures. glTexImage2D replaces the image in the current GL_TEXTURE_2D object. If you do not switch texture objects with glBindTexture (including if you never bind any texture object at all), the next image will replace the current one in texture memory, and the current one will be discarded.

If you are not planning to use the SDL surface dedicated to glTexImage2D for something else, you should free it, or it will leak.
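A rough sketch of such an upload, assuming the surface is already 32-bit with an RGBA byte order that matches what OpenGL expects (the helper name is illustrative):

GLuint uploadSurfaceAsTexture( SDL_Surface * surface )
{
	GLuint textureID ;

	glGenTextures( 1, & textureID ) ;
	glBindTexture( GL_TEXTURE_2D, textureID ) ;

	glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR ) ;
	glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR ) ;

	glPixelStorei( GL_UNPACK_ALIGNMENT, 1 ) ;

	glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0,
		GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels ) ;

	/* The pixel data has been copied to OpenGL, so the surface can be freed
	 * here if it is not needed for anything else: */
	SDL_FreeSurface( surface ) ;

	return textureID ;
}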

More information in the texture section.

The upside-down issue

In texture space, (0,0) is the "beginning" of the texture data fed to glTexImage. (x,0) always belongs to the first row of the texture data, (x,1) belongs to the second row, etc. glTexImage does not have any notion of directions such as up/down/left/right or upper/lower corners. It is only given texture data as a sequence, and the only sensible thing to do with that is to map (0,0) to the beginning of that data.

Indeed, this seems counter-intuitive if you map a rectangle with e.g. corners at (0,0), (1,0), (1,1), (0,1) and set the texture coordinates to the same values as the vertex coordinates.

Some confusion on this seems to stem from some NeHe tutorial, which used a really primitive BMP loader that did not flip the image data (BMPs are stored upside down for some reason), and thus (0,0) of those textures became the lower left corner. In some SDL port of those examples, the texture coordinates had to be flipped, and this was blamed on SDL (or SDL_image), even though it was originally due to the use of a primitive function whose flaws accidentally matched some incorrect assumptions.

If you still like and want this kind of behaviour, you can of course flip the image data manually before uploading it.

In numerous 3D packages and with general OpenGL, the bottom-left texture coordinate convention is used. It matches up nicely with the default OpenGL coordinate system and cartesian coordinates in general. In an imaging sense it seems counter-intuitive, however in a 3-dimensional scene, it is often neater and more consistent to use this convention.

There is an example in test/testgl.c about how to convert your SDL surface to an OpenGL texture. Please note, though, that this example is not perfect, because the texture is constructed upside down. This is because SDL surfaces start at the top-left and OpenGL images start at the bottom-left corner:

o(0,HEIGHT)---(WIDTH,HEIGHT)o
|                           |
|                           |
|                           |
|                           |
o(0,0)-------------(WIDTH,0)o

OpenGL chose a mathematical graph positioning system, whose third coordinate, when increasing, points from its origin in the screen towards the user.

test/testgl.c works around this by using an upside-down orthographic projection to flip the world upside-down so it looks right.

Another solution is to turn the SDL surface upside-down before using it in glTexImage2D, so that the rest of the program can use the normal OpenGL conventions. To do so, one can use a flip function, or directly save the images flipped beforehand.
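A possible flip helper (a sketch only: it assumes an unlocked software surface and omits error handling; the function name is illustrative):

void flipSurfaceVertically( SDL_Surface * s )
{
	int     pitch  = s->pitch ;
	Uint8 * pixels = (Uint8 *) s->pixels ;
	Uint8 * row    = (Uint8 *) malloc( pitch ) ;
	int     y ;

	for ( y = 0 ; y < s->h / 2 ; y++ )
	{
		/* Swap row y with its mirror row: */
		memcpy( row, pixels + y * pitch, pitch ) ;
		memcpy( pixels + y * pitch, pixels + ( s->h - 1 - y ) * pitch, pitch ) ;
		memcpy( pixels + ( s->h - 1 - y ) * pitch, row, pitch ) ;
	}

	free( row ) ;
}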

A third solution would be to call the following once, before you draw any objects with your flipped texture:

  glMatrixMode( GL_TEXTURE ) ;
  glLoadIdentity() ;
  glScalef( 1, -1, 1 ) ;

The matching format issue

One has to ensure that the pixel formats of the SDL surface and of the OpenGL texture match.

To do so, one can create a temporary SDL_Surface with the right OpenGL format, and then blit the original surface onto it, so that OpenGL is always fed with correct data, as in the following example.

More precisely, when using OpenGL, there are three different meanings to the "bits per pixel" thing:

  1. the bpp at which OpenGL displays the graphics. This has no implication on your code except for the graphics initialization with SDL_SetVideoMode (it does affect rendering quality, though).
  2. the bpp at which the textures are stored on the card (third parameter of the glTexImage2D call). This is the format that the texture has while residing in video card memory. Once again, except for changing the third parameter of the call, there is nothing else to do in your code. Changing the parameter only allows you to spare some memory (by switching from an R8G8B8 to an R5G6B5 format, for example). Obviously, this will also have an impact on graphics quality.
  3. the bpp at which you hand the pixels to the glTexImage2D call. That one has implications in your source code. However, it is not related to the first two, and OpenGL can (and will) do the conversions itself if they are needed. To tell glTexImage2D what format you used, you have to change the 7th and 8th parameters. This is the relevant bpp for our texture uploading issue.

Since OpenGL does the conversion, and since if you aim for wide compatibility you only have RGB/RGBA (24 or 32 bpp) surfaces for the third parameter, here is what one can do:

  1. call SDL_CreateRGBSurface to get a 24/32 bpp R8G8B8(A8) surface (do not forget to swap the bitmasks if you are on a big-endian architecture)
  2. blit the original surface onto the 24/32 bpp surface (see the sketch after this list)
  3. create an OpenGL texture from that new surface, using GL_RGB(A) and GL_UNSIGNED_BYTE as the 7th and 8th parameters
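A sketch of steps 1 and 2, assuming a little-endian machine (swap the masks on big-endian ones) and an original surface with no per-surface alpha:

SDL_Surface * converted = SDL_CreateRGBSurface( SDL_SWSURFACE,
	original->w, original->h, 32,
	0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000 ) ;

SDL_BlitSurface( original, NULL, converted, NULL ) ;

/* converted->pixels can now be fed to glTexImage2D with GL_RGBA / GL_UNSIGNED_BYTE. */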

Also, the pixel format of the SDL surface may require you to set options for the transfer with glPixelStore.

glPixelStorei( GL_UNPACK_ALIGNMENT, 1 ) ;
is relatively common, for instance, if you do not know how the pixel data in memory will be aligned. GL_UNPACK_ROW_LENGTH may also need to be set if the pitch of the surface is not the same as the row length.

This has the obvious advantage that this works all the time, on all the OpenGL platforms.

There is one issue with this method though: if your image has an alpha channel, you may lose it during the blit. The example does not have this limitation because it makes use of a colorkeyed source surface.

Using 8-bit palettized textures

If you have 8-bit data originally, it is wasteful to store it as 32-bit in video memory, which is what happens by default, wasting video RAM and video bandwidth.

Depending on the hardware you target, there are multiple ways to handle this efficiently:

Alpha channel & texture

The first solution to overcome the alpha channel issue is to blit the surface yourself, using getpixel/putpixel.

Another solution, for an image that has alpha information, is to first load it normally (ex: with IMG_Load), then convert this surface to the pixel format of the current display via SDL_DisplayFormatAlpha. Call SDL_SetAlpha( thisSurface, 0, 0 ) on this surface, create another surface via SDL_CreateRGBSurface, and blit the first surface onto this newly created one.

This surface is now in RGB order, and still has the alpha channel.
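A rough sketch of that sequence (error handling omitted; the file name and variable names are illustrative):

SDL_Surface * loaded    = IMG_Load( "image.png" ) ;
SDL_Surface * withAlpha = SDL_DisplayFormatAlpha( loaded ) ;

/* Disable per-surface alpha blending so that the alpha channel is copied
 * verbatim during the blit (instead of being used to blend): */
SDL_SetAlpha( withAlpha, 0, 0 ) ;

/* Then blit withAlpha onto a 32-bit RGBA surface created as in the previous sketch. */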

See also the relevant section of the SDL wiki.


[Back to blits between SDL & OpenGL]

OpenGL framebuffer to SDL surface

This is commonly used to generate screenshots. If you want to pull pixels out of the framebuffer with OpenGL, you can use glReadPixels.

To capture the entire screen into a buffer, one might use:

  glPixelStorei( GL_PACK_ROW_LENGTH, 0 ) ;
  glPixelStorei( GL_PACK_ALIGNMENT, 1 ) ;
  glReadPixels( 0, 0, screenWidth, screenHeight, GL_RGB, GL_UNSIGNED_BYTE, someBuffer ) ;

where someBuffer has been previously allocated to hold 3 * screenWidth * screenHeight bytes. You can also use other pixel formats and data types, depending on how you want the output to be formatted.

Note that glPixelStore parameters are part of the OpenGL state, so you may have to save and restore these settings. If you have other OpenGL operations that depend on them being other values, and you have at least OpenGL 1.1, you can wrap the whole thing with gl*ClientAttrib calls:

  glPushClientAttrib( GL_CLIENT_PIXEL_STORE_BIT ) ;
  {
      ...
  }
  glPopClientAttrib() ;

Otherwise one could use:

unsigned int size = width * height * 4 ;
void * pixelData = malloc( size ) ;
memset( pixelData, 0, size ) ;

glBindTexture( GL_TEXTURE_2D, myGLTexture ) ;
glGetTexImage( GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData ) ;
and then use SDL_CreateRGBSurfaceFrom to obtain one's SDL surface.
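For instance (a sketch: the masks assume a little-endian machine with RGBA byte order, and width/height are the texture dimensions used above):

SDL_Surface * capture = SDL_CreateRGBSurfaceFrom( pixelData, width, height,
	/* depth */ 32, /* pitch */ width * 4,
	0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000 ) ;

/* Note: the surface shares pixelData (no copy is made), and OpenGL's bottom-left
 * origin means the image may appear vertically flipped. */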


[Back to blits between SDL & OpenGL]

Clean-up

Images that are bound to texture objects stick around until they are replaced or until the objects are deleted with glDeleteTextures, which only makes sense after creating texture objects with glBindTexture.

OpenGL does not require specific user-defined final clean-up; it is managed directly by SDL_Quit: there is usually some platform-specific OpenGL context destruction that has to be performed, which is handled automatically by the SDL video driver when SDL_Quit is called.

All the texture memory in an OpenGL context is generally freed when the context is destroyed, so it is as though an implicit glDeleteTextures were called on every existing texture object.

Finally, a little demo program shows you how to use SDL to load your textures and then get them onto your 3D card. It also shows how to do multitexturing and blending.


[Back to the table of contents]


Managing textures

The usual sequence for using an OpenGL texture object is approximately:

  1. use glGenTextures to allocate a name for the object
  2. use glBindTexture to switch to it
  3. use glTexImage2D, glTexParameteri, etc. to set up the contents of the object, in most cases from an SDL surface
  4. later, use glBindTexture again to switch to it and have the contents made available for texture-dependent primitives
  5. use glDeleteTextures to erase the texture object when it will no longer be used

The last step should, again, be performed implicitly by the context destruction that happens as part of SDL_Quit.

One can also allocate multiple texture object names at once, or delete multiple texture objects at once.

Texture objects were added as an extension that became part of the core rather quickly (1.1), but the slightly strange semantics remained.

For example, you do not even have to use glGenTextures if you do not want to: in theory, you can just make up your own "texture names", though it is not recommended.

You cannot perform accelerated operations between textures, at least not without relying on extensions that are available only on some platform/driver/hardware combinations.

Loading images

See OpenGL blits.

Textured fonts

SDL_TTF converts strings into bitmaps (SDL surfaces). You can convert those to OpenGL textures (ex: SDL_ConvertSurface), upload them, and then draw a quad on-screen with the textures to draw the text. See: OpenGL blits.

For static text, like "Score:", you could render it once, upload it and use it many times.

For dynamic text, re-rendering the whole text each time might be inefficient, but it is pretty easy to generate a texture for each character in the font and then render any string by combining these textures.

You can avoid using a texture by providing a display list for each character in OpenGL (i.e. each character is drawn with lines and points). Then, you put the display lists into a table and render them on demand, when decoding strings.

NeHe has lessons for 2D and 3D fonts.

More infos can be found in our section dedicated to fonts, including specialized libraries to manage fonts with OpenGL.

Texture location

There is a way to see whether a texture is "resident" (stored in video memory) in OpenGL:

GLboolean glAreTexturesResident( GLsizei n, GLuint * textures, GLboolean * residences )
which returns true if and only if all textures you ask about are resident.

Note that it only checks whether they are currently stored in video memory: it does not tell whether they could be stored in video memory. To increase the chance that a texture will be stored in video memory from the beginning, set its priority to 1:

GLvoid glPrioritizeTextures( GLsizei n, GLuint * textures, GLclampf * priorities )

As you may have noted, this can become a bit more complex than SDL surface handling. What you would do in your application is:

  1. create and set parameters for all GL texture objects
  2. set all priorities to 1
  3. upload all SDL surfaces to them
  4. check whether all texture objects are resident; if not, either decrease their quality (bit depth, alpha channel removal, resolution, etc.) or their number. Note that you have to change the target format of the texture object, not the SDL surface quality. Then redo steps 3 and 4.

Read reference page about texture objects. Here are more details on step 4, changing the "internal format" of a texture.
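A hedged sketch of steps 2 and 4 (TEXTURE_COUNT and the textures array are assumptions standing in for your own texture objects, assumed to have been created in step 1 with glGenTextures):

#define TEXTURE_COUNT 16   /* assumption: number of texture objects in use */

GLuint    textures[ TEXTURE_COUNT ] ;
GLclampf  priorities[ TEXTURE_COUNT ] ;
GLboolean residences[ TEXTURE_COUNT ] ;
GLsizei   i ;

for ( i = 0 ; i < TEXTURE_COUNT ; i++ )
	priorities[ i ] = 1.0f ;

glPrioritizeTextures( TEXTURE_COUNT, textures, priorities ) ;

/* ...upload the SDL surfaces into these texture objects (step 3)... */

if ( ! glAreTexturesResident( TEXTURE_COUNT, textures, residences ) )
{
	/* At least one texture is not resident: lower the internal format
	 * (e.g. GL_RGB5 instead of GL_RGB8) or drop some textures, then retry. */
}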

When you bind a texture, it stays bound until you delete it or your program destroys the OpenGL context (i.e. quits).

The texture data may live in video memory, or it may live in system memory, or it may swap between the two, depending on the OpenGL implementation and the system you are running it on. The implementation/driver should handle all that. In the unlikely event that you get performance problems, you might have to fiddle with the texture priorities to try and get the textures to stick in video RAM (on those systems that store textures in video RAM).

Suppose your application changes its graphics (not many games do - and when they do, they usually accomplish it using some kind of palette change). If you do these changes "in place" (i.e. directly in video memory), then you will clog the graphics bus (whose bandwidth, even with AGP, is quite limited) with all kinds of minor changes to your graphics.

For applications that have to use OpenGL and are constantly mutating their graphics (ex: graphics applied as textures to 3D objects), one should instead keep a local copy of the image in a software surface, modify that copy, then use OpenGL functions to upload the new texture in place of the old one. This is especially true if the modifications require reading from video memory (ex: user-defined alpha-blending), since this is insanely slow.

Using the OpenGL texture uploading routines ensures that the actual transmission of the new image is done in as fast a way as possible. Keeping a local copy of the surface also helps performance because it prevents your application from ever having to read that image data back from video memory, should you need to check those values.

Texture size

OpenGL requires texture width and height to be powers of 2 (not necessarily the same: textures do not have to be square), border excluded. They must therefore have the form 2^m + 2b, where m is a non-negative integer and b, the border, is 0 or 1. For example: width x height = (512+2)x(64+2), or 256x256, etc. Otherwise one may rely on the GL_ARB_texture_non_power_of_two extension.
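A small helper sketch for rounding a dimension up to the next power of two (the function name is illustrative):

int nextPowerOfTwo( int n )
{
	int result = 1 ;

	while ( result < n )
		result *= 2 ;

	return result ;
}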

Some drivers may support only up to 512x512 textures. In that case, a 1024x768 image could be split into several 512x512, and smaller, textures. The smallest maximum texture size an implementation may report is 64x64 [more info]. This limit is driver-dependent; to query it, use:

GLint size ;
glGetIntegerv( GL_MAX_TEXTURE_SIZE, & size ) ;

That does not forbid the creation of small 32x32 OpenGL textures, as any OpenGL implementation is required to support sizes of 64x64 and below at the very least.

OpenGL 1.2 and higher support texture uploading for most pixel formats. In this case, you can upload the texture directly from the original buffer by specifying the correct pixel format (read the glTexImage2D manpage for a description of these formats).

Try to use the glTexSubImage2D call instead of the glTexImage2D call when possible, i.e. when you do not need to redefine the whole texture.

For example, when you have to pad the size to the next power of 2, some OpenGL drivers can take advantage of it, whereas some others upload the full texture again.

You can also use glPixelStore to upload only the relevant part of a surface to a texture, without having to do any surface copying.
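A hedged sketch combining these last two points: the texture is allocated once at a padded power-of-two size, then only the used rectangle is uploaded from the surface (paddedWidth, paddedHeight and a 4-byte-per-pixel surface are assumptions):

/* Allocate texture storage only, no pixel data yet: */
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, paddedWidth, paddedHeight, 0,
	GL_RGBA, GL_UNSIGNED_BYTE, NULL ) ;

/* Tell OpenGL how long a full surface row is, in pixels: */
glPixelStorei( GL_UNPACK_ROW_LENGTH, surface->pitch / 4 ) ;

/* Upload only the used part of the surface into the top-left of the texture: */
glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, surface->w, surface->h,
	GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels ) ;

/* Restore the default unpacking state: */
glPixelStorei( GL_UNPACK_ROW_LENGTH, 0 ) ;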

As a rule of thumb, the less data you need for your texture, the faster the upload. For example, if you only use 8 bpp, try to make use of paletted textures. If you can afford using 15/16 bpp instead of 24 bpp, that is fine too. Also, ATI cards benefit a lot from the reversed BGR pixel format when doing texture uploads. NVidia cards are more tolerant, performance-wise, of the pixel format you use.

If you use the trick from test/testgl.c, the bitmap could be of arbitrary size.

Total size for a game

It all depends on how much graphics one intends to use in, for example, a level of one's game. Of course, if it is a platform game with tons of animations or non-tiled background graphics etc., one might well get into trouble.

What one might do is decide once and for all how much video memory one is "aiming at", like a minimum requirement (8, 32 MB or whatever), and do a little math to check how much graphics one can allow the artists to draw for one level. Keep in mind that the video buffer(s) take up quite a lot of the video memory to begin with (eg. 1024 * 768 * 4 bytes * 2 buffers = 6 291 456 bytes, i.e. 6 MB!).

Tiling

If you are doing filtered scaling, you will need some overlap around the edges for correct filtering, or the edges will become visible. This is where tiling in OpenGL starts to get tough.

Sub-tiling

You can always just load the tile as one big texture, then adjust the texture coordinates to get the section you want. The only problem with this is that if you use bilinear filtering (if the GL_TEXTURE_MAG_FILTER and/or the GL_TEXTURE_MIN_FILTER is GL_LINEAR), then OpenGL will blend the edges of the tile with the edges of adjacent tiles.

But since you are using OpenGL just for faster 2D blitting, you can get away with setting GL_TEXTURE_MAG_FILTER and GL_TEXTURE_MIN_FILTER to GL_NEAREST.
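For the currently bound texture, that amounts to:

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST ) ;
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST ) ;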

Note that most image filtering techniques (ex: magnification filtering, dithering, anti-aliasing, etc.) cause problems with color keys, since they handle the key like any other color. For example, they might produce colors very close to the color key but not equal to it, which results in strange visual artefacts.

The easiest way to avoid these issues is to use an alpha channel instead of a color key in the image. Once in OpenGL, the video buffer uses the alpha coordinate anyway, so there would be no difference at display time.

Then one can make the transparent pixels black as well (RGBA = [0,0,0,0]), or whatever color the borders of the non-transparent areas have.

To show only 54x32 pixels of a 64x32 OpenGL texture, use:

 
glBegin(GL_QUADS); 
	glTexCoord2f(0, 0); 
	glVertex2i(x, y);
	glTexCoord2f((54.00/64.00), 0); 
	glVertex2i(x+w, y); 
	glTexCoord2f((54.00/64.00), 1); 
	glVertex2i(x+w, y+h); 
	glTexCoord2f(0, 1);			 
	glVertex2i(x, y+h); 
glEnd();

Multitexture

There are two ways to use the multitexture extension:

As a last resort, one can apply a correctly alpha-blended texture to a surface more than once, to achieve multitexturing on systems that do not provide it as such.


[Back to the table of contents]


Using SDL & OpenGL for accelerated 2D

If you want to, you can use SDL to get to OpenGL even for purely 2D needs. OpenGL provides fully hardware-accelerated support for stretching, rotating, color tinting, transparency, etc.

In many ways, it is getting to the point where you need to use OpenGL to get good performance even in 2D games... doubly so on the Mac, where you can count on more than enough 3D power to render a couple hundred textured quads with no concern for framerate.

Blits

If you are planning to do some kind of HUD, simply use OpenGL quads to display the elements of the HUD. As a bonus, that will be faster than software drawing.

See also: blits, double buffering

For 2D rendering, most people use glOrtho, which lets you set an orthographic matrix. For example, one can use the following to set full orthographic mode:

glDisable( GL_DEPTH_TEST ) ;
glMatrixMode( GL_PROJECTION ) ;
glLoadIdentity() ;

/*
 * Upside-down square viewport: it maps the screen as if the (arbitrary-set) resolution were
 * 1000x1000 pixels.
 *
 */

glOrtho( /* left */ 0, /* right */ 1000, /* bottom */ 1000, /* top */ 0, /* near */ 0, /* far */ 1 ) ;

or, preferably, to keep a 4/3 ratio like 800x600, 640x480, 1024x768, etc.:

// Non-reversed 4/3 viewport:
glOrtho( /* left */ -320.0f, /* right */ 320.0f, /* bottom */ -240.0f, /* top */ 240.0f, /* near */ -1, /* far */ 1 ) ;

After that, for the first viewport, the following would draw a white 2D rectangle to the screen from (25,150) to (975,450):

glColor4f( 1.0, 1.0, 1.0, 1.0 ) ;
glBindTexture( GL_TEXTURE_2D, textureID ) ;
glBegin( GL_QUADS ) ;
	glTexCoord2f( 1, 1) ; glVertex2i( 975, 150 ) ;
	glTexCoord2f( 1, 0) ; glVertex2i( 975, 450 ) ;
	glTexCoord2f( 0, 0) ; glVertex2i( 25,  450 ) ;
	glTexCoord2f( 0, 1) ; glVertex2i( 25,  150 ) ;
glEnd() ;

To get out of orthographic mode so you can go back to 3D drawing, one can do this for example:

glEnable( GL_DEPTH_TEST ) ;
glMatrixMode( GL_PROJECTION ) ;
glLoadIdentity() ;

gluPerspective( 45.0f, (GLfloat) ScreenWidth/ (GLfloat) ScreenHeight, 3.0f, ZDepth ) ;

Backbuffer issue

A problem with using OpenGL for 2D work is that the backbuffer is undefined after a buffer swap.

Either a blit from front to back occurs, or a page flip (buffers swapped), or a new back buffer is given to you from some arbitrary area of video RAM; in general, you cannot tell which behaviour you are getting (unlike with 2D SDL, where you can).

For a lot of 2D GUI work, most of the screen remains static and at high resolutions it is just not practical to redraw the whole screen every frame, even if you render the scene to textures and just tile them each frame (try this at 1280x1024). Plus, it is a lot of extra work to maintain those textures.

There are two extensions that might be useful: GLX_OML_swap_method for X11, and GL_WIN_swap_hint for Windows.

GLX_OML_swap_method

It allows you to request a specific swap behaviour when you create the GLXFBConfig: copy, swap or do not care.

GL_WIN_swap_hint

GL_WIN_swap_hint allows you to mark certain areas of the back buffer as changed, which can reduce the bandwidth needed to update the scene and, presumably, means that other areas are left untouched, giving you "copy" behaviour.

You can also request the swap behaviour when the visual is created, but the MSDN documentation for glAddSwapHintRectWIN says you should use this GL_WIN_swap_hint extension instead.

Other backbuffer issues

One should allow the user to force a particular behaviour if it turns out it is occurring anyway. For example, an i810 under windowed Linux X11 does copy the buffer, even though it does not advertise the extension.

GUI

Graphical User Interfaces can be built on top of OpenGL. They can be home-made, or already existing libraries.

Our requirements would be an LGPL-style license, an API usable from C++, an easy and standard build on at least Linux (autotools) and Windows (Microsoft Visual Studio), the possibility to use SDL or SDL+OpenGL, a non-native interface look-and-feel, and easy customization. We referenced the following projects:

Qt, GTK+, etc. are not really suitable for games in our opinion. Finally we came to the conclusion that the most interesting libraries for the specified needs were: Agar, CEGUI, Guichan and Gigi, in that order.


[Back to the table of contents]


Buffering

On any properly optimized accelerated OpenGL setup, SwapBuffers is an asynchronous operation, so unless you have caught up with the accelerator and/or page flipping, it usually returns very quickly.

Now, if you call glFinish, you eliminate the chance of CPU/GPU parallel execution, and effectively hard-sync your application with the GPU. That is, when the GPU is working, your CPU is not, and possibly vice versa. The latter happens if your application has a great deal of work to do each frame before it seriously starts pumping polygons.

Flushing the OpenGL command buffer

Use the OpenGL command glFlush or, if you want a blocking flush, use glFinish.

If you do not see any update after some time, you are probably using double-buffered OpenGL. Try the following to get single buffered OpenGL: SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 0 ) ;

Please note that the SDL_GL_DOUBLEBUFFER flag is not, in any way, to be used in SDL_SetVideoMode (it happens to be equal to 5, which would mean SDL_HWSURFACE | SDL_ASYNCBLIT!). Use this flag only with SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 ), before the SDL_SetVideoMode call.

Refresh rate

Windows XP uses 60 Hz as the default OpenGL refresh rate. Setting it to another value is merely a parameter to ChangeDisplaySettings, but there is no easy way to know whether the video card and the monitor support the requested refresh rate, which makes it hardly possible to put into SDL.


[Back to the table of contents]


Double buffering with OpenGL

Double buffering with OpenGL is done via:

SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 ) ;
and flipping is done via SDL_GL_SwapBuffers(): SDL_DOUBLEBUF (for SDL_SetVideoMode), SDL_Flip() and SDL_UpdateRect are only useful for 2D non-OpenGL blitting.

Use glFinish if you want to wait until the buffer swap has actually occurred: called after SDL_GL_SwapBuffers, it will stall the program until the swap is complete.
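A typical frame then looks like this minimal sketch:

glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ) ;

/* ...render the scene with gl* calls... */

SDL_GL_SwapBuffers() ;
glFinish() ;   /* optional: block until the swap has actually completed */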

Disabling vertical synchronization

Vertical retrace syncing for OpenGL was not available in SDL 1.2.9. In more recent versions it can be enabled with SDL_GL_SetAttribute( SDL_GL_SWAP_CONTROL, n ), with n equal to 1. Setting n > 0 causes a buffer swap every nth retrace; setting it to 0 (the default) swaps immediately, as before.

There used to be platform-specific tricks to enable or disable retrace syncing, for example on Windows:

#include <windows.h>

void VSyncOn(char On)
{
  typedef void (APIENTRY * WGLSWAPINTERVALEXT) (int);

  WGLSWAPINTERVALEXT wglSwapIntervalEXT =
    (WGLSWAPINTERVALEXT) wglGetProcAddress("wglSwapIntervalEXT");
  if (wglSwapIntervalEXT)
  {
     wglSwapIntervalEXT(On); // set vertical synchronisation
  }
}

That technique works only when the SDL_OPENGL flag is set. You can try to use SDL_Delay to slow down the framerate.


[Back to the table of contents]


Special buffers

Depth buffer

Using the depth buffer to clear the viewport

If you do not need a lot of depth buffer precision, this (not recommended) trick is to use the range 0 - 0.5 in one frame, then 0.5 - 1 in the next, then 0 - 0.5 again and so on, thus removing the necessity to clear the depth buffer; it also involves switching the depth comparison function from GL_LESS to GL_GREATER with each frame.
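A hedged sketch of that trick, alternating the depth range and the comparison function each frame (the frame-parity flag is illustrative):

static int evenFrame = 0 ;

if ( evenFrame )
{
	glDepthRange( 0.0, 0.5 ) ;
	glDepthFunc( GL_LESS ) ;
}
else
{
	/* Reversed range: nearer fragments get larger depth values. */
	glDepthRange( 1.0, 0.5 ) ;
	glDepthFunc( GL_GREATER ) ;
}
evenFrame = ! evenFrame ;

/* ...render the frame without glClear( GL_DEPTH_BUFFER_BIT )... */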

Stencil buffer

The 16-bit stencil buffer is not widely supported. One may try requesting a 32 bpp mode with an 8-bit stencil instead. Note that 16 bpp modes almost never support a stencil buffer.


[Back to the table of contents]


Switching to fullscreen, changing resolution or color depth

On several platforms, the OpenGL context is destroyed every time SDL_SetVideoMode is called. Depending on the platform, as soon as you change the resolution and/or the color depth and/or toggle fullscreen, you can lose the OpenGL context.

This means that all the GL state is destroyed with it, including textures, among other things. More precisely, on Windows the textures are corrupted, whereas Linux handles it correctly.

To keep the application portable, the reloading of the OpenGL state has to be taken into account. You should therefore free the previous resources (ex: textures) and upload them again.

So you "just" have to reload your textures, restore your viewport and projection matrix, set all states again, etc., and it should work after the switch.


[Back to the table of contents]


Resizing

If glViewport is not called again on resize, the viewport will remain the same despite the window size changing. So, on resize, one could call for example:

SDL_SetVideoMode( newWidth, newHeight, 32, SDL_OPENGL ) ;
glViewport( 0, 0, newWidth, newHeight ) ;

On Mac OS X, sending SDL_VIDEORESIZE events on window resizes is not implemented in SDL 1.2.6.

At each SDL_SetVideoMode call, the window is completely destroyed and a new one is created, centered on the screen, with the effect that the window seems to jump around after resizing.

This is a short piece of code showing how a resize event could be handled.


[Back to the table of contents]


The glSDL backend

This back-end aims at providing an (OpenGL) hardware-accelerated back-end for the SDL API. It uses OpenGL so that SDL blits are accelerated; it is not intended to be used by OpenGL applications, since their modifying the state machine would mess up the OpenGL settings that glSDL relies on.

Running stock applications with the glSDL back-end might be extremely slow. Application tuning for performance in general and for glSDL in particular is described here.


[Back to the table of contents]


Why floating point pixel values?

It is the way OpenGL addresses points on the screen. Rather than using pixel values (which change with the screen resolution), coordinates are expressed as fractions of the viewport. OpenGL then automatically translates them into whatever resolution it is currently running at.

It is a good idea, as it means you can use fonts and they will stay the same size when the resolution increases, only getting sharper, unlike raster fonts (such as in Windows), which get smaller as the screen resolution increases.

It is also more intuitive to think of a point as a percentage of the screen rather than as a certain number of pixels out of a changing total. The only downside is that if the screen does not have a 4:3 aspect ratio, graphics will be distorted, whereas with pixel addressing they will not.

Floating point pixels are good for high resolution colors (color values are not naturally integers, they are more continuous), and allow for better color manipulation calculations, because you do not have to worry so much about saturation/overflow. They make really nice "high dynamic range" calculations possible.


[Back to the table of contents]


How do I determine which shape the user clicked on?

This is called pick correlation, and there are three different methods of doing it:


[Back to the table of contents]


Performances & tuning

Knowing how the rendering takes place in a particular configuration

One could use locate libgl.so to get hints about the OpenGL drivers that may be used (Mesa, video card vendor driver, etc.). Run updatedb first if the locate database is too old.

Tuning driver, features and level of detail

Unfortunately, there is no good way of telling which OpenGL driver is in use, and which features it accelerates in hardware.

Starting from SDL 1.2.10 though, the SDL_GL_ACCELERATED_VISUAL attribute was added to ensure that rendering is hardware-accelerated. For most cases this is enough, but there are certain situations where the drivers will fall back to software rendering. At the moment this happens mostly when using shaders: a lot of different hardware supports shaders in general, but some cases cannot be hardware accelerated.

Long story short, the best option is to test-drive the code: provide some way of benchmarking the current configuration with the features you use, and see if it is fast enough. Make sure that you provide a way for the user to re-configure once drivers have been updated.

OpenGL and DirectX both use the same underlying hardware, so what is possible in DirectX is possible in OpenGL, and is often much simpler in OpenGL.

If one is using OpenGL and cannot achieve a high framerate when rendering a few thousand polygons, then display lists ought to be used. For skeletal animation with weighted vertices and the like, use vertex arrays.

Do not call glFlush() before SDL_GL_SwapBuffers since the latter implies the former.


[Back to the table of contents]


Some OpenGL random hints

Is it better to use the SDL OpenGL headers or directly use the GL headers?
Using the OpenGL headers directly is platform-dependent. SDL_opengl.h is just a thin wrapper that does the right thing on whatever platform you are building on. Using it is the correct way to use OpenGL with current SDL versions, except if you are using an extension loader (ex: GLEW).
How to perform complex vector graphics with OpenGL?
One can use a dedicated library, as explained here.
How to have in OpenGL the same alpha blending conventions as software SDL uses?
Use:
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA ) ;
glEnable( GL_BLEND ) ;
When can one make gl* calls?
One must not make any gl* call before SDL_SetVideoMode.
When using one texture to store multiple sprites, OpenGL smoothes the texels, which tends to blend adjacent sprites together with a faint line at the edges

To turn off this image blending, so that the rendering stays more pixel-accurate, either include a gutter around the tiles (a technique also used with mipmapping), or do not use the transitioning/blending.

If you want a more pixel accurate render technique using OpenGL, try:

  1. change the glTexParameter() from GL_LINEAR to GL_NEAREST
  2. if you do not zoom in/out, do not use mipmaps
  3. try to draw all quads with exactly the same dimensions as the corresponding tile
  4. remember that integer coordinates are located in the center of pixels under orthonormal projection in OpenGL

One may begin by making sure one can draw a 2x2 single-colored quad at a certain pixel location (try at (0,0) and (638,478) if you are in 640x480 mode); then you would know if you got your "pixel correctness". From there it is easy to extend to a whole tile/quad.

How can I change the gamma level of OpenGL rendering?
The SDL gamma functions should work with OpenGL contexts.


[Back to the table of contents]


Troubleshooting

Rendering is slow
See some hints on getting information about the actual video pipeline that is used.
My programs ran perfectly fine using GLUT but were horribly slow using SDL
Apparently the cure could be to call glFlush and glFinish before calling SDL_GL_SwapBuffers. Miraculous recovery, even though SDL supposedly calls those functions internally.
Random crashes, parachute deployed
Try very tiny SDL examples (such as using SDL_Init, SDL_SetVideoMode, then SDL_Quit). If they keep on failing, that might be due to the fact that some X video drivers (ex: NVidia's) need a recent version of XFree86. Upgrading both of them might help.
Orthographic projection with negative view distance not working but the same code works if the view distance is positive
Set ZBufferEnable to false.
For run-time OpenGL library loading, I declared the void (*glBegin) ( GLenum ) and it crashes
The functions pointed to do not use the same calling convention as your program. Just place APIENTRY in front of the function pointers in the GL function declarations: use void ( APIENTRY * glBegin ) ( GLenum ) instead.
Under certain circumstances, the loaded textures, geometry or other OpenGL data are lost

On some platforms (ex: Windows), various events (going to fullscreen, switching to another application, etc.) lead to losing the OpenGL context. For example, you may lose OpenGL textures and/or other OpenGL state data as a result of reopening the OpenGL context. Also, certain backends can lose hardware surfaces at any time, because the operating system steals the VRAM back whenever it wants to; this usually happens when the user switches to another application.

The only work-around is to reload the OpenGL context whenever it gets lost.

When texturing a quad with an RGBA texture, the transparent pixels appear as white instead of not appearing at all
Use GL_REPLACE instead of GL_DECAL:
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE ) ;


[Back to the table of contents]


OpenGL & SDL links

OpenGL documentation

This documentation addresses generic OpenGL concerns, it is not specifically related to SDL.

Documentation on 3D rendering in general


[Back to the table of contents]




Please react!

If you have information that is more detailed or more recent than what is presented in this document, if you noticed errors, omissions or points insufficiently discussed, drop us a line!




[Top]

Last update: Sunday, March 15, 2009