Foreword: most of this information and advice comes from the various members of the SDL mailing list. I hope that this thematic gathering will help SDL users. Many thanks to the many SDL contributors!
This section focuses on helping developers use OpenGL with SDL.
Due to the amount of relevant information, this document is still a work in progress and any help would be appreciated.
General OpenGL information
Availability of OpenGL on a specific device
Loading the OpenGL library
Managing OpenGL extensions
SDL OpenGL flags
Managing OpenGL attributes
Blits between SDL & OpenGL
Using SDL & OpenGL for accelerated 2D
Switching to fullscreen, changing resolution or color depth
The glSDL backend
Why floating point pixel values?
How do I determine which shape the user clicked on?
Performance & tuning
Some OpenGL random hints
OpenGL & SDL links
GL stands for Graphics Library. OpenGL (Open Graphics Library) is the open, cross-platform standard that grew out of SGI's proprietary IRIS GL.
OpenGL 1.1 is somewhat outdated. The problem is that the default Windows OpenGL library does not provide the 2.0 functions; it only supports up to 1.1.
GLU is the OpenGL Utility Library. It is associated with OpenGL (it is usually distributed with it) and provides higher-level constructs built from basic OpenGL operations.
OpenGL by itself does not provide input event handling (ex: mouse), sound, window management, etc.
GLUT (the OpenGL Utility Toolkit, not to be confused with the aforementioned GLU) provides, exactly as SDL does, window management and input handling, because it is designed as a cross-platform utility toolkit for developing OpenGL programs without operating-system-specific code.
For various reasons, including performance and completeness, one might prefer SDL and not use GLUT at all. Stick to GL and GLU and you will be fine, especially if your multimedia application is a game.
One should estimate whether, for a given application, OpenGL is really needed, since one of its side effects is that such applications either will not work at all, or will "run" at slide-show frame rates, on machines without accelerated OpenGL. A first point: if your application indeed relies on real 3D rendering, then OpenGL should be a good choice from the start. Other cases are less clear-cut.
The problem is that with the resolutions and frame rates people expect these days, you need hardware acceleration for many applications. OpenGL is a convenient way of accessing it since, unfortunately, most platforms either lack 2D acceleration that the SDL 2D API can make use of, or lack accelerated 2D APIs altogether.
So, supporting only the SDL 2D API means your high bandwidth application will only run well on the (very few) platforms that provide decent 2D acceleration. If the application is scalable enough (supports low resolutions, simplified effects etc), this may be acceptable - but a 1024x768-only full screen scroller with loads of translucent sprites will probably not be playable at all on most systems, regardless of video hardware.
But supporting only OpenGL means your application will not run at all on platforms without OpenGL. However, if this is that 1024x768 side scroller, that is probably just as well, because if there is no OpenGL, probably there are not enough resources on the platform anyway.
Of course, the best option, if you can afford it in terms of development cost and/or time, is to support both SDL 2D and OpenGL natively. The SDL 2D backend (with appropriate scalability options) will make it run pretty much anywhere, and the OpenGL backend will add insane frame rates and practically unlimited special effect capabilities.
Or you go the easy way: code for SDL 2D and use glSDL (an OpenGL-based backend for SDL) for extra speed where OpenGL is available, since that is exactly its purpose. And, to close the circle, this is why you should not use glSDL and native OpenGL at the same time: if you are relying on OpenGL directly anyway, the whole point of glSDL disappears, and you might as well use the much more powerful OpenGL API for everything.
Finally, we see that OpenGL should be used only after careful examination of the many available alternatives, depending on the needs of your application, notably when it does not rely on full-blown 3D rendering.
Using DirectX and its associated libraries could be an alternative to using OpenGL, but it would remove most of the portability of your application (only targeting PCs running Windows, or the Xbox).
OpenGL is a state machine, which means you can set states and they will remain in a shared context until you change them. For example, if you enable 2D texturing, it will stay enabled until you want to draw, say, solid filled triangles and call glDisable(GL_TEXTURE_2D), or until you want to switch to 1D/3D textures and call the corresponding glEnable.
The same is true for all other OpenGL attributes: colors, materials, lights, matrices, etc.
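A minimal sketch of this enable/draw/disable pattern (assuming a current GL context):

```c
glEnable(GL_TEXTURE_2D);      /* state change: texturing on             */
/* ... draw textured geometry: the state persists across calls ...      */
glDisable(GL_TEXTURE_2D);     /* state change: texturing off again      */
/* ... draw flat, solid-colored geometry ...                            */
```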
OpenGL is not available (at least not in accelerated form) on all platforms (ex: PDAs, 16 bit computers, non-ps2gl Playstation2, pre-3D-boom consoles and some laptops), and not even on all systems based on platforms that do support OpenGL.
The SDL back-ends that support OpenGL are the ones for Windows, X11 and Quartz (Mac OS X), and perhaps whatever Mac OS 9 uses; no others.
Many old Windows machines have video cards and/or drivers without OpenGL support, and the situation is even worse with Linux. For example, on some old Linux workstations, OpenGL cannot be accelerated unless the secondary head is disabled.
Run glxinfo (see below) and look at the visuals listed at the bottom of all the data it dumps out. All the major information about the supported GLX extensions and the OpenGL renderer should be displayed. If you run
glxinfo | grep direct and it prints:
direct rendering: Yes then your OpenGL drivers are set up correctly. On the contrary, if
glxinfo | grep 'OpenGL vendor string' returns something like
OpenGL vendor string: Mesa project: www.mesa3d.org, then you have only software OpenGL.
More precisely, the goal is usually to activate direct (GPU-based) rendering, i.e. to get rid of the software-only rendering reported above.
The first step is to identify your video card, for example thanks to:
lspci | grep VGA, it should output something like
0000:01:00.0 VGA compatible controller: ATI Technologies Inc: Unknown device 5653 or, if your OS database is up-to-date,
0000:01:00.0 VGA compatible controller: ATI Technologies Inc Radeon Mobility X700 (PCIE)
The procedure is then to refer to your distribution guide to set up the proper drivers, libraries, kernel modules, xorg.conf etc., which is not always a piece of cake.
Note that using OpenGL under most Unix systems (except OS X) usually implies using GLX contexts under the X windowing system. Other solutions are to:
Run glxinfo to know which OpenGL pixel formats are supported. Getting Couldn't find matching GLX visual means that GLX cannot find, among the listed video modes, a mode matching the required pixel format. To specify the pixel mode you want, use SDL_GL_SetAttribute.
The OpenGL Hardware Registry is an online database that lists the OpenGL capabilities of a wide range of 3D accelerators. You can select an extension and see which cards/drivers support it.
OpenGL drivers being installed and OpenGL hardware acceleration being available are still unfortunately not guaranteed on all computers under Windows. The user might have to download and install the appropriate drivers.
On most platforms, the OpenGL headers are often not installed by default (check with
locate gl.h for example).
On Linux, have your distribution install the correct
*-dev packages; the Mesa ones are good for that.
On Windows, the OpenGL headers (gl.h and glu.h) and libraries (opengl32.lib and glu32.lib) for Visual C++ are provided with the Microsoft Platform SDK.
In order to compile a program using an OpenGL library, the related headers must be included. Specify them with
#include <GL/gl.h> or, if SDL is used, include the
SDL_opengl.h header, which takes care of it.
Autoconf scripts exist to detect OpenGL [see also the autoconf macro archive, GEM macro]: for OpenGL, for GLU, and for GLUT.
There are two ways to work with OpenGL and SDL: either link with the OpenGL library directly at compile time, or call
SDL_GL_LoadLibrary and then retrieve function pointers for all OpenGL functions, as shown in the SDL test programs.
One cannot mix the two approaches.
The preferred approach is usually to load the OpenGL library "manually", i.e. at run time. That allows your application to load and start even if there is no OpenGL support on the system, so you can use some other API, or give the user a sensible error message: there may not even be an OpenGL library at all!
This approach also makes it possible to add an option to let the user decide what OpenGL library/driver to use.
If you are loading OpenGL at run time, all you need at build time is the OpenGL header. The rest is handled if/when your application actually tries to fire up OpenGL.
SDL has portable support for loading OpenGL function pointers with SDL_GL_GetProcAddress.
SDL_SetVideoMode should be called after
SDL_GL_LoadLibrary and before any call to SDL_GL_GetProcAddress.
See also: extensions for extension loading.
Static linking is pretty much out, unless you count on all users building from source, as some (most?) platforms have driver-specific GL libraries. OpenGL should never be linked at compile time anyway, because you have to check whether the appropriate OpenGL version or extensions are available before using any function that depends on them, and this is impossible to do prior to execution. SDL provides the system-independent
SDL_GL_GetProcAddress() function to load OpenGL functions.
There is a portable way to detect the correct path of the GL library on a system: unless you want to load a specific, nonstandard OpenGL driver, just pass
NULL (0) to
SDL_GL_LoadLibrary and it will load the standard library on whatever platform you are on.
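A minimal sketch of the run-time loading sequence (SDL 1.2 API; clearing the screen stands in for real rendering):

```c
#include "SDL.h"
#include "SDL_opengl.h"

typedef void (APIENTRY *glClear_fn)(GLbitfield);

int main(int argc, char *argv[])
{
    glClear_fn my_glClear;

    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;
    if (SDL_GL_LoadLibrary(NULL) < 0)       /* NULL: platform default GL  */
        return 1;                           /* no OpenGL: fall back here  */
    if (!SDL_SetVideoMode(640, 480, 0, SDL_OPENGL))
        return 1;
    my_glClear = (glClear_fn)SDL_GL_GetProcAddress("glClear");
    if (my_glClear) {
        my_glClear(GL_COLOR_BUFFER_BIT);
        SDL_GL_SwapBuffers();
    }
    SDL_Quit();
    return 0;
}
```

Note the ordering: SDL_GL_LoadLibrary, then SDL_SetVideoMode, then SDL_GL_GetProcAddress.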
On X11, a NULL pointer means using pointers from the application itself if it is linked with libGL; otherwise the default libGL is loaded.
On Win32, a NULL pointer means loading
opengl32.dll. Keep in mind that some Windows machines do not have any traces of OpenGL whatsoever.
One's release could provide the user with two binaries: one with OpenGL support (which will not even load without the OpenGL library, say
opengl32.dll on Windows), and one fall-back binary with no OpenGL being used.
On UNIX environments, a wrapper script could be used to select the appropriate executable, based on the OpenGL libraries found on the user system.
With MinGW, when linking, the library order matters:
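A commonly used order (the exact object files and flags are illustrative; adapt them to your project):

```shell
gcc -o mygame main.o -lmingw32 -lSDLmain -lSDL -lopengl32 -lglu32
```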
To know which extensions are locally available, check the string returned by
glGetString( GL_EXTENSIONS ). To load extensions (functions and constants that only exist in OpenGL 1.2 or higher, or in extensions), one can either call SDL_GL_GetProcAddress directly or use one of the extension managers.
You will have to brew your own extension code using a header file and SDL_GL_GetProcAddress. Just grabbing the functions is not that much work anyway.
Grabbing only the ones you actually use could be nice if you are using functions that may not be provided by all drivers - but then again, a helper library could just wire the stubs to some function that grabs the real function if/when it's hit the first time.
The OpenGL registry lists, among other things, the function prototypes and defines you need to load the appropriate extensions, in case you need a bleeding-edge one that is not in
SDL_opengl.h. One can get both the glext.h and the
glxext.h/wglext.h headers, depending on the OS in use.
You may ease the work by using define directives, mapping each extension function name to a function pointer retrieved at run time.
Another solution would be to include the glext.h header.
This will declare typedefs for all GL extension functions. For example,
PFNGLACTIVETEXTUREARBPROC is a typedef for a pointer to the glActiveTextureARB function.
So, to bind your functions, you may use:
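For instance, sketched with the ARB multitexture extension (assuming a GL context already exists and glext.h or SDL_opengl.h provided the typedef):

```c
PFNGLACTIVETEXTUREARBPROC glActiveTextureARB = NULL;

glActiveTextureARB = (PFNGLACTIVETEXTUREARBPROC)
        SDL_GL_GetProcAddress("glActiveTextureARB");
if (glActiveTextureARB == NULL) {
    /* extension not available: disable the corresponding code path */
}
```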
In all cases, calls to SDL_GL_GetProcAddress must occur after the call to
SDL_SetVideoMode so that the extension works: SDL_GL_GetProcAddress can only be used when an OpenGL context exists, and that context is created by
SDL_SetVideoMode( SDL_OPENGL | ...).
They do the work for you: bloat or useful help?
GLEW, maybe the easiest way to use extensions across all platforms (Windows, GNU/Linux, OS X, FreeBSD, etc.). It has OpenGL 2.0 compatible header files with all the defines and function prototypes, and it loads the function pointers during startup for you. When you include
glew.h, you do not need to include any GL headers, and you should not include them yourself.
If you do choose to load OpenGL at runtime, GLEW will not load it for you (you need the
SDL_GL_LoadLibrary(NULL) behaviour for that [more info]), but the GLEW header does contain the names of the core OpenGL functions, and
glewInit() hooks those up to the loaded library.
A surface is, in effect, in one of three major modes: SDL_SWSURFACE, SDL_HWSURFACE or SDL_OPENGL. When using OpenGL, much of SDL's graphics functionality makes no sense: SDL_HWSURFACE and SDL_SWSURFACE are meaningless, and so is SDL_HWPALETTE.
SDL_BlitSurface cannot be used with OpenGL either.
There has been an internal change to the semantics of the
SDL_OPENGL flag: backends now use
SDL_INTERNALOPENGL to tell the difference between an OpenGL mode and a normal mode. This implies no change for applications, which still use and query the
SDL_OPENGL flag. This flag previously meant two things:
For the glSDL backend, it had to be split. Now
SDL_OPENGL simply means "the application uses an OpenGL window" and SDL_INTERNALOPENGL means "the window is handled by OpenGL".
Do not use the deprecated
SDL_OPENGLBLIT mode, which used to allow both blitting and using OpenGL. It is deprecated for quite a few reasons: under numerous circumstances, using
SDL_OPENGLBLIT can corrupt your OpenGL state.
One can look at the source of SDL (the opengl_blit code is still in there), or at glSDL for sample code. Otherwise one can read the OpenGL documentation.
The point is you set some GL attributes before creating the window with
SDL_SetVideoMode, and it Just Works in SDL.
For example, to get a multisample (FSAA) context, do this:
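A sketch using the SDL 1.2 multisample attributes (the sample count is an arbitrary choice):

```c
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);   /* e.g. 4x FSAA */
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
if (!SDL_SetVideoMode(800, 600, 0, SDL_OPENGL)) {
    /* the requested visual is not available */
}
```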
After the window is created, you can see if you got what you wanted:
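For example (SDL_GL_GetAttribute reports what the created context actually provides):

```c
int buffers = 0, samples = 0;
SDL_GL_GetAttribute(SDL_GL_MULTISAMPLEBUFFERS, &buffers);
SDL_GL_GetAttribute(SDL_GL_MULTISAMPLESAMPLES, &samples);
printf("FSAA: %d buffer(s), %d sample(s)\n", buffers, samples);
```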
You cannot mix the SDL 2D API (notably standard blitting with
SDL_BlitSurface and updaterects routines) with OpenGL. When you use OpenGL, you have to use OpenGL for everything that touches a visible buffer. OpenGL thinks it owns the window, and there is just no reasonable way to convince it otherwise.
Modern hardware wants you to put everything to the video card once and then let the card work with it every frame. In OpenGL, all textures are stored in video memory, whereas in SDL they can be either in system or video memory.
Blitting directly to a screen that has
SDL_OPENGL set is not possible, for example in order to render an overlay help screen: as mentioned earlier, SDL 2D API and OpenGL should not be used at the same time.
Thus if you initialize OpenGL, you are supposed to subsequently draw using OpenGL primitives,
gl* calls. You can however use the SDL 2D API to manipulate images in memory before you hand them off to OpenGL as textures: instead of blitting as usual, upload the surface as a texture [more info], and draw a quad (
GL_QUADS) with this texture.
A less efficient and somewhat different method is to use
glDrawPixels to draw the image directly into the framebuffer. However, even though it is card- and data-size-dependent,
glDrawPixels is deemed slower: creating a texture has been measured to be far faster than using
glDrawPixels on ATI cards, and a bit slower on NVidia cards. So the texture approach looks like a good compromise [more info].
Getting a texture from an SDL surface is trickier than it sounds. Check out these two excellent posts about it: [1, 2]. An RGBA8 (eight bits per channel) SDL surface is then perfectly suitable as an OpenGL texture source.
When you first create the texture with
glGenTextures, it is a blank slate. To link it to its content, you first have to tell OpenGL which texture is to be defined. To do that, you use
glBindTexture. It simply tells OpenGL: "all the current texture commands should be applied to this (and only this) texture". Then you specify all the details (width, height, type, etc.). The
glTexImage2D function copies your image data from CPU memory to the GPU memory dedicated to textures.
glTexImage2D replaces the image in the current
GL_TEXTURE_2D object. If you do not switch texture objects with
glBindTexture (including if you never bind any texture object at all), the next image will replace the current one in texture memory, and the current one will be discarded.
If you are not planning to use the SDL surface passed to
glTexImage2D for something else, you should free it, or it will leak.
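Putting the above together, a sketch (it assumes surface already holds RGBA8888 pixels in the right byte order and has power-of-two dimensions):

```c
GLuint tex;

glGenTextures(1, &tex);                       /* allocate a name       */
glBindTexture(GL_TEXTURE_2D, tex);            /* make it current       */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,       /* upload the pixels     */
             surface->w, surface->h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);
SDL_FreeSurface(surface);                     /* safe once uploaded    */
```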
More information in the texture section.
In texture space, (0,0) is the "beginning" of the texture data fed to glTexImage. (x,0) always belongs to the first row of the texture data, (x,1) belongs to the second row, etc. glTexImage does not have any notion of directions as in up/down/left/right or upper/lower corners. It is only given texture data as a sequence, and the only sensible thing to do with that is to map (0,0) to the beginning of that data.
Indeed, this seems counter-intuitive if you map a rectangle with, e.g., corners in (0,0), (1,0), (1,1), (0,1) and have texture coordinates set to the same values as the vertex coordinates.
Some confusion about this seems to stem from a NeHe tutorial, which used a really primitive BMP loader that did not flip the image data (BMPs are stored upside down for some reason), so (0,0) of those textures became the lower left corner. In some SDL port of those examples, the texture coordinates had to be flipped, and this was blamed on SDL (or SDL_image) even though it was originally due to the use of a primitive function whose flaws accidentally matched some incorrect assumptions.
If you still like and want this kind of behaviour, you can of course flip the image data manually before uploading it.
In numerous 3D packages and with general OpenGL, the bottom-left texture coordinate convention is used. It matches up nicely with the default OpenGL coordinate system and cartesian coordinates in general. In an imaging sense it seems counter-intuitive, however in a 3-dimensional scene, it is often neater and more consistent to use this convention.
There is an example in
test/testgl.c about how to convert your SDL surface to an OpenGL texture. Please note, though, that this example is not perfect, because the texture is constructed upside down. This is because SDL surfaces start at the top-left and OpenGL images start at the bottom-left corner:
OpenGL chose a mathematical, graph-like coordinate system, whose third coordinate, when increasing, comes out of the screen origin towards the user.
test/testgl.c works around this by using an upside-down orthographic projection to flip the world upside-down so it looks right.
Another solution is to turn the SDL surface upside-down before using it in
glTexImage2D, so the rest of the program can use the normal OpenGL conventions. To do so, one can use a flip function, or simply save the images already flipped beforehand.
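As a sketch, a hypothetical in-place flip helper (flip_vertical is not an SDL function; pitch is the size of one pixel row in bytes, as in SDL_Surface->pitch):

```c
#include <string.h>

/* Flip an image buffer upside-down in place. One might run this on a
   locked surface's pixels before handing the data to glTexImage2D. */
static void flip_vertical(unsigned char *pixels, int pitch, int height)
{
    unsigned char tmp[8192];            /* sketch: assumes pitch <= 8192 */
    int y;
    for (y = 0; y < height / 2; ++y) {
        unsigned char *top    = pixels + y * pitch;
        unsigned char *bottom = pixels + (height - 1 - y) * pitch;
        memcpy(tmp, top, pitch);        /* swap row y and its mirror row */
        memcpy(top, bottom, pitch);
        memcpy(bottom, tmp, pitch);
    }
}
```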
A third solution would be to call the following once, before you draw any objects with your flipped texture:
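One way to do that, assuming the fixed-function pipeline, is to flip the texture matrix once:

```c
glMatrixMode(GL_TEXTURE);     /* operate on texture coordinates */
glLoadIdentity();
glScalef(1.0f, -1.0f, 1.0f);  /* mirror the t axis              */
glMatrixMode(GL_MODELVIEW);   /* back to the usual matrix       */
```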
One has to ensure that the pixel format of the SDL surface and the one OpenGL expects match.
To do so, one can create a temporary
SDL_Surface with the right OpenGL-friendly format, and then blit the original surface onto it, so that OpenGL is always fed correct data, as in the following example.
More precisely, when using OpenGL, there are three different meanings to the "bits per pixel" thing:
the R5G6B5 format, for example). Obviously, this will also have an impact on graphics quality.
glTexImage2D call. That one has implications in your source code. However, it is not related to the first two, and OpenGL can (and will) do the conversions itself (at no cost) if they are needed. To tell
glTexImage2D what format you used, you have to change its 7th and 8th parameters (format and type). This is the relevant bpp for our texture uploading issue.
Since OpenGL does the conversion, and since if you aim for wide compatibility you only have RGB/RGBA (24 or 32 bpp) surfaces for the third parameter, here is what one can do:
SDL_CreateRGBSurface using a 24/32 bpp R8G8B8(A8) surface (do not forget to switch the bitmasks if you are on a big-endian architecture)
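The endianness point can be sketched like this (a common idiom; w and h stand for your image size):

```c
/* Masks for an R8G8B8A8 surface laid out in RGBA byte order. */
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
    Uint32 rmask = 0xff000000, gmask = 0x00ff0000,
           bmask = 0x0000ff00, amask = 0x000000ff;
#else
    Uint32 rmask = 0x000000ff, gmask = 0x0000ff00,
           bmask = 0x00ff0000, amask = 0xff000000;
#endif
SDL_Surface *conv = SDL_CreateRGBSurface(SDL_SWSURFACE, w, h, 32,
                                         rmask, gmask, bmask, amask);
```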
Also, the pixel format of the SDL surface may require you to set options for the transfer with glPixelStorei;
GL_UNPACK_ROW_LENGTH may also need to be set if the pitch of the surface does not correspond to the row length.
This has the obvious advantage that this works all the time, on all the OpenGL platforms.
There is one issue with this method though: if your image has an alpha channel, you may lose it during the blit. The example does not have this limitation because it uses a colorkeyed source surface.
If you have 8 bit data originally, it is useless to store it as 32 bit in video memory, which is what happens by default, wasting video RAM and video bandwidth.
Depending on the hardware you target, there are multiple ways to handle this efficiently:
GL_MAX_FRAGMENT_UNIFORM_COMPONENTS = 512, so only 512 components (i.e. floats or ints, etc.) are allowed. So if your image takes more than 256 pixels, you are going over this limit. OpenGL pixel shaders do not work in 8 bit frame buffer mode.
The first solution to overcome the alpha channel issue is to blit the surface yourself, using
Another solution, to load an image having alpha information: first load it normally (ex:
IMG_Load), then call
SDL_SetAlpha( thisSurface, 0, 0 ) on this surface to disable per-surface alpha blending during the blit. Create another surface, via
SDL_CreateRGBSurface, and blit the first surface onto this newly created surface.
This surface is now in RGB order, and still has the alpha channel.
See also the relevant section of the SDL wiki.
glReadPixels is commonly used to generate screenshots: it is the way to pull pixels out of the framebuffer with OpenGL.
To capture the entire screen into a buffer, one might use:
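A sketch (reading RGB bytes starting from the bottom-left corner):

```c
glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* tightly packed rows */
glReadPixels(0, 0, screenWidth, screenHeight,
             GL_RGB, GL_UNSIGNED_BYTE, someBuffer);
```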
someBuffer has been previously allocated to hold
3 * screenWidth * screenHeight bytes. You can also use other pixel formats and data types, depending on how you want the output to be formatted.
glPixelStore parameters are part of the OpenGL state, so you may have to save and restore these settings. If you have other OpenGL operations that depend on them having other values, and you have at least OpenGL 1.1, you can wrap the whole thing with glPushClientAttrib(GL_CLIENT_PIXEL_STORE_BIT) and glPopClientAttrib().
Otherwise one could use
SDL_CreateRGBSurfaceFrom to obtain an SDL surface from that buffer.
Images that are bound to texture objects stick around until they are replaced or until the objects are deleted with
glDeleteTextures, which only makes sense after creating texture objects with glGenTextures.
OpenGL does not require specific user-defined final clean-up; it is managed directly by
SDL_Quit: there is usually platform-specific OpenGL context deconstruction that has to be performed, which is handled automatically by the SDL video driver when
SDL_Quit is called.
All the texture memory in an OpenGL context is generally freed when the context is destroyed, so it is as though an implicit
glDeleteTextures were called on every existing texture object.
Finally, a little demo program shows you how to use SDL to load your textures and then get them onto your 3D card. It also shows how to do multitexturing and blending.
The usual sequence for using an OpenGL texture object is approximately:
glGenTextures to allocate a name for the object
glBindTexture to switch to it
glTexParameteri, etc., to set up the contents of the object, in most cases from an SDL surface
glBindTexture again to switch to it and have the contents made available for texture-dependent primitives
glDeleteTextures to erase the texture object when it will no longer be used
The last step should, again, be performed implicitly by the context destruction that happens as part of SDL_Quit.
One can also allocate multiple texture object names at once, or delete multiple texture objects at once.
Texture objects were added as an extension that became part of the core rather quickly (in 1.1), but the slightly strange semantics remained.
For example, you do not even have to use
glGenTextures if you do not want to: in theory, you can just make up your own "texture names", though it is not recommended.
You cannot perform accelerated operations between textures, at least not without relying on extensions that are available only on some platform/driver/hardware combinations.
See OpenGL blits.
SDL_TTF converts strings into bitmaps (SDL surfaces). You can convert those to OpenGL textures (ex:
SDL_ConvertSurface), upload them, and then draw a quad on-screen with the textures to draw the text. See: OpenGL blits.
For static text, like "Score:", you could render it once, upload it and use it many times.
For dynamic text, re-rendering the whole text each time might be inefficient, but it is pretty easy to generate a texture for each character in the font and then render any string by combining these textures.
You can avoid using a texture by providing a display list for each character in OpenGL (i.e. each character is drawn with lines and points). Then you put the display lists into a table and render them on demand, when decoding strings.
NeHe has lessons for 2D and 3D fonts.
More info can be found in our section dedicated to fonts, including specialized libraries to manage fonts with OpenGL.
There is a way to see whether a texture is "resident" (stored in video memory) in OpenGL:
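For example (a sketch; tex is assumed to be an existing texture name):

```c
GLboolean resident = GL_FALSE;

/* returns GL_TRUE if all listed textures are resident */
if (glAreTexturesResident(1, &tex, &resident) || resident) {
    /* the texture currently lives in video memory */
}
```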
Note that it only checks if they are stored in video memory currently: it does not ask whether they could be stored in video memory. In order to increase the chance that a texture will be stored in video memory from the beginning, set its priority to 1:
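For instance, still assuming tex is an existing texture name:

```c
GLclampf priority = 1.0f;
glPrioritizeTextures(1, &tex, &priority);   /* hint: keep it in VRAM */
```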
As you noted, it may become a bit more complex than SDL surface handling. What you would do in your application is:
Read the reference page about texture objects. Here are more details on step 4, changing the "internal format" of a texture.
When you bind a texture, it stays bound until you delete it or your program destroys the OpenGL context (i.e. quits).
The texture data may live in video memory, or it may live in system memory, or it may swap between the two, depending on the OpenGL implementation and the system you are running it on. The implementation/driver should handle all that. In the unlikely event that you get performance problems, you might have to fiddle with the texture priorities to try and get the textures to stick in video RAM (on those systems that store textures in video RAM).
Suppose your application changes its graphics (not many games do - and when they do, they accomplish it using some kind of palette changes). If you do these changes "in place" (i.e. directly in video memory) then you will clog the graphical bus (whose bandwidth, even with AGP, is very limited) with all kinds of minor changes to your graphics.
For applications that have to use OpenGL and constantly mutate their graphics (ex: graphics applied as textures to 3D objects), one should instead keep a local copy of the image in a software surface, modify that copy, then use OpenGL functions to upload the new texture in place of the old one. This is especially true if the modifications require reading video memory (ex: user-defined alpha-blending), since that is insanely slow.
Using the OpenGL texture uploading routines ensures that the actual transmission of the new image happens in as fast a way as possible. Keeping a local copy of the surface also improves performance because it prevents your system from ever having to read that image data back from video memory, should you need to access those values.
OpenGL requires texture width and height to be powers of 2 (not necessarily the same: textures do not have to be square), border excluded. They must therefore have the form
2^m + 2b, where m is a non-negative integer and b, the border, is 0 or 1. For example: width x height = (512+2)x(64+2), or 256x256, etc. Otherwise one may rely on the
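When padding an image up to a legal size, a small helper like the following is handy (next_pow2 is a hypothetical name, not an SDL or GL function):

```c
/* Round n up to the next power of two, e.g. to pad an image to a
   GL-friendly texture size. */
static unsigned int next_pow2(unsigned int n)
{
    unsigned int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}
```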
Some drivers may support only up to 512x512 textures. In that case, a 1024x768 image could be split into several 512x512, and smaller, textures. The smallest maximum texture size an implementation may report is 64x64 [more info]. This limit is driver-dependent; to query it:
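A query sketch (requires a current GL context):

```c
GLint max_size = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);
/* textures up to max_size x max_size are supported, e.g. 512 */
```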
That does not forbid the creation of small 32x32 OpenGL textures, as any OpenGL implementation is required to support sizes of 64 and lower at the very least.
OpenGL 1.2 and higher support texture uploading for most pixel formats. In this case, you can upload the texture directly from the original buffer by specifying the correct pixel format (read the
glTexImage2D manpage for a description of these formats).
Try to use the
glTexSubImage2D call instead of the
glTexImage2D call when possible, i.e. when you only need to update the contents of an existing texture.
For example, when you have to pad the size to the next power of 2, some OpenGL drivers can take advantage of it, whereas some others upload the full texture again.
You can also use
glPixelStore to upload only the relevant part of a surface to a texture, without having to do any surface copying.
As a rule of thumb, the less data you need for your texture, the faster the upload. For example, if you only need 8 bpp, try to make use of paletted textures. If you can afford using 15/16 bpp instead of 24 bpp, that is fine too. Also, ATI cards benefit a lot from the reversed BGR pixel format when doing texture uploads, while NVidia cards are more tolerant, performance-wise, of the pixel format you use.
If you use the trick from
test/testgl.c, the bitmap could be of arbitrary size.
It all depends on how much graphics one intends to use in, for example, one level of a game. Of course, if it is a platform game with tons of animations or non-tiled background graphics etc., one might well get into trouble.
What one might do is decide once and for all how much video memory to aim at, as a minimum requirement (8 Mb, 32 Mb or whatever), and do a little math to check how much graphics the artists are allowed to draw for one level. Keep in mind that the video buffer(s) take up quite a lot of video memory to begin with (e.g. 1024 * 768 * 4 bytes * 2 buffers = 6 291 456 bytes, 6 Mb!).
If you are doing filtered scaling, you will need some overlap around the edges for correct filtering, or the edges will become visible. This is where tiling in OpenGL starts to get tough.
You can always just load the tile as one big texture, then adjust the texture coordinates to get the section you want. The only problem is that if you use bilinear filtering (if the
GL_TEXTURE_MAG_FILTER and/or the
GL_TEXTURE_MIN_FILTER is set to GL_LINEAR), then OpenGL will blend the edges of the tile with the edges of adjacent tiles.
But since you are using OpenGL just for faster 2D blitting, you can get away with setting both filters to GL_NEAREST.
Note that most image filtering techniques (ex: magnification filtering, dithering, anti-aliasing, etc.) cause problems with color keys, since they handle the key like any other color. For example, they might produce colors very close to the color key but not equal to it, which results in strange visual artefacts.
The easiest way to avoid these issues is to use an alpha channel instead of a color key in the image. Once in OpenGL, the video buffer uses the alpha coordinate anyway, so there would be no difference at display time.
Then one can make the transparent pixels black as well (RGBA = [0,0,0,0]), or give them whatever color the borders of the non-transparent areas have.
To show only 54*32 pixels of a 64*32 OpenGL texture, use:
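A sketch (only the texture coordinates matter here; the quad size and the fixed-function calls are illustrative):

```c
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
glTexCoord2f(0.0f,        0.0f); glVertex2f(0.0f,  0.0f);
glTexCoord2f(54.0f/64.0f, 0.0f); glVertex2f(54.0f, 0.0f);
glTexCoord2f(54.0f/64.0f, 1.0f); glVertex2f(54.0f, 32.0f);
glTexCoord2f(0.0f,        1.0f); glVertex2f(0.0f,  32.0f);
glEnd();
```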
There are two ways to use the multitexture extension: either through the ARB extension functions (glActiveTextureARB), or, since OpenGL 1.3, through the core
glActiveTexture. Trouble is, the Windows platform is still on OpenGL 1.1, so you lose portability to that platform with the core variant.
A last resort is to apply a correctly alpha-blended texture to a surface more than once, to achieve multitexturing on systems that do not provide it as such.
If you want to, you can use SDL to set up OpenGL even for purely 2D needs. OpenGL provides fully hardware-accelerated support for stretching, rotating, color tinting, transparency, etc.
In many ways, it is getting to the point where you need to use OpenGL to get good performance on even 2D games...doubly so on the Mac, where you can guarantee more than enough 3D power to make a couple hundred textured quads render with no concern for framerate.
If you are planning to do some kind of HUD, simply use textured OpenGL quads to display the elements of the HUD. As a bonus, that will be faster than software drawing.
See also: blits, double buffering
For 2D rendering, most people use glOrtho, which sets an orthographic projection matrix covering the window. After that, one can draw, for example, a white 2D rectangle from (25,150) to (975,450) with a single untextured quad. To get out of orthographic mode and go back to 3D drawing, one simply restores the previous projection and modelview matrices.
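This sequence can be sketched as follows; the ortho range (0,0)-(1000,600) is an assumption chosen for illustration, adapt it to your own coordinate system:

```c
/* Enter 2D (orthographic) mode. */
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, 1000.0, 600.0, 0.0, -1.0, 1.0); /* y axis pointing down */
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glDisable(GL_DEPTH_TEST);

/* Draw a white rectangle from (25,150) to (975,450). */
glDisable(GL_TEXTURE_2D);
glColor3f(1.0f, 1.0f, 1.0f);
glBegin(GL_QUADS);
glVertex2f(25.0f, 150.0f);
glVertex2f(975.0f, 150.0f);
glVertex2f(975.0f, 450.0f);
glVertex2f(25.0f, 450.0f);
glEnd();

/* Leave 2D mode: restore the previous matrices and state for 3D drawing. */
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
```

Pushing and popping the matrices makes it cheap to switch between the 3D scene and the 2D overlay every frame.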
A problem with using OpenGL for 2D work is that the backbuffer is undefined after a buffer swap.
Either a blit from front to back occurs, or a page flip (buffers swapped), or a new back buffer is given to you from some arbitrary area of video RAM, and in general, you cannot tell what behaviour you are getting (unlike 2D SDL, where you can tell).
For a lot of 2D GUI work, most of the screen remains static and at high resolutions it is just not practical to redraw the whole screen every frame, even if you render the scene to textures and just tile them each frame (try this at 1280x1024). Plus, it is a lot of extra work to maintain those textures.
There are two extensions that might be useful: GLX_OML_swap_method for X11, and GL_WIN_swap_hint for Windows.
GLX_OML_swap_method allows you to request a specific swap behaviour when you create the GLXFBConfig: copy, swap or do not care.
GL_WIN_swap_hint allows you to mark certain areas of the back buffer as changed, which can reduce the bandwidth needed to update the scene and, presumably, means that other areas are left untouched, giving you "copy" behaviour.
You can also request the swap behaviour when the visual is created, but the MSDN documentation for glAddSwapHintRectWIN says you should use the GL_WIN_swap_hint extension instead.
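A sketch of how GL_WIN_swap_hint could be used (Windows only; the entry point is documented by MSDN, and the typedef matches the one found in wglext.h, but treat the details as an assumption and check for the extension at run time):

```c
/* Load the extension function through SDL rather than wglGetProcAddress. */
typedef void (APIENTRY *PFNGLADDSWAPHINTRECTWINPROC)(GLint x, GLint y,
                                                     GLsizei width,
                                                     GLsizei height);
PFNGLADDSWAPHINTRECTWINPROC glAddSwapHintRectWIN =
    (PFNGLADDSWAPHINTRECTWINPROC) SDL_GL_GetProcAddress("glAddSwapHintRectWIN");

if (glAddSwapHintRectWIN != NULL)
{
    /* Hint that only this rectangle changed since the last frame. */
    glAddSwapHintRectWIN(dirty_x, dirty_y, dirty_w, dirty_h);
}
SDL_GL_SwapBuffers();
```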
One should allow the user to force a particular behaviour if it turns out it is occurring anyway. For example, an i810 under X11 on Linux, in windowed mode, does copy the buffer, even though it does not advertise the extension.
Graphical User Interfaces can be built on top of OpenGL. They can be home-made, or already existing libraries.
Our requirements would be an LGPL-style license, an API usable from C++, an easy and standard build at least on Linux (autotools) and Windows (Microsoft Visual Studio), the possibility to use SDL or SDL+OpenGL, a non-native interface look-and-feel, and easy customization. Qt, GTK+, etc. are not really suitable for games in our opinion. We finally came to the conclusion that the most interesting libraries for these needs were: Agar, CEGUI, Guichan and Gigi, in that order.
On any properly optimized accelerated OpenGL setup, SwapBuffers is an asynchronous operation, so unless you have caught up with the accelerator and/or page flipping, it usually returns very quickly.
Now, if you call glFinish, you eliminate the chance of CPU/GPU parallel execution, and effectively hard-sync your application with the GPU. That is, when the GPU is working, your CPU is not, and possibly vice versa. The latter happens if your application has a great deal of work to do each frame before it seriously starts pumping polygons.
Use the OpenGL command glFlush or, if you want a blocking flush, use glFinish.
If you do not see any update after some time, you are probably using double-buffered OpenGL. Try the following to get single-buffered OpenGL:
SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 0 ) ;
Please note that the SDL_GL_DOUBLEBUFFER flag is not, in any way, to be passed to SDL_SetVideoMode (it happens to be equal to 5, which would mean SDL_HWSURFACE | SDL_ASYNCBLIT!). Use it only with SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1), before the call to SDL_SetVideoMode.
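A minimal sketch of the correct call order:

```c
/* Request a double-buffered OpenGL context first... */
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

/* ...then create the video mode with SDL_OPENGL only; never pass
   SDL_GL_DOUBLEBUFFER to SDL_SetVideoMode. */
SDL_Surface *screen = SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);
```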
Windows XP uses 60 Hz as a default OpenGL refresh rate. Setting it to another value is merely a parameter to ChangeDisplaySettings, but there is no easy way to know whether the video card and the monitor support the requested refresh rate, which makes it nearly impossible to put into SDL.
Double buffering with OpenGL is done via SDL_GL_SwapBuffers; SDL_Flip and SDL_UpdateRect are only useful for 2D, non-OpenGL blitting.
Some implementations call glFinish so that SDL_GL_SwapBuffers waits until the buffer swap has occurred before returning: the program will stall until the actual buffer swap is complete.
Vertical retrace syncing for OpenGL was not enabled in SDL 1.2.9. It could be enabled with
SDL_GL_SetAttribute( SDL_GL_SWAP_CONTROL, n ), with n equal to 1. Setting n > 0 causes a buffer swap every n-th retrace; setting it to 0 (the default) swaps immediately, as before.
There used to be platform-specific tricks to enable or disable retrace syncing, for example the wglSwapIntervalEXT extension function on Windows. That technique works only when the SDL_OPENGL flag is set. You can also try to use SDL_Delay to slow down the framerate.
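A sketch of the Windows-specific trick, assuming the WGL_EXT_swap_control extension is available (the typedef name matches wglext.h, but check for NULL at run time):

```c
/* Load the swap-interval function via SDL's generic loader. */
typedef BOOL (APIENTRY *PFNWGLSWAPINTERVALEXTPROC)(int interval);
PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
    (PFNWGLSWAPINTERVALEXTPROC) SDL_GL_GetProcAddress("wglSwapIntervalEXT");

if (wglSwapIntervalEXT != NULL)
    wglSwapIntervalEXT(1); /* 1 = sync to vertical retrace, 0 = disable */
```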
If you do not need a lot of depth buffer precision, a (not recommended) trick is to use the depth range 0 - 0.5 in one frame, then 0.5 - 1 in the next, then 0 - 0.5 again and so on, thus removing the necessity to clear the depth buffer; this involves switching the depth comparison function from GL_LESS to GL_GREATER with each frame as well.
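The alternation could be sketched like this (a per-frame toggle, shown only to illustrate the idea; again, clearing the depth buffer normally is the recommended approach):

```c
/* Even frames use the lower half of the depth range, odd frames the
   upper half with a reversed mapping, so stale values always lose. */
if (frame_counter % 2 == 0)
{
    glDepthRange(0.0, 0.5);
    glDepthFunc(GL_LESS);    /* nearer fragments win within [0, 0.5] */
}
else
{
    glDepthRange(1.0, 0.5);  /* reversed mapping into [0.5, 1] */
    glDepthFunc(GL_GREATER);
}
/* No glClear(GL_DEPTH_BUFFER_BIT) needed between frames. */
```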
The 16-bit stencil buffer is not widely supported. One may try requesting a 32 bpp mode with an 8-bit stencil instead. Note that 16 bpp modes almost never support a stencil buffer.
On several platforms, the OpenGL context is destroyed every time SDL_SetVideoMode is called. Depending on the platform, as soon as you change resolution and/or color depth and/or toggle fullscreen, you can lose the OpenGL context.
This means that all the GL state is destroyed with it. This includes textures, among other things. More precisely, on Windows the textures are corrupted, whereas Linux handles it correctly.
To keep the code portable, the reloading of the OpenGL state has to be taken into account: you should free the previous resources (ex: textures) and upload them again.
So you "just" have to reload your textures, restore your viewport and projection matrix, set all states again, etc., and it should work after the switch.
If glViewport is not re-called on resize, the viewport will remain the same despite the window size changing. So, on resize, one could for example call glViewport( 0, 0, newWidth, newHeight ).
On Mac OS X, sending SDL_VIDEORESIZE events on window resizes is not implemented in SDL 1.2.6.
At each SDL_SetVideoMode call, the window is completely destroyed and a new one is created, centered on the screen, with the effect that the window seems to jump around after resizing.
This is a short piece of code showing how a resize event could be handled.
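A minimal sketch of such a handler, assuming an SDL 1.2 event loop; reload_textures() is a hypothetical helper standing in for whatever state re-upload your application needs:

```c
SDL_Event event;

while (SDL_PollEvent(&event))
{
    if (event.type == SDL_VIDEORESIZE)
    {
        /* Recreate the mode at the new size; on some platforms this
           destroys the OpenGL context (see above). */
        screen = SDL_SetVideoMode(event.resize.w, event.resize.h,
                                  0, SDL_OPENGL | SDL_RESIZABLE);

        /* Re-adjust the viewport to the new window dimensions. */
        glViewport(0, 0, event.resize.w, event.resize.h);

        reload_textures(); /* hypothetical: re-upload lost GL resources */
    }
}
```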
This back-end aims at providing an (OpenGL) hardware-accelerated back-end for the SDL API. It uses OpenGL so that SDL blits are accelerated; it is not intended to be used by OpenGL applications, since their modifying of the OpenGL state machine would disturb the settings that glSDL relies on.
Running stock applications with the glSDL back-end might be extremely slow. Application tuning for performance in general and for glSDL in particular is described here.
It is the way OpenGL addresses points on the screen: rather than using pixel values (which change with screen resolution), one uses fractional (percentage-like) values, and OpenGL automatically translates them into whatever resolution it is currently running in.
It is a good idea, as it means you can use fonts and they will stay the same size with increased resolution, but get sharper, instead of raster fonts (such as in Windows) which will get smaller as the screen resolution increases.
It is also more intuitive to think of a percentage of the screen for points rather than a certain number of pixels out of a changing total. The only downside is that if the screen does not have a 4:3 aspect ratio, graphics will be distorted, whereas with pixel addressing they will not.
Floating point pixels are good for high resolution colors (color values are not naturally integers, they are more continuous), and allow for better color manipulation calculations, because you do not have to worry so much about saturation/overflow. They make really nice "high dynamic range" calculations possible.
This is called pick correlation, and there are three common ways of doing it: using the OpenGL selection mode (GL_SELECT), rendering each pickable object in a unique color and reading back the pixel under the mouse with glReadPixels (color picking), or casting a ray from the click position into the scene and testing it against the geometry.
One could run locate libgl.so to get hints about which OpenGL drivers may be used (Mesa, video card driver, etc.); run updatedb first if the locate database is too old.
Unfortunately, there is not any good way of telling what OpenGL driver is in use, and what features it accelerates in hardware.
Starting from SDL 1.2.10 though, the SDL_GL_ACCELERATED_VISUAL attribute was added to request hardware-accelerated rendering. For most cases this is enough, but there are certain situations where the drivers will fall back to software rendering; at the moment this happens mostly when using shaders. Much hardware supports shaders in general, but some specific cases cannot be hardware-accelerated.
Long story short, the best option is to test drive the code, to provide some way of benchmarking the current configuration with the features you use, and see if it is fast enough. Make sure that you provide a way for the user to re-configure once drivers have been updated.
OpenGL and DirectX both use the same underlying hardware, so what is possible in DirectX is possible in OpenGL, and is often much simpler in OpenGL.
If one is using OpenGL and cannot achieve a high framerate when rendering a few thousand polygons, then display lists ought to be used. For skeletal animation with weighted vertices and such, use vertex arrays.
Do not call
glFlush() before SDL_GL_SwapBuffers since the latter implies the former.
SDL_opengl.h is just a thin wrapper that does the right thing on whatever platform you are building on. Including it is the correct way to use OpenGL with current SDL versions, except if you are using extension loaders (ex: GLEW).
To turn off this image blending, so that rendering stays pixel accurate, either include a gutter around the tiles (a technique also used with mipmapping), or do not use the transitioning/blending.
If you want a more pixel-accurate rendering technique with OpenGL, one may begin by making sure one can draw a 2x2 single-colored quad at a given pixel location (try at (0,0) and (638,478) in 640x480 mode); then you would know you got your "pixel correctness", and it would be easy to extend to a whole tile/quad.
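A common setup for this test is a one-unit-per-pixel orthographic projection combined with the classic half-pixel (0.375) translation, so that integer vertex coordinates land on pixel centers. A sketch, assuming a 640x480 window:

```c
/* One unit = one pixel, origin at the top-left corner. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 640.0, 480.0, 0.0, -1.0, 1.0);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
/* Classic trick: shift by 0.375 so lines and points rasterize on
   predictable pixel centers. */
glTranslatef(0.375f, 0.375f, 0.0f);

/* A 2x2 red quad with its top-left corner at pixel (0,0). */
glColor3f(1.0f, 0.0f, 0.0f);
glRecti(0, 0, 2, 2);
```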
If nothing shows up, try calling glFlush and glFinish just before SDL_GL_SwapBuffers: some users report a miraculous recovery, whereas SDL supposedly calls those functions internally.
If OpenGL initialization fails, try reducing the program to a minimal test case (SDL_Init, SDL_SetVideoMode, then SDL_Quit). If it keeps on failing, that might be because some X Window drivers (ex: NVIDIA) need a recent version of XFree86; upgrading both of them might help.
If you declare OpenGL function pointers as void (*glBegin) ( GLenum ) and it crashes, the likely cause is a missing APIENTRY calling-convention specifier in front of the function pointers in the GL function declarations: use instead
void ( APIENTRY * glBegin ) ( GLenum ).
On some platforms (ex: Windows), various events (going to fullscreen, switching to another application, etc.) lead to losing the OpenGL context. For example, you may lose OpenGL textures and/or other OpenGL state data as a result of reopening the OpenGL context. Also, certain backends can lose hardware surfaces at any time, because the operating system steals the VRAM back whenever it wants to. It happens usually when the user switches to another application.
The only work-around is to reload the OpenGL context whenever it gets lost.
To have a texture completely replace the incoming fragment color, ignoring lighting and the current color, one can set the texture environment accordingly:
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE ) ;
This documentation addresses generic OpenGL concerns, it is not specifically related to SDL.
If you have more detailed or more recent information than what is presented in this document, or if you noticed errors, omissions or points insufficiently discussed, drop us a line!