The first thing I encountered when displaying an FLTK window and an SDL window at the same time was that SDL_PollEvent() eats the input meant for
the FLTK window; once I stopped calling SDL_PollEvent(), the FLTK input widget behaved correctly. The next problem was that linking against FLTK
prevents SDL_net from working correctly (SDLNet_TCP_Open returned NULL). Maybe because FLTK uses asynchronous sockets.
When I create a surface with 32 bpp, 'aalineRGBA' draws only vertical and horizontal lines (the ticks of a meter); for all other lines only the start and end points appear. On an 8 bpp surface all lines are drawn, but the color is wrong.
When the lines are not axis-aligned, the antialiasing nudges the pixel
colors just a tad away from pure white. And then your use of a nonzero
Amask in SDL_CreateRGBSurface causes SDL_gfx to go for
alpha blending, and without the alpha channel set in your image you
get nothing visible on the image.
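A minimal sketch of the simplest fix, assuming you only need opaque RGB drawing (the size is a placeholder): create the surface without an alpha mask, so SDL_gfx primitives are not blended against alpha = 0 pixels.

/* 32 bpp surface with no alpha channel, so aalineRGBA draws opaque pixels. */
SDL_Surface *canvas = SDL_CreateRGBSurface(SDL_SWSURFACE, 640, 480, 32,
                                           0x00ff0000, 0x0000ff00, 0x000000ff,
                                           0 /* Amask = 0 */);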
After an SDL_Flip, IMHO, it is best to clear the back buffer (via SDL_FillRect), then
redraw every element you wish to show before
presenting it again.
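A minimal sketch of that pattern (running and draw_everything() are placeholders for your own loop and drawing code):

while (running) {
    SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 0, 0, 0)); /* clear the back buffer */
    draw_everything(screen);  /* redraw every visible element */
    SDL_Flip(screen);         /* present; treat the back buffer contents as undefined afterwards */
}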
SDL_ResizeXY (high quality, several algorithms available) or SDL_SoftStretch (lower quality but faster) can be used.
Using SDL_image to load XPM images generated by ImageMagick: this tool may use color names like black, brown, red, etc. instead of hex codes, which the SDL_image XPM loader cannot handle.
It is indeed possible to center the window or even to specify the x,y positions by using the environment vars SDL_VIDEO_WINDOW_POS or SDL_VIDEO_CENTERED.
Under Unices, your process receives a SIGPIPE signal when it tries to write to a closed/invalid socket/file/pipe/file descriptor.
This signal has a default handler, which terminates the process.
You can install your own SIGPIPE handler, or in your case set it to SIG_IGN.
reference to check (depending on your OS):
man signal
man sigaction
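A minimal sketch of the SIG_IGN route (call it once before doing any network I/O):

#include <signal.h>

/* Ignore SIGPIPE so a write to a closed socket returns an error (EPIPE)
   instead of killing the process. */
signal(SIGPIPE, SIG_IGN);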
Timers are designed for timing things, SDL_Delay() is not.
I'd like to pack this data together in one single file that I can load from the game.
http://www.gamedev.net/reference/programming/features/pak/
zlib (which compresses gzip style) has functions
like gzopen, gzclose, gzread and gzwrite, which have exactly the same syntax and
usage as fopen, fclose, fread and fwrite.
You could easily "roll your own" using that, or if you don't want to roll your
own, use physfs! It's basically a library meant to do exactly what you
are trying to do!
http://icculus.org/physfs/
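Here is a minimal sketch of the gz* calls mentioned above (the file name is a placeholder); they mirror the stdio calls almost one for one:

#include <zlib.h>
#include <string.h>

void gz_roundtrip(void)
{
    char out[] = "hello, packed world";
    char in[64];

    gzFile f = gzopen("data.gz", "wb");          /* like fopen(..., "wb") */
    gzwrite(f, out, (unsigned)strlen(out) + 1);  /* like fwrite */
    gzclose(f);

    f = gzopen("data.gz", "rb");                 /* like fopen(..., "rb") */
    gzread(f, in, sizeof(in));                   /* like fread */
    gzclose(f);
}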
I would like to play Macromedia Flash files in my SDL applications.
Look up the Gnash project. It can use OpenGL or SDL to render Flash movies.
Take a look at FLIRT - http://flirt.sourceforge.net
Just wondering if anyone who works for some of the game development
companies has had the chance to port SDL to either PS3 or XB360?
If we did, we would be violating NDAs to post the code, or violating the
LGPL since you wouldn't be able to recompile it without a devkit.
My game crashes from time to time with floating point exception error/sdl-parachute.
Check for division by zero. Even in integer math, it can cause SIGFPE.
When I put the getPixel() into a LockScreen()/UnlockScreen() pair, the
crashes stopped.
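A minimal sketch of the lock/read/unlock pattern, assuming a 32 bpp surface (the function name is arbitrary):

Uint32 getPixel32(SDL_Surface *s, int x, int y)
{
    Uint32 pixel;
    if (SDL_MUSTLOCK(s))
        SDL_LockSurface(s);
    pixel = *(Uint32 *)((Uint8 *)s->pixels + y * s->pitch + x * 4);
    if (SDL_MUSTLOCK(s))
        SDL_UnlockSurface(s);
    return pixel;
}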
Direct rendering applies to OpenGL only. The plain X11 target does not
support 2D hardware surfaces because there's not really such a thing
under X11, unless you count one of the following extensions:
There's DGA, but DGA is an ugly hack that's rarely used nowadays.
Though I think SDL still has a target for it; if you have it compiled
in, you just have to explicitly ask for it (ie. SDL_VIDEODRIVER=dga,
IIRC). Of course you'll probably need superuser rights because it
needs to read/write directly to /dev/mem (yuck!), and it's
fullscreen-only.
There's also Xv, but it's really only useful for video overlays. The
proprietary nvidia driver does provide an RGB overlay though, and
Xmame can use it if available (and the performance results are pretty
good I might add), but I don't think SDL supports it.
And finally, as mentioned many times here, the best way to get fast
2D hardware rendering under X11 is OpenGL.
SDL_stretch is now in SDL itself. It will work source->dest as you
require. The only catch is the surfaces must be of the same bit depth.
The first Rect that you have specifies the clipping rect in the src image
(I think), and the second rect only uses the first two values, as
new_width and new_height (or something like that).
If you remember that newly created surfaces are fully transparent (alpha
channel = 0) and that blitting does not change the destination alpha...
In that case you should remove the SDL_SRCALPHA flag, blit, and restore
the SDL_SRCALPHA flag. I think that does what you want to do. In fact I
have that function already :
static void __copyPixels(SDL_Surface *pSrc, SDL_Surface *pDst, int nX,
                         int nY, int nW, int nH, int nDX, int nDY)
{
    /* Remember the source's blending state, then disable alpha and colorkey
       so the blit copies the pixel data (including alpha) verbatim. */
    int nOrigFlag  = pSrc->flags;
    int nOrigAlpha = pSrc->format->alpha;
    int nOrigCK    = pSrc->format->colorkey;
    SDL_SetAlpha(pSrc, 0, 255);
    SDL_SetColorKey(pSrc, 0, 0);

    SDL_Rect pSrcRect;
    pSrcRect.x = nX;
    pSrcRect.y = nY;
    pSrcRect.w = nW;
    pSrcRect.h = nH;

    SDL_Rect pDstRect;          /* w/h are ignored by SDL_BlitSurface */
    pDstRect.x = nDX;
    pDstRect.y = nDY;
    SDL_BlitSurface(pSrc, &pSrcRect, pDst, &pDstRect);

    /* Restore the original alpha and colorkey settings. */
    if (nOrigFlag & SDL_SRCALPHA)
        SDL_SetAlpha(pSrc, SDL_SRCALPHA, nOrigAlpha);
    if (nOrigFlag & SDL_SRCCOLORKEY)
        SDL_SetColorKey(pSrc, SDL_SRCCOLORKEY, nOrigCK);
}
I'm implementing a server and I need to initialize SDL just for networking. I tested it in a text console and it works, but I have a doubt: if there is no Xorg running on the PC, will the application start?
Just use the Dummy VideoDriver from SDL : export SDL_VIDEODRIVER=dummy
Is there also a Dummy AudioOutput Driver ?
There is the DiskOut Driver, but it would be also good for performance testing and other things to have a Dummy AudioOutput Driver.
export SDL_AUDIODRIVER=disk; export SDL_DISKAUDIOFILE=/dev/null
VMware rocks - I'm testing the build with 6 operating systems using it!
OpenGL screenshot/video capture:
Change the SDL_SwapBuffers() function in SDL, add a glReadPixels() call
to read video memory to a buffer. Write that buffer to frame_XXX.pnm (or
whatever image format you see fit).
Then compress all these files to an avi file using a video encoding
program (for example, mencoder).
Alternatively, you could link your program against ffmpeg (the library used by mencoder/mplayer, among others), and directly output AVI-files. Follow the instructions at: http://www.aeruder.net/software/rpl2mpg.html
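For reference, a hedged sketch of the glReadPixels part (frame numbering and error checking omitted; the window size is passed in by the caller):

#include <stdio.h>
#include <stdlib.h>
#include <GL/gl.h>

/* Read back the current GL frame and write it as a binary .pnm (P6).
   glReadPixels returns rows bottom-up, so they are flipped while writing. */
void save_frame_pnm(const char *path, int width, int height)
{
    unsigned char *buf = malloc((size_t)width * height * 3);
    FILE *out = fopen(path, "wb");
    int y;

    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buf);

    fprintf(out, "P6\n%d %d\n255\n", width, height);
    for (y = height - 1; y >= 0; y--)
        fwrite(buf + (size_t)y * width * 3, 1, (size_t)width * 3, out);

    fclose(out);
    free(buf);
}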
SDL_LoadFunction() is somewhat misnamed, since it can be used to load
data symbols too, on any reasonable platform.
Yes, but you have to know exactly what you're doing if you try this. :)
I believe you have to declare your data dllexport on Windows, for example,
and you'll have a lovely time trying to support C++ symbols from different
compilers.
Is the goal in doing this to make everything available from one binary, or is it just to make it so you don't have to do file i/o and allocation to get at the data?
I'd be inclined to avoid this tactic, but you COULD write a small program to convert a file into a C array:
/* This code hasn't even been compiled, let alone debugged. */
#include <stdio.h>

int main(int argc, char **argv)
{
    int ch;
    printf("unsigned char myfile[] = {\n");
    while ((ch = fgetc(stdin)) != EOF)
        printf("0x%X,\n", ch);
    printf("};\n\n");
    return 0;
}
...then run it as "convertdata < myfile.bmp > myfile.c"
Obviously, that can be cleaned up a little.
However, if you don't mind jumping through some hoops and really just wanted all your data to be contained in one file with your binary, you can save on ugliness and static memory usage by putting a bunch of files in a zipfile and attaching it to the end of the binary...this works on at least Windows and Linux, maybe Mac OS, too. Zipfiles store their table of contents at the end of the file, so this is how self-extracting .EXE files get away with it.
So now the pipeline looks like:
- Build your program in myprogram
- Build your data in mydata.zip
- cat myprogram mydata.zip > finalprogramfile
Verify it works:
- unzip -v finalprogramfile
Now run the program and have it open itself as a zipfile at runtime to access the data.
"have it open itself as a zipfile at runtime" is left as an exercise to the reader. This may be overkill, depending on your needs.
Do not enable RLE acceleration for alpha-channel images.
Keep in mind that depending on exact surface formats involved and on which blitter is chosen to do the conversion from your loaded format to the gScreen format, the full-green pixels (00,ff,00) may get changed to non-full (for example, (00,f8,00)) and the colorkey may undergo a slightly different conversion, which will make it desynchronized with the transparent surface pixels. Since you are already hard-coding transparent green here, it may be a better choice for you to read an already converted pixel from a pre-defined location in the image (i.e. from x=0, y=0) and use that pixel as the colorkey for the converted surface.
With images loaded from files, alpha channel transparency is often a better choice. Alternatively, colorkeys work very well with indexed (8bpp paletted) images, and you would not need to convert indexed images to a target format beforehand -- just blit them to the screen (?) when you need to.
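A hedged sketch of that trick, assuming the pixel at (0,0) is a transparent/background pixel and 'loaded' stands for your source image (the fallback branch just maps full green):

SDL_Surface *conv = SDL_DisplayFormat(loaded);
Uint32 key;
if (SDL_MUSTLOCK(conv)) SDL_LockSurface(conv);
switch (conv->format->BytesPerPixel) {
case 2:  key = *(Uint16 *)conv->pixels; break;   /* already converted pixel at x=0, y=0 */
case 4:  key = *(Uint32 *)conv->pixels; break;
default: key = SDL_MapRGB(conv->format, 0, 255, 0); break;
}
if (SDL_MUSTLOCK(conv)) SDL_UnlockSurface(conv);
SDL_SetColorKey(conv, SDL_SRCCOLORKEY | SDL_RLEACCEL, key);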
fbset -i returns the following :
open /dev/fb0: No such file or directory. You don't have the framebuffer device node. Run this as root: mknod /dev/fb0 c 29 0
open /dev/fb0: No such device: looks like your kernel didn't find the hardware, so SDL can't use it. Run "dmesg" and look for errors
This assumes you aren't using something like devfs or udev and have to make device nodes yourself. Otherwise, it's possible that the kernel drivers didn't find your hardware, etc.
Then it might work; otherwise, let us know whether fbset -i gets any further.
For fade-in/out, one could use SDL_SetGamma(), but that's really ugly, because it changes the gamma of the whole desktop, at least in X11. Use alpha blits instead.
XIO: fatal IO error 0 (Success) on X server ":0.0" after 0 requests (0 known processed) with 0 events remaining. What Does This Mean?
Usually that means you're trying to do multi-threaded graphics which doesn't
usually work well in X11.
you can set the SDL_FB_BROKEN_MODES environment variable to 1, and it'll pretend that the current framebuffer console mode is the only one available.
SDL_image doesn't seem to support HUGE images; it explodes and the parachute is deployed.
Implementing bicubic interpolation ? It's described here very clearly :
http://astronomy.swin.edu.au/~pbourke/colour/bicubic/
Smssdl has a bicubic filter which is a direct implementation of these equations : http://membres.lycos.fr/cyxdown/smssdl/
Also, I think not many people implement the bicubic filter because it's not suitable for any kind of real time processing when done on the CPU and big pictures.
OpenGL: for enlarging, it is bilinear. For shrinking, most hardware is capable of using trilinear filtering between mipmaps.
However, a few cards provide better filtering implementations, like anisotropic filtering, or you can implement your own with
multitexturing as described here: http://www.vrvis.at/via/research/hq-hw-reco/algorithm.html
Depending on the sprite size, using 32 frames might be overkill. So first just try rotating a little bit, and then a little bit more, and look whether
the frames differ enough to be considered two different frames. The next good candidate is 24 frames. Do not use 36, because then you
will miss the 45 degree angle and the sprite can start to wobble near that 45 degree angle.
As a rule of thumb, the number of frames needs to be 8*X to look correct. The bigger the sprites, the higher X needs to be to make
things look good.
The only problem is that it's nearly never quite that simple, if you actually want the game to play the same at any frame rate. Doing speed, acceleration, jerk and stuff right is the easy part. The problem is event handling.
Even in a simple 2D shooter with no physics responses to collision events (ie any collision kills something instantly), you need to calculate exact (*) collision times and handle events in the correct order, or the game logic will be affected (if ever so slightly) by the frame rate.
Basically, with a fixed rate and interpolation, you don't even have to do the game logic right by any technical definition. The game will still play exactly the same regardless of rendering frame rate.
(*) That is, "exact" to some appropriate, well defined granularity. For example, you can round delta times to the nearest millisecond (ie 1000 virtual logic FPS), and implement all calculations so that there will never be any rounding errors that depend on the delta time.
http://www.free-soft.org/guitool/
http://old.picogui.org/scrshots.php
http://aedgui.sourceforge.net/
CEGUI. it works well on top of OpenGL / SDL
http://gameprogrammer.com/gpwiki/Free_-_Game_Libraries
SDL_DisplayFormatAlpha() does this : This function takes a surface and copies it to a new surface of the pixel format and colors of the video framebuffer (if possible), suitable for fast alpha blitting onto the display surface.
That "if possible" makes the difference. In reality, it is only "possible" when the video framebuffer is in 24bpp or 32bpp mode, or at least, that is the current implementation. Currently, the returned surface is always 32bpp, and the reason for such is suitability "for fast alpha blitting onto the display surface", because there are no other fast blitters present. SDL_DisplayFormatAlpha() tries to arrange the RGB channels in the same order as the display surface has them to make blits faster.
Your game logic should be regulated by SDL_GetTicks(). Use it to check how much time has passed
since the last update, then do a game logic update if enough time has elapsed (so if your logic runs at 60 updates per second, update every 1/60 second elapsed since the last update). It is also good practice to pad the time-checking loop with SDL_Delay(10), or else the time-checking loop will make your game use 100% CPU at all times (unless vertical retrace sync is used by the video driver, but as I said before there is no reliable way to know whether it is used or not).
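A minimal sketch of such a loop (running, update_logic() and render() are placeholders for your own code):

Uint32 last = SDL_GetTicks();
const Uint32 step = 1000 / 60;        /* ~16 ms per logic update */
while (running) {
    Uint32 now = SDL_GetTicks();
    while (now - last >= step) {      /* catch up if rendering was slow */
        update_logic();
        last += step;
    }
    render();
    SDL_Delay(10);                    /* don't burn 100% CPU */
}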
Keep in mind that SDL_DisplayFormat() will return a 16bpp surface if your SDL video surface is 16bpp, while SDL_DisplayFormatAlpha() will always
return a 32bpp surface
Since version 6.0, VC++ also has support for debugging versions of malloc/free and new/delete.
If you have unreleased memory blocks, they will be dumped at the end of the execution.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vsdebug/html/_core_using_c_run2dtime_library_debugging_support.asp
Valgrind runs only on Linux/x86 and Linux/AMD64, with experimental support for Linux/PPC32 and some *BSD platforms.
I can highly recommend the Numega/Compuware bounds checker. It may however be out of your price range.
You could try this: http://www.codeproject.com/tools/visualleakdetector.asp
If you're on Windows (possibly/probably other systems too) then you can get Fluid Studio's Memory Manager [1] up and running in a couple minutes.
[1] http://fluidstudios.com/pub/FluidStudios/MemoryManagers/Fluid_Studios_Memory_Manager.zip
Is there some simple SDL-like trick to get drag and drop running, so one can drop some file links from
konqueror/MS-Explorer/etc. into an SDL application? See http://www.libsdl.org/projects/scrap/
Linux Game Tome: http://happypenguin.org/test/search
1 frame per 2 minutes - what could be the problem?
I recently had a report from someone who was trying to use my SDL project and was getting a blank screen. He
updated his video drivers to the latest version and the project worked fine. The same project worked fine
on other machines (including one that was almost identical in spec). The machine used onboard Intel
graphics. Hardware surfaces are only accelerated in fullscreen mode (in Windows at least) and they aren't always the best
alternative.
So I'd maybe check the video drivers if you haven't done so already.
Basically, what happens with the display surface buffer when you Flip() depends on a lot of stuff, such as OS, hardware, driver, video mode and possibly other things.
What's even worse is that you cannot reliably tell what kind of setup you have, since some APIs are basically ignoring the problem and just have the drivers do it any way they like.
Unless you're building turn-key systems, the best you can do is ask the user (ie set up a double buffered display with potentially out of sync buffers, flip some and ask "Do you see any flickering on the screen right now? [Yes/No]"). The easiest way is to work around the problem by assuming that the behavior is undefined and always repaint the whole screen before flipping.
SDL_Delay(0) is equivalent to Unix sched_yield (give up the CPU to other runnable threads, if any) in all SDL-supported backends, Linux included.
cat /proc/driver/nvidia/version
cat /proc/driver/nvidia/cards/0
lsmod | grep nvidia
glxgears
SDL prevents the OS from going to sleep, so that the screensaver does not trigger when a game is running.
On Windows, to re-enable sleeping, add these lines within the WinMessage function in SDL_sysevents.c:
case WM_POWERBROADCAST :
return DefWindowProc(hwnd, msg, wParam, lParam);
(i.e. this message was getting 'eaten' by SDL, preventing normal processing for power/sleep-related stuff.)
If you want to disable this on Mac, chop the following code out of QZ_PumpEvents() in SDL12/src/video/quartz/SDL_QuartzEvents.m ...
/* Update activity every five seconds to prevent screensaver. --ryan. */
static Uint32 screensaverTicks = 0;
Uint32 nowTicks = SDL_GetTicks();
if ((nowTicks - screensaverTicks) > 5000)
{
UpdateSystemActivity(UsrActivity);
screensaverTicks = nowTicks;
}
Choosing where a SDL window should be placed by the window manager :
Do this after SDL_SetVideoMode:
SDL_SysWMinfo info;
SDL_VERSION(&info.version);
int ret = SDL_GetWMInfo(&info);
if (ret > 0) {
#ifdef unix
    if (info.subsystem == SDL_SYSWM_X11) {
        info.info.x11.lock_func();
        XMoveWindow(info.info.x11.display, info.info.x11.wmwindow,
                    m_pos_x, m_pos_y);
        info.info.x11.unlock_func();
    }
#endif
}
You need to link with `sdl-config --libs` and compile with `sdl-config --cflags`,
and the SDL header is "SDL.h", not "SDL/SDL.h". While linking -lSDL and including
"SDL/SDL.h" may work on Linux, neither will work on OS X, for example.
The portable ways - using the sdl-config script and "SDL.h" - are better.
The actual problem in this case is that for Windows, OS X, and some
other platforms, SDL sets up its own main function and does some
preprocessor trickery to turn your main() into SDL_main(), and supply
main() itself. (This is necessary sometimes to do platform-specific
graphics initialization.) SDL's main() is supplied in -lSDL_main, but
on OS X you need other flags: -framework Cocoa -framework OpenGL and
maybe some more. `sdl-config' takes care of all of this for you.
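For example, a typical portable build line looks something like this (adjust the file names to your project):
gcc -o mygame mygame.c `sdl-config --cflags --libs`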
Note: If you want to control the position on the screen when creating a
windowed surface, you may do so by setting the environment variables
"SDL_VIDEO_CENTERED=center" or "SDL_VIDEO_WINDOW_POS=x,y". You can set
them via SDL_putenv.
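A minimal sketch, assuming the variables are set before the window is created (the coordinates are placeholders):

SDL_putenv("SDL_VIDEO_WINDOW_POS=100,100");   /* or: SDL_putenv("SDL_VIDEO_CENTERED=center"); */
/* ... then SDL_Init / SDL_SetVideoMode as usual ... */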
How does an overlay work? The overlaid data is blended in by your graphics hardware and comes from a different location in video memory. When you blit to your screen surface, the pixels of your text do not overwrite the overlay but the pixels "behind" it. What makes this even more confusing is that the overlay is really more of an "underlay", since you still see your blits to the screen. To get around this, either do not use the overlay in the first place, or convert your subtitles into a YUV surface and blit them into your video before sending it to the graphics card.
Alpha blits may alter a colorkeyed target surface: "key colored" pixels of the destination surface may be modified slightly, so they no longer match the colorkey.
Go to http://www.google.com and format your search there by prepending your search query with "site:http://www.devolution.com/pipermail/sdl/" (without the quotes, of course).
Is it somehow possible to combine accelerated rendering using SDL_DisplayYUVOverlay and rendering text using TTF_RenderText_ ?
You would have to convert the font surface to YUV and render them into the overlay data before handing it to SDL. The overhead is probably fairly minimal in this manner. Alternately, you'd have to convert the YUV data to an RGB surface and blit the font surface onto it before blitting it to the screen, but that's probably more expensive since you lose the hardware overlay.
Here's a library that will handle this for you: http://www.libsdl.org/projects/sdlyuvaddon/
Please note that sdlyuvaddon is (somewhat inexplicably) under the GPL license, which is more restrictive than SDL's license...but the techniques are fairly simple and widely available on Google if you have to write your own version.
It seems that using DisplayFormatAlpha when no alpha channel is
available for the surface makes it invisible.
Gabriel on Graphics - http://gabrielongraphics.blogspot.com : this is turning out to be a good intro tutorial on CPU-based rendering.
Actually SDL_WM_ToggleFullScreen is just a bad idea. If you're running in a window, you want to match the pixel format of the desktop. If you're running full-screen, you want to run at the color depth for which your game is optimized, usually with hardware surfaces (currently not available in windowed mode) and/or page flipping. That means you should always use SDL_SetVideoMode to switch between windowed mode and full-screen mode, and you should always reload your graphics afterwards.
SDL_WM_ToggleFullScreen exists because on some very limited platforms, you don't get any hardware surfaces, you don't get page flipping, and you might even be unable to change the color depth of the display. On those (legacy) systems, SDL_WM_ToggleFullScreen saves you a bit of time by allowing you to reuse the display surface and any graphics you have loaded. However, SDL_SetVideoMode is the correct and portable way to toggle full-screen mode.
Hopefully SDL_WM_ToggleFullScreen will be gone entirely in SDL 2.0.
[Linker error] undefined reference to `SDL_main'
1) It has to be int main(int argc, char *argv[]), not int main().
SDL *demands* it take those two parameters.
2) If your main is in a C++ file, chances are your 'main' function is not really called 'main' as far as the linker's concerned, but 1502347xcgxf532189_main@ or some other garbage. C++ mangles the names to prevent overloaded functions from having the same names.
Fortunately, there's a way to tell C++ not to do that. Try:
extern "C" int main(int argc, char *argv[])
instead of
int main()
SDL_main is there to convert things like:
int WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR lpCmdLine, int nCmdShow)
into
int main(int argc, char *argv[])
As well as do initialization that MUST be done before main is called, such as
the crazyass MacOS X carbon/cocoa/aqua/name-of-the-week stuff. It's a bit
convoluted. The "main"-equivalent that gets called first is inside
-lsdlmain, which in turn calls *your* main, which may have been renamed to
SDL_main with a preprocessor definition.
Linux, and UNIX in general, doesn't need an -lSDLmain -- they have a
standards-compliant int main(int argc, char *argv[]) interface already.
I am able to switch to fullscreen mode without a problem. However, after I quit the application my desktop is resized - it is bigger and does not fit on the screen. It looks the same as a virtual desktop - it moves when the mouse cursor reaches a corner.
Ensure you call SDL_Quit() before terminating your application.
The SDL_WINDOWID only works properly under Linux/GTK
There are optimized C and asm blitters for 32bpp->16bpp, but you should use a
surface format with masks R=0x00ff0000, G=0x0000ff00, B=0x000000ff.
So I now make one extra pass after the JPEG decompression where I shuffle the data from 24 to 32 bit and swap the R and B values, which is a *huge* speedup.
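A minimal sketch of a 32 bpp surface created with those masks (w and h are placeholders):

SDL_Surface *buf = SDL_CreateRGBSurface(SDL_SWSURFACE, w, h, 32,
                                        0x00ff0000,  /* R */
                                        0x0000ff00,  /* G */
                                        0x000000ff,  /* B */
                                        0);          /* no alpha channel */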
XXX is the largest game I've worked on. It's about 20 million lines of code with about 50 people working on it.
There are lots of reasons not to use SDL. The lack of a query system
isn't something I have ever heard mentioned. Poor support for random
Joysticks and the latest sound hardware along with poor vendor (MS)
support for OpenGL *are* reasons I have heard.
Also note that the special 128 alpha value applies only to per-surface alpha, not per-pixel. The special 128 value really is a bit faster, if your surfaces only need per-surface transparency across the whole image. If you're trying to get smooth edges on some sprites, for instance, you'll need to stick with the slower per-pixel alpha, or try out the glSDL backend, which I haven't tried so I can't tell you any more than that.
Antialiased sprites are generally mostly (as in 90% or
so) fully transparent and fully opaque pixels. If you enable the RLE
acceleration, the cost of alpha blending only impacts the pixels that
actually need blending. It is generally more efficient to use RLE, provided the surface is not locked/unlocked very frequently (since those operations trigger decompressing/recompressing the whole surface).
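Enabling it is a one-liner; a hedged sketch ('sprite' is a placeholder for a per-pixel-alpha surface):

SDL_SetAlpha(sprite, SDL_SRCALPHA | SDL_RLEACCEL, 255);  /* RLE-accelerated alpha blits */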
SDL_LoadBMP always loads an image from disk into a software surface. The only way to get this image into a hardware surface is to create one (with SDL_CreateRGBSurface) and blit it into it.
On most (all?) platforms, SDL doesn't wait for vsync. However, you might
see bigger performance gains if you either use SDL_UpdateRects() on all
parts you need updated at once, or just do
SDL_Flip()/SDL_UpdateRect(screen,0,0,0,0) once per frame instead.
Colorkey problems
How did you create the image? Take a good look at it in a paint editor and I
suspect you'll find some of the pixels near the black/magenta borders are not
quite matching your color key, but are like 249,0,249 or something like that.
Most editors have an option to anti-alias edges when using various tools.
Looks nice for some things, is barely noticeable on others, but doesn't play
well with color keys. This is a common problem. Paint Shop has a checkbox for
selecting anti-alias; IIRC it defaults to "on" for most tools. Judging from
the frequency of the problem, I'd guess PhotoShop also defaults to AA. Also,
resizing the image in most apps will anti-alias all edges as well.
One way to ensure things stay the way they are supposed to be is use an
indexed color palette and only have black and magenta in it.
Good SDL tutorials : Sol's Graphics for Beginners http://sol.gfxile.net/gp/
X11 doesn't support hw surfaces
If all you need is on/off transparency, you can convert the image to use a color key, and use SDL_SetAlpha in the normal way.
Fading :
If you do need the full alpha channel, you need to create your own alpha mixer. Mixing alpha is pretty easy (or, well, linear mixing is, which technically isn't correct, but it's good enough for this purpose). To create the fade effect you want, you could do something like this:
// (Init)
if (SDL_MUSTLOCK(image)) SDL_LockSurface(image);
for each pixel in image {
alpha_storage_thingy[index_for_this_pixel] = pixel_alpha_value;
}
if (SDL_MUSTLOCK(image)) SDL_UnlockSurface(image);
// (Fade)
if (SDL_MUSTLOCK(image)) SDL_LockSurface(image);
for each pixel in image {
old_alpha = alpha_storage_thingy[index_for_this_pixel];
pixel_alpha_value = old_alpha * fade_alpha / 255;
}
if (SDL_MUSTLOCK(image)) SDL_UnlockSurface(image);
SDL_BlitSurface(image, srcrect, destination, destrect);
The 255 is assuming the alpha is an 8-bit value. Dividing by 256 may be slightly more efficient as that can use a bit shift instead of an actual division, but the result will not be 100% correct -- though again, probably good enough. This should still be true on modern CPUs (saves some registers, etc), although to tell the truth you probably won't notice much difference these days =)
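For reference, a hedged, concrete version of the fade step above for a 32 bpp RGBA surface; orig_alpha is the per-pixel alpha you saved at init time (the "alpha_storage_thingy") and fade is 0..255:

void fade_image(SDL_Surface *image, const Uint8 *orig_alpha, Uint8 fade)
{
    Uint32 amask = image->format->Amask;
    Uint8 ashift = image->format->Ashift;
    int x, y, i = 0;

    if (SDL_MUSTLOCK(image)) SDL_LockSurface(image);
    for (y = 0; y < image->h; y++) {
        Uint32 *row = (Uint32 *)((Uint8 *)image->pixels + y * image->pitch);
        for (x = 0; x < image->w; x++, i++) {
            Uint32 a = (Uint32)orig_alpha[i] * fade / 255;   /* scale the saved alpha */
            row[x] = (row[x] & ~amask) | (a << ashift);
        }
    }
    if (SDL_MUSTLOCK(image)) SDL_UnlockSurface(image);
}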
I'm using SDL to display YUV images, and it works fine on both Linux and Windows for images over a certain size. But on Windows
I get problems with image size 176x144 (QCIF). It works fine on Linux.
Exactly the same source file is used for Linux and Windows.
Large sizes (eg 352x288 and 704x576) work fine on both platforms.
Is there a problem with allocating areas with dimensions not divisible by 32
on Windows?
Sometimes width != image->pitches[0], especially with small sizes:
if (width != m_image->pitches[0]) {
// The width is not equal to the size in the SDL buffers -
// we need to copy a row at a time
to = (uint8_t *)m_image->pixels[0];
from = y;
for (ix = 0; ix < height; ix++) {
memcpy(to, from, width);
to += m_image->pitches[0];
from += width;
}
} else {
// Copy entire Y frame
memcpy(m_image->pixels[0],
y,
bufsize);
}
using a single buffered hardware surface is a bad idea. When you
call update rects you are likely to see the CPU copy the hardware buffer
to the visible buffer by passing it through main memory. That really
kills your performance. Not to mention that most of the operations you
can do to a 2D hardware buffer are not hardware accelerated and so they
must be performed in software across the graphics bus. Another serious
performance killer.
Use a software buffer or use OpenGL.
you can search the list archives through Gmane's web interface at http://news.gmane.org/gmane.comp.lib.sdl
don't ask for a 32 bpp display unless you absolutely need it to
be exactly 32 bpp. If you run that code in windowed mode, or on a
target that can't switch to 32 bpp, you'll get a shadow rendering
buffer that is converted and blitted to the screen when you flip. Not
insanely expensive if you're doing full screen s/w rendering anyway,
but for more normal applications it eliminates any chances of using
h/w accelerated blits.
Use Gimp instead of Photoshop when exporting to .png.
I've found that most of the .png files exported by Photoshop
were incorrect (tested with Mozilla / ACDSee / Galeon / GTKSee).
I opened them in Gimp and 'saved as' --
oops ... all the .png files were now correct.
SDL_WM_ToggleFullScreen is a glorious example of a misdesign. It cannot be implemented anywhere except on X11 (and I hear BeOS might have it too). In any case, Win32 doesn't have it; it does nothing there.
The correct and always working way to switch to fullscreen is to call SDL_SetVideoMode again with SDL_FULLSCREEN in the flags. It will destroy the OpenGL context, though, so you have to reload textures and display lists. Even in fullscreen apps, X11 can't change color depth, and there's a severe performance hit if you are not running in the same depth, because SDL needs to convert depths on the fly through a shadow surface.
24 bpp is theoretically the slowest; try 16 or 32 bpp instead.
With X11, requesting hardware surfaces slows things down most of the time; try software surfaces instead (especially if you are using direct pixel access).
How to move a window with SDL
For X11
SDL_SysWMinfo info ;
Window root ;
Window parent ;
Window * children ;
unsigned int children_count ;
SDL_VERSION( & info.version ) ;
if ( SDL_GetWMInfo(&info) > 0 )
{
if ( info.subsystem == SDL_SYSWM_X11 )
{
XQueryTree( info.info.x11.display, info.info.x11.window,
& root, & parent, & children, & children_count ) ;
info.info.x11.lock_func() ;
XMoveWindow( info.info.x11.display, parent, x, y ) ;
info.info.x11.unlock_func() ;
if ( children )
XFree( children ) ;
}
}
Because SDL 1.x uses 16-bit words for surface width and height, you cannot have a surface wider than 16383 pixels or taller than 65535 pixels. This might be a problem if you are putting many frames into the same surface.
When asking for video support (e.g. SDL_Init(SDL_INIT_VIDEO)), many drivers will switch to a graphics mode of an arbitrary resolution, then possibly change that resolution when you call SDL_SetVideoMode(). The implication is that if you are initializing video, you are using video. If you don't want that, but you want to use some other features of SDL, just don't pass SDL_INIT_VIDEO as a flag to SDL_Init().
There seem to be a couple of unofficial ports of SDL to Windows CE, like Arisme's and Dmitry Yakimov's.
Arisme's:
http://arisme.free.fr/ports/SDL.php
Dmitry's:
http://www.activekitten.com/pbc_download/sdl_wince_0.7.zip
Set the window position on X11 :
You should lock the X display before moving the window (or doing
anything else with X), like this:
int x = 0, y = 0;
SDL_SysWMinfo info;
SDL_VERSION(&info.version);
if (SDL_GetWMInfo(&info) > 0 ) {
if (info.subsystem == SDL_SYSWM_X11) {
XMoveWindow(info.info.x11.display, info.info.x11.window, x, y);
}
}
You might add #if 's for your desired platforms.
if (info.subsystem == SDL_SYSWM_X11) {
info.info.x11.lock_func();
XMoveWindow(info.info.x11.display, info.info.x11.window, x, y);
info.info.x11.unlock_func();
}
For getting the current x and y position you can use XGetWindowAttributes:
{
XWindowAttributes attr;
info.info.x11.lock_func();
XGetWindowAttributes(info.info.x11.display, info.info.x11.window,
&attr);
info.info.x11.unlock_func();
}
Window position is now in attr.x and attr.y.
You should also use wmwindow, not window, as SDL uses 2 windows on X11, one for window management, and one for graphics output.
On some platforms (Windows, Mac OSX), the video thread must also be the
initial (main) thread.
On most linux platforms, this does not need to be the case.
I have some code which toggles windowed <-> full screen and *usually*
works in Linux, Mac and Windows.
What has worked best for me is releasing the video subsystem and re-
initializing it instead of just setting a new video mode. You may want
to try this, it might solve your problem. Just do this before you set
the new video mode:
SDL_QuitSubSystem(SDL_INIT_VIDEO);
SDL_InitSubSystem(SDL_INIT_VIDEO);
Not releasing the video subsystem may be the reason DirectX is staying
in exclusive mode. I think I had problems before I did this too.
SDL_BlitSurface() cannot get access to your screen surface if it's already
locked.
SDL does not support force feedback, nor does the Linux joystick API
Is it true that if you lock a surface in graphics memory, the whole content of this
surface will be copied to system memory and then back to graphics memory
when you unlock it? That is not correct. Locking a hardware buffer does not cause it to be
copied. It just makes sure that it has a valid address so that you can
read/write from it, and makes sure that the address will not change until
you unlock it. Because of multi-buffering the back buffer address can
(and usually does) change from frame to frame.
Only surfaces that are created with the hardware
surface flag will be placed in video memory. And, even with the hardware
flag, they will only be placed in video memory if there is enough memory
free to allow the surface to be allocated in video memory. If you ask
for a hardware surface and it cannot be allocated, SDL will silently
create a software surface for you. You can check the flags in the
surface structure to find out what you actually got.
The most likely reason for a slow down in full screen mode is that you
are getting a hardware buffer in full screen mode and not in windowed
mode. Reading and writing a hardware back buffer is very slow while
reading and writing a software buffer is much faster. The result is that
building up a frame in a hardware back buffer can be very slow compared
to doing it in a software buffer and blitting it to the screen.
Personally, I try to use OpenGL whenever possible. It makes life so much
easier. :-)
apt-get install libsdl1.2-dev to install the SDL development files.
SDL does not require root access to read controls, you just need access to the devices in /dev/input/
I recommend using readelf instead. The problem with ldd is that it will also list the dependencies of SDL itself.
readelf -d will list the library dependencies of SDL
you request HWSURFACEs, you lock them, you manipulate the pixels, you
unlock them.
Locking hw-surfaces means that SDL has to copy them from video
memory to system memory so that someone can access the pixel data. This is
very, very slow. If you start your application in windowed mode, SDL will
most likely not give you any hw-surfaces, SDL_LockSurface() does
nothing, and your app does not suffer from this slowdown. Try requesting
SWSURFACEs instead. There is one rule of thumb with SDL: use
SW-surfaces when manipulating the pixel data directly and if you do
alpha (read: per-pixel alpha) blits.
A blue rectangle is usually the sign of a non working overlay.
Have you tried initializing SDL with the SDL_EVENTTHREAD flag?
Don't use 'ldd'! This is a common mistake that almost everybody makes. ldd prints the entire tree of dependencies. In other words: ldd also shows the dependencies of the dependencies.
The correct way to find out what an ELF binary *really* needs is by using the following command:
objdump -p libfoo.so | grep NEEDED
Is it possible to create two video outputs from a single SDL program?
The short answer is "no" -- SDL only supports one window at the moment. The long answer usually comes down to this: create two programs, one a master and the other a slave, and communicated what should be drawn from the master to the slave (ex : fork and pipe, sockets, etc.)
How does one go about uninstalling SDL?
It should just be the standard approach; with a properly configured tree: make uninstall
Distributing an application :
resource files, the configuration and highscore files and the executable :
- Windows : common base directory. The game then uses relative paths to access the
various resource files.
- Linux distribution (the executable to /usr/games, the resources to /usr/share/games/mygame, and I guess the
config and highscore files to ~/.mygame?).
Do it yourself or see http://www.maccormack.net/~djm/fnkdat/
the prefs :
- AmigaOS/MorphOS : the game dir.
- Windows : the game dir, but I'm told this won't work everywhere. Under
2000/XP, I think it should go into "C:\Documents and
Settings\user\yourgame"... There's probably an OS call to determine
that.
- MacOS X : ~/Library/Preferences/YourGame (YourGame can be a dir or a
file). It seems that you cannot use "~" in fopen(), so use
getenv("HOME") instead. (still got to try that one... couldn't find a
working svn client to checkout my sources and try that today.)
- Linux : ~/.mygame (file or dir, doesn't matter)
You have to pass the SDL_INIT_NOPARACHUTE flag when you call SDL_Init() to allow your debugger to work properly.
If you link your program as a GUI application, no console window is shown, and whatever you write to stdout/stderr simply goes nowhere. Which IMO is the solution that would be the best default.
You need to catch the CTRL+C signal, so it doesn't just try to abort your application without cleaning up.
You can use signal() from signal.h to wire signals to callback functions - but DO NOT try to clean up and exit from within the context of such a callback! That will most definitely cause all h*ll to break loose, regardless of platform. Just set a flag, post an SDL event or something, and have the main loop respond to that by cleaning up and exiting.
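A hedged sketch of the flag approach (handler and variable names are arbitrary):

#include <signal.h>
#include "SDL.h"

static volatile sig_atomic_t quit_requested = 0;

static void handle_sigint(int sig)
{
    (void)sig;
    quit_requested = 1;   /* only set a flag here; never clean up inside the handler */
}

/* during init: */
signal(SIGINT, handle_sigint);

/* in the main loop: */
if (quit_requested) {
    SDL_Event ev;
    ev.type = SDL_QUIT;
    SDL_PushEvent(&ev);   /* let the normal SDL_QUIT path do the cleanup */
}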
With SDL_WM_SetCaption("Title window", NULL); you set the window title, and with
SDL_WM_SetIcon(surface, NULL); you set the icon.
A useful hint: do not assume anything about the video buffer returned by SDL_Flip; erase it each frame and rebuild the picture.
What Flip() does is what happens to be the best option on the platform at hand, which includes
(in order of preference, for "normal" applications):
1. True h/w page flip; ie "change a pointer"
2. Hardware accelerated back->front blit
3. Software back->front blit
(And then there's glSDL, which doesn't really behave like any of those
if you look at it through the SDL API.)
While one can of course just decide on one behavior to be strictly
required, and emulate it on all backends that do not comply by
nature, the problem is that applications have different needs, so one
way of emulating the "correct" Flip() behavior is not optimal for all
applications.
Just because *some* applications don't care what's in the buffer after
a flip doesn't mean it's safe to just assume that's what all SDL apps
want. Similarly, if you assume that all applications expect the back
buffer to be unaffected by Flip(), you have to hardwire all backends
that do true page flipping to use a shadow surface as rendering back
buffer.
I think the only useful way to handle it is to allow applications to
find out how Flip() works, and then decide how to deal with it.
Applications that repaint the whole screen every frame can just
ignore the problem. "Smart update" applications (using dirty rects or
other forms of incremental updating) will have to adjust accordingly.
As it is, all you can do is to assume the "worst reasonable" case. For
example, you can assume that the display is always double buffered
with true page flipping. That means one has to merge the dirty rects
for the current and previous frame for each update - which is of
course a waste of cycles and bandwidth on a backend that "flips" by
means of back->front blits. More seriously, it won't even work on a
triple buffered page flipping display, or one of these weird things
that thrash the back buffer (doing in-place pixel format conversion,
I assume) when flipping.
The API to solve this would be simple:
int SDL_GetFlipCycleCount(SDL_Surface *screen);
which returns the number of Flip()s you need to do to get back to the
buffer you started with, 0 for "infinity", if the buffer is
irreversibly thrashed, or -1 if the behavior is unknown. With a
back->front blit backend, it would return 1, whereas it would return
the number of pages (2 is the only option currently, AFAIK) for a
page flipped display.
However, the bad news is that this call would have to return -1
("behavior unknown") most of the time, since the behavior is
undefined with some APIs. (OpenGL would be one example.) At best, the
call could sometimes avoid asking the user if the rendering looks ok,
but you'd still need that logic if you rely on page flipping
behavior.
Need an alpha channel in your BMP files?
Have a greyscale bitmap with the alpha information, and a normal bitmap.
Load the two surfaces and create a third with alpha which combines the two.
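A hedged sketch of the combine step, assuming both BMPs have the same size and the file names are placeholders:

SDL_Surface *load_bmp_with_alpha(const char *rgb_path, const char *grey_path)
{
    SDL_Surface *rgb    = SDL_LoadBMP(rgb_path);
    SDL_Surface *grey   = SDL_LoadBMP(grey_path);
    SDL_Surface *out    = SDL_CreateRGBSurface(SDL_SWSURFACE, rgb->w, rgb->h, 32,
                                               0x00ff0000, 0x0000ff00, 0x000000ff,
                                               0xff000000);
    SDL_Surface *grey32 = SDL_ConvertSurface(grey, out->format, SDL_SWSURFACE);
    int x, y;

    SDL_BlitSurface(rgb, NULL, out, NULL);   /* copies RGB; destination alpha becomes opaque */

    if (SDL_MUSTLOCK(out)) SDL_LockSurface(out);
    for (y = 0; y < out->h; y++) {
        Uint32 *dst = (Uint32 *)((Uint8 *)out->pixels    + y * out->pitch);
        Uint32 *src = (Uint32 *)((Uint8 *)grey32->pixels + y * grey32->pitch);
        for (x = 0; x < out->w; x++) {
            Uint32 a = (src[x] & 0x00ff0000) >> 16;   /* grey value -> alpha */
            dst[x] = (dst[x] & 0x00ffffff) | (a << 24);
        }
    }
    if (SDL_MUSTLOCK(out)) SDL_UnlockSurface(out);

    SDL_FreeSurface(grey32);
    SDL_FreeSurface(grey);
    SDL_FreeSurface(rgb);
    SDL_SetAlpha(out, SDL_SRCALPHA, 255);             /* enable per-pixel alpha blits */
    return out;
}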
There is an SDL binding for Java : http://sourceforge.net/projects/sdljava/
Usually laptop computers work with a fixed resolution, which causes the driver to resize the image to show it correctly - usually you notice that as the image not being as sharp as if it was rendered in the usual desktop resolution. This may be one source of problems, as the resizing requires some time to process and may reduce your frame rate.
I use an G3 iBook with a Radeon 7500 Mobility, with resolution fixed at 1024x768. My game was running at about 30fps when using SDL surfaces thanks to the image resizing done by the system. Changing my renderer to OpenGL made all the difference in the world, as the game can run as fast as 300fps on the same machine if I don't keep track of the vertical retrace (sad that it's a TFT screen, which sucks for 2D scrolling games).
Laptops usually support only *ONE* actual video resolution (and lower ones with scaling), associated with the video display. So if you are using a 640x480x32bpp surface and your display's resolution is 1024x768, the video driver will have to convert that from 640x480 to 1024x768 BEFORE showing it properly on the screen - and we are talking software conversion here. This must be making your application run slower - it happened on my iBook, so I know this problem quite well. Running really fast on the desktop and at about 33fps on the notebook.
BTW, have you ever tried your code on a desktop system?
To see if this is happening to you, try making your game screen the same as the usual display resolution (don't worry about your code, just change the screen size), if it runs faster (I don't expect a lot faster, as the screen will be quite large now), you may be better off with OpenGL, which is blazing fast and work really well with systems limited to a single display resolution. I use it and it has solved my issues.
TFT displays are usually connected to a hardware scaler to avoid this problem; otherwise you would not see anything on the screen when the BIOS initializes the PC hardware - the PC BIOS only supports VGA resolution, meaning 640x480. Also, the OS does not know anything about the fixed resolution of the TFT. Either you get a proper (but maybe ugly) scaled display, or you will see the infamous letterbox thing on your screen when you use a resolution other than the display's native one.
To get a software scaled display you have to code that explicitly. There
are solutions out there, though (gl-scale backend patch).
Besides that, one point that bothers me in your code is that you use SDL ttf to draw text. If you are using a hardware surface for the game, this may be another performance hit. SDL ttf is great for software surfaces, but blitting the text directly to a hardware surface may impact performance, so if the previous test doesn't help, try commenting the text rendering in your game to see if there's any change.
That is true for TFT displays for desktop computers (which can cost half the price of a whole notebook). But to save costs on notebooks, the motherboard is custom made and they can do whatever they want with the boot screen to fit the TFT resolution, which can then be much simpler, avoiding hardware rescalers which increase prices. On a Mac notebook I am 100% sure that this happens (I have one and faced this myself): the system has only one resolution and all others are rescaled by the system. And it slows down the performance of full screen SDL surfaces a lot on my iBook 900. Changing from 2D SDL to OpenGL was a change from 33fps to over 500fps.
I have a Dell Inspiron 5150 with a Sharp screen (1600x1200) and an ATI M9 64MB video card. In the BIOS of the notebook there is a Resize option. If this option is enabled, all lower resolutions will be scaled in hardware to cover the whole screen, with no performance hit detected. If the option is disabled, the lower resolutions are centered on the screen with a black frame around them, the BIOS screen too. I don't know if the resize is done by the video card or something else attached to the screen, but it's fast.
Not to mention that recently someone was discussing here a problem with
a game that was running small in the center of a notebook's display,
when someone mentioned that the problem was the video driver
configuration that was not configured correctly to stretch resolutions
smaller than the notebook fixed TFT display.
Note that RGBA->RGBA blits (with SDL_SRCALPHA set) keep the alpha of the destination surface. This means that you cannot compose two arbitrary RGBA surfaces this way and get the result you would expect from "overlaying" them; the destination alpha will work as a mask.
If you want alpha to be copied from the source surface, disable alpha on the source, then blit. You can then re-enable alpha on that surface if you want to.
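A minimal sketch (src and dst are placeholders for your RGBA surfaces):

SDL_SetAlpha(src, 0, 255);               /* turn off SDL_SRCALPHA on the source */
SDL_BlitSurface(src, NULL, dst, NULL);   /* the RGBA data, alpha included, is copied as-is */
SDL_SetAlpha(src, SDL_SRCALPHA, 255);    /* restore blending for later blits */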
If you are working with textures anyway, you might use glTexSubImage2D;
see http://www.fh-wedel.de/~ko/OpenGLDoc/gl/texsubimage2d.html.
Another possibility is to write your own alpha blitter function, or use
an existing one. I know I've used one from the pygame source code before.
A game object will have to do all the things your subsystems currently do. It will have to know how to draw itself, how to make noises, animate, bounce around (physics), attack or retreat (ai), etc. So I would think, instead of scaling on subsystems, you really would rather want to scale on self-contained game objects that know how to do everything that your subsystems would do for it. Or at the very least, know how to calculate its specific data so that it can be handed off to the subsystems to do it.
Game objects are usually built as state machines, but sometimes they are built as communicating objects, each with its own thread. One of the first Game Programming Gems books has a super lightweight cooperative thread
package designed to support that model of programming. This approach goes by the name of "communicating sequential processes" (CSP).
So you would generate a collection of these objects, and then split the collection into parts and assign each part to a CPU. So now we're trying to scale on objects.
But then what? Somehow we need a method for signaling the objects themselves so we can get input into the system. And then objects need to interact with each other, so they need a method for signaling themselves.
It is called an event queue. Unlike the kind of event queue you deal with in GUI programming this one is a priority queue sorted by the time the event is supposed to occur. An object can send itself a message that has a future time stamp. The message will not be received until that time. So, for example, a fire ball can send itself a message to update its position 0.1 seconds in the future and keep doing it until it finally hits something.
BTW, collision detection is done by each object using something like an oct-tree or a BSP tree. Each object has the responsibility of updating its location in the shared tree. When an object detects a collision it sends a message to the object that it hit. That message is most likely delivered through the same message queue as other messages, but it is delivered with a time stamp that tells when the collision occurred. That means, BTW, that an object can get messages from the past.
In a complex game, routing messages gets complex.
I know something about game objects from reading and tinkering a bit in Torque. Once you get more cpu's than you have high level tasks (subsystems), you cannot scale any further on those tasks. So you must go to finer grained concurrency. Choosing self contained game objects to scale on was probably an independent, though clearly not original, thought.
I figured game objects did most of it. I didn't know how self contained they were.
I was envisioning a thread per cpu, each handling collections of objects. Hopefully it would minimize context switching, which might be a problem if you started scaling to really massive numbers of objects. Which is the goal, right? Maybe it doesn't matter. A thread per object does
seem to be elegant.
I want to include SDL, SDL_mixer and SDL_image in the directory of my game and link with these libs instead of the libs in /usr/lib/. I have tried the -rpath ./libs thing when linking, but ld returns an error message "./libs unrecognized file format" (IIRC).
There is also the $LD_LIBRARY_PATH thing, but I don't like this solution and AFAIK it requires an absolute path.
Then I tried linking with -L./libs -lSDL etc., but that doesn't work and the exe is linked with /usr/lib/libSDL-1.2.so etc. even though I have copied libSDL.so in ./libs/ !
Try using LD_PRELOAD="./libs/libSDL.so ./libs/libSDL_mixer.so ./libs/libSDL_image.so"
do "man ld.so" for more info on the LD_* env vars.
in a script to run the game, you might name the game something like game.exe (I know it's not a win32 exe file!)
and then you can name the script "game", and do a chmod a+x game
the script would be something like:
--------------snip--------------------
#!/bin/sh
EXE=./game.exe
cd "`dirname \"$0\"`"
if [ ! -x "$EXE" ] ; then
FULLEXE="`which \"$EXE\"`"
cd "`dirname \"$FULLEXE\"`"
fi
if [ ! -x "$EXE" ] ; then
echo "Cannot find where $EXE is located!" >&2
exit 1
fi
export LD_PRELOAD="./libs/libSDL.so ./libs/libSDL_mixer.so ./libs/libSDL_image.so"
exec "$EXE" ${1:+"$@"}
--------------snip--------------------
and that's it...
change the "game.exe" on line 2 to the real name of the binary.
this script works for "./game", "/path/to/game", or just "game" when it's found by the shell through $PATH .
you can check out any release of unreal tournament for linux, as they use a startup script too.
LD_LIBRARY_PATH doesn't require an absolute path, no, and it is the best option you have. The main gotcha is that rpath overrides it. Since libSDL.so has an rpath set by default, and sdl-config adds a rpath as well, you'll have to modify the SDL Makefiles to link SDL without the rpath (there's no configure option to disable it, unfortunately), link and install it, modify the sdl-config script to remove the rpath, and relink your app.
Can anyone recommend a format that supports transparency (with an alpha channel)?
You can, in fact, use BMPs. I split my 24 bit PNGs into a 8 bit BMP
colormap (or JPEG, depending on the case) and a 4 bit BMP alpha channel
and save them without RLE compression. They are LZW compressed by the
installer maker anyway. Then, on read, you read both BMPs and combine
them into a new SDL_Surface.
It works great for us! Saves a lot of space (compared with 24 bit PNGs
with alpha), it's better than 8 bit PNGs with alpha (because 8 bit PNGs
only support 1 bit alpha) and the load process is quite fast.
Can sdl play MPEG ?
Look at smpeg : http://www.lokigames.com/development/smpeg.php3, http://icculus.org/smpeg/ SMPEG does not decode MPEG2, just MPEG1. Try ffmpeg: http://ffmpeg.sourceforge.net/. mplayer is a program that decodes with ffmpeg (among other libraries) and renders via SDL (among other libraries).
SMPEG_play returns immediately; it doesn't block until the movie
finishes playing. So do this:
SMPEG_play(mpeg);
while (SMPEG_status(mpeg) == SMPEG_PLAYING)
{
SDL_Event event;
while (SDL_PollEvent(&event)) {/* handle event if you want. */}
SDL_Delay(100); /* sleep; smpeg runs in another thread. */
}
SMPEG_delete(mpeg);
With Gimp, you might be able to decompose an image with alpha transparency
so that you get the non-transparent part (save as BMP and use literally)
and the alpha part (as a greyscale layer; save also as a BMP and use as
your alpha channel).
I've never done it, and haven't ever played with Decompose->Alpha in Gimp,
but it might be worth checking out.
(And, obviously, for GIF-like transparency, where it's not an alpha level,
but simply an on-off, you can save the BMP with a special color that you
use for your color key. 0xFF00FF (magenta(?)), for example)
you can make too some simple scripts using ImageMagick and/or NetPBM to split the alpha and the colormap in a batch way. I have this integrated with my art pipeline.
Hint: watch out when using STL vectors -- at least the implementation
for MSVC6 is buggy IIRC, and not thread safe. I found a memory
deallocation problem when using vector::insert() some months ago
DirectFB is great for fullscreen Linux applications; the API is not as
simple as SDL but much more powerful (i.e. it supports per-surface AND per-
pixel alpha at the same time, ...). Keep in mind that hardware
acceleration is only available for a limited number of gfx chips. I
tested it on Unichrome, Matrox and Rage128, and I heard that nvidia
support is catching up.
On fullscreen, instead of having a 640x480 resolution, the display is 1024x768 (the desktop resolution) with the SDL window of size 640x480 centered in the middle.
Some graphics drivers (newer NVIDIA drivers for example) have an option to control how the lower resolution screen is displayed on TFT monitors (which are usually made for one factory resolution). Usually this should be set to 'monitor scaling' in the gfx driver options, but you may have the 'centered output' option checked, which causes the driver to use the native resolution and draw the smaller screen inside it.
For the few cases where it isn't, there is a technique called "Dirty rects", which you might try googling for. But it's a whole lot more complicated than just repainting everything, and usually not worth the effort.
I'd say it's *definitely* worth the effort (at least if you care one
bit about smooth animation), unless you have a constantly scrolling
background.
Without h/w acceleration, frame rates quickly drop way below 50 FPS
with 24 or 32 bit color and 800x600+ resolutions - but unless you
have an extremely crowded screen (tons of sprites), you can up that
to the hundreds or thousands range, pretty much regardless of
resolution and video backend.
(Of course, this is "free-running" FPS! If you can get retrace sync'ed
page flipping, you should use it whenever you are not benchmarking.)
- I want to use SDL to display the video in an already created Qt window
- See http://www.libsdl.org/cvs/qtSDL.tar.gz
- How do I scale an image to the window?
- You can use zoomSurface() from the SDL_gfx additional library, or use the glScale back-end,
or you can try it yourself; 2x zoom is pretty simple.
Some GUI toolkits that run on fbdev : MiniGUI, GTK+, QT/Embedded etc.
Is SDL capable of using the framebuffer?
Yes, and DirectFB as well. (And IIRC, GGI, which also supports fbdev - but there is no point in adding another layer, of course.)
You will have to disable the SDL parachute for debuggers to work properly.
http://www.linuxdevcenter.com/pub/a/linux/2003/08/07/sdl_anim.html?page=2 :
Copying and swapping both get the next frame on the screen. You only care about the difference if you are doing incremental updates of the frames. If SDL_Flip() is copying buffers, the back buffer always has a copy of the last frame that was drawn. If SDL_Flip() is doing page swapping, the back buffer usually contains the next-to-last frame. I say usually because double buffering can be implemented using a hidden third buffer to reduce the time spent waiting for the buffer swap to happen. You can find out what kind of swapping is being done by watching the value of the back buffer pointer (screen->pixels in hardware.cpp) to see if it changes and how many different values it has. If it never changes, then SDL_Flip() is copying the pixels. If it toggles back and forth between two values, then page swapping is being used.
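A hedged way to check this yourself ('screen' is your SDL_SetVideoMode surface): log the back buffer address across a few flips; a constant address suggests copying, two alternating addresses suggest page flipping.

int i;
for (i = 0; i < 4; i++) {
    printf("back buffer at %p\n", screen->pixels);
    SDL_Flip(screen);
}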
Redirection of standard input/output
Recompile SDL, giving it the compile-time "--disable-stdio-redirect" option.
On Win32, you can also copy the src/main/win32/SDL_main.c file into your project and then you will no longer need to link with SDLmain.lib, and you can change the way the I/O redirection works.
The width and height of SDL_SetVideoMode are the dimensions of the exact client area : they do not include the window frames nor any title bar.
I need to load a lot of images, which use too much memory space once loaded:
152 images, 35 kB each, but in memory they are much bigger because of image
decompression and byte alignment.
My idea is to read all the image data compressed in png format in memory
and load the images from memory instead of disk.
Try the following:
- Load the file into memory. Try the stdio.h functions, if you are using
C or C++.
- Create an SDL_RWops structure with SDL_RWFromMem:
http://www.libsdl.org/cgi/docwiki.cgi/SDL_5fRWops
- Use the SDL_image library to load your images:
http://jcatki.no-ip.org/SDL_image/
Basically, IMG_Load_RW, IMG_LoadTyped_RW or IMG_LoadPNG_RW should
work.
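A sketch of those three steps, assuming SDL_image is available (load_png_from_memory is my own name; error checking is trimmed for brevity):
#include <stdio.h>
#include <stdlib.h>
#include "SDL.h"
#include "SDL_image.h"

SDL_Surface *load_png_from_memory(const char *path)
{
    FILE *f = fopen(path, "rb");
    long size;
    void *buf;
    SDL_RWops *rw;
    SDL_Surface *img;

    if (!f)
        return NULL;
    fseek(f, 0, SEEK_END);
    size = ftell(f);
    fseek(f, 0, SEEK_SET);
    buf = malloc(size);
    fread(buf, 1, size, f);              /* the compressed PNG now lives in RAM */
    fclose(f);

    rw = SDL_RWFromMem(buf, (int)size);
    img = IMG_LoadPNG_RW(rw);            /* or IMG_Load_RW(rw, 0) for autodetection */
    SDL_FreeRW(rw);
    free(buf);                           /* the decoded surface owns its own pixels */
    return img;
}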
Any data can be embedded in the program, so that there is only one file.
See : http://delirare.com/files/bin2c.c
It has an embedded makefile, so you can run "make -f bin2c.c" to build it. Run bin2c without any
arguments for usage info.
The program can be used for any type of file, not just images, but it's probably a good idea to only use it for small-ish files. I've used it for icons and such.
Or, the GIMP (gimp.org) already has support for exporting to C files.
Instead of using unsigned chars, it uses normal character literals
escaped as needed. Or use the convert utility with a makefile :
my_image.png.o : my_image.png.c
my_image.png.c : my_image.png
convert $^ -o $@ --export=C
I am handling the SDL_VIDEORESIZE event like:
------
if(event.type == SDL_VIDEORESIZE)
{
    int width = event.resize.w;
    int height = event.resize.h;
    float scale_width = float(width)/800;   // game logic set for
    float scale_height = float(height)/600; // 800x600 resolution

    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, height, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glScalef(scale_width, scale_height, 1.0f);
}
It has been said more than once by developers from both ATI and NVidia that
this is a bad idea. It might have been useful in older generation graphics
cards, but modern graphics cards are highly optimized for this kind of
thing (fast clear etc.), so you are wasting a lot of depth buffer precision
for a laughable gain.
Besides, as others have pointed out, most real games won't need color clear
at all because they always render a background, and the depth clear is well
covered by advanced Z buffering tricks done by the hardware. Don't try to
be too clever, you'll actually make the driver's life harder.
I wrote this function that I use to "dye" surfaces different colors.
Works with all color depths I believe (focused on above 32bit). This
gives a blending effect by the offset values (RGB) that you give to the
function. It's fairly fast and could probably be used in a lot of
applications. I have found it very useful myself. It probably needs a
little tweaking... but I dunno. Seen a few people asking around on the
net for something like this.
======================================================================
void BlitRect_ColorChange_to_dest(SDL_Surface *src, SDL_Rect rect,
                                  int r, int g, int b,
                                  int x, int y, SDL_Surface *dest)
{
    int x2 = rect.x;
    int y2 = rect.y;
    SDL_Rect rect2;
    SDL_Color clr;
    SDL_Color colorkey;
    Uint32 col = 0;
    Uint32 col2 = 0;
    char *gpos = NULL;
    SDL_Surface *temp;

    rect2.x = x;
    rect2.y = y;

    /* Work on a copy so the original surface is left untouched. */
    temp = SDL_ConvertSurface(src, src->format, src->flags);
    SDL_SetColorKey(temp, SDL_SRCCOLORKEY, src->format->colorkey);
    SDL_GetRGB(src->format->colorkey, src->format,
               &colorkey.r, &colorkey.g, &colorkey.b);

    SDL_LockSurface(temp);
    while(y2 < rect.y + rect.h)
    {
        x2 = rect.x;
        while(x2 < rect.x + rect.w)
        {
            /* Address of the current pixel. */
            gpos = (char *)temp->pixels;
            gpos += (temp->pitch * y2);
            gpos += (temp->format->BytesPerPixel * x2);
            memcpy(&col, gpos, temp->format->BytesPerPixel);
            SDL_GetRGB(col, temp->format, &clr.r, &clr.g, &clr.b);

            /* Skip colorkeyed (transparent) pixels; clamp the rest to 0..255. */
            if(clr.r != colorkey.r || clr.g != colorkey.g || clr.b != colorkey.b)
            {
                if(clr.r + r > 255)      clr.r = 255;
                else if(clr.r + r < 0)   clr.r = 0;
                else                     clr.r += r;

                if(clr.g + g > 255)      clr.g = 255;
                else if(clr.g + g < 0)   clr.g = 0;
                else                     clr.g += g;

                if(clr.b + b > 255)      clr.b = 255;
                else if(clr.b + b < 0)   clr.b = 0;
                else                     clr.b += b;

                col2 = SDL_MapRGB(temp->format, clr.r, clr.g, clr.b);
                memcpy(gpos, &col2, temp->format->BytesPerPixel);
            }
            x2++;
        }
        y2++;
    }
    SDL_UnlockSurface(temp);

    SDL_BlitSurface(temp, &rect, dest, &rect2);
    SDL_FreeSurface(temp);
}
=========================================================================
-- John Josef Wyled Crazed Monkeys Inc www.crazedmonkeys.com
DOOM used fixed point arithmetic, but modern
processors (486DX and above) do floating point math very well, so there
is no need to deal with fixed point arithmetic.
Well, this kind of stuff would definitely still be useful in the PDA and
cellphone programming realms (where small screens, lack of OpenGL / accel 3D
graphics, and no FPU are still the standard;
There are still some niches where fixed point is valuable. Whenever you need
to store some vector that is supposed to have equally distributed precision
across its entire range, and you are not doing seriously heavy math with
this vector, you might want to consider fixed point.
In one 3D environment, I was reaching the range limits of the 32 bit float.
Well, not exactly the range limit, but once you are too far from 0 (i.e. the
world origin), the precision got so bad that round-off errors in simulation
were unacceptable. There were two choices: Use double, which takes twice
the memory, or store the actual coordinates as 32 bit fixed point. I chose
the latter, and it works perfectly fine. Note that I still use float during
the actual simulation, but that works out fine because during the
simulation, only small (difference) vectors will be used.
To give a more well-known real-life example, Half-Life sends player
coordinates as fixed point across the network. This is obviously done in
order to save space while guaranteeing that the precision at which player
coordinates are transferred is everywhere the same.
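For reference, a minimal 16.16 fixed point sketch (my own illustration, using SDL's sized integer types, not code from either game mentioned above):
typedef Sint32 fixed;                      /* 16.16: high 16 bits integer part, low 16 bits fraction */

#define FIX_ONE         (1 << 16)
#define INT_TO_FIX(i)   ((fixed)((i) << 16))
#define FLOAT_TO_FIX(f) ((fixed)((f) * FIX_ONE))
#define FIX_TO_FLOAT(x) ((float)(x) / FIX_ONE)

/* Multiplication needs a 64 bit intermediate so the fractional bits don't overflow. */
static fixed fix_mul(fixed a, fixed b)
{
    return (fixed)(((Sint64)a * b) >> 16);
}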
I get the following error:
*** glibc detected *** double free or corruption: 0x08e33fe0 ***
Aborted
And the program is aborted.
Recent Linux distributions include a small memcheck tool in the malloc routines (which is good if you ask me), and this tool found a bug. I advise using a real memory checker (valgrind is the best you can get) and fixing your code.
To have mirrored images (ex : up, down, left, right, north-east, north etc.), one of the best solutions is to load all source images, reflect them accordingly to generate the other orientations during game start-up, and then use the precomputed images.
How to change the size of viewport without change of logical size of software ?
Use glscale backend :
http://icps.u-strasbg.fr/~marchesin/sdl/sdl_glscale.patch
Don't forget to set SDL_VIDEODRIVER=glscale and if you want to specify the size, set SDL_VIDEODRIVER_GLSCALE_X and SDL_VIDEODRIVER_GLSCALE_Y to the size you want. It scales by a factor of 2 by default.
Two hardware surfaces plus a third software surface for rendering could be called, instead of triple buffering (since the extra buffer is different from the page flipping pair), "semitriple buffering", to distinguish it from "normal" triple buffering, which uses three actual hardware pages to cut the application and/or rendering hardware some extra slack.
If you just use SDL_SWSURFACE, you can no longer make use of hardware page flipping and/or retrace sync for tearing free animation.
To get the best of both worlds, you can ask for a double buffered
hardware surface, and if you get one (not always possible; check
after init), set up a third software surface for rendering. If you
don't get a hardware surface, just use the (shadow) surface that SDL
provides; it's the best you can get under such circumstances.
Keep your alpha blended areas minimal and use SDL's RLE acceleration.
"Clean" the alpha channels so that nearly opaque areas become opaque and nearly transparent areas become fully transparent. (You may have your artist do that, or you can add a filter in your loader.)
Render with reasonable amounts of alpha blending (mainly around the edges of objects, for antialiasing)
Pig uses a software back buffer only to avoid doing software alpha
blending directly in VRAM. A sloppy performance hack, that is. One
should check for hardware accelerated alpha blending before deciding
to use an extra software buffer.
Sam : I have always assumed that if you request and get SDL_DOUBLEBUFFER then you
have exactly two framebuffers and the flip will swap between them,
synchronizing with vertical retrace if possible.
if screen->flags & SDL_DOUBLEBUFFER, then there are 2 buffers and they are copied (not swapped) on SDL_Flip, on the most widespread platforms
(directx/x11).
The behaviour of SDL_flip is never undefined. It's one of :
- the back surface holds the n-2 frame (flip-style double buffering)
- the back surface holds the n-1 frame (copy-style double buffering or
shadow surface)
- the back surface contains garbage left from some conversions done
before the actual flip. (Flip or copy style, dealing with a pixel
format that is unsupported by the RAMDAC or whatever.)
It's just that you have no way to know in which case you are.
Fixed Rate Pig assumes that there are two buffers and page flipping;
that's why there are two sets of dirtyrects. That still works with a
copying implementation; it just restores a few pixels here and there
that wouldn't have to be touched if we could tell that the backend
uses copying "flips".
That is less problematic with OpenGL, since you are probably doing a glClear on every frame that
you update anyway.
The COPYING file in the SDL source directory has the "true" license for the code.
You don't have to include source or object code, as long as you link to SDL as a DLL.
Basically, all the license requires in that respect is that a user can drop in a modified version
of SDL and have the program use that instead (subject to binary compatibility limitations).
As for the Library/Lesser GPL issue, the Lesser GPL acts like a later version of the Library GPL.
This means that, if a program is released under the Library GPL, you can choose to go by the terms of
the Lesser GPL instead.
The only source code you *have* to publish is whatever changes you make
to SDL itself.
You can ship SDL.dll in a proprietary product, as long as there is a written offer to provide its source. I guess pointing to libsdl.org in a README is enough.
To search the mailing list archives: either download all the gzipped archives and search them yourself,
or use a search engine like Google or AltaVista and add "+site:www.libsdl.org" to your query. Try the search field at the top right; it links to Google.
The officially recommended way to successfully switch from window to fullscreen is to just toggle the SDL_FULLSCREEN flag and reset the video mode.
Drawing in a non displayed surface :
You need a pbuffer, which SDL doesn't support. SDL is missing a feature here.
However, in the 1.3 branch SDL emulates a render-to-texture like
extension that will render into a texture directly (using pbuffers), but
that's only useful if you want render to texture functionality.
Basically :
- allocate a pbuffer using the suitable platform-dependent (glX, WGL or
whatever) extension
- draw your stuff
- glReadPixels() of the frame buffer (I recommend a driver accelerating
this call here - notably not any old ati driver)
How to find current resolution
SDL_SysWMinfo info;
int width;
int height;
SDL_VERSION(&info.version);
SDL_GetWMInfo(&info);
width = DisplayWidth(info.info.x11.display, 0);
height = DisplayHeight(info.info.x11.display, 0);
Please note that to support ONLY RedHat you'll need to build almost 7
different rpm archives (different SDL libraries, libc, libstdc++...) and
it will not work if you want to use recent SDL features (for instance
the MMX probe functions of 1.2.7).
The fact of the matter is that if you don't have hundreds of man-hours to
spend on this, there are only three solutions:
- build from source (not applicable for closed source/commercial apps)
- static link (with objects available for relink)
- LD_LIBRARY_PATH & with local libraries built on an OLD architecture
(used for instance also by Mozilla products)
There are still a LOT of Debian woody, RedHat 7, Mandrake 8, etc. systems out there.
If you plan to make your software usable for a clueless user you should
make it run "out of the box" also on those systems.
Otherwise we cannot complain that everyone uses Windows...
SDL + Direct3D : Does anybody have an idea what's needed to get this to work fullscreen?
Look at the second chunk of code in the FAQ :
d3dpp.Windowed = FALSE; // You can use TRUE here no matter what
you passed to SDL_SetVideoMode()
Unreal Tournament 2004 uses SDL.
I always set a video ram budget and stick to it. The budget
is set at a percentage, usually 70% of the size of the memory on the
video cards I'm targeting.
SDL_UpdateRect() : are the co-ords inclusive or exclusive ?
If I pass a height of zero, does it still invalidate the line 'y'?
Inclusive. A height of zero really is a height of zero.
Ticks and system clock :
SDL may use rdtsc on systems that support it (i.e. on x86).
As such, it should not be affected by a clock change.
Other platforms might be affected, though.
rdtsc is disabled by default.
See src/timer/linux/SDL_systimer.c
You have to recompile SDL with USE_RDTSC defined.
SDL uses gettimeofday by default.
having alpha in the framebuffer is seldom
needed. You can still do blending without it. The framebuffer alpha is
just 0 always. The usual blending operation is GL_SRC_ALPHA,
GL_ONE_MINUS_SRC_ALPHA, which uses only the fragment's alpha. Another
common one is GL_SRC_ALPHA, GL_ONE which is often used to avoid having
to sort objects back-to-front. It doesn't use the framebuffer alpha
either.
Resizing an SDL window :
all textures disappear from the screen, since the OpenGL context is
destroyed at each SDL_SetVideoMode call.
It's documented behavior of the Windows OpenGL implementation, so it probably won't get fixed.
The OpenGL specs allow this, too.
This "bug" occurs somewhere in Microsoft's code or your drivers - I
suspect the former. That's not the domain of the SDL library to fix,
unfortunately. The workaround is to simply reload your textures. The three options SDL has are:
1) Saving all textures to a buffer before sending them to video memory.
This will cause a performance hit for often-updated textures.
2) Attempt to grab the texture from video memory on context switch. This
will cause more overhead than simply re-sending the textures from system
memory, assuming those textures have already been loaded.
3) Ignore the problem, and let programs deal with it - with no extra
overhead, since those programs already have expert knowledge of the
situation of their textures.
AFAIK, SDL has gone with #3. Rationale being: Texture reloading is easy
to implement in userland with C or C++, and allows your program faster
operation. That, and we don't want to break backwards compatibility -
the relevant opengl calls would be routed through SDL, meaning the
texture-reloading might not work for older binaries.
Well, I'd consider it a "bug" in SDL as it is possible to resize your window
without recreating the OpenGL context (even on windows).
Only if you change other properties (video depth, etc), your context will
become invalid.
But changing the default SDL behaviour (destroying the context on windows on
every call to SDL_SetVideoMode) might break applications (in terms of video
memory management).
A surface can practically be one of three types. 1) HWSURFACE 2)
SWSURFACE 3) OPENGL. That's because when you set OPENGL those other two
don't have a meaning.
SDL has a fixed size queue and no way to force the application to read
from it. When the queue fills up, SDL dumps the events on the ground.
You can read all about the problem, the tests I did to confirm it, and
the library I wrote to get around the problem at:
http://gameprogrammer.com/fastevents/fastevents1.html
BTW, this is rarely a problem when handling input, but it is a serious
problem when using the event queue as a general interprocess
communication mechanism.
- on x86, int is 32 bits, long is 32 bits and long long is 64 bits (this
is called ILP32).
- on most sane 64 bits architectures (x86_64, ia64, powerpc...) int is
32 bits, long is 64 bits and long long is 64 bits too (that's usually
called I32LP64 or simply LP64 for "long pointer 64").
- the people at microsoft decided to do differently, and use LLP64 on 64
bit windows : ints and longs are 32 bits, and long longs and pointers
are 64 bits.
So, using "long" is not portable. Thus, you need to use a portable type
in your code (hint : use SDL's Uint64/Sint64 if you need the 64 bits).
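A trivial way to check what your compiler/platform does (illustration only):
#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    printf("int=%u long=%u long long=%u void*=%u Sint64=%u (bytes)\n",
           (unsigned)sizeof(int), (unsigned)sizeof(long),
           (unsigned)sizeof(long long), (unsigned)sizeof(void *),
           (unsigned)sizeof(Sint64));
    return 0;
}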
When I last reviewed SDL's cursor-related classes and support, my
conclusion was that the dedicated cursor options were fairly limited.
I decided that this was a limitation based on SDL's crossplatform
nature, to ensure as wide as possible support for SDL's cursor
routines. It seemed possible (in fact, reasonably trivial) to
implement blit-based cursors of arbitrary size, shape, color,
animation, by suppressing the 'true' cursor display and blitting
surfaces of my choice to the location of the cursor. However, I did
not and do not plan to implement that, as my understanding was that
the 'basic' SDL cursor is very programmatically efficient and makes good
use of hardware-based cursor acceleration on most platforms, whereas a
re-implementation using surfaces would involve all the inefficiencies
involved in the use of surfaces and blits.
Great summary.
I'll add that if you are using fullscreen hardware surfaces then you'll
need to draw your own cursor anyway, as SDL has no idea what you are doing
to the video memory and cannot draw the cursor into video memory itself
without clobbering what you are doing. There are hardware overlays for
the cursor on some video cards, but that's not guaranteed to be available.
Running this simple SDL program on Windows XP with DirectX 9:
when I alt+tab away and return, I get "error: Blit surfaces
were lost, reload them" and "error:
DirectDrawSurface3::Blt: Surface was lost".
What's going on? It seems to still run. I tried
SDL_SetVideoMode but it doesn't fix it.
Here's the code.
That's documented. See the manpage for SDL_BlitSurface :
If either of the surfaces were in video memory, and the blit returns -2,
the video memory was lost, so it should be reloaded with artwork and re-blitted:
while ( SDL_BlitSurface(image, imgrect, screen, dstrect) == -2 ) {
    while ( SDL_LockSurface(image) < 0 )
        Sleep(10);
    -- Write image pixels to image->pixels --
    SDL_UnlockSurface(image);
}
This happens under DirectX 5.0 when the system switches away from your fullscreen application. Locking the surface will also fail
until you have access to the video memory again.
I find compiling using Visual C++ 6 under Win98 gives the best
performing DLL, even better than with MinGW.
I'm on the usual Quest for Smooth Animation.
1) Create the video surface with SDL_DOUBLEBUF | SDL_HWSURFACE.
Note that SDL_DOUBLEBUF automatically & silently sets
SDL_HWSURFACE, so that there's no difference between SDL_DOUBLEBUF |
SDL_HWSURFACE and SDL_DOUBLEBUF.
2) If it succeeds (flags & SDL_HWSURFACE == SDL_HWSURFACE), you get a
video memory chunk with enough space for TWO screens. The video card
displays one of these at a time. So you draw into the offscreen one (the
one represented by the returned SDL_Surface, right?), and call
SDL_Flip(), which *may* (this is backend-dependent) wait for VSYNC and swap the visible surface with the invisible surface. It depends on the video drivers and the screen settings.
By default most drivers turn off vsync. From the programmer's point of view, you might
just as well forget that vsync exists. Don't even think about it.
I thought VSYNC was *the* way to have smooth animation.
It is the *only* way with current display technology, but
unfortunately, your average OS + driver setup does not make high end
animation system. Unless it's ok that your game looks like crap or
misbehaves horribly on the majority of systems, do not count on
retrace sync to work.
3) Because the back buffer is on hardware, alpha blitting (which I do a
lot) is slow because it requires read. Therefore, you also allocate a
second surface, with the same dimensions and format of the screen, but
as a software surface. You draw on this third buffer, copy the dirty
parts to the hardware back buffer, and call SDL_Flip(). This is semi-triple
buffering.
This is not exactly triple buffering. Triple buffering in the "standard"
understanding is when you have 3 pages in video memory. This is what
Doom does. In this scheme you have two pages in video memory and one page in
system memory.
As for alpha blitting acceleration, well, this is backend-dependent
(heh. We should release glSDL sometime...)
4) If 1) didn't succeed, instead you got a software surface (and the
display memory which you cannot directly touch). You draw on this
surface, and copy the dirty bits with SDL_UpdateRect(). The copy is
immediate, no wait for VSYNC, so you get tearing. The best you can do is
try to have a frame rate close to the monitor's refresh rate.
As you'll never be at the monitor's exact refresh rate (not to mention that you don't always know the refresh rate...), you'll sometimes get the worst tearing there is. I think you'd
better draw as fast as possible to reduce tearing (considering that it's
impossible to remove it altogether).
So, according to you, the best choice is doing semi-triple buffering if
the hardware supports it, video surface + back buffer if it doesn't, and
always try to draw at a fixed FPS around 100?
At program start, clear an integer array AvgFrameTimes of size NumFrames
and set an Index to 0.
At the start of each frame:
FrameTime = SDL_GetTicks();
AvgFrameTimes[Index] = FrameTime - PrevFrameTime;
Index = (Index + 1) % NumFrames;
PrevFrameTime = FrameTime;
Somewhere during each frame, when you want to print your current frame rate:
Calculate the average frame time, which is just the average of all
the numbers in the AvgFrameTimes array
The FPS value is 1000 / (average-ms-per-frame)
Just print that number to some place on the screen and you have a
real-time FPS meter.
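A sketch of that meter in C (NumFrames, AvgFrameTimes and the other names are just the ones used above):
#define NumFrames 32

static Uint32 AvgFrameTimes[NumFrames];  /* cleared at program start */
static Uint32 PrevFrameTime;
static int Index = 0;

void frame_begin(void)                   /* call at the start of each frame */
{
    Uint32 FrameTime = SDL_GetTicks();
    AvgFrameTimes[Index] = FrameTime - PrevFrameTime;
    Index = (Index + 1) % NumFrames;
    PrevFrameTime = FrameTime;
}

float current_fps(void)
{
    Uint32 total = 0;
    int i;
    for (i = 0; i < NumFrames; i++)
        total += AvgFrameTimes[i];
    if (total == 0)
        return 0.0f;
    return 1000.0f / ((float)total / NumFrames);  /* 1000 / average ms per frame */
}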
I use a method for counting fps that works very well and that I did not see
elsewhere:
#include <sys/time.h>

static struct timeval start;   /* set once at program start: gettimeofday(&start, NULL) */

static int
clock_ms(void) {
    struct timeval now;
    gettimeofday(&now, NULL);
    return (now.tv_sec - start.tv_sec) * 1000
         + (now.tv_usec - start.tv_usec) / 1000;
}

int
fps(void) {
    static int frames = 0;
    static int last = 0;
    int f = 0, now = clock_ms();
    if (now - last >= 1000) {
        last = now;
        f = frames;
        frames = 0;
    }
    frames++;
    return (f);
}
Just call fps() every frame and check if it returns != 0. I think this
is the most accurate fps counter possible, because it has no rounding
errors (it really counts the frames).
Yeah, that's pretty much it; the game logic deals with time periods
shorter than the nominal frame period.
It doesn't have to be restricted by logic frame rates, though. Another
way to handle it is to deal with "exact" delta times instead, doing
away with the frame as a unit in the game logic. The problem with
this is that it quickly gets hairy and/or inaccurate, unless you put
a lot of effort into ensuring that rounding and approximation errors
do not make rendering frame rate, CPU speed, wall clock time (start
timestamp) and other factors affect the game logic.
It's really simple. Just use SDL_GetTicks() to measure the time between
two frames.
Divide 1000 by the time (in milliseconds).
That'll work, though it'll bounce around a lot as the time between frames varies
slightly. You can also have one variable storing a time, and another variable
counting frames. Start frames at 0, and store the time. Each frame, put the
frame counter up, and check if the current time is the stored time + 1000. When
it is, print out the counter as the current FPS, then reset it to 0 and put the
current time in the stored time. Then the FPS will update every second, showing
the average FPS over the last second.
collision detection -- sub-frame accuracy?
Here's an example:
Assume you have 2 objects, O1 & O2, defined as follows:
at frame 0:
O1 is at <0,5> w/ velocity <10,0>
O2 is at <5,0> w/ velocity <0,10>
clearly they should intersect at <5,5> at time 0.5. But if our
calculations are done on a per frame basis only, at frame 1 we get:
O1 is at <10,5> w/ velocity <10,0>
O2 is at <5,10> w/ velocity <0,10>
Assuming objects are moving in straight line segments, a solution is:
at every frame, calculate the path of each object & perform a
collision detection on the paths (essentially line intersections).
Reality is of course more complicated as this only works on zero
volume objects - ie it will not register collisions of paths that pass
'near' each other & would have overlapping objects. Also, it requires
curved paths to be approximated by straight line segments. As w/
anything, the more realism you want, the more complicated things
become
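One hedged sketch of the path test for circular objects (my own code, not from the original post): treat each object's motion over a frame as P + V*t with t in [0,1], find the time of closest approach of the two paths, and compare the distance at that time with the sum of the radii.
typedef struct { float x, y; } Vec2;

int swept_collision(Vec2 p1, Vec2 v1, Vec2 p2, Vec2 v2, float radius_sum)
{
    /* Relative position and velocity: object 2 as seen from object 1. */
    Vec2 d = { p2.x - p1.x, p2.y - p1.y };
    Vec2 w = { v2.x - v1.x, v2.y - v1.y };
    float ww = w.x * w.x + w.y * w.y;
    float t = 0.0f;
    float cx, cy;

    if (ww > 0.0f)
        t = -(d.x * w.x + d.y * w.y) / ww;   /* time of closest approach */
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;                  /* clamp to this frame */

    cx = d.x + w.x * t;
    cy = d.y + w.y * t;
    return cx * cx + cy * cy <= radius_sum * radius_sum;
}
For the example above (O1 at <0,5> with velocity <10,0>, O2 at <5,0> with velocity <0,10>), the closest approach comes out at t = 0.5 with zero distance, so the collision is caught even though the per-frame positions never overlap.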
Remember though, your program is not the only program using video memory. It is very
unlikely that you can get 100% of video memory for your program's use.
VSync is only supported in a few drivers and usually only in fullscreen,
plus you cannot really tell SDL to turn it on or off, and sometimes you
have to set some unknown environment variable before calling
SDL_SetVideoMode. In any case, it's not a reliable solution since you
have no good control over it.
A more or less reliable way of doing it, is to calculate the frames/sec
and either use a timer, or an SDL_Delay (along with a time that will get
you close to your desired next frame time). Or just skip rendering in
your event loop until the time for the next frame arrives, replacing the
redraw with a small SDL_Delay(10) if you want to be cpu friendly.
Limiting the framerate :
You can turn on vsync, which limits the framerate to the vertical refresh
rate of your monitor; this in turn makes the images look a lot nicer because
it gets rid of tearing.
You cannot Blit to a locked surface
Using SDL without needing a window : the way around it is to use the "dummy" video driver. You can set the
environment variable SDL_VIDEODRIVER to "dummy" either from the shell, or by
using setenv() inside your program before calling SDL_Init(). This is useful
for programs that want to use SDL structures and routines, but note that
without a real video driver, you won't get any events.
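A minimal sketch of that setup (the error handling is mine):
#include <stdio.h>
#include <stdlib.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    static char videodriver[] = "SDL_VIDEODRIVER=dummy";

    putenv(videodriver);                     /* must happen before SDL_Init() */
    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER) < 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }
    /* ... use SDL surfaces, timers, RWops etc. here - but expect no events ... */
    SDL_Quit();
    return 0;
}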
As long as your users have decent graphics cards (i.e., which can do
hardware accelerated OpenGL), then yes, using OpenGL is far faster, as where
possible OpenGL is hardware accelerated by default.
The other option is to use SDL_VIDEODRIVER to use one of hardware
accelerated 2D drivers, but in addition to a decent graphics card these
also require that you run in fullscreen and that your program is run with
root permissions.
In general the reason Windows vastly outperforms Linux for 2D graphics is that by default
Windows supports hardware acceleration for 2D graphics, but Linux does not.
To get hardware acceleration in Linux you need to do one of:
- use OpenGL and 3D graphics functions to do 2D graphics
- use a video driver that requires root permissions, requires you to run
fullscreen, and which may well not be supported by many video cards, like
directfb or dga. You can use such drivers by setting the environment
variable SDL_VIDEODRIVER; its use is covered in the FAQ on the SDL
website.
as soon as I try to compile and run in fullscreen, the graphics won't
show. I just get a black screen with the menu text.
If you are manipulating pixels directly, make sure you are using
surface->pitch instead of surface->w to move from one row
of pixels down to the next.
In windowed mode they often happen to be the same, but in fullscreen mode
they are often different because of padding on the rows of pixels.
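A minimal sketch of pitch-based addressing (put_pixel32 is my own helper; it assumes a locked 32 bpp surface):
void put_pixel32(SDL_Surface *s, int x, int y, Uint32 color)
{
    /* pitch is the number of bytes per row, including any padding */
    Uint8 *row = (Uint8 *)s->pixels + y * s->pitch;
    Uint32 *pixel = (Uint32 *)(row + x * s->format->BytesPerPixel);
    *pixel = color;
}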
I wrote a little howto for a small game I did. It explains step by step
how to build a statically linked exe using mingw and msys.
http://www.boardmod.org/stuff/astro_constrib.tar.gz
By calling SDL_SetVideoMode that way, you are forcing the creation of a
16bpp video surface. If you force the creation of a 16bpp surface while
the underlying screen is 32bpp, SDL will create a so-called "shadow
surface". This surface is a surface between your application and the
video memory, and it has the pixel properties you requested (here, it is
a 16bpp surface). So what will happen from there ? SDL_SetVideoMode will
return the shadow surface to your application, and your application will
draw to it. Then, during SDL_Flip/UpdateRect, this shadow surface will
be copied (and converted at the same time) to 32 bpp. That can be slow,
because this prevents hardware accelerated blits and does an additional
copy.
What can you do about that ? Let SDL create a 32bpp video surface (by
using the SDL_ANYFORMAT flag for example). Then SDL_DisplayFormat all
your resources.
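Roughly, inside your init code (sprite.bmp is just a placeholder asset):
SDL_Surface *screen = SDL_SetVideoMode(640, 480, 0, SDL_ANYFORMAT | SDL_SWSURFACE);
SDL_Surface *loaded = SDL_LoadBMP("sprite.bmp");
SDL_Surface *sprite = SDL_DisplayFormat(loaded);   /* converted once to the screen's pixel format */
SDL_FreeSurface(loaded);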
Tile studio is here: http://tilestudio.sourceforge.net/
It allows you to export the graphics as "C" code so they will become
static data in your EXE. This method should work on any platform.
If you're actually thinking of building an installer, then you might
like to check out the Nullsoft installer:
http://nsis.sourceforge.net/
I would like to create a SINGLE exe which contains all the DLLs (SDL, SGE, MSVC.., etc.), the ttf files and the gif files of my game.
Is this possible? And, if so: how?
You can statically link all the libraries together. The resources can
all be converted to C code that you can compile and link into your
program and use RWops to read them from memory. Turns out all SDL
functions that read from files actually use a RWop to do the file IO and
there are RWops that let you treat a constant in memory as a file.
alpha blending needs a read-write cycle which is slower when happening on hardware surfaces.
(note that no video backend currently supports alpha blending in
hardware, but this might change in the future. The only way to tell is
to look at the acceleration flags you get from SDL_GetVideoInfo)
This flag is read only.
SDL_HWACCEL means that the blit that just happened with this surface as
source used hardware acceleration.
when you are double-buffering you have two buffers that correspond
to the same "surface". you are generally always drawing to the back
buffer, whereas the front buffer is what's being displayed; then the
flip operation causes the back buffer to become the new front buffer
et vice versa. Example (someone correct this if I've made glaring
mistakes, please):
Back buffer Front buffer
----------- ------------
Start (empty) (empty)
Draw square square (empty)
Flip! (empty) square
Draw circle circle square
Flip! square circle
No square in front buffer!
If you wanted the square to appear in the front buffer in this case,
you would also have to draw it on the second buffer:
Back buffer Front buffer
----------- ------------
Start (empty) (empty)
Draw square square (empty)
Flip! (empty) square
Draw square square square
Draw circle circle square
square
Flip! square circle
square
Incremental drawing on double-buffered video surfaces tends to be a
pain, for this reason; my impression is that mostly when drawing to
double-buffered surfaces, people simply clear the buffer at each
frame, then redraw everything, since it's significantly easier...
If I have an animation sequence in one BMP file and I would like to use it
at run-time: do most of you just store the one bitmap and cut out the image
you need when the frame is required, or do you chop the animation up and
store each frame in its own PSDL_Surface?
It's much easier than that. Simply store the animation in a single
surface the way you normally would. When blitting, use the second
parameter "srcrect."
Well, that's not necessarily a safe bet. It depends on the details of
the individual application, of course, but I managed to get a small
performance improvement in my program by slicing up the source data so
that each image was being blit from a separate surface.
I think also that images stored RLE might cause a performance hit if you are
trying to grab arbitrary parts of them (rather than blitting the whole thing).
i tried searching the archives before posting
I use google with "site:libsdl.org"
Is there any chance you would know how to set the window (x,y)
position???
It seems to be pretty random so far...
SDL doesn't support this function yet. The fact is that currently your
operating system sets it for you. Under windows, it may seem pretty
random under many circumstances.
using dirty rectangles in a double-buffered set up
will mean having to keep track of changes not since last frame, but from
the frame before that, which may be considered too much bookkeeping.
(To enlighten me, is there anybody out there actually using this approach?)
If your application is a 2D game or something (with mostly static
backgrounds) or the like, you may be better off having a single buffer
only (aka the screen) and have your sprites (if applicable) in video
memory and just blit them to the screen when needed (video memory to
video memory copies may not be as fast as page flips, but they are quite
fast.)
amask = ~(src->format->Rmask |
src->format->Bmask);
Is src->format->Amask the right amask to use?
This is code to enable as large an alpha channel as possible, I think.
(That could be useful to improve filtering quality, since alpha channels
are often one-bit.)
It's wrong in the general case, though.
When you actually have 2 hardware buffers and are flipping
between them, you need to update for changes on each buffer separately.
The primary advantage of using OpenGL rather than doing all your effects
through straight software pixel manipulation (as I assume SDL_gfx does?)
is that, if your OpenGL drivers are reasonably done, you get to take
advantage of the GPU in your graphics card to perform some operations
faster and/or with more parallelism than you could with the CPU. For
instance, if your graphics card accelerates texturing, you can upload
your image to texture memory and then get fast warping of it (including
rotation and scaling and such) by drawing polygons using it as a
texture.
You wouldn't really be "emulating 2D with 3D", note; using OpenGL for 2D
usually involves setting up a projection such that the third dimension
just never gets used. There are plenty of applications that use OpenGL
for 2D rendering; Chromium B.S.U. comes to mind, for instance, or the
OpenGL modes of Kobo Deluxe.
8bpp textures are faster than direct pixel access under X11 here
(that makes sense too, since you are sending 8bpp values to the card, and
the card converts those to the display bpp, instead of sending bigger
already converted values which eat up the video card's bandwidth).
asking for a double buffered surface implicitly requests a hardware surface.
One might also want to try the windib backend.
In short, a hardware video surface cannot stand too much pixel-level
access, but is very appropriate for fast blitting. A software video
surface, OTOH, is appropriate for direct pixel access.
Because X11 controls the video subsystem. X11 uses the ATI drivers to
draw on the screen. SDL, as an application, has to go through X11. That
is the way it works. The OpenGL subsystem negotiates with X11 for direct
access to the 3D hardware, but still has to work with X11 to get a
window, to move a window, to get input...
Glxgears runs full out. It takes up 100% of the CPU. The "weird glitch"
is most likely the OS cutting in and running housekeeping code and
letting other applications run a bit.
That is why it is important for animation code to *not* use 100% of the
CPU.
You can have 2 or more windows by having your program spawn child processes
that each have their own SDL environment. The technique of spawning child
processes is different for each OS, but this can be handled by conditional
compilation. You'll also want to set up interprocess communication between
the parent and child processes.
I don't know how to obtain an exact copy of a region of a surface in a new surface (with
the same alpha format, pixel format, etc.). I have used
SDL_CreateRGBSurface and SDL_CreateRGBSurfaceFrom, but these functions
don't respect the alpha channel...
Disable alpha on the source surface (see the SDL_SetAlpha manpage)
before doing the blit, and it will transfer the alpha channel during the
blit.
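A sketch of the suggested approach (copy_region is my own name; the save/restore idiom follows the SDL_SetAlpha documentation):
SDL_Surface *copy_region(SDL_Surface *src, SDL_Rect area)
{
    Uint32 saved_flags = src->flags & (SDL_SRCALPHA | SDL_RLEACCELOK);
    Uint8 saved_alpha = src->format->alpha;
    SDL_Surface *copy = SDL_CreateRGBSurface(SDL_SWSURFACE, area.w, area.h,
                                             src->format->BitsPerPixel,
                                             src->format->Rmask,
                                             src->format->Gmask,
                                             src->format->Bmask,
                                             src->format->Amask);
    if (!copy)
        return NULL;

    /* Turn off source alpha so the blit copies the RGBA values verbatim. */
    if (saved_flags & SDL_SRCALPHA)
        SDL_SetAlpha(src, 0, 0);
    SDL_BlitSurface(src, &area, copy, NULL);
    if (saved_flags & SDL_SRCALPHA)
        SDL_SetAlpha(src, saved_flags, saved_alpha);   /* restore the old blending mode */

    return copy;
}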
When the mouse is not connected I get an error stating:
DirectInput::CreateDevice: Device not Registered. The error occurs
when SDL_Init(SDL_INIT_VIDEO) is called.
Try using windib instead of DirectX (see
http://sdldoc.csn.ul.ie/sdlenvvars.php); then you won't use DirectInput.
It's worth noting that several companies use SDL in closed source
solutions: Loki statically linked it and supplied an unsupported
dynamically linked version too. Epic just dynamically links it for
Unreal and ships the shared library with the game as the sole
configuration, so this is really only a question for embedded things
like SymbianOS.
control framerate : How can I adjust the speed of the engine?
The way, _I_ usually do it (I target slow machines, and then
slow my apps down on the faster machines) is to call SDL_Delay()
at the bottom of my loop.
Say I want 20fps. If, by the end of dealing with events, moving
my monsters, and drawing the screen, I haven't used up 1000/20 = 50ms,
I wait for the remainder.
I just keep track of 'what time it is' at the beginning of the loop
(right after the "do {..." usually), using SDL_GetTicks() (I think?),
and then test-and-pause-if-I-need-to at the end of the loop
(right before the "...} while (!done);" usually) -- using SDL_GetTicks()
again, and then calling SDL_Delay() on the remainder, if any...
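In code, that loop looks roughly like this (TARGET_FPS and the 20 fps figure follow the example above):
#define TARGET_FPS 20
#define FRAME_MS (1000 / TARGET_FPS)      /* 50 ms per frame at 20 fps */

Uint32 frame_start, elapsed;
int done = 0;

do {
    frame_start = SDL_GetTicks();

    /* handle events, move monsters, draw and flip the screen here */

    elapsed = SDL_GetTicks() - frame_start;
    if (elapsed < FRAME_MS)
        SDL_Delay(FRAME_MS - elapsed);    /* wait out the remainder of the frame */
} while (!done);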
Although usually you want to write your code in a framerate independent
manner, I have found instances where I have had to lock the frame rate to
certain values in some of my games. The problem with calling Sleep and
SDL_Delay (which I think calls Sleep internally), is that the minimum wait
time varies in different platforms which can give you real ugly problems.
I have a better solution that can wait on millisecond accurate times. I
have written a tutorial on how to do this for Windows (see link below): http://home.comcast.net/~willperone/Codetips/accuratesleep.html
You *can* restrict the rendering (and logic) frame rate to some
arbitrary value, but I'd recommend against it for three reasons:
1) It eliminates all chances of utilizing "spare"
rendering power for smoother animation.
2) The resulting frame rate is likely to interfere
with the refresh rate of the display, which
cannot safely or reliably be changed.
3) It doesn't solve the situation where rendering
is too slow, so your game logic may run too
slow on slower machines, unless you deal with
that specifically.
The best and most robust solution is to separate the game logic from
the rendering frame rate. There are some different approaches to
this, including:
* Fixed logic frame rate with timer driven thread. (Game
logic is handled in a separate thread that is driven at
a fixed rate - say 100 Hz - by a timer of some sort.)
+ Simple - at least in theory.
+ Easy to design exact (repeatable) game logic.
+ Fixed frame rate ==> simple and robust game logic.
- Depends totally on OS scheduling accuracy and
granularity for smooth animation.
- Threading issues complicates the code.
- Threading overhead. (Virtually zero on some
platforms, though.)
- Needs interpolation for smooth animation at all
rendering frame rates.
- Interpolation is hard to get right, since there
is no hard synchronization between the logic and
rendering threads.
* Fixed logic frame rate with frameskipping and/or
interpolation. (Game logic runs in the rendering thread,
but is throttled so that it runs at the correct average
frame rate.)
+ Fixed frame rate ==> simple and robust game logic.
+ Easy to design exact (repeatable) game logic.
+ No multithreading complexity
+ Interpolation is easy to implement, since there
are no asynchronous buffering issues.
+ A very high logic frame rate (say 1 kHz) is an
easy hack if you want to avoid interpolation.
- Needs interpolation or very high logic frame rates
for smooth animation at all rendering frame rates.
* Time based logic. (Logic frame rate == rendering frame
rate, but speed, acceleration etc is scaled based on the
delta times between frames.)
+ Accurate positions and smooth animation,
regardless of frame rate.
+ No need for interpolation for perfectly smooth
(even sub-pixel accurate) rendering.
- Exact/repeatable game logic is hard to implement.
- Special measures must be taken to ensure that
collision detection works properly at low frame
rates.
- Game logic is complicated by extra math for the
time scaling.
(Note: My personal favourite for 2D action games is the second one;
fixed logic frame rate + interpolation. That's what I use in Kobo
Deluxe and Fixed Rate Pig, both found on my site.)
How can it be incorporated with a graphic engine?
That depends on how you deal with the game logic timing. It's rather
trivial as long as you keep the game logic in the same thread as the
rendering engine. It gets a bit trickier with interpolation (need to
buffer some old coordinates for each object and do the actual
interpolation to get the graphics coordinates for each frame), and
perhaps even worse with a multithreaded approach, due to the sync
issues. (cannot just share variables. At best, you'll get occasional
jittering.)
I need to say this timer thing will need to work with the sound system too later.
Yeah, another tricky detail... The easy way is to just send
asynchronous messages from the game logic to the sound engine
(perhaps using a lock-free FIFO, as I do in Kobo Deluxe) - but that
obviously means that sound timing is quantized to the physical logic
frame rate (same as the rendering frame rate unless you have a
separate logic thread) and/or audio buffer size.
In recent (not officially released) versions of Kobo Deluxe, I've
introduced a timestamped interface that keeps a "constant" latency,
rather than the random jittering and quantization effects of previous
versions. Very simple stuff in theory, but things are complicated by
the timing jitter caused by OS scheduling granularity, rendering time
fluctuations and stuff. That's handled by a slowly decreasing time
offset that's adjusted slightly whenever the engine realizes the
delay is smaller than the minimal latency.
In my Game[1], I do lock the game logic and frame rate to a
fixed value. After having problems with different scheduling
granularities, I arrived at the following combination of Sleeping and
busy-waiting...
I planned to find the values of SLEEP_MIN and SLEEP_GRAN at runtime, but
20/10 works good enough for now.
#define SPEED 70      /* Ticks per Frame */
#define SLEEP_MIN 20  /* Minimum time a sleep takes, usually 2*GRAN */
#define SLEEP_GRAN 10 /* Granularity of sleep */

int frames;           /* Number of frames displayed */
Uint32 t_start, t_end;
Sint32 t_left;

while(1){
    frames++;
    t_start = SDL_GetTicks();

    // Do a single frame worth of work here....

    t_end = t_start + SPEED;
    t_left = (Sint32)(t_end - SDL_GetTicks());
    if(t_left > 0){
        if(t_left > SLEEP_MIN)
            SDL_Delay(t_left - SLEEP_GRAN);
        while(SDL_GetTicks() < t_end)
            ;   /* busy-wait the last few milliseconds */
    }
}
Creating and applying patches : to create a patch, use the unix diff utility and redirect its output to a file, for example "diff -ru original modified > mypatch".
To do the opposite, i.e. apply the patch to a source tree, you have to use the unix patch utility :
"patch < mypatch" or "patch -p1 < mypatch"
Thus the name.
So simply see the manpages for "diff" and "patch" if you want more info (in particular, be careful not to include the .o files or such in your patch). Also, even if you don't use a unix system, these utilities are available in mingw.
I need to duplicate an SDL_Surface structure. Snippet:
dupe = SDL_ConvertSurface(sprite, sprite->format, sprite->flags);
Where sprite is an SDL_Surface.
The pixels member of the SDL_Surface structure is
really just for your personal reference. In other words, pixels tells
you where the pixel data is stored; it doesn't tell SDL where the pixel
data is stored: it is only a view of it.
Also, always use while() to poll events. Using a simple "if" will pump
only one event from the event queue, while there could be more than one.
Events will then start to accumulate in the event queue until everything
blows up.
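For example:
SDL_Event event;
int done = 0;

/* Drain the whole queue every frame with while(), not if(). */
while (SDL_PollEvent(&event)) {
    switch (event.type) {
    case SDL_QUIT:
        done = 1;
        break;
    case SDL_KEYDOWN:
        if (event.key.keysym.sym == SDLK_ESCAPE)
            done = 1;
        break;
    default:
        break;
    }
}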
One can use SDL_image as a texture loader : primarily, synthesize your own SDL_PixelFormat structure based on which
settings you are going to be using in your glTexImage2D call, then call
SDL_ConvertSurface and extract the pixels from the result. You could also
do the conversion yourself, if you wanted to use a format SDL doesn't
natively support for its surfaces (such as GL_LUMINANCE_ALPHA) or if you
wanted to do in-place conversion or other such tricks.
No SDL backend accelerates alpha blits for now. Hw alpha blit support in SDL isn't finished (it's not far from working though, and a simple fix does it, but for now it makes it impossible to have backends hw accelerating alpha blits. That probably explains why DirectFB has the code for hw alpha blits, but it's disabled.).
Where do people get that SDL_HWACCEL from? It isn't a flag you can give
to SetVideoMode. It's a SDL_Surface flag, and a read-only one, you cannot
set it yourself.
You need to clip the rectangles that you pass into SDL_UpdateRects()
SDL_BlitSurface does clip the rectangles, and you can pass the resulting
rectangles directly to SDL_UpdateRects(), however if you do things yourself,
you'll need to do your own clipping.
The SDL parachute was added out of fear that user code might crash for some reason, or be ctrl-alt-deleted out of, before SDL_Quit is called.
If you crash, under Unix it generally sends SIGSEGV (or SIGBUS) to the process. SDL sets up a
handler that catches this (among other signals) so it can deinitialize
itself appropriately. On its calling TTY (if there is one), it prints
an error message indicating such:
Fatal signal: Segmentation Fault (SDL Parachute Deployed)
Under Windows I expect it would be similar.
If you supply an SDL_NOPARACHUTE flag to SDL_Init, this functionality
will be disabled.
As far as the C-A-DEL combination, this is usually handled by the OS;
under Linux it usually triggers a clean reboot which sends SIGTERM,
then SIGKILL. I'm not sure SDL catches SIGTERM; it doesn't seem to
with a short test program... ? SIGKILL, of course, is not catchable.
If the system is shutting down, of course, I estimate the utility of
calling SDL_Quit may be somewhat diminished, but I'm hesitant to give
an answer with any certainty.
Your program crashed, and SDL caught the resulting exception in order to clean up and return the display to a usable state. If this isn't done, on some platforms the user may be forced to reboot. For instance, if a fullscreen application built with SDL_INIT_NOPARACHUTE crashes or exits without SDL_Quit() under X11, it leaves the desktop at the fullscreen app's resolution (which is often far lower than the original desktop resolution). You may be looking for the SDL_INIT_NOPARACHUTE flag, used here:
SDL_Init ( SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_NOPARACHUTE);
to disable the SDL parachute, so you can have your debugger of choice trap the exception instead.
For small programs, to be sure that everything, the GUI included, is correctly cleaned up after the program finishes, just use std::atexit( SDL_Quit ) :
#include <cstdlib>   // for std::atexit
void my_cleanup_func(void);   // prototype so main() can register it

int main(int argc, char **argv) {
    // blah blah blah
    std::atexit(my_cleanup_func);
    // blah blah blah
}

void my_cleanup_func(void) {
    // clean up stuff
    SDL_Quit();
}
More advanced users should shut down SDL in their own cleanup code. Plus, using std::atexit in a library is a
sure way to crash dynamically loaded code...
If SDL has certain rules (std::atexit(SDL_Quit) is an example, another
popular one is the need to handle events on the SDL_SetVideoMode()
calling thread) is because it's a multiplatform API.
For win32 cross-compilation use these scripts :
http://www.libsdl.org/extras/win32/cross/cross-configure.sh
http://www.libsdl.org/extras/win32/cross/cross-make.sh
*Never* free the display surface!
here's no "ClearSurface" call, but there
is SDL_FillRect(). This should do the trick:
SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 0, 0, 0));
(The last three numbers are R, G and B respectively. Change these for
other colors than black.)
In fact, you should avoid allocating or
freeing any memory in the main loop, as memory management is one of
the big problems in real time applications.
How can I see my window in the center of the screen?
void CenterWindow()
{
SDL_Surface *screen=SDL_GetVideoSurface();
SDL_SysWMinfo info;
SDL_VERSION(&info.version);
if ( SDL_GetWMInfo(&info) > 0 ) {
#ifdef __unix
if ( info.subsystem == SDL_SYSWM_X11 ) {
info.info.x11.lock_func();
int w = DisplayWidth(info.info.x11.display,
DefaultScreen(info.info.x11.display));
int h = DisplayHeight(info.info.x11.display,
DefaultScreen(info.info.x11.display));
int x = (w - screen->w)/2;
int y = (h - screen->h)/2;
XMoveWindow(info.info.x11.display,
info.info.x11.wmwindow, x, y);
info.info.x11.unlock_func();
}
#endif // unix
#ifdef WIN32
{
RECT rc;
HWND hwnd = info.window;
int w=GetSystemMetrics(SM_CXSCREEN);
int h=GetSystemMetrics(SM_CYSCREEN);
GetWindowRect(hwnd, &rc);
int x = (w - (rc.right-rc.left))/2;
int y = (h - (rc.bottom-rc.top))/2;
SetWindowPos(hwnd, NULL, x, y, 0, 0, SWP_NOSIZE|SWP_NOZORDER);
}
#endif
}
}
Or try this:
#include <stdlib.h>   /* for putenv() */
putenv("SDL_VIDEO_CENTERED=1");
In Linux, it centers the window every time SDL_SetVideoMode() is called.
In Windows, it centers the window upon program invocation, but not for
subsequent SDL_SetVideoMode() calls (within the same run).
MinGW cross-compilation
It didn't occur to me that I could use a precompiled version because I
didn't realize that it needn't be precompiled specifically for Mac OS X/
PPC since it only contains Windows/x86 code. Anyway, the precompiled
version from libsdl.org worked, so I went back to my own compile to figure
out what's wrong, and eventually found it.
The problem is the following: When run through cross-configure.sh,
configure (for some reason I don't even want to know) fails to figure out
in what shell it is running, leaving CONFIG_SHELL empty. This propagates
through various errors into a broken libtool, which in turn builds a
broken library. (Apparently the method with nm is not suitable for
detecting a broken library.)
My (preliminary) fix is to add "CONFIG_SHELL=/bin/sh; export CONFIG_SHELL"
to cross-configure.sh.
I use a black image as background, because if I don't use it, the foreground images overlap, generating a trail effect.
How can I avoid using the background image?
You can do one of two things, really:
1. At the beginning of each frame, erase the entire screen,
then draw all of your objects, then do an SDL_Flip() or _UpdateRect().
Sounds like you'd rather not do this. :^) It definitely can be slow!
2. At the beginning of each frame, erase each of the objects
(just like drawing them all, except you are drawing black).
THEN, move the objects.
Now, draw them in their new positions, and then update the screen
(e.g., SDL_Flip())
Typically, I have SO much stuff moving on my screens (see: Defendguin)
that it's cheaper/easier for me to just wipe the screen each frame.
Sometimes, though (see: Mad Bomber), it makes more sense to erase and
redraw just the few moving objects, rather than redraw /everything/...
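A rough sketch of approach 2 with a single moving object (old_rect, new_rect, sprite and move_sprite are hypothetical names, and both rects are assumed to stay inside the screen):
SDL_Rect dirty[2];

/* 1. Erase the sprite at its old position (draw black over it). */
SDL_FillRect(screen, &old_rect, SDL_MapRGB(screen->format, 0, 0, 0));
dirty[0] = old_rect;

/* 2. Move it, then redraw it at the new position. */
move_sprite(&new_rect);                              /* your game logic */
SDL_BlitSurface(sprite, NULL, screen, &new_rect);
dirty[1] = new_rect;

/* 3. Push only the two touched rectangles to the display. */
SDL_UpdateRects(screen, 2, dirty);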
By "fake" double buffering, I mean a setup where making a new frame
visible is done by copying a shadow buffer into the display buffer,
as opposed to just swapping two display buffers.
What Pig does when hardware page flipping is not available is stay
away from SDL_Flip() and use SDL_UpdateRects() instead. That way, it
can get away with copying only the areas that have actually changed,
which is how you get those insane frame rates (hundreds or thousands
of fps) even without h/w accelerated blits.
Note that SDL gives you *either* double buffering with h/w page
flipping, *or* a single display buffer + s/w shadow buffer! That is,
if you want a s/w back buffer for fast rendering, you'll have to
create that yourself, and implement your own alternative to
SDL_UpdateRects(). Pig does that to avoid doing alpha blending (which
there is a lot of) directly into VRAM.
The "semitriple" buffering setup that Pig uses when supported (two h/w
pages + one s/w shadow), is not directly supported by SDL. You have
to implement the shadow surface yourself, on top of an SDL double
buffered h/w display surface.
Now, do keep in mind that this setup is utterly pointless if the blits
you are using are accelerated! Only use this if you do significant
amounts of alpha blending to the screen, and aren't on a backend that
accelerates alpha. (Only glSDL and DirectFB do that, AFAIK, so don't
rely on it. Most users won't have accelerated alpha.)
In theory, no, double buffering in a window *is* possible. However,
without special (high end) hardware, it's very tricky to implement,
so very few (if any) widely available windowing systems support it.
Either way, I strongly recommend that you disregard the
windowed/fullscreen property in this regard. There is no guarantee
that a window cannot use page flipping, nor is there a guarantee that
a target can use page flipping *at all* even in fullscreen modes.
(Many cannot - including XFree86 without DGA, meaning some 95% of the
Linux users.)
Bo Jangeborg
I've got a quite advanced dirty rect routine that splits overlapping rects
into the minimum number of new rects that would cover the same area.
This is implemented on a tree of widgets, some of which have their own
surfaces and some that are drawn directly to the screen, all with
relative positioning to make life more interesting.
So I think I have that part covered.
OpenGL and SDL hints
>Is it better to store multiple pictures per surface when you have
a lot of them, or one picture per surface?
Definitely one picture per surface.
Assuming a picture size of 8x8 or greater, one picture per surface is always
better when you have any kind of transparency (colorkey or alpha channel),
and in no case significantly worse than multiple pictures per surface.
Also note that if you are using RLE acceleration (which is strongly
recommended if you have colorkey or alpha channels), there is an
additional cost to clipping which is avoided if you use one surface
per sprite.
...so you should pretty much always ask for
RLE acceleration if you have alpha channels.
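For example (sprite and key are placeholder names):
/* Per-pixel alpha surface: keep per-surface alpha opaque, ask for RLE. */
SDL_SetAlpha(sprite, SDL_SRCALPHA | SDL_RLEACCEL, SDL_ALPHA_OPAQUE);

/* Colorkeyed surface: same idea with the colorkey. */
SDL_SetColorKey(sprite, SDL_SRCCOLORKEY | SDL_RLEACCEL, key);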
Lack of additive blending is
one of the big weaknesses of SDL
you may not actually get double buffering
even if you request it. There are two ways of dealing with this situation:
1. Use SDL_SWSURFACE instead of SDL_HWSURFACE when setting the video mode.
This will give you a software backbuffer to which you can render. It will
also disable all hardware acceleration, so use with care.
2. Create your own backbuffer with SDL_CreateRGBSurface, draw your
complete scene on the backbuffer, and then blit the backbuffer to the video
surface in one fell swoop. It's a bit of work on your part, but it gives
good results.
SDL_UpdateRect or SDL_Flip. Pick one. Don't use the other.
SDL_RWops Tutorial : http://www.kekkai.org/roger/sdl/rwops/rwops.html
If you enable DGA (set environ variable SDL_VIDEODRIVER=dga) for SDL,
you will not be able to launch SDL+OGL programs... You need restore:
SDL_VIDEODRIVER=x11
>Any thing like double buffering support is there in SDL?.
Yes - SDL_DOUBLEBUF. However, the implementation is highly target and
platform dependent. You'll find that on many targets, it's not
possible to get properly page flipped double buffering (with retrace
sync'ed flips, that is), because the drivers and/or related software
doesn't implement it.
Not much you can do about that, unless you are working with
custom/turnkey hardware (arcade machines or whatever), consoles or
something.
When you do this:
SDL_SetColorKey(dialogSurface, SDL_SRCCOLORKEY|SDL_RLEACCEL, color);
the memory representing the pixels is compressed into a series of runs of
non-transparent pixels, which take up less space and are faster to render.
The original pixel memory is freed.
When you lock an RLE surface, you are requesting access to the original pixels
that no longer exist. SDL_LockSurface() knows this and kindly regenerates the
original surface for you from the RLE version, fills in dialogSurface->pixels
with a pointer to this temporary surface, and lets you fiddle with the
pixels.
When you call SDL_UnlockSurface() you are indicating that you have finished
fiddling and SDL_UnlockSurface() RLE encodes the modified pixels, destroys
the temporary surface, and sets dialogSurface->pixels to NULL to prevent
accidents(!).
Some hardware-surface systems e.g. DirectX are capable of copying surface data
between system memory and video memory transparently using the same
Lock/Unlock trick.
Calling SDL_SetColorKey() or SDL_BlitSurface() while a surface is locked
(as you seem to) is a bad idea; see the docs.
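The safe pattern, sketched below for a 32 bpp surface (the single pixel write
is only an illustration):

if (SDL_MUSTLOCK(dialogSurface))
    SDL_LockSurface(dialogSurface);   /* decodes the RLE data for you */

Uint32 *pixels = (Uint32 *)dialogSurface->pixels;   /* assumes 32 bpp */
pixels[0] = SDL_MapRGB(dialogSurface->format, 255, 255, 255);

if (SDL_MUSTLOCK(dialogSurface))
    SDL_UnlockSurface(dialogSurface);  /* re-encodes; 'pixels' is now invalid */

/* Only now call SDL_SetColorKey() or SDL_BlitSurface() on the surface. */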
I'm not using OpenGL; I use HW surfaces and SW surfaces, and use double
buffering when using software. I have the SDL_Event structure in main, and
I use the SDL timer to keep the frame rate uniform.
After reading the example at
http://sdldoc.csn.ul.ie/sdlcreatecursor.php
and doing some other reading around the 'net, it is my understanding
that most systems have hardware acceleration for the mouse cursor
specifically, and that use of SDL's cursor routines comes highly
recommended.
They are, however, limited to white, black, transparent, and XOR as
colors.
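A minimal SDL_CreateCursor() sketch: just a solid 8x8 black block with the hot
spot at (0,0). A data bit of 1 with a mask bit of 1 gives black; data 0 with
mask 1 gives white; mask 0 is transparent (or inverted where supported, if
data is 1).

Uint8 data[8] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
Uint8 mask[8] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };

SDL_Cursor *cur = SDL_CreateCursor(data, mask, 8, 8, 0, 0);
SDL_SetCursor(cur);
/* ... and SDL_FreeCursor(cur) when you are done with it. */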
Preventing the start of the screensaver (handling of platform-dependent
messages): SystemParametersInfo()! It has a function to enable and
disable screensaving.
You shouldn't have to do anything. SDL handles SC_SCREENSAVE on its own;
you should never get a screensaver while an SDL application is running
(or at least while it has focus).
Two alternative techniques: update rects / double buffering.
HW OpenGL :
Best "solution" for Linux so far is checking
whether glGetString(GL_RENDERER) returns a
value equal to "Mesa GLX Indirect" (=software?)
Not sure if this is 100% correct, but it seems to be.
The only pain is that this GL call needs an active
OpenGL context in order to not just return NULL...
Anyone have any better ideas for X11 (or others)?
There's a function to do that directly: glXIsDirect().
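A sketch of the heuristic (the string comparison is exactly the check
described above, not a guaranteed test):

/* Needs an active OpenGL context, i.e. call this after
   SDL_SetVideoMode(..., SDL_OPENGL). */
const char *renderer = (const char *)glGetString(GL_RENDERER);
if (renderer && strcmp(renderer, "Mesa GLX Indirect") == 0) {
    fprintf(stderr, "Warning: indirect (probably software) OpenGL rendering\n");
}
/* On X11, glXIsDirect(dpy, ctx) answers the question directly, given the
   GLX display and context handles. */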
>I wanted to use glSDL for HW accelerated 2D blitting, but I saw
>> this note on the web page:
>>
>> NOTE: This branch of glSDL is no longer actively developed, but
>> there is a glSDL backend for SDL in the works. The glSDL backend
>> makes it possible for many SDL applications to use full hardware
>> acceleration, even without recompiling. This backend will hopefully
>> become part of SDL in the near future.
>>
>> Is it worth using glSDL, or should I wait until the glSDL backend
>> is available?
Well, the whole idea is to provide a portable accelerated backend for
SDL, so it shouldn't really matter much whether you are using a
"normal" SDL backend, glSDL/wrapper or the glSDL backend. On a fast
machine, you might be able to develop on the standard backends. If
you need the speed of OpenGL right away, just use glSDL/wrapper for
now.
However, note that glSDL/wrapper isn't 100% compatible with the SDL 2D
API. (For example, surface locking and updating is a bit flaky, since
glSDL/wrapper doesn't mark surfaces as h/w.) You should test your
code with glSDL disabled every now and then, to make sure it renders
correctly. If it does, it should render correctly with the real glSDL
backend as well.
//David Olofson
If you create a surface with SDL_HWSURFACE in Linux, then to get at the
pixels you have to lock the surface (SDL_LockSurface); the pixel buffer will
then be available in surface->pixels. When you are finished, unlock the
surface (SDL_UnlockSurface). Pixel operations are much slower on hardware
surfaces, so if you plan to do lots of them to the surface often, you will
be better off with software surfaces.
SDL_Surface *image = SDL_LoadBMP("picture.bmp");   /* hypothetical filename */
if (image == NULL)
{
    cout << "Error loading bmp: " << SDL_GetError() << endl;
}
http://www.linux-user.de/ausgabe/2001/06/042-3dbasic/glx_dri_s.png
Direct OpenGL rendering: "direct" means "bypasses the X server to send
OpenGL commands to the 3D hardware".
To get the X11 window manager resolution for fullscreen: the interface is in
SDL_syswm.h and an example of usage is at http://www.libsdl.org/projects/scrap :
void get_wm_property(t_pv *p)
{
    SDL_SysWMinfo info;

    SDL_VERSION(&info.version);   /* must be set before calling SDL_GetWMInfo() */
    if (SDL_GetWMInfo(&info) == 1) {
        p->wm_w = DisplayWidth(info.info.x11.display, 0);   /* screen 0 */
        p->wm_h = DisplayHeight(info.info.x11.display, 0);
    }
}
Add the following line of code just after "cdrom = SDL_CDOpen( 0 )"
SDL_CDStatus( cdrom );
I think, when you set the video mode in software mode, SDL returns you a
pointer to a secondary surface and not to the surface actually in video
memory. In this case, just call your clear_surface on the "screen" surface.
This doesn't alter the surface in video memory, only your secondary surface.
The changes are copied by SDL into video memory only when you call
SDL_UpdateRect. Thus, you only have 2 memory transfers and not 3. It will
save some bandwidth, improve your framerate and maybe reduce your visual
problem.
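In code, a typical software-mode frame looks something like this (a sketch;
the clear colour is arbitrary):

SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 0, 0, 0));  /* clear */
/* ... blit your sprites onto 'screen' (the shadow surface) ... */
SDL_UpdateRect(screen, 0, 0, 0, 0);   /* 0,0,0,0 = copy the whole screen */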
SDL_FULLSCREEN, wrong hertz: the reason is the same in X, if you tried to
change refresh rates: you cannot tell what the highest working refresh rate
is, since many pre-PnP monitors will simply be configured (by the user)
incorrectly; on old monitors, there's no way for the system to tell.
The first:
Update a variable that is used like this (well, something like this):
myObject.x += steps * timeScale; // I'm sure you know what I mean
The second:
Use a sleep method in the main loop.
(This method calculates how long the application should sleep, of course.)
I do use the first method, but the second one seems much simpler IMO.
Actually, all my gameplay related functions always take
an argument called "frametime" which is the number of milliseconds that
passed in the last frame.
The second one makes you lose accuracy, because SDL_Delay() is not
entirely accurate. You can do it in addition to the first method if you
don't want to use all the CPU time (which only makes sense for some
games; most of times, you shouldn't care about that).
I like a combination of the two. I use a pause to keep the program from
grabbing every available CPU cycle, but I use the actual time between
frames to update position.
Well, I'd say "not at all", but that's not quite accurate. What I mean
is that I prefer to run the game logic at a fixed, hardwired
"virtual" frame rate, to make sure long frames don't cause collision
detection to fail and stuff like that.
Indeed, this method causes interference between the logic frame rate
and the rendering frame rate (you need to resample all "coordinate
streams" from the logic frame rate to the rendering frame rate), but
some simple interpolation deals with that very nicely.
Of course, the most accurate method is to always use the delta time in
logic calculations, but that means you have to explicitly deal with
long frames and stuff, so you don't have players running through
walls when the rendering frame rate drops too low.
I keep track of the total time between frames and if it is less than 10
milliseconds I call SDL_Delay(5), which *averages* to a 5 millisecond
delay. This tends to keep the frame rate near 100 frames/second on fast
machines and lets the program run as fast as it can on slower machines.
>Is it possible to remove the title bar, when SDL is initialised in
>> windowed mode? If so, how can I do this?
Yes you can. A short look at SDL_video.h shows:
#define SDL_NOFRAME 0x00000020 /* No window caption or edge frame */
Just pass SDL_NOFRAME to SDL_SetVideoMode().
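For example (the resolution is an assumption):

SDL_Surface *screen = SDL_SetVideoMode(640, 480, 0, SDL_SWSURFACE | SDL_NOFRAME);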
Changing resolutions: prefer

/* resolution 1 */
SDL_Init(...);
scr1 = SDL_SetVideoMode(...);
/* resolution 2, no memory leak */
scr2 = SDL_SetVideoMode(...);
SDL_Quit();

to

/* resolution 1 */
SDL_Init(...);
scr1 = SDL_SetVideoMode(...);
SDL_Quit();
/* resolution 2 */
SDL_Init(...);
scr2 = SDL_SetVideoMode(...);
SDL_Quit();

The one thing to note is that any previous screen pointer will become invalid.
If you have information more detailed or more recent than what is presented in this document, or if you noticed errors, omissions or points insufficiently discussed, drop us a line!