
Re: Linux sound



Mads Bondo Dydensborg wrote:

> > After reading the .plan of zoid@idsoftware.com one time, we decided to
> > do it like Quake2 does it, in a single process, with calls to a
> > "process_sound" subroutine "sparkled" over the code (we didn't have to
> > sparkle very much, we call it at every end of video frame and it seems
> > to suffice). This process_sound subroutine uses an ioctl to check
> > whether /dev/dsp can take another buffer of sound data, and if so, mixes
> > and sends the data. If it doesn't have any free buffers, process_sound
> > returns without doing anything.
> 
> I actually considered this, -after- coding the thread, because I had
> serious problems getting pthreads to work with svgalib. (It turned out
> that this problem disappeared in glibc 2.1).

Sharing of the SIGUSR1/2 signals, eh? I saw that one coming from a mile
away! It annoys the hell out of me when *libraries* take over signals in
this manner!

> However, I have ended up going for the thread, mostly because of two
> things;
> - avoiding having to call "process_sound" - the sound is being processed
>   in the background by the OS.

Not avoiding this call is actually very easy: we put the process_sound()
call right after the call that flips the SVGA video page, so it gets
called *a lot of times* per second. :-)
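
In OSS terms, the check is nothing fancier than this (rough sketch only;
MIX_SAMPLES and mix_one_buffer() are made-up names, not our actual code):

#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>

#define MIX_SAMPLES 4096   /* one buffer of 16-bit samples (made-up size) */

void mix_one_buffer(short *out, int nsamples);  /* the mixer, sketched below */

/* Called once per video frame; returns right away if /dev/dsp is full. */
void process_sound(int dsp_fd)
{
    audio_buf_info info;
    short buffer[MIX_SAMPLES];

    /* Ask the OSS driver whether a whole output fragment is free. */
    if (ioctl(dsp_fd, SNDCTL_DSP_GETOSPACE, &info) == -1)
        return;
    if (info.fragments == 0)
        return;            /* no room, try again next frame */

    mix_one_buffer(buffer, MIX_SAMPLES);
    write(dsp_fd, buffer, sizeof(buffer));
}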

> - I know almost exactly how long it takes to enqueue a new sound.
>   (Meaning, a call to sound_play(sound) returns almost immediately.
>   If I had to write to a buffer, it could take a variable amount of
>   time. Not a big deal.)

Enqueueing a new sound takes very little time too, because an object
containing the playing parameters (sampling rate, balance, volume) and a
*pointer* to the sound data is added to an array of playing sounds. No
buffer writing when starting to play a sound.
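
So starting a sound boils down to something like this (again a sketch,
with made-up names, sizes and signature, not the real thing):

#define MAX_PLAYING 32

struct playing_sound {
    const short *data;   /* pointer to the sample data, *not* a copy */
    int length;          /* in samples */
    int position;        /* mixing cursor */
    int rate;            /* sampling rate */
    int volume;          /* 0..256 in this sketch */
    int balance;
};

static struct playing_sound playing[MAX_PLAYING];
static int num_playing = 0;

/* Returns almost immediately: all it does is fill in a struct. */
void sound_play(const short *data, int length, int rate, int volume, int balance)
{
    struct playing_sound *p;

    if (num_playing == MAX_PLAYING)
        return;          /* array full: drop it (or kick out an old one) */
    p = &playing[num_playing++];
    p->data = data;
    p->length = length;
    p->position = 0;
    p->rate = rate;
    p->volume = volume;
    p->balance = balance;
}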

The mixing is done a buffer at a time in process_sound(), so it runs in
bounded time: from almost nothing (if the sound device has no space for
a buffer) up to the time it takes to mix one buffer (which isn't very
long).
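
Continuing the sketch above, the mix itself is just a loop over that
array (mono and no resampling here, to keep it short; volume is assumed
to run 0..256):

/* Mix every playing sound into one output buffer.  Bounded by design:
   exactly one buffer's worth of work, never more. */
void mix_one_buffer(short *out, int nsamples)
{
    int i, s;

    for (s = 0; s < nsamples; s++)
        out[s] = 0;

    for (i = 0; i < num_playing; i++) {
        struct playing_sound *p = &playing[i];
        int n = p->length - p->position;
        if (n > nsamples)
            n = nsamples;
        for (s = 0; s < n; s++) {
            int v = out[s] + (p->data[p->position + s] * p->volume) / 256;
            if (v > 32767)  v = 32767;    /* clamp to the 16-bit range */
            if (v < -32768) v = -32768;
            out[s] = (short)v;
        }
        p->position += n;
    }
    /* Finished sounds would get swept out of the array here. */
}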

> It is almost the same as mine. There are benefits to controlling the
> scheduling yourself, and benefits to having the OS do it.

In my experience, letting the OS do it leads to bonehead scheduling. The
mixer loop, for example, has most of its loop counters in registers, the
code is small and fits into the code cache, and the data is taken in
small chunks, so it also fits into the data cache. Interrupt in the
middle of this and the registers are stowed to main memory (even though,
a few loop iterations later, they were simply going to be thrown away)
and both the code and data caches are crapped out (aaargghh!).

Instead, I wait until the *end* of the loop, where the loop counters are
thrown away without a single load or store to main memory, and the data
and code aren't needed anymore (so they can get out of the cache without
a problem).

Also, writing through a pipe involves at least two memory copies (still
not too bad) and a forced context switch (this is bad), and our resource
manager uses compressed resources that are uncompressed in memory (so
that's two sets of buffers, one for each process).

I would venture that using shared memory for communication would be a
big improvement, but SysV shmem simply freaks me out, and you can't have
an anonymous shared mmap() to do this (to my knowledge; if anybody knows
how, that would be great!).
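
(For the record, a minimal SysV version would go roughly like this;
untested sketch, error checks left out:)

#include <stdio.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* IPC_PRIVATE gives a fresh segment, no key collisions to worry about. */
    int id = shmget(IPC_PRIVATE, 8192, IPC_CREAT | 0600);
    char *shared = shmat(id, NULL, 0);

    /* Mark it for removal now; it only really goes away once both
       processes have detached. */
    shmctl(id, IPC_RMID, NULL);

    if (fork() == 0) {          /* child writes... */
        shared[0] = 42;
        _exit(0);
    }
    sleep(1);
    printf("%d\n", shared[0]);  /* ...parent sees the same memory */
    return 0;
}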

> One thing that needs to be considered in both cases is the size of the
> buffer, as this will influence lag etc. a lot. I have chosen 3 buffers
> of 512 bytes at 11kHz - this seems to work well (not on the SBLive,
> though).

We have two of them, at 8192 bytes each. We run the sound device at full
resolution (44.1kHz, 16-bit samples, stereo), so each buffer holds about
46 milliseconds of sound. We have a 10 millisecond timebase, so on
reasonable hardware, it makes sense. Mixing too far ahead gets you
lagged...
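
(Quick check on the numbers: 8192 bytes / (44100 samples/sec * 2 channels
* 2 bytes per sample) is about 46 milliseconds per buffer, so roughly 93
milliseconds with both buffers full.)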

What's your experience with the SBLive? We have one nearby, but couldn't
try the Linux version of our game with it...

> The thread sleeps 12 ms between each write to the buffers. (And it kinda
> sleeps in the write, I assume).

We set the /dev/dsp file descriptor to be async, but we're not sure this
actually does anything when there is sufficient space in the sound
device.
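
(Concretely, that's just the usual fcntl() dance; sketch only, and
assuming non-blocking is the flag that actually matters here:)

#include <fcntl.h>

/* Put the /dev/dsp fd in non-blocking mode so write() never waits
   for space in the device. */
void set_dsp_nonblocking(int dsp_fd)
{
    int flags = fcntl(dsp_fd, F_GETFL, 0);
    fcntl(dsp_fd, F_SETFL, flags | O_NONBLOCK);
}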

> As you, I am also wondering.

And I've been known to be wrong sometimes. :-)

-- 
Pierre Phaneuf
Ludus Design, http://ludusdesign.com/
"First they ignore you. Then they laugh at you.
Then they fight you. Then you win." -- Gandhi