
Re: Linux sound



On Tue, 15 Jun 1999, Pierre Phaneuf wrote:

> > I actually considered this, -after- coding the thread, because I had
> > serious problems getting pthreads to work with svgalib. (It turned out
> > that this problem disappeared in glibc 2.1).
> 
> Sharing of the SIGUSR1/2 signals, eh? I saw that coming a mile away!
> Annoys the hell out of me when *libraries* take over signals in this
> manner!

Yes, you are right. Actually I did know that svgalib uses SIGUSR{1,2}, but
I did not know that pthreads did. (In versions earlier than the one
bundled with glibc 2.1.)

> 
> > However, I have ended up going for the thread, mostly because of two
> > things:
> > - avoiding the call to "process_sound" - the sound is being processed
> >   in the background by the OS.
> 
> Making this call is actually very easy. We put the process_sound()
> call right after the call that flips the SVGA video page, so it is
> called *a lot of times* per second. :-)

Yes, it is no big deal.

> 
> > - I know almost exactly how long it takes to enqueue a new sound.
> >   (Meaning, a call to sound_play(sound) returns almost immediately.
> >   If I had to write to a buffer, it could take a variable amount of
> >   time. Not a big deal.)
> 
> Enqueueing a new sound takes very little time too, because an object
> containing the playing parameters (sampling rate, balance, volume) and a
> *pointer* to the sound data is added to an array of playing sounds. No
> buffer writing when starting to play a sound.
> 
> The mixing is done a buffer at a time in process_sound(), so it runs in
> bounded time (from almost no time, if the sound device has no space for
> a buffer, to the time it takes to mix one buffer, which is not very
> long).

I know the time is bounded. I just gave you the only two reasons I had :-)
(neither of them was any good. :-)
> 
> > It is almost the same as mine. There are benefits to controlling the
> > scheduling yourself, and benefits to having the OS do it.
> 
> In my experience, letting the OS do it leads to bonehead scheduling. The
> mixer loop for example, has most of the loop counters in registers, the
> code is small and fits into the code cache, data is taken in small
> chunks, so it also fits into the data cache. Interrupt in the middle of
> this: registers are stowed to main memory (consider that in a few loop
> iterations, they were simply going to be thrown away) and both code and
> data cache are crapped out (aaargghh!).

I had not considered this. You are right. The game I am using it for is
not really in need of CPU cycles. If it were, my approach would thrash
the registers and cache.

> Also, writing through a pipe involves at least two memory copies (still
> not too bad) and a forced context switch (this is bad). Our resource
> manager uses compressed resources that are uncompressed in memory (two
> sets of buffers, one for each process).
> 
> I would venture that using shared memory for communication would be a
> big improvement, but SysV shmem simply freaks me out, and you can't have
> an anonymous shared mmap() to do this (to my knowledge; if anybody knows
> how, that would be great!).

By using the thread, I actually use shared memory. It is like your (and
others', I guess) setup, where an int is enqueued (actually a bit more, if
a looping sound is requested); I simply use the thread for scheduling. I
had not given any thought to the fact that this means the cache and
registers are destroyed. (The sound mixing code is hugely inefficient
anyway - my lack of knowledge of C ensures that ... :-)

> 
> > One thing that needs to be considered in both cases is the size of the
> > buffer, as this will greatly influence lag etc. I have chosen 3 buffers
> > of 512 bytes at 11 kHz - this seems to work well. (Not on the SBLive,
> > though.)
> 
> We have two of them, at 8192 bytes each. We run the sound device at full
> resolution (44.1 kHz, 16-bit samples, stereo), which comes to about 47
> milliseconds of sound. We have a 10 millisecond timebase, so on
> reasonable hardware it makes sense. Mixing too far ahead gets you
> lagged...

Yes. I mix approx 3 * 22 ms ahead, but mostly I am only approx. 2 * 22 ms
ahead.
> 
> What's your experience with the SBlive? We have one nearby, but couldn't
> try the Linux version of our game with it...

When I tried to allocate 3 buffers of 512 bytes each, it would only give
me 1 buffer of 512 bytes. When I tried to play into this, the sound lagged
way behind. I have written Creative Labs about this, but have yet to hear
from them.

> 
> > The thread sleeps 12 ms between each write to the buffers. (And it kinda
> > sleeps in the write, I assume).
> 
> We set the /dev/dsp file descriptor to be async, but we're not sure this
> actually does anything when there is sufficient space in the sound
> device.

What difference should it make if you set it to async? (I probably should
be able to figure this out myself, but it kinda confuses me.)

Mads

-- 
Mads Bondo Dydensborg.                               madsdyd@challenge.dk
Just because a program takes text commands makes it complex? I love GUI's. I
love using the web. I love WYSIWYG word processors. But I also love CLIs. It
feels more natural to me, as if I'm talking with the computer (granted, the
language isn't english, it's bash, and the vocabulary happens to be whatever's
in my PATH)--I tell it what to do and it does it for me (unlike GUI's where I
have to do everything my own damn self). 
                                      - fassler, in response to MS France FUD