Re: sound header
At 22:43 31/03/99 -0500, Derek wrote:
>For the final time, I said I agree! :-) OSS/ALSA comes first, and later
>down the road, say version 2.0 or 3.0, heck, even later on, low level drivers
>ought to be implemented. I really do think we could do a more efficient
>job of it. But getting a mature API is more of a priority... my first doc
>said OSS and ALSA were not the priority and, after research and reading
>comments, I changed it.
I'm glad you agree with the OSS/ALSA part so we get something going in
the short term (I'm talking a month or 2 here). Hopefully the rather
blunt comments from others have made you realize that rewriting low
level drivers is unrealistic (-:
If the existing OSS/ALSA drivers are too inefficient or don't provide
enough features, we should be hassling the OSS or ALSA developers to
improve them, or contribute to their development, not reinvent our own.
>> Playing MOD, XM, S3M etc _IS_ wavetable synthesis! (-:
>You'd be surprised, I've seen some alternative implementations...very
>interesting! It's been a long time though, mind you I was writing all that
>stuff on about umm well no sleep, so my thought processes were not going
Well, playing a MOD, S3M etc. is basically mixing the samples contained in
the file in realtime, altering their pitch, volume & panning according to
the pattern data. Do some research if you don't understand how they work;
they are a very important part of games, and almost the only format used in
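To make that concrete, here's a minimal sketch in C of a tracker-style inner mixing loop - every name here is illustrative, not taken from any real player. Each voice steps through its sample at a pitch-dependent fixed-point rate and accumulates into an interleaved stereo buffer, with panning baked into the two per-channel volumes:

```c
#include <stdint.h>

/* Illustrative sketch of MOD-style software mixing; not from a real player. */
typedef struct {
    const int8_t *data;   /* 8-bit mono sample data */
    int length;           /* sample length in frames */
    uint32_t pos;         /* 16.16 fixed-point play position */
    uint32_t step;        /* 16.16 fixed-point step; >1.0 raises the pitch */
    int vol_l, vol_r;     /* 0..64 per-channel volume, panning baked in */
} Voice;

/* Accumulate one voice into an interleaved stereo 16-bit buffer. */
void mix_voice(Voice *v, int16_t *out, int frames)
{
    for (int i = 0; i < frames; i++) {
        int idx = (int)(v->pos >> 16);
        if (idx >= v->length)
            break;                      /* one-shot sample finished */
        int s = v->data[idx];           /* -128..127 */
        out[2 * i]     += (int16_t)(s * v->vol_l);  /* left */
        out[2 * i + 1] += (int16_t)(s * v->vol_r);  /* right */
        v->pos += v->step;              /* advance at the pitched rate */
    }
}
```

A real player also handles sample loops and the per-row effects from the pattern data, but the core of the job really is just this loop run once per voice per output block.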
>By mixer, I meant the code that takes the sounds and mixes them together.
>:) I meant, code that takes the sound's point of origin in 3D space,
>factors in everything, then mixes it accordingly.
OK, so this 'mixed' part is then passed to the mixer that combines it
with the other 3d sounds that have been 'mixed'? <g>
>>Either way, the API should be the same - the game defines 'where' the
>>sounds are, and penguinsound
>>locates them using whatever system is available.
>My definition of surround sound requires >2 speakers, sorry for confusion.
>To me "surround sound" with only 2 speakers is just vanilla 3d positional
>audio...it's not surrounding you is it? ;-)
Sorry, I should have been clearer. You can output dolby encoded surround
sound with a standard stereo sound card, but you require a surround sound
decoder attached to your sound system to decode the 2 audio channels into
the left, centre, right & surround speaker outputs. Virtually all videos,
and most stereo TV broadcasts are encoded this way. If you don't have a
surround decoder, it still works well with a stereo (2 speaker) system.
Note this is quite a different matter from a soundcard that has 4 or more
separate speaker outputs. This would require quite separate encoding, and
would require driver-level support for the extra hardware channels.
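For the curious, matrix surround encoding folds four channels into two roughly like this. It's a simplified sketch: real Dolby Surround also band-limits the surround channel and phase-shifts it by +/-90 degrees rather than using plain opposite signs, so don't treat these coefficients as the official spec:

```c
/* Simplified matrix-surround encode: left/centre/right/surround in,
 * two-channel Lt/Rt out, playable as-is on any stereo system. */
void matrix_encode(float l, float c, float r, float s,
                   float *lt, float *rt)
{
    const float g = 0.7071f;      /* -3 dB for centre and surround */
    *lt = l + g * c + g * s;
    *rt = r + g * c - g * s;
}
```

A passive decoder recovers the centre from the sum of the two channels and the surround from their difference, which is why it degrades so gracefully to plain stereo when no decoder is present.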
>>btw, I can contribute to the dolby & 3d sound stuff, as well as the audio
>>codec for the
>>voice communication stuff, as well as generic sound codec handling.
>Hey, I take help where it can be taken from. Right now, I'm trying to
>learn about the OSS stuff, I'm going to try to compile some stuff later and
>test it out that I've already done. I really wish I was back in the DOS
>days, the docs made so much more sense when they said put this value in
>that port. ;-) OSS has ioctl calls, and I've only used ioctl once many many
>moons ago for CD Audio (it was interesting...) so it's a weird thing to
OSS is quite easy to use, once you figure out the quirks. Using it for
realtime applications is a bit trickier though. Basically you have to
reduce the fragment size to something close to your audio update rate.
Buffer underruns are a problem. My experiments just used a loop that
waits till there is only one fragment playing, then calculates the next
fragment - but we need something a bit more elegant than this <g>
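Here's roughly what I mean. The helper below is real code; the ioctl usage is sketched in a comment so the snippet compiles anywhere without <sys/soundcard.h>. SNDCTL_DSP_SETFRAGMENT and SNDCTL_DSP_GETOSPACE are actual OSS ioctls, while render_next_fragment is a hypothetical callback standing in for your mixer:

```c
/* SNDCTL_DSP_SETFRAGMENT takes the fragment count in the high 16 bits
 * and log2 of the fragment size in bytes in the low 16 bits. */
unsigned int oss_fragment_arg(unsigned int num_frags, unsigned int frag_bytes)
{
    unsigned int log2_size = 0;
    while ((1u << log2_size) < frag_bytes)
        log2_size++;
    return (num_frags << 16) | log2_size;
}

/* Intended use, sketched (needs <sys/soundcard.h> and an open /dev/dsp fd):
 *
 *   unsigned int arg = oss_fragment_arg(4, 512);   // 4 x 512-byte fragments
 *   ioctl(fd, SNDCTL_DSP_SETFRAGMENT, &arg);       // must come before writes
 *
 *   audio_buf_info info;
 *   for (;;) {
 *       ioctl(fd, SNDCTL_DSP_GETOSPACE, &info);
 *       if (info.fragments > 0) {                  // room for a fragment
 *           render_next_fragment(buf);             // hypothetical mixer call
 *           write(fd, buf, 512);
 *       }
 *   }
 */
```

Busy-waiting on GETOSPACE like my loop did is exactly the inelegant part; select() on the dsp fd would be the obvious improvement.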
btw, IRC meets - why don't you just use the channel #gamedev - I'm sure
that's what the last meets used, and it's always empty. Personally I'd
prefer IRCnet, but make up your own mind.