# implementations

JW edited this page Sep 25, 2016 · 2 revisions
🚨 Hard to implement server side

- 3D sound:
  - three.js
  - PlayCanvas (speaker output only; test code available)
  - Babylon.js (also supports HRTF for headphones)
- HRTF-supporting libraries (npm modules):
  - Implementing Binaural (HRTF) Panner Node with Web Audio API (tutorial with code example)
  - 3D positioning with Web Audio API
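As a rough illustration of what these HRTF libraries compute: a hypothetical sketch (not taken from any of the libraries above) of the two simplest binaural cues, interaural time difference and interaural level difference. Real HRTF panners convolve the signal with measured head-related impulse responses; this only shows the underlying idea. The function names and constants here are my own.

```javascript
// Hypothetical sketch: crude binaural cue approximation (ITD + ILD).
// Real HRTF panners use measured impulse responses per direction.

var SPEED_OF_SOUND = 343;  // m/s
var HEAD_RADIUS = 0.0875;  // m, roughly an average human head

// Interaural time difference (Woodworth approximation), azimuth in radians
function itdSeconds(azimuth) {
  return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth + Math.sin(azimuth));
}

// Equal-power interaural level difference: per-ear gains for an azimuth
// in [-PI/2 (full left), +PI/2 (full right)]
function earGains(azimuth) {
  var pan = azimuth / (Math.PI / 2);    // map to [-1, 1]
  var angle = (pan + 1) * Math.PI / 4;  // map to [0, PI/2]
  return { left: Math.cos(angle), right: Math.sin(angle) };
}

var g = earGains(0); // source straight ahead
console.log(g.left.toFixed(3), g.right.toFixed(3)); // 0.707 0.707
```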
⚡ Motivation: process incoming coordinates and route the audio back to multiple sound output streams/devices.

Only a few features of the Web Audio API are available for Node.js.

📃 Source: npm module `node-web-audio-api`

What's implemented:

- AudioContext (partially)
- AudioParam (almost there)
- AudioBufferSourceNode
- ScriptProcessorNode
- GainNode
- OscillatorNode (coming soon)
- DelayNode (coming soon)
```js
var API = require('web-audio-api')
  , AudioContext = API.AudioContext
  //, AudioListener = API.AudioListener // !!! is missing!
  , fs = require('fs')
  , context = new AudioContext()
  , Speaker = require('speaker')

console.log('encoding format: ' + context.format.numberOfChannels + ' channels; '
  + context.format.bitDepth + ' bits; ' + context.sampleRate + ' Hz')

context.outStream = new Speaker({
  channels: context.format.numberOfChannels,
  bitDepth: context.format.bitDepth,
  sampleRate: context.sampleRate
})

fs.readFile(__dirname + '/../resources/audio/rain.wav', function (err, buffer) {
  if (err) throw err
  context.decodeAudioData(buffer, function (audioBuffer) {
    var soundSource = context.createBufferSource()
    soundSource.connect(context.destination)
    soundSource.buffer = audioBuffer
    soundSource.loop = true
    soundSource.start(0)
  })
  context.listener.setPosition(20, -5, 0) // not available in the Node web-audio-api module
})
```
🚨 Dead end: the `AudioListener` class is not implemented, so it is not possible to change the position of a sound server-side via the Web Audio API.

🚨 Dead end (similar to the above) for the HRTF libraries listed earlier:

- they are all built for use in the browser
- they need the browser's `window` object
- they would ultimately rely on the Web Audio API anyway
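A possible partial workaround, given that `GainNode` *is* implemented: compute the distance attenuation ourselves from the listener and source coordinates, and drive a plain `GainNode` with the result. This is a hypothetical sketch (the `distanceGain` helper is my own, not part of the module); it emulates only the positional *volume* cue of the inverse-distance model from the Web Audio spec, not true 3D panning.

```javascript
// Hypothetical workaround sketch: emulate distance attenuation without
// AudioListener, using only nodes the Node web-audio-api module implements.

// Inverse-distance model (as in the Web Audio spec's 'inverse' distance model)
function distanceGain(listener, source, refDistance, rolloff) {
  var dx = source.x - listener.x;
  var dy = source.y - listener.y;
  var dz = source.z - listener.z;
  var d = Math.max(Math.sqrt(dx * dx + dy * dy + dz * dz), refDistance);
  return refDistance / (refDistance + rolloff * (d - refDistance));
}

// Usage idea (assumes the context and soundSource from the example above):
//   var gainNode = context.createGain()
//   soundSource.connect(gainNode)
//   gainNode.connect(context.destination)
//   gainNode.gain.value = distanceGain({x: 20, y: -5, z: 0},
//                                      {x: 0, y: 0, z: 0}, 1, 1)

console.log(distanceGain({x: 0, y: 0, z: 0}, {x: 0, y: 0, z: 0}, 1, 1)); // 1
```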
Using OpenAL

- an OpenAL add-on in which we can specify which of multiple devices to use
- code: https://github.com/jefftimesten/node-openal

From `openal.cc`:
```cpp
// Enumerate OpenAL devices
if (alcIsExtensionPresent(NULL, (const ALCchar *) "ALC_ENUMERATION_EXT") == AL_TRUE)
{
    const char *s = (const char *) alcGetString(NULL, ALC_DEVICE_SPECIFIER);
    while (*s != '\0')
    {
        cout << "OpenAL available device: " << s << endl;
        while (*s++ != '\0');
    }
    // ...
}
```
According to a tip on Stack Overflow, change `ALC_ENUMERATION_EXT` to `ALC_ENUMERATE_ALL_EXT` and `ALC_DEVICE_SPECIFIER` to `ALC_ALL_DEVICES_SPECIFIER`. Thus we get a list of **all** available devices!
🚨 BUG: Mac OS X has an OpenAL bug due to the old OpenAL version used by CoreAudio.

Differentiating between devices happens in `NodeOpenALDevice.cpp`:
```cpp
// ...
NodeOpenALDevice::NodeOpenALDevice() {
    // differentiate between different devices here,
    // e.g. device = alcOpenDevice((const ALCchar *) "Built-in Output\0");
    device = alcOpenDevice(NULL);
    if (device == NULL) {
        std::cout << "cannot open sound card" << std::endl;
        return;
    }
}
// ...
```