Does anyone have experience with realtime audio mixing on native mobile targets?
The typical scenario is loading/decompressing 5-10 mp3 files and playing them back, mixed in real time.
DSP processing such as pitch shifting/time stretching might also be within scope.
My experience is focused on web usage, but the last time I checked, OpenFL did not support a sample callback for pushing data to the audio buffer. I'm not sure how Kha handles sound, but its sound API seemed quite basic.
Heaps would probably be your best bet, and it seems like it should be easy to add your own DSP.
For web it is more difficult. Heaps can use script nodes in WebAudio to emulate OpenAL, so in theory it should be great, but script nodes cause memory leaks in many browsers. So unless you can stick to the existing WebAudio processing nodes, serious audio applications are very problematic, and will probably remain so until AudioWorkers are available.
I haven’t noticed any leaks so far, even though Kha uses WebAudio script nodes by default for everything on desktop browsers (on mobile browsers it uses WebAudio the regular way because of speed problems on some older phones). Where can I learn more about those leaks?
Hm, the only relevant problem regarding script nodes there seems to be Chromium issue 379753, but I don’t see how any of that would cause problems with realtime mixing. When you mix things yourself, you only ever need a single script node, which is exactly what Kha does. I think we’re talking about different things, and now I’m not sure I was on the same page with what Jonas had in mind either. I was talking about doing all audio processing programmatically.
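To make the single-script-node idea concrete, here is a minimal sketch of mixing all currently playing sounds in one permanent callback. The names (Voice, Mixer, mix) are illustrative, not Kha’s actual API, and the buffers are assumed to be mono Float32Arrays of already-decoded samples:

```javascript
// A Voice is one currently playing sound: decoded samples plus a play cursor.
class Voice {
  constructor(samples) {     // samples: Float32Array of decoded mono audio
    this.samples = samples;
    this.position = 0;
    this.done = false;
  }
}

// The Mixer sums every active voice into the output block each callback,
// so one script node suffices no matter how many sounds are triggered.
class Mixer {
  constructor() { this.voices = []; }

  play(samples) { this.voices.push(new Voice(samples)); }

  // Called once per audio block; `out` is the block to fill.
  mix(out) {
    out.fill(0);
    for (const v of this.voices) {
      const n = Math.min(out.length, v.samples.length - v.position);
      for (let i = 0; i < n; i++) out[i] += v.samples[v.position + i];
      v.position += n;
      if (v.position >= v.samples.length) v.done = true;
    }
    this.voices = this.voices.filter(v => !v.done); // drop finished voices
  }
}

// In a browser this would be driven by a single, permanently connected node:
// const node = ctx.createScriptProcessor(1024, 0, 1);
// node.onaudioprocess = e => mixer.mix(e.outputBuffer.getChannelData(0));
// node.connect(ctx.destination);
```

Because the script node is created once and never torn down, retriggering sounds only means pushing new voices into the array, which sidesteps the per-sound node churn discussed below.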
I would imagine that one is not just mixing one long or looped sound, but has to trigger new sounds every now and then. In a game that would certainly be the typical scenario.
Not sure if any fixes have landed in Chrome since I last tested, but at least with how it was handled in Heaps, memory usage increased for each sound played, resulting in our games grinding to a halt on mobiles after playing for some time. The same happened with very basic WebAudio examples.
To clarify: I understand that just creating a script node and having it running is not an issue, say, having processing on a bus that is permanently connected.
However, to process individual sounds you cannot have a constant AudioBufferSourceNode that you keep retriggering, and hence you cannot have a permanent script node attached to it either.
And a process like the timestretching that was mentioned cannot be done as part of the mix graph; it is tied directly to the sound rather than to the signal path.
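The constraint being described comes from AudioBufferSourceNodes being one-shot: start() can only be called once per node, so every retrigger requires constructing a fresh source node (and would require a fresh per-sound script node along with it). A small illustrative helper, with a hypothetical name:

```javascript
// AudioBufferSourceNodes are fire-and-forget: calling start() a second time
// throws, so retriggering a sound means creating a new source node each time.
// `trigger` is a hypothetical helper, not an API from any of the engines
// discussed here.
function trigger(ctx, buffer, destination) {
  const src = ctx.createBufferSource(); // fresh node for every playback
  src.buffer = buffer;
  src.connect(destination);
  src.start();
  return src; // caller may keep it around to call stop() early
}
```

Any per-sound processing node attached alongside the source would have to be created and destroyed at the same rate, which is exactly where the leak-prone churn comes from.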
Ah, sorry for being slow, but now I understand what you mean! That is a nice solution!
Since this has been an issue plaguing Unity, Heaps, Howler, and most other engines I looked at, I must say it is impressive that, despite being unaware of the issue, you made the only solution that doesn’t suffer from it.
AudioWorklets - yes. They run directly in the system’s audio thread (or at least that’s the intention), so latency is reduced and it won’t stutter just because something else is slow. It’s a bit cumbersome, though, because it works like a WebWorker. I haven’t gotten around to adding support yet, but I’m looking forward to finally giving my cross-platform WebWorker/native-thread macro magic something to do.
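A rough sketch of the worklet shape being described: the processor class lives in its own module loaded into the audio thread, much like a WebWorker script, while the main thread only creates a node that refers to it. Names here (MixerProcessor, renderBlock, the 'mixer' registration key) are made up for illustration:

```javascript
// mixer-worklet.js -- loaded into the AudioWorkletGlobalScope, which runs
// on the audio rendering thread.

// Pure per-block render function (here just a sine tone as a stand-in for
// a real mixer), kept separate so it is testable outside the audio thread.
function renderBlock(out, phase, freq, sampleRate) {
  for (let i = 0; i < out.length; i++) {
    out[i] = Math.sin(phase);
    phase += 2 * Math.PI * freq / sampleRate;
  }
  return phase; // carry phase across blocks
}

// AudioWorkletProcessor, registerProcessor and sampleRate only exist inside
// the worklet scope, so guard the registration when run elsewhere.
if (typeof AudioWorkletProcessor !== 'undefined') {
  class MixerProcessor extends AudioWorkletProcessor {
    constructor() { super(); this.phase = 0; }
    process(inputs, outputs) {
      const out = outputs[0][0];
      this.phase = renderBlock(out, this.phase, 440, sampleRate);
      return true; // keep the node alive
    }
  }
  registerProcessor('mixer', MixerProcessor);
}

// Main-thread side, WebWorker-style: load the module, then create the node.
// await ctx.audioWorklet.addModule('mixer-worklet.js');
// new AudioWorkletNode(ctx, 'mixer').connect(ctx.destination);
```

The two-file, message-passing structure is the cumbersome part mentioned above, but it is also what lets the callback run on the audio thread instead of the main thread.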
Sorry, I didn’t notice I was tagged here. I made something that gives you a realtime callback. In the js/html environment it uses a ScriptProcessorNode, but on hxcpp it uses PortAudio, so you get a truly realtime environment. It works on top of Heaps as well, but the latency isn’t as good, since Heaps either uses OpenAL directly or adds its own abstraction on top of OpenAL.