Realtime audio mixing/DSP on Android & iOS: Kha? OpenFL? Snowkit? Heaps?

Hi!

Does anyone have experience with realtime audio mixing on native mobile targets?
The typical scenario is loading/decompressing 5-10 MP3 files and mixing them together in realtime during playback.
DSP processing like pitch shifting/time stretching might also be within scope.

Any ideas and experiences are welcome!

/ Jonas


@pinkboi might know something.

My experience is focused on web usage, but last time I checked, OpenFL did not support a sample callback for pushing data to the audio buffer. Not sure how Kha handles sound, but its sound API seemed quite basic.
Heaps would probably be your best bet, and it seems like it should be easy to add your own DSP.

For the web it is more difficult. Heaps can use script nodes in WebAudio to emulate OpenAL, so in theory it should be great, but script nodes will cause memory leaks in many browsers. So unless you can stick to the existing WebAudio processing nodes, building serious audio applications is very problematic, and will probably remain so until AudioWorkers are available.
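For reference, a “script node” here means a ScriptProcessorNode: WebAudio’s escape hatch for custom DSP, where you fill the output buffer yourself in a JavaScript callback. A minimal sketch of the idea (plain WebAudio in TypeScript, not Heaps’ actual code):

```typescript
// Minimal ScriptProcessorNode sketch: the browser calls onaudioprocess
// periodically and we fill the output buffer with our own samples.
const ctx = new AudioContext();
const node = ctx.createScriptProcessor(1024, 0, 2); // bufferSize, inputs, outputs

let phase = 0;
node.onaudioprocess = (e: AudioProcessingEvent) => {
  const left = e.outputBuffer.getChannelData(0);
  const right = e.outputBuffer.getChannelData(1);
  for (let i = 0; i < left.length; i++) {
    // Custom DSP goes here; a quiet 440 Hz tone as a placeholder.
    const sample = Math.sin(phase) * 0.1;
    phase += (2 * Math.PI * 440) / ctx.sampleRate;
    left[i] = sample;
    right[i] = sample;
  }
};

node.connect(ctx.destination);
```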

Thank you Leo for the valuable insights!

Web is not a target here, so that’s not a problem.
I contacted Rob, and Kha seems to be a viable option.

On closer inspection, the Heaps sound API looks promising too, with its dedicated snd.effects package for example…
A big part of this is finding a complete, well-working mobile target toolchain - lots of work ahead…! :slight_smile:

Andrei Rudenko has things going on with Clay, built on Kha, that look promising:

I haven’t noticed any leaks so far, even though Kha uses WebAudio script nodes by default for everything on desktop browsers (on mobile browsers it uses WebAudio the regular way because of speed problems on some old phones). Where can I learn more about those leaks?

This is for Chrome, but I observed it in other browsers as well:
https://bugs.chromium.org/p/chromium/issues/detail?id=576484

Since you cannot really reuse nodes, it is very problematic, and I found no way to work around it.

Hm, the only relevant problem regarding script nodes in there seems to be Chromium issue 379753 - I don’t see how any of that would cause problems with realtime mixing though: when you mix things yourself, you only ever need a single script node, which is exactly what Kha does. I think we’re talking about different things, and now I’m not sure I was on the same page with what Jonas had in mind either. I was talking about doing all audio processing programmatically.
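To illustrate what I mean by mixing programmatically through a single script node - this is only a sketch of the idea, not Kha’s actual code; the Voice structure and the play() helper are made up for illustration:

```typescript
// One ScriptProcessorNode does all the mixing; triggering a sound never
// creates another WebAudio node, it just adds a voice to a list.
interface Voice {
  data: Float32Array; // decoded mono samples
  position: number;
  volume: number;
}

const ctx = new AudioContext();
const voices: Voice[] = [];
const mixer = ctx.createScriptProcessor(1024, 0, 1);

mixer.onaudioprocess = (e: AudioProcessingEvent) => {
  const out = e.outputBuffer.getChannelData(0);
  out.fill(0);
  for (const v of voices) {
    // Sum each active voice into the output buffer.
    for (let i = 0; i < out.length && v.position < v.data.length; i++) {
      out[i] += v.data[v.position++] * v.volume;
    }
  }
  // Drop voices that finished playing; the node graph never changes.
  for (let i = voices.length - 1; i >= 0; i--) {
    if (voices[i].position >= voices[i].data.length) voices.splice(i, 1);
  }
};
mixer.connect(ctx.destination);

// Triggering a new sound is just pushing a voice, not adding a graph node.
function play(samples: Float32Array, volume = 1.0): void {
  voices.push({ data: samples, position: 0, volume });
}
```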

I would imagine that one is not just mixing one long or looped sound, but has to trigger new sounds every now and then. In a game that would certainly be the typical scenario anyway.

Not sure if any fixes have landed in Chrome since I last tested, but at least with how it was handled in Heaps, memory usage increased for each sound you played, resulting in our games grinding to a halt on mobile devices after playing for some time. The same happened with very basic WebAudio examples.

To clarify, I understand what you mean: just creating a script node and having it running is not an issue, say for processing on a bus that is permanently connected.
However, to enable processing of individual sounds you cannot have a persistent AudioBufferSourceNode that you keep retriggering, hence you cannot have a permanent script node attached to it either.
And a process like the time stretching that was mentioned cannot be done as part of the mix graph, because it is tied directly to the sound rather than to the signal path.
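In other words, the pattern I have in mind looks roughly like this (my own illustration, not Heaps’ actual code): every trigger needs a fresh AudioBufferSourceNode, and any per-sound script node has to be recreated along with it.

```typescript
// Per-sound graph pattern: an AudioBufferSourceNode can only be start()ed
// once, so every trigger builds a new source node plus its own script node.
const ctx = new AudioContext();

function playWithEffect(buffer: AudioBuffer): void {
  const source = ctx.createBufferSource(); // one-shot, cannot be restarted
  source.buffer = buffer;

  const effect = ctx.createScriptProcessor(1024, 1, 1);
  effect.onaudioprocess = (e: AudioProcessingEvent) => {
    const input = e.inputBuffer.getChannelData(0);
    const output = e.outputBuffer.getChannelData(0);
    for (let i = 0; i < input.length; i++) {
      output[i] = input[i] * 0.5; // placeholder for per-sound processing
    }
  };

  source.connect(effect);
  effect.connect(ctx.destination);
  source.onended = () => {
    source.disconnect();
    effect.disconnect();
  };
  source.start();
}
```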

Well, I didn’t know that’s not possible so I did it anyway. Here’s the code for that: Kha/Audio1.hx at master · Kode/Kha · GitHub

I’m not sure I follow what it is you do in Kha that contradicts what I’m saying; I would have to see the WebAudio implementation to understand what is happening with the graph.

But I’m very curious to figure out whether I got anything wrong. Are you saying that you can reuse the AudioBufferSourceNodes? If not, how does that work without recreating script nodes as well?

I will definitely dig deeper into the sound implementation in Kha anyway, it seems interesting!

I do not use any AudioBufferSourceNodes. I have no use for them. The backend code is here: Kha/Audio.hx at master · Kode/Kha · GitHub

Ah, sorry for being slow, but now I understand what you mean! That is a nice solution!

Since this has been an issue plaguing Unity, Heaps, Howler and most other engines I have looked at, I must say it is impressive that, despite being unaware of the issue, you made the only solution that does not suffer from it :wink:

Nice to see the evolution of the thread…! :slight_smile:

Regarding ScriptProcessorNodes, AudioWorklets are supposed to replace them, aren’t they?

Thanks Leo!

AudioWorklets - yes. They run directly in the system’s audio thread (or at least that’s the intention), so latency is reduced and it won’t stutter just because something else is slow. It’s a bit cumbersome though, because it works like a WebWorker. I did not get around to adding support yet, but I’m looking forward to finally giving my cross-platform WebWorker/native thread macro magic something to do.
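For anyone curious, the shape of the AudioWorklet setup looks roughly like this - a hedged sketch, not Kha’s implementation; the “mixer” name and the message protocol are made up:

```typescript
// mixer-worklet.ts - runs in the AudioWorkletGlobalScope on the audio
// rendering thread (loaded as a separate module, like a worker script).
class MixerProcessor extends AudioWorkletProcessor {
  process(
    _inputs: Float32Array[][],
    outputs: Float32Array[][],
    _parameters: Record<string, Float32Array>
  ): boolean {
    const out = outputs[0][0];
    for (let i = 0; i < out.length; i++) {
      out[i] = 0; // mix samples here
    }
    return true; // keep the processor alive
  }
}
registerProcessor("mixer", MixerProcessor);

// main.ts - the worker-like part: the module is loaded separately, and
// communication with it goes through a MessagePort, much like a WebWorker.
async function setup(): Promise<void> {
  const ctx = new AudioContext();
  await ctx.audioWorklet.addModule("mixer-worklet.js");
  const node = new AudioWorkletNode(ctx, "mixer");
  node.connect(ctx.destination);
  node.port.postMessage({ cmd: "play", id: 0 }); // hypothetical message protocol
}
```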

Sorry, I didn’t notice I was tagged here. I made something that does give you a realtime callback. In the JS/HTML environment it uses a ScriptProcessorNode, but on hxcpp it uses PortAudio, so you get a truly realtime environment. It works on top of Heaps as well, but the latency isn’t as good, since Heaps either uses OpenAL directly or adds its own abstraction on top of OpenAL.
