Steam Audio

TyounanMOTI Feb 23, 2017 @ 7:05pm
Ambisonics Audio Clip playback in Unity
There are no instructions for playing back an Ambisonics AudioClip. Can we play back Ambisonics AudioClips with Steam Audio in Unity?
Showing 1-11 of 11 comments
lakulish Feb 24, 2017 @ 10:28am 
This feature is not currently available in the Unity integration, although it should be possible to do this via the C API.
PET Mar 8, 2017 @ 2:44am 
Hello there,

I think I also want something like this... I have a WAV with 4 channels... all I want is Unity to play that WAV with 4 channels.

Is this currently not supported?
Will it ever be supported?
ludzeller Mar 20, 2017 @ 2:40pm 
I would like to ping this.

I need to decode first-order B-format in Unity for playback in sync with 360-degree video textures on a sphere, viewed with a Vive.

I think this would be worth a tutorial if somebody has the time... Since YouTube supports this format now, I guess it might become much more important.

Alternatively: dear Phonon/Steam team, could you please post a snippet that shows how to do this in C# in Unity with the C API of Phonon?

Thanks!
sander May 19, 2017 @ 11:33pm 
Originally posted by lakulish:
This feature is not currently available in the Unity integration, although it should be possible to do this via the C API.

Any development here? Is it possible to import and play ambisonics audio clips (for HRTF-based binaural rendering) in Unreal Engine now?
Slin May 20, 2017 @ 4:54am 
All you need is the ambisonics data you want to play; feed it to Steam Audio piece by piece, one audio frame at a time, as an IPLAudioBuffer (which can take interleaved or deinterleaved audio data; you will have to set its format according to your ambisonics data).
All that is left to do is decode the ambisonics data into the format of your choice using either the ambisonics binaural effect or the ambisonics panning effect. The output buffer's channels can then simply be mixed into your output channels.
The tricky part could be figuring out where to put it, but maybe you already know that (I have almost no experience with Unreal Engine or Unity...).
Last edited by Slin; May 20, 2017 @ 4:55am
sander May 20, 2017 @ 8:33am 
Originally posted by Slin:
All you need is the ambisonics data you want to play and feed it to Steam Audio piece by piece per audio frame as a IPLAudioBuffer (which can take interleaved and deinterleaved audio data and you will have to set its format according to you ambisonics data).
All that is left to do, is decoding the ambisonics data into the format of your choice using either the ambisonics binaural effect or the ambisonics panning effect. The output buffers channels can then just be mixed with your output channels.
The tricky part could be figuring out where to put it, but maybe you already know that (I got almost no experience with Unreal Engine or Unity...).

Thanks for the response!
The case is: I want to use an ambisonics audio recording as atmos in a simple VR scene where the "player" is standing completely still (and can only look around), so I guess the "camera" would be the place to put it(?).
I'm the sound designer on this project, and I've got very little experience with game audio, as I usually work with film (and 360-degree video) and TV.
The programmers/Unreal Engine guys have not worked with spatial audio before, so we could all use some guidance to get this working.
Any tutorial, tutorial video or guide-for-dummies we could take a look at?
All help is much appreciated:)
Slin May 20, 2017 @ 8:48am 
Ambisonics are encoded to provide the correct audio for one position; as such, an ambisonics sound source cannot really have a position. Unless Unreal supports playing ambisonics directly (or its Steam Audio integration does; check out the docs!), your programmers will have to implement it, which, as I tried to explain above, should be quite easy using Steam Audio. How they expose it for you to use is up to them.
sander May 20, 2017 @ 9:52am 
Originally posted by Slin:
Ambisonics are encoded to provide the correct audio for one position, as such the ambisonics sound source can not really have a position. Unless Unreal directly (or using it's steam audio implementation, maybe it does, check out the docs!) supports playing ambisonics, your programmers will have to implement it, which as I tried to explain above should be quite easy using Steam Audio. How they implement it for you to use is up to them.

Thanks for replying again!

Ambisonics are encoded to provide the correct audio for one position; as such, an ambisonics sound source cannot really have a position.

I don't understand this..
Yes, the ambisonics audio is recorded at a static position, just like the "player" in this super-simple game will also be stationary.
The only thing the "player" can do is move his/her head and look around.
I can see the problem if the player should be able to walk around, but here the player is standing still.
It would also be a problem if we were talking about binaural audio recordings, but we are talking about ambisonics (which hopefully will be decoded to binaural using HRTF-based decoding and head tracking, if Steam Audio/UE can do this).

Unless Unreal supports playing ambisonics directly (or its Steam Audio integration does; check out the docs!), your programmers will have to implement it.

I have checked the docs for Steam Audio but found very little about its support for ambisonics and binaural decoding of this format. I would appreciate it if someone from Valve would chime in here!
I understand that my programmers would have to implement it, but I'm trying to help by checking whether what we are trying to do is possible using Steam Audio.
Slin May 20, 2017 @ 10:12am 
That's what I tried to explain in my first answer. The core parts I mention there are part of Steam Audio. You'll need to feed your ambisonics data into an IPLAudioBuffer, create an ambisonics binaural effect using iplCreateAmbisonicsBinauralEffect (which requires a binaural renderer and Steam Audio to be initialized), apply it to the buffer using iplApplyAmbisonicsBinauralEffect, and mix the output buffer into your general audio pipeline.
I think that to have the head orientation affect the output, you'll have to rotate the ambisonics buffer using iplRotateAmbisonicsAudioBuffer. This is all part of the C API.
sander May 20, 2017 @ 10:38am 
Originally posted by Slin:
That's what I tried to explain in my first answer. The core parts I mention there are part of Steam Audio. You'll need to feed your ambisonics data into an IPLAudioBuffer, create an ambisonics binaural effect using iplCreateAmbisonicsBinauralEffect (which requires a binaural renderer and Steam Audio to be initialized), apply it to the buffer using iplApplyAmbisonicsBinauralEffect, and mix the output buffer into your general audio pipeline.
I think that to have the head orientation affect the output, you'll have to rotate the ambisonics buffer using iplRotateAmbisonicsAudioBuffer. This is all part of the C API.

Ok, thanks again!
wrenchse Oct 17, 2017 @ 12:01pm 
Originally posted by lakulish:
This feature is not currently available in the Unity integration, although it should be possible to do this via the C API.

Any news regarding adding this to the Unity integration?