[ProAudio] Mastering and monitoring for spatial audio
cheater00social at gmail.com
Thu Apr 15 19:00:42 PDT 2021
Not with that exactly, but I was recently involved in quite a bit of
development on accurate head tracking using an inertial measurement
unit (IMU), so if you need help with the head tracking, do let me
know.
If you want "just" 5.1 surround, I would suggest a head-mounted IMU
(possibly a VR headset, but you can get trackers that mount on the
head alone), a Windows PC running SteamVR, and an app written in the
Unity engine. This is the easiest way forward.
I understand Apple Spatial Audio is limited to the AirPods Pro. There
is a Unity engine plugin for reading the IMU data off the AirPods; an
IMU is how they track head movement.
This means you can build a program that runs a 3D scene containing
some sound sources, which then get mixed in-engine using Unity's
stereo downmixer, and you get the output you want.
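For intuition, here is a minimal sketch (in Python, not Unity code) of
what the engine's spatializer is doing for you: rotating world-space
sources into the listener's frame by the tracked head yaw, then
panning each source. The function name and the constant-power pan law
are my illustrative choices, not anything Unity-specific.

```python
import math

def spatialize(sources, head_yaw_rad):
    """Rotate world-space sources into the listener's frame and
    compute a constant-power stereo pan for each one.

    sources: list of (x, y) world positions; the listener sits at the
    origin, +y is "forward" at zero yaw, and positive yaw is a
    counterclockwise (leftward) head turn.
    Returns a (left_gain, right_gain) pair per source.
    """
    gains = []
    c, s = math.cos(-head_yaw_rad), math.sin(-head_yaw_rad)
    for x, y in sources:
        # undo the head rotation so the azimuth is head-relative
        xr, yr = c * x - s * y, s * x + c * y
        azimuth = math.atan2(xr, yr)           # 0 = dead ahead, + = right
        pan = max(-1.0, min(1.0, azimuth / (math.pi / 2)))
        theta = (pan + 1.0) * math.pi / 4      # constant-power pan law
        gains.append((math.cos(theta), math.sin(theta)))
    return gains
```

A source dead ahead gets equal gains in both ears; turn your head
toward a source on your right and its gains equalize the same way.
In Unity you never write this by hand, since the spatializer does the
frame rotation and panning (or full HRTF filtering) for you.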
This is nearly at the level of a "first project in Unity", so you
should be able to find someone to do this fairly easily.
Bear in mind a few things from my experience using IMUs:
1. most IMU hardware sucks. The IMU concept is not inherently flawed,
but IMU hardware has many very subtle issues that can be corrected for
but rarely are:
a) numerical bias - it'll tend to drift in a specific direction
b) (partial) gimbal lock - sometimes turning in a specific direction
will not give you as much movement as you'd expect
c) momentum losses - if you turn your head right, down, left, and up,
you don't end up at the same position.
d) calibration data sucks - the calibration procedure is often
inadequate to fully characterize the sensor; the sensor has drifted
since calibration; the data is not retrievable due to lack of
documentation; or the documentation doesn't tell you how exactly to
interpret it
2. most software that decodes IMU hardware sucks:
a) not taking into consideration the issues above (huge problem),
especially numerical bias, momentum losses, and calibration
b) incorrect algorithms and bad linear algebra
c) bad filtering of incoming data stream
d) bad interpretation of the calibration data causes subtle issues
e) lag - the decoding is delayed compared to incoming data
f) bad or missing center-seeking algorithm
g) lack of additional calibration on top of what the calibration data
suggests - it's often necessary and almost never done
One additional issue here: with VR, your eyes see the landscape around
you and will *tell you* that you are drifting, so you'll know that
e.g. the sounds are in the wrong location, but you can still make
sense of it in relation to what you're seeing. Without that visual cue
it's quite a problem: you might just hear the sounds morph or shift
around for no apparent reason. Ambient music is a good choice here,
because stuff slowly shifting around isn't going to be so offensive.
Listening to a rock band like that would be pretty terrible, though.
On Fri, Apr 16, 2021 at 1:53 AM David Josephson via ProAudio
<proaudio at bach.pgm.com> wrote:
> Do any of you have current experience managing a workflow from e.g. a 5.1 surround recording to reproduction with head tracking in Apple spatial audio? I have a client who would like to record and play back ambient soundscapes where listeners can move their head within the soundscape, at least in rotation.
> David Josephson
> ProAudio mailing list
> ProAudio at bach.pgm.com