[ProAudio] Stereo/localization

David Josephson dlj at josephson.com
Sun Feb 2 12:01:26 EST 2020


I sent the link to Dr Choueiri’s work because he’s taken a recent slice of the problem and understands it. There is no magic solution, especially when you have a four-port system (two input and two output channels) trying to handle a multi-port world (effectively infinite input channels, given where sound sources can be located). Incorporating a head-related transfer function helps with the stated problem, but it should really characterize the path from each sound source (including each reflection) and work in the time domain as well as the frequency domain. Most HRTFs do neither.
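
To make the per-source, time-domain point concrete, here is a minimal sketch in Python (an illustration only, not a description of Dr Choueiri's method): each path, the direct sound and each reflection, is convolved with its own head-related impulse response pair, and the results are summed for the two ears. The HRIRs and signals below are placeholders; real ones would come from a measured HRIR set and a reflection model.

import numpy as np
from scipy.signal import fftconvolve

def render_binaural(paths):
    # paths: list of (signal, hrir_left, hrir_right), all 1-D arrays at the
    # same sample rate; one entry for the direct sound and one per reflection.
    length = max(len(sig) + len(hl) - 1 for sig, hl, _ in paths)
    out = np.zeros((2, length))
    for sig, hrir_l, hrir_r in paths:
        left = fftconvolve(sig, hrir_l)
        right = fftconvolve(sig, hrir_r)
        out[0, :len(left)] += left
        out[1, :len(right)] += right
    return out

# Dummy data: one direct path and one attenuated reflection arriving ~20 ms later.
fs = 48000
dry = np.random.randn(fs)                    # placeholder source signal
hrir_l = np.zeros(256); hrir_l[10] = 1.0     # placeholder impulse responses
hrir_r = np.zeros(256); hrir_r[30] = 0.8
reflection = np.concatenate([np.zeros(960), 0.5 * dry])
binaural = render_binaural([(dry, hrir_l, hrir_r), (reflection, hrir_r, hrir_l)])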

There are thousands of papers on this topic. We know that our ability to localize sounds is driven mostly by interaural time differences, interaural level differences, and learned expectations of what things should sound like.
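
For a sense of scale on the time-difference cue, here is a back-of-the-envelope calculation using the Woodworth spherical-head approximation, ITD ~ (a/c)(theta + sin theta). The head radius and the choice of azimuths are assumptions made for the illustration, nothing more.

import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    # Woodworth approximation for a rigid spherical head, far-field source.
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

for az in (0, 15, 45, 90):
    print(f"azimuth {az:2d} deg -> ITD about {itd_seconds(az) * 1e6:.0f} microseconds")
# Prints roughly 0, 133, 381 and 656 microseconds: the sub-millisecond
# delays the hearing system works with.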

Of course you all know that my approach to this starts with the microphones that serve as extensions of our hearing into the original performance space. I don’t have a lot of time for processes that take an existing 2-, 3-, or 4-channel sound file and try to give it a sense of solid dimension (stereo) for the listener.

Dolby has taken a different approach with Atmos, using sound objects placed in the listener’s hearing field according to recipes tuned for the playback environment. NASA has worked for decades on a similar idea, placing aural warning signals in the acoustic environment to support a pilot’s situational awareness. I think there’s a lot more promise in those directions than in anything that tries to turn a recording intended for loudspeaker playback into one that works well for headphone listening.
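
Just to illustrate the object idea (this is not Dolby’s renderer; the layout, positions, and signals are invented for the example): each object carries a signal plus position metadata, and a renderer computes per-speaker gains for whatever playback layout is present. A trivial tangent-law stereo pan stands in for a real panner here.

import math
import numpy as np

def stereo_pan_gains(azimuth_deg, speaker_angle_deg=30.0):
    # Tangent-law amplitude panning between two speakers at +/- speaker_angle_deg.
    ratio = math.tan(math.radians(azimuth_deg)) / math.tan(math.radians(speaker_angle_deg))
    g_left, g_right = (1 + ratio) / 2, (1 - ratio) / 2
    norm = math.hypot(g_left, g_right)
    return g_left / norm, g_right / norm

def render_objects(objects, n_samples):
    # objects: list of (signal, azimuth_deg); returns a (2, n_samples) stereo mix.
    mix = np.zeros((2, n_samples))
    for sig, az in objects:
        g_l, g_r = stereo_pan_gains(az)
        n = min(len(sig), n_samples)
        mix[0, :n] += g_l * sig[:n]
        mix[1, :n] += g_r * sig[:n]
    return mix

# Two invented objects: one panned slightly left, one hard right.
fs = 48000
voice = np.random.randn(fs)
effect = np.random.randn(fs // 2)
mix = render_objects([(voice, 10.0), (effect, -30.0)], fs)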

David Josephson

