Google wants to make sure you can hear your enemies creeping up on you in virtual reality with its new Resonance Audio SDK.

Google is taking steps to improve the audio experience in VR with a new SDK for application and game developers. The Resonance Audio SDK aims to improve both the quality of spatial audio and how easily it can be implemented, with the goal of creating a more immersive experience for users.

The new SDK aims to significantly reduce the computational cost and latency of processing spatial audio in VR, since system resources are already heavily dedicated to visual processing. It also uses Ambisonic techniques to accurately position hundreds of audio sources without degrading audio quality, even on smartphones.
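To give a rough sense of why Ambisonics scales so well, the sketch below shows first-order encoding in TypeScript, assuming an ambiX-style (ACN/SN3D) channel layout; Resonance Audio itself supports higher orders and its internals may differ. The point is that every source folds into the same small, fixed set of channels, so the expensive binaural decode happens once no matter how many sources are playing.

```typescript
// First-order Ambisonic channels in ACN order: [W, Y, Z, X]
// (an assumed ambiX-style layout for illustration only).
type FirstOrderAmbisonic = [number, number, number, number];

function encodeFirstOrder(sample: number, azimuth: number, elevation: number): FirstOrderAmbisonic {
  const w = sample;                                           // omnidirectional component
  const y = sample * Math.sin(azimuth) * Math.cos(elevation); // left/right axis
  const z = sample * Math.sin(elevation);                     // up/down axis
  const x = sample * Math.cos(azimuth) * Math.cos(elevation); // front/back axis
  return [w, y, z, x];
}

// Mixing N sources is just N cheap encodes summed into the same 4 channels;
// the binaural rendering of the mix is done once, regardless of source count.
function mixSources(sources: { sample: number; azimuth: number; elevation: number }[]): FirstOrderAmbisonic {
  const mix: FirstOrderAmbisonic = [0, 0, 0, 0];
  for (const s of sources) {
    const enc = encodeFirstOrder(s.sample, s.azimuth, s.elevation);
    for (let i = 0; i < 4; i++) mix[i] += enc[i];
  }
  return mix;
}
```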

In practice, this lets developers control the direction in which sound propagates from a source. For example, if you are standing behind a piano player in the virtual world, the piano will sound quieter than if you were standing directly in front of it, and turning to face it or turning your back to it also changes what you hear. The SDK can also simulate how sound spreads out, bounces off objects, and is blocked by obstacles, depending on the VR environment.
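As a rough illustration of the piano example, here is a generic cardioid-family directivity model of the kind many spatial audio engines expose; it is not Resonance Audio's exact formula or API, just a sketch of how gain can depend on the angle between a source's facing direction and the listener.

```typescript
// Illustrative cardioid-family directivity model (generic, not Resonance Audio's internals).
// alpha = 0 gives an omnidirectional source, alpha = 1 a figure-of-eight;
// higher sharpness narrows the pattern.
function directivityGain(thetaRadians: number, alpha: number, sharpness: number): number {
  const pattern = Math.abs((1 - alpha) + alpha * Math.cos(thetaRadians));
  return Math.pow(pattern, sharpness);
}

// Listener directly in front of the "piano" (theta = 0) vs. directly behind it (theta = PI):
console.log(directivityGain(0, 0.5, 1));        // 1.0 -> full level in front
console.log(directivityGain(Math.PI, 0.5, 1));  // 0.0 -> strongly attenuated behind
```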

The SDK also excels at rendering near-field effects when a source comes close to the listener's head. This gives an accurate perception of distance and further boosts immersion, which Google says should significantly improve the VR experience.
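For the distance side of that, the toy model below shows one common ingredient: an inverse-distance rolloff clamped at an assumed minimum distance so gain stays bounded as a source reaches the head. Resonance Audio's actual near-field rendering goes further, shaping spectral and inter-aural cues, which this sketch does not attempt.

```typescript
// Toy distance model for illustration only: gain falls off as 1/distance,
// clamped at a hypothetical minimum distance so a source touching the head
// does not produce unbounded gain.
const MIN_DISTANCE_METERS = 0.1; // assumed clamp value

function distanceGain(distanceMeters: number): number {
  const d = Math.max(distanceMeters, MIN_DISTANCE_METERS);
  return 1 / d; // unity gain at 1 m, louder closer in, quieter farther away
}

console.log(distanceGain(2.0));  // 0.5 -> twice as far as 1 m, half the gain
console.log(distanceGain(0.05)); // 10  -> clamped at 0.1 m
```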

Resonance Audio works with Unreal Engine, FMOD, Wwise, Unity and a number of popular digital audio workstations (DAWs) such as Ableton Live. You can find out more over at the Resonance Audio GitHub page.
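To give a flavour of what integration looks like, the sketch below is based on the web implementation hosted in the same GitHub organization; it follows the general shape of the published getting-started flow, but treat the specific values, asset names and coordinate comments as assumptions rather than canonical documentation.

```typescript
// Assumes the Resonance Audio Web SDK's UMD build has been loaded via a
// <script> tag, exposing a global `ResonanceAudio` constructor.
declare const ResonanceAudio: any;

const audioContext = new AudioContext();

// Create a Resonance Audio scene and route its binaural output to the speakers.
const scene = new ResonanceAudio(audioContext);
scene.output.connect(audioContext.destination);

// Describe a simple room so reflections and reverb match the virtual space
// (material names follow the SDK's own examples).
scene.setRoomProperties(
  { width: 4, height: 3, depth: 5 }, // dimensions in meters
  {
    left: 'brick-bare', right: 'curtain-heavy',
    front: 'marble', back: 'glass-thin',
    down: 'grass', up: 'transparent',
  }
);

// Feed an ordinary <audio> element into the scene as a positioned source.
const audioElement = document.createElement('audio');
audioElement.src = 'piano.wav'; // placeholder asset name
const elementSource = audioContext.createMediaElementSource(audioElement);

const source = scene.createSource();
elementSource.connect(source.input);
source.setPosition(-0.7, 0, -0.7); // place the source off to one side (exact axes per the SDK's coordinate convention)

audioElement.play();
```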

What do you guys think? Do you use VR or own a VR headset? Do you think software initiatives like this will help improve the VR experience as a whole? Let us know in the comments, or post over on the forums with your thoughts.

Via XDA