
Google releases Resonance Audio SDK for VR, AR and 360 Video

07 November 2017

On a smartphone, not a lot of resources are allocated for audio.

To help video creators improve the sound of augmented reality (AR), virtual reality (VR), and 360º videos played on desktops and mobile devices, Google has released the Resonance Audio software development kit (SDK).

The new SDK is based on Google's VR Audio SDK, and Resonance Audio can be used in AR, VR, and 360-degree video applications on both mobile and desktop.

The SDK uses highly optimized digital signal processing algorithms based on higher-order Ambisonics. This cross-platform support means developers won't have to worry about translating audio across the wide variety of headsets on the market.
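
For readers unfamiliar with Ambisonics: the technique represents a full-sphere soundfield as a set of spherical-harmonic channels. The snippet below is an illustrative first-order encoder only, not the SDK's internals; Resonance Audio uses higher orders and heavily optimized DSP.

```typescript
// Illustrative only: first-order Ambisonic (B-format) encoding of a mono
// sample, the simplest case of the higher-order Ambisonics that Resonance
// Audio builds on.

/** Encode one mono sample into W/X/Y/Z channels for a source at the
 *  given azimuth/elevation (radians), using the traditional B-format
 *  convention (W scaled by 1/sqrt(2)). */
function encodeFirstOrder(
  sample: number,
  azimuth: number,
  elevation: number
): { w: number; x: number; y: number; z: number } {
  const cosEl = Math.cos(elevation);
  return {
    w: sample * Math.SQRT1_2,              // omnidirectional component
    x: sample * Math.cos(azimuth) * cosEl, // front/back
    y: sample * Math.sin(azimuth) * cosEl, // left/right
    z: sample * Math.sin(elevation),       // up/down
  };
}

// A source 90 degrees to the left, at ear level:
console.log(encodeFirstOrder(1.0, Math.PI / 2, 0));
// -> { w: 0.707..., x: ~0, y: 1, z: 0 }
```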


Resonance Audio comes with cross-platform SDKs for the most popular game engines, audio engines, and digital audio workstations (DAWs), streamlining workflows so that it integrates seamlessly with audio middleware and sound design tools. "The SDKs run on Android, iOS, Windows, MacOS and Linux platforms and provide integrations for Unity, Unreal Engine, FMOD, Wwise and DAWs". There are native APIs for C/C++, Java, Objective-C and the web. Resonance Audio allows users to spatialize large numbers of sounds without compromising audio quality.
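
The web API in particular is compact. The sketch below follows the Web SDK's published quick-start pattern; the audio file name is a placeholder, and the ResonanceAudio global is assumed to come from the SDK's script build.

```typescript
// Minimal sketch of the Resonance Audio Web SDK with the Web Audio API.
// Assumes the SDK script has been loaded (e.g. the `resonance-audio`
// package or its CDN build), which exposes a ResonanceAudio class.
declare const ResonanceAudio: any; // provided by the SDK script

const audioContext = new AudioContext();

// Route the spatializer's binaural output to the speakers/headphones.
const resonance = new ResonanceAudio(audioContext);
resonance.output.connect(audioContext.destination);

// Create a spatialized source and place it 1 m to the listener's left.
const source = resonance.createSource();
source.setPosition(-1, 0, 0);

// Feed any Web Audio node into the source, e.g. an <audio> element.
const element = new Audio('guitar.mp3'); // placeholder asset path
const elementSource = audioContext.createMediaElementSource(element);
elementSource.connect(source.input);
element.play();
```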

Google claims that Resonance Audio is far more powerful than traditional 3D spatialization because it controls not only where a sound comes from but also how it spreads out from its point of origin, letting developers shape the direction of acoustic waves to create more realistic sounds. The example cited in a blog post by Product Manager Eric Mauskopf is a guitar player: standing behind the player, the guitar sounds quieter than standing in front of it, and it sounds louder when the viewer faces it than when his or her back is turned.
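
The directivity behaviour Mauskopf describes is commonly modelled as a blend of omnidirectional and dipole patterns raised to a sharpness exponent; Resonance Audio exposes parameters along these lines, though exact names vary by platform. A hedged sketch of the idea:

```typescript
// Illustrative only: a common directivity model (alpha = 0 fully
// omnidirectional, alpha = 1 figure-of-eight; sharpness narrows the
// forward lobe). Not necessarily the SDK's exact formula.

/** Gain heard at `angle` radians off the source's forward axis. */
function directivityGain(
  angle: number,
  alpha: number,    // 0..1: blend of omni and dipole patterns
  sharpness: number // >= 1: higher values focus sound forward
): number {
  const pattern = Math.abs((1 - alpha) + alpha * Math.cos(angle));
  return Math.pow(pattern, sharpness);
}

// A cardioid-like guitar: full level in front, silent directly behind.
console.log(directivityGain(0, 0.5, 1));       // facing the guitar -> 1
console.log(directivityGain(Math.PI, 0.5, 1)); // behind it -> 0
```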

Another SDK feature is the automatic rendering of near-field effects when sound sources get close to a listener's head, providing an accurate perception of distance even when sources are close to the ear. Google has also released an Ambisonic recording tool for spatially capturing sound design directly within Unity, saving it to a file, and using it anywhere Ambisonic soundfield playback is supported, from game engines to YouTube videos. "A good spatial audio experience can pull you in and fully immerse you in ways you didn't expect; it's a very cool thing to experience firsthand," the post reads, inviting feedback through GitHub and asking developers to share what they build with #ResonanceAudio on social media.
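
As a rough intuition for the distance side of this (the SDK's actual near-field renderer is more sophisticated, adding frequency-dependent and interaural effects), a minimal distance-gain model looks like this:

```typescript
// Illustrative only: inverse-distance rolloff with a minimum-distance
// clamp, the kind of baseline model that near-field rendering refines.

/** Gain for a source `distance` metres away, clamped so sources
 *  closer than `minDistance` don't produce unbounded gain. */
function distanceGain(distance: number, minDistance = 0.25): number {
  return minDistance / Math.max(distance, minDistance);
}

console.log(distanceGain(4));    // far away -> 0.0625
console.log(distanceGain(0.25)); // at the near-field boundary -> 1
console.log(distanceGain(0.05)); // clamped inside the near field -> 1
```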
