The platform is billed as an end-to-end universal positional audio solution built with the flexibility to support object-based formats, open standards, and legacy content across mobile, PC and other consumer electronics devices.
It comprises components — available independently or as part of a complete solution — that span content creation through optimized audio playback.
Components and capabilities include:
- Content Creation: Content creation plugins that produce immersive audio in ambisonics and object-based formats and integrate into industry-standard audio design tools.
- MPEG-H Encoding and Decoding: Support for encoding, decoding, and transporting MPEG-H audio, a standard for next-generation TV broadcasts and streaming video.
- Rendering Engine: Spatializes ambisonic, object-based, and legacy channel-based content over headphones and speakers across consumer devices.
- Tuning and Device Optimization: Measures and calibrates audio playback.
- Personalization: Delivers personalized audio profiles using Head-Related Transfer Functions (HRTFs) said to be optimized for each listener's unique hearing.
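The rendering and personalization capabilities above rest on binaural rendering: convolving a sound source with a pair of head-related impulse responses (HRIRs, the time-domain form of an HRTF) so the listener perceives it at a position in space. The following is a minimal, generic sketch of that idea using synthetic placeholder HRIRs — not THX's implementation, whose filters, formats, and APIs are proprietary:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolving it with left/right
    head-related impulse responses. Returns an (N, 2) stereo array."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

# Illustrative placeholder HRIRs: a direct tap plus a delayed,
# attenuated tap. Real HRIRs are measured (or personalized) per
# listener and per source direction.
hrir_l = np.array([1.0, 0.0, 0.3, 0.0])
hrir_r = np.array([0.6, 0.0, 0.0, 0.2])

# 10 ms of a 440 Hz tone at 48 kHz.
signal = np.sin(2 * np.pi * 440 * np.arange(480) / 48000)
stereo = render_binaural(signal, hrir_l, hrir_r)
print(stereo.shape)  # (483, 2)
```

Personalization, as described above, amounts to selecting or deriving the HRIR pair that best matches an individual listener rather than using a generic average head.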
At MWC, THX and Qualcomm will jointly demonstrate the platform's capabilities, including the encoding, decoding, transport, and rendering of content using the MPEG-H audio standard. The demo will show how content creators, broadcasters, and streaming media companies can use the THX Spatial Audio platform to deliver immersive audio experiences on mobile devices.