Frequently Asked Questions
Welcome to our Frequently Asked Questions section! Our groundbreaking immersive audio augmented reality headphone system has been presented to over 1,000 amazed listeners at international conferences and seminars, and we have received numerous inquiries about our innovative technology. Here you will find comprehensive answers to the most commonly asked questions.
BLS was founded in 2019 in Ilmenau by Prof. Dr.-Ing. Karlheinz Brandenburg and is a spin-off of Fraunhofer IDMT and Technische Universität Ilmenau. We still work closely with both institutions and can therefore draw on years of research and experience in the audio field. As experts in audio technology, our main focus is the development of immersive audio products. Our start-up creates immersive audio experiences for headphones that are as intuitive as real life. Our long-term vision is to extend the limits of the human hearing experience by further developing the concept of PARty, “Personalized Auditory Reality”, which originated at Technische Universität Ilmenau and the Fraunhofer Institute for Digital Media Technology. In addition, we offer a range of consulting services in the audio field, such as project planning, and we are seeking licensees for our industrial property rights.
We have built a proof-of-concept demo system that allows users to experience our headphone technology. It simulates a real set of loudspeakers, letting people listen to virtual sound sources that remain stably anchored at fixed positions in the room. Simulating loudspeakers is just one example, however; any kind of virtual sound source could be rendered. The system has been demonstrated at multiple conferences worldwide to more than 1,000 amazed listeners.
The current demo algorithm is a parametric extrapolation algorithm that calculates Binaural Room Impulse Responses (BRIRs) in real time from a single omnidirectional Room Impulse Response (RIR). A basic geometric model of the room, as well as the positions of the sound sources and the measurement microphone, must be captured in advance. The user’s head pose is tracked live and used to calculate the direction of arrival (DoA) of the direct sound, while the early reflections are estimated with a simplified image source model. The RIR is processed in segments, which are convolved with the appropriate generic Head-Related Transfer Function (HRTF) filters. The late reverberation is emulated by a noise-shaping approach. The algorithm gives the user six degrees of freedom (6DoF) in rotation and translation.
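To illustrate the geometric part of this pipeline, here is a minimal sketch, not our production code: it assumes a rectangular (“shoebox”) room, and the function names are ours for illustration. A first-order image source model mirrors the source once at each of the six walls; each image source then yields a direction of arrival and a propagation delay for the corresponding early reflection.

```python
import numpy as np

def image_sources_first_order(src, room_dims):
    """Mirror a source at the six walls of a shoebox room (first-order
    image source model). Returns the real source plus six image sources,
    assuming the room spans [0, Lx] x [0, Ly] x [0, Lz]."""
    images = [np.asarray(src, float)]
    for axis, L in enumerate(room_dims):
        for wall in (0.0, L):
            img = np.asarray(src, float).copy()
            img[axis] = 2.0 * wall - img[axis]  # reflect across the wall plane
            images.append(img)
    return np.stack(images)

def doa_and_delay(images, listener, c=343.0):
    """Unit direction-of-arrival vectors (listener -> source) and
    propagation delays in seconds for each (image) source."""
    vecs = images - listener
    dists = np.linalg.norm(vecs, axis=1)
    return vecs / dists[:, None], dists / c
```

In the real algorithm each of these reflections selects a segment of the measured RIR, which is then convolved with the HRTF filter matching its direction of arrival.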
We perform an acoustic measurement with an omnidirectional microphone: a sine sweep is played over the loudspeaker and captured by the microphone, which yields the room impulse response containing the reflections of the room we are in. This room impulse response is then used by our algorithm. The measurement only takes a few minutes and is currently done for each room in which we set up the demo. We are already working on smarter ways to do this so that the whole setup can be performed by the user.
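The sweep measurement can be sketched in a few lines. The following toy example is our own illustration, not the actual measurement chain: it generates an exponential sine sweep, “plays” it through a simulated two-tap room, and recovers the impulse response by regularized FFT deconvolution.

```python
import numpy as np

fs = 8000                 # sample rate (Hz), kept low for the example
T = 1.0                   # sweep duration in seconds
t = np.arange(int(fs * T)) / fs

# Exponential (logarithmic) sine sweep from f1 to f2.
f1, f2 = 50.0, 3000.0
L = T / np.log(f2 / f1)
sweep = np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1.0))

# A toy "room": direct sound plus one reflection 10 ms later.
h_true = np.zeros(200)
h_true[0] = 1.0
h_true[int(0.010 * fs)] = 0.4

# What the microphone would record: sweep convolved with the room response.
recorded = np.convolve(sweep, h_true)

# Recover the impulse response by regularized spectral division.
n = len(recorded) + len(sweep)
X = np.fft.rfft(sweep, n)
Y = np.fft.rfft(recorded, n)
eps = 1e-8 * np.max(np.abs(X)) ** 2
h_est = np.fft.irfft(Y * np.conj(X) / (np.abs(X) ** 2 + eps), n)
# h_est now shows the direct sound at sample 0 and the echo at 10 ms.
```

In a real measurement the recorded signal comes from the microphone, and the exponential sweep has the convenient property that harmonic distortion of the loudspeaker ends up before the main peak after deconvolution, where it can be windowed away.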
We also measure the rough dimensions of the room. For the demo, we usually use two Genelec 8020D loudspeakers (or the smaller Genelec 8010A) and simulate them over the headphones; the algorithm takes the directivity of the speakers into account. We use Sennheiser HD 560S open-back headphones with a cable connection, but any comparable medium-priced headphones with these specifications would work.
The 6DoF head tracking we are using is part of the HTC VIVE system. It provides six-degrees-of-freedom tracking, so we obtain the user’s head rotation as well as translation approximately in real time, i.e., the user’s movements in the room. This is achieved with the help of static infrared emitters in the corners of the room: the tracker devices receive the emitted infrared light, which lets the system estimate position and orientation. This is the current solution for our research demo; we are already looking into options more suitable for future consumer products, to reduce the hardware requirements. The ideal tracking solution would combine low latency, high precision, and zero drift, and would work anywhere with little to no setup. Such a solution does not exist yet, but we can make certain trade-offs for certain use cases.
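As a sketch of how the tracked pose feeds the renderer, the world position of a virtual source can be converted into head-relative coordinates by undoing the tracked translation and rotation. The coordinate convention here (x forward, y left, z up, yaw about the vertical axis) is our own illustrative choice, not a specification of the tracking system.

```python
import numpy as np

def yaw_matrix(theta):
    """Head-to-world rotation about the vertical (z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def source_in_head_frame(source_pos, head_pos, head_rot):
    """Express a world-space source position in the listener's head frame:
    subtract the tracked position, then undo the tracked rotation."""
    offset = np.asarray(source_pos, float) - np.asarray(head_pos, float)
    return head_rot.T @ offset
```

For example, with the listener at the origin facing along +x and a source one metre ahead, turning the head 90° to the left moves the source to the listener’s right; the renderer then picks the HRTF filter for that head-relative direction.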
Object-based audio is ideal for our system, but channel-based audio works well, too. The demo music was chosen for licensing reasons, so we are able to play it for a public audience; the piano piece was created by our working student, Noel Toms.
So far, there seems to be no defined upper limit to room size. Our system has been successfully tested in very large rooms (25 m × 20 m × 10 m), in extremely dry rooms (with a reverberation time RT60 of 0.2 s), and in highly reverberant rooms (with an RT60 exceeding 1 s). One limitation is that we currently use measured impulse responses; while measuring these responses outdoors can be challenging, there are workarounds for such use cases.
In the coming weeks, more specialized prototype systems (similar to the demo system) will be sold or lent to B2B pilot customers for use in music recording studios and research. A first system is already installed at a research institution in Belgium. We are looking to expand into several more B2B applications such as professional education, virtual prototyping, and audio guides, and we will license our technology to select partners. Step by step, we will extend our feature set, performance, and integrations to allow widespread adoption of our solution.
After a successful introduction to the B2B market, we aim to address the consumer market. This requires the realization of a consumer-ready hardware solution, including stable head tracking and wireless connectivity. For this, we aim to find manufacturing and marketing partners.
We see use cases for our technology in music production, teleconferencing, virtual acoustic prototyping for industry, audio guides, AR/VR, and many more areas where high specialization and hardware costs are less problematic. Once the headphone technology goes into mass production, we expect it to be adopted first by the entertainment industries before spreading into general usage. At that stage, we expect broad adoption of the technology, making it the de-facto standard for audio playback. Learn more about the products and services we offer.
The vision for the future is PARty, where we are aiming to create personalized auditory realities (that is also where the acronym comes from). Just like glasses enhance vision in everyday life, the headphones will be able to individually improve human hearing. Disturbing sound is reduced, and what users want to hear is amplified. Unlike current noise-canceling headphones, PARty is able to intelligently and independently identify and regulate acoustic signals. Warning signals, such as the siren of an ambulance, reach the user at all times through the intelligent system in order to avoid dangerous situations. In addition, virtual sound sources, such as conversation partners on the telephone, blend naturally into the user’s listening environment and become realistically audible in space.
This realistic acoustic presence also makes it possible to hold conversations with several distant conversation partners. Thanks to the personal modification options, users are able to design their own listening environment according to their individual needs. With intelligent wearables, users will be able to create their personalized auditory reality. Such devices will be able to reduce interfering background noise and increase the volume of the sound source the user is currently focusing on.
Spatial hearing and binaural audio are not new concepts: early experiments in binaural synthesis were conducted more than 40 years ago, and it has been a research topic ever since. Currently, with the rise of VR, large companies have discovered spatial audio as an enhancement for entertainment. However, all commercially available systems suffer from the same limitations: insufficient inclusion of psychoacoustic insights and of the acoustic cues of the current listening room, and reliance solely on HRTF-based approaches. The combination of all the necessary features (spatial format, head tracking, headphone device, dynamic binauralization, the room-acoustic characteristics of the virtual and the real room, and more) is missing. Competitors do not take the holistic approach we are implementing in our technology and products. Our system is the first to create sound indistinguishable from reality, even in demanding scenarios, and this approach additionally makes our technology computationally efficient.