Location

Brandenburg Labs GmbH
Ehrenbergstraße 11, 98693 Ilmenau

Contact us

Phone: +49 (0)3677-8749075 / +49 (0)3677-668190
General inquiries:
Business inquiries:

Press-related inquiries

Phone: +49 (0)3677-8749075
Email:

Follow us

Frequently Asked Questions

Welcome to our Frequently Asked Questions section! Our groundbreaking immersive audio and augmented reality headphone system has been presented to over 1000 amazed listeners (both audio professionals and enthusiasts alike) at international conferences and seminars, and we’ve received numerous inquiries about our innovative technology. Here, you’ll find comprehensive answers to the most commonly asked questions. 

Brandenburg Labs (BLS) was founded in 2019 in Ilmenau by Prof. Dr.-Ing. Karlheinz Brandenburg. As a spin-off of Fraunhofer IDMT and Technische Universität Ilmenau, we continue to collaborate closely with both institutions, building on decades of groundbreaking audio research. As experts in the field of audio technology, our main focus is developing immersive audio products: headphone listening experiences that are as intuitive as real life.

In September 2024, we launched our first product designed for audio professionals, Okeanos Pro, a powerful headphone-based system that simulates multi-channel loudspeaker environments with remarkable realism, surpassing the quality and precision of existing solutions on the market. Okeanos Pro is the first product released in the Okeanos Systems lineup, with more products already in development.

Looking ahead, our long-term vision is to push the boundaries of the human listening experience. We want to do this by further developing the concept of PARty, “Personalized Auditory Reality”, which was developed at Technische Universität Ilmenau and the Fraunhofer Institute for Digital Media Technology.

Okeanos Pro is Brandenburg Labs’ first state-of-the-art headphone-based system, powered by our groundbreaking Deep Dive Audio (DDA) technology. Designed for audio professionals and institutions that demand precision and flexibility, it’s the ideal solution for research and innovation labs, audio engineers and producers, as well as universities and training programs focused on spatial audio.

By simulating advanced multichannel loudspeaker setups through headphones, Okeanos Pro enables users to reduce costs and save space while still enjoying full immersion, even in environments where physical multichannel loudspeaker setups are impractical. With its seamless pick-up-and-play usability and high-fidelity, realistic sound reproduction, it delivers studio-grade performance without the complexity or cost of traditional hardware. Learn more

The Okeanos Pro system connects to your setup similarly to a traditional set of studio monitors, while providing the advantages of headphone-based monitoring. It utilizes ultra-low latency Dante, Ravenna, or AES67 connectivity, making Okeanos Pro easy to integrate into existing studio environments. The system’s reliable tracking solution, based on HTC VIVE technology, attaches to the user’s preferred headphones and enables accurate spatial positioning.

With our demo, we want users to experience the difference, not just compared to stereo, but also to other spatial audio systems, firsthand. In line with our motto, “You have to hear it to believe it,” we exhibit our technology at global conferences and trade shows to demonstrate that we offer the best commercial spatial audio system.

Our technology demo lets users experience the full capabilities of our headphone-based system in a format tailored to trade shows and conferences. It delivers the same performance as Okeanos Pro, simulating a real loudspeaker setup through headphones.

Virtual sound sources are perceived as if they’re anchored in fixed positions within the room, creating a highly convincing spatial audio illusion. Our fully virtual 16-channel loudspeaker setup showcases the powerful spatial audio capabilities of our system. But simulating speakers is just one example; any kind of virtual sound source can be simulated. Our system has been demonstrated at multiple conferences and shown worldwide to more than 1000 amazed listeners. Are you curious to try it for yourself or find out where we will be showcasing next? Check out our 2025 Events Calendar to see where you can meet us and experience our immersive audio demo firsthand.

Our technology uses a smart algorithm to recreate realistic 3D audio over headphones in real time. It starts with an acoustic measurement of the room using a single microphone and loudspeaker. From this, we generate what’s called a Room Impulse Response (RIR), which captures how sound behaves in that space, including reflections off walls and objects.

We combine this with a simple 3D model of the room and track the listener’s head movements live. This tracking helps us figure out where the sound should come from as the user moves. The system then processes the RIR in small pieces and adapts the sound using generic Head-Related Transfer Functions (HRTFs), which help simulate how we naturally hear sound in space. The late reverberation is added using a noise-shaping approach. This all happens in real time, allowing for full six degrees of freedom (6DoF), so listeners can move and rotate their heads naturally while staying immersed in a realistic spatial sound experience.
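To make the block-wise processing described above more concrete, here is a minimal Python/NumPy sketch of partitioned FFT convolution via overlap-add. The function name and parameters are illustrative assumptions, not Brandenburg Labs’ implementation; a real renderer would additionally swap or cross-fade filters between blocks as the tracked head moves.

```python
import numpy as np

def overlap_add_convolve(signal, ir, block_size=512):
    """Block-wise FFT convolution via overlap-add.

    Processing `signal` in short blocks is what allows a real-time renderer
    to update the filter (e.g. an HRTF/RIR partition) between blocks.
    Illustrative sketch only, not production code.
    """
    # Next power of two that holds one block convolved with the filter.
    n_fft = 1
    while n_fft < block_size + len(ir) - 1:
        n_fft *= 2
    ir_f = np.fft.rfft(ir, n_fft)          # filter spectrum, computed once
    out = np.zeros(len(signal) + len(ir) - 1)
    for start in range(0, len(signal), block_size):
        block = signal[start:start + block_size]
        # Linear convolution of this block with the filter, via the FFT.
        seg = np.fft.irfft(np.fft.rfft(block, n_fft) * ir_f, n_fft)
        end = min(start + len(block) + len(ir) - 1, len(out))
        out[start:end] += seg[:end - start]  # overlap-add the block result
    return out
```

The result is identical to direct convolution of the whole signal; the gain is that the filter can change every `block_size` samples.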

The first step is to take acoustic measurements using an omnidirectional microphone. To do this, we play a sine sweep over the loudspeaker(s), which is then recorded by the microphone, in order to acquire the room impulse response containing the reflections of the room we are in. This room impulse response is then used by our algorithm. The entire process only takes a few minutes and is done for each room in which we are setting up the demo. We are currently working on simpler methods to make this process more accessible, so that in the future, even non-audio professionals can easily perform the setup.
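The sweep measurement described above is commonly done with the exponential sine sweep (Farina) method: play the sweep, record it, then convolve the recording with a time-reversed, amplitude-compensated copy of the sweep to recover the impulse response. The sketch below illustrates that general technique; function names and parameters are assumptions for illustration, not our production measurement chain.

```python
import numpy as np

def exp_sine_sweep(f1, f2, duration, fs):
    """Exponential sine sweep from f1 to f2 Hz (Farina method)."""
    t = np.arange(int(duration * fs)) / fs
    R = np.log(f2 / f1)
    return np.sin(2 * np.pi * f1 * duration / R * (np.exp(t * R / duration) - 1))

def inverse_filter(sweep, f1, f2, duration, fs):
    """Time-reversed sweep with amplitude compensation (+6 dB/octave)."""
    t = np.arange(len(sweep)) / fs
    R = np.log(f2 / f1)
    return sweep[::-1] * np.exp(-t * R / duration)

def measure_rir(recording, sweep, f1, f2, duration, fs):
    """Deconvolve the microphone recording -> room impulse response."""
    return np.convolve(recording, inverse_filter(sweep, f1, f2, duration, fs))
```

Deconvolving the sweep with itself collapses to a single sharp peak, which is what a recording made in a real room turns into the room impulse response.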

We also measure the rough dimensions of the room we are in, as well as some points of interest, such as the positions of the loudspeaker(s) and microphone. For the demo, we usually use two Genelec 8020D (or the smaller Genelec 8010A) loudspeakers and simulate them over the headphones. The algorithm can even account for the directivity of the sound sources, in this case the loudspeakers. For the multi-channel demo, we only measure a few positions inside the setup, with the rest being calculated from the measured RIRs. For headphones, we use Sennheiser HD560S open-back headphones with a cable connection, but any medium-priced headphones with similar specifications will work.

The 6DoF head-tracking system utilizes so-called “lighthouses” and a tracker commonly used with the HTC Vive HMD. It provides six-degrees-of-freedom tracking, so we are able to capture the head rotation and position of the user in real time, thus tracking the user’s movements within the room. This is achieved with the help of infrared laser beam emitters (the lighthouses) in the corners of the room. The tracker devices receive the emitted infrared light, which makes it possible for the system to estimate position and orientation. This is the current solution for our research demo; we are looking into other options more suitable for future consumer products, in order to reduce the hardware requirements. The ideal tracking solution combines low latency, high precision, and zero drift, while working everywhere with little to no setup.
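To illustrate how tracking data feeds spatial rendering, the sketch below computes the azimuth of a fixed virtual source relative to a tracked head pose, simplified to 2D position plus yaw (real 6DoF tracking also handles elevation, pitch, and roll). The function, coordinate convention, and names are illustrative assumptions, not BLS tracking code.

```python
import numpy as np

def source_azimuth(head_pos, head_yaw_deg, src_pos):
    """Azimuth of a fixed virtual source relative to the listener's head.

    head_pos/src_pos are (x, y) room coordinates in metres; head_yaw_deg is
    the head orientation in degrees (0 = facing +y). Returns degrees,
    positive toward the listener's left. Illustrative geometry only.
    """
    dx, dy = np.subtract(src_pos, head_pos)
    world_az = np.degrees(np.arctan2(-dx, dy))  # source angle in the room frame
    az = world_az - head_yaw_deg                # rotate into the head frame
    return (az + 180) % 360 - 180               # wrap to [-180, 180)
```

Because the source position stays fixed while the head pose updates every frame, the rendered direction counter-rotates with the head, which is what makes the virtual source appear anchored in the room.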

For our demo, we currently use channel-based audio content. The demo content includes specially selected music that we are licensed to play for public audiences. One of the featured pieces is a beautiful piano composition created by one of our talented audio engineers, Noel Toms. To demonstrate the immersive potential of our system, we also collaborated with Lasse Nipkow of Silentwork GmbH to help demonstrate its multi-channel capabilities. The multichannel demo content includes works such as “Spring” from Vivaldi’s The Four Seasons, performed by the Stradivari Orchestra, Xiaoming Wang, and Maja Weber, as well as Lost by Elija Tamou (artist name: Silas Kutschmann).

So far, there seems to be no defined “upper limit” to room sizes. Our system has been successfully tested in very large rooms (25 m × 20 m × 10 m), extremely dry rooms (with a reverberation time RT60 of 0.2 s), as well as highly reverberant rooms (with an RT60 exceeding 1 s). The demo area is restricted by the cable connection between the rendering laptop and the headphones. Tracking also has some constraints. For example, in free-field conditions, such as anechoic chambers or outdoor settings, there are no reflections available for the algorithm to use, which violates the fundamental assumptions on which it is based.
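RT60 values like those quoted above are conventionally estimated from a measured room impulse response using Schroeder backward integration: integrate the squared tail of the RIR, fit the decay slope in dB, and extrapolate to 60 dB. A minimal sketch under those assumptions (illustrative, not our measurement tool):

```python
import numpy as np

def rt60_from_rir(rir, fs):
    """Estimate RT60 via Schroeder backward integration.

    Fits the -5 dB to -25 dB portion of the energy decay curve (a T20-style
    fit) and extrapolates the slope to a 60 dB decay.
    """
    edc = np.cumsum(rir[::-1] ** 2)[::-1]        # energy decay curve
    edc_db = 10 * np.log10(edc / edc[0])         # normalise to 0 dB at t=0
    t = np.arange(len(rir)) / fs
    mask = (edc_db <= -5) & (edc_db >= -25)      # fit region
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # dB per second
    return -60.0 / slope
```

On a synthetic RIR built as noise with a known exponential decay, the estimate recovers the designed RT60 closely.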

Our first product, Okeanos Pro, was launched in September 2024. It’s a headphone-based system powered by our Deep Dive Audio (DDA) technology, designed for audio professionals, universities, and research institutions who require precise spatial audio reproduction without the need for physical loudspeakers. It’s already in use at leading institutions, including installations in Belgium, Switzerland, Japan, the USA, and Germany. We are continuing to expand our B2B offerings, providing tailored systems for music production, professional education, virtual prototyping, and audio research.

Looking ahead, we plan to bring our technology to the consumer market (high end first), with a focus on creating hardware solutions that feature wireless connectivity, stable head tracking, and seamless user experience. For this, we’re seeking manufacturing and distribution partners.

The first consumer application will be for home users who want to experience multichannel audio, like in the cinema, but at much lower cost compared to a 16-channel high-quality loudspeaker system. Our technology has applications across multiple industries, including music production, teleconferencing, virtual acoustic prototyping for industry, audio guides, AR/VR, and many more. Learn more about our products and services

The vision for the future is PARty, where we are aiming to create personalized auditory realities (that is also where the acronym comes from). Just like glasses enhance vision in everyday life, the headphones will be able to individually improve human hearing. Disturbing sound is reduced, and what users want to hear is amplified. Unlike current noise-canceling headphones, PARty is able to intelligently and independently identify and modify acoustic signals.

Warning signals, such as the siren of an ambulance, reach the user at all times through the intelligent system in order to avoid dangerous situations. In addition, virtual sound sources, such as conversation partners on the telephone, blend naturally into the user’s listening environment and become realistically audible in space. This realistic acoustic presence also makes it possible to hold conversations with several distant conversation partners. Thanks to the personal modification options, users are able to design their own listening environment according to their individual needs. With intelligent wearables, users will be able to create their own personalized auditory reality. Such devices will be able to reduce interfering background noise and increase the volume of the sound source the user is currently focusing on.   

MULTIPARTIES: Multi-Party Augmented Reality Telepresence System
Brandenburg Labs, plazz, Consensive, and Technische Universität Ilmenau launched the joint research project “MULTIPARTIES”. The focus is on the development of a 3D communication system that enables realistic online meetings between several people over distances. The two-and-a-half-year joint project is being funded by the German government as part of “KMU-innovativ: Interaktive Technologien für Gesundheit und Lebensqualität”. Learn more

GROOVE: Experienced Synchronization for Connectedness and Closeness in Social Virtual Reality
Brandenburg Labs, in collaboration with Bauhaus-Universität Weimar, TU Ilmenau, and Consensive GmbH, is contributing to the joint project, “GROOVE – Experienced Synchronization for Connectedness and Closeness in Social Virtual Reality”. The GROOVE project aims to induce social entrainment in virtual environments to sustainably strengthen feelings of personal connectedness. The Federal Ministry of Education and Research finances the three-year project under the “Closeness over Distance – Enabling Interpersonal Connectedness with Interactive Technologies” directive. Learn more

Spatial hearing and binaural audio are not new concepts. Early experiments in binaural synthesis were conducted over 50 years ago, and it has been a topic of research ever since. Currently, with the rise of VR, big companies have discovered spatial audio as an improvement for entertainment. However, all commercially available systems suffer from the same limitations: insufficient use of psychoacoustic insights and of the acoustic cues of the current listening room, and, often, reliance solely on HRTF-based approaches. The combination of all the necessary features (spatial format, head-tracking, headphone device, dynamic binauralization, room-acoustic characteristics of both the virtual and the real room, and more) is missing. Our competitors cannot match the holistic approach that we implement in our technology and products. Our system is the first to create sound indistinguishable from reality, even in demanding scenarios, while still being computationally efficient.

We’re always happy to hear from passionate individuals who are excited about immersive audio and innovation, even if there’s no current opening that fits your background. To explore our current job opportunities, visit our jobs page.

If you’re interested in joining our team, you’re welcome to submit an unsolicited application at any time. Please include your CV, a motivation letter, and any relevant work samples or links to past projects. Send us an email at .

We understand that job hunting can be overwhelming, which is why we’ve created a helpful guide to support you through the application process. From preparation to submission, our step-by-step blog post will walk you through what we look for and how to make your application stand out. For more information, please see our application guide.