
From Headphones to Neural Interfaces: What AudioXpress Has Been Showing Us

  • Writer: Cristina Costa
  • Mar 14
  • 3 min read

Over the last few years, I’ve had the chance to follow how audioXpress has been tracking the convergence of audio and neurotechnology. A few news pieces by Joao Martins in particular tell a clear story: as early as 2018, hearing‑assist products like Alango’s Wear & Hear line were already showing how DSP and smart form factors could make personalized hearing more accessible.


From 2023 onward, the focus clearly shifted to true brain–computer interfaces. In August 2023, Martins reported on AAVAA’s Headband Accessibility Developer’s Kit, a wearable BCI that uses intentional gestures (blinks, tongue clicks, eye and facial movements) to give people with motor and speech impairments direct control over devices. Later that year, in December 2023, he covered Neurable’s partnership with Master & Dynamic to launch the BCI‑enabled MW75 Neuro headphones, which integrate EEG sensors into premium consumer headphones.


In 2024 and 2025, the articles expanded this narrative to in‑ear and in‑car experiences. A January 2024 news piece highlighted how Harman Automotive was transforming the in‑cabin experience, combining advanced audio, sensing and personalization for drivers and passengers. In January 2025, Martins reported from CES on IDUN Technologies and Analog Devices unveiling brain‑sensing earbuds, which build on IDUN’s DRYODE electrodes and ADI’s biopotential front end to enable in‑ear EEG for health, productivity and entertainment.


By February 2026, the story had moved one step further. In “Neural Interface Earbuds: Naqi Logix Closes Acquisition of Wisear” (2026), Martins describes how Naqi is turning everyday earbuds into non‑invasive neural interfaces that detect brain waves, muscle impulses and micro‑gestures, converting them into digital commands for hands‑free, voice‑free control. The acquisition of Wisear, a pioneer in ear‑based neural signal processing and embedded AI, reinforces the vision of neural earbuds as a mainstream human–machine interface.


What I find fascinating in 2026 is how these developments line up with broader advances in neuroscience and neurotechnology. Brain‑sensing earbuds like IDUN’s Guardian platform are now backed by serious engineering work from Analog Devices, turning in‑ear EEG into a viable consumer‑grade tool for sleep, focus and cognitive monitoring. At the same time, companies such as Naqi, Neurable, AAVAA and others are showing that neural interfaces don’t have to be invasive implants to be powerful – they can be audio devices we already wear every day.


Looking ahead, this raises big questions – and opportunities – for anyone working with audio, media and digital content:

  • How do we design sound and experiences when headphones and earbuds can read brain signals, not just deliver audio?

  • What does “user experience” mean when the interface is literally neural?

  • And how do we ensure these technologies are used ethically – for accessibility, health and creativity, not just for more intrusive tracking?


As someone who has spent years working at the intersection of media, digital content and audio, I’m very interested in this next wave: brain‑aware audio products. If you’re working in neurotechnology, audio product development or applied neuroscience and want to exchange ideas about where this can go next, I’d be happy to connect.
