Welcome to the Future of Noise Canceling

You might be perfectly satisfied with the noise canceling in your life. Maybe you’ve got ANC headphones for your commute and you’ve found just the right soundproofing materials to finally stop the inter-apartment wars with your neighbors. In that case, don’t read on. We’re sure you won’t be interested in earbuds that can tune out arguments and tune in nature, or thin, affordable, sound-absorbing wallpaper. Not to mention the work that's being done to help people who are hard of hearing, and what's coming next in soundproofing. Yes, the future of noise cancellation is looking very interesting indeed.

But when we think about noise cancellation, what’s our baseline? If you want to block out all sound and go total cocoon, you can pick up a pair of Sony or Bose cans and get pretty close to that. But considering their ubiquity and cultural impact on how we engage with the outside world, let’s turn instead to Apple’s AirPods for an idea of more useful, fine-tuned features that could point to where the field is headed.

The third-gen AirPods Pro, alongside Apple’s over-ear AirPods Max, offer Active Noise Canceling, Transparency Mode, Adaptive Audio (which adjusts noise cancellation to your surroundings), and Hearing Protection, which automatically reduces dangerously loud sounds. More advanced still, the customizations available for Transparency Mode include several options for people who are hard of hearing. There's Conversation Boost, which focuses the audio pickup on the person in front of you while canceling ambient noise, and Live Listen, which uses an iPhone microphone to boost a specific speaker’s voice—plus you can choose to amplify your own voice too.

The AirPods Pro (3rd gen) offer a number of next-gen noise canceling features.

Photograph: Parker Hall

This blurring of the lines between audio and health devices looks set to be a trend across the industry. “We really want to make sure that we take care of our customers’ hearing,” says Miikka Tikander, the Helsinki-based head of audio at Bang & Olufsen. Tikander points to recent data showing declining hearing health among young adults, and reports that manufacturers placed heavy emphasis on ANC and hearing health at the Audio Engineering Society’s Headphone Technology conference in Espoo, Finland, this August.

“Apple has a big lead in that area,” he says. “We want to make sure that our headphones can adapt, make this choice [on when to block out sound] on your behalf, if you let it, of course. Some people don't like that idea, but if there's a noisy event in your surroundings, the headset can take care of it, just tune it out a bit and get you back to normal listening once you are away from that noise.”

Enter the “Sound Bubble”

Hearvana AI is one startup looking to go much further than the AirPods’ current suite of noise canceling and ambient noise features. Cofounded by Shyam Gollakota, a computer science and engineering professor at the University of Washington, and two of his students, Malek Itani and Tuochao Chen, Hearvana recently raised $6 million in a pre-seed round that included none other than Amazon’s Alexa Fund.

The startup’s first big innovation, and the first project it tackled around three years ago, was “semantic hearing.” The team built a hardware prototype—a pair of on-ear headphones with six microphones across the headband, connected to an Orange Pi single-board computer—to test a model trained to recognize 20 different types of ambient sound: sirens, car horns, birdsong, crying babies, alarm clocks, pets, people talking, and so on. The user could then isolate, say, one person’s voice as a “spotlight” and block out everything else.

“So I'm going to the beach and I want to listen to just ocean sounds and not the people talking next to me, or I’m in the house vacuum cleaning but I still want to listen to people knocking on the door or important sounds, like a baby crying,” explains Gollakota, who is based in Seattle. “And that’s what we solved first. This was the difference between a vacuum cleaner and a door knock. They sound pretty different, right?”
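
Hearvana hasn't published its production code, but the general shape of a semantic-hearing pipeline is a learned time-frequency mask: a small network scores each slice of the spectrogram for the sound class you want to keep, and everything else gets scaled down. Here is a minimal, hypothetical sketch of that plumbing, with a pass-through placeholder standing in for the trained classifier:

```python
import numpy as np

FRAME = 256   # samples per STFT frame
HOP = 128     # hop between frames (50 percent overlap)

def stft(x):
    # Hann-windowed short-time Fourier transform
    win = np.hanning(FRAME)
    frames = [x[i:i + FRAME] * win for i in range(0, len(x) - FRAME, HOP)]
    return np.fft.rfft(np.array(frames), axis=1)

def istft(spec, length):
    # Overlap-add inverse of the stft above
    win = np.hanning(FRAME)
    out = np.zeros(length)
    for n, frame in enumerate(np.fft.irfft(spec, axis=1)):
        i = n * HOP
        out[i:i + FRAME] += frame * win
    return out

def target_mask(spec, target_class):
    # Placeholder for the trained network: the real model scores every
    # time-frequency bin for the chosen class ("siren", "birdsong",
    # "baby_crying", ...) and returns per-bin gains. Passing ones
    # through keeps this sketch runnable end to end.
    return np.ones(spec.shape)

def semantic_filter(x, target_class):
    spec = stft(x)
    spec = spec * target_mask(spec, target_class)  # keep target, duck the rest
    return istft(spec, len(x))

# Toy input: one second of noise at a 16 kHz sample rate
out = semantic_filter(np.random.randn(16000), "baby_crying")
```

In the real system the mask comes from a network small enough to run on the headset itself, which is what makes doing this in real time so hard.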

Hearvana's intelligent headphone prototype features on-device deep-learning algorithms that create real-time “sound bubbles,” in which speakers inside the bubble are amplified and other sounds are suppressed.

Courtesy of Hearvana

The next challenge the team tackled was enabling the headphones to understand the subtle differences between individual human voices, allowing for “target speech hearing” based on proximity and on who the user is looking at.

The result is the “sound bubble” feature—Hearvana’s souped-up version of Conversation Boost, in which ambient chatter is quieted by 49 decibels while the person in front of you, or the people at your table in a restaurant, have their voices automatically amplified, all with a lag time of 10 to 20 milliseconds or less. It’s not limited to blunt distance, either; you could, for example, enroll a tour guide’s voice by looking at them for three to five seconds to teach the model their audio characteristics, and then feel free to look away at the attraction as the amplification continues.
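
Hearvana hasn't detailed its implementation, but conceptually the enrollment step amounts to building a compact fingerprint of the target voice and then boosting only the audio that matches it. A hypothetical sketch, where the embedding function is a random stand-in for a real speaker-embedding network:

```python
import numpy as np

def voice_embedding(chunk):
    # Stand-in for a small speaker-embedding network. The real model
    # would map a snippet of audio to a vector characterizing that
    # voice; here we derive a repeatable pseudo-embedding instead.
    seed = abs(hash(chunk.tobytes())) % 2**32
    return np.random.default_rng(seed).standard_normal(64)

def enroll(chunks):
    # Average embeddings over the 3-5 seconds the wearer spends
    # looking at the speaker, then normalize to unit length.
    target = np.stack([voice_embedding(c) for c in chunks]).mean(axis=0)
    return target / np.linalg.norm(target)

def sound_bubble(chunk, target, boost=4.0, duck=0.05, threshold=0.5):
    # Boost audio whose embedding matches the enrolled voice,
    # duck everything else (the "bubble" suppression).
    e = voice_embedding(chunk)
    sim = float(np.dot(e / np.linalg.norm(e), target))
    return chunk * (boost if sim > threshold else duck)

# Toy usage: enroll on three chunks of "speech," then gate a new chunk
target = enroll([np.random.randn(16000) for _ in range(3)])
out = sound_bubble(np.random.randn(16000), target)
```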

Gollakota says experts were skeptical that anything using deep learning would be able to sync up with the wearer’s visual attention quickly enough, especially given the power and compute constraints of headphones, but the team got there. The key to making these features near real time is keeping the model small, specific, and on-device, with no need for the cloud.
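
The arithmetic behind that constraint is unforgiving. At a typical earbud sample rate (48 kHz is an assumption here), a 10-millisecond lag means capture, inference, and playback all have to fit inside a window of a few hundred samples:

```python
SAMPLE_RATE = 48_000   # Hz; assumed, typical for earbud audio
LAG_BUDGET_MS = 10     # the lag ceiling Hearvana reports hitting

window = SAMPLE_RATE * LAG_BUDGET_MS // 1000
print(window)  # 480 samples: every stage of the pipeline (mic capture,
               # neural-net inference, playback) shares this budget,
               # which is why the model must be small and on-device.
```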

“For the sound bubble, we collected robotic data in different rooms and at different distances and then two humans, the authors of the paper, held the device and collected conversation data for 30 minutes,” says Gollakota. “It’s kind of an art honestly. Big Tech is throwing huge amounts of data and compute at the problem—we’re taking the opposite approach. You get intuition, it comes from experience building these systems, you have to understand the domain better.”

Making Smartglasses Sound Better

One of the big questions hovering over the next decade in consumer tech is: Are we all going to be wearing dorky face computers soon, or not? Meta is certainly one company hoping that we are, and one of its more recent investments—a $16.2 million audio research lab in Cambridge, UK—suggests it's also thinking about the part audio plays in that experience.

Meta has invested $16.2 million into an audio research lab in Cambridge, UK, focused on its AR and AI glasses.

Photograph: Getty Images

The lab, Meta says, is “dedicated to advancing audio technologies for Meta’s future AR and AI glasses,” and includes anechoic chambers for near-silent testing, a configurable reverberation room that can mimic many different sound environments, and a 3,600-square-foot area with submillimeter optical tracking to improve context-aware audio features.

However, the form factor clearly presents specific technical challenges. Full noise canceling is currently a luxury that Ray-Ban Meta wearers must go without, since the glasses use an open-ear speaker setup. There is an on-device five-mic array that works with AI to reduce background noise during calls and recordings, and Meta recently rolled out an AirPods Pro-style conversation focus feature. But it seems there's more to come.

Hearvana AI is doing its own work in this area too. “Now that we've actually cracked the code for hearing aids and earbuds,” says Gollakota, “then smartglasses actually makes everything much easier because we have more microphones, there’s more compute and more power budget available.”

Next-Gen Soundproofing

Noise cancellation doesn't just happen in headphones and smartglasses, of course. Soundproofing is one of the oldest ways of reducing noise in our environments, and it's getting better all the time. MIT materials science and engineering researcher Grace Yang has experimented with sound-suppressing silk fabrics containing fibers that vibrate when a voltage is applied, interfering with unwanted noise; the fabrics could be used on walls or as room dividers.
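
The physics underpinning any active approach like this is destructive interference: drive the surface to emit a wave in anti-phase with the incoming noise, and the two sum to roughly nothing. A toy illustration (the 120 Hz hum is an arbitrary example):

```python
import numpy as np

t = np.linspace(0, 1, 16_000, endpoint=False)  # one second at 16 kHz
noise = np.sin(2 * np.pi * 120 * t)   # an incoming 120 Hz hum
anti = -noise                         # the fabric driven in anti-phase
residual = noise + anti               # the waves cancel destructively
print(np.max(np.abs(residual)))       # ~0.0: the hum is gone
```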

More conventional acoustic insulation makers, who manufacture the much thicker soundproofing options, are also increasingly turning to natural and sustainable materials, such as sustainably grown hemp fibers (BASWA Natural), industrial hemp blended with recycled textile fibers (IndiNature’s IndiSilence), and mineral wool (ROCKWOOL), all of which are Quiet Mark certified.

And if you simply don’t have the time or the budget to soundproof your home, office, or event space, digital tools from companies such as Krisp and ai-coustics are popping up to “de-noise” audio recordings after the fact, with AI-powered noise cancellation, de-reverberation, and audio-enhancement tools for meetings.
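
These tools rely on trained models, but their classical ancestor, spectral gating, shows the basic move: estimate the noise floor in each frequency band, then attenuate anything in the recording that doesn't rise above it. A minimal sketch (the frame size and threshold factor are arbitrary):

```python
import numpy as np

def spectral_gate(x, noise_clip, frame=512, factor=1.5):
    # Classical spectral gating: measure the noise floor per frequency
    # bin from a noise-only clip, then mute any bin in the recording
    # that doesn't rise above it. AI de-noisers replace this fixed
    # threshold with a learned model, but the operation is similar.
    def spectrum(sig):
        n = len(sig) // frame
        return np.fft.rfft(sig[:n * frame].reshape(n, frame), axis=1)
    noise_floor = np.abs(spectrum(noise_clip)).mean(axis=0)
    spec = spectrum(x)
    mask = np.abs(spec) > factor * noise_floor
    return np.fft.irfft(spec * mask, axis=1).ravel()

# Toy usage: a noisy recording plus a half-second noise-only sample
cleaned = spectral_gate(np.random.randn(48000), np.random.randn(8000))
```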

Perhaps one of the most incredible advancements in this area has come from a close study of nature. Marc Holderied’s bio-inspired metamaterial went viral on TikTok in 2023 after he touted its use as “sonic wallpaper.” Holderied, a professor of sensory biology at the University of Bristol, says he doesn’t mind the eccentric-academic label, but he has recently switched to calling it “acoustic wallpaper” to avoid any association with a certain video-game hedgehog. Now he’s getting down to the serious matter of making it a reality.

Courtesy of Attacus Acoustics

Holderied’s sound-absorber prototypes are based on the unique microscopic scale structures found on moths’ wings, which evolved to counter the high-frequency sounds that echolocating bats use to hunt. When sound hits a scale, the scale vibrates, absorbing the specific frequency it is tuned to; with scales of different shapes and sizes across the wings vibrating at different frequencies, the sound as a whole is neutralized. “We are approaching the potential application from a position of tens of millions of years of experience, rather than starting from the engineering principles. So our reverse engineering actually gave us a head start,” says Holderied. “The key invention is that they are thinner by about a factor of 10 than existing solutions.”
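
A simple way to picture the mechanism is as a bank of tiny damped resonators, each contributing a Lorentzian absorption peak at its tuned frequency; tile enough differently tuned scales across the wing and the peaks merge into broadband absorption. This sketch uses made-up numbers for the bat-echolocation band and the scale tunings, purely to illustrate the idea:

```python
import numpy as np

freqs = np.linspace(20_000, 80_000, 500)    # Hz, roughly the bat band
tunings = np.linspace(25_000, 75_000, 12)   # hypothetical scale tunings
width = 4_000                               # per-scale resonance width, Hz

# Each scale behaves like a damped resonator: a Lorentzian absorption
# peak centered on the frequency it is tuned to. Summing over many
# differently sized scales covers the whole band, not a single tone.
absorption = sum(1.0 / (1.0 + ((freqs - f0) / width) ** 2) for f0 in tunings)
absorption = np.clip(absorption, 0.0, 1.0)  # cap at full absorption
```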

Wall of Sound

So it’s thin and, according to its inventor, quick, cheap, and scalable too. How does it perform? “What we’re working on is to make this ultra thin and broadband, so the best prototype we have at the moment achieves one over 100,” he explains. “So that means one hundredth of the wavelength being absorbed. We are getting to 70 or 80 percent of energy absorbed and we are currently working on making the absorption coefficient higher into the 90 percent range. That's what we're working on; we have the theoretical models that allow us to do this.”
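
Taking those numbers at face value, the thickness claim is easy to translate into millimeters: an absorber that is one hundredth of the wavelength it absorbs stays remarkably thin even at low, hard-to-absorb frequencies. (The example frequencies below are our own.)

```python
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def thickness_mm(freq_hz, ratio=100):
    # "One over 100": absorber thickness is a hundredth of the
    # wavelength being absorbed, per Holderied's best prototype.
    return SPEED_OF_SOUND / freq_hz / ratio * 1000

print(thickness_mm(500))   # ~6.9 mm for a 500 Hz tone
print(thickness_mm(100))   # ~34 mm even for a deep 100 Hz rumble
```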

The prototype material can be made transparent—not completely clear, as there’s some visible patterning—for use on windows, and it can work behind fabrics or be integrated into wood paneling. Holderied and his team are also working on as-yet-unpublished research into the nanostructures inside the wings’ scales. “We find remarkable stuff happening there that also contributes to the acoustic performance,” he says.

Courtesy of Attacus Acoustics

The project will be spun out as the startup Attacus Acoustics in 2026, once the team decides early next year how widely to file its patent internationally—then it will begin looking for investor funding. Holderied has hired a postdoc team member as the new CEO (with two more full-time researchers on board), and after many talks with enthusiastic architects and automotive bosses, he is in discussions with a large international airplane manufacturer about developing a wall lining for passenger compartments.

Holderied is also in the queue to use the University of Bristol’s Isambard supercomputer for his complex modeling, and he’s already taken advantage of other supercomputing facilities on the project. “We've got certified prototypes where the performance has been measured in a laboratory in Southampton, at The Institute of Sound and Vibration Research. We've got square metres of prototypes that we've built. We're building new prototypes all the time.”
