Google’s Smart Sound: Not all sound is the same

Written by
Dr Chris Mitchell, CEO and Founder

October 5, 2017

Google announced their new Home Max speaker yesterday and it is a lovely looking piece of kit. It is well designed with some features that music fans will love. But what stood out for me, being a bit of a sound nerd, was the introduction of Smart Sound.

When introducing Smart Sound, Google’s Head of Google Home, Rishi Chandra, said: “It allows Max to adapt to you, your environment, your context, your preferences.” He also added that “Over time, Smart Sound will automatically adapt the sound to fit your context. Lower the volume in the morning, raising the volume when the dishwasher is running”.

From what I understand, Smart Sound is doing two different things. The first is based around understanding the acoustic environment – how far away the walls are, and so on. So where with Sonos you have to walk around a room waving your phone about, the Home Max will do this on the fly within a few seconds, based on the ‘thousands’ of room profiles that Google have taught it.

The other part is based around the other sounds in the environment that are competing with the sound coming from the speaker. I assume that when Rishi said ‘over time’ he meant that this second component will go live at a later date, but that isn’t clear. From what was presented, it seems that as the Max detects other noises around it, it will increase its volume to compensate.

The problem with this approach is that just increasing the volume, irrespective of the actual context, means that in many cases the speaker will be doing the opposite of what the consumer wants it to do.

For example, you and I are sitting in my kitchen and my Max speaker is playing. As we start talking, the Max speaker will detect that there are other sources of noise and it will increase its volume to make sure it is heard. As a result, you and I will need to speak more loudly. As a human, I place greater emphasis on our conversation than on the music we are listening to. Something ‘smart’ needs to understand the difference – and this is why context is so important and why you can’t treat all noise equally.
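To make the point concrete, here is a toy sketch (not Google’s implementation – the labels, thresholds and step sizes are all invented for illustration) of why the same rise in ambient level should trigger opposite volume moves depending on what kind of sound caused it:

```python
def adjust_volume(current_volume, ambient_db, sound_label,
                  baseline_db=40.0, step=5):
    """Return a new volume (0-100) given an ambient level in dB and a
    hypothetical sound-type label from a sound recognition system."""
    if ambient_db <= baseline_db:
        return current_volume  # nothing competing; leave it alone
    if sound_label == "speech":
        # People are talking: duck the music rather than fight it.
        return max(0, current_volume - step)
    if sound_label in ("dishwasher", "vacuum", "washing machine"):
        # Broadband machine noise: compensating louder is what you want.
        return min(100, current_volume + step)
    return current_volume  # unknown source: do nothing rather than guess
```

A purely level-driven system collapses the first two cases into one and always turns the volume up; the whole point of context is the branch on `sound_label`.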

You also can’t just keep increasing the volume, as eventually you reach maximum volume in a particular part of the sound spectrum. You have to change the structure of the music or voice to compensate for the specific audio characteristics of the other sounds. And this comes from really understanding the audio scene around the device.
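The idea of changing the structure of the sound, rather than the overall level, can be sketched as per-band compensation: estimate how much noise energy sits in each frequency band and boost only the bands that are actually being masked. The band layout, the 3 dB target margin and the boost cap below are assumptions for the example, not anyone’s product spec:

```python
import numpy as np

def per_band_gains(noise_db, playback_db, margin_db=3.0, max_boost_db=12.0):
    """Gain in dB to apply per frequency band so that playback sits
    margin_db above the noise, without boosting unmasked bands."""
    noise_db = np.asarray(noise_db, dtype=float)
    playback_db = np.asarray(playback_db, dtype=float)
    deficit = (noise_db + margin_db) - playback_db  # how masked each band is
    return np.clip(deficit, 0.0, max_boost_db)     # never cut, cap the boost

# A dishwasher with strong low-frequency noise only needs the low
# bands boosted; the mid/high bands are left untouched.
noise = [62, 55, 42, 38]   # dB per band, low -> high
music = [58, 56, 50, 48]
print(per_band_gains(noise, music))
```

A single volume knob would have pushed all four bands up together, wasting headroom in the bands the dishwasher never touched.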
