October 5, 2017

Google’s Smart Sound: Not all sound is the same

Google announced their new Home Max speaker yesterday, and it is a lovely-looking piece of kit. It is well designed, with some features that music fans will love. But what stood out for me, being a bit of a sound nerd, was the introduction of Smart Sound.

When introducing Smart Sound, Google’s Head of Google Home, Rishi Chandra, said: “It allows Max to adapt to you, your environment, your context, your preferences.” He added that “Over time, Smart Sound will automatically adapt the sound to fit your context. Lower the volume in the morning, raising the volume when the dishwasher is running”.

From what I understand, Smart Sound is doing two different things. The first is understanding the acoustic environment – how far away the walls are, and so on. Where Sonos requires you to walk around the room waving your phone about, the Home Max will calibrate itself on the fly within a few seconds, drawing on the ‘thousands’ of room profiles that Google has taught it.

The other part is based around the other sounds in the environment that compete with the sound emanating from the speaker. I assume that when Rishi said ‘over time’ he meant this second component will go live at a later date, but that isn’t clear. From what was presented, it seems that when the Max detects other noises around it, it will increase its volume to compensate.
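To make the behaviour concrete, here is a minimal sketch of that naive approach – measure the ambient level and keep playback a fixed margin above it. All the names, thresholds and decibel figures here are my own illustrative assumptions, not anything Google has published:

```python
def compensating_gain(ambient_db: float,
                      base_level_db: float = 60.0,
                      margin_db: float = 6.0,
                      max_level_db: float = 85.0) -> float:
    """Return a playback level that stays margin_db above the ambient
    noise, never dropping below the base level and never exceeding
    the speaker's ceiling."""
    target = max(base_level_db, ambient_db + margin_db)
    return min(target, max_level_db)

# A quiet room leaves the level alone; a running dishwasher pushes it up.
print(compensating_gain(40.0))  # quiet room -> 60.0
print(compensating_gain(70.0))  # dishwasher -> 76.0
```

Note that this loop reacts identically to any noise source – a dishwasher, a vacuum cleaner, or two people talking – which is exactly the problem discussed below.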

The problem with this approach is that just increasing the volume, irrespective of the actual context, means that in many cases the speaker will be doing the opposite of what the consumer wants it to do.

For example, imagine you and I are sitting in my kitchen while my Max speaker is playing. As we start talking, the Max will detect that there are other sources of noise and increase its volume to make sure it is heard. As a result, you and I will need to speak more loudly. As humans, we place greater emphasis on our conversation than on the music we are listening to. Something ‘smart’ needs to understand that difference – and this is why context is so important and why you can’t treat all noise equally.
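A context-aware system would first recognise what the competing sound is, and only then decide how to respond. The sound classes and the policy table in this sketch are purely illustrative assumptions – they are not Google’s logic, nor Audio Analytic’s:

```python
def volume_action(sound_class: str) -> str:
    """Map a recognised ambient sound to an appropriate volume action."""
    policy = {
        "conversation": "lower",  # defer to human speech
        "dishwasher": "raise",    # mask steady appliance noise
        "doorbell": "pause",      # something needs the listener's attention
    }
    return policy.get(sound_class, "hold")  # unknown sounds: do nothing

print(volume_action("conversation"))  # -> lower
print(volume_action("dishwasher"))    # -> raise
```

The point of the table is that the same acoustic event – “noise got louder” – maps to opposite actions depending on what the noise actually is.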

You also can’t just keep increasing the volume, as eventually you reach maximum volume in a particular part of the sound spectrum. You have to change the structure of the music or voice to compensate for the specific audio characteristics of the competing sounds. And that comes from really understanding the audio scene around the device.
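One way to picture this is per-band compensation: rather than one global gain, boost only the frequency bands where the ambient noise would mask the music, capping each band at the speaker’s limit. The band edges, margins and decibel values below are illustrative assumptions, not a real tuning algorithm:

```python
def band_gains(music_db, noise_db, margin_db=3.0, max_db=85.0):
    """Per-band output levels that keep the music margin_db above the
    noise in each band, without exceeding the speaker's ceiling."""
    gains = []
    for m, n in zip(music_db, noise_db):
        target = max(m, n + margin_db)  # lift the band only if masked
        gains.append(min(target, max_db))
    return gains

# Dishwasher noise concentrated in the low bands: only those get a lift.
music = [60.0, 60.0, 60.0, 60.0]    # low -> high frequency bands
noise = [70.0, 65.0, 40.0, 35.0]
print(band_gains(music, noise))     # -> [73.0, 68.0, 60.0, 60.0]
```

Here the high bands are left untouched because the music is already audible there – the sort of selective adjustment that requires knowing the spectral shape of the competing sound, not just its overall level.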

*****

Like this? You can subscribe to our blog and receive an alert every time we publish an announcement, a comment on the industry or something more technical.

About Audio Analytic

Audio Analytic is the pioneer of AI sound recognition software. The company is on a mission to map the world of sounds, offering our sense of hearing to consumer technology. By transferring our sense of hearing to consumer products and digital personal assistants we give them the ability to react to the world around us, helping satisfy our entertainment, safety, security, wellbeing and communication needs.

Audio Analytic’s ai3™ sound recognition software enables device manufacturers and chip companies to equip products with Artificial Audio Intelligence, recognizing and automatically responding to our growing list of sound profiles.
