Another day, another world-changing technology developed in Cambridge spreads its wings. Or in this case, spreads its wings further. Yesterday, Amazon announced the rollout of its groundbreaking Smart Home hub, Echo, beyond the US to the UK and Germany.

The enabling speech recognition and Artificial Intelligence technology behind Echo, known more familiarly as ‘Alexa’, was largely pioneered and developed in Cambridge by Evi – a Cambridge company acquired by Amazon in 2012 for $26m.

Amazon Echo’s European launch recognises that consumer interest in and awareness of the Smart Home is growing. While the US market remains the primary engine behind the Smart Home market, growth in Europe is steadily increasing and the availability of Amazon’s flagship Smart Home hub, Echo, is bound to further invigorate adoption of Smart Home devices in Europe.

Echo, available for pre-order at £149.99, gives Amazon a reasonably priced foothold in the home. The companion Echo Dot, a smart speaker with added Alexa, is priced even more affordably at £49.99. The price points for both Echo and Echo Dot are within the budget of most middle-income families – and for some, position them at the level of an impulse buy.

The Echo and Echo Dot are always listening for the trigger word “Alexa”. Once activated, Alexa recognises your speech and – unlike Siri, which often simply produces web links or opens apps in response to queries – acts as a true assistant. On command, Alexa can book a takeaway from Just Eat, adjust the Hive thermostat in your home, play your favourite tracks from Spotify and, of course, order deliveries from Amazon. Once Echo is established in the home, the temptation for any owner is to buy and connect ever more smart devices, broadening the Smart Home experience.

We’ve seen the recent rollout of Amazon Dash – small stick-on buttons that bring the Internet of Things and automatic product ordering to conventional white goods in the home. One of the tantalising aspects of Echo’s success is the question of which additional technologies Amazon, with its foothold established in the home, will now be able to fold into the Smart Home experience.

But what additional value can Echo deliver when you aren’t home? Without the magic word “Alexa” and the homeowner’s verbal instructions, it’s largely useless. Surely the next stage for Smart Home hubs will be to make their AI assistants capable of complex decision-making while you – the homeowner – are asleep or away.

Let’s imagine you are at work. In the morning rush, somebody left the iron on and it has now overheated and sparked a fire. Your smoke alarm detects the smoke and sounds, but there is nobody at home to hear it and take action. For Alexa to be truly smart, ‘she’ would need to recognise the sound of a smoke alarm and contact you at work. She might even alert the emergency services automatically, leaving you free to jump in the car and head home.

A gentler use case would be one recognisable to any parent. If a baby wakes in the night and begins to cry, the home assistant would recognise the sound of a baby crying and play a gentle lullaby in the hope of soothing the baby back to sleep without disturbing mum and dad.

For home assistants like Echo’s Alexa, understanding ambient sounds is an order of magnitude more complex than understanding speech. Speech, after all, has a limited number of phonemes, dictated by the sounds the human mouth is capable of making. In every language, the order in which these phonemes appear follows known statistical probabilities. Once all phonemes have been mapped and those probabilities are understood, decoding speech is a computationally relatively straightforward process.
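The statistical idea behind phoneme decoding can be sketched with a toy bigram model: score a candidate phoneme sequence by multiplying the probabilities of successive pairs. Everything below – the phoneme symbols and the probabilities – is invented purely for illustration; a real recogniser uses far richer acoustic and language models.

```python
# Toy sketch of statistical phoneme decoding (illustrative only).
# Hypothetical bigram probabilities P(next phoneme | previous phoneme).
BIGRAMS = {
    ("k", "ae"): 0.4,
    ("ae", "t"): 0.5,
    ("t", "ae"): 0.1,
    ("k", "t"): 0.01,
}

def sequence_score(phonemes, smoothing=1e-6):
    """Multiply bigram probabilities; unseen pairs get a tiny smoothed value."""
    score = 1.0
    for prev, nxt in zip(phonemes, phonemes[1:]):
        score *= BIGRAMS.get((prev, nxt), smoothing)
    return score

# The phoneme sequence for "cat" (/k ae t/) scores far higher than its reverse,
# so a decoder choosing between the two candidates would prefer "cat".
print(sequence_score(["k", "ae", "t"]))  # 0.2
print(sequence_score(["t", "ae", "k"]))  # much smaller: contains an unseen bigram
```

In a real system these probabilities are learned from large speech corpora, but the principle is the same: a closed set of phonemes plus known transition statistics makes the search tractable.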

Enabling computers to understand ambient sounds is a much more complex challenge, as a particular genus of sound – say, a dog bark – tends to come in a wide range of varieties. The sound of a window breaking, for example, depends on the shape, size and type of window; the implement used to break it; the acoustics of the room; and many other factors.

Teaching computers to recognise sounds requires machine learning of the highest order based on millions of hours of real audio. At Audio Analytic, we’ve pioneered the development and commercialisation of sound recognition. We’ve broken thousands of windows, sounded almost every alarm available on the market and yes, we’ve listened to a lot of babies cry.
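To make the idea of sound recognition concrete, here is a deliberately minimal sketch: a nearest-centroid classifier over two crude audio features (average energy and zero-crossing rate). This is purely illustrative – it bears no relation to Audio Analytic’s actual software, and the synthetic “alarm” and “background” signals stand in for the millions of hours of real audio a production system is trained on.

```python
# Minimal, illustrative sound classifier (not a real sound recognition system).
import math

def features(samples):
    """Two crude features: average energy and zero-crossing rate."""
    energy = sum(s * s for s in samples) / len(samples)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / len(samples)
    return (energy, zcr)

def train(labelled_clips):
    """Average each class's feature vectors into a single centroid."""
    centroids = {}
    for label, clips in labelled_clips.items():
        feats = [features(c) for c in clips]
        centroids[label] = tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))
    return centroids

def classify(samples, centroids):
    """Assign the clip to the class with the nearest centroid."""
    f = features(samples)
    return min(centroids, key=lambda lbl: math.dist(f, centroids[lbl]))

# Synthetic stand-ins: a smoke alarm is a loud, rapidly oscillating tone;
# background noise is quiet and slow-moving.
alarm = [math.sin(2 * math.pi * 3000 * t / 16000) for t in range(1600)]
noise = [0.01 * math.sin(0.1 * t) for t in range(1600)]

model = train({"smoke_alarm": [alarm], "background": [noise]})
print(classify(alarm, model))  # smoke_alarm
```

Two hand-picked features separate these two artificial signals easily; the difficulty described above is that real-world sounds of the same type vary so widely that robust recognition demands large-scale machine learning rather than simple rules like these.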

As our technology is trained to understand more and more sounds, we are filling one of the greatest remaining gaps in Artificial Intelligence: giving the decision-making technology behind assistants such as Amazon Echo’s Alexa and Apple’s Siri an understanding of the sounds around them.

Home assistants – and by implication our homes – will only be truly smart when they are capable not only of understanding and responding to direct human speech, but also of recognising and acting on the important ambient sounds which the human brain decodes intuitively. With the rollout of Echo, we’re moving ever closer to the original vision of the Smart Home – and to AI assistants that truly “think” like humans.

This post originally appeared in Business Weekly.



About Audio Analytic

Audio Analytic is the pioneer of AI sound recognition software. The company is on a mission to map the world of sounds. By transferring our sense of hearing to consumer products and digital personal assistants, we give them the ability to react to the world around us, helping satisfy our entertainment, safety, security, wellbeing and communication needs.

Audio Analytic’s ai3™ sound recognition software enables device manufacturers and chip companies to equip products with Artificial Audio Intelligence, recognising and automatically responding to our growing list of sound profiles.