April 8, 2022
Step-by-step instructions: Making digital assistants more helpful
Digital assistants are now commonplace in most households and bring a whole host of benefits and automation into your life. But wouldn’t it be great if these devices could hear and respond to more than just speech? That would be especially useful when they are guiding you step-by-step through an unfamiliar process.
Picture the scene… You’re cooking to impress. You’ve got a new recipe open on your smart display, smartphone or smart glasses; you’ve tenderised some beef on a chopping board with your mallet, you’ve just finished dicing the onions, the kettle has just boiled, the oven is beeping to let you know it’s up to temperature, and the butter for your sauce has just started to sizzle in the pan. But what’s the next step in the recipe?
Ordinarily, at each step you’d be washing your hands so you could physically interact with the device, scrolling in search of the next recipe step before your butter burns, and perhaps hoping desperately that your device can withstand the damp hands, accidental splashes of oil, and the odd bit of carrot peel that a kitchen environment inevitably brings. Alternatively, you’re using voice commands, shouting over the cacophony of cooking sounds to ask your device to show or repeat the next step.
Yet sound recognition could make that so much easier, and save you from having to physically touch or talk to your device, by linking the sounds of your kitchen to where you are in the recipe. The sound of butter sizzling? Your smart display could show you a picture of how to correctly brown your onions. The sound of a kettle whistling? Your smart glasses tell you how much rice to put in your pot and for how long. All happening seamlessly, based on where your device “hears” you are in your cooking.
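At its simplest, that idea boils down to a mapping from recognised sound events to positions in the recipe. Here is a minimal sketch of that concept; the event labels, recipe steps and `step_for_sound` helper are all hypothetical illustrations, not anything from an actual product or the patent itself.

```python
# A minimal, hypothetical sketch of linking recognised kitchen sounds
# to positions in a recipe. Labels and steps are illustrative only.

RECIPE = [
    "Tenderise the beef",
    "Dice the onions",
    "Brown the onions in sizzling butter",
    "Add rice to the pot of boiling water",
]

# Map each sound event the device might recognise to the recipe step
# that sound suggests the cook has reached.
SOUND_TO_STEP = {
    "butter_sizzling": 2,   # show how to brown the onions
    "kettle_whistling": 3,  # show how much rice to add
}

def step_for_sound(event: str, current_step: int) -> int:
    """Return the step implied by a recognised sound, never moving
    backwards if the cook is already further along."""
    implied = SOUND_TO_STEP.get(event, current_step)
    return max(implied, current_step)

# The device "hears" butter sizzling while step 1 is on screen:
print(RECIPE[step_for_sound("butter_sizzling", 1)])
# prints "Brown the onions in sizzling butter"
```

A real system would of course sit on top of a trained sound-recognition model rather than a lookup table, but the table captures the core idea: the detected sound, not a tap or a voice command, drives what the screen shows next.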
And the kitchen is only one example, of course. You could be doing some DIY at home, where gloves and a respirator make swiping or talking to your smart glasses cumbersome, but where the sound of a power tool could prompt your device to show how deep to drill a hole, or which part to pick up next. Or what if you’re a courier on a motorbike, and the sound of your engine turning off could prompt your phone to show you the delivery instructions without you having to remove your gloves and swipe through your app?
Excitingly, this combination of sound recognition and information browsing is the core of a new patent that was granted to us this week. It’s a great insight into just how powerful and intuitive sound recognition can be when applied to consumer devices as diverse as wearables like glasses and earbuds, smartphones and smart displays, and into the multitude of areas in which it can help keep your hands free when you’re already spinning lots of plates and stirring many pots.