Sacha, Director of our R&D division, AALabs, will be speaking at this autumn’s SANE 2017, a one-day event that gathers speech and audio researchers and students from the Northeast of America. It will be held on Thursday, October 19th at Google’s New York office.
Sacha will be speaking alongside other leaders in their respective fields, including Aaron Courville (University of Montreal, MILA), Aäron van den Oord (Google DeepMind), Eric Humphrey (Spotify), Gunnar Evermann (Apple) and Florian Metze (CMU).
His presentation will be titled:
Sound is not speech
“The recognition of audio events is emerging as a relatively new field of research compared to speech and music recognition. Although it started from recipes borrowed from those fields, 24/7 sound recognition actually defines a new range of research problems that are distinct from speech and music.
After reviewing the constraints related to running sound recognition successfully on real-world consumer products deployed across thousands of homes, the talk discusses the nature of some of sound recognition’s distinctive problems, such as open set recognition or the modelling of interrupted sequences.
This is put in context with the most recent advances in the field, as reflected in the public domain by competitive evaluations such as the DCASE challenge, to assess which of sound recognition’s distinctive problems are currently being addressed by state-of-the-art methods, and which could deserve more attention.”
For more information and to book your place, visit SANEworkshop.org.
Our embedded software platform, ai3™, is integrated into consumer products to make them more intelligent by enabling them to understand the sounds around them.
Our sound recognition software has been designed to work in a wide range of products, from light bulbs to smart speakers and beyond.