On June 9th I will be presenting a talk titled ‘tinyML doesn’t need Big Data, it needs Great Data’ during the latest tinyML Talks webcast series, which is organised by the tinyML Foundation.

The virtual, one-hour event is free to attend, starts at 8am PDT / 3pm UTC / 4pm BST, and features two speakers.

The first presentation will be by Igor Fedorov from Arm’s ML Research team. He will be talking about CNNs on resource-constrained microcontrollers before I take over and deliver my talk.

To register for the event, visit https://us02web.zoom.us/webinar/register/3815907727312/WN_HzhKFSsOQLOILbepOij9wA

My talk will discuss some of the challenges of obtaining and processing good quality audio data for sound recognition tasks and the ways that we have overcome those problems.

Specific topics will include:

  • identifying good and bad data sources
  • gathering good-quality audio data
  • employing complex labelling strategies
  • using the data to evaluate performance.

While this is not specifically a tinyML problem, the challenge of running at the edge across disparate devices makes it more acute, and it is one shared by other tinyML applications.

Don’t worry if you can’t attend live: everyone who registers will be notified when the recording and slides from the talk are available.

To access previous presentations or to see the calendar of future tinyML Talks visit: https://quip.com/MENbAvuQkrb0#UdGACAoLB7r


Like this? You can subscribe to our blog and receive an alert every time we publish an announcement, a comment on the industry or something more technical. 


About Audio Analytic

Audio Analytic is the pioneer of AI sound recognition technology. The company is on a mission to give machines a compact sense of hearing. This empowers them with the ability to react to the world around us, helping satisfy our entertainment, safety, security, wellbeing, convenience, and communication needs across a huge range of consumer products.

Audio Analytic’s ai3™ and ai3-nano™ sound recognition software enables device manufacturers to equip products at the edge with the ability to recognize and automatically respond to our growing list of sounds and acoustic scenes.