This week marked the start of ICASSP 2021, the world's largest conference on acoustics, speech and signal processing, where, for the second year running, we are presenting a paper on evaluating polyphonic sound detection models. This year the event has a unique twist: it is being hosted virtually in Toronto, Canada via Gather.Town, where your avatar interacts with exhibitors, visits poster presentations, takes coffee breaks and connects with other attendees over video chat.

Our latest paper is 'Improving Sound Event Detection Metrics: Insights from DCASE 2020', and you can find out more by joining us for our live poster presentation and Q&A sessions on:

  • Thursday 10th June at 15:30 EDT / 20:30 UK 
  • Friday 11th June at 03:30 EDT / 08:30 UK 

You can access our poster session with your ICASSP conference login here: https://2021.ieeeicassp.org/Papers/ViewPaper.asp?PaperNum=3576

In case you can’t make it to the live session, you can download the full poster here and watch our poster presentation here. 

 

Come and join us in our virtual poster session

At last year’s ICASSP, we introduced our Polyphonic Sound Detection Score (PSDS) as a more sophisticated metric for analysing sound event detection (SED) systems. It was then adopted by the annual DCASE Challenge organisers alongside the existing F1-score metric for Task 4.
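In essence, PSDS summarises a system's PSD-ROC curve (effective true-positive rate against effective false-positive rate across many operating points) as a normalised area up to a maximum eFPR, rather than scoring a single, arbitrarily tuned operating point the way F1 does. The sketch below is purely illustrative: the function and variable names are hypothetical, it omits PSDS's cross-trigger and across-class instability penalties, and the official implementation is the psds_eval package on GitHub.

```python
def psds_sketch(operating_points, max_efpr=100.0):
    """Illustrative approximation of PSDS: normalised area under a
    PSD-ROC curve, clipped at a maximum effective FPR.

    operating_points: list of (efpr, etpr) pairs, one per detection
    threshold, with efpr in events/hour and etpr in [0, 1].
    """
    # Keep only points within the eFPR budget and sort by eFPR
    pts = sorted(p for p in operating_points if p[0] <= max_efpr)
    if not pts:
        return 0.0
    # Extend the last operating point flat out to max_efpr
    pts.append((max_efpr, pts[-1][1]))
    # Trapezoidal integration, normalised so a perfect system scores 1
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area / max_efpr
```

For example, a curve through (0, 0), (10, 0.5), (50, 0.8) and (100, 0.9) events/hour scores 0.71 under this sketch, reflecting the whole operating range rather than one threshold.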

This year's paper, written jointly with Nicolas Turpault and Romain Serizel from the Université de Lorraine in France, presents an in-depth analysis of how PSDS can uniquely inform users about SED system performance compared with conventional metrics, using results from the 2020 DCASE Challenge.

The insights presented in the paper further support the universal adoption of PSDS as a sound detection evaluation standard across academic and industrial research teams.

We are very pleased to see that PSDS has been adopted by the DCASE Challenge organisers as the primary metric for the 2021 Task 4 challenge, so that the research community can compare the overall modelling power of the submitted systems independently from the influence of arbitrarily chosen sensitivity settings.

You can read this year’s paper here. 

You can download the poster from this year’s session here. 

You can watch the poster presentation via the video below or access it directly on Vimeo here.

You can find out how PSDS is an improvement on conventional metrics here. 

You can find out more about the PSDS formulation here. 

You can get access to the PSDS implementation via GitHub here.

We look forward to sharing our insights with peers at this year’s ICASSP conference and discussing how PSDS can be used to support the development of better sound recognition models. 

*****

Like this? You can subscribe to our blog and receive an alert every time we publish an announcement, a comment on the industry or something more technical. 

 

 About Audio Analytic 

Audio Analytic is the pioneer of AI sound recognition technology. The company is on a mission to give machines a compact sense of hearing. This empowers them with the ability to react to the world around us, helping satisfy our entertainment, safety, security, wellbeing, convenience, and communication needs across a huge range of consumer products.

Audio Analytic’s ai3™ and ai3-nano™ sound recognition software enables device manufacturers to equip products at the edge with the ability to recognize and automatically respond to our growing list of sounds and acoustic scenes.
