The recent proposal by the European Commission to introduce measures regulating AI demonstrates the concern European regulators have long held about the potential for unregulated AI to adversely affect the fundamental rights of European citizens.

Our latest whitepaper looks at the significant financial and reputational risks of poor data practice for AI and machine learning. If you follow poor practices, use questionable public datasets or rely on inappropriate sources, then your AI is likely to be biased, discriminatory and sub-standard in real-world applications.

Audio data, and the rights to use it for machine learning purposes, are complicated. These complications and restrictions increase with scale, both in terms of technological capability and geographical coverage. The whitepaper focuses on the two broad categories that carry the largest risk of potential liabilities:

  • Category 1: Audio data that contains personally identifiable information (PII).

– For example, an audio recording of somebody saying their name. 

  • Category 2: Audio data (or other data sources that contain an audio component, such as video) that is covered by copyright or is available under certain licence conditions. 

– For example, video content uploaded to YouTube is made available to other users under certain licence conditions that limit their use of the content to YouTube’s platform, and the copyright usually sits with the person who created the content. 

A key part of training sound recognition systems is making sure that models learn to recognise certain sounds and ignore others. This means that, during the training process, an ML engineer needs both ‘target’ and ‘non-target’ audio data. Both are equally vital to the training process, so all data requires the same level of scrutiny.
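To make this concrete, here is a minimal sketch in Python of how a training manifest might pair target and non-target clips, with each clip carrying the same provenance metadata. The file names, labels and licence tags are illustrative assumptions, not taken from the whitepaper:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    path: str     # location of the audio recording
    label: int    # 1 = target sound, 0 = non-target sound
    licence: str  # rights reference under which the clip may be used

# Target examples: the sound the model should learn to recognise.
targets = [
    Clip("audio/glass_break_001.wav", label=1, licence="consented-commercial"),
]

# Non-target examples: confusable sounds the model must learn to ignore.
# These need exactly the same licensing scrutiny as the targets.
non_targets = [
    Clip("audio/cutlery_clatter_004.wav", label=0, licence="consented-commercial"),
    Clip("audio/door_slam_017.wav", label=0, licence="consented-commercial"),
]

training_set = targets + non_targets
```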

There are additional layers of complexity within Category 2 that further increase risk:

  • Capturing the real sounds in each environment is critical, so it is important to understand the different laws in each location and how they apply to your data.
  • If you are recording data in public spaces, you may still need a licence, especially if the recording is to be used for ‘commercial purposes’.
  • If you do have a licence, does the entity that granted it have the right to do so? For example, in the ongoing US class-action lawsuit Vance v. International Business Machines, IBM’s lawyers believed that the Creative Commons licences attached to Flickr images gave them the right to use those images for facial recognition. The lawsuit was filed after IBM released the Diversity in Faces dataset, which was used by IBM as well as Amazon, Microsoft and Google’s parent company Alphabet to improve their facial recognition software. Another interesting copyright case involving an intermediary was Davidson v. United States, in which a sculptor successfully sued the US Postal Service for $3.5m for using a picture of his sculpture, licensed through an intermediary (in this case Getty Images), without his permission.
  • It cannot be assumed that licence terms are granted in perpetuity; in most cases, they are not. Parties that commercialise data and intellectual property rights will control access through contracts. Likewise, rights secured in relation to personally identifiable data, even where based on consent, are neither irrevocable nor perpetual. Rights granted in relation to data expire as time passes and circumstances change, so businesses have a responsibility to manage those rights now and in the future, whether they are granted by or through an organisation or directly by the individual. A central objective of data privacy legislation is to give individuals control over how their personal data is used, including the ability to revoke access.

The rights-management side of machine learning is a significant undertaking, especially as datasets must grow as the technology develops and scales. Each data point (recordings, in the case of sound recognition) needs a fully traceable audit trail, along with the appropriate licences and evidence of the right to use it for the correct purposes.
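As a rough illustration, a per-recording rights record might look like the sketch below. The field names and the `usable_for` check are assumptions for illustration only, not a description of any particular rights-management system:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional, Set

@dataclass
class RightsRecord:
    recording_id: str
    source: str                   # who supplied the clip, for the audit trail
    licence_ref: str              # licence or consent document reference
    permitted_purposes: Set[str]  # e.g. {"model-training"}
    expires: Optional[date]       # None only if the grant is genuinely perpetual
    consent_revoked: bool = False # individuals can withdraw consent later

    def usable_for(self, purpose: str, on: date) -> bool:
        """A clip is usable only if consent still stands, the licence has
        not expired, and the purpose is explicitly covered."""
        if self.consent_revoked:
            return False
        if self.expires is not None and on > self.expires:
            return False
        return purpose in self.permitted_purposes

record = RightsRecord(
    recording_id="glass_break_001",
    source="Vendor A",
    licence_ref="LIC-2021-0042",
    permitted_purposes={"model-training"},
    expires=date(2024, 12, 31),
)
assert not record.usable_for("model-training", on=date(2025, 1, 1))  # expired
```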

You can read more about the topic in our recent whitepaper ‘Audio for Machine Learning: The Law and your Reputation’. 

*****

Like this? You can subscribe to our blog and receive an alert every time we publish an announcement, a comment on the industry or something more technical. 

 

About Audio Analytic

Audio Analytic is the pioneer of AI sound recognition technology. The company is on a mission to give machines a compact sense of hearing. This empowers them with the ability to react to the world around us, helping satisfy our entertainment, safety, security, wellbeing, convenience, and communication needs across a huge range of consumer products.

Audio Analytic’s ai3™ and ai3-nano™ sound recognition software enables device manufacturers to equip products at the edge with the ability to recognize and automatically respond to our growing list of sounds and acoustic scenes.
