What is ai3™?
ai3™ is an embedded software platform that provides a simple API to inform you when a specific audio event or acoustic scene is present, enabling your product to react to it.
It is delivered as a comprehensive SDK, including cross-platform C libraries, reference implementations, custom sound recognition debugging tools and well-documented APIs.
Offer your customers the gold standard in performance and drive the adoption and usage of new sound-based features and services
- Accurate: Our models are trained using Alexandria™, the world’s largest audio dataset for machine learning, with more than 15m labelled sounds, 700 label types and over 200m metadata points
- Specialist: At the heart of ai3™ is an optimised deep neural network called AuditoryNET™, designed to model the acoustic and temporal features of sounds (events and scenes)
- Responsive: Our software recognises sounds immediately after they occur, with low latency on-device
Grow revenue opportunities
Create amazing new services and products that consumers value and are willing to pay for
- A wide range of sounds: Supporting applications across safety and security, health and wellness, convenience and entertainment, ai3 can enhance existing product feature sets and increase consumer benefits
Quick time to market
Seize the immediate business opportunity by deploying sound recognition that has already been proven in the real world
- Tested: Our testing methodology, facilities and extensive evaluation data give you peace of mind that our software is reliable
- Minimal risks: Consumer products featuring our technology have been deployed in over 165 countries by leading brands, including some of the world’s largest tech companies
- Comprehensive deliverables: Our SDK and simple cross-platform C libraries enable your teams to get to work immediately
Appropriately sourced audio data
Safeguard your company’s reputation by not exposing yourself to risks around the ethical and legal implications of machine learning data
- Guaranteed: We guarantee that we have the unambiguous rights to use 100% of the audio data we use to train our models – no scraped, repurposed or YouTube-sourced data
- Primary-sourced: We collect our own data in our dedicated anechoic facilities and through a global network of volunteers
- Trust: Our data can be used for commercial products the world over, both now and in the future, even as regulations change
Meet privacy demands
Your consumers demand AI that respects their privacy, especially with something as sensitive as sound
- AI at the edge: Our software is designed to run at the edge of the network on-device, so sounds never have to leave a consumer’s home, pocket, ears or car
- Build trust: Send your customers a strong message about AI they can be comfortable with
- Reduce cloud costs: Increase value without increasing costs. You don’t need expensive cloud infrastructure to analyse and store sounds
We recognise a whole world of sounds
A sense of hearing enables products as diverse as smart speakers, doorbells, cameras, smartphones and earbuds to deliver valuable benefits for our customers: driving usage, growing revenue through added-value services and differentiating against competitors. Examples of what ai3™ can recognise include:
- Window glass break
- Chaotic acoustic scene
- Boring acoustic scene