What are ai3™ and ai3-nano™?
ai3™ and ai3-nano™ are embedded software platforms that provide a simple API to notify you when a specific audio event or acoustic scene is present, enabling your product to react to it.
ai3™ is designed for a wide range of device types, while ai3-nano™ is specifically designed for true-wireless earbuds and products that must run in always-on mode, such as smartphones, where power efficiency is critical.
They are delivered as a comprehensive SDK, including cross-platform C libraries, reference implementations, custom sound recognition debugging tools and well-documented APIs.
Offer your customers the gold standard in performance and drive the adoption and usage of new sound-based features and services
- Accurate: Our models are trained using Alexandria™, the world’s largest audio dataset for machine learning, with more than 40 million labelled recordings across 1,200 sound classes
- Specialist: At the heart of ai3™ and ai3-nano™ is an optimised deep neural network called AuditoryNET™, designed to model the acoustic and temporal features of sounds (events and scenes)
- Responsive: Our software is fast, recognizing sounds immediately after they occur
Grow revenue opportunities
Create amazing new services and products that consumers value and are willing to pay for
- A wide range of sounds: Supporting applications across safety and security, health and wellness, convenience, communication and entertainment, ai3™ and ai3-nano™ can enhance existing product feature sets and increase consumer benefits.
Quick time to market
Seize the immediate business opportunity by deploying sound recognition that has already been proven in the real world
- Tested: Our testing methodology, facilities and huge amounts of evaluation data give you peace of mind that our software is reliable
- Robust: Consumer products featuring our technology have been deployed in over 150 countries by leading brands, including some of the world’s largest tech companies
- Comprehensive deliverables: Our SDK and simple cross-platform C libraries enable your teams to get to work immediately
Appropriately sourced audio data
Safeguard your company’s reputation by not exposing yourself to risks around the ethical and legal implications of machine learning data
- Guaranteed: We guarantee that we have the unambiguous rights to use 100% of the audio data we use to train our models – no scraping, repurposing or YouTube data
- Primary-sourced: We collect our own data in our dedicated anechoic facilities and through a global network of volunteers
- Trust: Our data can be used for commercial products the world over, both now and in the future, even as regulations change
Meet privacy demands
Your consumers demand AI that respects their privacy, especially with something as sensitive as sound
- AI at the edge: Our software is designed to run at the edge of the network on-device, so sounds never have to leave a consumer’s home, pocket, ears or car
- Build trust: Deliver a strong message to your customers around AI that they can be comfortable with
- Reduce cloud costs: Increase value without increasing costs. You don’t need expensive cloud infrastructure to analyse and store sounds
We recognise a whole world of sounds
A sense of hearing enables products as diverse as smart speakers, doorbells, cameras, smartphones and earbuds to deliver a range of valuable benefits for our customers, such as driving usage, growing revenue through added-value services and competitive differentiation.
- Vehicle reversing alert
- Emergency vehicle siren
- Window glass break
- Acoustic scene recognition
If you are a consumer electronics manufacturer and would like to embed our ai3™ software into your product, please get in touch.
Information about the companies we work with is highly confidential. However, there are a few that you can find out more about.
Our expertise in machine learning, data and tinyML enables us to create the most accurate sound recognition technology on the planet.
ICASSP ‘22: Watch Dr Çağdaş Bilen’s industry expert talk on temporal decisions now
“It’s about the impact on people” – Prof. Mark Plumbley on the future of academic research
“I started off thinking that you had to hear through the ears” – Dame Evelyn Glennie on being a better listener