February 28, 2019
The distributed intelligence triple-whammy: 5G, AI and tinyML
Back in September 2018, Forbes contributor Prakash Sangam heralded two major tech trends, AI and 5G, and made the case, using autonomous driving, for distributed intelligence: AI running at the intelligent edge AND in the intelligent cloud. His point is to keep the critical ‘sensing’ required for immediate action in the car, while placing processing-intensive functions in the cloud. 5G ends up being the connective glue between the two intelligent systems, offering low latency, high bandwidth and high data-transfer speeds.
I’ve just got back from Mobile World Congress and, if you look past the foldable-phone fad, AI and 5G were the dominant buzzwords emblazoned on most stands. Each new cellular generation ushers in more products, excitement and hyperbole, but in 5G’s case the hype may be justified. 4G offered most consumers little more than a faster wireless connection, but the underlying architecture of 5G networks brings a step change in latency as well as speed and bandwidth.
While Prakash rightly points out the virtues of 5G and AI, we should also add tinyML to this ‘Venn diagram of exciting tech’. Processors continue to get smaller, faster and more power-efficient. Even as Moore’s Law finally falters, the principle of a major step change with each new generation of architecture holds true. For example, Arm recently unveiled its Helium technology, which supercharges AI at the edge.
tinyML, an effort led by Google, Qualcomm and Arm, is critical to AI because of this need to run intelligence at the edge, whether for privacy or immediacy. And 5G is critical to connecting the two halves of the distributed system and carrying large amounts of data to the cloud from what is expected to be billions of devices.
What could this mean in reality? A device could detect and accurately categorise sounds locally before sending the clip or its metadata to the cloud over a 5G network. Once in the cloud, that data could be analysed by algorithms that identify interesting trends over time or fuse together data from multiple sources and devices, such as other sounds, images or location.
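To make the edge/cloud split concrete, here is a minimal, hypothetical sketch in Python. The classifier is a stand-in (a simple loudness rule, not a real tinyML model), and the "uplink" is just a list standing in for a 5G connection; the point is the shape of the pipeline, with classification happening on-device and only small metadata records travelling to the cloud:

```python
# Hypothetical sketch of the on-device-classify-then-upload pattern.
# classify_on_device() is a placeholder for a real tinyML sound model.

from dataclasses import dataclass

@dataclass
class SoundEvent:
    label: str        # category assigned on-device
    confidence: float # classifier confidence, 0.0 - 1.0
    timestamp: float  # frame index standing in for a real clock

def classify_on_device(samples):
    """Stand-in for an edge sound classifier: flags loud frames as an
    event of interest, everything else as background noise."""
    peak = max(abs(s) for s in samples)
    if peak > 0.8:
        return "loud_event", 0.95
    return "background", 0.60

def edge_pipeline(frames, uplink, threshold=0.9):
    """Classify every audio frame locally; only confident, interesting
    events are sent (as small metadata records) over the uplink."""
    for t, samples in enumerate(frames):
        label, confidence = classify_on_device(samples)
        if label != "background" and confidence >= threshold:
            uplink.append(SoundEvent(label, confidence, float(t)))
    return uplink

# Usage: two frames of audio; only the loud one reaches the "cloud".
frames = [[0.1, -0.2, 0.05], [0.95, -0.9, 0.4]]
cloud_queue = edge_pipeline(frames, uplink=[])
```

In the cloud, records like `cloud_queue` from many devices could then be aggregated and fused with other sensor data, which is where the trend analysis described above would run.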
I’m excited by the combination of these technologies because we can work with our customers and partners to achieve the visions we share for a broad range of intelligent products that take full advantage of the sense of hearing. Those could be products for the intelligent home, headphones, smartphones or cars.
The possibilities are boundless.