On the latest episode of the podcast I was joined by Professor Vijay Reddi. Professor Reddi is an Associate Professor in the John A. Paulson School of Engineering and Applied Sciences at Harvard University, where he focuses on mobile and edge computing systems and directs the university’s Edge Computing Lab. He also set up and ran the TinyML HarvardX course, which teaches the fundamentals of machine learning on embedded devices.

In addition, he is a founding member of MLCommons, a non-profit focused on accelerating AI innovation, and a co-chair of MLPerf Inference, which develops fair and useful benchmarks for measuring the performance of ML systems.

During our chat we talked about his work on what he calls the AI tax: the overheads we have to incur when building ML models, the things that get in the way. One significant challenge comes from data.

As Vijay says: “The elephant in the room is really how do you get the data, how do you pre-process the data, how do you move that data into the neural network.”

In a wide-ranging conversation we also talked about the drivers for running ML at the edge, how to define what ‘tiny’ means, and what kind of embedded ecosystem will be required in the near future.
For more information on Vijay and his work, please visit the Harvard University website (https://scholar.harvard.edu/vijay-janapa-reddi/home).

You can find the Harvard Edge Computing Lab here, MLCommons here, and the TinyML course from HarvardX here.