Efficient Mobile Deep Inference
Increasingly, mobile applications leverage deep learning models to support features such as image recognition, speech-to-text, and translation.
Many of these models are computationally intensive, so a mobile application must either run a simplified local model with lower accuracy or offload inference to a well-provisioned remote server.
The choice between these two options is generally made statically by developers, and when the application encounters unexpected devices or network dead zones, a fixed choice can quickly lead to poor user experiences.
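To make the static-versus-dynamic distinction concrete, here is a minimal sketch of what a runtime dispatcher might look like. All names, thresholds, and signals here (the latency budget, the battery cutoff) are hypothetical illustrations, not part of the MODI design:

```python
# Hypothetical latency budget (ms): above this, remote inference is assumed
# to feel slower to the user than running the simplified on-device model.
REMOTE_LATENCY_BUDGET_MS = 150.0

def choose_inference_mode(network_rtt_ms, battery_fraction):
    """Pick 'local' or 'remote' at runtime instead of hard-coding the choice.

    network_rtt_ms: measured round-trip time to the inference server;
    float('inf') models a network dead zone.
    battery_fraction: remaining battery in [0, 1]; heavy local compute is
    avoided when the battery is nearly empty.
    """
    if network_rtt_ms == float("inf"):
        return "local"    # dead zone: the remote server is unreachable
    if battery_fraction < 0.1:
        return "remote"   # preserve battery by offloading computation
    if network_rtt_ms > REMOTE_LATENCY_BUDGET_MS:
        return "local"    # network too slow for a responsive experience
    return "remote"       # fast network: use the more accurate remote model
```

A real platform would fold in more signals (device compute capability, model accuracy targets, server load), but even this toy dispatcher avoids the failure mode of a statically chosen mode in a dead zone.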
The MODI project, led by my advisor Dr. Tian Guo, aims to examine new ways to design and implement a mobile-aware deep inference platform that combines innovations in both algorithm and system optimizations.
For a more in-depth explanation, I would recommend checking out my advisor's research page.