
Challenges and Opportunities of DNN Model Execution Caching


Abstract

We explore the opportunities and challenges of model execution caching, a nascent research area that promises to improve the performance of cloud-based deep inference serving. Broadly, model execution caching relies on servers that are geographically close to the end-device to service inference requests, resembling a traditional content delivery network (CDN). However, unlike a CDN, such schemes cache execution rather than static objects. We identify the key challenges inherent to this problem domain and describe the similarities and differences with existing caching techniques. We further introduce several emergent concepts unique to this domain, such as memory-adaptive models and multi-model hosting, which allow us to make dynamic adjustments to the memory requirements of model execution.
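To make the core idea concrete, here is a minimal sketch (ours, not from the paper) of an execution cache in Python: models stay resident on a serving node up to a fixed memory budget, and least-recently-used models are evicted to make room for new ones, a crude stand-in for the dynamic memory adjustment the abstract describes. All names here (ModelExecutionCache, infer, load_fn, size_mb) are hypothetical; a real system would track GPU memory, load costs, and intermediate execution state.

    from collections import OrderedDict

    class ModelExecutionCache:
        """Keep loaded models resident up to a memory budget, evicting LRU.

        Models are stand-ins: any object with an estimated memory
        footprint and a callable for inference.
        """

        def __init__(self, budget_mb: int):
            self.budget_mb = budget_mb
            self.used_mb = 0
            self._resident: "OrderedDict[str, tuple]" = OrderedDict()

        def infer(self, name: str, load_fn, size_mb: int, request):
            """Serve one inference request, loading the model on a miss."""
            if name in self._resident:
                # Cache hit: refresh recency and reuse the resident model,
                # skipping the expensive load step entirely.
                self._resident.move_to_end(name)
                model, _ = self._resident[name]
            else:
                # Cache miss: evict least-recently-used models until the
                # new model fits, then pay the load cost once. (If a single
                # model exceeds the budget, this sketch loads it anyway.)
                while self._resident and self.used_mb + size_mb > self.budget_mb:
                    _, (_, freed) = self._resident.popitem(last=False)
                    self.used_mb -= freed
                model = load_fn()
                self._resident[name] = (model, size_mb)
                self.used_mb += size_mb
            return model(request)

    # Usage: the lambda stands in for loading and running a real DNN.
    cache = ModelExecutionCache(budget_mb=4096)
    fake_model = lambda batch: [x * 2 for x in batch]
    print(cache.infer("resnet50", load_fn=lambda: fake_model, size_mb=98, request=[1, 2, 3]))

Unlike a CDN cache keyed on static objects, the cached entries here are loaded, ready-to-execute models, which is what makes eviction and memory sizing the central design questions.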


Gilman2019 PDF


BibTeX

@InProceedings{Gilman2019,
  author    = {Gilman, Guin R. and Ogden, Samuel S. and Walls, Robert J. and Guo, Tian},
  booktitle = {Proceedings of the Workshop on Distributed Infrastructures for Deep Learning},
  title     = {Challenges and Opportunities of {DNN} Model Execution Caching},
  year      = {2019},
  pages     = {7--12},
}