Releases: jinevening/ElasticDNN
IONN v2.0
This release adds the implementation of the enhanced partitioning algorithm proposed in the paper titled "Enhanced Partitioning of DNN Layers for Uploading from Mobile Devices to Edge Servers", published at EMDL (The 3rd International Workshop on Deep Learning for Mobile Systems and Applications) 2019.
This repo adds the following features:
- efficiency-based partitioning algorithm for partitioning the DNN model (a sketch follows this list)
- more efficient server-side implementation
- Caffe upgraded to the most recent version
- minor improvements to robustness and stability
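For context, here is a minimal Python sketch of the general idea behind efficiency-based partitioning: rank layers by the latency benefit they yield per unit of upload time, so the most useful layers reach the server earliest under limited uplink bandwidth. The `LayerProfile` fields, numbers, and function names are illustrative assumptions, not the repo's actual API or the exact algorithm from the EMDL'19 paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LayerProfile:
    # All values are illustrative assumptions, not real measurements.
    name: str
    client_ms: float   # execution time on the mobile device
    server_ms: float   # execution time on the edge server
    upload_kb: float   # size of the layer's parameters to upload

def efficiency(layer: LayerProfile, bandwidth_kbps: float) -> float:
    """Latency benefit gained per millisecond spent uploading this layer."""
    upload_ms = layer.upload_kb / bandwidth_kbps * 1000.0
    if upload_ms == 0.0:
        return float("inf")  # parameter-free layers cost nothing to offload
    return (layer.client_ms - layer.server_ms) / upload_ms

def plan_upload_order(layers: List[LayerProfile], bandwidth_kbps: float) -> List[str]:
    """Upload the most efficient layers first; skip layers not worth offloading."""
    worthwhile = [l for l in layers if l.client_ms > l.server_ms]
    worthwhile.sort(key=lambda l: efficiency(l, bandwidth_kbps), reverse=True)
    return [l.name for l in worthwhile]

if __name__ == "__main__":
    profiles = [
        LayerProfile("conv1", client_ms=40.0, server_ms=5.0, upload_kb=140.0),
        LayerProfile("fc6",   client_ms=90.0, server_ms=8.0, upload_kb=9000.0),
        LayerProfile("fc7",   client_ms=60.0, server_ms=7.0, upload_kb=4000.0),
    ]
    print(plan_upload_order(profiles, bandwidth_kbps=2000.0))
    # -> ['conv1', 'fc7', 'fc6']: small layers with large speedups go first
```

Ranking by benefit-per-upload-cost rather than by raw benefit is what makes the uploading order bandwidth-aware; the actual implementation lives in the IONN-client and IONN-server submodules.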
IONN v1.0
This release includes the implementation of IONN (Incremental Offloading of Neural Network), proposed in the paper titled "IONN: Incremental Offloading of Neural Network Computations From Mobile Devices to Edge Servers", published at SoCC (ACM Symposium on Cloud Computing) 2018.
The repository consists of two submodules, IONN-client and IONN-server, which run on the mobile device (ARM) and the server (x86), respectively.
This repo supports the following features:
- collaborative execution of a DNN model between client and server
- shortest path-based partitioning algorithm to minimize execution latency (see the sketch after this list)
- dominator-based merging for parallel DNN layers
- incremental uploading of a DNN model from client to server
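For context, here is a minimal Python sketch of shortest path-based partitioning over a per-layer execution graph, in the spirit of the SoCC'18 paper: each layer gets a client node and a server node, edge weights combine execution and transmission times, and the shortest start-to-end path gives the partition with the lowest predicted latency. The layer names, timing numbers, and helper functions are illustrative assumptions; the real partitioning runs inside the IONN-client and IONN-server submodules.

```python
import heapq
from typing import Dict, List, Tuple

# Per-layer profile: (name, client exec ms, server exec ms, ms to transfer the
# layer's input over the network). Names and numbers are illustrative only.
LAYERS: List[Tuple[str, float, float, float]] = [
    ("conv1", 40.0, 5.0, 12.0),
    ("pool1", 10.0, 2.0, 8.0),
    ("fc6",   90.0, 8.0, 6.0),
    ("fc7",   60.0, 7.0, 3.0),
]
RESULT_MS = 2.0  # time to send the final output back to the client

def build_graph() -> Dict[str, List[Tuple[str, float]]]:
    """Two nodes per layer (run on client / run on server).

    Edge weight = execution time of the destination layer, plus the transfer
    time of its input whenever execution moves between client and server.
    """
    g: Dict[str, List[Tuple[str, float]]] = {"start": [], "end": []}
    prev_c, prev_s = "start", None
    for name, c_ms, s_ms, tx_ms in LAYERS:
        nc, ns = f"{name}@client", f"{name}@server"
        g.setdefault(nc, []); g.setdefault(ns, [])
        g[prev_c].append((nc, c_ms))              # stay on the client
        g[prev_c].append((ns, tx_ms + s_ms))      # ship input up, run on server
        if prev_s is not None:
            g[prev_s].append((ns, s_ms))          # stay on the server
            g[prev_s].append((nc, tx_ms + c_ms))  # ship input down, run on client
        prev_c, prev_s = nc, ns
    g[prev_c].append(("end", 0.0))
    g[prev_s].append(("end", RESULT_MS))          # return the result to the client
    return g

def shortest_path(g: Dict[str, List[Tuple[str, float]]]) -> Tuple[float, List[str]]:
    """Plain Dijkstra from 'start' to 'end'; the path is the chosen partition."""
    dist, prev, pq = {"start": 0.0}, {}, [(0.0, "start")]
    while pq:
        d, u = heapq.heappop(pq)
        if u == "end":
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in g[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path, node = ["end"], "end"
    while node != "start":
        node = prev[node]
        path.append(node)
    return dist["end"], path[::-1]

if __name__ == "__main__":
    latency, path = shortest_path(build_graph())
    print(f"predicted latency: {latency:.1f} ms")
    print(" -> ".join(path))
```

The nodes on the resulting path labeled `@server` are the layers to offload; everything else stays on the client.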