IONN v1.0
This release includes the implementation of IONN (Incremental Offloading of Neural Network), proposed in the paper "IONN: Incremental Offloading of Neural Network Computations From Mobile Devices to Edge Servers," published at SoCC (ACM Symposium on Cloud Computing) 2018.
The repository consists of two submodules, IONN-client and IONN-server, which run on the mobile device (ARM) and the server (x86), respectively.
This repo supports the following features:
- collaborative execution of a DNN model between client and server
- shortest path-based partitioning algorithm to minimize execution latency (sketched below)
- dominator-based merging for parallel DNN layers (sketched below)
- incremental uploading of a DNN model from client to server (sketched below)
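
To give a feel for the shortest path-based partitioning, here is a minimal sketch in Python. It assumes a linear chain of layers and a simplified cost model (per-layer client/server execution times, feature-map and weight sizes, a single link bandwidth); the `Layer` and `partition` names and the one-shot weight-upload charge are illustrative assumptions, not the actual IONN-client/IONN-server code.

```python
"""A minimal sketch of shortest path-based partitioning for a linear chain of
DNN layers.  The Layer fields, the cost model, and the one-shot weight-upload
charge are simplifying assumptions for illustration only."""

import heapq
from dataclasses import dataclass


@dataclass
class Layer:
    name: str
    client_ms: float    # estimated execution time on the mobile device
    server_ms: float    # estimated execution time on the edge server
    output_kb: float    # size of the layer's output feature map
    weights_kb: float   # size of the layer's parameters (uploaded if offloaded)


def partition(layers, uplink_kbps, input_kb):
    """Return ([(layer name, 'client' | 'server'), ...], estimated latency in ms).

    Nodes of the search graph are (layer index, location) pairs.  Executing the
    next layer on the same side costs that side's execution time; crossing
    sides additionally costs transferring the current feature map, and running
    a layer on the server charges uploading its weights (up/down links are
    treated symmetrically for simplicity).  Dijkstra over this graph yields the
    minimum-latency placement.
    """
    def xfer_ms(kb):
        return kb / uplink_kbps * 1000.0

    n = len(layers)
    start = (-1, "client")                     # the input starts on the device
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, (i, loc) = heapq.heappop(pq)
        if d > dist.get((i, loc), float("inf")) or i == n - 1:
            continue                           # stale entry or terminal node
        nxt = layers[i + 1]
        out_kb = input_kb if i < 0 else layers[i].output_kb
        for nloc in ("client", "server"):
            cost = nxt.client_ms if nloc == "client" else (
                nxt.server_ms + xfer_ms(nxt.weights_kb))
            if nloc != loc:
                cost += xfer_ms(out_kb)        # move features across the link
            node, nd = (i + 1, nloc), d + cost
            if nd < dist.get(node, float("inf")):
                dist[node], prev[node] = nd, (i, loc)
                heapq.heappush(pq, (nd, node))

    # Pick the cheaper terminal state and walk the predecessor chain back.
    end = min(((n - 1, l) for l in ("client", "server")),
              key=lambda s: dist.get(s, float("inf")))
    plan, node = [], end
    while node[0] >= 0:
        plan.append((layers[node[0]].name, node[1]))
        node = prev[node]
    return plan[::-1], dist[end]
```

Calling `partition` with per-layer profiles and a measured link bandwidth yields a client/server placement for every layer together with the estimated end-to-end latency; IONN recomputes this decision as conditions and the set of uploaded layers change.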
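The dominator-based merging can be pictured with the small sketch below: for models whose layer graph is a DAG with parallel branches, layers lying between two consecutive dominators of the output are merged into one super-node so that the chain-based shortest-path partitioning still applies. The graph representation, the grouping rule, and the function name `merge_parallel_layers` are illustrative assumptions, not IONN's exact algorithm.

```python
"""A small sketch of dominator-based merging for DAG-shaped layer graphs.
Assumes a single input layer that dominates the whole graph; layers between
consecutive dominators of the output are merged with the preceding dominator
into one super-node, yielding a chain of groups."""

from graphlib import TopologicalSorter


def merge_parallel_layers(preds, sink):
    """preds maps each layer to the list of layers feeding into it."""
    order = list(TopologicalSorter(preds).static_order())  # input layer first

    # Dominators of v = {v} plus the intersection of its predecessors' dominators.
    dom = {}
    for v in order:
        ps = preds.get(v, [])
        common = set.intersection(*(dom[p] for p in ps)) if ps else set()
        dom[v] = common | {v}

    # Every dominator of the output is a valid cut point; group each remaining
    # layer with the most recent cut point seen in topological order.
    chain = []
    for v in order:
        if v in dom[sink]:
            chain.append([v])          # start a new super-node at a cut point
        else:
            chain[-1].append(v)        # parallel layer: merge into current group
    return chain


# Example: a diamond a -> {b, c} -> d collapses to the chain [[a, b, c], [d]].
print(merge_parallel_layers({"b": ["a"], "c": ["a"], "d": ["b", "c"]}, "d"))
```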
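Finally, the incremental uploading idea can be illustrated with the toy sketch below: a background thread streams model partitions from the client to the server while queries keep running, and each query offloads a layer only if its partition has already arrived. The length-prefixed framing and every function name here (`upload_partitions`, `run_query`, `execute_local`, `execute_remote`) are assumptions for illustration; they do not reflect IONN's actual wire protocol.

```python
"""A toy sketch of incremental uploading under the assumptions stated above."""

import socket
import struct
import threading


def upload_partitions(server_addr, partitions):
    """Stream (name, blob) partitions one by one; return the live set of
    partition names the server has acknowledged so far."""
    uploaded = set()

    def worker():
        with socket.create_connection(server_addr) as sock:
            for name, blob in partitions:
                frame = name.encode() + b"\n" + struct.pack("!I", len(blob)) + blob
                sock.sendall(frame)            # length-prefixed partition frame
                sock.recv(2)                   # block until the server ACKs
                uploaded.add(name)             # partition is now usable remotely

    threading.Thread(target=worker, daemon=True).start()
    return uploaded


def run_query(x, plan, uploaded, execute_local, execute_remote):
    """Run one inference request following the partitioning plan, falling back
    to local execution for any layer whose partition has not arrived yet."""
    for layer, where in plan:
        if where == "server" and layer not in uploaded:
            where = "client"                   # server does not have this layer yet
        x = (execute_remote if where == "server" else execute_local)(layer, x)
    return x
```

As more partitions arrive, more of the plan is honored on the server side, so query latency improves gradually instead of waiting for the whole model to be uploaded, which is the incremental part of the design.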