Distributed Machine Learning for Internet of Things
Machine Learning (ML) is an essential technology in Internet of Things (IoT) applications, allowing them to infer higher-level information from the large amounts of raw data collected by IoT devices. However, current state-of-the-art ML models often place significant demands on memory, computation, and energy, which is incompatible with the resource-constrained nature of IoT devices, characterized by limited energy budgets, memory, and computational capability. Typically, ML models are trained and executed in the cloud, which requires data from IoT systems to be sent across networks for processing. This cloud-centric approach is computationally scalable, as more resources for complex analytics are readily available, but it has three principal drawbacks. First, the response time of processing in geographically distant data centers may not meet the real-time requirements of latency-critical applications. Second, cloud-centric methods risk exposing private and sensitive information during data transmission, remote processing, and storage. Third, transferring raw data to the centralized cloud increases the ingress bandwidth demand on the backhaul network.
An alternative approach is to execute ML on IoT end devices. However, executing computationally intensive models, such as Deep Neural Networks (DNNs), on heavily constrained IoT devices remains a challenge. Thus, both cloud-only and device-only execution, although straightforward, are impractical for a wide range of IoT applications, such as health, industrial, and multimedia-based IoT applications. To overcome these limitations, recent studies have proposed utilizing computation resources closer to the data-collecting IoT devices through distributed computing. The objective of this research is to determine how to efficiently distribute ML tasks across the different elements of IoT systems, taking into account computation and communication constraints.
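One way to distribute an ML task is split inference: the device runs the first layers of a DNN and ships only the (typically smaller) intermediate activations to an edge server, which finishes the computation. The sketch below illustrates the idea with a toy two-layer network in plain Python; the layer sizes, weights, and split point are illustrative assumptions, not taken from any of the systems cited here.

```python
# Toy sketch of split (device/edge) DNN inference.
# The network, weights, and the choice of split point are hypothetical.

def relu(x):
    """Element-wise ReLU activation."""
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    """Fully connected layer: one weight row and bias per output neuron."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def device_forward(x, w1, b1):
    """Runs on the IoT device: only the first, cheap layer."""
    return relu(dense(x, w1, b1))

def edge_forward(h, w2, b2):
    """Runs on the edge server: finishes inference on the intermediate tensor."""
    return dense(h, w2, b2)

if __name__ == "__main__":
    x = [1.0, 2.0]                              # raw sensor reading
    w1 = [[0.5, -0.2], [0.1, 0.3]]; b1 = [0.0, 0.1]
    w2 = [[1.0, 1.0]];              b2 = [0.0]
    h = device_forward(x, w1, b1)   # in a real system, h is sent over the network
    y = edge_forward(h, w2, b2)
    print(y)
```

The split point is the key design knob: moving it later reduces the transmitted activation size but increases on-device computation, which is exactly the computation/communication trade-off this research targets.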
Related Literature:
[1] Wiebke Toussaint and Aaron Yi Ding. “Machine Learning Systems in the IoT: Trustworthiness Trade-offs for Edge Intelligence”. In: arXiv preprint arXiv:2012.00419 (2020).
[2] Ahmed Imteaj, Urmish Thakker, Shiqiang Wang, Jian Li, and M Hadi Amini. “Federated Learning for Resource-Constrained IoT Devices: Panoramas and State-of-the-Art”. In: arXiv preprint arXiv:2002.10610 (2020).
[3] Yansong Gao, Minki Kim, Chandra Thapa, Sharif Abuadbba, Zhi Zhang, Seyit A Camtepe, Hyoungshick Kim, and Surya Nepal. “Evaluation and Optimization of Distributed Machine Learning Techniques for Internet of Things”. In: arXiv preprint arXiv:2103.02762 (2021).
[4] Ramyad Hadidi, Jiashen Cao, Michael S. Ryoo, and Hyesoon Kim. “Toward Collaborative Inferencing of Deep Neural Networks on Internet-of-Things Devices”. In: IEEE Internet of Things Journal 7.6 (2020), pp. 4950–4960.
[5] Zhuoran Zhao, Kamyar Mirzazad Barijough, and Andreas Gerstlauer. “DeepThings: Distributed Adaptive Deep Learning Inference on Resource-Constrained IoT Edge Clusters”. In: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 37.11 (2018), pp. 2348–2359.
Contact
- Name / Title
- Eric Samikwa
- Function
- Research Assistant
- eric.samikwa@unibe.ch
- Phone
- +41 31 684 66 91