Building Lightweight Deep Learning Models with TensorFlow Lite for Human Activity Recognition on Mobile Devices


Bursa S. Ö., Durmaz İncel Ö., Işıklar Alptekin G.

Annales des Telecommunications/Annals of Telecommunications, vol. 78, no. 11-12, pp. 687-702, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 78 Issue: 11-12
  • Publication Date: 2023
  • DOI: 10.1007/s12243-023-00962-x
  • Journal Name: Annales des Telecommunications/Annals of Telecommunications
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, PASCAL, Compendex, INSPEC, zbMATH
  • Page Numbers: pp. 687-702
  • Keywords: Deep learning (DL), Energy consumption, Human activity recognition (HAR), Resource-constrained devices, Wearable sensors
  • Galatasaray University Affiliated: Yes

Abstract

Human activity recognition (HAR) is a research domain that enables continuous monitoring of human behaviors for various purposes, from assisted living to surveillance in smart home environments. These applications generally work with a rich collection of sensor data generated using smartphones and other low-power wearable devices. The amount of collected data can quickly become immense, necessitating time- and resource-consuming computations. Deep learning (DL) has recently become a promising trend in HAR. However, it is challenging to train and run DL algorithms on mobile devices due to their limited battery power, memory, and computation units. In this paper, we evaluate and compare the performance of four different deep architectures trained on three datasets from the HAR literature (WISDM, MobiAct, OpenHAR). We use the TensorFlow Lite platform with quantization techniques to convert the models into lighter versions for deployment on mobile devices. We compare the performance of the original models in terms of accuracy, size, and resource usage with their optimized versions. The experiments reveal that model size and resource consumption can be significantly reduced when the models are optimized with TensorFlow Lite, without sacrificing accuracy.
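For illustration, the sketch below shows the kind of post-training quantization workflow the abstract refers to: training a small model on windowed sensor data, converting it with the TensorFlow Lite converter, and running the quantized model with the TFLite interpreter. The 1D-CNN architecture, window size, and class count here are illustrative assumptions, not the paper's actual models or dataset configurations.

# Minimal sketch of TensorFlow Lite conversion with post-training quantization,
# assuming a hypothetical HAR setup: 128-sample windows of 3-axis accelerometer
# data and 6 activity classes (not the paper's exact settings).
import numpy as np
import tensorflow as tf

WINDOW, CHANNELS, NUM_CLASSES = 128, 3, 6

# Illustrative lightweight 1D-CNN for activity classification.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(...) would be run here on the windowed HAR training data.

# Convert to TensorFlow Lite with default (dynamic-range) quantization,
# which typically shrinks the model file and reduces on-device resource use.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("har_model_quant.tflite", "wb") as f:
    f.write(tflite_model)

# Inference with the TFLite interpreter (as it would run on a mobile device).
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
window = np.random.rand(1, WINDOW, CHANNELS).astype(np.float32)  # placeholder sensor window
interpreter.set_tensor(inp["index"], window)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))  # predicted class probabilities

The same converter also supports stronger schemes (e.g., full-integer quantization with a representative dataset), which trade a small amount of accuracy for further reductions in size and latency; the paper compares such optimized variants against the original models.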