Affiliations 

  • 1 Department of Electrical and Electronic Engineering, University of Nottingham - Malaysia Campus, Semenyih, Selangor, Malaysia
PeerJ Comput Sci, 2021;7:e633.
PMID: 34322595 DOI: 10.7717/peerj-cs.633

Abstract

Incremental learning evolves deep neural network knowledge over time by learning continuously from new data, instead of training a model only once with all data available before training starts. Because new samples are continuously streaming in, the model must keep adapting to them. Images are high-dimensional data, so training deep neural networks on them is very time-consuming. Fog computing is a paradigm in which fog devices carry out computation near the data sources to reduce the computational load on the server. Fog computing democratizes deep learning by enabling intelligence at the fog devices; however, one of the main challenges is the high communication cost between fog devices and centralized servers, especially in incremental learning, where data samples arrive continuously and must be transmitted to the server for training. Working with Convolutional Neural Networks (CNNs), we demonstrate a novel data sampling algorithm that discards certain training images per class before training even starts, which reduces the transmission cost from the fog device to the server and the model training time while maintaining model learning performance in both static and incremental learning. Results show that our proposed method performs data sampling effectively regardless of the model architecture, dataset, and learning settings.
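To illustrate the general shape of per-class data sampling before transmission, the minimal Python sketch below groups a fog device's training set by class and keeps only a fraction of the images in each class, so that only the kept subset is sent to the server. The function name `sample_per_class`, the `keep_fraction` parameter, and the uniform random selection criterion are all assumptions for illustration; the abstract does not specify the paper's actual selection rule.

```python
import random
from collections import defaultdict

def sample_per_class(dataset, keep_fraction=0.7, seed=0):
    """Discard a portion of training images per class before transmission.

    `dataset` is assumed to be an iterable of (image, label) pairs.
    Uniform random sampling is a placeholder for the paper's actual
    (unspecified) selection criterion.
    """
    rng = random.Random(seed)

    # Group samples by class label so each class is reduced independently.
    by_class = defaultdict(list)
    for image, label in dataset:
        by_class[label].append((image, label))

    kept = []
    for label, samples in by_class.items():
        rng.shuffle(samples)
        # Keep at least one sample per class to preserve class coverage.
        n_keep = max(1, int(len(samples) * keep_fraction))
        kept.extend(samples[:n_keep])  # only these are sent to the server
    return kept
```

Under this sketch, transmission cost and training time scale down roughly with `keep_fraction`, since the server receives and trains on proportionally fewer images per class.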
