Stochastic computing (SC) has been studied extensively for application-specific integrated circuit (ASIC) design in artificial intelligence (AI) edge computing, particularly for convolutional neural network (CNN) algorithms. However, SC has seen little to no optimization for field-programmable gate arrays (FPGAs). Scaling up ASIC-oriented logic without FPGA-specific designs is inefficient, and aggregating thousands of bitstreams remains challenging in conventional SC. This research reinvents several FPGA-efficient 8-bit SC CNN computing architectures, namely the SC multiplexer multiply-accumulate unit, the multiply-accumulate function generator, and the binary rectified linear unit, and successfully scales and implements a fully parallel CNN model on a Kintex-7 FPGA. Compared with binary computing on the Modified National Institute of Standards and Technology (MNIST) handwritten-digit classification task, the proposed SC hardware sacrifices only 0.14% accuracy while achieving at least 99.72% energy saving per image feedforward and 31× higher data throughput than modern hardware. Unique to SC, early decision termination raises the performance baseline exponentially with minimal accuracy loss, making SC CNNs highly attractive for AI edge computing, though limited to classification tasks. SC's inherent noise heavily penalizes CNN regression performance, rendering SC unsuitable for regression tasks.
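To illustrate the arithmetic style the abstract refers to, the following is a minimal sketch (not the paper's implementation) of two basic SC primitives: unipolar bitstream multiplication with an AND gate, and scaled addition with a multiplexer (MUX). The bitstream length, unipolar encoding, and random number sources here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096  # bitstream length (assumed); longer streams reduce SC noise

def encode(p, n=N):
    """Encode a probability p in [0, 1] as a unipolar stochastic bitstream."""
    return (rng.random(n) < p).astype(np.uint8)

def decode(bits):
    """Recover the encoded value as the fraction of 1s in the stream."""
    return bits.mean()

def sc_multiply(x_bits, y_bits):
    """Unipolar SC multiplication: bitwise AND of two independent streams."""
    return x_bits & y_bits

def sc_mux_add(x_bits, y_bits, select_bits):
    """Scaled SC addition: a MUX driven by a 0.5-probability select stream
    computes (x + y) / 2, keeping the result inside [0, 1]."""
    return np.where(select_bits == 1, x_bits, y_bits)

x, y = 0.75, 0.40
xb, yb, sel = encode(x), encode(y), encode(0.5)

print("x*y      exact %.3f  SC %.3f" % (x * y, decode(sc_multiply(xb, yb))))
print("(x+y)/2  exact %.3f  SC %.3f" % ((x + y) / 2, decode(sc_mux_add(xb, yb, sel))))
```

The MUX-based scaled addition is what allows many bitstreams to be accumulated with very little logic, while the random fluctuation visible in the decoded values is the inherent SC noise the abstract identifies as the obstacle for regression tasks.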