PMID- 37430583
OWN - NLM
STAT- PubMed-not-MEDLINE
DCOM- 20230711
LR - 20230718
IS - 1424-8220 (Electronic)
IS - 1424-8220 (Linking)
VI - 23
IP - 10
DP - 2023 May 11
TI - Quantization-Aware NN Layers with High-throughput FPGA Implementation for Edge AI.
LID - 10.3390/s23104667 [doi]
LID - 4667
AB - Over the past few years, several applications have been extensively exploiting the advantages of deep learning, in particular when using convolutional neural networks (CNNs). The intrinsic flexibility of such models makes them widely adopted in a variety of practical applications, from medical to industrial. In the latter scenario, however, consumer Personal Computer (PC) hardware is not always suitable for the potentially harsh conditions of the working environment and the strict timing constraints that industrial applications typically impose. Therefore, the design of custom FPGA (Field Programmable Gate Array) solutions for network inference is gaining considerable attention from researchers and companies alike. In this paper, we propose a family of network architectures composed of three kinds of custom layers that work with integer arithmetic at a customizable precision (down to just two bits). Such layers are designed to be effectively trained on classical GPUs (Graphics Processing Units) and then synthesized to FPGA hardware for real-time inference. The idea is to provide a trainable quantization layer, called Requantizer, acting both as a non-linear activation for neurons and as a value rescaler to match the desired bit precision. This way, the training is not only quantization-aware, but also capable of estimating the optimal scaling coefficients to accommodate both the non-linear nature of the activations and the constraints imposed by the limited precision. In the experimental section, we test the performance of this kind of model both on classical PC hardware and on a case-study implementation of a signal peak-detection device running on a real FPGA. We employ TensorFlow Lite for training and comparison, and use Xilinx FPGAs and Vivado for synthesis and implementation. The results show that the accuracy of the quantized networks is close to that of the floating-point version, without the need for representative calibration data as in other approaches, and that their performance is better than that of dedicated peak-detection algorithms. The FPGA implementation runs in real time at a rate of four gigapixels per second with moderate hardware resources, while achieving a sustained efficiency of 0.5 TOPS/W (tera operations per second per watt), in line with custom integrated hardware accelerators.
FAU - Pistellato, Mara
AU - Pistellato M
AUID- ORCID: 0000-0001-6273-290X
AD - Dipartimento di Scienze Ambientali, Informatica e Statistica (DAIS), Universita Ca' Foscari di Venezia, Via Torino 155, 30170 Venezia, Italy.
FAU - Bergamasco, Filippo
AU - Bergamasco F
AUID- ORCID: 0000-0001-6668-1556
AD - Dipartimento di Scienze Ambientali, Informatica e Statistica (DAIS), Universita Ca' Foscari di Venezia, Via Torino 155, 30170 Venezia, Italy.
FAU - Bigaglia, Gianluca
AU - Bigaglia G
AUID- ORCID: 0000-0001-7089-7632
AD - Dipartimento di Management, Universita Ca' Foscari di Venezia, Cannaregio 873, 30121 Venezia, Italy.
FAU - Gasparetto, Andrea
AU - Gasparetto A
AUID- ORCID: 0000-0003-4986-0442
AD - Dipartimento di Management, Universita Ca' Foscari di Venezia, Cannaregio 873, 30121 Venezia, Italy.
FAU - Albarelli, Andrea
AU - Albarelli A
AUID- ORCID: 0000-0002-3659-5099
AD - Dipartimento di Scienze Ambientali, Informatica e Statistica (DAIS), Universita Ca' Foscari di Venezia, Via Torino 155, 30170 Venezia, Italy.
FAU - Boschetti, Marco
AU - Boschetti M
AD - Covision Lab SCARL, Via Durst 4, 39042 Bressanone, Italy.
FAU - Passerone, Roberto
AU - Passerone R
AUID- ORCID: 0000-0001-6315-1023
AD - Dipartimento di Ingegneria e Scienza dell'Informazione (DISI), University of Trento, Via Sommarive 9, 38123 Trento, Italy.
LA - eng
GR - ID103/SMACT Competence Center scpa/
PT - Journal Article
DEP - 20230511
PL - Switzerland
TA - Sensors (Basel)
JT - Sensors (Basel, Switzerland)
JID - 101204366
SB - IM
PMC - PMC10222267
OTO - NOTNLM
OT - FPGA
OT - edge AI
OT - peak-detection
OT - quantization-aware training
OT - quantized CNN
COIS- The authors declare no conflict of interest.
EDAT- 2023/07/11 06:42
MHDA- 2023/07/11 06:43
PMCR- 2023/05/11
CRDT- 2023/07/11 01:01
PHST- 2023/02/27 00:00 [received]
PHST- 2023/04/29 00:00 [revised]
PHST- 2023/05/01 00:00 [accepted]
PHST- 2023/07/11 06:43 [medline]
PHST- 2023/07/11 06:42 [pubmed]
PHST- 2023/07/11 01:01 [entrez]
PHST- 2023/05/11 00:00 [pmc-release]
AID - s23104667 [pii]
AID - sensors-23-04667 [pii]
AID - 10.3390/s23104667 [doi]
PST - epublish
SO - Sensors (Basel). 2023 May 11;23(10):4667. doi: 10.3390/s23104667.