PMID- 35345797
OWN - NLM
STAT- MEDLINE
DCOM- 20220330
LR  - 20220401
IS  - 1687-5273 (Electronic)
IS  - 1687-5265 (Print)
VI  - 2022
DP  - 2022
TI  - Power and Area Efficient Cascaded Effectless GDI Approximate Adder for Accelerating Multimedia Applications Using Deep Learning Model.
PG  - 3505439
LID - 10.1155/2022/3505439 [doi]
LID - 3505439
AB  - Approximate computing is an emerging technique that accelerates processing with reduced computational effort while maintaining admissible accuracy in error-tolerant applications such as multimedia and deep learning. The inherent properties of the deep learning process allow the designer to simplify the circuitry and also to increase computation speed at the cost of result accuracy. The high computational complexity and low-power requirements of portable devices in the dark silicon era call for a suitable alternative to Complementary Metal Oxide Semiconductor (CMOS) technology. Gate Diffusion Input (GDI) logic is one of the promising alternatives to CMOS logic for reducing transistor count and enabling low-power design. In this work, a novel 1-bit GDI-based full-swing Energy and Area efficient Full Adder (EAFA) with minimum error distance is proposed. The proposed architecture is constructed to mitigate the cascading-effect problem in GDI-based circuits. This is demonstrated by extending the proposed 1-bit GDI-based adder to different 16-bit Energy and Area Efficient High-Speed Error-Tolerant Adders (EAHSETA), segmented into accurate and inaccurate adder circuits. The proposed adder's design metrics in terms of delay, area, and power dissipation are verified through simulation using the Cadence tool. As a case study, the proposed logic is deployed to accelerate the convolution process in the Low-Weight Digit Detector neural network for real-time handwritten digit classification on an Intel Cyclone IV Field Programmable Gate Array (FPGA). The results confirm that the proposed EAHSETA occupies fewer logic elements and improves operation speed with a speed-up factor of 1.29 over other similar techniques while achieving 95% classification accuracy.
CI  - Copyright (c) 2022 Manikandan Nagarajan et al.
FAU - Nagarajan, Manikandan
AU  - Nagarajan M
AUID- ORCID: 0000-0003-2031-8940
AD  - School of Computing, SASTRA Deemed University, Thanjavur 613 401, India.
FAU - Muthaiah, Rajappa
AU  - Muthaiah R
AUID- ORCID: 0000-0002-6659-1961
AD  - School of Computing, SASTRA Deemed University, Thanjavur 613 401, India.
FAU - Teekaraman, Yuvaraja
AU  - Teekaraman Y
AUID- ORCID: 0000-0003-4297-3460
AD  - Department of Electronic and Electrical Engineering, The University of Sheffield, Sheffield S1 3JD, UK.
FAU - Kuppusamy, Ramya
AU  - Kuppusamy R
AUID- ORCID: 0000-0002-1249-906X
AD  - Department of Electrical and Electronics Engineering, Sri Sairam College of Engineering, Bangalore 562 106, India.
FAU - Radhakrishnan, Arun
AU  - Radhakrishnan A
AUID- ORCID: 0000-0003-3700-2491
AD  - Faculty of Electrical & Computer Engineering, Jimma Institute of Technology, Jimma University, Jimma, Ethiopia.
LA  - eng
PT  - Journal Article
DEP - 20220319
PL  - United States
TA  - Comput Intell Neurosci
JT  - Computational intelligence and neuroscience
JID - 101279357
SB  - IM
MH  - Computer Simulation
MH  - *Deep Learning
MH  - Diffusion
MH  - *Multimedia
MH  - Semiconductors
PMC - PMC8957425
COIS- The authors declare that they have no conflicts of interest.
EDAT- 2022/03/30 06:00
MHDA- 2022/03/31 06:00
PMCR- 2022/03/19
CRDT- 2022/03/29 05:14
PHST- 2022/01/10 00:00 [received]
PHST- 2022/02/03 00:00 [revised]
PHST- 2022/02/10 00:00 [accepted]
PHST- 2022/03/29 05:14 [entrez]
PHST- 2022/03/30 06:00 [pubmed]
PHST- 2022/03/31 06:00 [medline]
PHST- 2022/03/19 00:00 [pmc-release]
AID - 10.1155/2022/3505439 [doi]
PST - epublish
SO  - Comput Intell Neurosci. 2022 Mar 19;2022:3505439. doi: 10.1155/2022/3505439. eCollection 2022.