Uroš Lotrič and Patricio Bulić. Applicability of approximate multipliers in hardware neural networks. Neurocomputing. pp. 1-11. ISSN 0925-2312 (In Press)
Abstract
In recent years there has been a growing interest in hardware neural networks, which offer many benefits over conventional software models, mainly in applications where speed, cost, reliability, or energy efficiency are of great importance. These hardware neural networks require many resource-, power-, and time-consuming multiplication operations, so special care must be taken during their design. Since neural network processing can be performed in parallel, designs usually aim to accommodate as many concurrent multiplication circuits as possible. One way to achieve this goal is to replace the complex exact multiplying circuits with simpler, approximate ones. The present work demonstrates the application of approximate multiplying circuits in the design of a feed-forward neural network model with on-chip learning ability. Experiments performed on the heterogeneous Proben1 benchmark dataset show that the adaptive nature of the neural network model successfully compensates for the calculation errors of the approximate multiplying circuits. At the same time, the proposed designs also benefit from greater computing power and increased energy efficiency.
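To illustrate the kind of approximate multiplication the keywords refer to, below is a minimal Python sketch of an iterative logarithmic multiplier: each operand is split into its leading power of two plus a residue, the residue-by-residue term is dropped, and optional correction iterations re-apply the same rule to the dropped term. This is only a software model of the general technique under that assumption, not the authors' hardware design; the function name and parameters are illustrative.

```python
def ilm_approx_mul(a: int, b: int, iterations: int = 1) -> int:
    """Approximate a * b with an iterative logarithmic multiplier scheme.

    Each operand n is split as n = 2**k + r, where 2**k is its leading
    one and r the residue. The exact product is
        a * b = 2**(ka+kb) + ra*2**kb + rb*2**ka + ra*rb,
    and the basic approximation drops the ra*rb term. Each correction
    iteration re-applies the same rule to that dropped term, shrinking
    the error.
    """
    if a == 0 or b == 0:
        return 0
    ka, kb = a.bit_length() - 1, b.bit_length() - 1
    ra, rb = a - (1 << ka), b - (1 << kb)
    p = (1 << (ka + kb)) + (ra << kb) + (rb << ka)  # basic approximation
    if iterations > 0 and ra and rb:
        p += ilm_approx_mul(ra, rb, iterations - 1)  # error-correction term
    return p


if __name__ == "__main__":
    for x, y in [(27, 19), (200, 115), (5, 5)]:
        print(x, y, "exact:", x * y,
              "0 iter:", ilm_approx_mul(x, y, 0),
              "1 iter:", ilm_approx_mul(x, y, 1))
```

For example, 27 × 19 = 513 exactly; the basic approximation gives 480, and one correction iteration gives 510, which is the kind of bounded error the paper's adaptive learning is reported to absorb.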
Item Type: Article
Keywords: Hardware neural network; Iterative logarithmic multiplier; FPGA; Digital design; Computer arithmetic
Institution: University of Ljubljana
Department: Faculty of Computer and Information Science
Divisions: Faculty of Computer and Information Science > Laboratory for Computer Architecture
Item ID: 1752
Date Deposited: 03 Jul 2012 10:46
Last Modified: 05 Dec 2013 13:20
URI: http://eprints.fri.uni-lj.si/id/eprint/1752