Matej Gutman (2009) Implementation of neural network using FPGA programmable circuits. EngD thesis.
Living creatures possess an amazing ability to learn and adapt, and researchers are trying to apply this ability to machines. Many mathematical models mimic the behaviour of the central nervous system, especially the brain; neural networks are one of them. One of the most widely used neural networks is the multilayer perceptron, which gained its popularity with the discovery of the backpropagation learning algorithm. A high degree of parallelism is inherently present in all types of neural networks. Since conventional computers are not inherently parallel, the question arises whether existing architectures are appropriate for implementing such structures. In the presented work we therefore develop a chip with a new architecture, capable of exploiting the parallelism present in the multilayer perceptron. Programmable devices based on FPGA technology enable us to develop and test new architectures quickly and efficiently. The behaviour of these devices can be specified in hardware description languages such as VHDL. During the development cycle we relied heavily on Matlab simulations, which enabled us to quickly address the restrictions and limitations posed by FPGA technology. Following the successful completion of these simulations, we decided to design a powerful arithmetic logic unit capable of processing a neuron in a single clock cycle. In the future, when the gate density of FPGA devices becomes higher, a number of parallel arithmetic logic units might be employed. A practical application in the field of character recognition confirms the suitability of the proposed architecture for the target FPGA devices. There are many possibilities for future work, especially in optimizing and expanding the presented architecture.
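To make the core operation concrete, the following is a minimal software sketch, not taken from the thesis, of the kind of fixed-point multiply-accumulate that an FPGA arithmetic logic unit would perform when evaluating one neuron, in the spirit of the Matlab precision simulations mentioned above. The word width, the Q4.12 format, and the lookup-table stand-in for the sigmoid are all illustrative assumptions.

```python
import math

FRAC_BITS = 12            # assumed fractional bits (Q4.12 format)
SCALE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    """Quantize a real value into the assumed fixed-point format."""
    return int(round(x * SCALE))

def neuron_mac(weights, inputs, bias) -> int:
    """Multiply-accumulate over all synapses of one neuron.
    In an FPGA ALU these products could be formed in parallel
    and summed within a single clock cycle."""
    acc = to_fixed(bias) << FRAC_BITS   # products are double-width (Q8.24)
    for w, x in zip(weights, inputs):
        acc += to_fixed(w) * to_fixed(x)
    return acc >> FRAC_BITS             # rescale the sum back to Q4.12

def sigmoid(v_fixed: int) -> float:
    """Software stand-in for the activation function; a hardware
    design would typically use a lookup table instead of exp()."""
    return 1.0 / (1.0 + math.exp(-v_fixed / SCALE))

out = sigmoid(neuron_mac([0.5, -0.25], [1.0, 0.5], 0.1))
```

Quantizing weights and activations this way is what lets the hardware replace floating-point units with cheap integer multipliers, at the cost of the rounding error that the simulations would have had to bound.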