ABSTRACT
The hardware implementation of three different artificial neural network architectures is presented. The basis for the implementation is RAPTOR2000, an FPGA-based reconfigurable hardware accelerator. The investigated architectures are neural associative memories, self-organizing feature maps, and radial basis function networks. Key implementation issues are considered, with particular emphasis on the resource efficiency and performance of the presented realizations.
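To make the first of the three architectures concrete, the following is a minimal software sketch of a binary neural associative memory in the Willshaw/Palm style (clipped Hebbian storage, thresholded retrieval). It is an illustration of the underlying algorithm only, not of the RAPTOR2000 hardware realization; all function names and pattern sizes are illustrative assumptions.

```python
import numpy as np

def store(pairs, n_in, n_out):
    """Superimpose binary pattern pairs into one weight matrix.

    Clipped Hebbian learning: the outer products of all (input, output)
    pairs are OR-ed together, so each weight stays in {0, 1}.
    """
    W = np.zeros((n_out, n_in), dtype=np.uint8)
    for x, y in pairs:
        W |= np.outer(y, x).astype(np.uint8)
    return W

def recall(W, x, theta=None):
    """Retrieve the pattern associated with input x.

    Willshaw threshold: an output unit fires only if it is connected to
    every active input bit, i.e. its dendritic sum reaches sum(x).
    """
    if theta is None:
        theta = int(x.sum())
    return (W @ x >= theta).astype(np.uint8)

# Illustrative usage: store one pair and recall it.
x = np.array([1, 0, 1, 0], dtype=np.uint8)
y = np.array([0, 1, 1], dtype=np.uint8)
W = store([(x, y)], n_in=4, n_out=3)
print(recall(W, x))  # recovers y
```

Because both storage and retrieval reduce to binary matrix operations and a threshold, this model maps naturally onto the bit-level parallelism of FPGAs, which is one reason associative memories are attractive targets for such accelerators.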
Index Terms: Implementation of artificial neural networks on a reconfigurable hardware accelerator