Aaron Yu, Arash Ahmadi, and Leonard MacEachern, "A Spiking Neural Network Based Hough Transform Implementation for FPGAs," IEEE Transactions on Emerging Topics in Computational Intelligence, April 2026.

Abstract: Leveraging their inherent suitability for low-power, parallel computation, Spiking Neural Networks (SNNs) are an emerging topic in energy-efficient image processing research. This paper investigates the use of SNNs for line detection with an algorithm inspired by the Hough Transform. The implementation targets a Zynq UltraScale+ MPSoC ZCU102 FPGA, enabling comparison of utilization, power, latency, and accuracy against CNN, CPU, and GPU implementations. Our neural-network approach replicates the voting scheme of the conventional Hough Transform, yet is 2.2 times faster and requires up to 86% less accumulator-equivalent memory than recently published, competing FPGA implementations. It also uses up to 97% (86%) less estimated power than equivalent OpenCV implementations running on a CPU (GPU), albeit with reduced accuracy. To ensure scalability, we employ kernel transformation techniques in which a single hardware kernel scans the entire input image; as a result, hardware utilization is independent of image size up to the configured maximum resolution. The reported design processes a 100 × 100-pixel image in 13 microseconds while requiring only 6.7 KiB of on-chip algorithm state (accumulators, line-parameter lookup tables, and weights) for line clustering. These results demonstrate the potential of SNNs for efficient, hardware-accelerated image processing.
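For context, the voting scheme the abstract refers to is that of the classical Hough Transform: each edge pixel (x, y) casts a vote for every line (ρ, θ) passing through it, with ρ = x·cos θ + y·sin θ, and peaks in the accumulator identify detected lines. The sketch below is a minimal NumPy illustration of this conventional accumulator, not the paper's SNN or FPGA design; the function name, image size, and angular resolution are assumptions chosen to mirror the 100 × 100 example in the abstract.

```python
import numpy as np

def hough_votes(edge_points, img_size=100, n_theta=180):
    """Classical Hough voting: returns the (rho, theta) accumulator.

    This dense accumulator array is the memory the paper's SNN-based
    approach reportedly reduces by up to 86% (function and parameters
    here are illustrative, not from the paper).
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = int(np.ceil(np.hypot(img_size, img_size)))  # max possible |rho|
    acc = np.zeros((2 * rho_max, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        # Each point votes once per theta at rho = x*cos(theta) + y*sin(theta).
        rhos = np.round(x * cos_t + y * sin_t).astype(int)
        acc[rhos + rho_max, np.arange(n_theta)] += 1  # offset so rho >= 0
    return acc, thetas, rho_max

# Usage: points on the horizontal line y = 20 peak at theta = 90 degrees.
pts = [(x, 20) for x in range(0, 100, 5)]
acc, thetas, rho_max = hough_votes(pts)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
print(rho_idx - rho_max, np.degrees(thetas[theta_idx]))  # → 20 90.0
```

The paper's contribution, per the abstract, is replacing this accumulator-and-peak-search structure with spiking neurons whose firing replicates the voting, trading some accuracy for large savings in memory, power, and latency.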