
A Robust, Open-Source Framework for Spiking Neural Networks on Low-End FPGAs

2507.07284v2

Title#

A Robust, Open-Source Framework for Spiking Neural Networks on Low-End FPGAs

Abstract#

As the demand for compute power in traditional neural networks has increased significantly, spiking neural networks (SNNs) have emerged as a potential solution to increasingly power-hungry neural networks. By operating on 0/1 spikes emitted by neurons instead of arithmetic multiply-and-accumulate operations, SNNs propagate information temporally and spatially, allowing for more efficient computation. To this end, many architectures for accelerating and simulating SNNs have been developed, including Loihi, TrueNorth, and SpiNNaker. However, these chips are largely inaccessible to the wider community. Field-programmable gate arrays (FPGAs) have been explored as a middle ground between neuromorphic and non-neuromorphic hardware, but many proposed architectures require expensive high-end FPGAs or target a single SNN topology. This paper presents a framework consisting of a robust SNN acceleration architecture and a PyTorch-based SNN model compiler. Targeting any-to-any and/or fully connected SNNs, the FPGA architecture features a synaptic array that tiles across the SNN to propagate spikes. The architecture targets low-end FPGAs and requires very few resources (6358 LUTs, 40.5 BRAMs). The framework, tested on a low-end Xilinx Artix-7 FPGA at 100 MHz, achieves competitive speed in recognizing MNIST digits (0.52 ms/img). Further experiments also show accurate simulation of hand-coded any-to-any spiking neural networks on toy problems. All code and setup instructions are available at https://github.com/im-afan/snn-fpga.
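The abstract contrasts 0/1 spike propagation with arithmetic multiply-and-accumulate. Below is a minimal, hypothetical PyTorch sketch of a leaky integrate-and-fire layer that illustrates this idea only; it is not the paper's compiler or FPGA architecture, and the layer sizes, decay factor, and threshold are illustrative assumptions.

```python
# Hypothetical sketch: a fully connected layer of leaky integrate-and-fire (LIF)
# neurons that exchanges 0/1 spikes over discrete time steps. Not taken from the
# snn-fpga repository; parameters below are assumptions for illustration.

import torch
import torch.nn as nn


class LIFLayer(nn.Module):
    def __init__(self, in_features: int, out_features: int,
                 beta: float = 0.9, threshold: float = 1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features, bias=False)
        self.beta = beta            # membrane potential decay per time step
        self.threshold = threshold  # firing threshold

    def forward(self, spikes_in: torch.Tensor, mem: torch.Tensor):
        # Weighted input spikes accumulate into the leaky membrane potential.
        mem = self.beta * mem + self.fc(spikes_in)
        # Neurons whose potential crosses the threshold emit a 0/1 spike ...
        spikes_out = (mem >= self.threshold).float()
        # ... and are reset by subtracting the threshold.
        mem = mem - spikes_out * self.threshold
        return spikes_out, mem


if __name__ == "__main__":
    layer = LIFLayer(in_features=784, out_features=128)
    mem = torch.zeros(1, 128)                           # initial membrane potentials
    for t in range(10):                                 # 10 simulation time steps
        spikes_in = (torch.rand(1, 784) < 0.1).float()  # random 0/1 input spikes
        spikes_out, mem = layer(spikes_in, mem)
        print(f"step {t}: {int(spikes_out.sum())} output spikes")
```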

PDF Access#

View the Chinese PDF - 2507.07284v2
