Proceedings of the Southwest State University

Choice of component bit width for nonlinear neuron implementation on FPGA

https://doi.org/10.21869/2223-1560-2025-29-4-70-92

Abstract

Purpose. To investigate the relationship between the error of the input data of a neuron intended for use in an artificial neural network implemented on an FPGA and the resulting computational error, and to develop a methodology for selecting the bit width of neuron components that reduces hardware costs while keeping computational accuracy consistent with the accuracy of the input data.

Methods. The study employed digital circuit design methods based on the VHDL hardware description language, error analysis of computations against a floating-point reference model, and the device synthesis and FPGA resource utilization estimation tools integrated into Xilinx ISE. The experimental results were processed with mathematical statistics techniques, including regression models describing the dependence of accuracy and hardware costs on input data bit width.
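
As a hedged illustration of the statistical processing step, the sketch below fits a least-squares regression of hardware cost against input bit width in Python. The numeric resource counts are placeholder values invented for this example, not measurements from the study; only the fitting procedure itself reflects the described method.

```python
# Illustration of the regression step: fit resource usage vs. bit width.
# The LUT counts below are hypothetical placeholders, NOT data from the paper.
import numpy as np

bit_widths = np.array([4, 6, 8, 10, 12])          # input bit width, sign included
lut_counts = np.array([90, 140, 200, 270, 350])   # hypothetical LUT usage

# Least-squares linear model: LUTs ~ a * width + b
a, b = np.polyfit(bit_widths, lut_counts, deg=1)
print(f"LUTs ~= {a:.1f} * width {b:+.1f}")
```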

Results. A method has been proposed for estimating the bit width of the processing unit, enabling its precision to be matched to the inherent error level of the input data. The impact of the bit width of input data and weight coefficients on computational accuracy and on the amount of FPGA hardware resources consumed by the implemented neuron was investigated. Based on the VHDL description of the device, a parameterized model was developed that enables coordinated adjustment of the neuron's internal component bit widths as the bit width of the input signals is varied. To assess the effect of bit width on computational accuracy, a floating-point reference model was used: for each bit-width configuration, comparative computations of the device's output were performed and the resulting error was quantified. The influence of bit width on FPGA resource utilization, specifically the number of LUTs and flip-flops (FFs), was also analyzed. The proposed methodology was validated on the Xilinx Spartan-3E XC3S500E (xc3s500e-4pq208) FPGA using the ISE Design Suite 14.7 environment. Multiple versions of the digital neuron were implemented, with input data bit widths ranging from 4 to 12 bits (including the sign bit); for each variant, the operating clock frequency, the utilized FPGA resources, and the computational accuracy were recorded. As a case study with 12-bit input data, an experimental evaluation determined that a sigmoid function lookup table with 8,192 entries achieves an optimal trade-off between computational accuracy (maximum relative error of 0.12%) and hardware cost (occupying only 1% of the FPGA's available resources).
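
A minimal Python sketch of this evaluation loop, written in software rather than VHDL for readability: for each candidate bit width, inputs and weights are quantized to signed fixed point, the neuron output is computed through a sigmoid lookup table, and the maximum relative error against a floating-point reference is recorded. The number of inputs, the LUT argument range, and the scaling are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def quantize(x, bits):
    """Round x to a signed fixed-point grid spanning roughly [-1, 1)."""
    scale = 2 ** (bits - 1)
    q = np.clip(np.round(x * scale), -scale, scale - 1)
    return q / scale

# Hypothetical setup: 8 inputs per neuron, sigmoid LUT over an assumed [-8, 8)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(1000, 8))
w = rng.uniform(-1, 1, size=8)
bias = 0.1
LUT_SIZE, S_MIN, S_MAX = 8192, -8.0, 8.0
lut = sigmoid(np.linspace(S_MIN, S_MAX, LUT_SIZE))

for bits in range(4, 13):                                # 4..12 bits incl. sign
    s = quantize(x, bits) @ quantize(w, bits) + bias     # weighted sum plus bias
    idx = np.clip(((s - S_MIN) / (S_MAX - S_MIN) * (LUT_SIZE - 1)).astype(int),
                  0, LUT_SIZE - 1)
    ref = sigmoid(x @ w + bias)                          # floating-point reference
    err = np.max(np.abs(lut[idx] - ref) / ref)
    print(f"{bits:2d} bits: max relative error = {100 * err:.3f} %")
```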

Conclusion. This paper presents a description of a neuron circuit with a sigmoid activation function, implemented in the VHDL hardware description language and suitable for integration into neural network solutions on Field-Programmable Gate Arrays (FPGAs). The device accepts signed integer input values of fixed bit width, computes the weighted sum of the inputs and bias, and generates the neuron's output using a precomputed lookup table stored in block RAM. The operation, scaling, and optimization of the module are described in detail.
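
The integer datapath can be sketched in Python as follows; the Q-format, the post-multiplication shift, and the LUT address mapping are one plausible scaling scheme assumed for illustration, not the exact VHDL module described here.

```python
import math

FRAC = 11          # fractional bits of the assumed Q1.11 format (12-bit signed)
LUT_BITS = 13      # 2**13 = 8192 lookup-table entries
S_RANGE = 8        # assumed LUT argument range [-8, 8)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Precompute the sigmoid table, as the block RAM would hold it (Q1.FRAC values).
LUT = [round(sigmoid(-S_RANGE + 2 * S_RANGE * i / 2**LUT_BITS) * 2**FRAC)
       for i in range(2**LUT_BITS)]

def neuron(inputs, weights, bias):
    """inputs, weights, bias: signed integers in Q1.FRAC fixed point."""
    acc = sum(x * w for x, w in zip(inputs, weights))  # products carry 2*FRAC fraction bits
    acc = (acc >> FRAC) + bias                         # rescale back to FRAC fraction bits
    # Map the accumulator onto a LUT address covering [-S_RANGE, S_RANGE)
    addr = (acc + (S_RANGE << FRAC)) * 2**LUT_BITS // (2 * S_RANGE << FRAC)
    addr = min(max(addr, 0), 2**LUT_BITS - 1)
    return LUT[addr]                                   # sigmoid output, Q1.FRAC
```

For example, neuron([1024], [2047], 0), i.e. roughly 0.5 * 1.0 in Q1.11, returns an integer near sigmoid(0.5) * 2**11, about 1275.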

The proposed method enables determination of the optimal bit width for the processing unit, ensuring that computational error remains consistent with the error level of the input data while minimizing hardware resource consumption. The obtained relationships can be utilized during the design phase to select parameters for digital processing modules in real-time systems and embedded devices.
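
As a back-of-envelope illustration of the matching principle (our gloss, not a formula quoted from the paper): for signed b-bit fixed-point data normalized to full scale, the quantization step is 2^(1-b) and the worst-case rounding error is half a step, i.e. 2^(-b) of full scale. Matching a relative input error eps therefore calls for roughly b = log2(1/eps) bits; for eps = 0.12% this gives b = log2(833), about 10, on the order of the 12-bit configuration examined in the case study.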

About the Authors

O. G. Bondar
Southwest State University
Russian Federation

Oleg G. Bondar, Cand. of Sci. (Engineering), Associate Professor, Associate Professor of Space Instrumentation and Communication Systems Department

50 Let Oktyabrya str. 94, Kursk 305040


Competing Interests:

The authors declare no obvious or potential conflicts of interest related to the publication of this article.



E. O. Brezhneva
Southwest State University
Russian Federation

Ekaterina O. Brezhneva, Cand. of Sci. (Engineering), Associate Professor of Space Instrumentation and Communication Systems Department

50 Let Oktyabrya str. 94, Kursk 305040


Competing Interests:

The authors declare no obvious or potential conflicts of interest related to the publication of this article.



D. A. Golubev
Southwest State University
Russian Federation

Dmitry A. Golubev, Student of Space Instrumentation and Communication Systems Department

50 Let Oktyabrya str. 94, Kursk 305040


Competing Interests:

The authors declare no obvious or potential conflicts of interest related to the publication of this article.



For citation:

Bondar O.G., Brezhneva E.O., Golubev D.A. Choice of component bit width for nonlinear neuron implementation on FPGA. Proceedings of the Southwest State University. 2025;29(4):70-92. (In Russ.) https://doi.org/10.21869/2223-1560-2025-29-4-70-92

This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2223-1560 (Print)
ISSN 2686-6757 (Online)