
Proceedings of the Southwest State University


Computational system performance evaluation

https://doi.org/10.21869/2223-1560-2025-29-2-201-220

Abstract

Purpose of research. To analyze and model the performance of computing systems, including the calculation and comparison of metrics such as system utilization, peak and asymptotic performance, system speedup, and real performance, using mathematical models to evaluate systems under dynamic workloads and multitasking. Special attention is given to how individual system parameters affect the system's ability to execute computational operations and manage resources efficiently.

Methods. Mathematical modeling techniques were used to analyze the performance of computing systems: the system load is calculated as the arithmetic mean of the loads of all devices; peak performance is determined from the number of devices and the performance of each; system speedup is computed as the sum of device loads; real performance is taken as the ratio of operations performed to elapsed time; asymptotic performance is estimated through the minimum of the peak values; different systems are then compared on these metrics.
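For reference, the quantities above can be summarized schematically. The notation below is assumed for illustration and is not taken verbatim from the paper (n devices with loads ρ_i, per-device peak performance r_i, V operations completed in time t):

```latex
% Schematic notation (assumed for illustration): n devices with loads \rho_i,
% per-device peak performance r_i, V operations completed in time t.
\begin{align*}
  \rho &= \frac{1}{n}\sum_{i=1}^{n}\rho_i
    && \text{system load (mean of device loads)} \\
  R_{\mathrm{peak}} &= \sum_{i=1}^{n} r_i
    && \text{peak performance} \\
  S &= \sum_{i=1}^{n}\rho_i
    && \text{system speedup} \\
  R_{\mathrm{real}} &= \frac{V}{t}
    && \text{real (sustained) performance} \\
  R_{\mathrm{real}} &\le \min\bigl(R_{\mathrm{peak}},\, R_{\mathrm{mem}},\, R_{\mathrm{io}}\bigr)
    && \text{asymptotic bound via minimum of peak values}
\end{align*}
```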

Results. The study analyzes the performance of heterogeneous computing systems comprising Intel Xeon processors and Intel Xeon Phi coprocessors. The classical performance evaluation model, based on a simple sum of node capabilities, was found to significantly overestimate real performance because it ignores architectural and system-level factors such as data transfer latency and interconnect bandwidth. A modern model that accounts for AVX-512 vectorization, multi-level memory, and the PCIe 4.0 bus limitation yields a more accurate estimate of about 1.99 TFLOPS for a combined CPU+GPU configuration; in this case the PCIe bandwidth acts as the bottleneck in the joint operation of the CPU and GPU. Analysis of heterogeneous configurations with the Xeon Phi 7120P and Xeon E5-2683 v4 showed a significant performance gain of up to 2.67 TFLOPS, which exceeds the capabilities of homogeneous systems. The key parameter affecting performance was the unload queue size factor m, which determines the maximum size of the processed data block. Experiments showed that for small values of m the communication overhead increases the total computation time, whereas in the optimal range m = 25–35 the minimum execution time is achieved due to the balance between queue size and communication overhead. A further increase in m leads to stabilization or a slight increase in runtime because of the growing complexity of load balancing and added delays. The obtained data confirm that proper selection of queueing parameters is an important factor in the optimization of heterogeneous systems.
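To illustrate the shape of such a model, the sketch below bounds sustained CPU+GPU performance by PCIe bandwidth and sweeps the queue size factor m. All parameters (the 2.5 TFLOPS combined peak, 32 GB/s PCIe rate, 62.5 FLOP/byte arithmetic intensity, and the cost terms in run_time) are hypothetical assumptions for illustration, not the paper's exact model.

```python
# Minimal sketch of the two estimates discussed above (hypothetical parameters;
# the peak rate, arithmetic intensity, and cost terms are assumptions, not the
# paper's exact model).

def sustained_tflops(peak_tflops, pcie_gb_s, flops_per_byte):
    """Sustained rate is the lesser of the compute peak and what PCIe can feed."""
    pcie_bound_tflops = pcie_gb_s * flops_per_byte / 1e3  # GB/s * FLOP/byte -> TFLOPS
    return min(peak_tflops, pcie_bound_tflops)

def run_time(m, total_ops=1e12, ops_per_queue_unit=1e9,
             device_rate_flops=1e12, latency_per_block_s=9e-4):
    """Toy cost model for the unload queue size factor m: larger m means bigger
    blocks and fewer transfers (less latency overhead), but coarser blocks also
    leave devices idle at the tail (imbalance term)."""
    block_ops = m * ops_per_queue_unit
    n_blocks = total_ops / block_ops
    compute = total_ops / device_rate_flops
    communication = n_blocks * latency_per_block_s   # per-block transfer overhead
    imbalance = block_ops / device_rate_flops        # up to one block of idle time
    return compute + communication + imbalance

if __name__ == "__main__":
    # Combined CPU+GPU peak assumed at 2.5 TFLOPS, fed over PCIe 4.0 x16 (~32 GB/s)
    # at an assumed arithmetic intensity of 62.5 FLOP/byte.
    print(f"PCIe-bounded sustained rate ~ {sustained_tflops(2.5, 32, 62.5):.2f} TFLOPS")
    # Sweep the queue size factor m and report the fastest setting in the toy model.
    best_m = min(range(5, 101, 5), key=run_time)
    print(f"fastest m in sweep: {best_m} (time {run_time(best_m):.3f} s)")
```

With these assumed numbers the PCIe-bounded estimate comes out near 2 TFLOPS and the sweep minimum falls around m ≈ 30, consistent with the qualitative behaviour reported above.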

Conclusion. The study confirms the need for modern performance evaluation models that account for architectural features, interconnect bandwidth, and system-level limitations in order to predict the computational capabilities of heterogeneous platforms accurately. Classical evaluation methods prove insufficient because they ignore data transfer latency, memory hierarchy, and parallelism, which leads to overestimated and unrealistic predictions. Models that account for AVX vectorization, multi-level memory, and PCIe bandwidth yield an adequate estimate and identify the real bottlenecks that matter for optimization.

About the Author

G. V. Petushkov
MIREA – Russian Technological University
Russian Federation

Grigory V. Petushkov, Junior Researcher, Centre for Popularisation of Science and Higher Education, Institute of Youth Policy and International Relations, 78, Vernadskogo str., Moscow 119454.


Competing Interests:

The author declares no obvious or potential conflicts of interest related to the publication of this article.






For citations:


Petushkov G.V. Computational system performance evaluation. Proceedings of the Southwest State University. 2025;29(2):201-220. (In Russ.) https://doi.org/10.21869/2223-1560-2025-29-2-201-220



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2223-1560 (Print)
ISSN 2686-6757 (Online)