The scientific peer-reviewed journal ‘Proceedings of Southwest State University’ is a subscription periodical that publishes the results of fundamental and applied research in mechanical engineering, computer science and computer engineering, and construction and architecture. The main content of the journal comprises scientific papers, scientific reviews, critical reviews, and commentaries.
The journal is registered as a mass medium by the Federal Service for Supervision in the Sphere of Communications, Information Technology and Mass Communications (registration certificate PI No. FS77-42691 of 16.11.2010).
Journal founder – Southwest State University
The journal is published in print six times a year. Legal deposit copies of the journal are sent to the Information Telegraph Agency of Russia (ITAR-TASS). In printed form, ‘Proceedings of Southwest State University’ is distributed by subscription throughout the Russian Federation and abroad. The subscription index in the united catalogue ‘Press of Russia’ is 41219.
The journal is included in the list of leading scientific journals and publications of the State Commission for Academic Degrees and Titles of the Ministry of Education and Science of Russia for the following groups of scientific specialties:
- 1.2.2. Mathematical modeling, numerical methods and software packages (technical sciences).
- 2.1.1. Building structures, buildings and structures (technical sciences).
- 2.1.3. Heat supply, ventilation, air conditioning, gas supply and lighting (technical sciences).
- 2.1.9. Construction mechanics (technical sciences).
- 2.3.1. System analysis, control and information processing (technical sciences).
- 2.3.2. Computer systems and their elements (technical sciences).
- 2.3.3. Automation and control of technological processes and productions (technical sciences).
- 2.3.4. Management in organizational systems (technical sciences).
- 2.3.5. Mathematical and software support of computer systems, complexes and computer networks (technical sciences).
- 2.3.6. Methods and systems of information protection, information security (technical sciences).
- 2.5.4. Robots, mechatronics and robotic systems (technical sciences).
- 2.5.5. Technology and equipment for mechanical and physico-technical processing (technical sciences).
- 2.5.8. Welding, related processes and technologies (technical sciences).
The journal is open to all interested persons and organizations. The Editorial Board works continuously to broaden its pool of authors, attracting scientists from Russia and abroad.
The Editorial Board accepts for consideration only articles that have not been published previously and are not intended for simultaneous publication elsewhere.
The journal follows an open access policy. Full-text versions of articles are available on the journal's website and in the scientific electronic library eLIBRARY.RU.
Editorial policy is based on compliance with the requirements of publication ethics.
Publication in the journal is free of charge for authors. The Editorial Office does not charge authors for the preparation, placement or printing of materials.
Target audience: researchers, teaching staff of educational institutions, the expert community, young scientists, graduate students, doctoral students, interested members of the general public.
Current issue
MECHANICAL ENGINEERING AND MACHINE SCIENCE
Purpose of research. To ensure the specified accuracy of sequential and parallel movements in the ankle, knee and hip joints of an active lower-limb rehabilitation exoskeleton while partially unloading the ankle and knee joints from axial loads by mounting one of the rotary actuators at the hip joint. Tasks. Development and implementation of an active-passive movement strategy (ADF), in which phases of passive movement of the lower limbs (the exoskeleton drives the limbs) alternate with phases of active movement, in which the patient performs the desired movement himself and the exoskeleton assists him; comparative analysis of experimental results and assessment of the adequacy and applicability of the mathematical model.
Methods. The study was performed in accordance with generally accepted methods of planning and conducting experiments. When modeling lower-limb movement, parameters characterizing the force interaction between the exoskeleton and the human are taken into account, which makes it possible to determine reactions in the hip joint and to synthesize control system parameters with allowance for external disturbances.
Results. A mathematical model of lower-limb movement for a rehabilitation training complex has been developed. It differs from known models in that, along with the kinematic and dynamic features of the motion of the device's links, it accounts for parameters characterizing the force interaction between the exoskeleton and the human, which makes it possible to determine reactions in the hip joint and to synthesize control system parameters with allowance for external disturbances.
Conclusion. The mathematical model and the structure of the rehabilitation device proposed in the paper, a planar exoskeleton manipulator equipped with two actuators, one of which is aligned with the axis of the human hip joint, make it possible to compensate for the active and reactive forces acting on the human hip joint during medical manipulations.
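The abstract does not reproduce the model itself; as a minimal sketch of the kind of term a planar two-link (thigh + shank) dynamics model contains, the snippet below computes the gravity-compensation torques required at the hip and knee actuators. All masses, lengths, and angles are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Gravity-compensation torques for a planar two-link leg model
# (thigh + shank). All parameters are illustrative assumptions,
# not values from the paper.
m1, m2 = 8.0, 4.0        # segment masses, kg
l1 = 0.45                # thigh length, m
lc1, lc2 = 0.20, 0.25    # distances to segment centers of mass, m
g = 9.81

def gravity_torques(q1, q2):
    """Joint torques that hold the hip (q1) and knee (q2) static.

    Angles are measured from the horizontal; q2 is relative to the thigh.
    """
    tau2 = m2 * g * lc2 * np.cos(q1 + q2)                # knee actuator
    tau1 = (m1 * lc1 + m2 * l1) * g * np.cos(q1) + tau2  # hip actuator
    return tau1, tau2

print(gravity_torques(np.deg2rad(60), np.deg2rad(-20)))
```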
Purpose of research. Study of the microstructure of experimental blanks of new tungsten-free hard alloys.
Methods. The experimental hard-alloy powder material (charge) was consolidated by spark plasma sintering (SPS), a synthesis method that makes it possible to obtain high-quality compact products from metal powders and composites with minimal material and energy losses. The method is based on the effect of high voltage and pulsed electric current on the metal particles inside the die. The process is accompanied by the formation of a high-temperature plasma arising directly around each individual particle. The plasma causes a rapid local rise in temperature and pressure, which promotes intense diffusion interaction between particles and the formation of dense, well-structured products. The microstructure of the alloy was studied with a QUANTA 600 FEG scanning electron microscope.
Results. The alloy has a complex microstructure consisting of several phases and inclusions. (1) Titanium carbide (TiC) phase: large TiC grains of regular shape are evenly distributed throughout the volume of the alloy; titanium carbide is an important component, as it provides high hardness and wear resistance. (2) Alloy matrix: between the TiC grains lies a matrix of nickel (Ni) and molybdenum (Mo) that ensures the ductility and strength of the alloy; its granular morphology indicates fine grains resulting from the heat treatment during spark plasma sintering. (3) Defects and dislocations: minor defects and dislocations are present, especially near the phase interfaces; they can provide an additional reserve of strength and resistance to fatigue damage.
Conclusion. A study of the microstructure of a new tungsten-free hard alloy has shown that the alloy has a complex two-phase structure consisting of titanium carbide and a matrix enriched in nickel and molybdenum. This structure provides the alloy with high hardness, wear resistance and durability. The presence of defects and dislocations can help improve the mechanical properties of the alloy. These results confirm the prospects of developing new tungsten-free hard alloys that can become an alternative to traditional materials containing expensive tungsten.
CONSTRUCTION
Purpose of research. The aim of this study is to analyze the structural capabilities and limitations of using textile-reinforced concrete (TRC) in roofing systems for sports facilities, with particular focus on a stadium roof cantilever structure designed for the climatic conditions of Sochi.
Methods. The study uses a computational model of a cantilever roofing structure made of TRC, with a cantilever length of 22.735 m and a total width of 84.4 m. Numerical methods with finite element modeling in ANSYS are employed. Loads, including snow and wind, are applied based on the regulatory standards for Sochi. Different scenarios of cover thickness and cantilever length variations are analyzed.
Results. The results show that TRC significantly enhances the stiffness of the structure, reducing vertical displacements of the cantilever by 30-35% compared with traditional reinforced concrete. Furthermore, reducing the cover thickness to 200 mm maintains sufficient stiffness, leading to material savings. Extending the cantilever by 3-4 meters is feasible without exceeding the allowable deflection limits.
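The study's deflections come from finite element modeling in ANSYS; as a rough plausibility check only, the tip deflection of a uniformly loaded cantilever can be estimated with the beam formula δ = qL⁴/(8EI). The load and section properties below are assumptions for illustration, not values from the paper.

```python
# Back-of-envelope tip deflection of a uniformly loaded cantilever,
# delta = q*L^4 / (8*E*I). This is only a plausibility check; the study
# itself uses finite element modeling in ANSYS. All numbers except L
# are illustrative assumptions, not the paper's section properties.
L = 22.735          # cantilever length, m (from the abstract)
q = 12e3            # line load (self-weight + snow + wind), N/m - assumed
E = 30e9            # elastic modulus of the TRC section, Pa - assumed
I = 0.8             # second moment of area of the section, m^4 - assumed

delta = q * L**4 / (8 * E * I)
print(f"tip deflection ~ {delta*1000:.1f} mm, span/{L/delta:.0f}")
```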
Conclusion. The study demonstrates that the use of TRC in roofing structures for sports facilities leads to significant improvements in stiffness and material efficiency, allowing for either increased spans or reduced cover thickness without compromising performance. This presents new opportunities for the design of lightweight and efficient structures in the southern regions of Russia.
Purpose of research. The article discusses an innovative method for improving the environmental performance of heat generators in autonomous heat supply systems, based on the use of granular blast furnace slag as an adsorbent that removes harmful substances, namely nitrogen and carbon oxides, from flue gases. The results of experiments confirming the effectiveness of the method are presented.
Methods. The study examines the operation of a pilot plant and calculates its effectiveness, using data obtained during experiments on the plant, which was mounted on a chimney to purify flue gases of harmful components (nitrogen and carbon oxides).
Results. The pilot plant reduces the content of nitrogen oxides in exhaust and flue gases released to the atmosphere by 50-55%; the concentration of carbon oxides was reduced by up to 27.7%, with the reduction across emissions ranging from 16.7 to 26.7%. Blast furnace slag with granules 5 to 10 mm in size was used during operation. The unit effectively cleans flue and exhaust gases for 720 hours between slag regeneration cycles, each of which requires up to 52.5 liters of water.
Conclusion. A pilot industrial flue gas purification plant for heat generators of residential heating systems, using adsorption purification with granular blast furnace slag as the adsorbent, is an effective means of managing environmental parameters. It reduces the content of harmful components in flue gases, including nitrogen oxides (NOx, including NO), carbon monoxide (CO) and carbon dioxide (CO2). The method makes it possible to control the environmental parameters of heat generators, eliminate the need for condensate removal and disposal, and improve the technical, economic and environmental performance of heat generating systems for apartment-by-apartment heat supply.
Purpose of research. The article provides a mathematical description of the heat transfer process during the combined utilization of low-potential waste heat and ventilation emissions in the channels of a multilayer plate heat exchanger.
Methods. To describe the operation of a combined system for the utilization of exhaust gases and ventilation emissions, a mathematical model has been developed that takes into account the distribution of air flows in the channels of a plate heat recovery unit during the utilization of low-potential heat carried by the air mass, and heat transfer through a flat multilayer wall with integrated semiconductor Peltier elements. On this basis, a methodology has been developed for the design of highly efficient and economical low-potential heat recovery systems with associated generation of thermoelectric power.
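The abstract does not give the wall composition, so as a hedged sketch of the heat-transfer-through-a-multilayer-wall part of such a model, the overall heat transfer coefficient can be estimated from the series thermal resistances of the layers. All layer data and stream temperatures below are assumptions.

```python
# Overall heat transfer coefficient of a flat multilayer wall,
# U = 1 / (1/h_hot + sum(d_i / k_i) + 1/h_cold).
# Layer thicknesses and conductivities are illustrative assumptions;
# the Peltier module is represented only as one layer of the stack.
layers = [
    (0.001, 15.0),   # steel plate: thickness m, conductivity W/(m*K)
    (0.004, 1.5),    # Peltier module, effective conductivity - assumed
    (0.001, 15.0),   # steel plate
]
h_hot, h_cold = 40.0, 25.0   # convective coefficients, W/(m^2*K) - assumed

R_total = 1 / h_hot + sum(d / k for d, k in layers) + 1 / h_cold
U = 1 / R_total
q = U * (55 - 18)  # heat flux for assumed hot/cold stream temperatures, W/m^2
print(f"U = {U:.1f} W/(m^2*K), q = {q:.0f} W/m^2")
```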
Results. A mathematical model has been developed describing the operation of a combined system for the utilization of waste gases and ventilation emissions, including flow distribution in the channels of a plate heat recovery unit, heat recovery using Peltier thermoelectric elements, and their effect on heat transfer through a flat multilayer wall. The model will further make it possible to create a design methodology for highly efficient and economical heat recovery systems, to optimize heat and mass transfer processes, and to conduct numerical experiments with an assessment of economic efficiency.
Conclusion. To increase the efficiency of systems recovering low-potential heat from waste gases and ventilation emissions, a mathematical model has been created that covers the distribution of air flows in the interplate space of the heat exchanger and the process of heat transfer through a flat multilayer wall with mounted flat semiconductor Peltier elements.
COMPUTER SCIENCE, COMPUTER ENGINEERING AND CONTROL
Relevance. Gesture recognition in computer vision systems is important for the development of accessible human-computer interaction interfaces, including for people with disabilities. Traditional methods, such as manual feature extraction (HOG, SIFT) in combination with SVM classifiers, have limited accuracy and are sensitive to changes in lighting, background, and hand pose.
Purpose of research. The aim of this work is to build and train a convolutional neural network (CNN) for efficient gesture classification based on the Sign Language MNIST dataset. The study addressed the problems of data preprocessing, model architecture design, training, and recognition quality assessment on the test set.
Methods. TensorFlow and Keras libraries were used to implement the CNN. The model includes convolutional layers for local feature extraction, a Flatten layer for vectorization, fully connected layers with a ReLU activation function, and an output layer with Softmax. The training was performed using the Adam optimizer and the sparse_categorical_crossentropy loss function on 27,455 images, and testing was performed on 7,172 examples.
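A minimal Keras sketch of the architecture the abstract describes follows (convolutional layers → Flatten → dense ReLU layers → softmax output, trained with Adam and sparse_categorical_crossentropy). The number of layers and filters is an assumption, not the paper's exact configuration; Sign Language MNIST images are 28×28 grayscale with labels in the range 0-24 (the letter J is absent).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Sketch of the CNN described in the abstract. Filter counts and layer
# depth are assumptions; only the overall structure (conv -> Flatten ->
# dense ReLU -> softmax, Adam + sparse_categorical_crossentropy)
# follows the text.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),         # Sign Language MNIST images
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(25, activation="softmax"),  # labels 0-24 (J is absent)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=18, validation_data=(x_test, y_test))
```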
Results. The proposed model achieved 89.14% accuracy on the test dataset after 18 training epochs, outperforming traditional methods (HOG + SVM: 70.1%) and simple neural networks (78.4%).
Conclusion. The use of convolutional neural networks for gesture classification is an effective approach that provides high accuracy and is robust to variations in input data, making it promising for computer vision and gesture interaction systems.
Purpose of research is to create a compiler model for the functional language Common Lisp, implement this model, and test the compiler model using a target virtual machine to increase the execution speed of programs.
Methods. A formal compiler model of the functional language Common Lisp was built using denotational semantics. Compilation takes place in several stages. At the first stage, the source language is transformed into an intermediate lambda language in which all macros are expanded, embedded forms are transformed into similar expressions, and variable names are replaced with local, global, and deep references. At the second stage, the expression in the intermediate language is transformed from a tree structure into a linear list of primitive instructions of the target virtual machine.
Results. The resulting primitive instructions are encoded by a special assembler into numeric code for execution on the target virtual machine. Compilation also yields a list of constants and the amount of memory required for the compiled program to run. The target virtual machine consists of memory sections for the encoded program, constants, global variables, the stack, and the list of activation frames, plus registers (accumulator, stack pointer, instruction pointer, current activation frame). Activation frames are array objects that store a pointer to the previous frame, the call depth level, and local arguments. Garbage collection uses the mark-and-sweep method.
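To make the two-stage idea concrete, here is a toy sketch (in Python rather than Common Lisp, and deliberately simplified) of flattening an expression tree into a linear instruction list and executing it on a tiny stack machine. The instruction set and encoding are invented for the sketch; the paper's actual VM uses activation frames, registers, and numeric opcodes.

```python
# Toy two-stage compiler: expression tree -> linear instruction list ->
# execution on a minimal stack machine. Everything here is a simplified
# illustration, not the paper's compiler.
def compile_expr(expr, code):
    if isinstance(expr, (int, float)):
        code.append(("PUSH", expr))          # constants go straight on the stack
    else:
        op, *args = expr
        for a in args:
            compile_expr(a, code)            # compile operands first
        code.append((op, len(args)))         # then the primitive operation
    return code

def run(code):
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        elif op == "+":
            stack.append(sum(stack.pop() for _ in range(arg)))
        elif op == "*":
            acc = 1
            for _ in range(arg):
                acc *= stack.pop()
            stack.append(acc)
    return stack.pop()

# (* 2 (+ 3 4)) written as a nested tuple instead of an s-expression
program = compile_expr(("*", 2, ("+", 3, 4)), [])
print(program)       # [('PUSH', 2), ('PUSH', 3), ('PUSH', 4), ('+', 2), ('*', 2)]
print(run(program))  # 14
```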
Conclusion. A compiler model for the functional language Common Lisp was built and implemented. Compared with the interpreter, program execution speed increased by an average of 20 times. Further speedups can be achieved by applying various compiler optimizations at different stages; simple ones include optimization of arithmetic expressions, elimination of redundant instructions, and simplification of expressions.
Purpose of research. The research explores the possibilities of using cognitive technologies (CT) to solve poorly formalized tasks in the analysis of large volumes of textual data. Special attention is paid to automatic text summarization, one of the important problems of modern science and technology that stimulates the active development of artificial intelligence and machine learning methods.
Methods. An extractive method of automatic summary compilation was used. This approach selects the most significant fragments of the source text, individual sentences or phrases, according to certain criteria: the frequency of occurrence of words, the semantic importance of words and expressions, and the position of sentences within the document. These indicators make it possible to identify the main ideas of the text and to produce a compact summary that preserves the meaning of the source material.
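A minimal sketch of such frequency-based extractive summarization is shown below: each sentence is scored by the frequencies of its words, with a small bonus for early position, and the top-scoring sentences are returned in document order. The weights and thresholds are illustrative assumptions, not the paper's values.

```python
import re
from collections import Counter

# Minimal extractive summarizer: score sentences by word frequency plus
# a position bonus, keep the best ones in document order. The 0.1 bonus
# weight is an illustrative assumption.
def summarize(text, n_sentences=3):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(idx, sent):
        tokens = re.findall(r"\w+", sent.lower())
        if not tokens:
            return 0.0
        base = sum(freq[t] for t in tokens) / len(tokens)   # frequency criterion
        return base + 0.1 * (len(sentences) - idx) / len(sentences)  # position criterion

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(i, sentences[i]),
                    reverse=True)[:n_sentences]
    return " ".join(sentences[i] for i in sorted(ranked))
```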
Results. In the course of the experiments, a program implementing the extractive summarization method was developed in Python. The algorithm is based on analyzing the frequency of occurrence of keywords in the text, which provides an effective assessment of the significance of each sentence. The program has several advantages: simplicity of implementation and operation; open source code, which makes it easy to adapt the solution to users' specific needs; and high efficiency in processing large volumes of textual information. The developed tool effectively produces a summary that preserves the basic meaning and structure of the original content.
Conclusion. The cognitive algorithm has significantly improved the productivity of text analysis and processing. The proposed methodology can automate routine summarization operations, helping specialists quickly form a general idea of the contents of large documents and publications. This is especially important in the modern information society, with its ever-growing data flows that require rapid, high-quality comprehension. The study thus demonstrates the promise of introducing cognitive technologies into the automation of intellectual work and offers a practical solution to the urgent problem of summarizing large volumes of information, one that can become an important assistant for specialists and researchers.
Purpose of research. The research objective is to evaluate the effectiveness of a hybrid algorithm combining a genetic algorithm (GA) and the least squares method (LSM) for identifying bioimpedance parameters in pulmonary pathology diagnostics. The main attention is paid to improving measurement accuracy (residual norm ≤ 0.09) and to developing recommendations for integrating the method into software and hardware systems.
Methods. The study was performed on the basis of the E20-10 module ("L-Card"), which generates sinusoidal signals applied through ECG electrodes arranged so as to minimize parasitic capacitances. Bioimpedance parameters were identified with a hybrid algorithm in which genetic search is combined with the least squares method, including regularization (λ = 0.1). Data were analyzed using the Cole model, and measurements were controlled through software implemented in Delphi that provides real-time signal processing.
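As a hedged sketch of the least-squares half of such a hybrid scheme, the snippet below fits the Cole impedance model Z(ω) = R∞ + (R0 − R∞)/(1 + (jωτ)^α) by regularized least squares; in the full method, the genetic algorithm would supply the starting point x0. The frequency range, the synthetic "measured" data, and the parameter bounds are all assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Regularized LSM fit of the Cole model. The GA stage of the hybrid
# algorithm is not shown; it would provide the initial guess x0.
lam = 0.1                                    # regularization, as in the abstract
f = np.logspace(3, 6, 40)                    # 1 kHz .. 1 MHz - assumed range
w = 2 * np.pi * f

def cole(params, w):
    R0, Rinf, tau, alpha = params
    return Rinf + (R0 - Rinf) / (1 + (1j * w * tau) ** alpha)

true = np.array([500.0, 100.0, 1e-5, 0.8])   # synthetic "true" parameters
z_meas = cole(true, w) + np.random.normal(0, 1.0, w.size)

x0 = np.array([400.0, 80.0, 5e-6, 0.7])      # would come from the GA stage

def residuals(params):
    z = cole(params, w)
    fit = np.concatenate([(z - z_meas).real, (z - z_meas).imag])
    return np.concatenate([fit, lam * (params - x0)])  # Tikhonov term

sol = least_squares(residuals, x0,
                    bounds=([0, 0, 1e-8, 0.1], [2000, 1000, 1e-3, 1.0]))
print(sol.x)
```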
Results. A measurement and bioimpedance analysis complex was developed on the basis of the E20-10 module. The average discrepancy between the model and the experimental data is 4% (standard deviation 0.09). The identification error of the extracellular resistance R_extra does not exceed 2.1%, which confirms the reliability of the method in clinical settings.
Conclusion. The study confirmed that the hybrid algorithm provides high accuracy (residual norm 0.09) and stability in determining bioimpedance parameters. Analysis of the amplitude-frequency and phase-frequency characteristics allowed classification criteria to be developed that can automate the diagnosis of lung diseases. The results show the method's potential for building a medical diagnostic suite operating in real time (1 s per analysis). However, clinical implementation requires additional research on living subjects, as well as adaptation of the algorithms to various diseases.
Purpose of the work is to create models and pipeline schemes for high-performance processing of unitary codes in homogeneous computing systems.
The research methods are based on the theory of designing homogeneous computing systems and on methods for the synthesis of iterative networks and artificial intelligence systems. Unitary codes serve as a signal and information base for analyzing and planning parallel processes in homogeneous computing systems. Known one-dimensional and two-dimensional iterative networks form the basis for creating homogeneous pipeline schemes and recurrent computations in them.
However, iterative networks consisting of homogeneous computing cells with regular connections implement, by default, a single computing process and, as a rule, a single search-and-compute function. To increase the specific performance of pipeline schemes, the principles of multi-functionality and multi-pipelining have been developed, allowing several cells to implement more than one operation at each cycle of the pipeline.
Results. Practically significant pipeline schemes with the organization of several local computing processes with their own starting points have been created, which is necessary for the efficient operation of homogeneous computing systems - reconfigurable computing structures, database and knowledge machines, associative processors, etc.
Comparative assessments of the pipeline schemes have been carried out for commonly required operations on unitary codes: arbitration and the formation of the rightmost and leftmost runs of logical '1's. It is shown that applying the principles of multi-functionality and multi-pipelining yields a proportional decrease in the time per operation.
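To give a feel for one of these operations, the toy model below simulates a one-dimensional iterative network performing arbitration on a unitary code: each cell receives a "blocked" flag from its left neighbor, outputs its bit only if not blocked, and raises the flag once a '1' has passed, so only the leftmost '1' survives. This is an illustrative simulation of the cell logic, not the paper's hardware scheme.

```python
# One-dimensional iterative network for arbitration on a unitary code:
# keep the leftmost '1', clear the rest. One loop iteration = one cell.
def arbitrate(bits):
    out, blocked = [], 0
    for b in bits:                  # regular left-to-right cell connections
        out.append(b & ~blocked & 1)
        blocked |= b                # flag propagates to the right neighbor
    return out

print(arbitrate([0, 1, 1, 0, 1]))   # [0, 1, 0, 0, 0]
```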
Conclusion. The synthesis of parallel-pipeline schemes for processing unitary codes based on iterative networks is based on the unification and development of the principles of cell clocking, multi-functionality of cells, multi-pipelining, which allows for efficient processing of unitary code flows with dual interpretation of elements (digit/symbol).
Purpose. The purpose of the conducted research is to create a specialized geoinformation system as a single platform for systematization and automation of work with spatial and semantic data in planning and monitoring the state of points of the state geodetic and state leveling networks in the territory of the Kursk region, as well as the formation of reporting electronic documentation for the transfer of up-to-date information to the Federal Spatial Data Fund.
Methods. Functional requirements for the specialized geoinformation system were formulated on the basis of an object-oriented analysis of the subject area, and a conceptual model of the system implementing these requirements was built. Using the entity-relationship method, a relational database was designed for centralized storage and analysis of semantic and spatial information about points of the state geodetic networks. Thematic mapping is used to visualize data on the current state and location of network points on the ground.
Results. A cross-platform, web-oriented specialized geographic information system has been developed, providing users with the following functions: visualization of the location and current status of points of the state geodetic and state leveling networks on the map of the Kursk region; transformation of coordinates from the WGS 84 system to the MSK-46 system; keeping the database up to date; searching for points using various filters; determining distances between points or arbitrary segments on the map; calculating geodetic coordinates of the security zones of points; displaying the boundaries of security zones on the map; constructing an optimal route between points; and export/import operations for spreadsheet files.
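As an illustration of one listed function, the distance between two points, a geodesic distance can be sketched with the haversine formula on WGS 84 latitude/longitude; the real system may instead compute planar distances after transforming into MSK-46. The coordinates below are arbitrary examples near Kursk, not actual network points.

```python
from math import radians, sin, cos, asin, sqrt

# Haversine great-circle distance between two WGS 84 points, in meters.
# R is the mean Earth radius; this is a sketch, not the system's code.
def haversine_m(lat1, lon1, lat2, lon2, R=6371000.0):
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * R * asin(sqrt(a))

print(haversine_m(51.73, 36.19, 51.76, 36.29))  # two points near Kursk
```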
Conclusion. The developed geographic information system can be used by specialists of geodetic services and state land supervision departments for collecting, storing, editing, visualizing and analyzing spatial and semantic data when planning and monitoring the status of points of state geodetic networks, as well as when generating reporting documentation for transferring up-to-date information to the Federal Spatial Data Fund.
Purpose of research. A pressing problem with website construction systems is their price and availability, so the task of developing a website builder aimed at a wide range of users and freely accessible to them is relevant. The purpose of this study is to develop a software and information system for website construction.
Methods. The website builder was written in PHP. HTML, CSS, JavaScript, and Bootstrap were also used to create the user interface. MySQL, which is distributed under the GNU General Public License, was used to work with data, providing free use and reducing financial costs for software users. MySQL is known for its high performance, stability, and support for cross-platform data and code. The database is managed using the open-source web application phpMyAdmin.
Results. The developed website builder provides the user with the following features: navigation in the user menu; registration of a new user, authorization of a registered user, and logging out of the personal account; creation and deletion of a site; adding and deleting blocks and editing a block's text, color and background; and editing the background of the site. The system is a single module that provides management of created sites and editing of their contents. The result of the user's work is an HTML file containing the created website together with its associated CSS files. Testing verified the constructor's functionality with all provided functions exercised.
Conclusion. The developed website builder is stable and ready for implementation. The system is designed to simplify the creation of websites by minimizing time and financial costs. The created website builder can be used by a wide range of users.
Purpose. Development and validation of an adaptive NCC methodology with hybrid weight optimization and dynamic correction of membership functions for unstable markets.
Methods. The methodology includes a proposal for a three-level NCC architecture (5 inputs, 4 hidden nodes, 3 outputs) initialized by the Saaty method with a consistency ratio of CR=0.038; hybrid weight optimization combining the particle swarm algorithm (PSO) and adaptive regularization; and quarterly adaptation of trapezoidal membership functions based on streaming clustering using Streaming C-means and exponential smoothing (EMA).
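Two ingredients named in the methods, a trapezoidal membership function and its smoothed quarterly correction, can be sketched as follows. The corner values, the EMA smoothing factor, and the correction rule details are illustrative assumptions; the clustering stage that re-estimates the corners is not shown.

```python
import numpy as np

# Trapezoidal membership function and EMA-based drift of its corners.
# All parameter values are illustrative assumptions.
def trapezoid(x, a, b, c, d):
    """Membership degree of x for a trapezoid with corners a <= b <= c <= d."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def ema_update(corners, new_corners, alpha=0.3):
    """Quarterly correction: exponentially smooth the old corners toward
    corners re-estimated from the latest cluster centers."""
    return tuple(alpha * n + (1 - alpha) * o for o, n in zip(corners, new_corners))

corners = (0.0, 0.2, 0.5, 0.8)
print(trapezoid(0.35, *corners))                  # inside plateau -> 1.0
corners = ema_update(corners, (0.1, 0.3, 0.6, 0.9))
print(corners)
```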
Results. Testing on data from retail chain N (a 62-week span, 345 observations) showed: high prediction accuracy, with a MAPE of 7.2% (95% confidence interval [6.8; 7.6]), statistically significantly lower (p < 0.01) than the errors of the LSTM (9.8%) and static NCC (15.8%) models and comparable to the accuracy of XGBoost (7.8%, p = 0.12), while the adaptive NCC is superior in the interpretability of causal relationships (for example, the weight of the marketing budget's impact on sales is w₁₁ = 0.78 ± 0.05); increased robustness, with a smaller growth of forecast error during the March shock period (+49.2% for the adaptive NCC versus +86.9% for LSTM); and significant economic efficiency, confirmed by implementation in an ERP system: a 15.2% reduction in logistics costs (absolute savings of 5.1 million rubles), a reduction of inventory turnover from 18.3 to 15.1 days, a quarterly ROI of 287.5%, and an estimated net present value (NPV) of 9.2 million rubles (95% CI [8.1; 10.3]).
Conclusion. The developed methodology provides highly accurate, interpretable, and robust sales forecasting in unstable market conditions, proving its practical effectiveness and economic feasibility. Promising areas of development include the automation of map construction using GANs, the acceleration of calculations through CUDA implementation, and the integration with graph neural networks (GNN).
Purpose of research. To improve the reliability of object recognition in images by investigating the effect of gamma correction of the input image on the object recognition quality.
Methods. Pre-processing of images obtained from the traffic violation video recording complex installed in the city of Kursk includes gamma correction, conversion from RGB to grayscale, blurring with a Gaussian filter, edge detection based on the Canny algorithm, and classification of objects using the YOLO algorithm.
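A minimal OpenCV sketch of this preprocessing chain is shown below: gamma correction via a lookup table, grayscale conversion, Gaussian blur, and Canny edges. The kernel size, Canny thresholds, and file path are illustrative assumptions; the YOLO detection stage is out of scope here.

```python
import cv2
import numpy as np

# Preprocessing chain from the abstract: gamma correction (LUT),
# RGB->grayscale, Gaussian blur, Canny edges. Kernel size and
# thresholds are assumptions; detection (YOLO) is not shown.
def preprocess(bgr, gamma=1.5):
    lut = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    corrected = cv2.LUT(bgr, lut)                 # gamma correction
    gray = cv2.cvtColor(corrected, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    return cv2.Canny(blurred, 50, 150)

edges = preprocess(cv2.imread("frame.jpg"))       # placeholder file name
```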
Results. The main advantages of adaptive traffic control systems are considered. The structure of the pedestrian crossing control system is described, along with the stages of input image preprocessing, including gamma correction, and their effect on the reliability of object detection. The Recall metric was calculated to quantify detection efficiency at different gamma correction values for each class under consideration: pedestrians (Recall = 0.46), cars (Recall = 0.824), traffic lights (Recall = 0.60).
Conclusion. A series of experiments demonstrates a positive effect of gamma correction on recognition efficiency only for certain object classes, such as traffic lights, which require a minimum value of γ ≈ 1.5 (gamma setting 100) before recognition begins. Detection of the other classes considered, pedestrians and cars, remains stable at any gamma setting in the range [0; 200]. The largest number of detections was recorded at settings of 20 and 80 for pedestrians and at 60, 100 and 120 for cars.
Purpose of research. To develop a self-learning algorithm for a two-layer neural network model that is as efficient as possible given the current technical implementations of intelligent systems, including those designed for pattern recognition problems. The algorithm is based on increasing the number of neurons and varying the weight coefficients of synaptic connections, with the possibility of extending it to a high-order multiconnected neural network with an inner product of vectors.
Methods. The paper proposes an approach to synthesizing a high-order multiconnected neural network model with an inner product of vectors, together with a self-learning algorithm for such a network. To reduce the resource intensity of the operations performed, the algorithm rapidly corrects the elements of the reference matrix instead of traditionally varying the weight coefficients of synaptic connections.
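The core idea, classifying by inner products with rows of a reference matrix and correcting that matrix directly on errors rather than retraining synaptic weights, can be sketched as a toy. The sizes, data, and the specific correction rule below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Toy sketch: classify by inner product with reference-matrix rows;
# on a misclassification, correct the matrix rows directly instead of
# varying synaptic weights. Rule and sizes are assumptions.
rng = np.random.default_rng(0)
R = rng.normal(size=(10, 64))                 # reference matrix: 10 classes

def classify(x):
    return int(np.argmax(R @ x))              # inner product with references

def self_learn(x, label, eta=0.5):
    pred = classify(x)
    if pred != label:                         # direct reference correction
        R[label] += eta * x                   # pull target reference toward x
        R[pred] -= eta * x                    # push wrong winner away
    return pred

x = rng.normal(size=64)
self_learn(x, label=3)
print(classify(x))
```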
Results. The proposed method was implemented as a software application for self-training a high-order neural network on types of speech sounds represented in raster format, pre-segmented from the general stream and transformed into polar coordinates for ease of processing, with the resulting images stored as a training set.
Conclusion. In software testing, the developed algorithm demonstrated relatively high efficiency by eliminating the resource-intensive operation of varying weight coefficients and replacing it with direct correction of the reference matrix. It converges in a finite number of steps, owing to the limited number of first-approximation codes of the reference vectors, and shows noticeable performance gains compared with known analogs.
ISSN 2686-6757 (Online)