Articles in this Volume

Research Article Open Access
Hyperspectral anomaly detection research fusing global and nonlocal low-rank factorization and nonconvex sparse constraints
Hyperspectral anomaly detection is a key task in remote sensing that aims to identify targets whose spectra differ significantly from the background, without prior knowledge. Traditional methods characterize the sparsity of anomalies insufficiently and are susceptible to background noise. This paper adapts Global and Nonlocal Low-Rank Factorization (GLF), an advanced low-rank denoising technique, to anomaly detection as a background modeling tool for obtaining residual images. In the residual processing stage, a variety of nonconvex penalty functions is systematically adopted in place of the traditional L2 norm, and anomaly score maps are generated through pixel-wise aggregation to more accurately approximate the sparse distribution of anomalies. Experiments on multiple ABU datasets show that the AUC of the proposed GLF-NC is significantly superior to that of classical methods such as RX, RPCA-RX, and LRASR. Transferring GLF to anomaly detection and combining it with nonconvex penalties effectively improves detection accuracy, verifying the method's effectiveness in anomaly enhancement and background suppression.
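As a concrete illustration of the residual-processing step described in the abstract, the sketch below scores pixels with an Lp (p < 1) penalty, one common nonconvex sparsity measure (not necessarily the paper's exact choice), aggregated over the spectral bands of a residual cube:

```python
import numpy as np

def anomaly_score_map(residual, p=0.5, eps=1e-8):
    """Pixel-wise anomaly scores from a background-subtracted residual cube.

    residual : (H, W, B) array of residuals after low-rank background removal.
    Applies an Lp (p < 1) penalty per band, an illustrative nonconvex
    sparsity measure, and aggregates over the spectral axis.
    """
    # |r|^p with p < 1 grows sub-linearly, so strong anomalous responses are
    # penalized less aggressively than under the L2 norm, sharpening contrast.
    return np.sum((np.abs(residual) + eps) ** p, axis=-1)

# Toy example: flat background residual with one bright anomalous pixel.
cube = np.zeros((4, 4, 10))
cube[2, 2, :] = 5.0                  # injected anomaly
scores = anomaly_score_map(cube)
assert scores.argmax() == 2 * 4 + 2  # anomaly pixel receives the top score
```

The choice p = 0.5 and the additive `eps` are assumptions for numerical stability, not parameters from the paper.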
Research Article Open Access
Adaptive analysis of modulation recognition across full signal-to-noise ratio range based on lightweight CNN and ResNet
Modulation recognition is a core technology for non-cooperative communications and spectrum monitoring. Traditional feature-based machine learning methods lack robustness in low-to-medium signal-to-noise ratio (SNR) environments, while deep learning models, despite higher recognition accuracy, face trade-offs between computational efficiency and recognition precision. This study focuses on a lightweight convolutional neural network (CNN2) and a residual neural network (ResNet), systematically comparing their modulation recognition performance across the full SNR range to determine model suitability for diverse scenarios. Experiments were conducted on the RML2016.10a dataset, which contains 11 modulation types, across nine SNR gradients ranging from -8 to 24 dB, under consistent training parameters in a Mac CPU environment. The results indicate that, in recognition accuracy, ResNet is superior across the entire SNR range, achieving up to 59.33% higher accuracy than CNN2 in the low-to-medium SNR range (-8 to 8 dB) and particularly excelling at recognizing complex modulated signals, whereas CNN2 holds a significant advantage in computational efficiency. This research provides a basis for scenario-specific model selection: where high recognition accuracy at low-to-medium SNR is paramount, ResNet should be prioritized; for edge device deployments with limited computational resources, CNN2 is the better choice.
Research Article Open Access
Research on electric vehicle energy consumption prediction based on BP neural network
With rapid technological and economic growth, the energy and pollution problems caused by motor vehicles have become increasingly apparent, and electric vehicles appear to be a promising solution. However, limited charging piles, long charging times, and inadequate supporting facilities leave drivers uneasy about electric vehicle range. Accurately estimating a vehicle's remaining range can effectively alleviate this "range anxiety", which in turn requires predicting the energy consumption along the planned path. In this paper, we build a two-level energy consumption prediction model that uses vehicle speed characteristics as a bridge to obtain accurate energy consumption predictions for electric vehicles. First, using experimentally obtained road traffic information and speed-time (v-t) data, the road and traffic characteristic parameters, vehicle speed characteristic parameters, and energy consumption of each segment are calculated, and the speed characteristic parameters suitable as intermediaries for the two-level model are selected. Second, a BP neural network for energy consumption prediction is established with the vehicle speed characteristic parameters as input and energy consumption as output, and a second BP neural network for speed characteristic prediction is established with the road and traffic characteristic parameters as input and the speed characteristic parameters as output. Finally, sequences of road and traffic characteristic parameters are extracted from the experimental data and fed into the two-level model to obtain predicted energy consumption, which is compared with actual consumption. Verification shows that the two-level energy consumption prediction model achieves high accuracy.
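The two-level chaining idea (road and traffic features predict speed features, which in turn predict energy) can be sketched with linear least-squares stand-ins for the BP networks; the data and mappings below are synthetic, and only the chaining structure mirrors the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data standing in for the two-level pipeline.
road = rng.normal(size=(200, 3))        # road/traffic characteristic parameters
W_true = rng.normal(size=(3, 2))
speed = road @ W_true                   # speed characteristic parameters
energy = speed @ np.array([0.7, 1.3])   # per-segment energy consumption

# Level 1: road/traffic features -> speed features.
W1, *_ = np.linalg.lstsq(road, speed, rcond=None)
# Level 2: speed features -> energy consumption.
w2, *_ = np.linalg.lstsq(speed, energy, rcond=None)

# Prediction chains the two levels: road -> predicted speed -> predicted energy.
pred_energy = (road @ W1) @ w2
assert np.allclose(pred_energy, energy, atol=1e-6)
```

In the paper each level is a trained BP neural network rather than a linear map; the point here is that the intermediate speed representation links the two models.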
Research Article Open Access
Visual SLAM algorithm with dynamic point elimination based on YOLACT network
This paper proposes a dynamic visual SLAM algorithm that combines the YOLACT network with a geometric method in a dynamic point detection module for eliminating dynamic points. A dense optical flow-based dynamic point detection scheme compensates for the over-reliance of instance-segmentation-based elimination on object prior information. To address the low accuracy of YOLACT's raw output masks, a mask post-processing method based on image processing and morphology is proposed to repair the dynamic point masks output by the network. Finally, the module is integrated into the ORB-SLAM2 framework to construct a visual SLAM system suited to dynamic scenes. The proposed algorithm is tested on the public TUM dataset, which verifies the effectiveness of the module. Compared with the ORB-SLAM2 system, the localization accuracy of the proposed algorithm improves by 93.4% in indoor dynamic scenes.
Research Article Open Access
A high-quality localization-aware action recognition algorithm based on YOLOv11
To address the challenges of large-scale variations in human targets, the loss of spatial details, and the inconsistency between prediction confidence and localization quality in complex scenarios, this study proposes a high-quality localization-aware action recognition method based on YOLOv11. An SPDConv downsampling structure is introduced into the backbone network and the feature fusion stage to enhance the representation capability of small-scale target features. In addition, a localization quality estimation branch is incorporated into the detection head to explicitly model the Intersection over Union (IoU) of bounding boxes, and the confidence score is reweighted by combining the estimated localization quality with class probability. Experimental results demonstrate that the proposed method achieves an mAP@50 of 96.0% and an mAP@50–95 of 72.3%, representing improvements of 0.3% and 2.8%, respectively, compared with YOLOv11.
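The confidence-reweighting idea can be illustrated in a few lines. The geometric-mean fusion below is one common formulation of IoU-aware scoring, assumed for illustration rather than taken from the paper:

```python
def reweighted_confidence(class_prob, iou_estimate):
    """Fuse class probability with a predicted localization quality (IoU).

    The geometric mean is one illustrative fusion rule; the paper's exact
    reweighting scheme may differ.
    """
    return (class_prob * iou_estimate) ** 0.5

# A well-localized box with a moderate class score can outrank a poorly
# localized box with a high class score.
a = reweighted_confidence(0.70, 0.90)   # good localization
b = reweighted_confidence(0.90, 0.40)   # poor localization
assert a > b
```

Without the IoU term, ranking by class probability alone would prefer the poorly localized box, which is exactly the confidence/localization inconsistency the method targets.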
Research Article Open Access
Application research on intelligent probing based on large-scale AI models in internet fault handling
With the exponential growth in the complexity of Internet architectures and the widespread adoption of cloud-native service technologies, traditional operation and maintenance (O&M) models—largely reliant on the paradigm of "expert rules + fixed scripts"—have become increasingly passive and inefficient when confronted with unknown faults and massive volumes of alerts. This study focuses on the application of large-scale AI model–based intelligent agents across the full lifecycle of Internet fault handling, aiming to construct autonomous O&M agents endowed with capabilities of perception, decision-making, and execution. The paper first analyzes the core challenges in current fault management: alert storms leading to missed and false incident reports, cross-system data silos hindering root cause localization, and heavy reliance on expert experience in manual troubleshooting, resulting in delayed response times. On this basis, a hierarchical solution architecture based on large-model agents is proposed, comprising a multi-source data perception layer, a fault reasoning and decision-making layer, and an automated execution layer [1]. By integrating retrieval-augmented generation (RAG) techniques with an O&M knowledge base, the proposed approach equips intelligent agents with the ability to interpret topology metrics, log semantics, and change events. Furthermore, the introduction of chain-of-thought reasoning and reflection mechanisms enables the agents to simulate expert diagnostic pathways, thereby achieving millisecond-level anomaly detection and minute-level root cause identification.
Research Article Open Access
Comprehensive techno-economic evaluation of electrochemical recirculating aquaculture system based on AHP
To address the lack of a comprehensive evaluation system for Recirculating Aquaculture System (RAS) technology, this study constructs a comprehensive techno-economic evaluation framework based on the Analytic Hierarchy Process (AHP) and conducts an empirical comparison between a seawater RAS (electrochemical method) and a freshwater RAS (biological method) at an aquaculture base in Zhejiang Province. The AHP results show that technical indicators carry the highest weight (52.3%), with pollutant removal rate (C1), operational stability (C2), and water environmental quality (C6) ranking as the top three sub-indicators; the consistency ratio (CR) is below 0.1, indicating the judgment matrices are reliable. Empirically, the seawater RAS achieves an ammonia nitrogen removal rate of 78% and a nitrite nitrogen removal rate of 95%, with good production benefits but excessively high infrastructure investment and a risk of excessive SO₂ and CO emissions.
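The AHP machinery behind the reported weights and the CR < 0.1 check is standard: weights come from the principal eigenvector of a pairwise-comparison matrix, and the consistency ratio compares the consistency index against Saaty's random index. A minimal sketch, with an illustrative 3x3 comparison matrix rather than the study's actual judgments:

```python
import numpy as np

def ahp_weights_and_cr(A):
    """Principal-eigenvector weights and consistency ratio (CR) for an AHP
    pairwise-comparison matrix A."""
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                    # normalized priority weights
    lam = vals[k].real                              # principal eigenvalue
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    CI = (lam - n) / (n - 1)                        # consistency index
    CR = CI / RI if RI else 0.0
    return w, CR

# Illustrative comparison of three criteria (e.g. technical / economic /
# environmental); the numbers are hypothetical, not the study's judgments.
A = np.array([[1.0, 2.0, 3.0],
              [0.5, 1.0, 2.0],
              [1/3, 0.5, 1.0]])
w, CR = ahp_weights_and_cr(A)
assert abs(w.sum() - 1.0) < 1e-9
assert CR < 0.1   # passes Saaty's consistency threshold
```

A CR below 0.1 means the pairwise judgments are acceptably self-consistent, which is the check the abstract reports.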
Research Article Open Access
Why was the four-engine passenger aircraft discontinued? What engine improvements have been made since, and what challenges remain?
The Boeing 747 and Airbus A380 once dominated long-distance travel as symbols of safety, range, and status. However, advances in high-bypass turbofan engines, rising fuel prices, and growing environmental pressures have made the four-engine configuration increasingly uneconomic. This project investigates three questions: why four-engine airliners were discontinued, what engine improvements have enabled twin-engine aircraft to replace them, and what essential challenges remain for the future. Drawing on technical textbooks, academic articles, industry reports, and historical case studies, the dissertation explains the working principles of turbofan engines, traces the development of quadjets, and compares the performance, fuel efficiency, maintenance demands, and regulatory constraints of four- and two-engine layouts. The analysis shows that modern twin-engine wide-bodies can match or exceed the range and payload of classic quadjets with much lower fuel burn and maintenance cost, especially after the relaxation of ETOPS rules and the shift towards point-to-point networks. At the same time, gas-turbine technology is approaching limits in bypass ratio, temperature, and materials, while climate policy highlights unresolved CO₂ and non-CO₂ impacts. Future propulsion concepts such as open-rotor engines, hybrid-electric systems, and sustainable aviation fuels offer potential benefits but introduce new technical and economic trade-offs.
Research Article Open Access
Machine learning approaches to stock return prediction: evidence from the JPX Tokyo stock exchange challenge
Predicting stock returns remains a central challenge in quantitative finance due to the noisy, non-stationary, and highly cross-sectional nature of financial markets. This paper investigates the use of machine learning methods for equity return prediction using data from the JPX Tokyo Stock Exchange Prediction Challenge. The dataset comprises daily price, volume, trading, and financial statement information for approximately 2,000 Japanese equities over the period 2017–2021, with performance evaluated through a ranking-based portfolio construction framework. We develop a modelling pipeline that emphasises data integrity, parsimonious feature engineering, and economically meaningful evaluation. Following minimal preprocessing focused on alignment and deduplication, features are constructed from adjusted prices, lagged returns, rolling window statistics, trading volume measures, and daily cross-sectional ranks. A set of linear regression models is employed as interpretable baselines, against which a gradient boosting decision tree model implemented via LightGBM is benchmarked. Model performance is assessed using both pointwise prediction diagnostics and a portfolio-based spread metric consistent with the competition objective, which measures the risk-adjusted return of a daily long–short strategy formed from predicted rankings. Results indicate that linear models struggle to capture the non-linear and interaction-driven structure of equity returns, while gradient boosting provides materially stronger performance in terms of portfolio level outcomes. Overall, the findings highlight the importance of cross-sectional evaluation and demonstrate that relatively simple, carefully constructed features combined with flexible tree-based models can deliver meaningful improvements in stock ranking tasks.
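The portfolio-based spread metric described above can be sketched as follows. The equal-weight top/bottom construction and the toy data are simplifications; the competition's actual metric weights positions and divides the mean daily spread by its standard deviation:

```python
import numpy as np
import pandas as pd

def daily_spread(df, top=2):
    """Per-day return spread of a long-short portfolio formed from predicted
    ranks: long the `top` highest-ranked names, short the `top` lowest."""
    def one_day(g):
        g = g.sort_values("pred", ascending=False)
        return g["ret"].head(top).mean() - g["ret"].tail(top).mean()
    return df.groupby("date").apply(one_day)

# Toy cross-section: predictions perfectly rank realized returns each day.
df = pd.DataFrame({
    "date": ["d1"] * 4 + ["d2"] * 4,
    "pred": [4, 3, 2, 1, 4, 3, 2, 1],
    "ret":  [0.04, 0.03, 0.02, 0.01, 0.02, 0.01, -0.01, -0.02],
})
spread = daily_spread(df, top=1)
assert (spread > 0).all()   # correct ranking yields a positive spread every day
```

A risk-adjusted summary of the strategy would then be `spread.mean() / spread.std()`, computed over the evaluation period.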
Research Article Open Access
Multi-UAV fire rescue path planning based on Monte Carlo city modeling
To address the issues of insufficient universality in algorithm validation and slow convergence in current path planning for multiple unmanned aerial vehicles (UAVs) involved in firefighting operations in urban fire scenarios, this study proposes a path planning and simulation method that integrates a Monte Carlo simulation–based urban model with an improved A* algorithm. A generalized urban model is constructed to reduce the path length and operational time of multi-UAV firefighting missions. The spatial distribution patterns of urban clusters are analyzed, and the A* algorithm is adopted as the path search method, with the shortest time and path length defined as the objective functions. In addition, a time-window optimization mechanism is incorporated to prevent collisions. Simulations and comparative analyses are conducted under three urban scenarios: large metropolitan cities, dual-core cities, and small-scale cities. The simulation results demonstrate the efficiency of the improved A* algorithm and its ability to enhance the operational performance of multi-UAV systems in urban fire scenarios. According to the numerical results, the improved A* algorithm achieves the fastest convergence across the three urban scenarios, reducing convergence time by up to 6.89%, 5.78%, and 1.27%, respectively, compared with other algorithms.
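For reference, a minimal A* search on an occupancy grid with a Manhattan-distance heuristic is sketched below; the paper's Monte Carlo city generation and time-window collision mechanism are omitted, so this only illustrates the base search the study improves upon:

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path length on a grid (1 = obstacle/building),
    using an admissible Manhattan-distance heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]   # (f = g + h, g, cell)
    best = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best.get(nxt, float("inf"))):
                best[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt))
    return None   # goal unreachable

# 3x3 map with a central building: the detour around it costs 4 steps.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
assert astar(grid, (0, 0), (2, 2)) == 4
```

Multi-UAV extensions would run this per vehicle and reject moves whose time windows overlap another UAV's occupancy of the same cell.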