Articles in this Volume

Research Article Open Access
Hyperspectral anomaly detection research fusing global and nonlocal low-rank factorization and nonconvex sparse constraints
Hyperspectral anomaly detection is a key task in remote sensing: identifying targets whose spectra differ significantly from the background, without prior knowledge. Traditional methods characterize the sparsity of anomalies insufficiently and are susceptible to background noise. This paper transfers an existing advanced low-rank denoising technique, Global and Nonlocal Low-Rank Factorization (GLF), to anomaly detection as a background-modeling tool to obtain residual images. In the residual-processing stage, a variety of nonconvex penalty functions systematically replace the conventional ℓ2 penalty, and anomaly score maps are generated by pixel-wise aggregation to approximate the sparse distribution of anomalies more accurately. Experiments on multiple ABU datasets show that the proposed GLF-NC achieves significantly higher AUC than classical methods such as RX, RPCA-RX, and LRASR. Transferring GLF to anomaly detection and combining it with nonconvex penalties effectively improves detection accuracy, verifying the method's effectiveness in anomaly enhancement and background suppression.
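As a minimal illustration of the residual-scoring step the abstract describes (not the paper's implementation), the sketch below applies an elementwise nonconvex ℓp penalty (0 < p < 1, one common nonconvex choice) to a residual cube left after background removal, then aggregates across the spectral dimension into a per-pixel anomaly score map. The function name and the specific ℓp penalty are assumptions for illustration.

```python
import numpy as np

def anomaly_score_map(residual, p=0.5):
    """Score anomalies from a background-removed residual cube.

    residual: array of shape (H, W, B) -- spatial grid x spectral bands.
    p: exponent of the nonconvex ell_p penalty, 0 < p < 1 (assumed here;
       the paper evaluates several nonconvex penalties).
    Returns an (H, W) anomaly score map.
    """
    # Elementwise nonconvex penalty: |r|^p with p < 1 emphasizes
    # sparse, large-magnitude residuals more than an ell_2 penalty would.
    penalized = np.abs(residual) ** p
    # Pixel-wise aggregation across the spectral dimension.
    return penalized.sum(axis=-1)
```

A pixel whose residual is large in many bands accumulates a high score, while the suppressed background stays near zero.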
Research Article Open Access
Adaptive analysis of modulation recognition across full signal-to-noise ratio range based on lightweight CNN and ResNet
Modulation recognition is a core technology for non-cooperative communications and spectrum monitoring. Traditional feature-based machine learning methods lack robustness at low-to-medium signal-to-noise ratios (SNRs), while deep learning models, despite higher recognition accuracy, face a trade-off between computational efficiency and recognition precision. This study systematically compares a lightweight convolutional neural network (CNN2) and a residual neural network (ResNet) across the full SNR range to determine which model suits which scenario. Experiments used the RML2016.10a dataset, which contains 24 modulation types, with nine SNR gradients from -8 to 24 dB, under consistent training parameters in a Mac CPU environment. The results indicate that, in recognition accuracy, ResNet outperforms CNN2 across the entire SNR range, achieving up to 59.33% higher accuracy in the low-to-medium SNR range (-8 to 8 dB) and excelling particularly at recognizing complex modulated signals; in computational efficiency, CNN2 holds a significant advantage. This provides a basis for scenario-specific model selection: where high recognition accuracy at low-to-medium SNR is paramount, ResNet should be preferred; for edge-device deployments with limited computational resources, CNN2 is the better choice.
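The identity shortcut that distinguishes ResNet from a plain CNN can be sketched minimally as follows. This is a NumPy illustration of the residual-block idea only, using dense layers for brevity; it is not the paper's architecture, and the names `residual_block`, `w1`, and `w2` are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # Identity shortcut: output = relu(x + F(x)), where F(x) is a small
    # learned transform (two dense layers here; convolutions in a real
    # ResNet). The shortcut lets gradients bypass F, which is what makes
    # deeper networks trainable.
    h = relu(x @ w1)
    return relu(x + h @ w2)
```

With F initialized near zero the block starts as an identity map, so stacking many blocks does not degrade the signal, at the cost of extra computation relative to a lightweight CNN.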