Journal Articles
Permanent URI for this collection: https://mro.massey.ac.nz/handle/10179/7915
3 results
Item TFGNet: Frequency-guided saliency detection for complex scenes (Elsevier B.V., 2025-01-08)
Wang Y; Wang R; Liu J; Xu R; Wang T; Hou F; Liu B; Lei N
Salient object detection (SOD) with accurate boundaries in complex and chaotic natural or social scenes remains a significant challenge. Many edge-aware and/or two-branch models rely on exchanging global and local information between multistage features, which can propagate errors and lead to incorrect predictions. To address this issue, this work examines the fundamental problems of current U-Net-based SOD models from the perspective of image spatial-frequency decomposition and synthesis. A concise and efficient Frequency-Guided Network (TFGNet) is proposed that simultaneously learns the boundary details (high spatial frequency) and inner regions (low spatial frequency) of salient regions in two separate branches. Each branch uses a Multiscale Frequency Feature Enhancement (FFE) module to learn pixel-wise frequency features and a Transformer-based decoder to learn mask-wise frequency features, yielding a more comprehensive understanding of salient regions. TFGNet eliminates the need to exchange global and local features at the intermediate layers of the two branches, thereby reducing interference from erroneous information. A hybrid loss function is also proposed that combines BCE, IoU, and histogram dissimilarity to ensure pixel accuracy, structural integrity, and frequency-distribution consistency between ground-truth and predicted saliency maps. Comprehensive evaluations on five widely used SOD datasets and one underwater SOD dataset demonstrate the superior performance of TFGNet compared to state-of-the-art methods.
The code and results are available at https://github.com/yiwangtz/TFGNet.

Item k-NN attention-based video vision transformer for action recognition (Elsevier B.V., 2024-03-14)
Sun W; Ma Y; Wang R
Action recognition aims to understand human behavior and predict a label for each action. Recently, the Vision Transformer (ViT) has achieved remarkable performance on action recognition by modeling long token sequences over the spatial and temporal dimensions of a video. The fully connected self-attention layer is the fundamental building block of the vanilla Transformer. However, this architecture ignores the locality of video frame patches, admits non-informative tokens, and can increase computational complexity. To solve this problem, we propose a k-NN attention-based Video Vision Transformer (k-ViViT) network for action recognition. We replace the original self-attention in the Video Vision Transformer (ViViT) with k-NN attention, which streamlines training and discards irrelevant or noisy tokens in the input sequence. Experiments on the UCF101 and HMDB51 datasets verify the effectiveness of our model: the proposed k-ViViT achieves superior accuracy compared to several state-of-the-art models on these action recognition datasets.

Item WBNet: Weakly-supervised salient object detection via scribble and pseudo-background priors (Elsevier Ltd, 2024-10)
Wang Y; Wang R; He X; Lin C; Wang T; Jia Q; Fan X
Weakly supervised salient object detection (WSOD) methods endeavor to expand sparse labels into richer saliency cues in various ways. An effective approach is to use pseudo labels from multiple unsupervised self-learning methods, but inaccurate and inconsistent pseudo labels can ultimately degrade detection performance.
To tackle this problem, we develop a new multi-source WSOD framework, WBNet, that effectively combines pseudo-background (non-salient region) labels with scribble labels to obtain more accurate salient features. We first design a comprehensive salient pseudo-mask generator from multiple self-learning features, and pioneer the generation of salient pseudo-labels via point-prompted and box-prompted Segment Anything Models (SAM). WBNet then leverages a pixel-level Feature Aggregation Module (FAM), a mask-level Transformer decoder (TFD), and an auxiliary Boundary Prediction Module (EPM) with a hybrid loss function to handle complex saliency detection tasks. Comprehensive evaluations against state-of-the-art methods on five widely used datasets show that the proposed method significantly improves saliency detection performance. The code and results are publicly available at https://github.com/yiwangtz/WBNet.
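Both the TFGNet and WBNet abstracts above mention hybrid losses built from BCE and IoU terms, with TFGNet adding a histogram-dissimilarity term for frequency-distribution consistency. As a rough illustration only — the function names, weights, and the simple L1 histogram term below are assumptions for the sketch, not the papers' implementations — such a loss can be combined like this:

```python
import numpy as np

def bce_loss(pred, gt, eps=1e-7):
    """Pixel-wise binary cross-entropy between predicted and ground-truth maps."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(gt * np.log(pred) + (1 - gt) * np.log(1 - pred))

def soft_iou_loss(pred, gt, eps=1e-7):
    """Soft IoU loss (1 - intersection/union) on continuous saliency maps."""
    inter = np.sum(pred * gt)
    union = np.sum(pred) + np.sum(gt) - inter
    return 1.0 - (inter + eps) / (union + eps)

def hist_dissimilarity(pred, gt, bins=16):
    """L1 distance between normalized intensity histograms — a simple
    stand-in (assumption) for the paper's histogram-dissimilarity term."""
    hp, _ = np.histogram(pred, bins=bins, range=(0, 1))
    hg, _ = np.histogram(gt, bins=bins, range=(0, 1))
    hp = hp / max(hp.sum(), 1)
    hg = hg / max(hg.sum(), 1)
    return 0.5 * np.abs(hp - hg).sum()

def hybrid_loss(pred, gt, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the three terms; the weights are illustrative."""
    return (w[0] * bce_loss(pred, gt)
            + w[1] * soft_iou_loss(pred, gt)
            + w[2] * hist_dissimilarity(pred, gt))
```

A prediction close to the ground truth scores low on all three terms, while a uniform 0.5 map is penalized by each of them.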

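The k-NN attention described in the k-ViViT abstract restricts each query to its k highest-scoring keys, so low-similarity (noisy or irrelevant) tokens contribute nothing to the softmax. A minimal single-head sketch — the function name, shapes, and masking strategy are illustrative assumptions, not the paper's code:

```python
import numpy as np

def knn_attention(q, k, v, topk):
    """Single-head k-NN attention: keep only the top-k scores per query,
    mask the rest to -inf before the softmax."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # (Nq, Nk) scaled similarities
    kth = np.sort(scores, axis=-1)[:, -topk][:, None]  # k-th largest score per row
    masked = np.where(scores >= kth, scores, -np.inf)  # drop all but top-k (ties kept)
    masked -= masked.max(axis=-1, keepdims=True)       # numerical stability
    w = np.exp(masked)                                 # exp(-inf) = 0: masked keys vanish
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v
```

With topk equal to the number of keys this reduces to ordinary softmax attention; with topk=1 each query simply copies the value of its nearest key.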