TFGNet: Frequency-guided saliency detection for complex scenes

dc.citation.volume: 170
dc.contributor.author: Wang Y
dc.contributor.author: Wang R
dc.contributor.author: Liu J
dc.contributor.author: Xu R
dc.contributor.author: Wang T
dc.contributor.author: Hou F
dc.contributor.author: Liu B
dc.contributor.author: Lei N
dc.date.accessioned: 2025-01-21T01:34:54Z
dc.date.available: 2025-01-21T01:34:54Z
dc.date.issued: 2025-01-08
dc.description.abstract: Salient object detection (SOD) with accurate boundaries in complex and chaotic natural or social scenes remains a significant challenge. Many edge-aware and/or two-branch models rely on exchanging global and local information between multistage features, which can propagate errors and lead to incorrect predictions. To address this issue, this work explores the fundamental problems in current U-Net-based SOD models from the perspective of image spatial-frequency decomposition and synthesis. A concise and efficient Frequency-Guided Network (TFGNet) is proposed that simultaneously learns the boundary details (high spatial frequency) and inner regions (low spatial frequency) of salient regions in two separate branches. Each branch uses a Multiscale Frequency Feature Enhancement (FFE) module to learn pixel-wise frequency features and a Transformer-based decoder to learn mask-wise frequency features, enabling a more comprehensive understanding of salient regions. TFGNet eliminates the need to exchange global and local features at the intermediate layers of the two branches, thereby reducing interference from erroneous information. A hybrid loss function combining binary cross-entropy (BCE), intersection-over-union (IoU), and histogram dissimilarity is also proposed to ensure pixel accuracy, structural integrity, and frequency-distribution consistency between the ground truth and predicted saliency maps. Comprehensive evaluations on five widely used SOD datasets and one underwater SOD dataset demonstrate the superior performance of TFGNet compared with state-of-the-art methods. The code and results are available at https://github.com/yiwangtz/TFGNet.
dc.description.confidential: false
dc.edition.edition: February 2025
dc.identifier.citation: Wang Y, Wang R, Liu J, Xu R, Wang T, Hou F, Liu B, Lei N. (2025). TFGNet: Frequency-guided saliency detection for complex scenes. Applied Soft Computing. 170.
dc.identifier.doi: 10.1016/j.asoc.2024.112685
dc.identifier.eissn: 1872-9681
dc.identifier.elements-type: journal-article
dc.identifier.issn: 1568-4946
dc.identifier.number: 112685
dc.identifier.pii: S1568494624014595
dc.identifier.uri: https://mro.massey.ac.nz/handle/10179/72383
dc.language: English
dc.publisher: Elsevier B.V.
dc.publisher.uri: https://www.sciencedirect.com/science/article/pii/S1568494624014595
dc.relation.isPartOf: Applied Soft Computing
dc.rights: (c) 2025 The Author/s
dc.rights: CC BY 4.0
dc.rights: https://creativecommons.org/licenses/by/4.0/
dc.subject: Salient object detection
dc.subject: Spatial frequency
dc.subject: Convolutional neural network
dc.subject: Transformer
dc.title: TFGNet: Frequency-guided saliency detection for complex scenes
dc.type: Journal article
pubs.elements-id: 493331
pubs.organisational-group: Other
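
The abstract above describes a hybrid loss combining BCE, IoU, and histogram dissimilarity. The following is a minimal PyTorch-style sketch of how such a three-term loss could be composed; the term weights, the 64-bin histogram, and the L1 histogram distance are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn.functional as F

def hybrid_loss(logits, gt, bins=64, w_bce=1.0, w_iou=1.0, w_hist=1.0):
    # Illustrative sketch: weights and histogram details are assumptions,
    # not the paper's settings.
    # Pixel accuracy: binary cross-entropy on the raw logits.
    bce = F.binary_cross_entropy_with_logits(logits, gt)

    pred = torch.sigmoid(logits)

    # Structural integrity: soft IoU between prediction and ground truth.
    inter = (pred * gt).sum(dim=(1, 2, 3))
    union = (pred + gt - pred * gt).sum(dim=(1, 2, 3))
    iou_loss = 1.0 - (inter + 1.0) / (union + 1.0)

    # Distribution consistency: L1 distance between normalized grey-level
    # histograms. torch.histc is not differentiable, so a real training
    # loss would need a soft (e.g. kernel-binned) histogram instead.
    hist_p = torch.histc(pred.detach(), bins=bins, min=0.0, max=1.0)
    hist_g = torch.histc(gt, bins=bins, min=0.0, max=1.0)
    hist_p = hist_p / hist_p.sum().clamp(min=1e-8)
    hist_g = hist_g / hist_g.sum().clamp(min=1e-8)
    hist_loss = (hist_p - hist_g).abs().sum()

    return w_bce * bce + w_iou * iou_loss.mean() + w_hist * hist_loss

# Usage: logits and gt are (B, 1, H, W) tensors, gt in [0, 1].
loss = hybrid_loss(torch.randn(2, 1, 64, 64), torch.rand(2, 1, 64, 64))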
Files
Original bundle
Now showing 1 - 1 of 1
Loading...
Thumbnail Image
Name:
Wang_Yi_et_al_2025_Published.pdf
Size:
13.6 MB
Format:
Adobe Portable Document Format
Description:
493331 PDF.pdf
License bundle
Now showing 1 - 1 of 1
Loading...
Thumbnail Image
Name:
license.txt
Size:
9.22 KB
Format:
Plain Text
Description: