Abstract: This paper introduces the deep gradient network (DGNet), a novel deep framework that exploits object-gradient supervision for camouflaged object detection (COD). It decouples the task into two connected branches, i.e., a context encoder and a texture encoder. The essential connection between them is the gradient-induced transition, which represents a soft grouping between context and texture features. Benefiting from this simple but efficient framework, DGNet outperforms existing state-of-the-art COD models by a large margin. Notably, our efficient version, DGNet-S, runs in real time (80 fps) and achieves results comparable to the cutting-edge model JCSOD-CVPR21 with only 6.82% of its parameters. Application results also show that the proposed DGNet performs well in polyp segmentation, defect detection, and transparent object segmentation tasks. The code will be made available at https://github.com/GewelsJI/DGNet.
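As a rough illustration of the object-gradient supervision described in the abstract, the supervision target can be thought of as the gradient magnitude of the object region, so that the loss concentrates on texture-rich boundary areas. The sketch below is a minimal numpy version of this idea; the function name and the finite-difference recipe are illustrative assumptions, not DGNet's actual implementation.

```python
import numpy as np

def object_gradient_map(mask: np.ndarray) -> np.ndarray:
    """Finite-difference gradient magnitude of a binary object mask.

    A hypothetical stand-in for an object-gradient supervision target;
    the paper's exact construction may differ.
    """
    gy, gx = np.gradient(mask.astype(np.float64))
    g = np.hypot(gx, gy)
    return g / g.max() if g.max() > 0 else g

# Toy 6x6 mask with a 2x2 "object" in the centre.
mask = np.zeros((6, 6))
mask[2:4, 2:4] = 1.0
grad = object_gradient_map(mask)
# The map is non-zero only around the object boundary, so a branch
# supervised with it is pushed to focus on edge/texture regions.
```

Such a map would be used as an auxiliary target for the texture branch, while the context branch is supervised with the usual segmentation mask.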
Funding: Supported by the Fundamental Research Funds for the Central Universities (Nankai University, No. 63243150).
Abstract: We introduce a novel bilateral reference framework (BiRefNet) for high-resolution dichotomous image segmentation (DIS). It comprises two essential components: the localization module (LM) and the reconstruction module (RM) with our proposed bilateral reference (BiRef). The LM aids object localization using global semantic information. Within the RM, we utilize BiRef for the reconstruction process, where hierarchical patches of images provide the source reference and gradient maps serve as the target reference. These components collaborate to generate the final predicted maps. We also introduce auxiliary gradient supervision to enhance the focus on regions with finer details. In addition, we outline practical training strategies tailored for DIS to improve map quality and the training process. To validate the general applicability of our approach, we conduct extensive experiments on four tasks, showing that BiRefNet exhibits remarkable performance and outperforms task-specific cutting-edge methods across all benchmarks. Our code is publicly available at https://github.com/ZhengPeng7/BiRefNet.
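The bilateral-reference idea above pairs decoder features with two extra signals: a source reference taken from the high-resolution image patches and a target reference taken from gradient maps. The toy sketch below shows one plausible way such references could be attached to a feature map (simple pooling plus channel concatenation); the function name and the fusion rule are assumptions for illustration, not BiRefNet's actual code.

```python
import numpy as np

def bilateral_reference(feat: np.ndarray,
                        image: np.ndarray,
                        grad_map: np.ndarray) -> np.ndarray:
    """Toy bilateral-reference fusion: enrich decoder features with a
    source reference (image patch pooled to feature resolution) and a
    target reference (gradient map pooled the same way)."""
    h, w = feat.shape[1:]
    fh, fw = image.shape[0] // h, image.shape[1] // w
    # Source reference: average-pool the high-res patch down to (h, w).
    src = image.reshape(h, fh, w, fw).mean(axis=(1, 3))[None]
    # Target reference: pool the gradient map the same way.
    tgt = grad_map.reshape(h, fh, w, fw).mean(axis=(1, 3))[None]
    # Fuse by channel concatenation (an illustrative choice).
    return np.concatenate([feat, src, tgt], axis=0)

feat = np.zeros((8, 4, 4))          # C=8 decoder features at 4x4
image = np.random.rand(16, 16)      # high-resolution source patch
grad = np.random.rand(16, 16)       # gradient-map target reference
out = bilateral_reference(feat, image, grad)   # shape (10, 4, 4)
```

In the real network the fusion happens hierarchically at multiple decoder stages, which is what lets the reconstruction recover fine details at high resolution.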