Knowledge Distillation Based Algorithm for Low Quality Face Image Recognition
Abstract: Low-quality face recognition algorithms based on a unified feature subspace suffer from poor robustness to low-quality faces and limited feature representation capability. To address these shortcomings, a low-quality face image recognition algorithm based on knowledge distillation is proposed. First, the ResNeXt network is used as the backbone feature extraction network, and a dual-channel attention module is introduced to construct a teacher-student knowledge distillation framework with an attention mechanism. Second, the output features of the teacher network are adopted as label knowledge to transfer effective recognition features to the student network, while attention map features are adopted as intermediate-layer knowledge to compensate for the limited information carried by the output layer alone; combining the two forms of distillation enriches the feature knowledge and ensures the diversity of the teacher model's knowledge. Then, the weighted average of the label knowledge distillation loss, the attention map distillation loss, and the recognition loss is fused into the total network loss function, giving the student model stronger learning ability. Finally, in tests on the AgeDB-30 and CPLFW test sets at different image quality levels, ablation experiments show that, compared with a generic face recognition model without distillation, the model trained with both kinds of knowledge distillation improves recognition accuracy by 2.25%, 11.33%, and 24.64% on AgeDB-30 and by 2.8%, 10.58%, and 27.85% on CPLFW, respectively. Comparative experiments show that the proposed algorithm also achieves varying degrees of accuracy improvement over other mainstream algorithms.
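The total loss described in the abstract, a weighted fusion of the label knowledge distillation loss, the attention map distillation loss, and the recognition loss, can be sketched as follows. This is a minimal illustration only: the weight values, the use of mean squared error for both distillation terms, and all function names are assumptions, not taken from the paper.

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single sample (the recognition loss)."""
    m = max(logits)                              # stabilize the exponentials
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[label] / sum(exps))

def total_loss(t_feat, s_feat, t_attn, s_attn, s_logits, label,
               w_label=0.5, w_attn=0.3, w_cls=0.2):
    """Weighted fusion of the three losses; the weights are hypothetical."""
    l_label = mse(t_feat, s_feat)             # teacher vs. student output features
    l_attn = mse(t_attn, s_attn)              # teacher vs. student attention maps (flattened)
    l_cls = cross_entropy(s_logits, label)    # student classification loss
    return w_label * l_label + w_attn * l_attn + w_cls * l_cls
```

When the student exactly matches the teacher's output features and attention maps, both distillation terms vanish and only the weighted recognition loss remains.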
Authors: YINGTEZHAER Aishanjiang; YILIHAMU Yaermaimaiti (School of Electrical Engineering, Xinjiang University, Urumqi 830017, China)
Source: 《科学技术与工程》 (Science Technology and Engineering), a Peking University core journal, 2025, Issue 2, pp. 695-703 (9 pages)
Funding: National Natural Science Foundation of China (62362063, 61866037).
Keywords: low-quality face images; knowledge distillation; attention mechanism; ResNeXt