Abstract: To address the susceptibility of facial expression recognition in natural conditions to viewing angle, illumination, and occlusion, as well as the class imbalance among expression categories in facial expression datasets, a Res2Net-based facial expression recognition method is proposed. Res2Net50 serves as the backbone network for feature extraction, and data augmentation with random flipping, scaling, and cropping is applied in the preprocessing stage to improve the model's generalization. Generalized mean pooling (GeM) is introduced to focus on the more salient regions of the image and strengthen the model's robustness, and the Focal Loss function is adopted to address expression class imbalance and misclassification, improving the recognition rate of hard-to-recognize expressions. The method achieves an accuracy of 70.41% on the FER2013 dataset, 1.53% higher than the original Res2Net50 network. The results show that it delivers better accuracy for facial expression recognition in natural conditions.
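As an illustration of the two components named in this abstract, the sketch below shows a PyTorch-style GeM pooling layer and a multi-class focal loss. The class names, the default values p=3 and gamma=2, and the choice to make p learnable are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized mean (GeM) pooling: (mean(x^p))^(1/p) over spatial dims.
    p = 1 recovers average pooling; large p approaches max pooling.
    Making p learnable is an assumption, not necessarily the paper's setting."""
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x):                        # x: (N, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)    # clamp before the power for stability
        x = F.adaptive_avg_pool2d(x, 1)          # spatial mean
        return x.pow(1.0 / self.p).flatten(1)    # (N, C) descriptor

class FocalLoss(nn.Module):
    """Multi-class focal loss: down-weights easy examples by (1 - p_t)^gamma
    so hard, misclassified expressions contribute more to the gradient."""
    def __init__(self, gamma=2.0, alpha=None):
        super().__init__()
        self.gamma = gamma
        self.alpha = alpha                       # optional per-class weight tensor

    def forward(self, logits, target):
        ce = F.cross_entropy(logits, target, weight=self.alpha, reduction="none")
        pt = torch.exp(-ce)                      # p_t: softmax prob of the true class
        return ((1.0 - pt) ** self.gamma * ce).mean()
```

In such a setup, the GeM layer would replace the backbone's global average pooling and the focal loss would replace cross-entropy during training; how exactly they were configured here is not specified in the abstract.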
Abstract: A novel image fusion network framework with an autonomous encoder and decoder is proposed to improve the visual impression of fused images by improving the quality of infrared and visible image fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge enhancement module. The encoder module uses an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the source image. An edge enhancement module (EEM) is designed to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in different regions of the source image, thereby improving the contrast of the fused image. The features extracted by the encoder and the EEM are combined in the fusion layer, and the decoder produces the fused image. Three datasets were chosen to test the proposed algorithm. The experimental results demonstrate that the network effectively preserves background and detail information from both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
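The abstract does not give the formula behind the modal maximum difference fusion strategy. The sketch below is one plausible reading in PyTorch: where the infrared and visible feature maps differ strongly, the fusion leans toward the modality with the stronger local response; where they agree, the features are simply averaged. The function name, the gating scheme, and the normalization are hypothetical and should not be read as the authors' actual rule.

```python
import torch

def max_difference_fusion(feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
    """Illustrative fusion of infrared/visible feature maps of shape (N, C, H, W).
    High-difference regions favor the modality with the larger activation;
    low-difference regions fall back to a plain average."""
    diff = (feat_ir - feat_vis).abs()                           # per-element modal difference
    # weight each modality by the relative strength of its response
    w = torch.softmax(torch.stack([feat_ir.abs(), feat_vis.abs()]), dim=0)
    fused_strong = w[0] * feat_ir + w[1] * feat_vis
    fused_mean = 0.5 * (feat_ir + feat_vis)
    # gate in [0, 1]: normalize the difference by its per-channel maximum
    gate = diff / (diff.amax(dim=(2, 3), keepdim=True) + 1e-6)
    return gate * fused_strong + (1.0 - gate) * fused_mean
```

In the described pipeline, such a rule would sit in the fusion layer between the encoder/EEM outputs and the decoder; the actual strategy may weight, select, or normalize differently.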