Abstract
It is an inevitable trend that industrial collaborative sewing robots will replace human operators to complete sewing automatically, but current industrial collaborative sewing robots have difficulty locating fabric edge contour information quickly and accurately, which affects sewing efficiency and quality. For printed fabrics, a method is proposed in which a fabric detection model is built offline with deep learning and called online to segment the fabric from the background, combined with a conventional contour detection algorithm to extract the fabric edge contour quickly and accurately. First, a fabric image dataset was built, and the VGG-UNet model was optimized through convolution splitting and a fusion loss function; the fabric dataset was fed into the optimized VGG-UNet model for training to construct the optimal fabric detection model. Second, the optimal fabric detection model was used to segment the fabric from the background. Then, an adaptive morphological opening operation was applied to the segmented fabric image to remove the burrs (raw edges) along the fabric edge. Finally, the Canny operator was used to extract the contour of the de-burred fabric image. Experimental results show that the proposed method removes fabric burrs well and extracts the edge contour of printed fabric quickly and accurately; the extracted contour fits the fabric edge contour closely, with a contour extraction accuracy above 99% and an extraction time of only 0.216 s. This work can provide fast and accurate coordinate information for subsequent robot trajectory planning, improve sewing quality and efficiency, and advance the realization of unmanned, automated sewing production lines.
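To make the online stage concrete, the following is a minimal OpenCV sketch of the two post-segmentation steps described above: a morphological opening to remove burrs, followed by Canny edge detection on the cleaned mask. It assumes the segmentation model has already produced a binary fabric/background mask; the function name, kernel size, and Canny thresholds are placeholders, and the paper's adaptive opening (which chooses its structuring element automatically) is not reproduced here.

import cv2

def extract_fabric_contour(fabric_mask, kernel_size=5, canny_low=50, canny_high=150):
    # fabric_mask: binary (0/255, uint8) fabric-vs-background mask produced by
    # the segmentation model; all parameter values here are illustrative only.
    # Opening (erosion then dilation) removes thin burrs left along the cut edge.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    cleaned = cv2.morphologyEx(fabric_mask, cv2.MORPH_OPEN, kernel)
    # Canny edge detection on the cleaned mask yields the fabric edge contour.
    return cv2.Canny(cleaned, canny_low, canny_high)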
Objective The industrial sewing robot based on contour extraction detects fabric edge contours with visual aids and works out the robot's movement trajectory from the fabric edge contour information so as to sew the fabric in conjunction with the sewing machine. However, the large number of raw edges left after cutting, the print pattern of the fabric, and the background of the fabric image acquisition all affect the accuracy of fabric edge contour extraction, and the extraction time directly affects sewing efficiency.
Method The conventional VGG-UNet model was optimized by convolution splitting and a fusion loss function to improve its inference speed and segmentation accuracy. The optimal fabric detection model was then trained and constructed with the optimized VGG-UNet to segment the printed fabric from the desktop background quickly and accurately. Fabric burrs were removed with an adaptive opening operation before the Canny operator was applied for edge detection to obtain accurate fabric edge contours.
Results The optimal training results of the optimized VGG-UNet were 0.79%, 0.79%, 1.6%, and 0.79% higher than those of the original VGG-UNet model on the respective evaluation indices, the inference time was reduced by 10.368 ms, and the total number of parameters of the optimized VGG-UNet model was greatly reduced. The trained optimal fabric detection model showed obvious advantages in memory consumption and detection efficiency. The superimposed images showed that contour extraction accuracy was not affected even when the printed fabric was similar in color to the desktop background and the lighting was non-uniform. In addition, contour lines were hand-drawn on the original image and compared in OpenCV with the fabric edge contour lines extracted by the proposed method to determine the overlap of the two contour lines; the contour extraction accuracy exceeded 99%. The complete algorithm was implemented in PyTorch on a computer running Windows 11 with an NVIDIA GeForce GTX 1650 GPU and 16 GB of memory, and it took only 0.216 s to extract the edge contour of the fabric from one fabric image, whereas the conventional contour extraction method took 2.852 s. For the conventional method, extraction of the edge contour of the printed fabric was worst when the fabric color was close to the desktop background color and when the desktop reflection was severe. In addition, the conventional contour extraction method does not consider the burrs generated by fabric cutting, so it not only takes a long extraction time but also cannot remove noise and burrs efficiently, making it difficult to extract the edge contour of the printed fabric accurately.
Conclusion This paper proposes, for the first time, the use of deep learning combined with a conventional contour detection algorithm to extract fabric edge contours. It solves the problem that traditional fabric contour extraction methods are affected by fabric color, print pattern, fabric texture, and desktop background, and performs excellently in extracting the edge contours of fabrics with complex prints. The large number of raw edges generated along the fabric edges after cutting is also taken into account: the method can effectively remove these raw edges and extract the fabric edge contours quickly and accurately, and the extraction process is not affected by the print pattern, table background, color, fabric texture, or light source, so the method generalizes well and the extracted results fit the fabric edge contours closely.
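The Results report that the overlap between a hand-drawn contour and the extracted contour was computed with OpenCV, but the exact comparison is not described. The sketch below is only one plausible way to measure such an overlap, assuming both contours are available as binary edge maps of the same size; contour_overlap_ratio and tolerance_px are illustrative names, not taken from the paper.

import cv2
import numpy as np

def contour_overlap_ratio(extracted_edges, reference_edges, tolerance_px=2):
    # Both inputs are binary (0/255, uint8) edge maps: the extracted contour
    # and the hand-drawn reference contour.
    # Dilate the reference so small localisation offsets still count as overlap.
    size = 2 * tolerance_px + 1
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
    band = cv2.dilate(reference_edges, kernel)
    matched = np.count_nonzero(cv2.bitwise_and(extracted_edges, band))
    total = np.count_nonzero(extracted_edges)
    return matched / total if total else 0.0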
Authors
WEN Jiaqi
LI Xinrong
FENG Wenqian
LI Hansen
WEN Jiaqi; LI Xinrong; FENG Wenqian; LI Hansen (School of Mechanical Engineering, Tiangong University, Tianjin 300387, China; Key Laboratory of Modern Mechanical and Electrical Equipment Technology, Tianjin 300387, China)
Source
《纺织学报》
EI
CAS
CSCD
Peking University Core Journal (北大核心)
2024, No. 5, pp. 165-173 (9 pages)
Journal of Textile Research
Funding
Public Service Platform for Industrial Technology Infrastructure Project of the Ministry of Industry and Information Technology (2021-0173-2-1)
National Key Research and Development Program of China (2018YFB1308801).