In air traffic control communications (ATCC), misunderstandings between pilots and controllers can result in fatal aviation accidents. Advanced automatic speech recognition technology has therefore emerged as a promising means of preventing miscommunication and enhancing aviation safety. However, most existing speech recognition methods merely incorporate external language models on the decoder side, leading to insufficient semantic alignment between the speech and text modalities during the encoding phase. Furthermore, modeling long-distance acoustic context dependencies is challenging because speech sequences are far longer than their text transcripts, especially for the extended utterances in ATCC data. To address these issues, we propose a speech-text multimodal dual-tower architecture for speech recognition. It employs cross-modal interactions to achieve close semantic alignment during the encoding stage and to strengthen the modeling of long-distance auditory context dependencies. In addition, a two-stage training strategy is devised to derive semantics-aware acoustic representations effectively. The first stage pre-trains the speech-text multimodal encoding module to enhance inter-modal semantic alignment and long-distance acoustic context modeling. The second stage fine-tunes the entire network to bridge the gap in input modality between the training and inference phases and to boost generalization. Extensive experiments on the ATCC and AISHELL-1 datasets demonstrate the effectiveness of the proposed speech-text multimodal speech recognition method: it reduces the character error rate to 6.54% and 8.73%, respectively, substantial relative gains of 28.76% and 23.82% over the best baseline model. Case studies indicate that the resulting semantics-aware acoustic representations help to accurately recognize terms with similar pronunciations but distinct semantics. This research provides a novel modeling paradigm for semantics-aware speech recognition in air traffic control communications and could contribute to intelligent, efficient aviation safety management.
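To make the cross-modal interaction idea concrete, the following is a minimal PyTorch sketch of a dual-tower encoder in which speech representations attend to paired-text representations during pre-training. All module choices, dimensions, and names here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a speech-text dual-tower encoder with cross-modal
# attention. Sizes, layer counts, and wiring are assumptions for
# illustration; the paper's actual architecture may differ.
import torch
import torch.nn as nn

class CrossModalEncoder(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=4, vocab_size=5000):
        super().__init__()
        # Speech tower: encodes acoustic features (e.g., 80-dim filterbanks).
        self.speech_proj = nn.Linear(80, d_model)
        self.speech_tower = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_layers)
        # Text tower: encodes the paired transcript during pre-training.
        self.text_emb = nn.Embedding(vocab_size, d_model)
        self.text_tower = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_layers)
        # Cross-modal attention: speech queries attend to text keys/values,
        # pulling textual semantics into the acoustic representation.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads,
                                                batch_first=True)

    def forward(self, fbank, tokens):
        s = self.speech_tower(self.speech_proj(fbank))  # (B, T_s, D)
        t = self.text_tower(self.text_emb(tokens))      # (B, T_t, D)
        fused, _ = self.cross_attn(query=s, key=t, value=t)
        return s + fused  # semantics-aware acoustic representation
```

Note that at inference time no transcript is available to feed the text tower, which is precisely the training/inference modality gap the abstract's second fine-tuning stage is said to bridge.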
Research on smart home appliances continues to advance, yet current intelligent control systems still exhibit technical shortcomings, in both research and practice, that degrade the user experience. To address the deficiencies of intelligent voice-interaction control, this work proposes a smart home appliance control system based on a Large Language Model (LLM). It focuses on training methods and parameter tuning for vertical applications of large language models in the home appliance domain, and concurrently builds an LLM-based intelligent interaction system for appliances. Speech Emotion Recognition (SER) and emotional speech synthesis (Text-to-Speech, TTS) are innovatively introduced to construct a complete, human-like intelligent human-machine interaction framework. Applying this interaction system can further enhance the interactive control of appliance devices, bring interaction closer to natural human communication, narrow the distance between machine and user, and significantly improve the user experience.
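As one way to picture the interaction loop the abstract describes, here is a minimal Python sketch chaining ASR, SER, an LLM planner, and emotional TTS. Every function body below is a trivial stand-in, and all names and signatures are hypothetical, not an API from the paper.

```python
# Illustrative control loop for an LLM-driven appliance system with speech
# emotion recognition (SER) and emotional text-to-speech (TTS).
from dataclasses import dataclass

@dataclass
class Command:
    device: str   # e.g. "air_conditioner"
    action: str   # e.g. "set_temperature"
    value: str    # e.g. "24"

def recognize_speech(audio: bytes) -> str:
    return "turn the air conditioner down to 24 degrees"  # stub ASR

def classify_emotion(audio: bytes) -> str:
    return "impatient"  # stub SER label

def llm_plan(text: str, emotion: str) -> tuple[Command, str]:
    # A real system would prompt a domain-tuned LLM to emit a structured
    # command plus a reply phrased to suit the detected emotion.
    cmd = Command("air_conditioner", "set_temperature", "24")
    return cmd, "Done - cooling to 24 right away."

def synthesize_speech(reply: str, style: str) -> bytes:
    return reply.encode()  # stub emotional TTS

def handle_utterance(audio: bytes) -> bytes:
    text = recognize_speech(audio)     # speech -> text
    emotion = classify_emotion(audio)  # speech -> emotion label
    command, reply = llm_plan(text, emotion)
    # command would be dispatched to the appliance bus here
    return synthesize_speech(reply, style=emotion)
```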
Speech synthesis refers to generating a target speaker's voice from given text via a model, and the technology is already in wide real-world use. Among the many speech synthesis models, VITS (Variational Inference for Text-to-Speech) effectively combines multiple task-specific loss functions and, compared with earlier models, produces higher-quality and more natural-sounding speech. However, existing models rely on several loss functions, and how to weigh them effectively has so far received little study. Building on the existing model's losses, this work introduces a gradient-normalization adaptive loss-balancing method, which balances the weights among the loss functions according to the magnitudes of the different losses and the training speeds of the different subtasks, in order to verify the method's applicability to speech synthesis. The accuracy and naturalness of the synthesized speech were evaluated on a public Chinese speech synthesis dataset; the results show that the model trained with this loss formulation achieves improved performance, demonstrating the method's effectiveness.
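The balancing scheme described above matches the GradNorm idea (Chen et al., 2018): weight each loss by its gradient magnitude at a shared layer and by its subtask's relative training speed. Below is a minimal PyTorch sketch of one such balancing step as it might be applied to VITS's multiple losses; the variable names, choice of shared layer, and hyperparameters are illustrative assumptions, not the paper's code.

```python
# One GradNorm-style update of per-loss weights.
import torch

def gradnorm_step(losses, weights, shared_param, init_losses,
                  alpha=1.5, lr=1e-3):
    """losses: scalar loss tensors (graph alive); weights: 1-D leaf tensor
    with requires_grad=True; shared_param: a parameter of the last shared
    layer; init_losses: loss values recorded at the start of training."""
    # Gradient norm of each weighted loss at the shared layer.
    norms = torch.stack([
        torch.autograd.grad(w * L, shared_param,
                            retain_graph=True, create_graph=True)[0].norm()
        for w, L in zip(weights, losses)])
    with torch.no_grad():
        # Inverse training rate: losses that have decayed less than
        # average get a larger target gradient norm.
        ratios = torch.tensor([float(L) / L0
                               for L, L0 in zip(losses, init_losses)])
        targets = norms.mean() * (ratios / ratios.mean()) ** alpha
    balance_loss = (norms - targets).abs().sum()
    grad_w = torch.autograd.grad(balance_loss, weights)[0]
    with torch.no_grad():
        weights -= lr * grad_w
        weights *= len(losses) / weights.sum()  # keep the weight sum fixed
    return weights
```

In a training loop, the model itself would be updated with the weighted total loss, e.g. `sum(w.detach() * L for w, L in zip(weights, losses))`, with `gradnorm_step` adjusting the weights between optimizer steps.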
Funding: This research was funded by the Shenzhen Science and Technology Program (Grant No. RCBS20221008093121051), the General Higher Education Project of Guangdong Provincial Education Department (Grant No. 2020ZDZX3085), the China Postdoctoral Science Foundation (Grant No. 2021M703371), and the Post-Doctoral Foundation Project of Shenzhen Polytechnic (Grant No. 6021330002K).