Abstract
In future wars, unmanned intelligent combat forces will account for a growing share of combat operations, and the coordinated employment of traditional "manned" combat forces with "unmanned" combat forces empowered by artificial intelligence (AI) will become an important form of warfare. The "lack of trust" caused by unexplainable AI is likely to result in missed fleeting windows of combat opportunity and to constrain the development of military AI applications. Building on a definition of explainable artificial intelligence (XAI), this paper analyzes its development goals, main methods, and the current state of research in China and abroad; examines the differing explainability needs of user groups such as equipment R&D personnel, combat commanders, mission operators, and arms control personnel; and outlines the military application prospects of XAI in four areas: enhancing collaborative effectiveness, promoting safety and controllability, accelerating equipment development, and conducting adversarial attacks against AI systems.
Authors
ZHANG Lin-chao (张林超), ZHOU Shu-de (周树德), ZHU Yu-qing (朱宇晴)
Artificial Intelligence Institute of CETC, Beijing 100041, China
Source
Journal of China Academy of Electronics and Information Technology (《中国电子科学研究院学报》)
Indexed in the Peking University Core Journal list (北大核心)
2023, No. 8, pp. 690-696 (7 pages)
Keywords
explainable artificial intelligence
military intelligence
human-machine collaboration