Journal Articles — 48 articles found
1. Effective Controller Placement in Software-Defined Internet-of-Things Leveraging Deep Q-Learning (DQL)
Authors: Jehad Ali, Mohammed J.F. Alenazi. 《Computers, Materials & Continua》, SCIE EI, 2024, Issue 12, pp. 4015-4032 (18 pages)
The controller is a main component in the Software-Defined Networking (SDN) framework, which plays a significant role in enabling programmability and orchestration for 5G and next-generation networks. In SDN, frequent communication occurs between network switches and the controller, which manages and directs traffic flows. If the controller is not strategically placed within the network, this communication can experience increased delays, negatively affecting network performance. Specifically, an improperly placed controller can lead to higher end-to-end (E2E) delay, as switches must traverse more hops or encounter greater propagation delays when communicating with the controller. This paper introduces a novel approach using Deep Q-Learning (DQL) to dynamically place controllers in Software-Defined Internet of Things (SD-IoT) environments, with the goal of minimizing E2E delay between switches and controllers. E2E delay, a crucial metric for network performance, is influenced by two key factors: hop count, which measures the number of network nodes data must traverse, and propagation delay, which accounts for the physical distance between nodes. Our approach models the controller placement problem as a Markov Decision Process (MDP). In this model, the network configuration at any given time is represented as a "state," while "actions" correspond to potential decisions regarding the placement of controllers or the reassignment of switches to controllers. Using a Deep Q-Network (DQN) to approximate the Q-function, the system learns the optimal controller placement by maximizing the cumulative reward, which is defined as the negative of the E2E delay. Essentially, the lower the delay, the higher the reward the system receives, enabling it to continuously improve its controller placement strategy. The experimental results show that our DQL-based method significantly reduces E2E delay when compared to traditional benchmark placement strategies. By dynamically learning from the network's real-time conditions, the proposed method ensures that controller placement remains efficient and responsive, reducing communication delays and enhancing overall network performance.
Keywords: software-defined networking, deep Q-learning, controller placement, quality of service
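As a rough illustration of the reward design in this abstract, the sketch below computes the negative mean E2E delay for a candidate controller placement on a toy topology. Everything here (the 6-node layout, `prop_delay`, `hop_count`, the per-hop cost) is an illustrative assumption, not taken from the paper:

```python
import numpy as np

# Toy 6-node topology: prop_delay[i][j] is the shortest-path propagation
# delay (ms) between nodes i and j; hop_count[i][j] the hop count.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(6, 2))
prop_delay = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1) / 200.0
hop_count = np.ceil(prop_delay * 10)

def placement_reward(controllers, per_hop_ms=0.1):
    """Reward = negative mean E2E switch-to-controller delay.

    Each switch talks to its nearest controller; E2E delay combines
    propagation delay and a per-hop processing cost, as in the abstract.
    """
    e2e = prop_delay[:, controllers] + per_hop_ms * hop_count[:, controllers]
    return -float(e2e.min(axis=1).mean())

print(placement_reward([0, 3]))  # e.g. controllers placed at nodes 0 and 3
```

A DQN agent maximizing this quantity is, by construction, minimizing the mean switch-to-controller delay.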
2. Intelligent Fast Cell Association Scheme Based on Deep Q-Learning in Ultra-Dense Cellular Networks (Citations: 1)
Authors: Jinhua Pan, Lusheng Wang, Hai Lin, Zhiheng Zha, Caihong Kai. 《China Communications》, SCIE CSCD, 2021, Issue 2, pp. 259-270 (12 pages)
To support dramatically increased traffic loads, communication networks become ultra-dense. Traditional cell association (CA) schemes are time-consuming, forcing researchers to seek fast schemes. This paper proposes a deep Q-learning based scheme, whose main idea is to train a deep neural network (DNN) to calculate the Q values of all the state-action pairs, and the cell holding the maximum Q value is associated. In the training stage, the intelligent agent continuously generates samples through the trial-and-error method to train the DNN until convergence. In the application stage, the state vectors of all the users are input to the trained DNN to quickly obtain a satisfactory CA result for a scenario with the same BS locations and user distribution. Simulations demonstrate that the proposed scheme provides satisfactory CA results in a computational time several orders of magnitude shorter than traditional schemes. Meanwhile, performance metrics such as capacity and fairness can be guaranteed.
Keywords: ultra-dense cellular networks (UDCN), cell association (CA), deep Q-learning, proportional fairness, Q-learning
3. A Traffic Signal Control Method Based on Dueling Double DQN
Authors: 叶宝林, 陈栋, 刘春元, 陈滨, 吴维敏. 《计算机测量与控制》, 2024, Issue 7, pp. 154-161 (8 pages)
To improve intersection throughput, alleviate traffic congestion, and mine the deep latent features contained in traffic-state information, a single-intersection traffic signal control method based on Dueling Double DQN (D3QN) is proposed. A traffic signal control model based on the deep reinforcement learning algorithm Double DQN (DDQN) is constructed, optimizing the iterative computation of the estimated and target action-value functions and overcoming the slow convergence of DQN-based traffic signal control models. A new dueling network is designed to decouple the values of traffic states and phase actions, strengthening DDQN's ability to extract deep feature information. A single-intersection simulation framework and environment are built on the microscopic simulation platform SUMO, and simulation tests are conducted. The results show that, compared with traditional traffic signal control methods and the DQN-based method, the proposed method effectively reduces average vehicle waiting time, average queue length, and average number of stops, clearly improving intersection throughput.
Keywords: traffic signal control, deep reinforcement learning, Dueling Double DQN, dueling network
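The two ingredients this abstract combines, a dueling head and double-DQN target computation, are compact to express. Below is a minimal PyTorch sketch assuming a flat state vector and discrete phase actions; the layer sizes are placeholders, not the authors' architecture:

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantages A(s, a)

    def forward(self, x):
        h = self.feature(x)
        a = self.advantage(h)
        # Q = V + (A - mean A): decouples state value from phase-action advantages
        return self.value(h) + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(online, target, r, s2, gamma, done):
    """Double DQN: select the next action with the online net,
    evaluate it with the target net, to curb Q overestimation."""
    with torch.no_grad():
        best = online(s2).argmax(dim=1, keepdim=True)
        q_next = target(s2).gather(1, best).squeeze(1)
        return r + gamma * (1 - done) * q_next

online, target = DuelingQNet(8, 4), DuelingQNet(8, 4)
y = double_dqn_target(online, target, torch.zeros(2), torch.randn(2, 8),
                      0.99, torch.zeros(2))
```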
4. Pick-and-Place Control of a Robotic Arm Based on Dueling Network and RRT (Citations: 2)
Authors: 王永, 李金泽. 《机床与液压》, 北大核心, 2021, Issue 17, pp. 59-64 (6 pages)
Addressing the shortcomings of current robotic arm grasping and placement — fixed modes, single-purpose commands, and difficulty handling complex unknown situations — a pick-and-place control method based on deep reinforcement learning and RRT is proposed. The method treats object grasping and placement as a Markov process: autonomous grasping of unknown objects is achieved through a field-of-view feature description of the object and an improved deep reinforcement learning algorithm, the dueling network; after key-point selection, the RRT algorithm places the object accurately at the target position according to task requirements. Experimental results show the method is simple and effective: the arm grasps and places objects autonomously and flexibly, further improving its autonomous manipulation of unknown objects and meeting the requirements of varied pick-and-place tasks.
Keywords: robotic arm, deep reinforcement learning, dueling network, RRT, pick-and-place control
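The placement half of this pipeline relies on RRT. A toy 2-D sketch of the basic algorithm follows, assuming a point robot and a single circular obstacle; the step size, sampling bounds, and collision check are illustrative, not the paper's setup:

```python
import math
import random

def rrt(start, goal, collides, step=0.5, iters=2000, goal_bias=0.1):
    """Grow a rapidly-exploring random tree from start toward goal in 2-D."""
    tree = {start: None}  # node -> parent
    for _ in range(iters):
        sample = goal if random.random() < goal_bias else (
            random.uniform(0, 10), random.uniform(0, 10))
        near = min(tree, key=lambda n: math.dist(n, sample))
        theta = math.atan2(sample[1] - near[1], sample[0] - near[0])
        new = (near[0] + step * math.cos(theta), near[1] + step * math.sin(theta))
        if collides(new):
            continue
        tree[new] = near
        if math.dist(new, goal) < step:  # reached the goal region: backtrack
            path = [new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None

# Toy usage: circular obstacle of radius 1.5 centered at (5, 5)
path = rrt((1.0, 1.0), (9.0, 9.0),
           collides=lambda p: math.dist(p, (5, 5)) < 1.5)
```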
5. Deep Q-Learning Based Optimal Query Routing Approach for Unstructured P2P Network
Authors: Mohammad Shoab, Abdullah Shawan Alotaibi. 《Computers, Materials & Continua》, SCIE EI, 2022, Issue 3, pp. 5765-5781 (17 pages)
Deep Reinforcement Learning (DRL) is a class of Machine Learning (ML) that combines Deep Learning with Reinforcement Learning and provides a framework by which a system can learn from its previous actions in an environment to select its future actions efficiently. DRL has been used in many application fields, including games, robots, and networks, for creating autonomous systems that improve themselves with experience. It is well acknowledged that DRL is well suited to solving optimization problems in distributed systems in general, and network routing in particular. Therefore, a novel query routing approach called Deep Reinforcement Learning based Route Selection (DRLRS) is proposed for unstructured P2P networks based on a deep Q-learning algorithm. The main objective of this approach is to achieve better retrieval effectiveness with reduced search cost: fewer connected peers, fewer exchanged messages, and less time. Simulation results show significantly improved resource search compared with k-Random Walker and Directed BFS: retrieval effectiveness, search cost in terms of connected peers, and average overhead are 1.28, 106, and 149, respectively.
Keywords: reinforcement learning, deep Q-learning, unstructured P2P network, query routing
6. A Robot Obstacle-Avoidance Method with an Improved Dueling Network (Citations: 5)
Authors: 周翼, 陈渤. 《西安电子科技大学学报》, EI CAS CSCD 北大核心, 2019, Issue 1, pp. 46-50, 63 (6 pages)
Traditional reinforcement learning methods in motion planning, and especially in robot obstacle avoidance, tend to overestimate values and adapt poorly to complex environments. To address this, a new deep reinforcement learning model for improving robot obstacle-avoidance performance is proposed. The model combines a dueling network architecture with the classical Q-learning algorithm and uses two independently trained dueling networks to process environment data and predict action values; the output layer emits state values and action advantages separately and combines them into the final action values. The model can handle higher-dimensional data to adapt to complex, changing environments, and outputs advantageous actions for the robot to choose, yielding higher cumulative reward. Experiments show that the new model effectively improves the robot's obstacle-avoidance performance.
Keywords: robot obstacle avoidance, deep reinforcement learning, dueling network, independent training
7. Deep Reinforcement Learning-Based URLLC-Aware Task Offloading in Collaborative Vehicular Networks (Citations: 5)
Authors: Chao Pan, Zhao Wang, Zhenyu Zhou, Xincheng Ren. 《China Communications》, SCIE CSCD, 2021, Issue 7, pp. 134-146 (13 pages)
Collaborative vehicular networks are a key enabler for meeting the stringent ultra-reliable and low-latency communications (URLLC) requirements. A user vehicle (UV) dynamically optimizes task offloading by exploiting its collaborations with edge servers and vehicular fog servers (VFSs). However, the optimization of task offloading in highly dynamic collaborative vehicular networks faces several challenges such as URLLC guarantees, incomplete information, and the curse of dimensionality. In this paper, we first characterize URLLC in terms of queuing delay bound violation and high-order statistics of excess backlogs. Then, a Deep Reinforcement lEarning-based URLLC-Aware task offloading algorithM named DREAM is proposed to maximize the throughput of the UVs while satisfying the URLLC constraints in a best-effort way. Compared with existing task offloading algorithms, DREAM achieves superior performance in throughput, queuing delay, and URLLC.
Keywords: collaborative vehicular networks, task offloading, URLLC awareness, deep Q-learning
8. Collaborative Pushing and Grasping of Tightly Stacked Objects via Deep Reinforcement Learning (Citations: 4)
Authors: Yuxiang Yang, Zhihao Ni, Mingyu Gao, Jing Zhang, Dacheng Tao. 《IEEE/CAA Journal of Automatica Sinica》, SCIE EI CSCD, 2022, Issue 1, pp. 135-145 (11 pages)
Directly grasping tightly stacked objects may cause collisions and result in failures, degrading the functionality of robotic arms. Inspired by the observation that first pushing objects to a state of mutual separation and then grasping them individually can effectively increase the success rate, we devise a novel deep Q-learning framework to achieve collaborative pushing and grasping. Specifically, an efficient non-maximum suppression policy (PolicyNMS) is proposed to dynamically evaluate pushing and grasping actions by enforcing a suppression constraint on unreasonable actions. Moreover, a novel data-driven pushing reward network called PR-Net is designed to effectively assess the degree of separation or aggregation between objects. To benchmark the proposed method, we establish a common household items dataset (CHID) in both simulation and real scenarios. Although trained using simulation data only, experimental results validate that our method generalizes well to real scenarios, achieving a 97% grasp success rate at fast speed for object separation in the real-world environment.
Keywords: convolutional neural network, deep Q-learning (DQN), reward function, robotic grasping, robotic pushing
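The paper's PolicyNMS is not reproduced here, but the general idea of suppressing near-duplicate, low-value actions on a pixel-wise action-value map can be sketched as plain greedy non-maximum suppression. The threshold, window radius, and map size below are assumptions:

```python
import numpy as np

def action_nms(q_map, top_k=5, radius=3, q_min=0.2):
    """Greedy non-maximum suppression over a pixel-wise action-value map.

    Repeatedly takes the best-scoring action location, then suppresses a
    square neighborhood around it so near-duplicate actions are not chosen;
    candidates below q_min are treated as unreasonable and dropped.
    """
    q = q_map.astype(float).copy()
    picks = []
    for _ in range(top_k):
        y, x = np.unravel_index(np.argmax(q), q.shape)
        if q[y, x] < q_min:
            break
        picks.append((y, x))
        q[max(0, y - radius):y + radius + 1,
          max(0, x - radius):x + radius + 1] = -np.inf
    return picks

# Toy usage on a random 16x16 grasp-quality map
print(action_nms(np.random.rand(16, 16)))
```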
9. A deep Q-learning network based active object detection model with a novel training algorithm for service robots (Citations: 3)
Authors: Shaopeng LIU, Guohui TIAN, Yongcheng CUI, Xuyang SHAO. 《Frontiers of Information Technology & Electronic Engineering》, SCIE EI CSCD, 2022, Issue 11, pp. 1673-1683 (11 pages)
This paper focuses on the problem of active object detection (AOD). AOD is important for service robots completing tasks in home environments, and leads robots to approach the target object by taking appropriate moving actions. Most current AOD methods are based on reinforcement learning, with low training efficiency and testing accuracy. Therefore, an AOD model based on a deep Q-learning network (DQN) with a novel training algorithm is proposed in this paper. The DQN model is designed to fit the Q-values of various actions, and includes a state space, feature extraction, and a multilayer perceptron. In contrast to existing research, a novel memory-based training algorithm is designed for the proposed DQN model to improve training efficiency and testing accuracy. In addition, a method for generating the end state is presented to judge when to stop the AOD task during training. Sufficient comparison experiments and ablation studies are performed on an AOD dataset, showing that the presented method performs better than comparable methods and that the proposed training algorithm is more effective than the raw training algorithm.
Keywords: active object detection, deep Q-learning network, training method, service robots
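The memory-based training algorithm itself is not public in this listing; the sketch below shows only the generic experience-replay structure that such training builds on. The capacity and transition layout are assumptions:

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-size experience replay: store transitions, sample mini-batches."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # old transitions fall off the end

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        # Transpose to (states, actions, rewards, next_states, dones)
        return tuple(map(list, zip(*batch)))

    def __len__(self):
        return len(self.buffer)
```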
10. Safe Navigation for UAV-Enabled Data Dissemination by Deep Reinforcement Learning in Unknown Environments (Citations: 1)
Authors: Fei Huang, Guangxia Li, Shiwei Tian, Jin Chen, Guangteng Fan, Jinghui Chang. 《China Communications》, SCIE CSCD, 2022, Issue 1, pp. 202-217 (16 pages)
Unmanned aerial vehicles (UAVs) are increasingly considered in safe autonomous navigation systems for exploring unknown environments, where UAVs are equipped with multiple sensors to perceive their surroundings. However, how to achieve UAV-enabled data dissemination while also ensuring safe navigation synchronously is a new challenge. In this paper, our goal is to minimize the weighted sum of the UAV's task completion time while satisfying the data transmission task requirement and the UAV's feasible flight region constraints. This problem cannot be solved via standard optimization methods, mainly on account of lacking a tractable and accurate system model in practice. To overcome this tough issue, we propose a new solution approach utilizing the dueling double deep Q-network (dueling DDQN) with multi-step learning. Specifically, to improve the algorithm, extra labels are added to the primitive states. Simulation results indicate the validity and performance superiority of the proposed algorithm under different data thresholds compared with two other benchmarks.
Keywords: unmanned aerial vehicles (UAVs), safe autonomous navigation, unknown environments, data dissemination, dueling double deep Q-network (dueling DDQN)
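Multi-step learning replaces the one-step TD target with an n-step return. A minimal sketch of that computation, independent of the paper's exact implementation:

```python
def n_step_return(rewards, gamma, bootstrap_value, done):
    """Multi-step target: R = r_0 + g*r_1 + ... + g^(n-1)*r_(n-1) + g^n * V(s_n).

    rewards: the n rewards observed along the trajectory segment
    bootstrap_value: the target network's value estimate at the nth state
    done: True if the episode ended within the segment (no bootstrap)
    """
    ret = 0.0
    for r in reversed(rewards):  # fold rewards back-to-front
        ret = r + gamma * ret
    if not done:
        ret += (gamma ** len(rewards)) * bootstrap_value
    return ret

# Toy usage: 3-step return with a bootstrapped Q estimate of 2.0
print(n_step_return([1.0, 0.5, 0.25], gamma=0.99, bootstrap_value=2.0, done=False))
```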
11. A Deep Reinforcement Learning Stock Trading Strategy Incorporating Behavior Cloning (Citations: 3)
Authors: 杨兴雨, 陈亮威, 郑萧腾, 张永. 《系统管理学报》, CSSCI CSCD 北大核心, 2024, Issue 1, pp. 150-161 (12 pages)
To raise returns and reduce risk in stock investment, the behavior-cloning idea from imitation learning is introduced into a deep reinforcement learning framework to design a stock trading strategy. In the design, the dueling DQN deep reinforcement learning algorithm is combined with behavior cloning, so that the agent imitates the decisions of a pre-constructed investment expert while exploring autonomously. Numerical experiments on stocks from different industries show that the proposed strategy outperforms the comparison strategies on return and risk metrics such as annualized return, Sharpe ratio, and Calmar ratio. The results indicate that combining imitation learning with deep reinforcement learning gives the agent both exploration and imitation abilities, improving the model's generalization and the strategy's applicability.
Keywords: stock trading strategy, deep reinforcement learning, imitation learning, behavior cloning, dueling deep Q-learning network
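One common way to combine a TD objective with a behavior-cloning term — in the spirit of, though not necessarily identical to, the strategy described above — is a weighted sum of losses. The cross-entropy form and weighting below are assumptions:

```python
import torch
import torch.nn.functional as F

def combined_loss(q_values, actions, td_targets, expert_mask, lam=0.5):
    """TD loss on all transitions plus a behavior-cloning term on expert ones.

    q_values:    (B, n_actions) predicted Q values
    actions:     (B,) actions actually taken (the expert's on expert samples)
    td_targets:  (B,) bootstrapped TD targets
    expert_mask: (B,) 1.0 where the transition came from the expert, else 0.0
    """
    q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    td_loss = F.smooth_l1_loss(q_taken, td_targets)
    # Cross-entropy pushes the greedy action toward the expert's choice
    bc_loss = (F.cross_entropy(q_values, actions, reduction="none")
               * expert_mask).mean()
    return td_loss + lam * bc_loss
```

Raising `lam` trades exploration-driven learning for closer imitation of the expert.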
12. A Multi-Microgrid Energy Management Strategy Based on an Improved Federated Dueling Deep Q-Network (Citations: 2)
Authors: 黎海涛, 刘伊然, 杨艳红, 肖浩, 谢冬雪, 裴玮. 《电力系统自动化》, EI CSCD 北大核心, 2024, Issue 8, pp. 174-184 (11 pages)
Existing research on federated deep reinforcement learning for microgrid (MG) energy management does not consider multi-type energy conversion or inter-MG electricity trading, and frequent exchange of model parameters causes high communication latency. Taking an MG with multiple energy types (wind, solar, electricity, and gas) as the research object, this paper builds an energy management model supporting inter-MG electricity trading and intra-MG energy conversion, proposes a federated dueling deep Q-network learning algorithm based on the sine-cosine algorithm, and on that basis designs a multi-MG energy management and optimization strategy accounting for energy trading and conversion. Simulation results show that, while protecting data privacy, the proposed strategy earns higher reward, maximizes MG economic benefits, and reduces communication latency.
Keywords: microgrid (MG), federated learning, dueling deep Q-network, sine-cosine algorithm, energy management
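The sine-cosine weighting of the paper is not reproduced here; the sketch below shows only the plain federated-averaging step that any federated dueling-DQN variant builds on. Raw operating data never leaves a microgrid, only model parameters do:

```python
import torch

def federated_average(local_state_dicts, weights=None):
    """FedAvg over per-microgrid Q-network parameters.

    local_state_dicts: list of state_dicts from locally trained clients
    weights: optional per-client weights summing to 1 (uniform by default)
    """
    n = len(local_state_dicts)
    weights = weights or [1.0 / n] * n
    avg = {}
    for key in local_state_dicts[0]:
        # Note: integer buffers are cast to float in this simplified sketch
        avg[key] = sum(w * sd[key].float()
                       for w, sd in zip(weights, local_state_dicts))
    return avg
```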
13. Intelligent Emergency Generator-Tripping Decisions Based on Knowledge Fusion and Deep Reinforcement Learning (Citations: 1)
Authors: 李舟平, 曾令康, 姚伟, 胡泽, 帅航, 汤涌, 文劲宇. 《中国电机工程学报》, EI CSCD 北大核心, 2024, Issue 5, pp. 1675-1687, I0001 (14 pages)
Emergency control is an important means of maintaining transient security and stability of a power system after severe faults. The commonly used "human-in-the-loop" offline approach to formulating emergency control decisions is inefficient and relies heavily on expert experience. This paper proposes an intelligent emergency generator-tripping decision method based on knowledge fusion and deep reinforcement learning (DRL). First, a DRL-based framework for emergency generator-tripping decisions is constructed. Then, because the high-dimensional decision space arising when the agent handles multiple generators makes training difficult, two remedies are proposed: decision-space compression and a branching dueling Q (BDQ) network. Next, to further improve the agent's exploration efficiency and decision quality, knowledge and experience about emergency generator tripping are fused into agent training. Finally, simulations on the 10-machine 39-bus system show that the proposed method quickly produces effective tripping decisions for multiple generators; the BDQ network performs better than decision-space compression, and the knowledge-fusion strategy steers the agent away from ineffective exploration and thereby improves decision performance.
Keywords: emergency generator-tripping decision, deep reinforcement learning, decision space, branching dueling Q network, knowledge fusion
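A branching dueling Q (BDQ) head avoids the exponential blow-up of joint discrete actions by giving each generator its own advantage branch over a shared trunk, so the output grows linearly with the number of generators. A minimal PyTorch sketch; all sizes are placeholders:

```python
import torch
import torch.nn as nn

class BranchingDuelingQ(nn.Module):
    """Shared state value plus one advantage branch per action dimension."""

    def __init__(self, state_dim, n_branches, actions_per_branch, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.branches = nn.ModuleList(
            nn.Linear(hidden, actions_per_branch) for _ in range(n_branches))

    def forward(self, x):
        h = self.trunk(x)
        v = self.value(h)
        # Q_d = V + (A_d - mean A_d) for each branch d
        return [v + a - a.mean(dim=1, keepdim=True)
                for a in (branch(h) for branch in self.branches)]

# Toy usage: 10 generators, 3 discrete tripping levels each
net = BranchingDuelingQ(state_dim=64, n_branches=10, actions_per_branch=3)
qs = net(torch.randn(2, 64))  # list of 10 tensors, each of shape (2, 3)
```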
14. An Economic Dispatch Method for AC/DC Distribution Networks Based on Dual-Agent Deep Reinforcement Learning (Citations: 2)
Authors: 赵倩宇, 韩照洋, 王守相, 尹孜阳, 董逸超, 钱广超. 《天津大学学报(自然科学与工程技术版)》, EI CAS CSCD 北大核心, 2024, Issue 6, pp. 624-632 (9 pages)
With large numbers of DC sources and loads being connected, hybrid AC/DC distribution network technology has become the development trend of future distribution networks. However, source-load uncertainty and the diversity of dispatchable equipment pose great challenges to dispatch. This paper proposes an economic dispatch method for AC/DC distribution networks based on dual-agent deep reinforcement learning with a branching dueling Q-network (BDQ) and soft actor-critic (SAC). The method first maps the economic dispatch problem onto the actions, rewards, and states of the two agents, establishing a Markov decision process for economic dispatch, with the BDQ agent controlling discrete-action devices and the SAC agent controlling continuous-action devices. Then, in a centralized-training decentralized-execution manner, both agents interact with the environment and are trained offline. Finally, the agents' parameters are frozen for online dispatch. The advantage of the approach is that the two agents simultaneously control discrete-action devices (capacitor banks, on-load tap changers) and continuous-action devices (converters, energy storage), while centralized training lets them adapt to source-load uncertainty. Tests on a modified IEEE 33-bus AC/DC distribution network case verify the effectiveness of the proposed method.
Keywords: AC/DC distribution network, deep reinforcement learning, economic dispatch, branching dueling Q-network, soft actor-critic
15. The Return-Slot Optimization Problem in Automated Storage/Retrieval Systems and a Solution Algorithm (Citations: 2)
Authors: 何在祥, 李丽, 张云峰, 郗琳. 《重庆理工大学学报(自然科学)》, CAS 北大核心, 2024, Issue 3, pp. 183-194 (12 pages)
Addressing the problem of returning leftover goods to storage during outbound operations of an automated storage/retrieval system, a return-slot optimization model is established with minimization of total stacker-crane energy consumption as the objective and return-slot assignment as the decision variable, and a deep reinforcement learning framework for return-slot optimization is proposed. Within this framework, a multi-dimensional state is built from the warehouse's real-time storage information and outbound operation information, actions are defined as return-slot selections, and a Markov decision process model of the problem is established. The multi-dimensional state features are fed into a two-layer dueling network, and the dueling double deep Q-network (D3QN) algorithm is used to train the network and predict the target value of return actions, determining the agent's optimal policy. Experimental results show that the D3QN algorithm is quite stable when solving large-scale return-slot optimization problems.
Keywords: automated storage/retrieval system, return-slot optimization, deep reinforcement learning, D3QN
16. An Active-Learning Semantic Segmentation Model Based on an Improved Double Deep Q-Network
Authors: 李林, 刘政, 南海, 张泽崴, 魏晔. 《计算机应用研究》, CSCD 北大核心, 2024, Issue 11, pp. 3337-3342 (6 pages)
To address the difficulty of obtaining pixel labels and the class imbalance of segmentation datasets in image semantic segmentation, an active-learning semantic segmentation model based on an improved double deep Q-network, CG_D3QN, is proposed. It introduces a hybrid architecture combining a dueling network structure with a gated recurrent unit; by mitigating Q-value overestimation and effectively exploiting historical state information, it improves the accuracy and computational efficiency of policy evaluation. On the CamVid and Cityscapes datasets, the model reduces the required annotation volume by 65.0% compared with the baseline methods, while for classes with few labeled samples the mean intersection-over-union improves by roughly 1-3%. The experimental results show that the model markedly reduces annotation cost, effectively alleviates class imbalance, and is applicable to different segmentation networks.
Keywords: deep reinforcement learning, active learning, image semantic segmentation, dueling network, gated recurrent unit
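The hybrid structure described above, a dueling head fed by a gated recurrent unit, can be sketched as follows in PyTorch. Dimensions are placeholders, and the full CG_D3QN model is not reproduced:

```python
import torch
import torch.nn as nn

class RecurrentDuelingQ(nn.Module):
    """Dueling head on top of a GRU, so Q estimates can exploit the history
    of states rather than the current observation alone."""

    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden, batch_first=True)
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, h0=None):
        out, hn = self.gru(obs_seq, h0)  # out: (B, T, hidden)
        last = out[:, -1]                # summary of the state history
        a = self.advantage(last)
        q = self.value(last) + a - a.mean(dim=1, keepdim=True)
        return q, hn

# Toy usage: batch of 4 sequences, 8 steps each, 32-dim observations, 5 actions
q, h = RecurrentDuelingQ(32, 5)(torch.randn(4, 8, 32))
```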
17. Research on the Green Vehicle Routing Problem with Time Windows Based on Deep Reinforcement Learning
Authors: 曹煜, 叶春明. 《物流科技》, 2024, Issue 19, pp. 72-79 (8 pages)
Scheduling vehicle routes within customer-specified time windows has long been a pressing problem in logistics. This paper proposes a dueling double deep Q-network (D3QN) with a soft-update strategy, designs the action space, state space, and reward function, and models and solves the green vehicle routing problem with time windows. Eighteen instances of small, medium, and large scale were selected, and the experimental results of three algorithms were compared on four dimensions: average reward, average number of dispatched vehicles, average mileage, and computation time. The results show that on most instances, compared with Double DQN and Dueling DQN, D3QN obtains a higher reward, dispatches fewer vehicles, and drives shorter mileage within an acceptable increase in computation time, achieving the goal of green scheduling.
Keywords: deep reinforcement learning, route optimization, dueling double deep Q-network, D3QN algorithm, vehicle routing problem
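The soft-update strategy mentioned above replaces DQN's periodic hard copy of the target network with Polyak averaging applied every step. A one-function sketch; the value of tau is a typical choice, not necessarily the paper's:

```python
def soft_update(target_net, online_net, tau=0.005):
    """Polyak averaging: theta_target <- tau*theta_online + (1-tau)*theta_target.

    Applied every training step, this smooths the moving target that the
    TD loss chases, instead of copying all weights at fixed intervals.
    """
    for tp, op in zip(target_net.parameters(), online_net.parameters()):
        tp.data.mul_(1.0 - tau).add_(tau * op.data)
```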
18. Adaptive PID Constant-Force Tracking for Robots Based on a Dueling Deep Q-Network
Authors: 杜亮, 梅雪川. 《机床与液压》, 北大核心, 2024, Issue 15, pp. 50-54 (5 pages)
To keep the contact force stable when a robot is in contact with its environment, an adaptive PID constant-force tracking algorithm based on a dueling deep Q-network is designed. The robot's contact process with the environment is analyzed, and a PID-based force controller is built. An adaptive PID algorithm based on the dueling deep Q-network is then proposed to adapt to environmental changes; the network learns autonomously to find the optimal control parameters. Finally, constant-force tracking experiments are carried out on the CoppeliaSim and MATLAB platforms. Simulation results show that the proposed algorithm achieves good force tracking, verifying its feasibility; compared with the plain deep Q-network algorithm, the mean absolute force error is reduced by 51.6% and convergence is faster, so the robot tracks the environment better.
Keywords: robot, constant-force control, adaptive PID control, dueling deep Q-network
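A rough sketch of the control structure described above: a PID force loop whose gains are nudged by a discrete action, as a dueling DQN agent would select. The six-action set and the step sizes are assumptions, not the paper's design:

```python
class AdaptivePID:
    """PID force controller whose gains are adjusted by a discrete RL action."""

    def __init__(self, kp=1.0, ki=0.1, kd=0.05, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def apply_action(self, action, step=0.05):
        # Hypothetical action set: increase/decrease each of Kp, Ki, Kd
        deltas = [(step, 0, 0), (-step, 0, 0), (0, step, 0),
                  (0, -step, 0), (0, 0, step), (0, 0, -step)]
        dkp, dki, dkd = deltas[action]
        self.kp = max(0.0, self.kp + dkp)
        self.ki = max(0.0, self.ki + dki)
        self.kd = max(0.0, self.kd + dkd)

    def control(self, force_ref, force_meas):
        err = force_ref - force_meas
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

The agent's reward would typically penalize the absolute force error, so lowering the tracking error raises the return.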
19. A transferable energy management strategy for hybrid electric vehicles via dueling deep deterministic policy gradient (Citations: 1)
Authors: Jingyi Xu, Zirui Li, Guodong Du, Qi Liu, Li Gao, Yanan Zhao. 《Green Energy and Intelligent Transportation》, 2022, Issue 2, pp. 75-87 (13 pages)
Due to the high mileage and heavy load capabilities of hybrid electric vehicles (HEVs), energy management becomes crucial to improving energy efficiency. To avoid over-dependence on hand-crafted models, deep reinforcement learning (DRL) is utilized to learn more precise energy management strategies (EMSs), but in most cases it cannot generalize well to different driving situations. When driving cycles are changed, the neural network needs to be retrained, which is a time-consuming and laborious task. A more efficient, transferable way is to combine DRL algorithms with transfer learning, which can utilize knowledge of known driving cycles in new driving situations, leading to better initial performance and faster training to convergence. In this paper, we propose a novel transferable EMS for HEVs by incorporating the DRL method and a dueling network architecture. Simulation results indicate that the proposed method generalizes well to new driving cycles, with comparable initial performance and faster convergence in the training process.
Keywords: energy management strategies, deep reinforcement learning, dueling network architecture, transfer learning
20. Dynamic Spectrum Access Based on a Dueling Double Deep Q-Network (Citations: 3)
Authors: 梁燕, 惠莹. 《电讯技术》, 北大核心, 2022, Issue 12, pp. 1715-1721 (7 pages)
For the multi-channel dynamic spectrum access problem, a complex channel scenario with sensing errors and access collisions is modeled, and a learning framework combining the double deep Q-network and the dueling Q-network — a dueling double deep Q-network — is proposed. The double deep Q-network uses separate value functions for action selection and evaluation, resolving the overestimation of the value function, while the dueling Q-network addresses the optimization of the neural network structure. The scheme ensures that each secondary user makes spectrum access decisions based on its sensing and reward results. Simulation results show that, in multi-channel settings with both sensing errors and secondary-user collisions, the dueling double deep Q-network has a better loss-prediction model than comparable methods, and its reward is more stable and improved by 4%.
Keywords: cognitive radio, spectrum sensing, dynamic spectrum access, deep reinforcement learning, dueling double deep Q-network