A Deep Reinforcement Learning Method for Economic Power Dispatch of Microgrid in OPAL-RT Environment

Faa Jeng Lin, Chao Fu Chang, Yu Cheng Huang, Tzu Ming Su

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

This paper focuses on the economic power dispatch (EPD) operation of a microgrid in an OPAL-RT environment. First, a long short-term memory (LSTM) network is proposed to forecast the load of the microgrid in order to determine the output of the power generator and the charging/discharging control strategy of the battery energy storage system (BESS). Then, a deep reinforcement learning method, the deep deterministic policy gradient (DDPG), is utilized to develop the power dispatch of the microgrid so as to minimize the total energy expense while considering power constraints, load uncertainties and electricity prices. Moreover, a microgrid built on Cimei Island of the Penghu Archipelago, Taiwan, is investigated to examine compliance with the equality and inequality constraints and the performance of the deep reinforcement learning method. Furthermore, the proposed method is compared with an experience-based energy management system (EMS), Newton particle swarm optimization (Newton-PSO) and a deep Q-network (DQN) to evaluate the obtained solutions. In this study, the average deviation of the LSTM forecast is less than 5%, and the proposed method achieves a 3.8% to 7.4% lower daily electricity cost than the other methods. Finally, a detailed emulation in the OPAL-RT environment is carried out to validate the effectiveness of the proposed method.
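As an illustration of the dispatch formulation summarized above, the following Python sketch frames one day of microgrid operation as a reinforcement-learning environment whose reward is the negative energy expense; a DDPG agent would maximize this reward subject to the power-balance and BESS state-of-charge constraints. The class name, plant limits, cost coefficients and simplified billing model are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the paper's implementation) of the dispatch problem
# as an RL environment: action = (generator power, BESS power), reward =
# negative energy expense. All numeric values are assumed for illustration.
import numpy as np

class MicrogridDispatchEnv:
    """One-day economic dispatch of a small microgrid."""

    def __init__(self, load_forecast, price, dt=1.0):
        self.load = np.asarray(load_forecast)   # forecast load per step (kW)
        self.price = np.asarray(price)          # grid electricity price ($/kWh)
        self.dt = dt                            # time step (h)
        self.p_gen_max = 300.0                  # generator limit (kW), assumed
        self.p_bess_max = 100.0                 # BESS power limit (kW), assumed
        self.capacity = 500.0                   # BESS capacity (kWh), assumed
        self.gen_cost = 0.25                    # generator fuel cost ($/kWh), assumed
        self.reset()

    def reset(self):
        self.t, self.soc = 0, 0.5
        return self._obs()

    def _obs(self):
        # Observation: current load, price and BESS state of charge
        return np.array([self.load[self.t], self.price[self.t], self.soc])

    def step(self, action):
        # Map normalized action in [-1, 1] to physical set-points
        p_gen = np.clip(action[0], 0.0, 1.0) * self.p_gen_max
        p_bess = np.clip(action[1], -1.0, 1.0) * self.p_bess_max  # + = discharge
        # Inequality constraint: keep SOC within [0.1, 0.9]
        soc_next = np.clip(self.soc - p_bess * self.dt / self.capacity, 0.1, 0.9)
        p_bess = (self.soc - soc_next) * self.capacity / self.dt
        # Equality constraint (power balance): grid import covers the residual
        p_grid = self.load[self.t] - p_gen - p_bess
        # Simplified expense: fuel cost plus purchased energy (no export revenue)
        cost = (self.gen_cost * p_gen + self.price[self.t] * max(p_grid, 0.0)) * self.dt
        reward = -cost                          # maximizing reward minimizes expense
        self.soc, self.t = soc_next, self.t + 1
        done = self.t >= len(self.load)
        return (self._obs() if not done else None), reward, done

An off-the-shelf DDPG implementation (for example, from Stable-Baselines3) could then be trained against such an environment once it is wrapped in the Gymnasium API.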

Original language: English
Article number: 96
Journal: Technologies
Volume: 11
Issue number: 4
DOIs
Publication status: Published - Aug 2023
