Electric Power ›› 2025, Vol. 58 ›› Issue (4): 78-89.DOI: 10.11930/j.issn.1004-9649.202410051

• Key Technologies for Transient Operation Control and Test Verification of Wind Turbines •

Power Optimization of Wind Farms Based on Improved Jensen Model and Deep Reinforcement Learning

WANG Guanchao1, HUO Yuchong1, LI Qun2, LI Qiang2

  1. Department of Electrical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
    2. Electric Power Research Institute of State Grid Jiangsu Electric Power Co., Ltd., Nanjing 211103, China
  • Received: 2024-10-15; Accepted: 2025-01-13; Online: 2025-04-23; Published: 2025-04-28
  • Supported by:
    This work is supported by the Science and Technology Project of SGCC (Research Team Project) (Active Frequency Support for Mid and Long Distance Offshore Wind Farm with Multiple Grid-Forming Converter Connected via VSC-HVDC, No. 5108-202218280A-2-241-XG).

Abstract:

The power capture capability of wind farms is constrained by various factors. To maximize wind farm power output and address the impacts of wake effects and turbulent wind speeds, this paper proposes a wind farm control scheme based on deep reinforcement learning. The scheme combines model-based and model-free control methods and integrates them into a deep deterministic policy gradient (DDPG) network with an Actor-Critic architecture. To improve control accuracy, a Jensen wake model that accounts for time delay is adopted, which improves the precision of wake-effect modeling and effectively captures the long-term impact of wakes on the wind farm's power output. Simulation results show that, compared with traditional model-based or model-free methods, the proposed scheme increases the maximum power output of the wind farm while maintaining control accuracy, and markedly reduces training time and computational resource consumption, thereby improving the overall performance of the control strategy.
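To make the idea of a time-delayed Jensen model concrete, the following is a minimal Python sketch of a single-wake Jensen velocity-deficit calculation combined with a simple advection-based delay. It assumes the standard Jensen deficit formula and approximates the delay as the wake travel time x/u0 ("frozen turbulence"); the function names, the wake decay constant, and the delay treatment are illustrative assumptions and may differ from the exact formulation used in the paper.

```python
import numpy as np

def jensen_deficit(u0, ct, rotor_d, x, k=0.05):
    """Standard single-wake Jensen velocity deficit at downstream distance x.

    u0      free-stream wind speed [m/s]
    ct      thrust coefficient of the upstream turbine
    rotor_d rotor diameter [m]
    x       downstream distance [m]
    k       wake decay constant (site-dependent, roughly 0.04-0.075)
    """
    if x <= 0:
        return 0.0
    expansion = rotor_d / (rotor_d + 2.0 * k * x)
    return u0 * (1.0 - np.sqrt(max(0.0, 1.0 - ct))) * expansion ** 2


def delayed_wake_speed(u0_history, ct, rotor_d, x, dt, k=0.05):
    """Wake-affected speed at a downstream turbine, including advection delay.

    The deficit shed at the upstream rotor is assumed to reach the downstream
    rotor after tau = x / u0, so it is evaluated with the free-stream speed
    recorded tau seconds earlier.

    u0_history : past free-stream speeds, newest last, sampled every dt seconds
    """
    u0_now = u0_history[-1]
    tau = x / max(u0_now, 1e-6)                 # wake advection time [s]
    lag = min(int(round(tau / dt)), len(u0_history) - 1)
    u0_past = u0_history[-1 - lag]              # free stream when the wake was shed
    return u0_now - jensen_deficit(u0_past, ct, rotor_d, x, k)


# Example: a step change in free-stream speed reaching a turbine 500 m downstream
history = [8.0] * 60 + [13.0] * 10              # 1 s samples
print(delayed_wake_speed(history, ct=0.8, rotor_d=126.0, x=500.0, dt=1.0))
```

In this sketch the delayed free-stream sample is what lets the downstream power estimate reflect the lag between an upstream control action (or wind change) and its effect on waked turbines, which is the long-term coupling the abstract refers to.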

Key words: wind farm control, maximizing wind energy capture, deep reinforcement learning, model-free control, model-based control, neural network