Reinforcement learning
Adaptation (eye)
Flexibility (engineering)
Computer science
Climate change
Adaptive management
Flood
Adaptive capacity
Risk analysis (engineering)
Environmental resource management
Environmental science
Artificial intelligence
Business
Economics
Geography
Ecology
Physics
Management
Optics
Biology
Archaeology
Authors
Kairui Feng, Ning Lin, Robert E. Kopp, Siyuan Xian, Michael Oppenheimer
Identifier
DOI:10.1073/pnas.2402826122
Abstract
Conventional computational models of climate adaptation frameworks inadequately consider decision-makers’ capacity to learn, update, and improve decisions. Here, we investigate the potential of reinforcement learning (RL), a machine learning technique that efficaciously acquires knowledge from the environment and systematically optimizes dynamic decisions, in modeling and informing adaptive climate decision-making. We consider coastal flood risk mitigations for Manhattan, New York City, USA (NYC), illustrating the benefit of continuously incorporating observations of sea-level rise into systematic designs of adaptive strategies. We find that when designing adaptive seawalls to protect NYC, the RL-derived strategy significantly reduces the expected net cost by 6 to 36% under the moderate emissions scenario SSP2-4.5 (9 to 77% under the high emissions scenario SSP5-8.5), compared to conventional methods. When considering multiple adaptive policies, including accommodation and retreat as well as protection, the RL approach leads to a further 5% (15%) cost reduction, showing RL’s flexibility in coordinatively addressing complex policy design problems. RL also outperforms conventional methods in controlling tail risk (i.e., low probability, high impact outcomes) and in avoiding losses induced by misinformation about the climate state (e.g., deep uncertainty), demonstrating the importance of systematic learning and updating in addressing extremes and uncertainties related to climate adaptation.
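The core idea of the abstract — an agent that observes sea-level rise over time and learns when to invest in protection — can be illustrated with a toy tabular Q-learning sketch. This is not the paper's model: the state space, cost numbers, and stochastic sea-level process below are made-up illustrative assumptions, chosen only to show how RL turns sequential adaptation choices into a learned policy.

```python
import random

# Toy illustration (assumed, not the paper's model): Q-learning for a
# sequential seawall-height decision under uncertain sea-level rise (SLR).
# State: (epoch, wall level, observed SLR); actions: 0 = wait, 1 = raise wall.
random.seed(0)

HORIZON = 8       # decision epochs (e.g., decades)
MAX_WALL = 4      # discrete wall-height levels
RAISE_COST = 1.0  # illustrative cost of raising the wall one level
FLOOD_COST = 5.0  # illustrative flood loss when SLR exceeds wall height

def step(wall, slr, action):
    """Apply the action, sample a stochastic SLR increment, return new state and cost."""
    cost = 0.0
    if action == 1 and wall < MAX_WALL:
        wall += 1
        cost += RAISE_COST
    slr += random.choice([0, 0, 1])  # SLR rises by one level with prob 1/3
    if slr > wall:
        cost += FLOOD_COST           # flooding when SLR tops the wall
    return wall, slr, cost

Q = {}                               # tabular action-value function (cost-to-go)
alpha, eps = 0.1, 0.2                # learning rate, exploration rate
def q(s, a):
    return Q.get((s, a), 0.0)

for episode in range(20000):
    wall, slr = 0, 0
    for t in range(HORIZON):
        s = (t, wall, min(slr, HORIZON))
        # epsilon-greedy over the two actions, minimizing expected cost
        a = random.randint(0, 1) if random.random() < eps \
            else min((0, 1), key=lambda x: q(s, x))
        wall, slr, cost = step(wall, slr, a)
        s2 = (t + 1, wall, min(slr, HORIZON))
        future = 0.0 if t == HORIZON - 1 else min(q(s2, 0), q(s2, 1))
        Q[(s, a)] = q(s, a) + alpha * (cost + future - q(s, a))

# Greedy recommendation at the first epoch, before any SLR is observed.
best0 = min((0, 1), key=lambda a: q((0, 0, 0), a))
print("initial action (0=wait, 1=raise):", best0)
```

The key feature mirrored here is that the learned policy is a function of the *observed* sea level, so the agent adapts its decisions as observations accumulate, rather than committing to a fixed schedule up front.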