Abstract
The integration of Artificial Intelligence (AI) into High-Performance Liquid Chromatography (HPLC) method development marks a paradigm shift from empirical and interpretive frameworks toward adaptive, data-driven optimization. This critical review traces the technological evolution from traditional Design of Experiments (DoE) and retention modeling to AI-powered platforms employing machine learning (ML), deep learning (DL), and reinforcement learning (RL). While AI offers unmatched capabilities in predicting retention times, optimizing gradient conditions, and enabling real-time control, its adoption remains fragmented due to critical challenges in model interpretability, regulatory validation, and data standardization. A key insight is the persistent mischaracterization of deterministic simulators (e.g., DryLab®, AutoChrom™) as AI tools, which blurs the conceptual boundary between mechanistic modeling and data-driven learning. Furthermore, black-box models, though powerful, suffer from poor explainability, limiting their acceptance in GxP-regulated environments. The review emphasizes the need for hybrid frameworks that merge mechanistic transparency with AI adaptability, and it highlights gaps in training dataset diversity, feature engineering, and lifecycle-based model validation. Emerging trends such as explainable AI (XAI), closed-loop reinforcement learning, digital twins, and federated learning are discussed as pivotal enablers of next-generation autonomous analytical platforms. Ultimately, this review establishes that AI is not merely a computational enhancement but a strategic imperative for scalable, reproducible, and intelligent HPLC workflows. Its transformative potential, however, can be realized only through ethical deployment, domain-aligned design, and interdisciplinary collaboration that reconciles innovation with regulatory trust and operational relevance.