Authors
Etienne Reboul,Zoe Wefers,Harish Prabakaran,Jérôme Waldispühl,Antoine Taly
Abstract
Generative modeling for chemistry has advanced rapidly in recent years, but this surge in popularity raises a foundational question: which molecular representation is best suited for modern machine learning models? Despite not being designed for generative tasks, SMILES remains the most commonly used string-based representation. However, while SMILES follows strict syntactic rules, grammatically correct SMILES strings do not always correspond to valid molecules. SELFIES, an alternative grammar, addresses this limitation by ensuring that every string of SELFIES tokens represents a valid molecule. In this study, we comprehensively evaluate the limitations of both SMILES and SELFIES as representations for generative models. We analyze two key criteria for robust molecular generation: viability, meaning that generated strings represent novel, unique molecules with correct valence, and fidelity, meaning that the distribution of physicochemical properties of sampled molecules resembles that of the training data. We find that approximately one-fifth of the molecules generated using RDKit default canonical SMILES are invalid, failing the viability criterion. In contrast, all SELFIES-generated molecules are viable, but they deviate significantly from the training distribution, indicating low fidelity. To address these limitations, we develop data augmentation procedures for both representations. While simplifying the SELFIES grammar yields only modest gains in fidelity, our stochastic augmentation method for SMILES, ClearSMILES, significantly improves both viability and fidelity. ClearSMILES simplifies syntax by reducing the vocabulary size and explicitly encoding aromaticity via Kekulé SMILES, making the string representations easier for models to process. Using ClearSMILES, the rate of invalid samples decreases by an order of magnitude, from 20% to 2.2%, and fidelity to the training distribution is also moderately improved.
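To illustrate the viability criterion discussed above, the toy sketch below flags two common syntactic failure modes in generated SMILES strings: unbalanced branch parentheses and unmatched ring-closure digits. This is only an illustrative check of our own devising (the helper name is hypothetical); real viability assessment requires full valence-aware parsing, e.g. RDKit's `MolFromSmiles` returning `None` for invalid strings.

```python
def syntactically_plausible(smiles: str) -> bool:
    """Toy SMILES sanity check: balanced '(' ')' branches and paired
    single-digit ring-closure labels. Not a substitute for RDKit's
    full valence validation."""
    depth = 0
    open_rings = set()
    for ch in smiles:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:          # closing a branch that was never opened
                return False
        elif ch.isdigit():
            # each ring-closure label must appear exactly twice
            if ch in open_rings:
                open_rings.remove(ch)
            else:
                open_rings.add(ch)
    return depth == 0 and not open_rings

print(syntactically_plausible("c1ccccc1"))  # True: benzene, well-formed
print(syntactically_plausible("c1ccccc"))   # False: dangling ring closure
print(syntactically_plausible("CC(C"))      # False: unclosed branch
```

Even strings that pass such syntactic checks can encode chemically impossible valences, which is precisely the gap between grammatical correctness and molecular validity that the abstract describes.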