Designing effective mRNA sequences for therapeutics remains a formidable challenge. Inspired by successes in protein design, language models (LMs) are now being applied to RNA, but progress is often impeded by the lack of comprehensive training data. Existing models are frequently limited to UTR or CDS regions, restricting their applicability to complete mRNA sequences. We introduce mRNABERT, a robust, all-in-one mRNA designer pre-trained on the largest available mRNA dataset. To enhance performance, we propose a dual tokenization scheme with a cross-modality contrastive learning framework that integrates semantic information from protein sequences. On a comprehensive benchmark, mRNABERT achieves state-of-the-art performance, outperforming previous models in the majority of tasks spanning 5' UTR and CDS design, RNA-binding protein (RBP) site prediction, and full-length mRNA property prediction. It also surpasses large protein models on several related tasks. This superior performance across diverse tasks marks a substantial advance for mRNA research and therapeutic development.
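The cross-modality contrastive objective pairs an mRNA embedding with the embedding of the protein it encodes, pulling matched pairs together and pushing mismatched pairs apart. A minimal InfoNCE-style sketch is below; the function name, the symmetric-loss formulation, and the temperature value are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def cross_modal_info_nce(rna_emb, prot_emb, temperature=0.07):
    """Symmetric InfoNCE over paired mRNA/protein embeddings.

    rna_emb, prot_emb: (N, D) arrays; row i of each is a matched pair.
    Returns a scalar loss that is minimized when each mRNA embedding is
    most similar to its own protein embedding (the diagonal of the
    similarity matrix) and dissimilar to the other N-1 proteins.
    """
    # Unit-normalize so the dot product is cosine similarity.
    rna = rna_emb / np.linalg.norm(rna_emb, axis=1, keepdims=True)
    prot = prot_emb / np.linalg.norm(prot_emb, axis=1, keepdims=True)
    logits = rna @ prot.T / temperature  # (N, N); diagonal = positives

    def row_cross_entropy(l):
        # Numerically stable softmax cross-entropy with diagonal targets.
        l = l - l.max(axis=1, keepdims=True)
        probs = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        idx = np.arange(len(l))
        return -np.log(probs[idx, idx]).mean()

    # Average the mRNA-to-protein and protein-to-mRNA directions.
    return (row_cross_entropy(logits) + row_cross_entropy(logits.T)) / 2
```

In practice the two encoders (mRNA and protein) would produce these embeddings, and the contrastive loss would be combined with the masked-language-modeling pre-training objective.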