Road anomaly detection is critical to safe autonomous driving because current road scene understanding models are usually trained in a closed-set manner and fail to identify unknown objects. Worse still, it is difficult, if not impossible, to collect a large-scale dataset with anomaly annotations. This paper therefore studies unsupervised anomaly detection, which identifies anomalous regions using scene parsing logits alone. Whereas prior methods depend on weights learned from the closed training set as anchors for logit generation, we resort to language anchors learned from enormous amounts of paired vision-language data. Thanks to the rich open-set semantic information contained in these language anchors, our method outperforms its unsupervised predecessors while retaining the advantage of training without access to any out-of-distribution data. We delve into this new paradigm and identify the superiority of pair-wise binary logits, which we credit to a better understanding of the negation language anchor. Finally, we find that the conventional top-1 selection of semantic labels for uncertainty measurement is problematic in many cases, and a new blended standardization strategy brings clear improvements to our solution. We report state-of-the-art performance among comparable methods on the FS LostAndFound, LostAndFound, and RoadAnomaly datasets. The code is publicly available at https://github.com/TB5z035/URAD-LA.git
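To make the pair-wise binary logit idea concrete, below is a minimal sketch of one way such logits could be formed from CLIP-style language anchors: each known class is scored against its own negation anchor in a two-way softmax, rather than competing with all other classes at once. The prompt setup, tensor shapes, function names, and temperature are illustrative assumptions, not the paper's exact formulation; the linked repository contains the actual implementation.

```python
import torch
import torch.nn.functional as F

def pairwise_binary_logits(pixel_feats, pos_anchors, neg_anchors, temperature=0.07):
    """Form one binary (class vs. its negation) logit pair per known class.

    pixel_feats: (N, D) L2-normalized pixel/region embeddings.
    pos_anchors: (C, D) text embeddings of the C in-distribution class prompts.
    neg_anchors: (C, D) text embeddings of the corresponding negation prompts.
    Returns: (N, C) probability that each pixel matches each class rather
    than its negation. All names and shapes here are illustrative assumptions.
    """
    pos_sim = pixel_feats @ pos_anchors.t()            # (N, C) class similarity
    neg_sim = pixel_feats @ neg_anchors.t()            # (N, C) negation similarity
    # Per-class binary softmax between the class anchor and its negation anchor.
    logits = torch.stack([pos_sim, neg_sim], dim=-1) / temperature  # (N, C, 2)
    return F.softmax(logits, dim=-1)[..., 0]           # (N, C)
```

A pixel that matches none of the in-distribution classes would score low against every positive anchor in its pair, which is one plausible route to an anomaly score (e.g., one minus the maximum over classes).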
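Similarly, the sketch below illustrates one plausible reading of blended standardization, assuming each class's logit map is standardized over the image and the strongest standardized responses are blended, instead of trusting the single top-1 logit. The choice of top-k and the mean blend are hypothetical details for illustration only.

```python
import torch

def blended_standardized_score(logits, top_k=3, eps=1e-6):
    """Anomaly score from standardized, blended logits (illustrative sketch).

    logits: (C, H, W) per-class scene parsing logits for one image.
    Each class map is standardized over the image, and the top-k
    standardized logits are averaged instead of taking only the top-1.
    'top_k' and the mean blend are assumptions, not the paper's recipe.
    """
    mean = logits.mean(dim=(1, 2), keepdim=True)       # (C, 1, 1)
    std = logits.std(dim=(1, 2), keepdim=True)         # (C, 1, 1)
    z = (logits - mean) / (std + eps)                  # standardized logit maps
    topk = z.topk(top_k, dim=0).values                 # (k, H, W) strongest classes
    confidence = topk.mean(dim=0)                      # blend rather than top-1
    return -confidence                                 # higher = more anomalous
```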