Nested named entities (nested NEs), where one named entity is contained or nested within another, cannot be recognized by traditional sequence labeling methods. Recently, span-based methods have become the mainstream approach to nested Named Entity Recognition (nested NER). The core idea of these methods is to enumerate nearly all possible spans as candidate entity mentions and then classify them. However, span-based methods classify spans independently, without considering the semantic relations among them, which degrades span representations. To address this issue, we propose a novel deep learning architecture for nested NER that explores interactive and contrastive relations among spans. Specifically, we design a scale transformation mechanism that embeds geometric information into span representations, enhancing the model's ability to encode interactive relations between spans. Additionally, we introduce a supervised contrastive learning loss that pulls apart highly overlapping spans in the embedding space to encode contrastive relations. Experiments show that our method achieves state-of-the-art or competitive performance on three public nested NER datasets, validating its effectiveness.
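As a rough illustration of the contrastive component described above, the sketch below pushes apart highly overlapping spans that carry different entity labels while pulling together spans of the same type in the embedding space. It is a minimal sketch, not the paper's exact formulation: the overlap criterion, temperature, and helper names (`span_overlap`, `contrastive_push_loss`) are assumptions made for exposition.

```python
# Hedged sketch of a supervised contrastive loss over span embeddings.
# Assumed: spans are (start, end) index pairs, labels are entity-type ids,
# and span_emb holds one vector per candidate span.
import torch
import torch.nn.functional as F


def span_overlap(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Pairwise overlap ratio between spans a[i]=(start, end) and b[j]=(start, end)."""
    inter_start = torch.max(a[:, None, 0], b[None, :, 0])
    inter_end = torch.min(a[:, None, 1], b[None, :, 1])
    inter = (inter_end - inter_start + 1).clamp(min=0).float()
    len_a = (a[:, 1] - a[:, 0] + 1).float()[:, None]
    len_b = (b[:, 1] - b[:, 0] + 1).float()[None, :]
    return inter / torch.min(len_a, len_b)


def contrastive_push_loss(span_emb, spans, labels, overlap_thr=0.5, temperature=0.1):
    """Pull together same-label spans; push apart overlapping spans with different labels."""
    emb = F.normalize(span_emb, dim=-1)                  # (N, d) unit-norm embeddings
    sim = emb @ emb.t() / temperature                    # scaled cosine similarities
    overlap = span_overlap(spans, spans)                 # (N, N) overlap ratios
    same_label = labels[:, None].eq(labels[None, :])
    eye = torch.eye(len(labels), dtype=torch.bool, device=emb.device)

    pos_mask = same_label & ~eye                          # positives: same entity type
    neg_mask = ~same_label & (overlap >= overlap_thr)     # negatives: overlapping, different type

    exp_sim = sim.exp()
    denom = (exp_sim * (pos_mask | neg_mask).float()).sum(dim=-1) + 1e-12
    log_prob = sim - denom.log()[:, None]

    pos_count = pos_mask.float().sum(dim=-1)
    loss = -(log_prob * pos_mask.float()).sum(dim=-1) / pos_count.clamp(min=1)
    return loss[pos_count > 0].mean() if (pos_count > 0).any() else sim.new_zeros(())
```

In this sketch the denominator only includes same-type positives and highly overlapping different-type negatives, so minimizing the loss separates nested or overlapping spans of different entity types while keeping spans of the same type close; the actual loss used in the paper may weight or select pairs differently.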