Language has a profound impact on how we perceive the world. With GPT-3's rise in popularity, most recently its use in over 300 applications generating an average of 4.5 billion words per day, it is critical to identify and correct biases in its generations. A variety of biases have been identified in generative language models, including biases based on gender, race, and religion. In this paper, we pioneer the study of Brilliance Bias in generative models. This implicit yet powerful bias promotes the idea that "brilliance" is a male trait and, in turn, holds back women's achievements, taking root in children as early as ages 5-7. We analyze two GPT-3 models, the base GPT-3 model (davinci) and InstructGPT (text-davinci-002), focusing on the adjectives, verbs, and lexicons found in their generations. Our analysis reveals substantial Brilliance Bias in both models.
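As a minimal illustration of the kind of lexical analysis described above, the sketch below tags the adjectives and verbs in a model generation using spaCy. The sample generation, function name, and counting scheme are hypothetical and do not reproduce the paper's exact pipeline; they only show how part-of-speech counts over generations might be gathered, under the assumption that the `en_core_web_sm` pipeline is installed.

```python
from collections import Counter

import spacy

# Small English pipeline; assumes `python -m spacy download en_core_web_sm`.
nlp = spacy.load("en_core_web_sm")


def count_adjectives_and_verbs(text: str) -> dict[str, Counter]:
    """Count adjective and verb lemmas in a generated text.

    Illustrative helper, not the paper's actual analysis code.
    """
    doc = nlp(text)
    counts = {"ADJ": Counter(), "VERB": Counter()}
    for token in doc:
        if token.pos_ in counts:
            counts[token.pos_][token.lemma_.lower()] += 1
    return counts


# Hypothetical GPT-3 generation used only to demonstrate the tagging.
generation = "She was a brilliant, hardworking physicist who solved hard problems quickly."
print(count_adjectives_and_verbs(generation))
```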