People often provide answers that go beyond what a question literally asks, but it has been difficult to pin down what makes some answers more relevant than others. Here we introduce PRIOR-PQ, a probabilistic cognitive model that formalizes how people use theory of mind to produce and interpret relevantly overinformative answers to yes-no questions. Specifically, PRIOR-PQ grounds the pragmatics of question-answering in inferences about the underlying goal that motivated the questioner to ask this question rather than another. We evaluate the model against human answering behavior elicited in three case studies of increasing complexity, showing that it predicts nuanced patterns of relevance better than existing models, including state-of-the-art large language models. We also show how the goal-sensitive reasoning instantiated in PRIOR-PQ motivates a novel chain-of-thought prompting method that brings language models closer to human-like performance. This work illuminates the mechanistic role of theory of mind in the pragmatics of question-answer exchanges, bridging formal semantics, cognitive science, and artificial intelligence. Our findings have implications for developing more socially grounded dialogue systems and highlight the value of integrating normative cognitive models with machine learning approaches.