Despite the proliferation of artificially intelligent systems capable of social interaction, how and why such interaction influences users over time remains poorly understood. We draw on theories of technology adoption and on research in affective computing, social psychology, and management to introduce the concept of human-AI relationships characterized by interdependence, temporality, and intensity. We develop the Relational Tradeoff Model, which extends current theorizing on technology adoption by accounting for a critical third factor alongside cognitive acceptance and behavioral use: human subjective well-being. The model reveals an important, unexplored tradeoff in relationships with socially interactive AI: short-term gains in acceptance and use but long-term costs to subjective well-being in terms of trust, psychological safety, and emotional labor, depending on the AI's social function and on individual and relational factors that exacerbate or mitigate these effects. We discuss implications and directions for future research, including intrapersonal, interpersonal, and team relational dynamics and evolving expectations of AI in organizations.