As AI-powered negotiations spread, their psychological and relational impact remains unclear. We propose a novel Generative Adversarial Network (GAN) framework that trains a bot to pursue superior economic outcomes while appearing “human” (“algorithmic anthropomorphization”). In a bargaining game experiment, we compare this “superhuman” bot to two simpler alternatives: a bot that mimics human behavior and a purely efficient bot. Our results show that (a) superficial anthropomorphization can make a bot seem human but does not improve subjective evaluations, (b) the efficient bot is so rational that it is easily exploited, undercutting its performance, and (c) the superhuman bot achieves superior economic results while appearing more human than actual humans. Yet even when bots act indistinguishably from humans, they may trigger an “uncanny valley” effect, lowering subjective evaluations regardless of performance. Because subjective evaluations predict future negotiation outcomes, these findings highlight the potential negative impact AI bargaining algorithms can have on long-term customer relationships. We urge firms to measure more than objective outcomes when assessing AI negotiators.
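The adversarial idea above can be sketched in miniature. This is a hypothetical toy, not the paper's implementation: the bot's policy is a single parameter `theta` (its mean demand in a one-shot split), a logistic discriminator learns to tell human demands from the bot's, and the bot ascends a blended objective of economic payoff plus the discriminator's "looks human" score. The human-offer data, the mixing weight `alpha`, and all learning rates are invented for illustration.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical human demands in an ultimatum-style split (invented data).
HUMAN_OFFERS = [0.50, 0.52, 0.55, 0.55, 0.58, 0.60]

def train(alpha=0.7, steps=500, lr=0.05, seed=0):
    """Alternate a discriminator step (human vs. bot) with a policy step.

    alpha weights economic payoff against human-likeness:
    J(theta) = alpha * theta + (1 - alpha) * log D(theta).
    """
    rng = random.Random(seed)
    w, b = 0.0, 0.0   # discriminator parameters for D(x) = sigmoid(w*x + b)
    theta = 0.5       # bot policy: its mean demand, constrained to [0, 1]
    for _ in range(steps):
        # Discriminator step: label human samples 1, the bot's demand 0,
        # and descend the binary cross-entropy gradient (D(x) - y).
        for x, y in [(rng.choice(HUMAN_OFFERS), 1.0), (theta, 0.0)]:
            err = sigmoid(w * x + b) - y
            w -= lr * err * x
            b -= lr * err
        # Policy step: ascend the blended objective; the gradient of
        # log D(theta) with respect to theta is (1 - D(theta)) * w.
        d = sigmoid(w * theta + b)
        grad = alpha + (1.0 - alpha) * (1.0 - d) * w
        theta = min(1.0, max(0.0, theta + lr * grad))
    return theta, (w, b)

theta_star, disc = train()
```

With `alpha = 1.0` the human-likeness term vanishes and the bot drifts to the maximally greedy demand, mirroring the "purely efficient" baseline; intermediate `alpha` trades payoff against staying inside the human-looking region of offers.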