TY - JOUR
AU - Vaassen, Bram
AB - This article investigates how human interactions with AI-powered chatbots may offend human dignity. Current chatbots, driven by large language models, mimic human linguistic behaviour but lack the moral and rational capacities essential for genuine interpersonal respect. Human beings are prone to anthropomorphize chatbots – indeed, chatbots appear to be deliberately designed to elicit that response. As a result, human beings' behaviour towards chatbots often resembles behaviours typical of interaction between moral agents. Drawing on a second-personal, relational account of dignity, we argue that interacting with chatbots in this way is incompatible with the dignity of users. We show that, since second-personal respect is premised on reciprocal recognition of second-personal moral authority, behaving towards chatbots in ways that convey second-personal respect is bound to misfire in morally problematic ways, given the lack of reciprocity. Consequently, such chatbot interactions amount to subtle but significant violations of self-respect – the respect we are duty-bound to show for our own dignity. We illustrate this by discussing four actual chatbot use cases (information retrieval, customer service, advising, and companionship), and propound that the increasing societal pressure to engage in such interactions with chatbots poses a hitherto underappreciated threat to human dignity.
TI - AI Mimicry and Human Dignity: Chatbot Use as a Violation of Self-Respect
JF - Journal of Applied Philosophy
DO - 10.1111/japp.70037
DA - 2025-08-07
UR - https://www.deepdyve.com/lp/wiley/ai-mimicry-and-human-dignity-chatbot-use-as-a-violation-of-self-38pyUvId04
VL - Early View
DP - DeepDyve
ER -