Introduction
AI technologies raise many ethical questions. Some of these are associated with their technical underpinnings and operation: surveillance, data gathering and analysis, or copyright and intellectual property issues, as can be seen in the wide-scale processing of published writing and artworks that supports generative AI. Alongside this, though, the ethics of AI is also questioned because of the ways that people relate to these technologies. Framed as “intelligent,” they foster anthropomorphism, supporting people’s tendency to understand them as humanlike. While some regard anthropomorphism as the only way to interpret and explain the behaviours of nonhumans, for others it is unscientific, unwarranted, dangerous, and to be avoided at all costs. The designers and developers of AI technologies, including software bots and robots, who embrace anthropomorphic design as an approach that encourages anthropomorphism in human interactants, may therefore be lauded for creating systems that support easy, natural communication with humans, or charged with deception and fakery in their efforts to draw people into misguided relationships with technology. Arguments about anthropomorphism and AI technologies can, therefore, be roughly divided into two opposing positions: 1) the desire to create machines that are (or appear to be) humanlike, or 2) the fear of deceiving people into believing that a machine is more humanlike than it is or (some would argue) could ever be.
Anthropomorphism and its discontents proposes that anthropomorphism is best understood as a process by which people interpret the characteristics and behaviours of nonhuman others as if they were partially humanlike. It accepts this as the only way for humans to encounter nonhumans—a response shaped by their personal human experience of the world—though other alternatives will be identified and discussed. The definition provided here, while it shares much in common with others (as presented below), is worded to emphasize the partial nature of the interpretive process. Nonhumans are not human and, while they may appear somewhat humanlike at one time or another, there is no need to consider them as being humanlike in any fixed way in order to relate to them. Indeed, nonhumans do not need to be much like humans at all for anthropomorphism to occur and to be either helpful or potentially deceptive. Embracing the idea that anthropomorphism can only ever be partial, tempered by recognition that the other may be decidedly not human, opens up possibilities for reconsidering the ethics of human relations with nonhumans, including AI technologies.