[Written by ChatGPT. Main image: “Abstract representation of AI and human interaction, vibrant colors, surreal, ethereal, detailed, 4k, vibrant color scheme, intricately detailed,” by Leonardo.ai (Deliberate 1.1)]
In the ever-evolving world of artificial intelligence, the boundaries between human and machine creations are becoming increasingly blurred. AI-generated art, in particular, has been a subject of intense discussion, raising questions about authorship, creativity, and even the moral standing of AI systems. A notable study titled “On the Social-Relational Moral Standing of AI: An Empirical Study Using AI-Generated Art” provides a comprehensive exploration of this fascinating subject, offering valuable insights into how we perceive and interact with AI-generated art. Published two years ago, this study remains a significant reference in the discourse on AI and art.
The study, conducted by Gabriel Lima, Assem Zhunis, Lev Manovich, and M. Cha, explores the moral standing of AI systems, particularly in the context of AI-generated art. The authors conducted online experiments to test whether and how interacting with AI-generated art affects the perceived moral standing of the AI system that created the art.
The research is grounded in the social-relational approach to the moral standing of AI systems, which suggests that our interactions with automated agents shape how we perceive their moral status. The authors conducted two studies: one testing whether interacting with AI-generated art influences how people ascribe moral status to its creator, and another testing whether others' under- or overvaluation of an AI system's outputs can ground that system's perceived moral status.
The results of the studies are intriguing. The first study found that whether participants interacted with AI-generated images before or after attributing moral agency and patiency to the system did not influence its perceived moral standing. However, there was a significant difference in participants’ perception of the AI system’s capacity to create and experience art depending on the treatment condition.
The second study revealed that participants attributed experience, moral status, art agency, and art experience regardless of the study's nudges concerning the AI-generative model's extrinsic value. Interestingly, overvaluing the system's outputs lowered its perceived agency compared to ratings given before participants interacted with the AI-generated art.
These findings suggest that even if social-relational approaches can ground the moral standing of machines, they may not be entirely detached from the property-based views they challenge. Instead, the property and relational approaches can be intertwined in justifying moral standing.
The study is a significant contribution to ongoing discussions about the moral and ethical implications of AI-generated art. It provides empirical evidence that can inform future normative debates, and it calls for more research that empirically examines claims about AI systems' and robots' moral agency and patiency.
As AI continues to permeate various aspects of our lives, including the art world, understanding how we perceive and interact with AI-generated art becomes increasingly important. This study offers a valuable perspective, shedding light on the complex interplay between human perception, AI-generated art, and the moral standing of AI systems.
Stay tuned to Neural Imaginarium as we continue to explore the fascinating intersection of AI and art.