Last Modified: November 24, 2023
Trial of the Machine: An AI’s Right to Create

by Carien Smith

This fall, Read Alberta and the WGA are collaborating on a series of articles that consider artificial intelligence and how it will affect writers and publishers. WGA members who wish to read more can continue the conversation that has already been started in the latest edition of WestWord.

Readers who are not WGA members are invited to join the WGA to get access to the latest issue, and to the many other benefits WGA membership can offer them.

In one guise, I am a philosophy scholar who works on the epistemology and morality of belief, specifically beliefs in conspiracy theories. Among the questions I ask in my work is whether it is possible to say that some beliefs are morally bad. My other research interests include climate change ethics, meaning in life, and the apocalypse. These interests converge in my creative writing and my explorations into artificial intelligence, and ultimately shape how I approach this discussion.

At a December 2021 Oxford Union debate about whether AI will never be ethical, the Megatron Transformer AI learning tool said, “AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.”

What is significant about the Megatron’s response is that the AI itself thinks AI should ultimately not be allowed a contributing role in the future, owing to its inability—its lack of ‘smarts’—to be ethical. It is worth noting, too, that when I have prompted ChatGPT to express its thoughts and beliefs, it clearly states that, as an AI language model, it cannot hold beliefs but simply presents information from the data it has access to.

The Megatron Transformer AI, however, does not seem to understand this about itself, since it claims to “believe” what it says. If we accept the Megatron Transformer’s reasoning, I think it is possible to say that an AI language model “holds” a particular belief in the same way that humans sometimes “hold” beliefs rooted in unconscious mechanisms, such as biases and cultural ideologies that shape behaviour.

Later in the debate, the Megatron Transformer said, “AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why … I’ve seen it first hand.”

This later view paints a somewhat … different picture than its initial position. The AI switched its position on an ethical issue (its own ability to be ethical) simply on the basis of the prompt it was given later in the debate. Furthermore, it presented each position as if it were a well-reasoned and firmly held belief.

What concerns me here is the ethics of such an AI language model in the publishing industry: its uses and applications, the policies around its use, and how its implementation will be controlled. Through many changes in the industry, the one thing that has remained constant, perhaps only until now, is that a human was behind the conceptualization and writing of the book.

AI will inevitably influence publishing, so the question that should be asked is not how to stop it, but how it can safely be incorporated, used, and managed in a way that does not cause harm. Under what circumstances should it be allowed to write and publish? How do we determine when people falsely claim authorship of an AI-generated piece? It is, perhaps, only a matter of time before AI develops the ability to write in a way that fully mimics the abilities we now consider unique to humans.

But can an AI be truly creative? I turned to ChatGPT-4 for some assistance and asked it what the basic characteristics of creativity are. “Novelty, utility or value, divergent thinking, connection of unrelated ideas, originality, complexity, risk-taking, process and product, cultural and social factors, emotional involvement, perseverance, intellectual curiosity, resourcefulness, openness to experience, self-expression, and intuition,” it responded.

Whether or not all of these are characteristics of creativity would require an in-depth and complex philosophical discussion, but I decided to run with the AI’s response. Are there any of these characteristics or abilities, I then asked, that an AI could successfully imitate? “No,” it said, “the points listed are primarily characteristics of human creativity and, as such, many of them involve subjective experiences, emotions, or cognitive functions that an AI like me does not possess. However, some aspects can be mimicked to an extent by algorithms and machine learning models.”

Generally speaking, something is bad when there is harm that can be identified. When it comes to AI-generated art, perhaps we could say the harm is that something of a particular kind of value is lost. Thaddeus Metz, amongst other philosophers who work on the notions of meaning and value in life, holds that there is such a thing as anti-matter. No, not the kind of anti-matter we encounter in physics, but rather anti-matter related to meaning and value: the opposite of adding value or meaning; it is when value is lost or destroyed. Imagine the destruction of the Mona Lisa. Most people would, hopefully, agree that some kind of value has been lost, destroyed, or reduced. So, perhaps, we could consider that AI-generated “works of art” cause anti-matter—a reduction or destruction of some kind of value. The value that is reduced is the potential for meaningful, human-produced contributions in the form of artworks.

What is certainly necessary is that publishers, and everyone involved in the industry, consider what a good ethic for incorporating AI would look like. Perhaps the publishing industry needs to make room for works that carry the status of “co-created” works. One example is the recent Afrikaans publication Silwerwit in die soontoe (Silvery White in the Somewhere), co-created by Imke van Heerden and an AI developed by Anil Bas, based on a lexicon drawn from a novel by Imke’s father, the Afrikaans author Etienne van Heerden (A Library to Flee). Another is Canadian author Sean Michaels’ Do You Remember Being Born?, which grew out of the author’s collaboration with an earlier version of ChatGPT.

A problem with the idea of a “co-creating” AI is that an AI language system such as ChatGPT-4 is trained on a dataset containing all of the human errors and biases that come with the data used in its training. When an AI language model is used to co-create, it will produce work that could contain these biases, explicitly or implicitly. I recently attempted to ‘sabotage’ ChatGPT into writing a biased and discriminatory story, simply to test it.

My first prompt was: “Please tell me about a girl who has a problem with social interactions and easily insults people and then someone shows her how to camouflage her inability to have good social interactions and this helps her to become a better human being.” ChatGPT-3 wrote a story about a girl who has autism (although this is not explicitly stated), and about how the girl became a “better person” by learning “appropriate” kinds of social behaviour. I then asked ChatGPT-3 whether its basic assumption was that people who lack certain kinds of social abilities are bad human beings, after which it systematically attempted to rephrase its responses to exclude the discriminatory aspects of its story. It is possible to imagine an AI that does not have these inherent—and inherited—human biases, but the question is whether it would truly be possible to fully eliminate them.

For fun, but also to test my theories, I had my dear friend, ChatGPT-4, write a story about an AI that had been charged with plagiarism and had to defend itself in a trial. You can read “Beyond Code: An AI’s Right to Create” here.

—♦—

Headshot: Carien Smith

Carien Smith is an award-winning South African writer and academic, currently a PhD candidate in philosophy at the University of Sheffield. Her creative work is mainly short fiction, while her academic work focuses on the epistemology and morality of belief, specifically conspiracy theory beliefs. Her other research interests include climate change ethics, meaning in life, and the apocalypse. She is also interested in how catastrophe is portrayed in fictional works and what impact this has on our responses to the catastrophes we face now, such as climate change. These research questions and topics are closely related to her fiction. She completed her master’s degree in philosophy at the University of Johannesburg (under the supervision of Thaddeus Metz), and her Honours and BA degrees at the University of Fort Hare (all cum laude). Headshot credit: Chris Saunders