Opinion: Who is the artist in a world where artificial intelligence creates art?
Before there was generative artificial intelligence (GAI) art — a specific application in the world of artificial intelligence — there was digital art, or computer-generated imagery. I’ve seen movies where the credits list hundreds of digital artists and CGI artists creating backgrounds and special effects. Now, technology has evolved to include words being extrapolated into pictures.
The key question for artists who use these techniques as well as those who appreciate their work is trust. Is there a human being in what we see and hear? Does our lack of understanding of how artists use these techniques give the appearance of magic? What if these technologies were understood as collaborators in making art? Some might say it doesn’t really matter. Maybe all that matters is that we’re having fun.
To a large extent, digital editing tools have become accepted as just another set of tools for making art. Many people understand this when we use our mobile phones as cameras and then use an app to edit our photos. This is how mainstream, simple digital tools have been integrated into what we might call the egalitarian aspect of art making. The same is likely to happen with imagery in the world of artificial intelligence.
It is worth taking a closer look at how GAI art is made, especially to discover whom we identify as the artist in these images. The answer turns on the extent of human authorship, which matters in determining whether to grant copyright to such images.
There are other questions that come into play. Is the artist exploiting images owned by others – the thousands or millions of images on which the GAI model was trained? The answers to these questions may turn on how much post-processing is applied to the raw GAI image, and on whether artists train their own GAI models on their own datasets – their own art, their own images.
Here we can have a productive conversation about the identity of the artist in the world of artificial intelligence.
I used the following text as a prompt to generate an image in an AI model: A photo of a serious chatbot, half human and half cyborg, sitting in front of a computer screen and writing a text message; on the wall in front of the cyborg appears a mirror image of an 80-year-old indigenous human.
I used Bing’s DALL·E 3 generator to convert the text to an image. One of the four images created sparked my interest.
I could have stopped at this point with some minor lighting adjustments. Instead, I decided to open the image in Photoshop, which has a tool called Generative Fill. I expanded the image canvas and let Generative Fill extend the image. Still not satisfied, I decided to play with the story.
Here is a cyborg who created his own image of a human. It might also be interesting for this “human being” to imagine himself in a memory of his youth. But no matter how many words I used in my prompts, no memories of youth were generated.
So, I reviewed previous GAI images I had created with other prompts and found one that could fit this emerging story. I copied that image and pasted it into the one I was working on. Now I was combining images – creating a collage – inside Photoshop. Drawing on my previous incarnation as a digital artist, I set out to blend the two images into a more powerful one.
Setting that aside, we might ask whether either of these images can be considered eligible for copyright. As an artist, I think my initial collaboration with DALL·E 3 was minimal. As collaborative art between human and machine, I should admit that the DALL·E 3 image was not what I expected; it went beyond my script.
In terms of human authorship, I contributed more through words than through the image itself. Can words and images be separated? In this analysis, the words are likely to be set aside and the question reduced to the image alone.
But what about my revised image? In the revision, I expanded the canvas with another GAI program; I brought in a separate photo, which admittedly was also a GAI image like the one I was working on; I devised a title to tell an unusual story. I placed both images on one canvas to create an effective composition and continued with more lighting effects.
If I were asked about the human authorship of this revised image, I would assert that I have moved the needle to greater human involvement, to more explicit human authorship. Is this enough for copyright at this moment in the development of new art-making technology? There are perhaps other questions more interesting than the issue of copyright.
Now, let’s have a conversation.
Joe Nalvin is a consultant for Californians for Equal Rights and former associate director of the California Regional Studies Institute at San Diego State University.