Is AI Art Good?
I look at AI-generated “art” to wrestle with my feelings about both the ethics of its production and the aesthetic merit of its output.
I’ve found myself on Threads more over the past few weeks due to a mix of curiosity and mild holiday downtime boredom. The odd thing about how aggressive their algorithm is about showing you people you don’t follow, know, or even care about is that it becomes hard to tell if a particular conversation is big or if Meta has just decided they think I care about it. Either way, my feed has been full of folks debating the merits of AI art. While I find most of the posts Threads shows me add little to my understanding of the topic, I guess they have correctly identified that it’s something I’m curious about. So today, let’s unpack that question: Is AI art good?
Let’s start by breaking this thing apart because I actually think we are dealing with two distinct questions: is AI art ethically produced, and does the output of AI tools have artistic merit?
Is AI art ethically produced?
Well, a lot of writers and artists don’t seem to think so. Sarah Silverman and a group of other writers sued OpenAI, accusing them of copyright violation. The New York Times is now also suing OpenAI with a similar complaint. Visual artists are circulating a list of artists Midjourney supposedly used for training their model so it could imitate their styles. While these cases sort out the legal argument, I’m inclined to go ahead and agree with the ethical argument motivating these suits.
Artists and writers share their work, knowing it can inspire or be a reference point for other creatives. There are laws to protect certain kinds of remixing, sampling, and parody using other artists' work. But having your work taken without your knowledge or consent and used to create a commercial product that will never share its financial benefits back to you feels like a violation. You should need an artist's consent to use their work in a training model and financially compensate them if you monetize that model.
If courts or lawmakers side with my gut instinct on the ethics of the situation, that’s likely a big problem for AI companies. OpenAI has said it would be impossible to build their model without copyrighted works. They’ve started making deals with large publishers like Axel Springer, the parent company of Business Insider and Politico, to license their content. Someday, we might have large language models trained exclusively on public domain data and/or data from consenting and compensated copyright holders. Until then, using these tools, especially in a commercial context, feels like a dubious move.
Does AI art have artistic merit?
On one hand, I try to leave space for letting people enjoy things. If you find meaning, insight, entertainment, or joy in something, who am I to tell you it lacks merit? Wait, actually, I’m a person who spent years studying art and performance history. I work as a creative professional. I strive to surround myself with interesting art from across the high-to-low-brow spectrum. I’ve got taste. So I can say with confidence that most of this AI stuff looks like shit.
At first, the surreal quality of the computers trying to replicate images was kinda charming. As these models have evolved, some people look at them and think, “wow, it’s so much more accurate now.” I tend to look at AI-generated images and think, “wow, that is painfully generic looking now.” There’s a sameness to the saturation and composition of most AI images. I know you can get more particular with your prompts to push beyond that, but few people sharing these images seem to bother. I get a similar vibe from the text LLMs generate. The tone and style feel… expected. Then there’s the problem of “hallucinations,” where AI tools add extra fingers or made-up facts to their outputs. The unreality of these “hallucinations” is probably the most aesthetically interesting thing about AI-generated works to me, but I don’t think any of that is intentional.
The vast majority of AI-generated text or images that have crossed my path to date did nothing for me. Good art connects you with the world and people around you. It gives you a window into how another human mind experiences life. It makes you think differently or feel more deeply. I don’t get that here.
AI tools are just that: tools. I use Grammarly to edit this newsletter. It uses machine learning (what we called this stuff before AI became the buzzword du jour) to help me spot spelling errors. It also tries to tell me certain style choices I make are grammar errors. I take what’s helpful and ignore what I don’t need. I’m open to AI-generated images or text playing a similar role in someone else’s artistic process.
I can imagine projects that would interest me, especially with artists or writers training models on their own work. Even then, I think machine-generated output would need to be a starting point or component in something transformed by human labor and vision for me to connect with it. I haven’t actually seen these creations yet, though.
Now, back to our question: is AI art good? I don’t think so, not as it exists right now. I think it’s possible these tools can be ethically used to make interesting art in the future. But we, as a culture, need to establish solid legal and cultural ground rules for the role AI plays in art. We must ask lawmakers to protect copyright holders and ask ourselves to spend time with artistic works that actually move us as humans.
Platform Updates
Instagram & Threads
Instagram head Adam Mosseri says sorry for all those trashy Threads recommendations (Good, because I’ve seen SO much hateful spam on there. Blocking is probably my most common action in the app at this point.)
The Rest of Meta
TikTok
YouTube
Google shapes everything on the web. (long-form piece by The Verge on the impact chasing Google SEO ranking has on the look and feel of the web)
Google Formally Endorses Right to Repair, Will Lobby to Pass Repair Laws
Apple
Twitch
Discord
Snap
Substack
Substack is going to remove five Nazi newsletters (this seems like a very minor course correction compared to the scale of the problem)
Twitter, Sorry X
The Product
The Dumpster Fire
X Bans and Then Unbans Journalists and Podcasters in Twitter's Latest Free Speech Massacre
The SEC’s X account was hijacked to post a fake approval of Bitcoin ETFs
Elon Musk Has Used Illegal Drugs, Worrying Leaders at Tesla and SpaceX (come on man, don’t make drugs seem this uncool)
Don Lemon and Other Controversial Hosts Score Exclusive Shows on X (Don Lemon once made fun of my boyfriend’s jeans in a gay bookstore in Atlanta, so I’ve never liked him)
Culture Movers
Film & TV
Gaming
AI
Scams
The I Hate How This Could Fit In Multiple Sections and I Also Hate This Story Section
For Your Ears
This week’s episode of The Ezra Klein Show podcast really stuck with me. He spoke with guest Kyle Chayka about discovering your own taste. They touch on the impact of algorithmic feeds on discovering our own sense of aesthetic or intellectual taste and their impact on how we encounter the unique perspectives of others. Their convo hits the sweet spot of technology, culture, art, and a little bit of philosophy that I enjoy and try to bring into this here little newsletter.
PS
*I’m very dyslexic, and this is a largely free project/hobby. I do not set aside the same time for proofreading that I do for other professional work. If you spot a typo that would cause a communication error, please reach out to gently let me know.