M.O.D.O.K. > ChatGPT

I finally get around to discussing AI here

I saw Ant-Man

The main point of this movie is to do exposition dumps about Kang the Conqueror that I guess might be relevant for future MCU projects. Everything else feels like someone asked ChatGPT to write a generic sci-fi action film. I’m getting real close to invoicing Kevin Feige my overtime rate for making me watch these movies. Worst of all, they ruined my boy M.O.D.O.K. So before we get to the updates, let me remind you what M.O.D.O.K. should look like.

Jesus Christ, how do I describe an image of M.O.D.O.K.? He’s a big head guy in a floating chair. His visage is both very evil and very silly. I love him.

Platform Updates

Instagram 

The Rest of Meta 

TikTok

Twitter

YouTube

Tumblr 

Twitter Alternatives 

Culture Movers 

Film & TV

Gaming 

Time to talk about AI

I’ve been holding off on writing too much about AI here for a few reasons. I wanted time to learn more about these new products, so I actually have something additive to say. I wanted to get a better sense of where the real-world use cases are and what’s just VC crypto scammers pivoting to the next big thing. I wanted to get further into this hype cycle before putting my perspective down in words. This feels like the week to start unpacking my observations, though.

First, no one knows what they are talking about here. On a fundamental level, I think a lot of people misunderstand this technology. It shows in how they talk about products like ChatGPT. “It feels sad.” “It lied to me.” “It hallucinated an answer.” These are all great observations of how AI programs make us humans feel, but they don’t accurately describe what the software is doing. 

Here’s where I might put a disclaimer that I’m not a software engineer, and I might get some technical specifics wrong, but I actually think that’s counterproductive. I’m not highly technical; I’m primarily a creative. I want to explain what’s happening here as I understand it. One trick of the style-over-substance technologist is to say, “you’re not technical enough to understand,” and they’re wrong here. So let’s go.

The AI tools that have been all over the headlines recently are predictive models. You give a computer a large set of images or text and ask it to look for patterns. It’s a computer, so you can give it a lot of data to look through and a lot of time/compute resources to identify patterns. 

Then you start asking the computer to predict stuff based on the patterns it’s picked out. “Hey, based on all of the patterns you’ve identified in the text we gave you, what color is an apple?” 

If you’ve given the computer good data like science textbooks and issues of Highlights magazine, it will probably be able to generate “apples are often red, but come in several different colors” as a likely extension of the pattern. If its data diet was just surrealist plays, beat poetry, Soviet propaganda, and 4chan shitposting, you might get something different.

These programs don’t feel or know anything. They predict what should come next based on their training data. That’s still really cool. We just need to understand a little bit about how these tools operate if we want to do anything interesting or useful with them. 
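The idea above can be sketched in a few lines of Python. To be clear, this is a toy illustration, not how ChatGPT actually works — real models use neural networks trained on billions of tokens, not word-pair counts over a made-up corpus — but the core move is the same: tally patterns in the training data, then predict what comes next from those tallies.

```python
import random
from collections import defaultdict

# A tiny stand-in for a "good data diet" of textbooks and Highlights magazine.
corpus = (
    "apples are often red . apples come in several colors . "
    "green apples are tart . red apples are sweet ."
)

# Training: count which word follows which. These counts are the "patterns."
follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def predict_next(word):
    """Predict a likely next word purely from the training counts.

    The model doesn't "know" anything about apples; it just extends
    the patterns it saw. Unseen words get no prediction at all.
    """
    candidates = follows.get(word)
    if not candidates:
        return None
    return random.choice(candidates)

# Ask it to extend the pattern: what tends to follow "are"?
print(predict_next("are"))  # one of "often", "tart", or "sweet"
```

Swap the corpus for surrealist plays and 4chan posts and `predict_next` will cheerfully extend those patterns instead — which is the whole point: the output reflects the data, not understanding.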

Pivot to an AI publishing story

Speaking of interesting and useful, I, unfortunately, want to talk about how AI is actually being used right now.    

Earlier this week Clarkesworld, a well-regarded science fiction magazine, announced that they’re closing submissions. 

Crash course in lit submissions: it’s hard for new writers to find audiences, and a great way to do that is by submitting work to be published in lit magazines. But reading and evaluating all of those submissions takes time, so many magazines have rules around who/when/how they take submissions, or even charge authors to submit work.

Clarkesworld had an open submission policy. So if you had a good story, you could just send it to them, and they’d evaluate if it should go in the magazine. Enter the grifters. 

There’s a whole world of people who come up with internet get-rich-quick schemes and sell the info on how to do them as their own racket. If you ever see a TikTok that says “passive income,” run. One popular grift a while ago was to chain outsourced labor together to make low-quality audiobooks for Amazon. You pay for an SEO person’s list of possible topics, get a content farm to generate a few hundred pages about the subject, publish that as an ebook, hire the cheapest possible voice talent to record some audio, then publish that as an audiobook. Classic hustle bro shit, but also an industry ready to be disrupted by AI.

These kinds of grifters don’t care about quality. So the fact that AI models aren’t good at writing science fiction short stories yet (and likely never will be, because the best stories aren’t extensions of existing literary patterns but new ideas from an author’s imagination and lived experiences) didn’t stop them from flooding Clarkesworld with AI-generated submissions.

It’s a sad situation, and one I don’t see a good short-term fix for. Long term, I hope we’ll have AI tools that are good at spotting other AI works, digital watermarks on the output from AI programs, outlets specifically for appreciating creative work built using AI tools, and better ways to quarantine our digital selves from hustle bros generally. Right now, this just sucks, and my heart goes out to all of the authors with great work looking for an audience.

Other topics I’m watching

I was torn between deep diving into a few topics this week, so I just wanted to call those out here. I might revisit them later, or I might just let these breadcrumbs speak for themselves. 

  • Gonzalez v. Google - There’s a lot going on with this case and a few others that could make big changes to Section 230 and how basically every platform I talk about in this newsletter does business. 

  • Paid Meta - I don’t hate the idea of paying for social media. I just don’t think Meta or Twitter has figured out what most people actually want to pay for. 

Have a great weekend, folks!