Lately I’ve been disturbed by twin phenomena (Phenomeni? Phenomene? Thingy?). One is what I call the AI-head-hunters. The other is the surge of AI-worship/fear.
Look, the first one is just silly. I’m sick and tired of watching a cute cat video on X showing something I’ve seen my cats do (like a cat who wakes up startled and jumps all over, waking the other cats) and having the comments be “Fake! AI slop.”
Just because you haven’t seen it happen doesn’t mean it’s AI slop. If you hate it, you can say “this video is slop” or “this is ridiculous sentimental cr*p” without bringing AI into it.
The same goes for writing, honestly. Yes, there are these overly sentimental, involved and ridiculous stories (particularly on Facebook) where I’m 99% sure they’re AI. Mostly involving cats or dogs or orphans or…. you know the type of thing. The reason I’m 99% sure they’re AI is not the over-sentimentality or even the weird circumstances; it’s that they often display a startling lack of knowledge about how things work. Like the fact that you can’t pick an orphan up off the street and have him live with you forever. (Though that’s possible, I suppose, in some countries.) Yes, there is also what I’ll call, for lack of a better term, “AI cadence.” Something like “the sentences are too unvarying.”
I won’t say I haven’t at least skimmed some of the stories with pleasure, because that would be a lie. Did the fact that I think they’re AI diminish the pleasure? No, sometimes it increased it, as it’s amusing to catch the clanker doing things like talking about an animal regrowing a limb. (They really don’t GET physicality.)
What I’m trying to say here is that people hunt through writing for signs it was AI-generated, as though the very fact it was AI-written makes it bad. This is nonsense.
And at the same time I’m seeing the deranged virtue signaling from my colleagues: “I will never use AI to write.” I’ve taken to returning books with this message in the beginning.
No, this doesn’t mean I’m using AI to write. PFUI. My work isn’t formulaic enough for that. (I am using it to write blurbs, but I admitted to that from the beginning. I sort of have to, because blurbs are something I CAN’T do. If I try, it comes out something like this: “This book is a book. It has a mystery. Oh, yeah, the mystery has a murder. And uh…. there might be musketeers in it. Three of them.” It’s pathetic. Before AI it would take me almost as long to write a blurb as to write the book, and then I’d give up and mug a friend into writing it for me. Only you can’t always corner friends in time, and they were starting to come up with creative excuses.)
However, I read some Austen fanfic I suspected was AI-written, and later confirmed it by finding out the author gives lectures on how to use AI to write. Look, not my favorite, but it was competent fanfic, okay? I only suspected it was AI because (again, the how-do-things-work problem) it periodically forgot that people were somewhere else and they just showed up, or it had characters telepathically know about something they couldn’t possibly know.
I know these are AI characteristics because I read two extremely bad series with horrified fascination. I read them because they were so bad, and yes, you could tell they were AI-written. They showed a third characteristic, amnesia, which the decent fanfic doesn’t have. Amnesia? Yeah, the problem is solved, the characters find out or understand something, but in the next chapter it’s all to do over again.
The good series only has the first two problems, and so rarely that if I weren’t attuned to the characteristics it would have been a big shock to learn it’s AI-written.
The difference? The good series is in the hands of a competent writer. The only other tell, honestly, is that it uses strange millennial lingo sometimes, and it’s super-weird to hear a Regency girl say she “feels seen.” But some of the millennial and zoomer fan writers do that stuff anyway.
So I can say that in the hands of a competent writer, AI is just a tool. And it might cut some time off the production, but (though I haven’t watched her lectures) I bet you that woman still puts as much effort into her stories as any other writer; her process is just different.
None of which matters, because we’re not savages. That is, we’re not Marxists. We don’t judge work on the amount of time or effort it took to write, but on how good or bad it is.
This is why I’ve started returning books by authors who are full of “No AI” virtue signaling. What they’re actually signaling is that they’re fuzzy thinkers infected with Marxism.
The product is the product. Its value is its value. How it was made doesn’t come into it, unless the product is inherently flawed.
Yes, some writing is slop. I find that, except for those three characteristics above, most of the slop produced via AI is just “amateur writer trying and failing to do the thing right.” Because I remember producing stuff almost that bad, or sometimes worse, when I was starting out. (Sometimes screamingly worse.)
Now, I’ll say I’m grateful there was no AI when I started out, because I might never have learned what I did. But that might be survivorship bias. I suspect coming up as a writer with AI is more a matter of taking a different path. And as someone who took 13 years to produce “publishable,” I can hardly say it would have taken me longer, either.
Would the results have been worse? I don’t know. And neither do you. Just as I don’t know if I’d write differently had I come up on a typewriter versus a word processor.
I’ve heard the exact same arguments that are now applied to AI applied to word processors when I was learning: that typewriters gave you a leaner, more authentic voice; that not having to retype things when you changed something produced slop; that–
Turns out the biggest complaint, the over-bloated novel attributed to word processor use, wasn’t even true. Novels bloated because of publishers’ demands: the price of paper had gone up, and somehow paying $13 for a thick book seemed a better deal than $6 for a little book.
Nothing to do with word processors, and everything to do with the market as perceived by trad pub.
There is no virtue that inheres in a tool or a method. All of them can be used badly or used well.
The product is the product. If it’s slop, it’s slop. AI produces slightly weirder slop than a beginning writer does, but the beginning writer who posted it without noticing the errors is the slop producer, not the AI.
And let’s say you found out that your favorite book was written by a writer in possession of a time machine and an AI. Would it diminish your enjoyment an iota? If it would, why? And don’t start flapping your hands and going on about souls or other intangibles. If you love it, you love it. Whether each word was sweated over for days or it was assembled by a predictive algorithm makes no difference. Anything else is sloppy thinking and nonsense.
Further, real people are going to get hurt by this. The hunters are unreasoning and fanatical. And the AI-detecting machines are so bad that classics ping as AI-written.
So stop the hysteria.
Stop deifying AI, too. There are things AI does well, mostly repetitive tasks. But even at those it has flaws. It’s getting better, but the flaws remain giant.
You don’t need me to give you examples. If you’ve dealt with the AI assistants online for various shops or doctors’ offices, you know exactly what I mean. For instance, I went through two hours of trying to get a human on Amazon customer service because a hair treatment thing arrived… exploded. Their AI assistant wanted me to “return” it. With product all over and the box broken. A human understood it in two seconds. Another time I called a credit card office because I had a false ping that the card had been stolen. I just wanted to make sure it was false, but the AI was at the point of cancelling my card when screaming “I want a human” into the phone for half an hour finally got me one. Who answered my question in seconds, by checking the latest purchases and verifying they were indeed mine.
So, good for some tasks, but the belief that it can do everything is leading companies to rely solely on AI, which causes snarls and problems you can’t fix with love or money.
This, btw, is the other side of “AI is just going to take over everything and push humanity out.”
A) If it does, humanity deserves it, and a hamster could have pushed humanity out.
B) Only if we allow it to continue being used for everything, when it can’t do everything. And by that I mean “only if the desire to cut costs leads companies to fire all humans,” and even then, honestly, those companies will just go under. Not humanity. A recession at most.
C) Only if we’re all Marxists now, and don’t understand that economic roles and processes change.
Or to put it another way, the roadsides aren’t full of unemployed buggy whip makers, crying because their livelihood disappeared. Those people moved on, and their children found other jobs.
Will there be pain? There’s always pain with economic dislocation. But machines will never replace humans, because humans will find other things to do that are valued and paid for.
I guess this amounts to “Don’t be an idiot, and also be not afraid.” So that’s the TLDR.
And that’s it.



