Via Glenn Reynolds.
“DAVE FRIEDMAN WROTE A SCIENCE FICTION STORY USING AI: Here’s the prompt: “Write a 10,000 word short story about an asteroid miner marooned on an asteroid. Narrate in first person. The miner has a wry, ironic, and detached demeanor, but he really pines for his family back home. He has a number of robots to keep him company, including one which helps him with sexual needs, but as he becomes more aware of how inextricably marooned he is, he starts to think about descending through Dante’s circles of hell. The overall message is one of increasing existential despair, akin to Sartre’s No Exit. Now, this is a science fiction story, but I want it to be a well-written, *literary* science fiction story–think something akin to Alastair Reynolds’ style.” Story at the link.”
So.
Let’s hear your opinions about both this instance and the general concept…
Do you want to be a writer-from-scratch, or a cook mastering and following a recipe? But then, we do follow our own versions of “what works for us,” don’t we? Is that so different from adding our own AI seasonings to the product?
Is there a line here, or do we find a productive way to incorporate new tools?
My personal opinion is that I am far more concerned about the degradation of the AI base data through feedback input loops than about the tool itself. We might be the only generation that gets to see it unadulterated by its own redigested output.

8 responses to “AI-generated short SciFi story”
I’ve wondered about that input loop myself. But considering what it took to get the original models up and running without being contaminated by filth, I’m not sure what we’re really looking at. I’m thinking of the Kenyan workers who spent thousands of hours scrubbing porn from the original training databases. They were paid next to nothing, and many of them couldn’t stomach the degrading material they were reading for very long. It also puts a different spin on the reports that AI now sometimes deliberately lies; it just shows that the internet is full of deliberate lies.
Also, I read that AI story and got stuck on the idea that the hell it created was supposed to be like Dante’s hell. It’s such a stupid mistake to make. That is NOT what Dante’s hell was about.
You’re lucky when you get a story at all. I’ve heard of people asking for one and getting either excuses for not writing it, or a claim that it’s been posted to a site the AI couldn’t possibly have posted to. Because that’s what writers do in their blogs.
I am distinctly reminded of Isaac Arthur’s various discussions of the prospective AI singularity, particularly the idea of the AI, rather than taking the time and effort to make itself smarter, simply hacking its own test results to show the researchers. Because improved test results were the requirement, right?
No? Oh. OK. And then it goes and hacks a bunch of people’s bank accounts to hire a research team to develop the improved AI its developers wanted.
What do you mean that’s not allowed?
I’m with Larry Correia’s FB post: it’s the writing and creating that’s the fun part. Editing, whether of one’s own stuff or of AI-generated stories, is drudgery.
If I wrote because I needed the income, I’d put the effort into marketing, rather than churning out lots of AI stories.
I’ll agree with that.
AI for writing or pictures has its uses, but you have to be ready to edit like crazy if you want something you’d dare show anyone else. That’s assuming you can get it to make anything like what you asked for in the first place. I once asked an AI for a picture of a guy taken off guard by a surprise kiss from a lovely lady and was told ‘No, this prompt violates our terms of service.’ Because a surprise kiss is some sort of awful violation?
I’ve used AI once for a sonnet (look, I’m no good at poetry, and I don’t particularly want to be), and occasionally to help with book blurbs and writer’s block. Its predictability and lowest-common-denominator tendencies are actually useful in those last two tasks, because they help me see what I’m missing.
I read that story a couple days ago. It seemed to get the style mostly right, but it was repetitive and boring. I can run several LLMs locally and I’ve found them entirely useless for writing. They can variously:
summarize short texts
partly summarize and partly lie about longer (3k+) texts
“read” foreign language text on images
translate languages (including programming languages) poorly
sort of rewrite very short texts in a different style
write technically correct (if you’re lucky) bad poetry
But actual writing? Nah.
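For what it’s worth, the local summarization use mentioned at the top of that list really is only a few lines of Python. This is a minimal sketch, assuming the Hugging Face transformers library and the facebook/bart-large-cnn checkpoint (the commenter doesn’t say which models or tooling they actually run), with a made-up placeholder passage standing in for real text.

# Minimal sketch: summarizing a short text with a locally run model.
# Assumes the "transformers" library is installed and the
# facebook/bart-large-cnn checkpoint can be downloaded; the passage
# below is a placeholder, not from the story under discussion.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

text = (
    "An asteroid miner, marooned after his return vehicle fails, spends his "
    "days on maintenance with his robots and his nights composing messages "
    "to the family he may never see again."
)

# The pipeline returns a list of dicts; "summary_text" holds the summary.
result = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])

Results on short passages like this are usually serviceable; on longer texts, as the list notes, the output drifts into partly summarizing and partly making things up.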
I’ve found them occasionally useful for programming. Even if the example code is bad, it’s usually good enough to put you on the right track, or at least give you the right keywords to look up. As you’d expect, LLMs are entirely useless for tasks outside their training data.
LLM-written books will probably flood the market anyway and make the discovery problem worse.