“AI will replace writers and other knowledge workers.” How long have we been hearing this? Amazon and other distributors are already trying to weed out computer-generated novels and short stories, with varying degrees of success. Thus far, the novels have not been ready for prime time, as the saying goes, at least not in English.

What about children’s books? At least one AI has started down that path. Publishers Weekly has an article about early attempts by the Google™ Gemini AI chatbot to generate illustrated children’s stories and non-fiction books. The hope is that the chatbot will create appropriate stories and illustrations, tailored to the child and to the prompts.

At first, this sounds a lot like the custom children’s books that are already available, where the buyer picks from a menu of story lines and descriptions, and the publisher tucks in the child’s name and a few details, modifies the illustrations to suit the description of the child, and sends the book out, usually in hardback.

“According to a company release, the app’s books can help children to “understand a complex topic” such as the solar system, teach lessons about values such as kindness through characters that reflect the child’s interests (such as animals), bring a child’s own artwork to life, or turn family memories into a custom storybook. In the example story offered by Google with the release, a mother uploads her resume to the app so that her daughters can understand what she does for a living through a custom story featuring her at work.” (From PW.)

According to the review article, users have reported mixed results. Some love it and say it works very well for what they want. It produced books in English and Hindi that followed the prompts and that would probably appeal to children. Others complain that the defaults favor white people in the illustrations, and that the images are not always child-appropriate. Depending on the story prompt, getting Gemini to produce PG illustrations can be very difficult. I suspect the problem is in part the images available on the internet for training the AI. Statistically speaking, certain mythical things are more likely to be PG-13 or R than PG or G.

Other complaints center on the “flat” language and the not-great illustrations: https://www.creativebloq.com/ai/ai-art/why-i-wouldnt-subject-my-child-to-googles-ai-storybooks The blogger, also an illustrator, is less than impressed by the plot, the lack of consistency between pages (birds change color, characters look different), and the dull language.

I suspect that, at least for now, Gemini produces OK stories for rushed parents or babysitters who just want something new on no notice, who forgot to bring along books, or who don’t do bedtime stories easily. It will get better, depending on the prompts and user feedback. Is it a threat to writers’ jobs yet? No. Illustrators? Probably not, in part because there are other, better AI image generators out there, and because the buyers of children’s books look for different things than many of the readers (grandparents) do. Lawsuits are probably in progress at the moment because of how close some of the images look to the work of artists with very famous styles (and whose work is still in copyright). There are also complaints about the AI encouraging kids to stay online rather than handling real books and getting the sensory stimulation that comes with them. Personally, the thought of the computer reading a bedtime story to my child … doesn’t appeal, but I’m not the market.

https://www.techradar.com/ai-platforms-assistants/gemini/gemini-ai-can-turn-prompts-into-picture-books-but-i-still-prefer-paddington

https://www.publishersweekly.com/pw/by-topic/childrens/childrens-industry-news/article/98452-google-launches-personalized-gemini-storybook-app-to-industry-concern.html

9 responses to “AI, Storytelling, and Tales on Demand?”

  1. I couldn’t get into Neal Stephenson’s The Diamond Age, but this feels like the first stumbling baby steps towards someone attempting to program the educational primer I have seen described in reviews of that book.

  2. Gotta say, the idea of generating a novel at the push of a button holds no appeal to me, despite my own struggles in finishing one. I love the process of creating, no matter how frustrating it can get.

    1. I’m with you on that one. I don’t trust a computer to write fiction that I want to read. Given what has been scraped into the LLMs from popular (i.e., “best-selling”) fiction, the subjects, characters, and plots are probably a bit distant from my preferred stories.

  3. Is it bad that my first thought was, “well, if they did this for most tv or movie scriptwriting, would we really be able to tell?”

    The ignorance of the writers doesn’t come across as much different from LLM hallucinations.

    1. Some people have run script ideas through various LLMs and decided that the results were no worse, and perhaps better (“police procedural in a big city” and so on).

    2. “well, if they did this for most tv or movie scriptwriting, would we really be able to tell?”

      I think you would, because the scripts would improve. The AI probably wouldn’t break the characters as much, and it would have better continuity: “MC got shot last episode, so he should be limping this episode.”

      1. Alas, AI is prone to continuity glitches.

        1. Yes, apparently the LLM loses the thread of the story, and the longer it gets, the more it wanders off into the wilderness.

          I maintain this would still be an improvement over the ridiculous trash inflicted upon us by Hollyweird. At least the LLM is not churning out garbage deliberately.

  4. LLMs are hitting the error-reduction wall. It takes an order of magnitude more compute to gain a percent of error improvement. It might actually be worse than that; I’ve seen some numbers being thrown around. It’s a power law, apparently (a rough sketch of what that implies is below). Something new will be called for to breach the wall, and that is not on the horizon.

    LLMs will not be producing books on demand that are good enough to read anytime soon, if ever. Oh well…
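    Back-of-the-envelope, and only as an illustration: assume the error really does follow a power law in compute, error(C) ≈ a·C^(−α). The exponent α = 0.1 in the Python sketch below is a made-up value chosen for the arithmetic, not a measured scaling-law constant.

    ```python
    # Toy illustration of power-law scaling: error(C) = a * C ** (-alpha).
    # alpha = 0.1 is an assumed value for the arithmetic, not a measured constant.

    def compute_multiplier(error_ratio: float, alpha: float = 0.1) -> float:
        """How much more compute is needed to shrink the error by `error_ratio`
        (e.g. 0.9 = a 10% relative reduction), given error = a * C**(-alpha)."""
        # error2/error1 = (C2/C1)**(-alpha)  =>  C2/C1 = error_ratio**(-1/alpha)
        return error_ratio ** (-1.0 / alpha)

    print(f"{compute_multiplier(0.9):.1f}x compute for a 10% error reduction")  # ~2.9x
    print(f"{compute_multiplier(0.5):.0f}x compute to cut the error in half")   # 1024x
    ```

    With an exponent that small, each further slice of error improvement costs roughly another order of magnitude of compute, which is the shape of the wall described above.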
