Artificial intelligence and the craft of writing

In his superb “Up the Organization” (probably the best book on business and management I’ve ever read, and one I like to re-read every year), Robert Townsend spoke of the “Man From Mars” approach.

In solving a complex problem, pretend that you are a Martian.  Assume that you understand everything about Man and his Society – except what has been done in the past by other companies in your industry to solve this particular problem.


For example, when the Massachusetts Turnpike Authority was about to tear down the Avis headquarters in Boston, we asked ourselves, “Where would a man from Mars locate the headquarters of an international company in the business of renting and leasing vehicles without drivers?”  The main criteria became clear:  near active domestic and international airports, so we could go see our managers and they could get to us;  and in a good accounting and clerical labor market.  So we moved to Long Island between JFK and La Guardia, while our larger competition isolated itself on the tight little island of Manhattan.

Makes sense, doesn’t it?  I’ve used that approach when managing my own department, and subsequently a larger division, in business.  It helped to cut through the clutter, and highlight the truly important issues.  It’s applicable to writing, too, as I’ll discuss in a little while.

I’ve written before about the impact of technology on the craft and business of writing.  I think many authors tend to dismiss it as of little importance.  Unfortunately for them, that’s about to change, because artificial intelligence (AI) appears to have just taken a major step forward.  It’s not yet to the point that it can replace a writer, but it’s no longer very far from that, either.  It will certainly threaten the livelihoods of technical writers in the short term.  The New York Post reports:

This AI is so good at writing, its creators won’t release it

A group of scientists … have designed a predictive text machine that is so eerily good its creators are worried about releasing it to the world.

Designed by OpenAI, a nonprofit artificial intelligence research organization co-founded by the eccentric billionaire, the machine can take a piece of writing and spit out many more paragraphs in the same vein.

Called the GPT-2, it was trained on a dataset of 8 million web pages and is so good at mimicking the style and tone of a piece of writing that it has been described as a text version of deepfakes.

. . .

The software has difficulty with “highly technical or esoteric types of content” but otherwise is able to produce “reasonable samples” just over 50 percent of the time, researchers said.

A couple of journalists from The Guardian were given the chance to take the technology for a spin and were suitably concerned by its power.

“AI can write just like me. Brace for the robot apocalypse,” reads the headline by journalist Hannah Jane Parkinson.

The OpenAI computer was fed an article of hers and “wrote an extension of it that was a perfect act of journalistic ventriloquism,” she said.

. . .

The organization usually releases the full extent of its research but has withheld the totality of its latest project out of fear it could be abused or misused — a high likelihood given the increased weaponization of “fake news” thanks to social media.

There’s more at the link.  For more information, see also:


From the last link above, consider this:

A very careless plagiarist takes someone else’s work and copies it verbatim: “The mitochondria is the powerhouse of the cell”. A more careful plagiarist takes the work and changes a few words around: “The mitochondria is the energy dynamo of the cell”. A plagiarist who is more careful still changes the entire sentence structure: “In cells, mitochondria are the energy dynamos”. The most careful plagiarists change everything except the underlying concept, which they grasp at so deep a level that they can put it in whatever words they want – at which point it is no longer called plagiarism.

We’ve all heard of cases where someone’s book has been “appropriated” by another writer, who changed names, dates and places, but apart from that copied the book almost word-for-word, then published it under his or her own name.  An AI tool like GPT-2 might be able to automate that process; as such plagiarism proliferated, tracking it would become a nightmare.  Given enough machine intelligence, the “fakebooks” thus produced might even be differentiated enough from their sources that it would become difficult to prosecute the plagiarizer.  That’s not a happy thought.
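
To make concrete why automated rephrasing would be such a headache for tracking, here’s a minimal sketch of the kind of naive overlap check that word-level rewording defeats.  This is purely illustrative (the function names and the three-word shingle size are my own choices, not any real detector’s method):

```python
# Toy sketch of overlap-based plagiarism checking: compare two texts by the
# fraction of word 3-grams ("shingles") they share. Real detection systems
# are far more sophisticated; this only shows why a paraphrasing AI slips
# past a naive check.

def shingles(text: str, n: int = 3) -> set:
    """Return the set of n-word shingles in a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity of the two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "The mitochondria is the powerhouse of the cell"
verbatim = "The mitochondria is the powerhouse of the cell"
reworded = "In cells, mitochondria are the energy dynamos"

print(overlap(original, verbatim))  # 1.0 -- the careless plagiarist is caught
print(overlap(original, reworded))  # 0.0 -- the careful one is invisible
```

A rewrite that changes sentence structure, as in the quoted example above, shares no three-word sequence with its source, so this score drops to zero even though the underlying content is identical.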

Be that as it may, let’s go back to the “Man From Mars” approach with which I opened this article.  What might a “Man From Mars” do in his approach to the business of writing?  Is it possible he might be able to use such AI technology to revolutionize the business, both in the broad sense (publishing in general) and in the production of works to publish – the task of the writer?  I think it is.  What’s more, I think that if we aren’t thinking about that possibility already, it’s going to sneak up on us and mug us – to the detriment of our writing careers.

There are several “families” of popular fiction where a well-known author’s name appears on the cover, but the books are ghostwritten by a series of anonymous contributors who are never acknowledged.  What if AI software such as GPT-2 could be used to write such books, instead of human authors?  I’d say such a development can’t be too far away.  It would save paying those authors, and probably increase the output of the “novel mills” that churn out such dreck.

What if a writer could develop the plot for a novel, then feed it into an AI program to create the text?  At present, software like GPT-2 can’t handle that, but it’s already come an amazingly long way from the beginnings of the field.  How much longer will it take to go the rest of the way?  Your guess is as good as mine, but my guess is, within a decade – perhaps less.  How will that affect the book market?  I imagine plot and character development may take on a much more significant role for an author than the actual writing.  Is this possible?  Is it feasible?  Would a Man From Mars consider such an approach?  If not, why not?  What are the alternatives?
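
For a sense of how far the field has already come, the oldest form of “predictive text” can be sketched in a few lines: a toy bigram Markov chain.  This is nothing like GPT-2’s architecture or scale; it’s purely an illustration of next-word prediction, trained on a made-up one-sentence corpus:

```python
# Toy sketch of "predictive text": a bigram Markov chain that learns which
# word tends to follow which, then generates new text in the same vein.
# GPT-2 works at an entirely different scale and with a different method;
# this only illustrates the core idea of next-word prediction.

import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    successors = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        successors[cur].append(nxt)
    return successors

def generate(successors: dict, seed: str, length: int = 10) -> str:
    """Walk the chain from a seed word, picking a random successor each step."""
    out = [seed]
    for _ in range(length - 1):
        options = successors.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

model = train("the man from mars rents the car and the man drives")
print(generate(model, "the"))  # e.g. "the man from mars rents the car and ..."
```

Every pair of adjacent words in the output was seen somewhere in the training text, so the result reads vaguely like its source.  The distance from this party trick to GPT-2, trained on eight million web pages, is exactly what makes the question of the next decade so interesting.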

What might a publishing house do with such AI tools?  There are many well-known series of books that are out of copyright, or about to exit its protection (for example, Arthur Conan Doyle’s “Sherlock Holmes” books, or Shakespeare’s plays).  Could AI write new books or plays along the same lines, using the same language, true to the historical canon, but setting the works in modern times, or creating entirely new derivative works that can be copyrighted as new creations?  What would that do to the publishing market for the earlier works on which they’re based?

What about assisting authors as they write?  Could AI software act as an “on-the-fly” editor, analyzing one’s output as it’s produced (or perhaps at the end of each day), and suggesting improvements?  Instead of merely grammar- or spell-checking, it could examine style, vocabulary, or expectations of the genre (“You haven’t included a ‘happily-ever-after’ scene to conclude your romance novel.  This is de rigueur for the genre.  Would you like to add it now?”).  It could become a “virtual assistant” to any writer, to such an extent that it might become the equivalent of a co-author.  Are we ready to consider this?  If not, why not?

What about the impact of AI on the publishing side?  More and more, I expect AI to take the place of slush pile readers and low-level editors.  Why have humans doing what software can do just as well, much more cheaply, and very much faster?  What’s more, such tools will be available (for a price) to independent authors as well.  What will happen to alpha and beta readers, editors, etc. when their services can be rendered by a computer instead of a human?  The best will still be in demand, but many of the lesser lights in the field will go to the wall.  How (if at all) will we, as writers, change the way we work, to accommodate such changes?

What about marketing?  Can AI software produce genuinely useful reviews of a book?  If so, look for them to proliferate on outlets such as Amazon.  Can AI analysis tell whether a book’s been largely written by a computer program, or by a human being?  If it can, will potential readers use it to find out which books are electronic, and avoid them in favor of human-written works?  Might they start looking for specific features that an AI can highlight, so that they buy only books containing them?  At the moment, nobody knows… but if the technology is out there, all these things can be made to happen.  It’s merely a matter of programming – and AI can write the programs for us, too.

Finally, could there be completely new, original, never-before-tried aspects of the writing profession?  If there are, could AI software discover and exploit them?  Could such programs revolutionize the production of novels and associated works, by approaching the task in a novel way (you should pardon the expression)?  What innovation might be possible to a program that can (and will) consider every previous innovation in history in our profession, and weigh them against each other?  Who knows?

These are just a few of the possibilities opened up by AI in our profession.  We’ll be fools not to start thinking about them, and others that might occur to us.  Like it or not, technology is going to become more and more a part of our craft.  We risk being left behind if we aren’t following it, and actively considering its implications for us.


  1. > What about the impact of AI on the publishing side?


    The publishers will tell it to look for certain things, so it will return manuscripts with those, and toss the rest. And the lockstep drive to sameness will be complete; each season’s releases will be effectively the same as the ones before.

    Jeff Duntemann said “A good tool changes the way you work; a great tool changes the way you think.” With a tool like an AI slush pile, a publisher wouldn’t have to think at all. Just take the top X number of submissions, run them by an editor intern, and send them to the printer…

    1. On the other hand, you don’t need a computer to do that. HR departments are infamous for looking for buzzwords in resumes.

  2. Once upon a time in the 1990s I did a research project on the medical literature on gun control. It ended up as a web page (Geocities, long defunct) where I listed ALL the papers on the subject, from the first one in ~1965 to the latest, about 1999 or so.

    (That’s why I’m so down on the medical journals. I read a couple hundred papers that shouldn’t have passed the giggle test, much less peer review.)

    But as I slogged through that mass of seething bullshit, tracking down nets of authors cross-quoting and citing each other’s work, recognizing the patterns of similarity in the methods and study design, etc., I wished there was a machine that could do the stupid parts for me.

    For example, couldn’t there be a program to collect all the papers with the same flawed “correlation-equals-causation” study design, and highlight all the conclusions that weren’t justified by the results?

    Or even, couldn’t there be a machine to read all these things and make a list of papers funded by the same foundations? It is tedious beyond words to be photocopying these bloody things out of a dusty journal, poring through them, tracing down citation chains to finally dig out the source of some idiot notion like “bullet as contagion.”

    Essentially, it is harder to fact-check an article than it is to write it in the first place. That’s how these moral panics get going. Some guy makes up a plausible-sounding lie (video games cause violence!) and churns out a “study” to support it. By the time the community grinds through all the bafflegab to discover it’s bullshit, there are twelve more papers out there jumping on the bandwagon.

    An AI helper could be very useful if it automated all the repetitive, stupid part of reviewing science.

    1. Computers can be trained to look for word patterns that point toward correlation instead of causation, but you’d probably have to eyeball the results to verify. And once it became known, they would change the words.

  3. About 15 years ago I looked into “writing factual articles for online dissemination” as a potential income stream, and learned that there are two reasons why this isn’t worth the bother: 1) the going rate was based on wages in India (thus was only $5 to $25 per article, and has not improved; I was lately offered the princely sum of $100/month to produce daily articles), and 2) about 3/4ths of the sites that use such articles as clickbait were already machine-written, thus essentially automated.

    Pomo generators already outperform the bafflegarb papers produced by professors of Useless Studies. Note this one that has been running for 19 years now:
    Reload for endless fun; now with citations! Today it spat out for me the plausible insanity “neocapitalist socialism”.

    1. Ugh. I did advertising copy for a while, with ever-increasing quotas. Eventually it got to the point that I was expected to work, no breaks, from 8 am to 10 pm. The amount I got paid depended on how many were accepted (and I’d have to rewrite the rest). The burnout was bad, but I did better than most because I am a very fast touch typist and I had a much larger vocabulary than the average person.

      Apparently, because I work from home, that’s ‘fine.’

      1. at the tech site i wrote for, i had one news editor tell me i couldn’t copy a bullet list of software features from a press release because that would be plagiarism… even tho that’s exactly what everyone else was doing

        you can only rephrase a software feature list so many ways

          1. meanwhile, other more major computer graphics tech sites just posted the original press release, or the release with minimal edits.

            1. I got scolded at one point for repeatedly using the word “RAM” when having to hype up graphics cards.

              Like, what the fscks else was I supposed to describe, when the card I was supposed to make super desirable for (online shopping sites) was a complete and utter piece of tosh that was likely to be superseded by the onboard graphics chip?

              And yes, the American ad company we worked for basically hadn’t even a bit of a clue.

              1. Well, i was at least working for a fairly technical site so i didn’t have problems like that. I did have to explain basic workflow things to them a lot, and what i was trying to test that kept not working (e.g. there is no good way to record screen-space user interactions)

  4. AI generated Science Fiction Example Here:

    The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

    Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

    Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

    Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.

    Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.

    1. I’d say that’s not too too terribly bad, but I’ve been grading essays and reading textbooks most of this week, so I’m not a good judge at the moment. 😛

  5. It’s not AI, it’s advanced, applied statistics.

    I think one thing these AI examples (and thephantom182’s) show is how much obfuscated BS has been and is being written.

  6. Never happen. At least not based on digital computers – maybe on some other type of artificial substrate that doesn’t exist yet. AI hype and failed predictions have been around since at least the 70’s, something of a joke among smarter computer geeks. AI has been and will continue to be really just clever games and/or toys. An algorithm cannot write a book that isn’t obviously the product of an algorithm. We’re no closer to HAL now than we were in 2001.

    1. Well, I shouldn’t have said “smarter” computer geeks, since there have been many smart computer geeks who have promoted and worked on AI, especially in academia. It’s just that they’ve been on the wrong side of reality for a long time.

      1. “Rich and dumb” covers a lot of ground in Silicon Valley.

        It was also revealed by the CRISPR Twins in China that there is a great deal of interest in Silicon Valley over gene-edited offspring. In fact the poor little twins might not be an HIV experiment, but an increased memory/intelligence experiment instead.

        At this time I will opine that having kids who are smarter than Mom and Dad is an “interesting” journey, one that the Government of China or Google are not equipped to take. With the goal to make kids who are MUCH smarter, the journey will be that much more “interesting.”

        Recalling that “May you live in interesting times” is a Chinese curse.

    2. Yes, absolutely. The thing that gets buried under all the hype is that algos are MATH, they are not some magic thing that makes a computer into a person. They are not “intelligent,” they are clockwork. 2+2=4 and nothing else.

      Now, because the clock spins at gigahertz speed, the machine can do the algo so fast that it -looks- intelligent and predictive in some circumstances. But it isn’t. It is still clockwork. The ultimate Babbage Engine. Too bad poor old Charles couldn’t have lived to see ENIAC. Or my phone. ~:D

      1. Physics is math. Chemistry is physics one abstraction level up. Neurons using chemistry and some electricity almost always do the same thing, except when uncertainties in the physics make a chaos cascade go in a different direction.

        A “clockwork” can simulate all of that, even building in little bits of randomness. Or specialized hardware can be built which allows errors to exist, to better simulate biological systems.

        As more and more research into AI happens, it seems to reveal increasingly that humans ourselves are not that special. Most of what we do is massive pattern matching. Occasionally we rise above that with actual original thought. But not often.

        At some point we have to prove that people are not computers: messy, error-prone biological ones.

        1. Headline I saw the other day, recently discovered NEW way that neurons interact: they have “wireless”.

          Clockwork or Turing Machines can be made to imitate what we do. Maybe. If we work really hard, for a really long time.

          But it will never be able to -do- what we do, because Turing machines run algorithms, and there are some things that algos simply can’t do.

          We also run algorithms, pattern matching, search programs etc. There’s plenty of brain circuitry that’s been mapped out over the years which shows the math being run in it. But that’s not ALL we do, and those original thoughts that don’t come often are the proof. Machines -can’t- do that.

          Penrose covered all this rather nicely in The Emperor’s New Mind. Humans aren’t meat-bots. The notion has been disproven. Humans, and probably a lot of the higher mammals, are something else.

            1. Humans deal in the Real World, where if you are not paying attention, you can die. We have limits to our working memory, visual acuity, general sensory intake capacity. We can also only pay attention to one thing. When we are paying attention, the rest of the body operates autonomously. This is why people sit down to concentrate, it frees up attention they are using to keep from falling over.

              Yes, obviously humans can be fooled. Sometimes for a while, if they’re not paying attention.

              All of that is the successful machinery that keeps a human going in hostile Nature. The last 70 years or so its become very fashionable to pretend that humans can be completely explained by that machinery.

              This view overlooks the inconvenient fact that humans, despite our lack of teeth, claws, massive strength etc. are the top predator on this planet. We hunt everything. Nothing hunts us.

    3. Beg to differ… we really are. In 2001, practical quantum processors were only in labs; now you can actually buy them.

      1. (and people who keep saying ‘AI will never be able to X” are ignoring quantum processors at their own peril, tbh)

  7. I guarantee that AI will be able to generate inclusive, diverse, important, Hugo-award-winning works within 5 years.

  8. > Given enough machine intelligence, the “fakebooks” thus produced might even be differentiated enough from their source that it would become difficult to prosecute the plagiarizer.

    Flip side, there are common ways of phrasing things, common plots, etc. When you’re using generic building blocks, at what point does something become “plagiarized”? And from whom?

    Somewhere in the algorithms, someone is going to have to make a judgement call as to what is significant and what is not, and how to weight each ‘hit’ on some scale, and come up with a yes/no per percentage of likelihood.

    And colleges have been using “plagiarism detection” software for *years* to assess theses and papers; there’s basically no recourse to “the computer” and its decision.

    Sooner or later, plagiarism detection software is going to show up in a court case. And of course the vendor is going to assert that the code is all trade secrets… doesn’t matter if the victim was guilty or not; they’ll always have the accusation hanging out on the web, and they’ll be paying their lawyers for a long time.

    1. Knowing a current TA who uses the plagiarism detection software, there is actually recourse, because the software doesn’t say “Is this plagiarism? Yes/No.” It says “Paper A has an 80% overlap with uploaded paper B. See the following highlighted areas.” At which point the TA eyes the paper and goes, “They’re both quoting the same source,” or “That’s so generic it’s likely not plagiarism,” or “Oh, look at that. You even copied the autocorrect mistakes from the original paper. 0% for plagiarism, and referring this one to the ethics committee!”

  9. I guess I’ve read too much SF.

    When I read “Artificial Intelligence”, I think of computers as intelligent (or better) as people. 😀

    Oh yes, I know that “Artificial Intelligence” means something different in the Real World. 😉
