AI? What Is It Good For? by Charlie Martin
Using AI in writing has been a subject of great controversy. Now, over at PJ Media, I get paid for page hits, so I love controversy. Not that I didn’t when I caused controversy for free, but the money, such as it is, is nice too.
One of the running controversies is people using AI for writing. A major flashpoint is turning out to be lawyers who are entirely too smart for their own good and use something like ChatGPT to write briefs for them.
The temptation is easy to see — legal briefs are usually among the most unpleasant things to write or read, full of jargon (what’s an “estoppel”?) and words that have special meanings within law. At Confinement last weekend, Glenn Reynolds mentioned “qui tam.”
Who what? River Tam’s brother maybe?
I didn’t want to ask Glenn in the middle of his actually interesting Guest of Honor talk, the gist of which was that however corrupt you think politics is, it’s probably more. So I asked Grok, which told me:
“Qui tam” is a Latin phrase short for “qui tam pro domino rege quam pro se ipso in hac parte sequitur,” which translates to “he who sues in this matter for the king as well as for himself.” In legal terms, it refers to a provision in certain laws, like the U.S. False Claims Act, where a private individual (often called a whistleblower or relator) can file a lawsuit on behalf of the government against a person or entity defrauding it.
You can certainly see why they abbreviate it.
You can see how AI can be useful, too. I could have looked that up, but it would have taken 15 minutes and led me down a rabbit hole, which is always a risk for any Google search. (Yes, yes, I know, Duck Duck Go. The advantage of Google is that it usually leads me to what I’m looking for, although it seems like there’s always a page full of sponsored links first.)
The trick with Grok — my favorite — or really any “large language model” (LLM) AI tool is that you need to understand what it does and doesn’t do.
What it doesn’t do first: it doesn’t know anything. I explain it at some length in my 2023 article Unraveling the Mystery: How AI Models Process Language Without Truly Thinking. In case you don’t subscribe as a VIP member (and why not, harumph?), the gist of it is that an LLM is fed a metric craptonne of existing information and then builds a model of that information. This model is, under the covers, a great big multiplication that looks at the text so far and decides what the most probable next word or phrase is going to be.
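Just to make “a great big multiplication” concrete, here’s a toy sketch of that last step in Python. It is not a real LLM; the numbers are random stand-ins for the billions of learned weights. But the shape of the computation is the same every time the model picks a next word:

import numpy as np

# Toy illustration only: random stand-ins, not a trained model.
vocab = ["the", "cat", "sat", "on", "mat"]

rng = np.random.default_rng(0)
context_vector = rng.normal(size=8)            # stand-in for "the text so far"
weights = rng.normal(size=(8, len(vocab)))     # stand-in for billions of learned weights

logits = context_vector @ weights              # the great big multiplication
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: raw scores become probabilities

for word, p in sorted(zip(vocab, probs), key=lambda pair: -pair[1]):
    print(f"{word:>4}: {p:.2f}")

# A real model "writes" by picking a likely next word, appending it to the text,
# and running the whole multiplication again. Nowhere in that loop is there a
# fact database, or any notion of whether the output is true.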
Now, if it freaks you out a little that just doing big multiplications can seem so much like there’s an elf in there, well, it freaks me out too, and that’s even though I understand what’s happening pretty well.
Sarah has a good explanation of some of what AI does well in her piece last February. Midjourney is different in some details, but under the covers it’s doing something very much the same — it has a model of what a, I don’t know, cover with an orc holding a sausage should look like, and it makes things up until it has something that it computes looks like an orc holding a sausage.
Now, I also wrote about using AI in my article Using Grok as a Second Brain. You may have heard about various note-taking methods that are generally called “having a second brain” — zettelkasten and Tiago Forte’s PARA method are a couple of them — but the idea behind them is that you take notes on just about everything and organize them in a way that lets you find things in the notes later. I use a tool called Obsidian and my own sort of ad hoc method, but the place where it falls down is when I have a good idea of what I need but haven’t kept any notes on it; I’m just going on what my flypaper brain has kept.
And that is, at long last, the point of this article.
Part of my journey of re-learning to write is to start writing fiction again after years away from it. Now, I’ve been writing about some of the things I do, like Morning Pages, inspired by Julia Cameron’s The Artist’s Way, and I’m sure I’ll write more in the future. Right now, I’m working on a series of short stories I’m calling collectively “The Grand Tour,” about a travel writer and vlogger who is invited on a space cruise ship for a cruise around the Solar System that will call at something like 16 different locations. I started a series like that years ago for the gone-and-mostly-forgotten Right Network, which went belly up still owing me money. The idea was to write something for the YA crowd that would introduce some of the wonders of the solar system in a somewhat palatable fictional format.
Right now I foresee it being about 16 episodes, and following some hints from Dean Wesley Smith’s books, I decided to write what I think is the third or fourth episode first, because it was sticking in my head today. The working title is Gaia and the Cloud Cities of Venus, and the story is set on, or strictly speaking above, Venus, in cities floating in the clouds.
The thing being, it’s supposed to be hard SF. No antigravity, all reasonable physics.
Which, it turns out, is possible. Now, I insist that I thought of it first, but in the tradition of Hungarian mathematicians, Geoff Landis published first. Oh well.
The trick is that Venus’s atmosphere is very, very dense, so dense that a sufficiently large geodesic sphere filled with ordinary breathable air can float in it. An idea which, according to Drake’s Law[1], I’m stealing from Buckminster Fuller.
The next question, though, is “can I get away with it?” And here’s where my research assistant Grok comes in.
If anyone wants to see the whole dialogue, it’s here, but here’s the payoff:
Final Answer: The cloud city achieves near-neutral buoyancy at approximately 51 km altitude in Venus’s atmosphere, assuming a 1 km diameter sphere with 84.3 kPa internal pressure and a lightweight structure. This altitude offers ~1–1.2 kg/m³ density, a pressure of ~0.7–1 bar, and a temperature of ~320 K—hot but manageable with tech. Adjustments to the structure’s mass could tweak this between 50–52 km.
TL;DR: Hell yes it’ll work. And while I could have figured this out myself — eventually — I would have probably spent days doing it. Assuming I could keep focus that long.
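If you want to check Grok’s arithmetic the old-fashioned way, the core of it is just the ideal gas law plus the volume of a sphere. Here’s a rough Python sketch. The sphere size, internal pressure, and altitude figures are the ones from Grok’s answer above; the 0.7 bar ambient pressure and the 300 K cabin temperature are my own assumptions, so treat this as a plausibility check, not mission planning:

import math

R = 8.314        # J/(mol*K), universal gas constant
M_CO2 = 0.044    # kg/mol; Venus's atmosphere is roughly 96% CO2
M_AIR = 0.029    # kg/mol; ordinary breathable nitrogen/oxygen mix

def density(pressure_pa, molar_mass, temp_k):
    """Ideal-gas density in kg/m^3."""
    return pressure_pa * molar_mass / (R * temp_k)

rho_outside = density(70_000, M_CO2, 320)   # ambient CO2 near 51 km (assumed 0.7 bar, 320 K)
rho_inside = density(84_300, M_AIR, 300)    # habitable air inside the sphere (assumed 300 K)

radius = 500.0                               # 1 km diameter geodesic sphere
volume = (4.0 / 3.0) * math.pi * radius**3

lift = (rho_outside - rho_inside) * volume   # mass budget left for structure, people, cargo
print(f"outside density: {rho_outside:.2f} kg/m^3")
print(f"inside density:  {rho_inside:.2f} kg/m^3")
print(f"spare lift: about {lift / 1.0e6:.0f} thousand tonnes")

With those assumed numbers, the breathable air inside is itself the lifting gas (the Fuller/Landis trick), and the spare lift comes out around 90,000 tonnes, which is a lot of budget for a lightweight geodesic shell and its passengers.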
That’s what I think Grok, and AI in general, is good for. I wouldn’t use it to actually write these stories — that’s the fun part — but now I can push the hard work off on a computer.
Heinlein wrote this in Expanded Universe:
I was telling this young scientist how we obtained yards of butcher paper, then each of us worked three days, independently, solved the problem and checked each other—then the answer disappeared into one line of one paragraph (Space Cadet) but the effort had been worthwhile as it controlled what I could do dramatically in that sequence. (Robert A. Heinlein, Robert Heinlein’s Expanded Universe: Volume Two, p. 229, Kindle edition.)
Working this out for myself would have taken days and miles of butcher paper. But now I can just push it through a computer.
[1]“Always steal from the best.”





31 responses to “AI? What Is It Good For? by Charlie Martin”
I work with a volunteer group that gets most of its operating expenses from grants (they focus on emergency preparedness and long-term recovery for communities after natural disasters). There is one woman there who does nothing but apply for grants. Last week she showed me how she uses ChatGPT to help her do that. It isn’t fiction writing, but that’s how I learned what a useful tool it is to have in your arsenal.
I’ve written a number of grants. I dispute the notion it’s not fiction.
The great thing is the people handling the grants are probably going to feed them to an LLM for a summary…
And one of the known problems with “AI” LLMs is that you can drive them to simulated schizophrenia by feeding them training data generated by LLMs.
Eh. You can do that to humans by feeding them bad data too. Look at Europe. SERIOUSLY.
I was going to say that first sentence….
I can’t speak to Grok, which may well be more sophisticated than the ones I have access to, but on a Discord I belong to there’s been a lot of humor and angst about the inability of some of the LLMs to comprehend math and logic problems – “how many ‘R’s in strawberry?” – that kind of thing.
I did do something similar recently where I needed information on how a large dog would attack an adult person(1), and went to Claude.ai, which confirmed my suspicions (leaping attack targeting face and throat, probable defensive wounds on the arms). I then asked it for the kinds of names a person might give an attack dog, and then asked it to come up with Victorian-era equivalents, and was pleasantly surprised with the results.
(1) Most well-known cases in recent history involve dog-on-child attacks, which wasn’t ideal for my purposes.
I suppose you could read the whole dialogue, to which I linked.
That would require access to my home computer rather than the one I am using right now, on which Twitter/X is blocked. Will certainly take a look at it when I get home.
Aaaaand nope, trying it on my smartphone just redirects me to the signup/login page on X.
It sends me to an ad for Grok.
Grok actually listed out the letters and indicated whether they were or were not Rs before concluding “three.”
It was also somewhat vacuous about The Maze, the Manor, and the Unicorn, but at least it didn’t claim that the unicorn was in a tapestry rather than in the woods on the other side of the maze.
Cool!
I’ve been finding AI snippets useful in cover art. Doing the whole cover makes it look goofy, to my eye, but you can add little details of AI-generated art, particularly if you shrink them.
Take the cover of Secret Empire as an example: the spaceship is AI, generated by Adobe Express and much reduced in size. The rest of it is a picture of the moon I took on my phone the other day, blown up, with added text from Adobe Express. The little orange flare is an Express element.
All for free, not to put too fine a point on it.
Link to my blog, https://www.blogger.com/blog/post/edit/15888307/3146875620773936467 where you can see the cover. I’d give the Amazon link but those always blow up to half a page.
https://phantomsoapbox.blogspot.com/2025/03/another-new-book-secret-empire.html
Link that actually works.
The cover is okay. Note both Cedar and I agree Midjourney is HEAD AND SHOULDERS above all other AI for art. And …. sigh. You don’t want me to critique the cover. You really don’t. It’s okay. I’ve seen worse. (Some of it from mainstream.)
I am encouraged by “okay” from you in this regard. ~:D I thought I did alright for a phone picture.
I was informed by my wife, who’s doing pretty well at YouTube, that the thumbnails don’t change her traffic much. She has some good ones and some bad ones; they come out about the same.
I was also informed that the unpublished book gathers no readers, and some is better than none. There was also frowning. So I took a pic of the moon and got on with it.
Interesting side note: newer Samsung phones will replace an image of the moon with one they have stored in memory if you zoom in enough. Sketchy.
Um, I apparently need the Amazon link – Blogger just sends me off to create a blog there. For Pete’s sake, I don’t do anything with my WP blog!
Here is the -proper- link to my blog post. Because I did it wrong last time. 😦
https://phantomsoapbox.blogspot.com/2025/03/another-new-book-secret-empire.html
Click through and it’ll be there. I hope. ~:D
It’s there, thank you! To be purchased when I’m a bit more awake tomorrow to go through ATH for it without messing up (I do that all too often).
Nothing wrong with the cover to my eyes. But, it being a quirky humor book, if you do re-cover it at some point, consider George buzzing the ISS…
Or maybe Athena and Brunhilde playing tennis with a Red Chinese satellite…
I’ve been impressed with Grok. It has the usual faults of telling me I’m brilliant, and my observations are fascinating, but it seems to take correction better than CoPilot (which doubles down) and ChatGPT (which agrees with you). Grok will at least do another search.
Grok will ask leading questions to keep the conversations going. I tend to ignore those, both because my train of thought is going another direction, and because, even after an insightful comment, the question frequently almost comically misses the point.
I’ve been playing the NYT Connections puzzles with it, and can coax it to the correct answer, but it takes a lot of hinting (including saying “this is a hint”). It kept trying to put orange in a group of red items, even after I pointed out that a rose was typically red. Then, once I have coaxed it to the correct answer, it’ll ask me to guess what tomorrow’s categories will be. I’m not going to do that. What would be the point?
Then in another conversation, about Georgette Heyer’s Cotillion, every time I talk about Kitty’s cousin Camille, it points out that he isn’t her actual cousin; she just calls him that out of affection. And it also talks about how Olivia’s mother and uncle are mistreating her.
And we recently did a timeline, because I don’t think a month is enough time for everything in the book to happen (even if you assume she can buy a gown right off the rack at the best dressmaker’s shop in London). Six months, sure; a whole season, maybe; but just over a month, no. I am going to write a long reply to it showing it everything it got wrong. Or just drop the whole subject. (It agrees with me that the timeline is extremely tight, BTW.)
The fact that it can discuss Cotillion coherently at all is impressive (I don’t remember Olivia having an uncle; did it hallucinate that part?). As for the timeline, I would have thought the subplot with Dolph needed more than a month to unfold, all by its onesies. That book cries out for a movie even more than most of her others, but unfortunately now is a very bad time.
For all I know she could have a score of uncles, but there don’t seem to be any in the book. (I’m guessing it’s combining Kitty’s guardian with the rich old man that Olivia’s mother wants her to marry and calling him her uncle?)
And that would be a *fun* movie, done right.
And it hallucinated a bunch of stuff in the timeline.
I’ll bet!
Of course the measles excuse probably ran out before the month was over, and if they went on six months, they’d have to deal with Meg’s baby, and there was no way Heyer wanted to deal with that in this story.
Assuming it works, here is a link to our Cotillion discussion:
https://grok.com/share/bGVnYWN5_7817ec60-b152-49c6-8dcf-7f6a858cb0fd
And our Connections puzzles:
https://grok.com/share/bGVnYWN5_f69206eb-1c06-40e4-ba6f-bbbcdddc5aef
I could see both of those on my smartphone! Grok honestly did pretty well at both, although it clearly didn’t have a firm grip on the details of the Cotillion supporting cast.
The big thing with Grok – and other AI – is that if you’re going to use it as a building block, you need to double-check it.
Which is way, way, way easier when you’ve got a solution sitting there. 😀
My examples so far: someone told it to explain a proposed law, and it got a detail wrong by changing a legal term into a persuasive one with a very different legal meaning, flipping “at any point in the last three years” into “for the entirety of the last three years.” And someone asked it how sales policies work, and it gave a summary implying that a setup many of the really, really big retailers use (the ability to return unsold stock) was common, and that it was for money back (as opposed to a small credit toward future orders). The biggest issue is that those policies have to be specifically bargained for, so while they’re “not uncommon,” they are not normal, nor are they for a full refund.
I haven’t asked it to do any “explain” or “solving,” but I have heard folks getting good results using it for rubber-ducking, too. The biggest issue is the AI “forgetting” the prior conversation.
Wait, I just remembered a third one, where someone asked it about GDP, and it mixed and matched different years and different metrics. (It was to prove that some very poor US state had lower GDP than a list of European countries. The answer of “they were higher than them, one year, depending on which measure you use” was less than persuasive, as was “if you compare a surge year to a depression year…”)
I’m not that impressed with Grok’s image generation. I know Bing Image Creator isn’t the best out there, but I can usually get good results from it. If I ask Grok for a watercolor image of people walking on the beach, I get a photo-quality image of a very crowded beach. I still might get a crowded beach in Bing, but it will at least look like watercolor.