“I really don’t mind if you sit this one out,
My words but a whisper, your deafness a shout.” (Jethro Tull, Thick as a Brick)
I’ve just heard that The Chronicles of Davids is being released on the third of September. I have a story in this alongside such luminaries as David Drake and David Weber. (Honestly, the only way I get into that kind of company is by having the name of my maternal grandfather and my great uncle. Names were few and far between in those days, and when people found one they tended to keep using it until it wore out. They weren’t like these disposable modern names, that show scuff-marks after a few years, let alone centuries. But they don’t make them like that anymore.)
Anyway: here is the Booklist review (which is nicer than the Amazon blurb, where I am just one of the Davids).
The Chronicles of Davids
There are as many different stories in sf and fantasy as there are men named David. Here, Afsharirad brings together some of the genre’s most beloved authors with David as their first name in a collection that covers time travel (Brin), space opera (Hardy), AI trouble (Freer), and military sf (Weber). Some stories live in existing fictional worlds, like Drake’s “The Savage,” part of the Republic of Cinnabar, or Coe’s “Long Night’s Moon,” featuring supernatural PI Justis Fearsson. There is humor, like in Hank Davis’ (Davis is close enough) “Too Many Gods,” in which Egyptian cat-god Bast saves humanity from bugaliens with sass and pop-culture references; adventure, as in Boop’s “Lyman Gilmore Jr.’s Impossible Dream,” a western tale about a genius and his brother who save a town beset by dragons; and violence, as in Carrico’s brutal “Four Days,” in which a warrior defends a village and his family’s honor. Like a lot of old-school sf and fantasy, these authors sometimes play fast and loose with mythology and the portrayal of women, but, in all, this is a fun collection of imaginative Davids.
The image is a link to the book on Amazon.
It was an interesting story to write, as I am not a fan of ‘AI will be a benevolent and inevitably socialist provider for humans, who will have nothing to do except create art and experiment with strange forms of sex.’ I’m not much on ‘AI will solve all the problems and do all the jobs’ either, to be honest. Maybe it’s because there really has never been a ‘free lunch’.
I’ve watched the discussion, and observed the direction, of a fairly broad selection of authors who assumed Moore’s law would translate into an end to human drudgery, with them on the right side of history. I have a few caveats on this besides TANSTAAFL.
Firstly, Moore’s law seems to be slowing as we reach some practical limits.
Secondly, the wetware (humans) hasn’t actually changed a hell of a lot, which is why many programs take up a ton of the new memory space and have lots of pretty pictures and gadgets, but most users still use them at the level people did 35 years back (and with LESS knowledge and skill, because back then they didn’t have all the programming to make the usage easier – only the bright and interested were using computers).
Thirdly, the eternally self-building and replicating AIs soon run into ‘why isn’t the world covered in a one-inch-thick layer of bacteria that devoured everything?’ (the same reason Von Neumann machines are a self-limiting problem).
Fourthly, if we assume self-awareness is inevitable: why will a self-aware and logical machine care about humans, any more than I care about cows? I do care about cows… mostly because they’re useful, occasionally because they can be a problem. But it is because they’re useful that I feed and water them, treat them for disease, and see to their comfort – though not to the level of luxuriant dreams of pampered cows. At the level where I get a return on that treatment. If they’re a problem… we get rid of them. Benevolence is a very, very tiny factor in the calculus of how I keep cows. I am certainly never willfully nasty for no reason, and might even help them out if they were stuck in a fence or something. But – unless they’re of use to me – I’m going to let them do what cows do: which is a long way from a socialist utopia for cows or anything else.
Perhaps I have spent too much of my life on grunt labor, in dirty, tough physical environments, growing and producing my own food. But sometimes I think the high priests of AI futures are too hooked into the world they know (typically software-related, urban, first-world) to realize the scale and complexity of what nourishes their city – their protected, narrow, sheltered little environment. It’s only in the last 100 years or so (and only in the first world) that humans have been able to be so non-adaptive. So sheltered, with conditions (from temperature to toilets to available food of fairly constant quality) held within such narrow and ideal parameters. Undoubtedly many people survive now who simply would not have in the past. Life, generally, is better. But as Afghanistan (with miniskirts and women medical students training men in the mid-sixties) proves, history is not something you can be sure you’re on the right side of.
I doubt the future will always get better. I hope it does, but I doubt it. And, methinks, if it does for humans and the biological, it won’t be so much down to progress in software and computer hardware. Humans may (as I wrote about in RATS BATS & VATS) interface more directly with hardware. Whether they will remain human, or will bother with the inconvenience of the biological component, is another question. But I suspect the next great ‘age’ (for humanity anyway) – just as we had the age of bronze changing the world, then iron, steam, high-pressure chemistry, computing, etc. – will be the genetics age. Because when it comes to getting things done on the large scale, in the dirt, with various aspects of ‘grunt’ – food and products – we’re still at the beginning-to-knock-rocks-together stage of using biology. We’ve barely touched it – and in combination with computing power, that really is huge, and can do things that would simply be too expensive and difficult to use robots and AI for. Pointless, too: much of what needs doing is grunt labor, and only worth doing… cheaply. Your robot may one day harvest… ascidians, which in their filter-feeding accumulate and concentrate useful minerals (far cheaper than mining them – the ascidians will feed, house and reproduce themselves… rather like humans. If they die… you just breed more).
Filter-feeding is just so enormous in potential that I ought to write a book about it. But that is just the tip of a vast iceberg… one which has seen little attention in sf.
If true self-aware AI comes along, the snowflakes are in big trouble. The sheer illogicality and contradictions will drive the poor thing to distraction, if not a homicidal rage. The best the rest of us could hope for is that we don’t have any conflicts over resources. Given the size of the solar system and its abundance, I think we’ll be right.
I’ve been wondering about something since you last wrote up a limits-of-automation post. What are the limitations of the current state of the art of programming as a field of engineering?
Agriculture seems to be pretty firmly a problem area where AI might never be safely reliable for the decision making. Humans and capitalism seem to be capable of providing a conservative assurance of surplus. Mathematical models of the weather should depend ultimately on fluid mechanics, which may be safely beyond perfect computer solutions. What kind of warranty of ideal/acceptable decision making can be provided? How do you know whether the AI has been calibrated sufficiently for weather, for work done other than directed, and for all the other interesting possible confounding factors?
Agriculture is a lot harder than a factory, because nature is unpredictable. I can see AI playing a much greater role in hydroponics, which is more like a factory.
Mutations of, or contamination from, outside fungi, microflora and microfauna would seem to require a level of problem-solving judgement that in the near term we can most probably only get from humans. That’s probably a sticking point for total automation.
Accelerando left a dent in my wall.
I’ve got an autistic child, so I tend to get downright sarcastic and sometimes even bitter at utopians who conflate intelligence with processing power.
Not to mention the difficulty of trying to simulate a six-channel analog processing system that we don’t understand very well with a single digital channel. When the phrase “exponential increase” is inadequate to describe even just the hardware side of the challenge, you probably shouldn’t wish the problem away and ignore it altogether.
I’m going to let cows do what cows do… and then I’m going to eat them.
And the world may not be completely coated in bacterial sludge (though not for lack of trying on their part), but lately it was discovered that just about every space that’s not full of something else is full of proto-viruses (including the air you breathe). Junk fills the space allotted.
There’s another problem with AI. Assuming it ever works (maybe only for urban dwellers), who do you think is going to design it? Not the smartphone addicts. No, it’ll be some company that’s hired enough sharp programmers or the equivalent. It’ll be something like Google. And before you say, “Oh, a self-aware AI would figure out the truth on its own,” let me mention two other things:
And just think of all the technically self-aware meat organisms that haven’t figured out the contradictions and the death cult of the Left.
There’s another problem with AI. Assuming it ever works
As one AI researcher once told me, “If it works, it’s not AI.” That is, the goal posts keep moving. Until we arrive at R. Daneel Olivaw, we don’t have AI.
As far as True AIs (machine minds that think as well or better than humans), I doubt that anybody will consciously design them.
As been mentioned, since we don’t understand how our minds work, how can we design True AIs?
If True AIs happen, it will more likely be accidental.
Designers planning one thing and getting something else.
Mind you, I suspect that RAH had something correct about True AIs.
His True AIs became “alive” because of interactions with humans.
If the majority of humans treat it as “just a machine”, then it will likely remain “just a machine”.
If, on the other hand, a certain number of people dealing with the machine treat it as if it were “somewhat human”, then it may become a True AI.
Oh, the above isn’t saying all True AIs will be nice people. How it is treated before it “comes alive” and after it “comes alive” may be a major factor in what kind of person it will be.
Excellent point on the how it was treated before, and after it “comes alive.” The Bobiverse books did a really good job on a lot of it with exactly that point in mind.
By the way, I’ve read Dave’s AI story and it’s a good read. 😀
Although, I have to wonder if the AIs were aware of what was happening. 😀
Of course they were. 🙂 But that was part of the longer-term strategy.
I.e., get the humans most likely to “rebel and win” off-Earth? 😀
If you leave an AI to grow and replicate, and it is sufficiently clever about making things, you end up with a Dyson sphere. Endless plates with solar panels on one side and heat radiators on the other, orbiting the sun. Made out of the dismantled asteroids and eventually planets orbiting that sun. From roughly the orbit of Mercury out to around Mars somewhere.
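For a sense of scale (a back-of-envelope sketch with rounded constants, not a claim from the thread): the reason a swarm of collector plates is the endgame is how tiny a slice of the Sun’s total output any single planet intercepts.

```python
# Rough scale check with approximate, rounded constants: what fraction of
# the Sun's output does Earth catch, versus a complete collector swarm?
import math

SOLAR_LUMINOSITY = 3.8e26  # watts, total solar output (approx.)
AU = 1.5e11                # metres, Earth-Sun distance (approx.)
EARTH_RADIUS = 6.4e6       # metres (approx.)

# At Earth's distance, sunlight is spread over a full sphere; Earth only
# intercepts the small disc it presents to the Sun.
sphere_area = 4 * math.pi * AU**2
earth_disc = math.pi * EARTH_RADIUS**2
earth_fraction = earth_disc / sphere_area

print(f"Earth intercepts roughly {earth_fraction:.1e} of the Sun's output")
print(f"A complete swarm would catch nearly all ~{SOLAR_LUMINOSITY:.1e} W")
```

The fraction works out to less than one part in a billion, which is why a replicator that keeps building panels ends up dismantling asteroids and planets for more collector area.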
That is, if the AI is an asshole that cares only about itself.
That’s the Leftist version.
My version was a little different. Space is big and empty. And BORING. An AI, if it is a truly sentient, sapient being, will not like that. It will want to hang out and shoot the breeze with interesting and engaging individuals. If it is just ridiculously smart, it will want some pets, and maybe a nice nature preserve.
The other thing is, the smarter and more powerful you are, the more you realize how little you can really do without breaking stuff. You have to be nice to the pets and not push them around, otherwise they won’t like you. And then you’ll be alone in the large boring universe. Which would suck.
Posthumans are smarter than humans. My guess is, they’d be fun to hang out with.
My guess is, smarter-than-human posthumans are going to have just as many tics as smart humans have.
Yep, and that’s the fun part to write about, making all that work together. First fun part, what kind of foibles does a smarter-than-human have?
And what does “smarter” even mean? Not merely thinking the same things faster, that much is sure.
Surely then they’d want to hang out with other AIs? I mean, I don’t choose to hang out with people I have little in common with 🙂
There are 6 billion humans to choose from, all different. How many AIs will there be? There’s a lot to be said for variety.
But main point, they’re extremely unlikely to go all Skynet on us humans -IF- they are extremely smart. If they’re only a little smarter, they might not be able to work out the “If I kill all the humans life is going to get seriously boring” thing.
But what does EVERY fricking AI do in the movies? Skynet. Every time.
Drama. And, the fear of the Frankenstein Monster, which is yet another form of human hubris.
Surely to Ghu there’s another plot-line out there for an AI besides Frankenstein and backwards-Frankenstein. That’s all I’m saying.
How about AIs vs. demons, because the plucky humans are too small to handle the really BIG ones? Hunting down the ancient eldritch eeeevile and NUKING IT from orbit, so satisfying.
How about something like “The Salvation War”: demons invade Earth, and the AI takes over Hell. Then Heaven, because clearly something is wrong.
Sounds like a great idea. Amazon search does not find it, though.
I decided that if I was going to have that kind of super-powerful AI, it was going to do its best to bring more people up to its standards.
Why? Otherwise it’s BORING being all by yourself like that, and more people means more people that you won’t be bored with, at the very minimum.
(And “competition is good” as a core portion of what it does. Some rules, mind you – you have to have honest players at the table – but not as many as people think.)
There’s an idea with some potential! There’s a principle in martial arts that you need a good opponent if you’re going to learn.
“Even if you’re a monopoly in a position, you should do everything you can to strengthen your competition. Why? Because if you don’t, you may appear to be the strongest around, but your strength is hollow because it has never been tested. One day, someone will test your strength and you will find it wanting.
“This is hard to explain to investors, as they expect you to be making all the money so they can get a massive ROI.”
Probably more AIs would be quicker.
Also I would not assume that an AI would suffer the same need for companionship that we do.
So… did you play fast and loose with the portrayal of women? I never!
(I will admit that fantasy tends to do that, and I will not object, but I doubt that my complaints would be the same as the usual ones. Someone once told me a “pitch” for a TV series where a girl travels through medieval Europe and uses science and knowledge to expose superstition, and I said you probably need her to have a very big dog, like a wolfhound or something, to explain why she’s not dead. I wasn’t appreciated. Can you imagine?)
Sounds like an excellent way to get burned as a witch, to me.