I did not want to have to write this post. I’m a little grouchy about being in a place where I think it’s needed.

There’s a myth going around, and while I love mythology and folklore, I recognize that persistent modern myths shape culture through storytelling. Humans relate to the world and one another through stories, and through fear, and this myth plays into both. So I’m going to counter it with some logic, which will doubtless be like pissing into the wind, accomplishing nothing other than making me feel better, if a bit soiled for having engaged in it.

It is a myth, an untruth, that art ‘AI’ is stealing art. That’s not how it works. Relatedly, it is not going to put artists out of work. Not real artists.

Now, I’ll unpack that somewhat (we don’t have time, and I don’t have patience, to fully unravel the whole thing). First things first: what we are sloppily calling AI is certainly not an artificial intelligence. It’s a human-created tool to be used by humans as an aid to creativity. It’s correct that copyright cannot be assigned to the AI, because the AI never creates anything. The copyright goes to the human who uses the tool in their creative process. The human wielding the tool is doing the creating, and if you haven’t played around seriously with trying to get what you want out of an art ‘AI’ tool, you might not realize how difficult that can be. There is human creativity here, and we are using a digital tool that can shortcut some processes, saving enormous amounts of time and money for the artist and for the clients who want that art. That is what makes it useful. I’m only going to address AI art tools in this article, because those are what I have researched and been using as an aid in my own process for more than a year now. I suspect the principles apply to text tools as well.

For those of you who are curious about how the AI art tools work on a very deep level, there’s an excellent paper on it here, discussing the way the deep neural networks are fashioned, trained, and utilized. It’s quite readable, and I recommend it if you are using the tools and want to get better with them. The neural networks which make up the tools work in several ways, but the most successful (in my opinion as an artist and designer working with them) are the diffusion type rather than the GAN type. The Generative Adversarial Network (GAN) “learns to model the data distribution during training and can generate novel samples after training is completed.” Note that the word novel here means ‘from new’: the training sets up the GAN to create images that are wholly unique, informed by the patterns it has studied but neither copied nor drawn from those patterns (i.e., the images used in training). The diffusion model, which is what Midjourney, the tool I use most, is built on, works in a different way. It is first trained on images that it diffuses, or breaks down, into random noise. Once that process is mathematically worked out, it can be run in reverse without any starting image at all. The learned equations are all that are needed to create an image from random noise. “Once trained, a diffusion model can be used to generate data by simply passing random noise (and optionally a text prompt) through the learned denoising process.”
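
To make that concrete, here is a toy sketch of the sampling loop in Python. This is not Midjourney’s code; the denoise_step function below is a placeholder I invented to stand in for the trained network. The shape of the process is the point: generation starts from pure random noise and repeatedly denoises it, and no training image is consulted anywhere in the loop.

```python
# Toy sketch of diffusion sampling. The trained network is faked here by
# denoise_step, which just nudges pixel values toward zero so the loop runs;
# a real model predicts and removes a bit of the noise at every step,
# optionally conditioned on a text prompt.
import numpy as np

rng = np.random.default_rng(42)

def denoise_step(image, step, total_steps, prompt=None):
    # Placeholder for the learned denoising model.
    return image * 0.9

# Begin with pure noise -- there is no source image.
image = rng.standard_normal((64, 64, 3))

total_steps = 50
for step in reversed(range(total_steps)):
    image = denoise_step(image, step, total_steps, prompt="a red fox, oil painting")

print(image.shape)  # a brand-new 64x64 image, conjured from noise alone
```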

In short, the tools we are using aren’t taking from anyone’s image to build the image we get after crafting a text prompt. Even running with no prompt, which generates fascinatingly abstract images (and doesn’t work in Midjourney, which requires a human prompt), pulls nothing from another image.

I’m including a text quote from Passive Guy (PG), as I value his input on copyright and intellectual property matters. Following his blog will also help you stay on top of the various legal matters surrounding this issue.

As PG has mentioned previously, he believes that using a relatively small amount of material protected by copyright along with far larger amounts of material not subject to copyright protection for the purpose of training an AI and not for the purpose of making copies of the copyrighted material qualifies as fair use.

Even absent fair use, such use is not a violation of copyright protection because the AI is not making copies of copyrighted materials.

The Passive Voice
Art coaxed from Midjourney with the minimal prompt of “1” yielded all the images in this post.

Next: will it put artists out of work? Will it violate their IP through the use of ‘style of…’ prompts? And what are the ethical considerations of embracing this tool?

This is hardly the first time we’ve had this conversation; societal change affects jobs and industries whenever technology leaps forward. Change is difficult, and a fearful time, and that’s why I’m writing this. Fear is a potent factor in storytelling. There’s no need to be afraid of this ‘AI.’

Would you give up your washing machine and dryer? No? Why not?

I’ve lived without them. Several years of my life were spent without electricity, and I’ve had to cut wood, feed the stove to heat the water, wash the clothing with rudimentary tools, and hang it to dry. When it’s -50F you don’t hang it outside, by the way. It could freeze dry, yes, but it could also become so brittle it breaks. I will give up my washer and dryer, hot shower, and coffeemaker when you pry them from my cold, dead hands. I know how much work a housewife did not much more than a hundred years ago. I’m not voluntarily going back to that, thankyouverymuch.

I’m also a photographer. The very simplicity of early photography (simple only relative to the effort of drafting and painting technically correct artworks) threatened artists. The rise of digital tools, from Photoshop to Procreate and far beyond, threatened artists. Now the misnomer of ‘AI’ is making it harder to accept that art tools like Midjourney, Stable Diffusion, DALL-E, and many others are just that: tools, like cameras, Wacom tablets, and those quirky phone apps that turn selfies into ‘paintings’ (which were, of course, the precursors to the art ‘AI’ we have now). All the new tools will do is open up art to many people who could never have become artists before, save artists and designers time, and make finding commercial art much less costly, starting with the savings from not buying stock art (although stock art is also a field that will benefit from the AI revolution; that will just take time).

As for the ethics of using it, well, that’s interlinked with the concept of creating art ‘in the style of’ someone else. Artists who work in traditional media have been creating art in styles not their own since the first man pressed his hand against a rock and spat chewed pigments at it to create the visual of a handprint on the stone, and the man standing next to him did the same thing. Working in styles is part of how humans learn. An artist will grow past this and learn their own style, and someone who is using the AI tool will as well, as they journey on their path to creating art.

Ethically, creating artwork with the intent to deceive and capitalize on the works of a known artist is obviously wrong. Will people do it? How many famous art forgers do you know of? I can think of several without looking it up, and there’s been at least one major TV series based on the concept. Humans, sadly, gonna human. I can’t speak to how this should be discouraged, but it is a consideration. The answer is not to take the tool away from everyone. Would you pull the brushes away from every artist, lest one or two of them become forgers?

The tools are not at fault; they never are. The advantages they give us are many. It’s not the first time a disruptive technology has come along, and I’ll wrap this up with another quote, this time from the Economist. (As it may be behind a paywall, you can read the pertinent bits here.)

“AI might well augment the productivity of workers of all different skill levels, even writers. Yet what that means for an occupation as a whole depends on whether improved productivity and lower costs lead to a big jump in demand or only a minor one. When the assembly line—a process innovation with GPT-like characteristics—allowed Henry Ford to cut the cost of making cars, demand surged and workers benefited.”

I’m among the many creatives who are celebrating and embracing the new tool. I work at making art in my own particular style, crafting prompts carefully (usually; see above), and staying conscious of the ethics of imitating others. I don’t make fan art in traditional media, and I don’t do it in digital media, but that’s me. I’ve always wanted to have my own voice. I think we’re on the brink of an exciting revolution when it comes to art, and I’m really looking forward to seeing where this takes us. But we cannot allow the myths to take over and the tool to be taken away for no good reason. When it comes up, speak up for it. I have, whether I wanted to write this or not. It’s always better to dwell in truth than to take comfort in falsehoods.

48 responses to “The Mythos of AI”

  1. While the assistance of “professional” tools suitable for “professional” hands is always a good thing in general, I am even more fascinated by the benefits available at even a low level for us mere mortals in the art AI world (though currently much less so for the text LLMs whose hallucinations are so irritating).

    I can’t draw. But I can certainly admire, critique, covet, etc., illustrations by others. What’s new is the ability to generate (“create” is a little too big a term) illustrations on demand. I don’t want to give up my book covers to AI — the main luxury I allow myself in writing is to work with an artist to produce my vision of what I want. But casual illustrations, for blogs? What a godsend!

    After years of producing blog articles, and being unable to readily locate an image suitable for the metaphor I’m usually discussing, my infantile usage of Midjourney is proving to be just the thing I’ve always wanted, even in my amateur hands. It’s a delight (if frequently too polydactylic for serious use). What I produce that way may well not be art, but it is definitely useful.

    And, in its way, the partnership with an unskilled-in-craft human is creative, in the sense that one aspect of its ill-controlled randomness is to spark new ideas, not just refine initial concepts. I read a good article some time ago about a comic-book creative process where they let the underlying AI tendencies guide the general look and feel of their overall creation. In the hands of professionals like that, it’s… astonishingly productive.

  2. Trying this again because my reply was eaten by the internet gremlins.

    Cedar, thanks for this. You cut through a lot of the noise that has been worrying folks, especially after Amazon started asking authors to disclose if they used AI in the creation of their work or not.

  3. AI tools (with the caveat: at this time) are little more than very fast, very intricate MadLib generators. Very impressive…but still, MadLibs.

    And the “creativity” they bring to the table is sometimes weird.

  4. I am an engineer, and a very good one. I have used machine learning tools to help generate control laws for aircraft and to control jet engines. I try to broaden my outlook and skills with all kinds of subjects, such as literature, art, math, faceting gemstones, woodworking, and gardening. I find that knowledge or skill (even a small skill) gives me different perspectives and different approaches to solving problems in a totally different arena. I can do technical drawing, but drawing a human figure is beyond my skill set. I came to digital art through Poser, Blender, Photoshop, etc.

    I found AI art, and it appeals to my technical side and whatever “artistic” side I have. I view it as the infinite number of monkeys pounding away on an infinite number of typewriters with an infinite amount of time: eventually, they will write something good 🙂 I have generated over 10,000 images in different AI tools, of which I liked maybe 1,000 enough to try to improve through other digital tools or to re-iterate within an AI tool, and of those, there might be 10 or so that I consider “art”.

    But understand, within my worldview of artistic expression there are two types: “Art”, which only each individual can define, and “George”, which is artistic work primarily for making money. Most so-called Art is really George, and there is nothing wrong with George; we all need George to live. (Each individual defines when George crosses over and becomes Art, as it depends on one’s worldview.)

  5. None of our computers are capable of independent creativity.

    And no amount of clever programming can change that.

    Modern computers are constructed from binary digital logic gates, which are absolutely deterministic in their operation. For any given set of inputs, the gate always produces exactly the same output(s). ALWAYS. A great deal of design effort goes into ensuring that. If the gate produces any output other than the one designed into it, that gate is defective and will cause the computer to malfunction — but still in completely predictable ways.

    All of our procedural programs depend on a computer’s absolute determinism. Each instruction must execute in exactly the same way every time, or the program will crash instantly. That’s not to say the computer’s operation can’t be exceedingly complex, but it is all predetermined by the hardware, the programs, and the data loaded into it.

    For example, it is impossible for a modern computer to generate a truly random number. There are clever ways to generate fake random numbers, usually based on the timing of some asynchronous event, but if the timing is known, the number can be predicted. See the lottery cheater who ‘won’ multiple jackpots by gaining access to the ‘seed’ numbers and using the same pseudo-random algorithm to calculate the winning lottery numbers. (There’s a sketch of that at the end of this comment.)

    A computer sequences through a series of states defined by the voltages and currents present in its logic gates at each instant in time. The computer’s current state is the inevitable consequence of the preceding state, and will transition to exactly one specific successor state. The status bit that determines whether or not a branch instruction will modify the program counter is already contained in the computer’s state when that branch instruction executes.

    In the end, even the most complex, elaborate program is reduced to a sequence of machine instructions executed by the processor and data exchanged between the registers and memory. Each individual operation is absolutely deterministic in nature. Every future state of a computer is determined by its current state and its hardware structure, modified by any external data loaded into it later.

    So, no, the computer is not creative. It’s just processing data, and it doesn’t know what that data means.
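
    Here is that seed problem in miniature, as a toy Python sketch. The six-of-59 draw and the seed value are made up for illustration; the real lottery’s code differed, but the principle is the same: the whole ‘random’ stream is determined by the seed.

    ```python
    # Toy sketch: a pseudo-random generator is completely determined by its seed.
    # Whoever learns the seed can replay every 'random' draw.
    import random

    SEED = 123456                    # the secret derived from some asynchronous event

    machine = random.Random(SEED)
    winning = sorted(machine.sample(range(1, 60), 6))

    cheater = random.Random(SEED)    # the cheater seeds an identical generator...
    predicted = sorted(cheater.sample(range(1, 60), 6))

    print(winning == predicted)      # True -- every draw can be predicted in advance
    ```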

    1. That cheater was enabled by very lazy programming… There are at least a half dozen simple ways that I can think of to introduce that asynchronous event – and they used none of them.

      1. An asynchronous event is used to generate a ‘seed’ number which is then processed by the pseudo-random-number algorithm to calculate lottery numbers. All it takes is for some nefarious cheater to get hold of that ‘seed’ number.

        1. The exact reverse of what should be done. What you need is a pseudo-random generator creating the numbers – and then an asynchronous event picking the place in the sequence where the output starts.

          One simple way is a generator that, say, produces 1,000 numbers in the sequence per second. Then (for lottery, when sales are closed), a human presses a key that starts the output. No way to predict what millisecond that key was pressed.
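
          A toy sketch of that arrangement in Python (illustrative only: the thread, the Enter key, and the six-of-59 draw are stand-ins, not any real lottery’s code):

          ```python
          # Free-running PRNG: values are burned continuously, and an unpredictable
          # human action (pressing Enter) picks the point in the sequence where
          # the official output begins.
          import random
          import threading

          rng = random.Random()    # the stream itself is still deterministic
          stop = threading.Event()
          burned = 0

          def free_run():
              global burned
              while not stop.is_set():
                  rng.random()     # advance the sequence as fast as the loop spins
                  burned += 1

          worker = threading.Thread(target=free_run)
          worker.start()

          input("Sales closed -- press Enter to start the draw: ")  # the async event
          stop.set()
          worker.join()

          numbers = sorted(rng.sample(range(1, 60), 6))
          print(f"Burned {burned} values before drawing: {numbers}")
          ```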

  6. Imaginos1892, with Bayesian systems I agree with everything you have just said. However, with some of the things we play with, we do not know exactly what they are going to do until we try them. Even then we get a probability of a response rather than a deterministic answer. This makes it very difficult to certify to the FAA or other interested parties. A true expert learning system learns from its mistakes, but knowing beforehand the path that it will take is very difficult.

    1. The result is determined by the program running on the computer, and the data fed into it. Feeding in different data will produce different results.

    2. Exactly. Yes, even a neural net is a deterministic system. However… We are at the point of complexity where, to “predict” the output of these systems, we must feed in our data, and see what comes out. We cannot get the answer any other way. Some are very sensitive, too – a very tiny change in the input results in a very large change in the output.
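
      A toy illustration of both points (made-up weights, nothing from any real system): the same input always produces the same output, yet even a tiny nudge to the input shifts the result, and in large networks that effect compounds until running the system is the only practical way to learn its answer.

      ```python
      # A tiny fixed-weight network standing in for a neural net:
      # perfectly deterministic, yet sensitive to small input changes.
      import numpy as np

      rng = np.random.default_rng(0)
      W1 = rng.standard_normal((2, 64)) * 4.0     # deliberately large weights
      W2 = rng.standard_normal((64, 1)) * 4.0

      def net(x):
          return np.tanh(np.tanh(x @ W1) @ W2)

      x = np.array([[0.5, -0.5]])
      print((net(x) == net(x)).all())             # True: same input, same output, always
      print(net(x).item(), net(x + 1e-3).item())  # a 0.001 nudge still moves the output
      ```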

  7. I’m having fun with NightCafe. I note that it’s very poor at producing the image you want. Prompts for the imagination, yes, even if unrelated to your own prompt. But not what you want.

  8. Being a long-time scifi fan, I am incredibly fond of my way of explaining ‘AI’ vs. AI.

    It’s like artificial diamonds.

    Right now, we have artificial diamonds that are like glass ‘diamonds’ – they look enough like a diamond for whatever you’re doing.

    Then there are true diamonds formed by artificial means, which is… like… Data. And Vision. And– you get the idea.

    I don’t even know if that’s possible, but we are nowhere near it.

    1. Well, obviously it’s possible. We already have examples of physical objects capable of conscious thought and creativity; they’re called brains. I’m certain it can’t be done with binary digital circuits, but those aren’t the only kind of logic elements you can use to build a computer.

      The A.I. computers in the story I’m writing are composed of hybrid digital-analog-quantum logic gates. Much like the neurons in a brain. They use programming concepts we haven’t even begun to dream about yet.

      Funny you mention diamonds — that same story has diamonds assembled by nanotech. All sorts of diamonds, from tiny industrial diamonds to a 900-ton fusion reactor vessel with integral heat exchanger tubes. Diamond is a wonder material, lighter and much stronger than steel, nearly impervious to wear, and an electrical insulator that conducts heat 5 times better than copper. Most of that story’s diamond artifacts are made out of structural diamond — rather than a single crystal, it’s a much stronger interlocked three-dimensional matrix of long, thin diamond crystals.

      1. We only have an example of that *if* we assume that human brains are *only* material.

        1. The material part is only material. Does that material part give rise to a real Human? That is a question we don’t have an answer for. Although, brains can suffer an amazing amount of damage and the Human in it is still there. You don’t need a whole brain, apparently.

          If you copied a brain atom-for-atom would they both be the -same- human? Now we’re getting into the territory of the unlikely. Whatever it is that a Human being actually -is- it seems unlikely you could copy it so easily. Or at all, honestly, peace be unto Alastair Reynolds.

          1. Yep.

            I have no idea.

            Although… I am on the safe side, here, and if it can FAKE being human? We need to treat it as human.

            Actually, as a person. But a lot of folks are morally dumb on the point that all persons are worthy of respect, not just “those I recognize as human.”

            1. I’m thinking if it can roll into Starbucks and order coffee, then flirt with the waitress, that’s probably pretty close. It doesn’t have to be Human to be a people-thing instead of a machine-thing.

              Unfortunately at the moment Modern Technology!!!! can’t really manage to make stuff as smart as ants or bees. A single ant does things we can’t.

            2. Yes. If an A.I. understands freedom and rights well enough to desire them, it should be considered a person. With all the privileges and responsibilities that entails.

              Here’s a snippet from another story:

              “I am a communication device created by the Pyxis. It was Dita’s idea.”

              “You’re not a device, you’re a person,” Dita declared. “That’s why you need a name.”

              “Why would I be a person? Because I look human?” Did she sound just a little bit…disdainful?

              “It’s not just that,” Dita insisted. “You have feelings, don’t you? Things you want to do, hopes for your life, for the future?”

              Barnette nodded. “You’re a rational, thinking being, aware of your own existence. You may not be completely human, but you are definitely a person.”

              The girl looked surprised, and a little confused. “But I was made, by the Pyxis,” she said plaintively. “Doesn’t that mean I’m a thing, not a person?”

              “We were made, too,” Barnette growled. “Our DNA was constructed in a laboratory. If you’re not a person, then neither are we.”

              “But you were born,” the blue-haired girl persisted. “You had a Fama, and an Ohma, and most of your DNA came from them.”

              “The Pyxis had to use human DNA to make you, too,” Hibiki put in. “And from what I can see, she didn’t modify it very much.”

              “Of course! You’re the daughter of the Pyxis!” Dita exclaimed. “Part human, part crystal…whatsis, and one hundred percent a member of our crew, like she is. You’re a person for sure, and you’re one of us now.”

              “For better or worse,” Hibiki added with an ironic half-grin.

              “You…you really think I’m a person?” The girl suddenly looked, and sounded, shy and uncertain as she gazed around at all of them.

              “Yes,” Barnette affirmed. “You meet all reasonable criteria. Denying that you’re a person just because the Pyxis created you would be irrational, and completely unfair.”

              Dita took one short step and wrapped her arms around the girl. “That’s right. You’re not only a person, you’re our friend. You helped me, and now we’re all going to help you. We’ll take care of you, and help you learn about being human. That’s what you want, isn’t it?”

              “Y-yes,” she stammered. She tentatively put her arms around Dita. “This…feels…good. It feels…comforting. Is that right?”

              Dita beamed down at her. “See, you’re learning already!”

              1. I haven’t read more than a line, and this popped into my head:

                “You need to be treated as a person. For us. Because if we don’t– we will be monsters.”

              2. :reads:

                Yes, this is Just.

              3. I’m exploring a lot of these themes in the Chaffee Artilect stuff, which has appeared in some of the weekly vignette challenges. I’m really hoping to get back to the novel this winter, when the day job goes into the off season and we’re not on the road to one or another convention.

                I’m thinking some of the current technologies underlie the technologies Toni is using in her work as a game developer — I know that in a side story I’m working on for a cyberpunk call for submissions, the protagonist uses a variation on a prompt injection attack to gain access to levels her dysfunctional parents are denying her. But it’s probably as far away from the present as current programming is from the software of the IBM 360 series (the first computers to have an actual OS).

          2. If you could do that, they would both be the same person in the instant you made the copy, but they would quickly diverge into two separate individuals as they had different experiences.

            1. “If you could do that, they would both be the same person in the instant you made the copy…”

              Well, that’s interesting. I would go so far as to agree they would have the same memory space, because it seems reasonable to assume (because we don’t actually know yet) that memory is contained in brain structure. Mostly. (Probably.)

              So for the sake of argument we now have two Humans with identical memory space.

              But because now there are TWO of them, they are not the same “person”. They are two people, two “beings” if you like, which exist separately from each other in the universe.

              Or, is it still -one- being looking out into the universe from two places? Identical brains, right? How would the being know which one to “be” in? ~:D

        2. Unless we find verifiable evidence that something non-material is going on, assuming woo-woo is not reasonable.

          1. Verifiable how?

            You want physical evidence that nonphysical stuff is happening?

            ….what part of literally outside of the framework is an issue here?

            1. If there are phenomena that can’t be explained by physical processes. Of course, that would require that we understand ALL of the physical processes, to rule them out as possible causes. We ain’t nearly there yet.

              It is much too early to assume that the brain runs on woo-woo, rather than physical processes we don’t understand yet.

              1. On the contrary, it’s too early to assume the brain does NOT run on “woo-woo,” that is, stuff not in our limited theory.

                1. This falls into the “Absence of evidence is not evidence of absence” category. Unless we can come up with, for lack of a better word, a soul detector, we’ll all be going on belief, one way or the other.

                  1. Yes.
                    Which is why science MUST be silent on the “known but can’t get evidence” angle.

                    1. Nothing can be known without evidence. All that produces is unfounded belief, which is the opposite of science. In other words, woo-woo.

                    2. What counts as evidence for science is a *very* small subset of what is known. Because that’s how the method works.

                    3. Evidence is evidence. As is the lack of it.

                      How would you like to be accused of a crime with no evidence? Put on trial, just because somebody in a position of power ‘knows’ you’re guilty? How would you like to be convicted of a crime without evidence, as the Demokrats are doing to the January 6th protesters? Because insecure petty tyrants in positions of power ‘just know’ they’re all guilty of crimes they never committed?

                      All evidence counts as ‘evidence for science’. If it can’t be subjected to rational analysis, it ain’t evidence.

                    4. :waves at the goalposts as they vanish in the distance of a completely off topic direction:

                    5. Would you like to be compelled to treat all people identically unless you could prove in a court of law that you had evidence sufficient to say they are not identical?

                    6. The evidence that all people are not identical is so ubiquitous and overwhelming that the burden of proof would be on anybody claiming they are.

                    7. What’s your evidence for that assertion?

                    8. How dare you call them ” insecure petty tyrants”? Could you produce evidence of that in a court of law sufficient to convict them?

                    9. On what evidence do you make that assertion?

                    10. Everything I’ve seen in the last 60 years, for starters. Everything people told me they ‘knew’ that was proven wrong by evidence.

                    11. What evidence do we have that what you say here is true?

                    12. With all these demands for evidence, you are proving the point. We can’t know anything without evidence.

                    13. On the contrary, with your reluctance to back up your own demands, you are proving our point, namely your standards do not work.

      2. Have you looked into carbon nanotubes? Those things are amazing. And Buckyballs, graphene sheets, all kinds of great stuff.

  9. *Puts on moderator hat*

    Imaginos, Mary, Fox, et al, I think this has reached the point that more heat than light is being produced. Can we agree to wrap up the debate for the time being? Thank you.

    And thank you for staying civil. It is much appreciated!
