Editing Rant: Why AI is a No-No

Image acquired from the Internet

A recent contact I got opened with, “Hi, I am an illustrator who uses AI.” To which I immediately responded that my publisher (and I) have a strict no-AI policy. (Points to them for admitting the AI use up front.)

Well, they wrote back and asked why. They said they got a lot of responses like that and were wondering why the publishing industry is so against a tool the rest of the business world is embracing.

I needed to present an argument that wasn’t “well, AI is evil and makes Sarah Connor cry.” Because this person is trying to make a living with art, which means creating art fast in a variety of forms. AI can be a tool, like the collage-type art of early Photoshop. And for some people everything is shareable – I remember early pirate sites for music and books created by those who thought all data should be free. So what argument to use?

I gave the person the “the courts have declared AI-created materials are not copyrightable” argument. The question is to whom you attach the “creative” part: the people whose materials and skills the database is built on (whether the material was bought legally or collected for AI training, like most medical interpretation software; mass-trained through people licensing the equipment and uploading suggestions, like many editing programs; or mass-scraped/stolen, like most artwork and writing generators); the assemblers creating the database; the programmer/team/company that created the search engine/AI platform; or the person using the AI to create the image per their specifications.

When publishing companies (and other companies) cannot attribute copyright ownership, they cannot go the AI route. Contracts require clear lines of ownership to distribute rights. (Side-thought: companies using AI-generated marketing materials really should rethink their choices, because I bet if you can’t get copyright, you can’t get trademark either.)

Anyway, the person thanked me, saying no one had explained it that way to them before.

AI isn’t inherently evil, but there are other considerations, and maybe we writers and artists should start pointing out that the “bottom line” for companies using AI isn’t protected, rather than arguing the ethical and moral stances. Many people are only able to listen to money. No copyright, no contract, no clink-clink.

That being said, many aspects of how humans are implementing AI are counterproductive to society as a whole and to individuals in general, which ethically and morally could be interpreted as evil.

Ethically, the database builders doing the mass-scrapes are stealing materials under copyright, which is wrong – especially when the follow-up programming to access that database includes suggested prompts that work around copyright: create a drawing in the style of Disney, or write a horror book in the style of Stephen King. Both are clear violations of society’s agreement to protect people’s intellectual property so their efforts are paid and they have the opportunity to continue creating what people think is worthy of purchase. The owners of the creative materials did not agree to this use. Ethical sourcing of the materials for the databases needs to be required.

Morally, the electricity and water required for datacenters, when the infrastructure is already stressed and normal people are constantly being asked to conserve irreplaceable energy resources like uranium, coal, and oil, is abhorrent. On some levels, the mass use of AI products expands the capability and considerations of LLMs (large language models) and AIs (artificial intelligences), making the development of productive uses of AI easier. For example, using AI to figure out how to water crops and target pesticides increases food for all. Using LLMs to look over medical tests and crunch numbers beyond what humans are capable of saves lives. Both of these uses are beneficial, and having everyone exploring LLM products is bringing down the price while also encouraging programmers and companies to discover more uses.

But programs like ChatGPT are being used indiscriminately because people aren’t seeing the cost. Right now the companies are underwriting it in the hope of making even more money later, but “a single 100-word email in OpenAI’s ChatGPT is the equivalent of consuming just over one bottle of water” (Garrison). Making five quick pictures of yourself as various Disney princes equals a day’s worth of water for one person. And that isn’t even counting the energy use. (The water is used to cool the heat generated by datacenter computers.)

People are using ChatGPT to write grocery lists. Is a grocery list really worth a bottle of water plus energy? The destruction of trees and habitat for the large area needed for these centers?

I know one email doesn’t matter, but just imagine several cities’ worth of middle schoolers figuring out which version of Pokémon is the best version of their pet, plus all the twenty-somethings using it for grocery lists, and all the tech bros saying “send out an email on a meeting about using paper straws to save the environment,” and you can see where the waste of limited resources becomes objectionable.

With the present issues with climate change, is the energy and water use of datacenters for entertainment purposes appropriate, ethically and morally? Is it appropriate to build datacenters on an already stressed electric grid with rolling blackouts just so people can have help writing simple 100-word emails? And are AI/LLM programs and apps the best way to write those emails?

TL;DR: Authors, artists, and other creatives have a love-hate relationship with AI, balanced between an exciting new creative tool and the exploitative, illegal tapping of the creative community by scraping intellectual property for training LLMs. Publishers and others whose business model is based on protecting intellectual property cannot put AI-generated material under contract because of legal considerations of rights and ownership. Additional ethical and moral considerations of the widespread use of LLMs, and the related datacenter industry required to support them, make casual business and entertainment uses of LLMs and AI questionable.

Final Thought: I want machines to do the boring grinding repetitive tasks so I can make art and write books.

 

Bibliography

Garrison, Anna. “How Does AI Use Water and Energy? Unpacking the Negative Impact of Chatbots.” GreenMatters, 10 Jan. 2025. https://www.greenmatters.com/big-impact/how-much-water-does-ai-use. Last viewed 8 June 2025.

X is for eXposure (just work with me on this one)

Image acquired from the Internet Hive Mind 

When it is okay not to be paid:

  1. Creating your own marketing (such as a blog).
  2. Deciding to write fanfic in an existing universe because it is fun. Singing for the joy of it. Painting to paint, instead of working on cover art.
  3. Helping friends / others grow (because you want to).
  4. Exchange of skill sets in a de minimis manner (example: trading critique reads) – the legal definition of de minimis applies. Watch out for trades like “a cover for a developmental edit”; trading the larger stuff is a Tax Event, and while such exchanges are legal, the actions need to be recorded under both income and expenses.
  5. Anything else creative you decide to do of your own free will, with no coercion, while being able to keep a roof over your head and food on your table.

When it is not okay not to be paid:

  1. The person is using you for marketing to produce money or enjoyment for THEM without an EQUAL (or appropriate trade – not all trades need to be equal, but they do need to be equitable) benefit for you.
  2. The company is using you as a draw to increase income without giving you the opportunity to earn appropriate income for services given. Examples: paneling at a con is a TRADE of exposure as marketing and networking – this is fine. A bookstore asking you to do a reading without being willing to carry your books – not so much. A coffee shop asking you to sing (and, oh, you can put out a hat) – not good. YouTube providing a platform to upload your songs to show the world while they collect advertising money – acceptable.
  3. Family expecting you to do things requiring a lot of prep time, without recompense. Example: showing up at a potluck where everyone brings something is fair; being expected to sing at every wedding – to a specially chosen song you have to learn, practice, attend rehearsal for, and arrive early at the wedding in a special outfit for – does not equal a rehearsal dinner (maybe) and the reception.
  4. People thinking general “exposure” meets the requirement of targeted marketing.
  5. When you are exhausted, don’t have time, and feel guilted for “not helping”. In this case, no one is taking advantage of you but you. STOP IT! (Creativity takes energy; it needs to be renewed by taking a break!)

When someone asks you to do something for “exposure”, stop a moment and change the word to “marketing”. Does it meet your marketing plan? Does it take you in an unexpected direction for marketing you hadn’t thought of? Then yes, good exposure. Otherwise, the answer is no.

Other Cool Blogs: D Jason Fleming 2/3/2025

Copyright © 2025 D. Jason Fleming, CC BY-SA 4.0

This month D. Jason Fleming dropped a timely post for this age of book banning and censorship, with an interesting question: should bad creations be saved?

And not just morally bad – how about things that just didn’t find a large enough audience (something Hollywood would call a flop)? Or that just, you know, sucked, especially works by people learning a new craft?

I saved my early embroidery attempts to show students when they despair ever getting good. “This is my first/second attempt at free embroidery.”

“Wow, I’m better than that.”

“Yes, you are.”

One consideration for the discussion is what happens if the work created is now problematic. For example, some of my early writings in the eighties included “gypsies,” a culture I was hyperfocused on for nearly a year. Since then, the proper way to refer to the nomadic culture has become Romani. Should I delete the early writings, modify them, or let them stand?

The Substack post about “Ephemeral” raises a lot of important questions.

Read the post here: https://djasonfleming.locals.com/post/6620232/ephemeral

D. Jason Fleming blogs about Books, Movies, and Writing.

Other Cool Blogs: Media Chomp


Photo by Manny Becerra on Unsplash

2023 saw many things, one of them being the writers’ strike and the actors’ strike, fighting for the rights of creators against studios wanting to use “AI” to replace costly background work. The studios wanted to scan an actor once, pay that actor, the make-up artist, and the costumer for one day, and then use the image in perpetuity. They also wanted to use AI to write the base screenplays for movies and television shows, then use writers to tweak them into something better, paying them for the lesser work.

Sure it saves money, but at what cost? I’ve talked about how long it takes to become a good writer – about one million words. Same for acting: one million of other people’s words, spoken with your interpretation. Makeup artists and costumers also have to learn their craft. Using AI (and it isn’t really artificial intelligence, more like statistical amalgamation – artificial intelligence is about emulating decision-making) to replace the “background” means actors won’t have a chance to learn their craft unless they have the TIME and MONEY to spend practicing without pay. As it is now, actors know that their most common line is “do you want fries with that?” But decreasing the opportunity to nothing will undo television and movies. The top 100 will be the only 100. And all those who support the industry – costumers and makeup artists – will likewise be unnecessary, since there will be no one to practice their craft with.

Writers … well, with that statistical amalgamation of words, we will have no truly bad movies, but nor will we have great ones. AI does the average. Like the actors, if AI is writing the “made-for-TV” movies, then the writers have nothing to learn on either.

Yes, the studios save money, but by gutting the entry-level positions. The few ads for movies would be “Entry-level position for Named Main Character, three days to record script and scan body. Pay at Scale.” Just like the “entry-level position, master’s degree required, part-time, temporary.”

The writers and actors, both separately and together, said “no way,” and used the only power available to them when the studios wouldn’t budge: withholding work.

Have you ever wondered how long workers have been striking? A cool blog at Media Chomp published some memes about possibly the first strike in (written) history, when the workers building the tomb of Ramesses III went on strike for better wages.

It really gets amusing, because first the management offered what was basically a pizza party to satisfy them. The workers responded with a picket line.

And workers have been striking ever since. Read to the end, because the demand for cosmetics indicates that occupational safety needed to be addressed even in slave labor.

The cool blog is: https://mediachomp.com/ancient-egyptian-workers-strike/