Image acquired from the Internet
I recently got a contact that opened with "Hi, I am an illustrator who uses AI." I immediately responded that my publisher (and I) have a strict no-AI policy. (Points to them for admitting the AI use up front.)
Well, they wrote back and asked why. They said they got a lot of responses like that and were wondering why the publishing industry is so against a tool everyone in the business world is embracing.
I needed to present an argument that wasn't "well, AI is evil and makes Sarah Connor cry." This person is trying to make a living with art, which means creating art fast in a variety of forms. AI can be a tool, like the collage-type art of early Photoshop. And for some people everything is shareable: I remember early pirate sites for music and books created by those who thought all data should be free. So what argument to use?
I gave the person the "the courts have declared AI-created materials are not copyrightable" argument. The problem is deciding who holds the "creative" part: the people whose materials and skills the database is built on (whether that material was bought legally or collected for AI training, like most medical-interpretation software; mass-trained through people licensing the equipment and uploading suggestions, like much editing software; or mass-scraped/stolen, like most artwork and writing software); the assemblers creating the database; the programmer/team/company that created the search engine/AI platform; or the person using the AI to create the image per their specifications.
When publishing companies (and other companies) cannot attribute copyright ownership, they cannot go the AI route. Contracts require clear lines of ownership to distribute rights. (Side thought: companies using AI-generated marketing materials really should rethink their choices, because I bet if you can't get copyright, you can't get trademark either.)
Anyway, the person thanked me, saying no one explained it that way to them before.
AI isn't inherently evil, but there are other considerations, and maybe we writers and artists should start pointing out that the "bottom line" for companies using AI isn't protected, rather than argue the ethical and moral stances. Many people are only able to listen to money. No copyright, no contract, no clink-clink.
That being said, many aspects of how humans are implementing AI are counterproductive to society as a whole and to individuals in general, which ethically and morally could be interpreted as evil.
Ethically, the mass-scraping done by database builders, stealing materials that are under copyright, is wrong. Especially when the follow-up programming to access that database suggests prompts that work around copyright: create a drawing in the style of Disney, or write a horror book in the style of Stephen King. Both are clear violations of society's agreement to protect people's intellectual property so that creators are paid for their efforts and have the opportunity to keep creating what people think is worth purchasing. The owners of the creative materials did not agree to this use. Ethical sourcing of the materials in these databases needs to be required.
Morally, the electricity and water required for datacenters, when the infrastructure is already stressed and ordinary people are constantly being asked to conserve irreplaceable energy resources like uranium, coal, and oil, is abhorrent. On some levels, mass use of AI products does expand the capabilities and considerations of LLMs (large language models) and AIs (artificial intelligences), making the development of productive uses of AI easier. For example, using AI to figure out how to water crops and target pesticides increases food for all. Using LLMs to look over medical tests and crunch numbers beyond what humans are capable of saves lives. Both of these uses are beneficial, and having everyone exploring LLM products is bringing down the price while also encouraging programmers and companies to discover more uses.
But programs like ChatGPT are being used indiscriminately because people aren't seeing the cost. Right now the companies are underwriting it in the hope of making even more money later, but "a single 100-word email in OpenAI's ChatGPT is the equivalent of consuming just over one bottle of water" (Garrison). Making five quick pictures of yourself as various Disney princes is equal to a day's worth of water for one person. And that isn't even counting the energy use. (The water is used to cool the heat generated by datacenter computers.)
People are using ChatGPT to write grocery lists. Is a grocery list really worth a bottle of water plus the energy? The destruction of trees and habitat for the large tracts of land these centers need?
I know one email doesn't matter, but imagine several cities' worth of middle schoolers figuring out which version of Pokémon is the best version of their pet, all the twenty-somethings using it for grocery lists, and all the tech bros saying "send out an email about a meeting on using paper straws to save the environment," and you can see where the waste of limited resources becomes objectionable.
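To see how fast the small stuff adds up, here is a back-of-envelope sketch. The ~0.5 L per short ChatGPT query comes from the Garrison figure ("just over one bottle of water" per 100-word email); the user counts and queries-per-day are purely hypothetical illustrations, not measured data:

```python
# Back-of-envelope estimate of cumulative cooling-water cost of casual LLM use.
# The ~0.5 L per short query figure is from Garrison (2025); the population
# and usage numbers below are hypothetical, for illustration only.

LITERS_PER_QUERY = 0.5  # roughly "one bottle of water" per 100-word email


def daily_water_liters(users: int, queries_per_user: int) -> float:
    """Total liters of datacenter cooling water consumed per day."""
    return users * queries_per_user * LITERS_PER_QUERY


# Hypothetical casual-use population: a few cities' worth of users,
# each making a handful of throwaway queries (Pokémon rankings,
# grocery lists, one-line emails).
casual_users = 500_000
queries_each = 4

liters = daily_water_liters(casual_users, queries_each)
print(f"{liters:,.0f} L of water per day")  # 1,000,000 L per day
```

Even with these made-up but modest numbers, casual use alone lands at a million liters a day, which is the scale of the objection.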
With the present issues with climate change, is the energy and water use of datacenters for entertainment purposes appropriate ethically and morally? Is it appropriate to build datacenters on an already stressed electric grid with rolling blackouts just so people can have help writing simple 100-word emails? And are AI/LLM programs and apps the best way to write those emails?
TL;DR: Authors, artists, and other creatives have a love-hate relationship with AI, balanced between an exciting new creative tool and the exploitative, illegal tapping of the creative community by scraping intellectual property to train LLMs. Publishers and others whose business model is based on protecting intellectual property cannot put AI-generated material under contract because of legal questions of rights and ownership. Additional ethical and moral considerations around the widespread use of LLMs and the related datacenter industry required to support them make casual business and entertainment uses of LLMs and AI questionable.
Final Thought: I want machines to do the boring grinding repetitive tasks so I can make art and write books.
Bibliography
Garrison, Anna. "How Does AI Use Water and Energy? Unpacking the Negative Impact of Chatbots." Green Matters, 10 Jan. 2025, https://www.greenmatters.com/big-impact/how-much-water-does-ai-use. Last viewed 8 June 2025.