Thanks and AI Policy

Thanks

To my wizard: Daniel KD, the digital illustrator who worked miracles with what I gave him and elevated my front cover to something exceptional. Thank you. You can find him on Adobe’s portfolio platform Behance and on Fiverr.

To my guardian of grammar: my editor, Anne-Marie Rutella. I learned so much about Track Changes, grammar and the Chicago Manual of Style from you. Thank you for working so diligently and for your enthusiasm for my novel! You can find her on Reedsy.

To my defender of culture: my Muslim friend who offered their services as a sensitivity reader. They know who they are. Thank you for allowing me to ask many cultural questions, sharing your experiences, and translating for me.

To my cheerleaders: my family and my alpha and beta readers, whom I also bounced cover designs off. Your enthusiasm to read and reread my work, as well as your financial support, made me believe I could do it. Thank you.

And to my provider, Mr Spence, who was there every step of the way: supporting me, believing in my abilities, making me food and helping me manage my fatigue. Your support has been unwavering, and this book wouldn’t exist without you.

 

AI Policy

AI is increasingly being accepted and used in jobs all over the world, including in traditional publishing.

No AI was used to write the plot or prose of Drawing Red; its use was limited to grammar and spellchecking. (Yes, spellchecker has flown under the radar for years. It is AI.) This was done using Microsoft 365 and ProWritingAid. For the plot and prose I worked with alpha readers and beta readers, and hired an editor.

AI was used as a tool to help me design the art for my illustrated front cover, which was then almost completely reworked by the incredible Daniel KD. The last time I ran that artwork through an ‘AI or human’ checker, it registered as 88% human. All of the text, the spine and the back cover of that version were created by me using stock images available on Canva.

No AI was used in the design of the hardback edition’s cover. That was created by me using templates freely available via IngramSpark.

In using AI as a tool I specifically did not ask it to replicate the style of any particular artist or artists, and I made sure that my cover does not resemble any existing images or covers out there. It was crafted solely for my novel, with a long list of story-specific requirements. AI cannot be put back into a box, and while misuse of AI is a possibility, it is up to the individual using it to do so ethically.

Without AI I would not have been able to fund hiring an artist to work with me beyond sourcing and manipulating stock photos, which are oversaturated in my genres and pretty generic. This approach gave me something entirely original and tailored to my writing, while still allowing me to pay an artist for their skills and time, which was always going to be an important aspect of publishing for me. It worked for us both.

AI is trained by looking at datasets, which are notes compiled about the properties of digital objects, e.g. the shapes and colours within an image found on a webpage. The AI does not see or have any direct contact with the original digital item itself, only the notes on the item’s properties that it is given. It then puts together what it imagines something, e.g. a cat, looks like, using a calculation based on thousands of notes from images labelled ‘cat’.

With open AI, the datasets were gathered from digital materials available on the internet. I do not believe learning from something available on the internet is itself a crime, nor, I believe, is taking notes about things found publicly on the internet. So far the courts have agreed with this.

If anyone reads my novel via the sneak-preview function on a digital store, or sees my cover in an online advert or on social media, I expect it to be at risk of being scraped by bots and crawlers. This is normal and has happened for many years, not only to gather datasets for AI training but also for research and archiving purposes. These datasets are collected by separate companies and are publicly available for use as part of the open IT movement. Although technical, the opt-out has always been recognised as a line of code embedded in a website instructing bots and web crawlers to skip its pages.

I recognise my own fanfictions will likely have been scraped for use in these datasets, and the website I uploaded mine to has confirmed this. I also have extracts of my writing on writing competition websites, where they may have been scraped. I choose to keep my work available in these public ways because the pros outweigh the cons. If an author is worried about their work being scraped, including their cover, they would need to check the policy of every website their work is displayed on, and of every online store that sells it, to make sure each uses the bot-skipping instructive code. The same would be required for the author’s own website. I do not use this code on my website, or go out of my way to see whether it is implemented on others, because it would get in the way of promoting my work. As it is impossible to know about every bot being used for data gathering, the only real option would be a blanket ban on all bots. However, Google Search works via bots: if I ban all bots from my website, it will not appear in searches, and that is not something I’m willing to give up.
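For anyone wondering what that instructive code actually looks like, the usual mechanism is a plain-text file called robots.txt placed at the root of a website. What follows is only a minimal sketch; the site address and crawler names are illustrative examples, and compliance is voluntary on the bot’s part:

# robots.txt, served at https://example.com/robots.txt (hypothetical site)
# Ask one named AI crawler to skip the whole site:
User-agent: GPTBot
Disallow: /

# Or ban every bot, which also removes the site from Google Search results:
User-agent: *
Disallow: /

As explained above, I choose not to use this on my own website, because a blanket ban would also shut out the crawlers that make it findable.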

Any one person’s input is tiny when it comes to informing the notes used to train open AI models; it is one of millions of data points the AI will consider when calculating and generating an output. It cannot even be compared to a collage, because all the AI has to work with are the notes about the properties of the text and images from the datasets, not the text and images themselves.

If evidence comes to light that the dataset used to train the AI engine I have used directly copied and pasted or downloaded images and text, as opposed to only taking notes about the digital objects and moving on, I will review my use of that particular trained AI, and my illustrated cover for my novel. That is not the typical way to collect a dataset, and it constitutes stealing.

If someone’s intellectual property has been pirated to a website without their consent, they should act on their own beliefs about what should be done as the publisher: either have it removed from the pirate site, knowing it may already have been scraped if the site did not carry the code instructing bots and crawlers to bypass the page, or leave it. If you find your work pirated on a website and subsequently part of a dataset, you are entitled to approach the creator of the dataset and request your material’s removal. Some writers actually believe it’s good publicity for their books to be pirated, as it draws readership to their work, but opinions differ from author to author. I personally would rather buy work to support a writer, and I’d encourage others to do the same, or to access it in other legal ways, such as at a local library.

The datasets used to train open AI are available for public access and use, which helps to maintain their integrity and accountability via public scrutiny. For example, in an early case the dataset known as ‘Books3’ was found to have scraped a pirating website; it was flagged by the public and removed from current and future training of the large language model ChatGPT. That early version (which existed before ChatGPT was named ChatGPT) was promptly withdrawn, and a new model was released with the dataset excluded and banned from future training. We are now many models on from that earlier version. The dataset Books3 has also been removed from the internet and from public availability. Development is happening in this field all the time, and openness and transparency have never been more important. It has also led to Microsoft engineering a mechanism that allows a trained AI to ‘forget’ certain training notes, should the need arise.

There are also new AIs being trained on platforms such as Adobe and Canva that claim to be ‘ethically sourced’. In these cases the datasets are gathered from what is created on the platforms themselves, with a simple opt-out option for users.

Personally, I will always prioritize hiring an artist in some capacity, either alone or alongside AI used as a tool. There are growing initiatives out there developing AI where artists will be compensated for submitting their work for training, similar to submitting to stock image websites. These are in early development, but I would be willing to pay for access to them.

My current intention is to hire a human artist for the remaining cover art in my series, as I do not think AI is capable of the level of comprehension required to create a cohesive sequence of suitable images.

While I think my illustrated cover for Drawing Red is gorgeous (and I have a particular love of segmented pens), I understand others may not feel the same. For that reason I have also created a minimalist-style hardcover edition, complete with a blue fabric-like cover, black dust-jacket, and gold lettering on the spine. I hope you all enjoy my work, and the little bit of escapism it brings, in whatever way you feel most comfortable with.

Funds are being raised towards hiring a human narrator for the audiobook.

NOTE: I am self-published, so I can make these decisions in my capacity as my own publisher, Spence-Johnson Publishing, which holds exclusive rights to my work. If you are traditionally published or seeking traditional publishing, please check your contract(s) for the permissions granted regarding the rights to your novel, including AI use. Publishers may wish to use AI in the development of your novel or to enroll it in AI training systems. I highly recommend you check and seek further clarity on your situation.

Photo of the hardcover edition of Drawing Red, with and without the dust-jacket.

Photo of the interior of the hardcover edition of Drawing Red, with a snow-drop.