Podcast: Download (Duration: 44:34 — 36.4MB)
Subscribe: Spotify | TuneIn | RSS | More
How can we use AI tools to enhance and improve our creative process? How can we double down on being human by writing what we are passionate about, while still using generative AI to help fulfil our creative vision? Rachelle Ayala gives her thoughts in this episode.
Today's show is sponsored by my patrons! Join my community and get access to extra videos on writing craft, author business, AI and behind the scenes info, plus an extra Q&A show a month where I answer Patron questions. It's about the same as a black coffee a month! Join the community at Patreon.com/thecreativepenn
Rachelle Ayala is the multi-award-winning USA Today bestselling author of playful and passionate romances with a twist. She also has a series of books for authors, including Write with AI, An AI Author's Journal, and AI Fiction Mastery.
You can listen above or on your favorite podcast app, or read the notes and links below. Here are the highlights, and the full transcript is below.
Show Notes
- Understanding generative AI tools as a non-technical person
- How the creative process can work with AI tools and why it's always changing
- Using AI tools as a collaborative discovery process, and why it's all about your creative vision and author voice. For more on this, check out my AI-Assisted Artisan Author episode
- Aspects of copyright
- Staying focused on writing as new AI technology emerges, and why you need to double down on being human
You can find Rachelle at RachelleAyala.net.
Transcript of Interview with Rachelle Ayala
Joanna: Rachelle Ayala is the multi-award-winning USA Today bestselling author of playful and passionate romances with a twist. She also has a series of books for authors, including Write with AI, An AI Author's Journal, and AI Fiction Mastery. So welcome to the show, Rachelle.
Rachelle: Thank you, Joanna. Thank you for having me.
Joanna: I'm super excited to talk to you. As I was telling you, I have the ebook and the print edition of AI Fiction Mastery because I think you put things so well in your writing. Before we get into it—
Tell us a bit more about you and your background in technology and writing.
Rachelle: Okay, sure. I was a math major, and I actually have a PhD in applied math. So you would think that's kind of the farthest thing from writing.
I got into parallel computing back in the 80s. Then in the early 90s, I moved into neural networks, where we were basically trying to recognize handwritten digits from zero to nine. So that was quite interesting and fascinating.
So I basically worked in software development and network management until 2011. Then I got into writing. So romance writing was my gig, and I liked dealing with feelings and happy endings.
Joanna: Well, I love that, going from maths and neural networks into romance. You do explain a lot of the stuff behind AI in your books, which I think is really good. You're used to writing for normal people, so I don't find your writing technical at all.
Do you think people who are not very technical are struggling with this AI world at the moment?
Rachelle: I don't even think you need to be technical to understand AI because—well, there's different types of AI, but we're talking about large language models for writing.
So there's other AI systems like expert systems, machine learning, and people have been using that. They don't even know it, but they've been using it under the hood.
The AI we're talking about, large language models, ChatGPT was one of the first ones that most people became aware of. So GPT is a Generative Pretrained Transformer.
You could think of it as a word slot machine, where you could think of all these slots. So when you write a prompt, then the AI will look at the words that are in there, and then try to predict the best word that comes after.
Let's say, we say Monday, Wednesday, and most people will say Friday because that's the next word that you think of. Or if you say Monday, Tuesday, most people will say Wednesday.
So the AI was trained by reading, I think somebody said, between half a trillion and a trillion pieces of text. When an AI is trained, it's not reading a book like we do, where we read it from beginning to end.
So think of if you cut a piece of newspaper into a strip or a square, and then it's got all these words that are in there, and it's looking for words, associations, and patterns. So it'll say, oh, this word goes with that word, and those words go together.
So it could take a word like, say, “bark.” If it sees dog in the other slots, it's going to most likely come out with “woof,” but if it sees trees in the other slot, then it might say, “the bark is wrinkly or hard,” and it's thinking of a tree bark.
So that's how it is able to create words, and that's why you think it's intelligent, because it understands the context. It does so with these huge, huge context windows. So I don't want to get too technical, but a context window is how many words it can keep in its memory.
So it can look at all these associations and how those words go together, so it can best predict the next word that comes out of this word slot machine, so to speak. It doesn't remember anything.
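[To make the “word slot machine” idea concrete, here is a toy sketch in Python. The probability table is invented for illustration; a real large language model learns billions of weights from its training text rather than a hand-written table, but the principle is the same: the next word is picked from context-dependent probabilities, and no source text is stored.]

```python
import random

# Toy "weights": given the words seen so far, how likely is each next word?
# A real LLM learns these associations from its training text and then keeps
# only the weights -- the text itself is not stored anywhere.
next_word_probs = {
    ("monday", "wednesday"): {"friday": 0.80, "thursday": 0.15, "sunday": 0.05},
    ("the", "dog", "started", "to", "bark"): {"loudly": 0.6, "woof": 0.3, "again": 0.1},
    ("the", "tree", "had", "wrinkled", "bark"): {"and": 0.5, "that": 0.3, "covering": 0.2},
}

def predict_next(context):
    """Pick the next word using the probabilities for this context."""
    probs = next_word_probs[tuple(w.lower() for w in context)]
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(predict_next(["Monday", "Wednesday"]))                  # almost always "friday"
print(predict_next(["the", "dog", "started", "to", "bark"]))  # "woof" or "loudly"
```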
Joanna: It's interesting. You mentioned words there, like associations and patterns. I feel like the big misunderstanding with large language models is that some authors think that it's more like a database, where all these “stolen” books are sitting in a big database.
Then if you query it, it will pull out exact chunks from other people's books and use them. So you're always going to plagiarize or you're always going to be “stealing.” Like you and I hear these words a lot from authors who are really just starting out.
Can you explain why it's not a database?
Rachelle: Well, databases are storage. So if you query a database, it pulls out exactly what's in there. I mean, this is like your social security number. It's not going to get it wrong, it's going to pull it right out. Your birthdate, if it's entered in correctly, it will pull it out.
Everybody knows that AI doesn't get things correct, or it doesn't get things exact. If you prompt it twice with the same prompt, like, say, “Write me a story about a road runner who is sick,” or something, it's going to write you something different.
Even that, if you think about how they trained AI, they trained it by inputting all these words that are associated together. Then they adjusted the weights of how these words are more likely to be with those words.
They're not retaining the words, the words are thrown away. The only thing it keeps is the weight.
So sort of like when you read a book, unless you have a photographic memory, you cannot recall that book, but you can recall the concepts because you have made associations between what you read and it communicated to you these ideas.
In fact, people say our memories are not like videos, our memories are actually assembled whenever we're recalling something. So we are making things up on the fly, based on all the associations that we've had in our lives. Similarly, that's how AI LLM really is making up things.
So when people say it lies to you, it's like, no. It's actually just making things up. You gave it a prompt that said, like, “Say happy birthday to me,” and it just keeps going with that.
There's also something called a temperature knob where you can basically increase the randomness, because you know, it's boring if it always gives you the same answer.
So they built in this randomness, where it's going to look for either the most probable word, or the next most probable, or it has a whole list of probable words that come next. If you turn that temperature up, if you dial it all the way up, you're going to get gibberish.
The other thing with LLMs, they've literally read the kitchen sink. It's not just literature, they read code. So a lot of times, if I turn up the temperature and I'm prompting it, all of a sudden it's just all this gibberish code that comes out of it. So that just shows you that it has no memory.
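[Temperature changes how those next-word probabilities are used. This is a rough sketch of the idea only, not any particular model's implementation: low temperature sharpens the distribution so the top word is chosen almost every time, while high temperature flattens it so unlikely continuations, including stray fragments of code the model has seen, start getting picked.]

```python
import math
import random

def sample_with_temperature(scores, temperature):
    """scores: candidate next words mapped to raw model scores (logits).
    Lower temperature -> nearly always the top word; higher -> more randomness."""
    scaled = {word: s / temperature for word, s in scores.items()}
    total = sum(math.exp(s) for s in scaled.values())              # softmax
    probs = {word: math.exp(s) / total for word, s in scaled.items()}
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words])[0]

scores = {"friday": 4.0, "thursday": 2.0, "pancakes": 0.5, "0x3f;": 0.1}
print([sample_with_temperature(scores, 0.2) for _ in range(5)])  # almost all "friday"
print([sample_with_temperature(scores, 2.0) for _ in range(5)])  # much more scattered
```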
Joanna: I think that's definitely one of the reasons why the legal cases are so complicated and why people actually need to have some technical idea. It's not just a case of like copying and pasting.
Let's talk more about your creative process. So you're a discovery writer, which I love, although you have given tips for outlining in your books. Can you tell us—
How does your creative process work with AI? Are you just writing a prompt and then hitting publish?
Rachelle: Oh, definitely not.
I think the first time I got on ChatGPT, and I'm sure every one of you guys have done it, you said, “Write me a novel.” Then ChatGPT wrote a 200-word story about some rabbit jumping across a meadow, and it might have seen a turtle, and it's like a kid story. So it's interesting, and it's fun.
I think today, they probably won't do any of that because they put some processing in where it will probably say, “Please give me enough detail.” At the very beginning, it would happily go off and write this little fanciful story.
So getting back to, yes, I'm a discovery writer, but I think I have also learned about story structure. So very early in writing, I realized that if I just sat there and meandered around with my character, we could do all these interesting things, but it would not be telling a story.
A story has to have some kind of meaning behind it. So it's characters, they're going through actions, they're experiencing things, but there needs to be an emotional meaning behind it or something where readers want to find out what happens next.
So I did study story structure. I think I read Larry Brooks's book on story engineering, so I know about the inciting incident, and the progressive complications, and there's like this midpoint review. So you kind of have to have those things in the back of your mind.
AI actually does not know all this. The other thing most of you have probably noticed is that if you type in what you want the AI to do for the story, it takes the most direct path.
So like for romance, this really doesn't work, because romance thrives on conflict. It means there's attraction, and then there's this push and pull of, okay, I'm really attracted to this guy, but he's got some things that just don't work.
So it's the push and pull between the attraction and the conflict and two people are working things out. Both of them are flawed, but we believe in redemption, and we believe that everybody deserves to be loved. So the reader is really looking for how this is going to work out.
Well, the AI would just say, okay, so we talked about it, and then happily together we can face these things. It's really so innocent. It's like, “Oh, well, why don't we just talk it out? Then they can walk hand in hand and face the future with determination.”
Joanna: You know that's a ChatGPT story!
Rachelle: Of course.
Joanna: What are some of the ways you do use [ChatGPT] in your creative process?
Rachelle: Well, actually, every book I've written with the assistance of AI, I have done something different. That's because the tools change so fast. So I think at the beginning with ChatGPT, I was just asking it questions about, “Oh, let's make up some mythological figures that can do this or that, or some magic.”
I was sort of using it like a search engine, which it's not because it's making stuff up. I was just heightening descriptions and things like that.
So I think I talked about that in my first book, Love by the Prompt, which was basically just brainstorming and asking it, “Give me premises for a romance,” or, “Give me an enemies to lovers story.” So it was doing that.
At that point, it couldn't write more than 300 words or so. So we weren't really using it to write prose, we were using it maybe to enhance your descriptions or bring in things that you didn't think about.
The speed of AI went so fast, so by the time we were into summer when I wrote the AI Author's Journal, we were actually writing scenes. The way we were writing the scenes is we would list out the scene beats.
So these are just very basic actions of, “they walked down the street,” “there was a gunfight going on,” “there was a sheriff that came in.” So basic beats. We were doing that, and then laying that out and feeding it to the AI so that the AI would kind of fill it in.
So you're really leading it, like leading a horse to water. Like, “Come on this way. Okay, now you're going to do that.” It was really funny to see what it would do in between.
I happen to like hallucinations. I think a lot of authors don't like them.
I really crack up when it goes, what they call, off the rails. I'm like, oh, really? Okay, this is funny.
So that's how I was using it. It wasn't like this prescriptive thing where I already knew like beginning to end, and I'm going to lay it all out, and then push a button, and this is going to go through.
It doesn't listen to you anyway, so you're not going to be able to, even if you're an outliner with an 80-page outline and everything listed.
I should say, you can make it listen to you by dialing the temperature down and using one of the more boring models. I don't think you're going to like what comes out, though, because it will be very concise and succinct. It will just literally stick to your beats like glue.
It's not expanding on them, so then why bother having AI write it? At the same time, if you turn the temperature up, it might deviate, and it might deviate in really fun ways. Or it might be like, no, this is not what I want you to do, and it's already solved the problem by chapter two.
Joanna: Yes, and I think the temperature dial, as you mentioned, that's really only available if you go through more like the Playground options.
If people are just using ChatGPT, for example, there is no particular temperature dial in that.
Rachelle: There isn't. It's really interesting now because they give you access to the latest GPT-4o, as well as GPT-4 and 3.5. If you really want some of the more quirky stuff, you need to go back to 3.5.
It's, in a sense, much more innocent. It will just happily go off and do something. Whereas GPT-4o, I've noticed they've made it more, what they call, safe.
It tends to feel a lot more like business writing, because what GPT-4o will tend to do is, whatever you give it, it's going to make a bold heading, and then it will give you some bullets, and then another bold heading. It's like, okay, so you just summarized my scene brief, and you didn't put anything creative in between.
That's what brings me to Claude. I really love Claude. Claude is the other chat. So if you're beginning, I think most people say, well, we've got to get ChatGPT.
With ChatGPT, I think because it's more structured for business, it's much better at writing the scene briefs and the outlines.
It will stick to the topic, so if you want an outline, for nonfiction especially—and I think Gemini works well for that, too—it will stick to the outline. Then you can work with it and say, “Okay, I'm going to write a nonfiction book about decluttering,” and it will help you stick to it.
Whereas Claude, I think, is a little bit more freeform. With old Claude 2, it might balk and just say, “I do not feel comfortable being judgey about somebody's hoarding problems.” I think with the new Claude 3, they've loosened that a bit, and so it will be more creative, but it may be less structured.
So I think ChatGPT, you can use it for structuring and writing your outlines, and even your scene briefs or chapter briefs. What we talk about when we talk about scene briefs is you need to give the AI a lot more information.
Just telling it, “Write me a scene of a cute meet between a cowboy and a waitress,” gives it too much leeway. So a scene brief is basically a packet of information, and we call this mega prompting: we're giving it the characters in the scene, the setting of the scene, and then the beats.
What's going to happen first, second, third? What's the inciting incident? What are the progressive complications? I'm using the Story Grid way of developing a scene, so you have the progressive complications.
Then you have some kind of crisis because there has to be something to motivate your protagonist or to challenge your protagonist, and then some kind of decision where that's made to move this thing forward.
So if you only have a scene that only has beats and there's no sort of story element in it, then it's not going to work. So that's why you have to do a lot of leading.
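[Here's what a scene brief, or “mega prompt”, might look like in practice. The fields are just the ones Rachelle describes (characters, setting, beats, and the Story Grid-style crisis and decision), and every detail below is a made-up example rather than a fixed template.]

```python
# A hypothetical scene brief assembled into one "mega prompt".
# Every field is authored by the writer; the AI only fills in the prose.
scene_brief = """
CHARACTERS: Cal (guarded rodeo cowboy hiding an injury), June (new waitress, first week in town)
SETTING: Half-empty roadside diner, late Friday night, rain starting outside
BEATS:
1. June spills coffee on Cal's hat (inciting incident)
2. Cal's ex walks in with the diner's owner (progressive complication)
3. June realizes Cal is the rancher her boss is suing (crisis)
4. June decides to warn Cal anyway before her shift ends (decision)
TONE: playful banter with an undercurrent of tension
"""

prompt = (
    "Write a 3000-word scene in deep third person, alternating between "
    "Cal's and June's point of view. Follow these beats in order and do "
    "not resolve the lawsuit yet.\n" + scene_brief
)
print(prompt)
```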
Joanna: It's interesting. You mentioned leading there, and also the different personalities of the models, and also, the fun. I mean—
I feel like it's a fun back-and-forth process.
It's like I might ask Chat for a list of things that might go wrong in this particular situation or places where I could set a scene.
I think I use ChatGPT for a lot of lists of options, and also marketing. I think it's very good on marketing copy. Then, as you say, with Claude, I use what I think you call completion prompting. I might upload what I've written so far, and then say, “Okay, what are 10 ways this scene could continue?” and it will help in that way.
So I think it's being more fluid almost, isn't it? Going backwards and forwards, and you have ideas, it has ideas, that kind of thing.
Rachelle: I've discovered I like Claude Sonnet the best because Sonnet will actually write. Like if you go through a Workbench or Playground type of thing, which for me is Future Fiction Academy's Rexy, I get to specify every parameter, including the length of the output.
So with Sonnet, we always say, “Write a 3000-word scene.” Some people used to say 10,000, hoping ChatGPT would do it. Well, it doesn't work that way.
They have a parameter called max length that they've already programmed into the chat interface. You don't know what it is, but it's probably not going to be that long because you're sharing the chat with so many other people. You're paying a flat fee, and they're paying by the token.
When you go into Playgrounds, or through Rexy, you can specify a max length. Like I said in the book, for all of them, even the ones with million-token context windows, or 100,000-200,000 tokens that you can feed in, the maximum output is 4096 tokens, which is roughly around 3000 words.
So some of them are just like the C students. You tell them to do 3000 words, they do 500-700. Sonnet and Haiku, I found, will gladly go up to your limit.
If you didn't give it enough information to prompt, it'll just kind of get repetitive and have your character doing the same thing over and over in different ways, but that's your fault.
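[If you go through a playground or the API directly, instead of the consumer chat apps, those parameters are exposed. Here's a minimal sketch using Anthropic's Python SDK; the model name is just an example, and current model ids, limits, and pricing will differ, so check the provider's documentation.]

```python
import anthropic  # pip install anthropic; expects ANTHROPIC_API_KEY in your environment

client = anthropic.Anthropic()

scene_prompt = "Write a 3000-word scene following this brief: ..."  # your mega prompt here

response = client.messages.create(
    model="claude-3-sonnet-20240229",  # example model id; newer versions have different names
    max_tokens=4096,                   # cap on the output length, roughly 3000 words
    temperature=0.9,                   # higher = more varied, lower = sticks closer to the brief
    messages=[{"role": "user", "content": scene_prompt}],
)

print(response.content[0].text)
```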
Joanna: I think, again, this is really important. You're still not just copying and pasting that scene, right? You're not taking that scene out of Haiku or Sonnet and then pasting that and then publishing it.
So just explain—
How are you leading the AI? How are you editing?
I still think people are afraid that we're just going to lose our creativity and the AI will do all the writing, whereas that's not really what's happening.
Rachelle: First, I just want to say there is no wrong way to use AI. I know everybody's process is different.
So there are authors who spend a lot of time with their outline, and whether they're using ChatGPT or they're just working on it by themselves, everything is going through this person's filter, this person's creativity.
So even if someone works up a long outline, and then tells the AI, “Write these scene beats, write what I just gave you,” that author has put in all those scene beats. That author has said, “This is the emotion I want in the scene.” That author has said, “This is what's going to happen.”
So even the most prescriptive author that architects it from the beginning to the end, that person has put themselves into that story.
It's not like AI is just going to write you a story.
The other thing I think people forget is that it's humans that tell stories because we're the ones with the emotions. When we see a list of things happening, a lot of it depends on the context.
So if, for example, you see a man punch out another man, if it's in the theater on the stage, you laugh, but if it's on the street right in front of you, you're horrified. So these contexts are all happening emotionally in the human being.
AI will just describe, “Okay, this man punched the other one, and he hit his jaw, and the blood went flying.” It will describe the stuff, but the storyteller is putting the emotional context into that scene, and what the reader is going to feel is coming from the human.
Whether the AI writes the words or not, or even draws the cartoon or not, it's the medium of how you're communicating that story that's eliciting the emotion. So I think I don't worry whether you're a plotter or a pantser, it's more just believing that the story is coming from you.
Whether you dictated it or transcribed it, I just look at AI as something that increases the accessibility of storytelling for people.
Maybe English is your second language or you're a visual person.
Joanna: Yes, it's interesting. I feel like because we describe ourselves as writers, for a long time we've used the number of words written. You know, people will say, “Oh, I wrote 2000 words a day,” or, “I wrote 10,000 words today.” We've really based the value of being a writer on how many words we write.
Therefore, I think people are struggling, because you can generate 3000 words with one prompt from an AI—and that's where we are now, I mean, goodness knows where it will be in a year or two. I think I have struggled, and maybe other people are struggling, with this question of—
What is our value if it's not generating words? So how do you see that question?
Rachelle: I think your value is making sure those words are words that people want to read. That's the same whether you're doing your messy draft or not, too. I mean, before AI, I wrote 90 books. I can write 50,000 words in two weeks. I've done all the NaNoWriMos and all that.
So the thing is, you as the creative person, you can generate the words, but it may not be words anyone wants to read, maybe you don't even want to read it. So you're also the curator of those words.
Basically, it still comes down to you're the storyteller. You have to have a story worth telling.
I mean, you don't want to just report what you see without putting meaning into it. The meaning is what gives you the story, because ultimately, the story is a human to human communication.
Whether I'm talking to you face to face and telling you what happened to me last Friday, or I'm communicating through a novel, it really is still, like I would say, heart to heart. It will come from my heart, but when you read it, it's going through your heart.
Like I said, the AI can throw out a lot of words, and some of the time I have to admit, I don't even read what it gives me. Sometimes I ask it for ideas, and then I do exactly what it doesn't say to do. Or it can spark something totally opposite or just unrelated.
You're a discovery writer, right? So you know that ideas don't come until you start moving. It's like getting on a bicycle. So before I even sit down to write a scene, I could say, “Oh, this is what's going to happen. I think I know what's going to happen,” but when I start writing it, it's like something else just pops into my mind and it deviates.
Joanna: I totally agree. So this is the point.
We are the ones with the creative drive. We have the ideas, we have the prompts, we have the story; we have the emotion. The AI tools, they're just tools.
Someone has asked me that—
They worry that they might not be able to find their voice if they start writing with AI. Or that they might somehow lose creativity in some way. What do you think about that?
Rachelle: I actually think it's valid. I've been writing, oh, I don't know, 12 or 13 years, and you develop the voice by just writing, free writing. So I think it is valid because if I read too much AI, I find myself kind of writing like it, like using some of the same phrases.
So we're sponges, we absorb what we read. I mean, that's how we developed our voice. We read lots of books, right? You probably have your favorite authors, or if you're like me, I read across multiple genres. I love everything I read.
We're, like, human sponges, and the LLM is just like us. I mean, if you've noticed, ChatGPT read a lot of fanfiction. So it gives a lot of the same names and has the same things always happening, and it's only because it's read all these fanfiction sites. So it tends to write like fanfiction.
So I worry about that too. I look at it, and I say, “Oh, I don't want to sound like ChatGPT,” and if I keep reading what it writes, sometimes I catch myself.
Joanna: That's interesting. It's funny, I haven't felt that at all. I feel like this comes down to being confident in your voice.
I think when we've been writing as long as we have, we kind of know when it sounds like us.
So if I read something, I'll be like, that doesn't sound like me, so maybe I didn't write that, or I don't know where that's come from. So certainly in my editing process, I edit pretty hard in order to bring my voice.
I really think that maybe people will just learn to write in a different way. We wrote with the internet; we've had the internet, and we have learned and written in that particular way.
People growing up now, this is now free, kids at school are going to use these tools. So they will probably just learn in a different way.
I still think it comes down to what you, as a creator, have as your creative drive.
I think that is really particular to you.
Rachelle: Right, and actually, I think we don't have to worry as much going forward. As we've seen, Claude Haiku, Sonnet, and Opus, they write differently. I think a lot of what we think is AI is from ChatGPT 3.5, because that was the first one that came out.
You're right, the kids that are growing up today, they're going to be reading as much AI-generated content, if not more, than the classics. Though you could always go back and read the classics, too.
Joanna: So there's definitely the responsibility of the creator. I guess we're saying, and that I'd say, I'm an AI-assisted artisan author. So it's still my work, it's what I want to do.
I am interested in what you think about copyright in the USA. In the UK, our copyright law is that anything created by a machine belongs to the creator.
“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”
Copyright, Designs and Patents Act 1988, Section 9(3)
In the US, you have quite a lot of copyright rulings that still haven't happened around that. So how do you think about that in the US?
Rachelle: Well, first of all, I'm not a lawyer. So I don't know what the legal things are. I think the US, and this is what I think, they think there has to be some kind of human touch in it. So they're trying to measure how much of the human touch.
[Read the US Copyright Office guidance here.]
An analogy is like taking a picture. I think immediately people probably thought, “Oh, well, all you did was click the button, and you took a picture,” but the copyright office ruled that, “Oh, but you had to set up the shot. You had to adjust the lighting. You had to catch it at the right moment.”
So the camera didn't take the picture by itself—well, actually, you can set the camera to take pictures, and nowadays, you might have a video camera that's just been watching something and AI can pick out the best shots.
At the time they did grant copyright to photographs, the thinking was that a human was behind it who pushed the button and composed the shot. I actually think AI prompting is more work.
Everyone just thinks that you just push the button and out pops an article, and that's not the case.
I went to a seminar where one of the lawyers said, “Oh, well, it's all in the prompts. The originality is in the prompts,”
which does go back to that plagiarizing. If you copy and paste somebody's work into the prompt, you can get AI to spit that back out. That's on you because that was in the prompt, it wasn't AI's fault. So the lawyer said that, and believe me, all your prompts are stored somewhere.
I mean, they have not discarded any of the prompts. So he's saying that in the future, he thinks cases will be decided by looking at how creative the prompts were.
Joanna: That is really interesting. I totally agree, and it's one of my sort of red lines. I say to people:
Don't use other people's names or brands in your prompt, whether that's images or music or authors.
I can use my name in a prompt, but I'm not going to use your name, I'm not going to use Stephen King, I'm not going to use Dan Brown. I'm not even going to use dead authors because I want my own voice. So I think that's really important.
It is also interesting because in the early days—I say the early days, but it was only like last year—I was still taking screenshots of prompts, just in case.
Rachelle: Like I've got to save these?
Joanna: Well, no, it was in case I had to prove that this was my own work. So I was keeping that, and I took pictures of my edits; I was quite paranoid last year. Now we're in mid-2024, I'm starting to relax a lot more.
Let's just think about what's happened. I mean, as we're speaking now, last week they released GPT-4o, the Omni model. We've had Google releasing Gemini 1.5, and Microsoft has announced new PCs that will have AI in them. I mean, the pace is so fast now, and Apple's going to announce something soon.
How do you adjust to the pace of change?
Are you, as you said earlier, are you changing your process all the time? How do you stay focused, rather than getting sidetracked?
Rachelle: Well, it is harder to stay focused because there's always some new toy that comes out. Just yesterday, I got into the Hunch beta, which is basically a drag-and-drop prompt sequencing tool.
So you can put in context blocks, and then you can drag that context and feed it into these AI blocks and it does something to transform it. Then you can feed multiple context blocks into AI blocks, and multiple AI blocks into another one to aggregate the content, or you can split it out in different ways and use different LLMs for each output.
So yes, it's hard to stay focused. I think once I get into a story, I do focus on that story. Then I keep kind of an ear to the ground on what's going on.
So I joined the Future Fiction Academy because it's a group of people, Elizabeth Ann West, Steph Pajonas, and Leeland Artra, who are all over the place looking at all this AI. They are also real writers because I knew them from indie publishing 10 years ago.
So they look at these tools and they're always thinking of new methods. It's not just them, it's the whole group in Future Fiction Academy. Somebody will say, “Oh, did you see this Hunch thing?”
So Hunch was brought in by somebody else who said, “I use this to sequence these prompts, and I wrote my scene briefs, and then I had five different LLMs write the scene, and then I'm going to look at them all and pick out the best ones.”
So it helps to join a group of active authors who are focused on their writing, because each one of these authors is still focused on their author career and not the AI.
AI is a means to an end
— not like the YouTubers, who have their uses too, but they are focused on the AI. So they're always looking at the new AI and how it came out. It's also great to subscribe to a few of their channels so you kind of know when something's coming.
Also, you have to know, well, okay, I'm not going to distract myself with the new music stuff because I don't really use music in my work, but I know it's there type of thing.
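[The idea behind that kind of prompt sequencing can be sketched without any particular tool: send the same scene brief to several models, collect the drafts, and choose the one you like. This is only a rough outline in Python, not Hunch's or Rexy's actual interface, and generate() is a placeholder for whichever API client you actually use.]

```python
# Hypothetical fan-out: one scene brief, several models, pick the best draft.
def generate(model_name: str, prompt: str) -> str:
    """Placeholder -- swap in a real API call for each provider you use."""
    return f"[draft from {model_name} would appear here]"

scene_brief = "CHARACTERS: ... SETTING: ... BEATS: 1. ... 2. ... 3. ..."
models = ["model-a", "model-b", "model-c", "model-d", "model-e"]

drafts = {
    m: generate(m, f"Write a 3000-word scene from this brief:\n{scene_brief}")
    for m in models
}

for model, draft in drafts.items():
    print(f"--- {model} ({len(draft.split())} words) ---")
    print(draft[:200])
```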
Joanna: Well, what do you think's going to happen next? I mean, how do you think things are going to change in the next year or two? I guess we're looking at maybe GPT-5, which might be another step up.
I guess some people think that that will just mean we can write books even faster. As you said, you were writing books pretty fast before, and romance authors are fast. So I don't really see it as a speed thing.
How do you think things will change, both creatively and in the business of being an author?
Rachelle: Well, it's hard to say. I mean, look at ChatGPT 3.5, now we're looking at it like training wheels. What we have today is Omni, and like you said, GPT-5 will come out. I don't really know, I just know that as long as these companies are fighting it out, we get access to the latest and greatest.
So I think I'm more worried about when the industry consolidates, and all the best writing tools, the AI that's able to not just spit out words, but the one that can analyze novels. Believe me, I'm sure these publishing companies already have it.
I have heard somebody say that Netflix actually has analyzed streaming behavior of their customers. So they know when the customers quit watching the video, they know when they rewound, and they know when they watched it all the way through without stopping.
So they've analyzed those story structures to come up with better stories. I'm pretty sure that anybody who owns a reading app knows this.
We buy a lot of books we never read, I mean, especially free books. You downloaded them, maybe opened to the first page, read the first page, and dropped it.
Those owners of those reading apps know full well which books have caught on, which books are the ones that it's 3am and you haven't stopped and you just keep going and going and going. So they have all that data.
So think about once they train their AI to recognize that kind of pattern—patterns of story, not just words, because right now, today's LLMs are just looking at word patterns.
Then we're looking at AI agents that can analyze the patterns of the story, like the rising action, the conflict and tension points, all of that. Then they can actually generate a story, critique the story, and match it to readers' preferences.
Then maybe we may just become providers of experiences, I suppose.
Joanna: I mean, let's fast forward. There's going to be perfect algorithmic fiction, you know. It'll be perfect, people will love it. They'll go and they'll get that, and that will be a lot of what people read. That's why I say to people —
You need to double down on being human, because you are not an algorithm, and I'm not an algorithm.
So I think that there's still a place for the human writer, who is flawed. We have flawed writing. So I think there's room for both.
Rachelle: That's the whole thing about romance, the characters are flawed, but they're still lovable.
Joanna: Yes, so let's hope we are!
Rachelle: It's going to be interesting. It's almost like the way with social media. They've done studies on dopamine hits, and so they made their things addictive so that you're always scrolling and scrolling and looking at the videos and hitting the likes. That's all these little shots of dopamine.
So they've done all that research on how human minds work to get you addicted to a platform. I wonder if the AI can also create books and stories that you just can't put down because it just kind of knows. It can individualize this for every reader. If your Kindle Library's as big as mine, it knows what I'm really interested in, and not what I say I'm interested in. It knows if I buy a book because I liked the author, but then I never read the books. It knows what you're really doing, and it can personalize that for you.
Joanna: Yes, well, we certainly live in interesting times. It's been so great to talk to you.
Tell people where they can find you and all your books online.
Rachelle: Well, I have a website, RachelleAyala.net, but you can just find me on Amazon. Just type in Rachelle Ayala and AI Author’s Journal or Romance In A Month, and then you'll find my nonfiction books. Then for the fiction books, I think type in Bad Boys For Hire, or something like that, and you'll find my fiction books.
Then I did recently start a new pen name using my real name, Clare Chu, C-L-A-R-E-C-H-U. This is much more AI. I decided to do these humorous guidebooks that are called Misguided Guides.
So my first book was Why Your Cat Is Plotting to Kill You. I made the cover with Midjourney, so I'm showing this to you on the screen.
Joanna: That's very cute. I think experimentation is fantastic, and you certainly do that. So thanks so much for your time, Rachelle. That was great.
Rachelle: Okay, sure. It was great being on. Thank you, Joanna.