Podcast: Download (Duration: 45:07 — 36.9MB)
Subscribe: Spotify | TuneIn | RSS | More
How can co-creating with AI tools enhance your writing process — and make it more fun? Shane Neeley talks about his AI-augmented writing and visual art creations.
This futurist show is sponsored by my Patrons at Patreon.com/thecreativepenn. If you find it useful and you don't want to support every month, you could Buy Me A Coffee (as I drink a lot of it!).
Shane Neeley is a data scientist and software engineer. He's also the author of AI Art – Poetry and Stone Age Code: From Monkey Business to AI.
You can listen above or on your favorite podcast app or read the notes and links below. Here are the highlights and the full transcript is below.
Show Notes
- Shane's background in bioengineering and programming
- How co-creating with AI is fun and brings a different spark to creativity
- The co-creation process with AI writing and image tools
- Tips for prompt engineering and how to change your mindset around AI
- Why it's important to make art with technology
- AI-augmented creativity and how we can work with AI tools
You can find Shane Neeley at ShaneNeeley.com and on Twitter @chimpsarehungry
Transcript of Interview with Shane Neeley
Joanna: Shane Neeley is a data scientist and software engineer. He's also the author of AI Art – Poetry and Stone Age Code: From Monkey Business to AI. Welcome, Shane.
Shane: Hey, Joanna. So happy to be here.
Joanna: I'm excited to talk to you. So, first up, tell us a bit about how you went from lab scientist to programmer and then author.
How does your biological background help you to understand AI?
Shane: I worked in laboratories after my undergrad for several years, as one does, looking for stem cells in monkey knee cartilage, engineering new viruses to inject into monkeys, various radioactive things and inhaling formaldehyde, and all of that.
I eventually thought, ‘Yuck. I don't want to do this anymore.' I joined a lab that had a lot of data, because they were doing genetic sequencing, and the boss was using an old programming language from the '90s in some scripts, and I was able to upgrade some of his scripts into some more modern Python programs.
He was super happy with that, so instead of having me do all the monkey procedures and virus work, he had me sit down and program and help the lab out with that. So that's how I got started, and found out that being a coder was a far preferable job for me.
And I've been doing AI writing in the last year, and it's been amazing. For an engineer-biologist like myself, the start of the pandemic was pretty much exactly when I had some extra time to become more creative. But I wanted to use my skills as a programmer in order to do that.
And in my day job right now, I'm at a cancer research search engine. We rank the various documents for clinical trials and publications on cancer treatments. And so we have millions of documents of language.
We built a search engine so that we could rank the most relevant ones for a patient, the most quality ones at the top of a search, obviously, like Google. You want the best things to come up first. I'm in charge of AI and machine learning at that company.
I spend a lot of time in natural language processing (NLP), and a couple of years ago I started going to conferences and finding out about transformers. Specifically, we use the BERT model a lot for language understanding, which I want to get back to with AI writing.

When I had some extra time on my hands, I was able to use some of these same platforms to do the language generation part. And there was a total spark when I started generating language from a robot that I thought was hilarious.

That became my first two books, which I wrote and published this year. A lot of that is thanks to people like yourself in the self-publishing community, who showed me how it's possible, even with a day job, to manage all of these things, and write books and get them out and distributed across the world. So using my skills, and learning from the self-publishing community, I did some AI writing.
Joanna: That's fantastic. You mentioned a few things there, which were awesome.
You said you were looking to be more creative, and you used the word ‘spark,' and also the word ‘hilarious,' when it comes to writing with what we're calling robots. But this is all software, right? We're not talking about the classic big robot holding a pen. We're talking about software.
Something I want to emphasize is that this is fun. I'm now using this tool, Sudowrite, and I'm just giggling away at what it comes up with. And seriously, in the last decade of writing, I have not been giggling as a writer. Now I spend a lot of time laughing. [Check out the interview on Sudowrite here.]
Can you talk about the fun side, the spark, and how you use AI to help you write the book?
Shane: I'm glad you're having that experience too. Because, yes, as soon as I turned it on and got a GPT-2 model outputting, some of the stuff was just funny to me.
It reminded me of playing fill-in-the-blank Mad Libs as a kid, where you're with your friends making mistakes in the language, and the mistakes are the funniest part.
I also realized that there's so much talk around the AI world of how serious this is, how it's going to take over various industries. When I actually started generating, I thought this could just be good for comedy at this point. This stuff is just funny.
It may not come across that my book is partly a comedy book. I'll be the first to say that Stone Age Code is a strange book. Probably a lot of first-time authors put something out that is their passion project.
I combined my love of biological sciences, and specifically human evolution, and then made a ton of analogies from how we evolved and the primates that we are into how AI is evolving. And then every chapter ends with a robo excerpt, from a GPT-2 model that is fine-tuned on my language, about 10 years of journaling that I've done.
As I was writing the book, I would get halfway through the manuscript and put that back into the training dataset, and then retrain the model. And the model would become progressively more and more like the book. It was also fun to see how the model itself could evolve to sound more and more like me, the more I had written.
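[For readers who code: below is a minimal sketch of what fine-tuning GPT-2 on your own writing can look like, using the Hugging Face transformers and datasets libraries. This is not Shane's actual setup; the file name, model size, and training settings are placeholders showing the shape of the workflow.]

```python
# A minimal sketch, assuming the Hugging Face "transformers" and "datasets"
# libraries. "journals.txt" is a placeholder for whatever corpus you want the
# model to imitate (journals, earlier chapters of a manuscript, etc.).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

raw = load_dataset("text", data_files={"train": "journals.txt"})
raw = raw.filter(lambda row: row["text"].strip() != "")  # drop blank lines

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_dataset = raw["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gpt2-myvoice",
    num_train_epochs=3,               # more epochs = sounds more like the corpus
    per_device_train_batch_size=2,
    save_strategy="no",
)

Trainer(model=model, args=args, train_dataset=train_dataset,
        data_collator=collator).train()

# Retraining as the manuscript grows just means appending new chapters to the
# corpus file and running this again.
model.save_pretrained("gpt2-myvoice")
tokenizer.save_pretrained("gpt2-myvoice")
```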
Joanna: I want to dispel the myth that you just press a button, hit generate, and there's a book. Obviously, most people listening are not programmers, so they're not going to build their own model. And we'll come back to fine-tuning. In terms of how you actually used the model to write, did you use it to generate ideas? Did you use it to actually write for you?
What did you use it for, and how did the creative process evolve?
Shane: I listened to your episode on writing with GPT-3 with Paul Bellow, and he was really spot on. It was cool to hear that he did seem to have a similar process, and specifically a heavy influence on the prompt engineering aspects. You can use a smaller model, a GPT-2, that you can train yourself. If you do really intelligent prompt engineering of what to say, then the results can be more relevant.
I started writing Stone Age Code a year ago. I quickly also found other AI techniques that were fun and creative, specifically style transfer art. And I got distracted by that. I realized that I could put a lot of these cool images, these AI neural style-transferred images, into a book, and produce a poetry book.
So, what I did with that, the first book that I put out was, I put one of these AI images on one page, and on the next page was a poem about it. And half of the poems in the book were written by poets and collaborators. I have friends that are poets.
I have T.M. Foxglove in there. My editor, Adam Cornford, who's a famous poet, is in there. And I found some other collaborators, who would either write the poem about the AI image and their thoughts about it, or they would just write the first line of the poem. And then that's the prompt for the poem-writing robot.
The poem-writing robot then fills out the rest of the poem. That one wasn't fine-tuned on my writing. It was fine-tuned on a Kaggle open dataset of 18th and 19th-century poetry.
I would find that my cherry-picking frequency seemed to be about 1 in 20, where I would generate 20 poems for one line, and read them, and maybe one of them was good enough to put into the book.
With this generation, you can press enter and generate a whole bunch of examples, but you still have to burn your own eyeballs out at deciding which is the best one. And then once you have it, you've got to format and edit it, because the output quality might not have been great.
That said, there are certain things the robot said that I or a real poet never would have thought of. Some of it's funny, some of it sounds insightful. So yeah, there are gems in there. At this point, using GPT-2, I found that maybe 5% or less was keepable for a book.
Joanna: With Stone Age Code you did things a bit differently. It's kind of part memoir, part technology guide. And then it's got these inserts, which are the AI writing, which I thought was quite interesting because I feel like a lot of people who are working with these tools right now are integrating the AI-generated text more, whereas you called it out in a text box.
It's like ‘here's the robot talking about this.' And that's what also made it quite funny, I guess. But you mentioned the intelligent prompt engineering. I think prompt engineering is a great phrase. And you also use the term ‘machine learning literacy.' And this is really important.
I use the quote from Kevin Kelly, who says, ‘You'll be paid in the future based on how well you work with robots.' And, of course, again, he means the wide AI. And you have a job right now where you're paid because of your work with AI.
What are some of your tips for prompt engineering?
Shane: ‘Machine learning literacy' is the phrase I'm using in marketing to try to explain what people will get out of the book. Because it is a general readership book, but a general science readership, for people with a heavy interest in machine learning specifically.
Maybe they read a lot of articles about it. You and your audience, I think, are perfect for it. But I did, at the same time, distill an entire machine learning textbook into it, which I tried to explain with analogies to biology and fun, light-hearted examples. Almost every piece of jargon that I would use in my day job as a machine learning engineer, I put in the book.
If you read it, and then decide to go down the line of doing some programming, or working with people who are trying to make more advanced models, or making a model of yourself, which is currently a little hard to do with commercial services, it's helpful to know this jargon: learning rate, optimization algorithms, the number of epochs to train a model, and the accuracy of how well it is predicting or sounding like you.
These things that are involved in training, which I do as a programmer, I distill it in there, and I say that you'll gain machine learning literacy.
On prompt engineering: I write a chapter, and then maybe summarize it. You could use the chapter title, or the headlines of individual paragraphs, and build a big file of prompts, which I would then run the generator on.
And it could produce, say, if I set it to write 20 examples for each of these prompts, and I have 10 prompts, now I've gotta look through 200 examples of what might be good to keep in the book.
Joanna: What is the prompt? Like what is a concrete example?
Shane: A prompt would be, say, ‘Neanderthals are,' and let it fill in the blank. And I would say, ‘Homo sapiens are,' or ‘Denisovans are.' So, put these human species in there, let it fill in the blank, write 20 of them, and maybe one of them is funny.
Or you could also take an entire paragraph as a prompt. And then it could understand more of what that paragraph is about, and maybe be more relevant. So if you're writing a short joke, you can have a short prompt, and maybe it finishes one sentence.
But if you had writer's block, and you were stuck writing one of your thriller novels, and you could put an entire paragraph about the characters, and then leave the sentence unfinished and see what it comes up with, to see if it can break your writer's block.
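[A rough sketch of that ‘generate many and keep the best' workflow, using the Hugging Face text-generation pipeline. The prompts, counts, and model name are just examples, not Shane's actual code.]

```python
# Sketch of generating many candidates per prompt for later cherry-picking.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # or a fine-tuned model directory

prompts = ["Neanderthals are", "Homo sapiens are", "Denisovans are"]

for prompt in prompts:
    candidates = generator(
        prompt,
        max_new_tokens=60,
        do_sample=True,            # sampling, so each candidate differs
        num_return_sequences=20,   # 20 candidates per prompt
    )
    for i, candidate in enumerate(candidates, start=1):
        print(f"--- {prompt!r}, candidate {i} ---")
        print(candidate["generated_text"])
```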
Joanna: Something I've definitely been struggling with is changing my brain from the way we use prompts with something like Google, where I might just type in, ‘What are Neanderthals?' If you phrase the question like that, it will come up with some kind of specific answer from a textbook, probably.
Whereas if you have ‘Neanderthals are,' and then carry on, although, as you said, it's a bit too short, really, but you're going to get something quite different, because it's not a question, it's almost like ‘complete it,' rather than carry on.
What are the other ways that you have to change your brain around in order to pull meaning from the machine?
Shane: The prompt engineering and the fine-tuning go together. If you were using a commercial GPT-3 that's been trained on a lot, but certainly not trained on your thriller novels, you may have to really prompt it, as in, input all of the characters and backstory and your type of writing in order to get it to write like you.
But if it were fine-tuned on you, as mine are, and I had trained it on a bunch of other stuff I had written about Neanderthals, then even a short prompt can be relevant to me, if that makes sense. A generic model might not be relevant. And you might have to really prompt it intensely. Maybe putting the dialogue quotes and the name of the characters and this and that.
But imagine if you had a series of books and you fine-tune the model on that, your model is now intelligent, generally, but it's been tuned on the downstream task of your series. It should know quite a bit of how your books work. And so you might not need as heavy prompt engineering.
Joanna: Again, my thoughts are evolving on this every day at the moment, as I learn more and play with things. And I was thinking, even when I sent the questions to you, yes, I would like to train a model with my whole corpus as J.F. Penn, and maybe with some other people who write in my genre. And then we can look at writing in our voice, or my voice, and as you were saying, fine-tuning the model.
But since I've been looking at Sudowrite and the GPT-3 beta in a different way, what I'm realizing is the fun part, for me, is a brain that's different from mine. I'm finding it far more delightful, I'm finding delight.
And as I said, I'm kind of giggling, going, ‘Wow, I just never would have thought of that.' The danger of trying to train a model to be like me is that I can do that myself.
I want a model that is not like me, so that I can have the benefit of almost collaborating with something else.
Shane: Yeah, and with fine-tuning, you can make it a little like you, or you can make it 100% like you and basically plagiarize. You always have to be careful of plagiarism with these models because they may write the same thing that's been written before.
One thing I did play around with in the generation step is a parameter you may have seen called temperature, which controls the randomness of what it's going to predict next, what it's going to pull from.
If you turn it up, you get a crazy mush that may make no sense. If you turn it all the way down, you may get exactly what's been said before.
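[A quick sketch of what that temperature dial looks like in code, again with the Hugging Face pipeline; the values here are only illustrative.]

```python
# Lower temperature = safer, more predictable text; higher = more random mush.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Neanderthals are"

for temperature in (0.2, 0.7, 1.5):
    result = generator(prompt, max_new_tokens=40, do_sample=True,
                       temperature=temperature, num_return_sequences=1)
    print(f"temperature={temperature}:\n{result[0]['generated_text']}\n")
```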
One fun example of fine-tuning I did: I had an uncle who passed away, but we used to write a ton of emails to each other, and I downloaded all his emails from Google and fine-tuned the model on him. I made it highly accurate, so it really sounded like him, and did some generation, and it was nice.
I remembered that he used to always ask, ‘When should we do lunch?' These kind of phrases. You can make it more or less sound like you as well.
I'm glad you're having fun. Certainly, even with my own models, it comes up with stuff I would never think of. And one weird thing is it always talks about Elon Musk, even though none of my writing has Elon Musk in it. It has some reverence for this man.
Joanna: That might be your model. I haven't come up against that.
Shane: He's just all over the internet. They won't stop talking about him.
Joanna: I want to circle back to the creativity aspect, because in a lot of people's minds, the word robot, the word AI, all the stock images online, it's all sort of metal, with lots of angles, very technical.
I feel like people don't understand that this can be very creative. This is very creative. We are creating things. And you mentioned then the style transfer art, and you have made this AI art, you've used it as part of the cover. You've got images in that other book. You're also selling them as NFTs, which I've done previous shows on.
[Click here for the episode on NFTs for Authors]
Why do you think it is important to make art and to talk about art and creativity, rather than just technology?
Shane: For me, these generative art techniques are super important, because I suck at painting, and I suck at drawing. But I realized I could make unique art that I truly think looks good, and now I sell it on t-shirts and as NFTs.

Once I realized that, as a programmer, I could be creative in the programming aspects of it and produce something that looks aesthetically good, I loved it. It's fun to do.

That's the importance of it for me: I wouldn't normally be an artist. A real trained art school person might scoff at these techniques, because I'm not drawing it out. But for me, and for people who like the work enough to put it on a mug, or collect it in some way, I think it's cool.
For the poetry book, I did about 80 images that were style transferred in various creative ways, where I took a stock photo of some gorillas playing, and then that's the content photo. And then the style photo, I took an image of the back of server racks, where they have all the networking cables all nicely structured.
I ran the style transfer program on it, which I wrote myself. It was based on PyTorch, but I creatively edited it so it does a lot of weird random variations. It also produced a ton of junk, but some of it caught my eye and I thought looked brilliant.
Now I have an image of gorillas that look like their fur is made from those structured cables on the back of server racks. And so there's tons of creativity involved in deciding what images to use. You spend money making it run on these GPUs to produce all this art, and then you use your aesthetic sense to pick it.
For Stone Age Code, since it was a print book and a Kindle book as well, I wanted to style transfer the images so that they're black and white, so that they print well. So I took stock photos again, and used something like sheet music, a sheet of music that's all black and white.
I transferred black and white style onto a photo, so that it's like an advanced kind of Photoshop technique, but using AI to do that style transfer. It's important to me because it was fun. It was a fun distraction from writing. When writing got hard, I could produce the cover art and the interior images.
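[For the curious, here is a condensed sketch of classic neural style transfer in PyTorch, in the spirit of the gorilla/server-rack images Shane describes. It is a generic Gatys-style implementation, not his modified script; the file names are placeholders, it assumes a recent torchvision, and the usual ImageNet normalization is omitted for brevity.]

```python
# Gatys-style neural style transfer sketch: optimize the target image so its
# VGG features match the content photo's structure and the style photo's texture.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load_image(path, size=512):
    tfm = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tfm(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load_image("gorillas.jpg")       # content photo (placeholder filename)
style = load_image("server_racks.jpg")     # style photo (placeholder filename)
target = content.clone().requires_grad_(True)

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

style_layers, content_layer = [0, 5, 10, 19, 28], 21  # VGG-19 conv layers

def extract(x):
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in style_layers or i == content_layer:
            feats[i] = x
    return feats

def gram(f):
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

with torch.no_grad():
    c_feats, s_feats = extract(content), extract(style)

optimizer = torch.optim.Adam([target], lr=0.02)
for step in range(300):
    t_feats = extract(target)
    content_loss = F.mse_loss(t_feats[content_layer], c_feats[content_layer])
    style_loss = sum(F.mse_loss(gram(t_feats[i]), gram(s_feats[i])) for i in style_layers)
    loss = content_loss + 1e6 * style_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

transforms.ToPILImage()(target.detach().squeeze(0).clamp(0, 1).cpu()).save("gorillas_styled.jpg")
```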
Joanna: You mentioned that you use stock photos. One of the big questions about all this stuff is the base material we use to train things. I don't imagine that the stock photo had a license for use in style transfer training models, or something like that, in the same way that most of the data in GPT-3, for example, didn't have a license.
What are your thoughts on licensing?
Shane: With all those stock photo websites, whether you pay for them or not, about half of mine I purchased the license to use from iStock, or got from Unsplash, where photographers upload their photos and put an ‘any use' license on them.

I think the artists understand that people are going to edit those photos in any way, for marketing purposes or whatever. I don't think any of the artists would have issues with having their photos style transferred and edited. And at least they've signed over the rights to iStock or Unsplash for that.
iStock itself has its own limits. I can put the images in the book, but I can't sell more than 500,000 copies without upgrading my license. So I'm going to try not to do that.
Joanna: I think it's interesting because it's much, much easier to portray artistic use of AI with visual art, which is why I think it's become so big much more quickly. Whereas with words, I don't know why people just seem a lot more precious with words.
There's a huge, flourishing AI art, and music community now, right? I feel like with the writing community, we're well behind on understanding that you can augment with AI in order to make art and have fun. I just feel it's so playful, and what you're talking about is being playful.
My husband's a programmer, so I understand that programming is also creative.

But a lot of writers, working in any language that's not a programming language, don't really understand that programming is creative.
Shane: I have some thoughts on that, for sure. I know you've talked about people worried about a tsunami of AI-written books out there. But also, I think your last guest, Paul Bellow, was spot on in the fact that AI may get better at writing, but the recommendation engines that recommend the types of books you want to read and the quality of books are also getting better, arguably, faster than the writing is, because Amazon wants to sell you stuff.
They're going to recommend what's best. I wouldn't worry about a tsunami of books, for one. And then for the people being hesitant towards using these tools, it's kind of like if you had a writer's group. We have a lonely life now, where we're all writing inside, and on lockdowns and this and that.
But if you had a writer's group where you work on your novels together and bounce ideas off of your friends, and they suggest something about your novel that's a new character or something that's completely novel, I don't see why that is so much different than using this AI muse to come up with new ideas.
Joanna: And there's a lot in my head, because, similar to you, you generate things or you do various prompts. And then you have this editing process. Part of the creativity is the choosing of the things that you then might riff off with your own writing.
So first of all, it's the prompt, you have to design the prompt, and then you get some stuff returned to you. And then you have to edit what is returned to you. And then once again, you might use something from that to prompt again.
To me, it's almost changing the editing process, and the choosing process is a part of the creative side.
Shane: Totally. I hadn't written a book before, but this book certainly wasn't easy to write, even though I had these tools. The cherry-picking aspect of it, which is putting the human back in the loop of the generation, was hard and creative. You use your sense to decide which part was good.
For that, I have thought of an idea to try to make it even easier. I told you earlier that in my day job I use mostly classifier models. That is where an AI can tell you the type: ‘This section of the book is about baking pies.'
If you're a cozy mystery writer, you want to have a lot of those type of sections about baking pies in your mysteries. Or maybe you can also just have a binary classifier, which is a model that decides whether something is just quality or not, good or bad, one or zero.
And you can get a score based on that. My next goal, and some ideas for the next book I want to write, is to automate the cherry-picking process a little bit more, probably using a smaller model like GPT-Neo, which is much smaller than GPT-3 but should be better than GPT-2.

It should be affordable enough for me to train. Then I'd add another layer, like a BERT model, that classifies what type of thing it is and how high quality it is, and takes out some of that cherry-picking. I hope that the next book has more AI influence and less burning out my eyeballs than this one did.
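[A hypothetical sketch of that automated cherry-picking idea: generate candidates, then score each one with a classifier and keep only those above a threshold. Here a public sentiment model stands in for the quality classifier; in practice you would fine-tune a BERT-style model on your own ‘good' vs. ‘bad' examples.]

```python
# Generate candidates, then let a classifier do a first pass of cherry-picking.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
# Stand-in scorer: a public sentiment model. A real version would be a
# BERT-style classifier fine-tuned on your own good/bad labels.
scorer = pipeline("text-classification",
                  model="distilbert-base-uncased-finetuned-sst-2-english")

candidates = generator("Neanderthals are", max_new_tokens=60,
                       do_sample=True, num_return_sequences=20)

keepers = []
for candidate in candidates:
    text = candidate["generated_text"]
    verdict = scorer(text)[0]  # e.g. {"label": "POSITIVE", "score": 0.93}
    if verdict["label"] == "POSITIVE" and verdict["score"] > 0.8:
        keepers.append(text)

print(f"Kept {len(keepers)} of {len(candidates)} candidates for human review.")
```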
Joanna: Is that related to the idea of the generative adversarial network (GAN)?
Shane: It is, yeah. That would be another way to take it.
Joanna: Explain that for people who might not know what that is.
Shane: You kind of have two brains that are bouncing off each other, two neural networks, one's generating, one's discriminating. And both have functions inside of them that allow them to get better over time.
It produces something maybe illegible at first, if it was writing, or if it was art, maybe it produces something ugly at first. And then the discriminator network can decide whether that was good enough or not. And they both get better.
So that's one way to take it. I'm not sure if anybody's done that so much with transformers, like GPT in writing, to make a GAN out of those. I'm sure somebody has tried. There's probably some research papers.
Mine was going to be more of a generator and discriminator, but the discriminator doesn't get smarter during the training period. It'll just be pre-trained in order to know quality, if that makes sense.
I haven't built it yet. But I got GPT-Neo running yesterday, actually generating. I did the ‘Neanderthals are' prompt to see what GPT-Neo was saying. That's not easy, though; every time you start one of these new things, you see there's a reason they pay machine learning engineers pretty well. There's a lot of work that goes into building these things.
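[To make the ‘two brains bouncing off each other' idea concrete, here is a toy generator/discriminator loop in PyTorch on simple 1-D numbers rather than text. It's purely illustrative and not what Shane is building; text GANs are much harder.]

```python
# Toy GAN: the generator learns to produce samples the discriminator can no
# longer tell apart from "real" ones, and both improve as they compete.
import torch
import torch.nn as nn

def real_samples(n):
    return torch.randn(n, 1) * 0.5 + 3.0  # "real" data ~ Normal(3.0, 0.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1. Train the discriminator to separate real from generated samples.
    real = real_samples(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print("Generated sample mean:", G(torch.randn(1000, 8)).mean().item())  # should drift toward 3.0
```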
Joanna: Exactly. And of course, again, most people listening are not programmers, I'm not a programmer. And so a lot of us have to wait until there are more tools available. So, of course, I've got my list at thecreativepenn.com/aiwriting, and I've been listing things.
Obviously, there's more tools now with the images that you can generate. And I feel like where we are now is just at the very beginning of where this is going to go.
I like the phrase, ‘AI-augmented.' Is that how you feel?
Shane: Yeah, I like that. Another marketing phrase would be ‘creator empowerment AI' or ‘AI-augmented.' People are putting a lot of effort into these things. And the worst may come true eventually. I guess all of our creative endeavors may be done better by an AI completely, looking far into the future.
Obviously, we're nowhere near that. We can barely write a funny paragraph at this point. But yeah, people are putting a lot of money and a lot of effort into these things. AI-augmented may become more and more augmented to the point where it's AI-produced. And then we humans are stuck with ‘What else do we do? Okay, AI's taken creativity away from us. What do we do now?'
As you say, take back your humanity, try to just be more and more human.
One thing humans always did was connect with each other, be friends to each other, take care of each other. And there's so many problems left to solve that I don't think we're going to be out of a job anytime soon.
It'll be interesting to see these things get more and more creative on their own, which is my goal. I would like my next book to be more AI-augmented than these past ones. And we'll see how that evolution goes.
Joanna: I think the important thing is that we are creative beings. As humans, we are creative beings. And so regardless, if readers want to only read books written by AI in the future, which I don't think they ever will, but even if they did, we will still write, we will still create.
And the payment model will no doubt change, but I really do believe that people will always want to read the output of a human, even if that human is AI-augmented. So, I'm not worried at all about a deluge of AI-generated books.
But it is interesting, because I know you have daughters, and I do have emails from listeners who say they're not worried about us immediately right now, but they are potentially worried about the impact on their children's jobs. What do you think? Because you have young kids.
Shane: We are creative. I put a chapter in Stone Age Code about where that comes from. Some of the anthropology research thinks that it's really built into us because it helped with mating strategies. So that's never going away. That's core to who we are.
But like a peacock with its large feathers, ‘What's it doing with those feathers?' ‘Oh, it's doing it because the female peacock is attracted to it.' And over the generations, both male and female had these traits that showed that they are fit. It's called ‘runaway sexual selection' in evolution.
And so for us, that could be where creativity came from: somebody's chipping a new rock, somebody is painting something new in a cave, somebody is creating something new, and it became, maybe not totally helpful for survival, but helpful as a mating strategy.
That's never going away. That's who we are. We're always going to want to create.
And whether or not the AIs write objectively better than us, we're still going to want to read what other people did, because we'll still be more interested in people and who they are than in AIs.
And for my daughters, like I was saying, you could doom scroll and see a million problems left to solve, and those will probably take creative solutions, so I'm not too worried about their jobs in the future. But if they want to become writers or producers, or, who knows, dancers, singers, musicians, these things, they'll have to learn to work with AI, because it's so prolific.
GPT-3 is writing 4.5 billion words a day at this point [The Verge]. I don't even know how many words human authors on Earth write at this point. So yeah, you will have to keep abreast of it, just like you had to get a typewriter, then you had to get a computer, and now you're going to have to work with creative robots.
Joanna: The directive role definitely remains with the human. As you were talking, I was thinking about an article I saw about Hollywood. There are a lot of AI-generated actors now, who are replicas of real actors. The pandemic demonstrated how much cheaper it would be to use these replicas.
And this is really seriously going to bring down the costs of filmmaking, or you can make a film entirely just within your computer, with ‘real actors.' But the director still has to have a vision of what they want to output.
I feel the same with our writing: you still have to have a vision of what you want to end up with in order to work with the tools. Even if you want to press a button and generate, you still have to give it parameters as to what to generate. So, again, I see that director role, that sort of choosing.
People have been using Photoshop as photographers for years, right? A lot of that is choosing which knobs and dials and buttons to click in order to create a finished artistic product.
That's how I see working with AI – you still have to be the director.
Shane: That's definitely the role I took with the poetry book. I put my name on it. But I didn't write a single poem. I organized everything. I didn't take the photos and style them, but I organized everything into a vision. Can AI be trained to have objectively creative visions, and be producers and directors, to know what the humans want?
Joanna: Why would they want to? Why would they want to do that?
Shane: I guess we'll have to see how that goes.
Joanna: I want to end by circling right back to what you talked about at the beginning, which is that your job is in the cancer research field. And this, to me, is the main point: most of the energy around AI development in the world goes into the really big problems like healthcare and the environment, lots of things that really are so much more important.
99.999% of AI work in the world is not related to creative writing.
So we don't really need to worry, because most people who are developing AI stuff are doing it for either really, really good reasons, or to make loads and loads of money, neither of which are related to writing books.
Shane: We can all hope it stays that way, too, because there's plenty of real problems to solve, and the creativity is super fun. If the robots do solve all those real problems, then we get to sit back and write books and do more podcasts and just be human and be creative, so I think that is the goal.
If this AI can solve cancer research and treatment, then that's wonderful. I'm out of a job, but I could write more books.
Joanna: Exactly.
Where can people find you and your books online?
Shane: The latest book is at stoneagecode.com, and you'll see all the retail places it's available at.
My personal blog and website is shaneneeley.com and that's Neeley with a triple E. Taking your slogan there a little bit. I'm all over the internet as ‘@chimpsarehungry.'
Joanna: Fantastic. Thanks so much, Shane. That was great.
Shane: Thank you so much. This was so fun.