Webinar #3 – AI: A point of inflection for the arts?

Our third Creative Exchange webinar, co-produced with Creative Victoria, explores your questions and concerns about AI's impact on the arts, and is now available to view.

To describe artificial intelligence (AI) as a hot topic in the arts and creative industries is a bit like saying Taylor Swift tickets were fairly popular when they went on sale recently. Yes, a bit of an understatement.

And with it being so very top of mind for many in the sector, we did things a little differently for the free webinar AI: A point of inflection for the arts? How AI is impacting the arts and creative sectors, and what the future holds. Following Webinar #1 – Thinking outside the box and Webinar #2 – A new way forward: how inclusive leadership is driving change, this was the third webinar in the series that ArtsHub is producing with Creative Victoria under the Creative Exchange banner. And this time, we went out to the sector before the event to canvass opinions and understand your most pressing questions and concerns about the topic.

With the survey results ranging from ‘can AI outsmart humans?’ via ‘how do I protect my work from being used to train AI databases and bots?’ to ‘is there any foolproof way to distinguish AI from non-AI material?’, we received nearly 100 different questions on the subject. There were questions about privacy, governance, partnerships, flexibility, harnessing power, future work impacts, security, ensuring the future of human art, truth-telling, abuse, the replacement of actors, grant assessment, opportunities, ethics, copyright, censorship and empathy.

And in a clear sign that many in the sector see the potential positive impacts of the technology, in a section asking for one-word descriptions of AI, the most common response was “exciting”, followed by “scary”, “powerful” and “tool”. Over half of our respondents could see a possible use for AI in their creative practice – as a time-saving tool, as a prompt to aid or enhance ideation and as a way to support research or marketing. On the other hand, many respondents also identified worries regarding threats to income and/or employment, the devaluation of creative work, IP (intellectual property) and copyright, and the erasure of the human element in creativity and creative practice.

All of these responses were passed on to our presenter so that he could tailor the webinar to address as many of them as possible.

Our presenter

Professor Jon Whittle

Professor Jon Whittle is Director of Data61, the digital technologies R&D arm of CSIRO, Australia’s national science agency. He leads around 800 staff working in artificial intelligence, cybersecurity, quantum technologies, human-centred design and software engineering in applications for the environment, manufacturing, agriculture and many others. Data61 also runs Australia’s National AI Centre, which has a strong focus on diversity in AI and responsible adoption of AI.

Jon has a strong connection with the creative industries. He has spent 30 years as a theatre performer, writer and director. He wrote, directed and starred in a multimedia play that won Best of Fringe at the San Francisco International Fringe Festival. He has also performed at the Royal Shakespeare Company’s open-air theatre, the Dell, in Stratford-upon-Avon. He studied the Indian classical dance Kuchipudi for 10 years, performing in front of over 2000 people in Kanpur, India.

Jon is also host of CSIRO’s Everyday AI podcast, which provides a gentle introduction to AI and cuts through much of the hype surrounding this emerging technology. This Creative Exchange event is presented by Creative Victoria and ArtsHub, in partnership with Today.

Q&A session

The question and answer session was moderated by Beata Klepek, the Experience Design Lead at Today. Today is a purpose-first business, working with clients to design experiences, services and products that produce a more inclusive, equitable and sustainable society. Beata is an award-winning designer, who balances strategy, technology and innovation to solve design challenges with practicality, and lots of heart.

You can still catch up with the first two webinars in this series: Webinar #1 – Thinking outside the box and Webinar #2 – A new way forward: how inclusive leadership is driving change.

Webinar #3 took place on Wednesday 26 July 2023, 11.30am to 12.30pm (AEST), and it’s now available to view on YouTube and below.

Webinar #3 transcript

Claire Febey

Good morning, everyone. Welcome to Creative Victoria’s webinar for July. This month, we’re delving into the world of AI, how it is impacting the creative sector now and what the future holds. We’ll be meeting online today. I want to start by acknowledging the traditional owners of the lands on which we each live and work. I’m on Wurundjeri country today and I want to pay my respects to Elders past and present of this land, and also acknowledge all the First Nations peoples joining us today. If we haven’t met, I’m Claire Febey, chief executive of Creative Victoria. My pronouns are she/her. I’m a white woman with short hair, coiffed back today. I’ve been freshly to the barber. I’m wearing glasses and a high-necked black jumper. Creative Exchange is Creative Victoria’s professional and business development program. And we’re delighted to be partnering with ArtsHub and Today to bring this webinar to you. With the ongoing actors’ and writers’ strikes in Hollywood, the impact of artificial intelligence on the creative industry is a hot issue and it’s been the most requested subject for us to cover in our webinar series. In the creative industries and broader society, AI sparks a range of emotions – curiosity, fear, anxiety, anticipation and excitement at the new possibilities it can open up. In today’s webinar, we’ll dive into this multifaceted issue through a creative industries lens. We hope to empower you with a better understanding of AI, remove some of the fear, and explore the role it can play in the future of our sector.

To lead this discussion, we’re thrilled to have Professor Jon Whittle, one of Australia’s foremost AI experts. Jon is director of Data61, the digital technologies R&D arm of the CSIRO. Jon leads a team of 800 staff working in artificial intelligence, cybersecurity and software engineering, and he oversees Australia’s National AI Centre, which has a strong focus on diversity in AI and responsible adoption of AI. Jon also hosts CSIRO’s Everyday AI podcast, which you can find on all major podcast platforms and is definitely worth a listen. But perhaps lesser known is that Jon has also spent 30 years as a theatre performer, a dancer, writer and director, whose creative practice has taken him all around the world. So, he’s very well-credentialed for today’s discussion.

Following Jon’s presentation, we’ll have a Q&A session moderated by Beata Klepek, who is the Experience Design Lead at Collingwood-based design services firm Today. With a strong background in user experience design, Beata’s work embraces new and emerging technologies to deliver equitable and inclusive outcomes. Beata and the Today team were named finalists at this year’s Victorian Premier’s Design Awards for their work on a social media campaign to help teen boys build healthy relationships online. Before we jump into the discussion, a bit of quick housekeeping. This webinar is being live captioned. If you’d like to access the captioning, just select ‘show captions’ on your Zoom menu. If that doesn’t work, click the link in the chat. Please use the Q&A function to ask Jon any questions you’d like him to answer and use the upvote function to vote for a question someone else has asked. Questions with lots of votes will go to the top of the list of topics for Beata to choose from. We’re recording the session, so any questions you ask will be on that record and the recorded webinar will be available through the Creative Victoria website and also on ArtsHub. In fact, all our Creative Exchange webinars are available to view, so go there any time to catch up on our past events. And finally, the hashtags for today’s event are #CreativeVic and #CreativeXchange. That’s all from me. Please enjoy the session and to start us off, I’ll hand over to Professor Jon Whittle. Thank you so much, Jon.

Jon Whittle

Thank you very much, Claire. And let me just try and share my screen.

All right, thank you. First of all, thank you so much for having me. It’s a pleasure to be here today. And also, I would like to acknowledge the traditional owners of the land that I am on, I’m on the land of the Bunurong people, and pay my respects to Elders past and present. I’m going to try and talk for about 20 or 25 minutes about all things AI and the creative industries. I want to leave plenty of time for questions at the end, because I think that’s the interesting part. And I want to start just by talking about where we are in our AI journey. So, as you will no doubt be aware, 2023 has been a very exciting time for AI, as well as, in some ways, a very scary time for AI. And this all started back in November of 2022, when ChatGPT was launched onto the world. ChatGPT is what we call a large language model. It allows you to essentially converse using plain text with a machine and you get some pretty impressive answers from ChatGPT. And ChatGPT, at the time when it was released, was the most quickly adopted technology in history. But things didn’t stop there. You know, there was a bandwagon that all the tech companies kind of jumped on after that and tried to get their own large language models out to the world. So we saw Google release its Bard platform, we saw Microsoft introduce the LLM into Bing Chat. And we’ve seen hundreds and hundreds of other AI tools come out that do everything from generating textual content to, now, images and video and all kinds of things. So I’ve been saying that 2023 is what I’m calling the year of AI because things have really ramped up. And I’ve been working in AI for over 20 years. And I’ve never seen the level of fervour that we’re seeing right now. And just to give you an example of how quickly things are moving, I always talk about the date of 14 March 2023. Because on 14 March, there were two significant launches of new AI technology in the world. But only one of them actually got much media attention. So on the morning of 14 March 2023, Google announced that it was going to incorporate generative AI technologies into its Google Workspace platform.

So that’s your Gmail and all the Office Suite products. Now on any other day in the world, that would have been a major announcement, it would have grabbed media headlines around the world. But it got almost no media attention, because just a couple of hours later, OpenAI, the company behind ChatGPT, launched its much anticipated GPT-4, which was a much larger, more impressive version of GPT, with up to a trillion parameters in its model. And that really sucked all the air out of the Google announcement. And that’s what the media was talking about on that one day.

And so that’s how fast things are moving: two big announcements like that, on any other day, would both have grabbed headlines. Only one of them did, and the other, although it was a major announcement, really passed us all by somewhat. So things are moving very, very fast, and they’re not going to slow down.

I am, however, always at pains to point out that AI is not a new technology. In fact, AI has been with us since at least the 1950s. The term “artificial intelligence” was actually coined at something called the Dartmouth [Summer Research Project] conference that took place in 1956. And, in fact, a lot of the early technologies that were developed in the 1950s, like the first neural networks and so forth, are, in many ways, the same technologies that we use today. Yes, they’re working at a scale that they weren’t working at then and, yes, we’ve got the availability of large datasets that we didn’t have then, and large compute power that we didn’t have then. But a lot of the fundamental concepts that we talk about when it comes to AI, such as its ability to be intelligent, or its ability to be creative – those conversations were going on back in the 1950s. And the conversations really haven’t changed that much.

And it’s also worth mentioning that the history of AI is one of peaks and troughs: peaks of hype, which we call AI summers, where everybody got very, very excited about the potential of AI and there was lots of investment into AI, but also what we call AI winters, where maybe some of that hype wasn’t fully realised, and so investment dried up. And the first of those AI winters took place in the late 1970s. Back then AI was seen as a way to do machine translation between any languages that you can think of, and there was a lot of hype and excitement around it. But by the 1970s, there was a report brought out by the UK Government called the Lighthill Report that basically said that all of that hype wasn’t really being realised, and almost overnight investment in AI dropped off a cliff. And we entered a period of about 10 years of an AI winter.

Things did pick up, however, in the 1980s. There was an AI technology called Expert Systems that was brought out. And this was trying to codify human knowledge in machines using a set of rules, and then have the machine make decisions that normally humans would make. And, at that time, there was a lot of excitement and hype around how this would replace doctors and how we could have an AI expert system that could diagnose your condition based on these sets of rules, and actually give you the appropriate medication. But once again, the hype really kind of got ahead of itself. And by the late 80s, it was realised that, not least, there were a lot of ethical issues with trying to replace doctors with machines. And so investment dried up in AI again, and we entered our second AI winter. Now we’re in our third AI summer right now. And this probably started back in 2012, when something called deep learning really took off. This was a new way of scaling up neural network algorithms in a way that we hadn’t been able to do before. And this was enabled by the fact that there were fundamental advances in the algorithms themselves. But also there was a lot more data available publicly on the internet, in social media. And these algorithms need lots and lots of data to train themselves on. And also we had a lot more centralised computing in the form of cloud computing and so forth.

And so, since 2012, we’ve seen a massive uplift in the capabilities of AI. And, in particular, generative AI, which is a form of AI (not the only form of AI, but a form that generates content), has been going on for a few years. But with ChatGPT, it got a lot of excitement in the last few months.

One point I do want to make, however, is that for this audience, the role of creativity in AI has actually been a hot topic of debate ever since the very, very early days of AI. And in fact, you know, when I was doing my PhD in AI, almost 25 years ago now, there were people in my lab who were using AI to generate art. There were people who were doing live music shows where they would play musical instruments alongside an AI that was generating music and improvising with them at the same time. And there were lots of debates back then, 25 years ago, about, well, can an AI be creative? Or is that a fundamentally human capacity? And, in fact, on this slide is a piece of artwork that was generated by AI, from a fellow PhD student of mine at the time, a chap called Simon Colton. He invented something called the Painting Fool, which used an old form of AI, where he encoded something called the “travelling salesman” problem, put a twist on it, and used that to generate artwork. And that artwork was actually exhibited in art galleries. And, in some cases, people weren’t told that it was AI generated and didn’t know that it was AI generated. So there’s a long history, actually, of AI doing creative things. In fact, the same person, Simon Colton, as part of his PhD, came up with a system that could generate new mathematical theorems. And, in fact, one of those theorems that his AI system invented was actually published in a mathematical journal. So you’ve got mathematical creativity, as well. So a very, very long history.
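To make that “travelling salesman with a twist” idea a little more concrete, here is a minimal, illustrative Python sketch of the general technique often called TSP art – it is not the Painting Fool’s actual code, and the random points are a stand-in for pixels sampled from a source image – in which a greedy travelling-salesman tour joins the points into one continuous line drawing.

```python
# Illustrative sketch only: a common "TSP art" trick, not the Painting Fool itself.
# Points standing in for dark pixels of a source image are joined into one
# continuous line by a nearest-neighbour travelling-salesman heuristic,
# and the resulting path is written out as a simple SVG drawing.
import math
import random

def nearest_neighbour_tour(points):
    """Greedy TSP heuristic: repeatedly hop to the closest unvisited point."""
    unvisited = points[:]
    tour = [unvisited.pop(0)]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda p: math.dist(last, p))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def tour_to_svg(tour, size=400):
    """Render the tour as a single black polyline."""
    path = " ".join(f"{x:.1f},{y:.1f}" for x, y in tour)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
            f'<polyline points="{path}" fill="none" stroke="black"/></svg>')

if __name__ == "__main__":
    random.seed(1)
    # In real TSP art these points would be sampled from the dark areas of a photo.
    pts = [(random.uniform(0, 400), random.uniform(0, 400)) for _ in range(300)]
    with open("tsp_art.svg", "w") as f:
        f.write(tour_to_svg(nearest_neighbour_tour(pts)))
```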

Now, there are lots of opportunities for AI, particularly in the creative industries. And I want to just very, very quickly run you through some of the stuff that’s happening. And it’s a bit of an overwhelming space, because it’s changing very quickly. But pretty much every part of the creative industries, whether that’s music, whether it’s publishing, whether it’s film, whether it’s design, architecture, things are happening with generative AI in particular, and these are just some examples of some interesting things that I’ve seen.

So let’s talk about music first. You may or may not be aware that there is now actually a version of the Eurovision Song Contest where all the songs entered into the contest are generated by artificial intelligence and, in fact, the first of these AI song competitions took place as early as 2020. It was actually won by Australia. It was a global competition. A company called Uncanny Valley in Sydney took audio sound pools of typically Australian things like Australian wildlife, and they used AI systems to generate a song called ‘Beautiful the World’ – you can go and listen to that on YouTube, it’s quite good. Since then it has become an annual competition, where AI songs are judged in a Eurovision-style way.

But we’ve also seen that AI has just become, you know, embedded into the music industry. And this is a very recent example from I think, May of this year, where a South Korean artist and his music label used AI to release the same song in six different languages simultaneously. Now, one of the themes you’ll see in this talk is it’s never quite as simple as just pushing the AI button and magic comes out. There’s usually a collaborative creative process between the human artists and the AI. And that was certainly the case here. So they used a process where they had, you know, people who knew those native languages speak the lyrics, but then the AI put those lyrics into the voice of the artists and the AI played around to make it sound natural and things like that. But it’s certainly a very interesting case where you can increase productivity in the music industry. And I also want to just play a little bit of music because, on my podcast, which Claire mentioned earlier, we actually worked with that company, Uncanny Valley, to get an AI system to produce the theme tune for our podcast. So I’m going to just play that so you can have a little listen to what that sounds like.

So that is AI generated music. We’ve had about 35,000 downloads on the Everyday AI podcast, so everyone’s listened multiple times to the AI generated music. And it was developed using a system called AI Jukebox. But once again, it wasn’t just “push a button and get it out”. It was really a collaborative process between an artist at Uncanny Valley and this AI Jukebox system. So what they did was they took some audio samples from the CSIRO archives where they had soundbites that were related to CSIRO. And then they went through an iterative creative process with the AI Jukebox to eventually come up with this theme tune. There’s a similar example of this. There’s actually a Grammy Award-winning duo, a pop duo, called Yacht that generated an AI piece of music that’s featured in the podcast. What they did, they got together a list of all their favourite songs, and they transcribed the lyrics from those songs into one huge document, about two million words. And then they used AI to generate new lyrics for their new songs. And they’ve won a Grammy, not for that particular piece, but for similar kinds of things. So lots happening in music.

But it’s not only music where AI is changing things – let’s look at publishing and journalism. You might have seen this example: it’s the front cover of Cosmopolitan magazine, which was the world’s first AI-generated magazine cover. It looks pretty nice. Now one of the things you have to understand when people talk about generative AI art is it’s never quite as simple as they make out. So if you look at the bottom of that slide, there’s this little subtitle there that says, ‘And it only took 20 seconds to make’. Now that’s not actually true. There’s a nice video that you can find on YouTube, where the creators of this magazine cover talk about how they actually interacted with the AI system that they used. And they actually spent about 100 hours on it. Because, typically, the way these systems work, you might put a textual prompt in there that might say, you know, give me an image of a female astronaut walking on the moon, in shades of pink. And the first thing you get back is unlikely to be exactly what you want. You’ve probably had an image in your head, and it can’t capture the image in your head. So you have to go through a creative, iterative process. It’s sometimes called prompt engineering, where you actually change the words that you give to the AI system. Or even, nowadays, you can use systems like Adobe Firefly, where you’re doing some of the art and the AI is doing some of the art. So it’s really a collaborative process. And the humans very much remain involved. So not 20 seconds, 100 hours.

And this, of course, can all go wrong. If we look at journalism, there have been some very famous cases where things have gone wrong. CNET started using AI in generating stories and had to retract a lot of those stories because it was found that there were factual errors in the stories that were being put out. Because a lot of these generative AI systems don’t really have any understanding of what they’re doing. They’re just statistical pattern matching machines that predict the next word in a sentence; they do it very well, and they do it very impressively. But quite a lot of the time, they make factual errors, things that we call hallucinations. So if you’re going to use Gen AI to write articles, you’d better fact check them before you send them anywhere.

And also, in publishing, you might have come across this example: there are people now that are having great fun generating entire books using AI, to the point at which, if you went in June to the Amazon Top 100 chart, you would find that 81 of the books on there were actually generated by AI technologies. Most of them will have been very, very mediocre, and not a very good read. Because that’s the other thing that we find about AI: if it’s not got that collaborative process with humans, the outputs are often quite mediocre, especially with text, because it’s essentially looking for patterns that have been used many, many times before, somewhere out on the internet.

And so you’re going to get kind of the least common denominator, generally speaking. I think Amazon’s fixed that problem now, but it was certainly flooded at one point with a lot of these AI generated texts.
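As a loose illustration of that prompt engineering loop, here is a minimal Python sketch. The generate_image function is purely hypothetical – a stand-in for whichever text-to-image service you might use – and the point is simply that the human keeps revising the prompt and judging the output until it matches the picture in their head.

```python
# A minimal sketch of "prompt engineering" as an iterative, human-in-the-loop
# process. generate_image() is a hypothetical stand-in for whatever
# text-to-image service you use; swap in a real API call of your choice.
def generate_image(prompt: str) -> str:
    # Placeholder: pretend we called an image model and saved the result.
    filename = f"draft_{abs(hash(prompt)) % 10000}.png"
    print(f"[model] generated {filename} for prompt: {prompt!r}")
    return filename

def refine_interactively(initial_prompt: str) -> str:
    prompt = initial_prompt
    while True:
        image = generate_image(prompt)
        verdict = input(f"Keep {image}? (y = keep, or type a revised prompt): ").strip()
        if verdict.lower() == "y":
            return image          # the human, not the model, decides when it's done
        prompt = verdict          # any other answer becomes the next prompt

if __name__ == "__main__":
    refine_interactively("a female astronaut walking on the moon, in shades of pink")
```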

Let’s talk about film, because there are lots of interesting things happening in film. This is the new wave that we’re seeing now; this is very, very new stuff. I’ll just play this little video of Harrison Ford talking about the AI that was used in his latest movie, the Dial of Destiny.

TV interviewer

Explain the VFX used on Indy in Dial of Destiny.

Harrison Ford

I don’t understand, I think they have every foot of film that was exposed to me. Lucasfilm owns, because they have… I did a bunch of movies for them, and they have all of this footage, and they can mine it with artificial intelligence for a position of my face for the light. Then I put little dots on my face, and I say the words, and then they take that part and they stick it in that part. It’s not like the photoshopping de-ageing. It’s my actual face. At that actual age.

Stephen Colbert

Or in your mind, this is what you look like all the time?

Harrison Ford

When I look in the mirror now…

Jon Whittle

So, if you haven’t seen the movie, they actually use AI to make Harrison Ford look younger. What they do is ask Harrison Ford, who I think is in his 70s, to act in a scene, and then they use AI after the fact to superimpose a younger version of his face on that. So that’s very, very high-end stuff, and it’s very, very interesting what’s going on in Hollywood. And there’s also stuff happening in audio: you can go online and find something called podcast.ai, which is an AI generated podcast. The guests and the interviewers on that podcast are all generated by AI.

So there’s a very, I would say, creepy one on there where they’ve got Joe Rogan interviewing Steve Jobs. And if you listen to it without actually knowing, you could easily be fooled into thinking that Joe Rogan is actually interviewing Steve Jobs. They do tell you, by the way, that they’re using AI for that.

And if you couldn’t get Taylor Swift tickets when she visits Australia, no worries, because you could actually use AI to make your own Taylor Swift music video. So the Harrison Ford version we saw was very high-end stuff: lots of money, lots of expertise in Hollywood. This is a group of people that essentially did a hackathon where they came together and produced a Taylor Swift AI music video in a day. You’ll see it’s not as good as the Harrison Ford one. But it gives you an idea of what you can do even in one day with these tools. So let’s just watch this for a second.

Taylor Swift AI generated video clip

Jon Whittle

And I’ll just stop that there. So that’s one day. If you actually go to the link at the bottom of that slide, there’s a very nice blog that talks about how they did this, and it’s not perfect. But you know, you can imagine if they had a month instead of one day to do this, they really could have improved on what they did. So that gives you an idea – there’s a lot of stuff happening in video. I want to show you this example. I’m gonna have to switch screens here. This is actually a TV show that came out in the UK earlier this year. I won’t say anything more about it yet. I’ll just see if I can play a little clip. Hopefully I can do that.

AI generated deep fake clip

Back in Plymouth, Devon, Mark Wahlberg’s bees, that were literally helping to keep them alive, were under attack from Chris Rock’s wasps, but aspiring strongman and waiter Chris Rock believes Mark was the one responsible for starting the dispute.

I’ve been training to be a strong man since March 2022. I wanted to be strong and look after myself. One day I’m training and one of his bees stung me.

Jon Whittle

I’ll just stop it there. You can actually go and watch that on SBS, if you’re interested. Sorry, just trying to get back to my slides. So that’s an actual show. That was not Chris Rock – if you were watching that thinking ‘that looks like Chris Rock, sounds like Chris Rock, what’s he doing in this strange scene?’, it’s not Chris Rock. So this is AI. What they’ve done is they’ve taken unknown actors and produced scenes with them. And then they’ve used AI after the fact to superimpose the faces of famous celebrities like Adele or Idris Elba, or Harry Kane, or Chris Rock, or Mark Wahlberg. And it’s worth a watch, let me tell you.

Let’s switch to performing arts. So there’s a lot of stuff going on in AI and art. For example, you may have seen this at the NGV, back in 2020. This is a piece of artwork that uses AI and quantum computing. It’s by Refik Anadol; it’s a beautiful piece of artwork that was in the main foyer at the National Gallery of Victoria a few years ago. And he essentially took lots and lots of images of nature, and then used that to produce this artwork. And those are my kids in front of that artwork back in 2020, if you’re wondering, enjoying it as much as the people looking on. They were interacting with the artwork, so it’s an interactive artwork.

And then I also just want to show you this, because it’s also in design and architecture that things are happening. Now this is a bit of a silly example, this is an app called Brickit that you can actually download onto your phone right now. If you like LEGO, you’ll love this one. But what I’m trying to illustrate here is that AI is also being used quite a lot in design and in architecture.

So, if like me, you have a lot of bits and pieces of LEGO laying around in your house, this app will scan it, recognise what the pieces are, and then suggest things that you can build. And will give you the plan to actually build it. It’s a nice example. But similar stuff has actually been used in industry for a long time. This example is from Monash University, which is where I used to work. And I had some colleagues there that were working with Woodside Energy using AI to help design liquid petroleum gas plants. You know, these are huge plants with huge amounts of piping and all kinds of, you know, equipment and so forth in them. Very, very expensive. And so one of the tasks that you have when you’re building one of those things is how do you optimise to use the minimum amount of materials. And they had an AI system that could essentially do that optimisation for you, that was actually used with Woodside Energy.

So I think it’s still art, frankly; myself, I still consider that to be a creative endeavour. So those are lots of AI opportunities – a lot of stuff is happening in this space.

But I also want to talk about some of the AI risks. I won’t go into this in too much detail, because we’re probably going to get into this in the Q&A. But it is worth pointing out that there are definitely things to be aware of when it comes to AI. And these range from the fact that, you know, some of these Gen AI systems create factual errors, they hallucinate, so you should fact check. They’re also often well-known to have certain biases. They’re trained on, you know, much of the text out there on the internet. So if we as human beings can be biased, then by extension, these AI systems can also be biased as well. But there are also broader issues. There’s the environmental footprint of all the data centres that are crunching all these numbers. Are we creating a kind of digital exclusion, because, you know, only certain companies, certain people, certain countries will be able to use these AI systems? And what about things like copyright law and who owns the images that are created by these AI systems? And if I’m an artist and my images have been used and crunched into something else, do I have a right to financial remuneration? A lot of these questions are not yet answered.

But a good place to start is the fact that many countries now, including Australia, do have frameworks for thinking about the ethics of AI. And, in fact, Australia was one of the first countries to have such a framework. It’s something called Australia’s AI Ethics Principles. These were developed by the CSIRO and the Federal Government back in 2019. There are eight of them, and they are voluntary principles at the moment, but what they say is that anybody developing or using AI systems should think about these eight things. You know, will this AI system promote social and human well-being? Will it align with our human-centred values? Will it be fair, or will it discriminate against certain people? Will it protect our privacy? And will it be secure against cyber attacks? Will it be reliable? Will it be transparent and explainable? You know, if you’re using an AI decision-making tool, will it be able to explain why it’s made a certain decision about you? Is there an element of contestability? In other words, will you know if you’re interacting with an AI chatbot? Should you know? And who’s accountable if AI systems do something wrong? Who’s ultimately accountable for that?

So these were developed in 2019. And we’ve done a lot of work actually trying to translate these into practice and give companies and developers guidance on how to actually implement AI according to these principles. And the Australian government right now is going through a consultation process to try and decide if any of these kinds of things actually need to be codified in law.

And lots of other countries are doing this as well; we might get into some of this in the Q&A. So I’ve talked about AI opportunities, I’ve talked a little bit about some of the AI risks. I do think it’s possible to get the best of both worlds with this. I think the key to it is to have good governance mechanisms in place, to really think about what processes are in place within an organisation that’s developing or using AI. And also to think about, you know, what AI products you are using – there are a lot of them out there. So try and make sure that you’re picking the ones that are actually more responsible and are thinking about these ethical principles, as opposed to ones that aren’t. And the last point I will say, before we go into the Q&A, because I’m often asked the question from businesses, ‘Well, I’m thinking about using AI in my business, where should I start?’ And what I usually say to those people is you need to think about three things. So the first thing is, first of all, think about what problem you’re trying to solve; don’t necessarily jump to AI as a solution. AI may not be your solution, it may be something else, it may be something much simpler. So really focus on what problem it is you’re trying to solve.

Second, I say, once you know what problem you’re trying to solve, ask yourself whether AI is the solution, because AI is not perfect by definition; it will never have 100% accuracy, because it’s large-scale statistical pattern matching. You don’t need 100% accuracy for a lot of things. If it’s an AI system to recommend movies to watch, you don’t need 100% accuracy. But for some things you do: if it’s a self-driving car, you probably want very, very high levels of accuracy.

And then the third thing I say is that, you know, there’s a lot of expertise that’s actually required in applying AI properly. In a business sense that means you probably need to have your own AI experts in-house, your own governance experts and so forth. In a kind of small creative industries practice, it’s probably more about trying to learn and trying to get up to speed on all the different AI tools and how to use them that are out there. Because there’s a lot and it’s quite overwhelming. And it’s changing, changing very, very fast.

So before we go into the Q&A, I will just leave you with this statement. I’m positive about AI. But I believe that it’s not actually artificial intelligence we should be focusing on; it’s “collaborative intelligence”, and that is AI working together with humans, so we get the best out of both worlds. Back to my image of the NGV artwork. Because when I look back at the photographs that I took on that day, I don’t remember so much the artwork; I remember how my kids interacted with that artwork, which is a collaborative endeavour and brought, at least to me as a parent, something additional, something new to that piece of artwork. That’s a great example of AI and humans working together in a creative sense. And I’ll stop there and put in a big plug for my Everyday AI podcast. Go and listen, if you want to learn more about AI. Thank you.

Beata Klepek

Wonderful. Thanks so much, Jon. I think on behalf of everyone who’s dialled in today – and you’ve got a very big audience listening in – thank you so much for taking the time to speak with us. I really appreciated the background to AI, and also seeing those wonderful practical examples of how it’s already being applied to such a broad range of sectors; I found it extremely interesting. And I can see that the Q&A has already fired up. There are lots of questions that we want to go through. For those of you who may have joined a little bit late and missed the intro, we are running a question and answer section now. You can pop your own question into the Q&A button down at the bottom, and you can also upvote any questions that you’d really like to see answered, because we’ll try and answer those first.

We’ve got about 20 minutes. So we’ll try and get through as much as we can. The first thing that really was coming up in the questions and answers was around the crediting of the artwork. So how do we deal with crediting art or writings made by AI? And, in a similar vein, there was a question around how do we … if an artist’s work is being used to create content, and then who owns that work?

Jon Whittle

Yeah, great question. I knew that one would come up. And the short answer is that nobody knows yet. This is a very, very emerging field. So I think there are two parts to the question. One is how do you credit things? And one is who owns things? In terms of how do you credit things, probably the easiest way to talk about that is that there are some versions of ChatGPT that will cite their sources. So if you go to Bing Chat, for example, and you ask it a question, and it comes back with some text and makes some pronouncements, it can actually give you citations of where it got that information from. So that’s crediting the source in some sense. Now, it sounds on the surface very good, right? Things have been cited. But it’s not quite as good as it might sound. Because sometimes, when you go to those citations, it’s not really the source; there are errors that have been made, or the AI system has summarised things in a way that isn’t quite what that article said.

So I think there’s still, it’s still quite problematic, I would say, there’s a lot of research to be done. And, from a technical point of view, it’s not as easy just to credit things as you might think, because these systems are, you know, they’re very, very large models that might have trillions of parameters in them and are taking in trillions of data points. And it’s crunching all that data together in a very, very large statistical algorithm. And you can’t necessarily trace through from one piece of data that was put in how that is handled along that journey, because it’s just too big. And certainly, if you could do that, we as humans wouldn’t be able to understand it.

So there’s a lot going on right now to try and address that problem. In terms of who owns it, well, that is something that’s kind of being played out in the courts right now. There are a number of lawsuits that are currently in play; particularly, people are saying, well, you know, if you prompted the AI system to produce a piece of art in the style of X, shouldn’t X get remunerated for that? There are no laws that really cover that right now. But there are certainly lawsuits that are very, very active. I think it will emerge over the next few years. Sorry, a few months, not a few years, hopefully.

Beata Klepek

Fantastic. I think there was another question which digs a little bit deeper into the same sphere, which is around: if AI uses that knowledge base to generate that work, how is that compatible with copyright law? And is that an area that’s still just emerging?

Jon Whittle

It’s the same answer. It’s in the law courts right now. I think there’s been a case, I think it’s in the States, that has just been knocked back. There was a class action suit, if I remember correctly, from a bunch of artists who said that their work was copyrighted, and they should be credited and remunerated. It was knocked back initially on the grounds that the output the AI system produced was so far removed from the copyrighted works that they weren’t recognisable in it. But as I understand it, they’ve been given a second chance to kind of fine-tune their arguments. So there’s a second bite at that cherry. But we’re also seeing some companies being a bit more responsible about this now. You know, companies like OpenAI, which developed ChatGPT, have just taken anything that’s public, copyrighted or not, whereas I think the Adobe Firefly system is only using works in its training that it’s actually got rights to.

Beata Klepek

Right, thank you for that. I wonder, there is another question here around: what are your thoughts on the education sector thinking about and introducing AI policies, utilising AI to develop learning modules, leaning on AI to teach, and the impacts on jobs for teachers and perhaps new opportunities for students in the space?

Jon Whittle

Yeah, so you’ll probably find that all my answers today are ‘it’s an emerging thing’. This is another area that’s emerging. In fact, I was speaking to someone from Victoria’s Department of Education yesterday. They’re trying to come up with guidelines for teachers and students to use AI in an education space. Look, I think, again, like any technology, there are lots of pros and cons. So, certainly, I’ve heard stories of students and indeed teachers using generative AI to make them more productive, or even to, you know, use generative AI as a kind of learning tool to learn about a piece of history, or to help them with their creativity, because the AI might come up with ideas that they don’t have.

But there’s also a flip side, which is, you know, I write a lot, I write in my spare time, and I’ve tried to use ChatGPT to help me write. But I don’t find it to be very useful. And there are a couple of reasons for that. One is because the outputs you get are quite vanilla, I would say. And so it doesn’t really reflect anything unique. But secondly, the very process of writing is a creative process. And it helps me to organise my thoughts. And I haven’t yet found a way to do that same kind of thing by interacting with ChatGPT. Just to give you one example of how I have tried: there was an article that I wrote for The Conversation. And it was an article that I had been meaning to write for a couple of months, but I never found the time to sit down and write it. So one day, I said, ‘You know what, I’m going to get AI to do this for me.’ And so I just took my iPhone, and I pressed the record button, and I recorded myself speaking, just a kind of stream of consciousness of my thoughts, and then used AI to transcribe that into text. I put that text into ChatGPT and said, ‘Generate me an 800-word article that’s appropriate for The Conversation.’ And it did it, right, all in five minutes. And at first, I thought, wow, this is amazing. I’ve got my article. But then, of course, I started reading the article. And I thought, well, I don’t really want to say that; that sounds very, you know, there’s no spark to that. So I then went through a creative process, probably for about three hours, of rewriting that article to the final product. There was one sentence from ChatGPT that survived that process, only one sentence. So, in some sense, the tool was somewhat useless. However, what it did do is it got me to write the article, because the fact that I could produce something in five minutes meant that that barrier – you know, me over a period of a few months trying to find time to sit down and write it – was unlocked, because suddenly I had something to start with and to work with. And I thought, you know what, there is an article here, I’m going to sit down and find the time to write it.
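For anyone who wants to experiment with a similar dictate-then-draft workflow, here is a minimal sketch, assuming the OpenAI Python client (v1+) with an API key set in the environment; the file name and model names are placeholder assumptions, and, as Jon says, the output is only raw material that still needs heavy human rewriting.

```python
# Minimal sketch of a dictate-then-draft workflow, assuming the OpenAI
# Python client (pip install openai, v1+) and an OPENAI_API_KEY in the
# environment. The file name and model names below are placeholders.
from openai import OpenAI

client = OpenAI()

# 1. Transcribe a recorded stream-of-consciousness voice memo.
with open("voice_memo.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Ask a chat model to turn the transcript into a rough first draft.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Turn these spoken notes into an 800-word draft article "
                   "suitable for a general-interest publication:\n\n" + transcript.text,
    }],
)

draft = response.choices[0].message.content
print(draft)  # a starting point only - expect to rewrite most of it by hand
```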

Beata Klepek

I really like the way you describe that as a first way of being able to organise your thoughts. I find that I’m always struggling to do that first draft of anything. And then once that’s happened, it’s so much easier to refine and to grow upon your work. So I really love that, and that AI can be used in that context. There’s a question, which changes track a little bit, about how AI could be used to accelerate climate action. And are there any good examples of that happening at the moment?

Jon Whittle

There are, definitely. So there’s lots of work going on in AI and the climate, and we do quite a bit of work at the CSIRO in this area. We have for many years actually been using what is sometimes called ‘old-fashioned’ AI in the climate arena. So just to unpack that a little bit: we’re used to tools like ChatGPT and so forth that generate content – that’s generative AI – but more old-fashioned versions of AI are more like, you know, just doing analytics or predictions based on datasets, and that’s been used in industry for decades.

We’ve got a tool that we’ve developed with the Department of Agriculture that takes data around climate and climate predictions, works with farmers and presents that to farmers in ways that they can understand the impact that a changing climate is going to have on their land and their farming practices over the next 20 years, and gives them a lot more information. That’s one thing. You know, we’re also obviously seeing increased prevalence of bushfires; we’ve got a tool called Spark that can actually predict the path of a bushfire in real time using AI, so that you can allocate resources more effectively. And that’s actually being rolled out nationally. But I’m also involved in something called the B Team AI Coalition, which is trying to work with CEOs to get them to better understand AI. And the topic that those CEOs want to focus on is actually the role that AI can play in mitigating or preventing climate change. So yeah, a huge amount of work going on in that area.

Beata Klepek

It’s really exciting to hear. And it’s just mind-blowing, really, that so much work is already being done. You do use the word “generative” AI a fair bit. And there was a question that came through, which was around how, and if, an artist should differentiate if they’re using AI that isn’t generative, because it isn’t always. What are your thoughts on that?

Jon Whittle

Do you mean, just trying to unpick the question? You mean, if they used AI to produce an artwork, should they say that they used AI? Or is it more about…

Beata Klepek

I guess that’s around the meaning of generative AI? And can you have AI that isn’t generative?

Jon Whittle

All right, yeah, sure. No, you certainly can. So actually, generative AI is a fairly new term. So maybe just to step back a bit and give you kind of quick 101 on what AI actually is. So historically, there have been two forms of AI: a rule-based AI and a data-driven AI. Now, the best way that I can describe this is by talking about coffee. So many of us are in Melbourne, we love coffee, right? So let’s suppose that you wanted an AI system to predict the next coffee you would order. It might be that you know, first thing in the morning, you want a long black, if you’re hanging out with friends on a Saturday, you want a cappuccino, and maybe in the evening, you want a latte, right? So different places in different times in different contexts, you want a different coffee. Now, if you wanted an AI system to predict what coffee you were going to order, there are two ways you could do it. The first is the rules-based AI. And in that case, you would really write down a set of rules such as, you know, if Monday morning, then Jon likes long black, if Saturday and with friends, then cappuccino, and you put those into the machine and there’d be some uncertainty around those.

But the AI system would find a way to apply the relevant rules and would say, OK, it’s Friday at four o’clock, you’re going to order a latte. That’s one way. The second way is just to use data – so, data-driven AI. And then you might kind of walk around and have a little AI on your shoulder following you around for a few months. And every time you order a coffee, it would make a note and it would say ‘OK, on Monday morning at 7am, he ordered a long black’, and so forth. And when you’ve got enough data, it can find patterns in that data, and it can actually predict your next coffee. So there’s the rule-based AI and the data-driven AI. And those are traditionally the two forms of AI we’ve had. Back in the 1950s, it was all about the rule-based AI; nowadays, it’s all about the data-driven AI. And generative AI is one form of the data-driven AI, where you’re not just analysing data and coming up with predictions, but you’re actually generating content, whether that content is music, video, text, designs of power plants, whatever it happens to be.
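A toy Python sketch of Jon’s coffee example may help make the distinction concrete: the rule-based version encodes the knowledge by hand, while the data-driven version just counts what was actually ordered in a log of past orders. Both the rules and the data below are invented purely for illustration.

```python
# A toy version of the coffee example: the same prediction task solved with
# rule-based AI (hand-written rules) and data-driven AI (patterns learned
# from a log of past orders). Illustrative only.
from collections import Counter

# --- Rule-based: a human writes the knowledge down as rules ---------------
def rule_based_prediction(day: str, time: str, with_friends: bool) -> str:
    if time == "morning":
        return "long black"
    if day == "Saturday" and with_friends:
        return "cappuccino"
    return "latte"

# --- Data-driven: learn the most common order for each situation ----------
order_log = [  # ((day, time, with_friends), coffee actually ordered)
    (("Monday", "morning", False), "long black"),
    (("Tuesday", "morning", False), "long black"),
    (("Saturday", "afternoon", True), "cappuccino"),
    (("Saturday", "afternoon", True), "cappuccino"),
    (("Friday", "evening", False), "latte"),
]

def data_driven_prediction(day: str, time: str, with_friends: bool) -> str:
    counts = Counter(coffee for context, coffee in order_log
                     if context == (day, time, with_friends))
    # Fall back to the overall favourite if we've never seen this situation.
    if not counts:
        counts = Counter(coffee for _, coffee in order_log)
    return counts.most_common(1)[0][0]

print(rule_based_prediction("Saturday", "afternoon", True))   # cappuccino
print(data_driven_prediction("Saturday", "afternoon", True))  # cappuccino
```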

Beata Klepek

Right, thank you, I found that really helpful. I find it’s always just new terminology that you’re trying to wrap your head around. And there are several questions coming up in the chat around whether or not AI will advance to the point that human collaboration is no longer needed. And maybe that collaboration we were seeing in the examples you showed us is only happening at the moment because it’s all fresh and new?

Jon Whittle

Yeah, so that’s a good question. Look, I mean, I suppose in some examples, we’re already at the point where human collaboration is not needed. It all comes down to what is intelligence, which is a philosophical debate. But in some sense, we’ve had, you know, artificially intelligent machines for a very, very long time. A calculator can do arithmetic much better than we as humans can, and it doesn’t need human collaboration for that. So in some sense, we’ve already got that. But I think, more broadly, my feeling is that AI is obviously getting better all the time. But it’s still a very, very, very, very long way from being intelligent in the same way that a human being is. In fact, there are things that a two-year-old can do that an AI system cannot do. And the current techniques we have are very, very far away from that. And I’ll give you an example of that. Let’s suppose that you want to recognise whether an animal that you come across in the street is a cat or a dog. Now, to do that with AI, you have to train an AI system with lots and lots and lots of images, hundreds of thousands of images of cats from different angles, different colours, different species. Similarly with dogs: different angles, different colours, species. And you tell it ahead of time, that’s a cat, that’s a dog. And then when it’s got enough data, you can give it a new image it’s never seen before, and it will say that’s a cat. A two-year-old doesn’t need to do that. A two-year-old can see a cat once, and the next time it sees a cat, it knows it’s a cat. Why? Because it’s not doing it through lots and lots of examples. It’s doing it by somehow figuring out that cats have fur, they have a long tail, they have whiskers, they have this particular shape. And when it sees those characteristics again, it knows it’s a cat. The current gen AI systems that we have, for example, don’t have that kind of abstraction and reasoning that even toddlers have right now. We might get there at some point. But we’re quite a long way away from there right now. And that’s why I think we’ll always, at least for the foreseeable future, need that human collaborative element.
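As a toy illustration of the example-hungry, data-driven approach Jon describes (and nothing like a real image classifier), here is a nearest-neighbour sketch in Python over made-up feature vectors: it can only label a new animal because it has already been given labelled examples that sit close by, whereas real image systems work on pixels and need vastly more data.

```python
# Toy illustration of the "lots of labelled examples" point: a 1-nearest-
# neighbour classifier over invented feature vectors standing in for images.
# Real image classifiers need vastly more labelled data; the numbers here
# are made up purely to show the mechanics.
import math

# Each example: (features, label). Features are pretend measurements such as
# (ear pointiness, snout length, body size), each scaled 0-1.
training_data = [
    ((0.9, 0.2, 0.3), "cat"), ((0.8, 0.3, 0.2), "cat"), ((0.85, 0.25, 0.35), "cat"),
    ((0.3, 0.8, 0.7), "dog"), ((0.2, 0.9, 0.8), "dog"), ((0.35, 0.75, 0.9), "dog"),
]

def predict(features):
    # Label the new example the same as its closest training example.
    nearest = min(training_data, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

print(predict((0.88, 0.22, 0.3)))   # -> "cat"
print(predict((0.25, 0.85, 0.75)))  # -> "dog"
```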

Beata Klepek

That’s really… I love that example of how the human only needs to see something once and has already learned so much. There’s a question wondering whether or not it’s already gone too far to pull back, and do you feel that decision-makers fully understand the implications of the development of AI?

Jon Whittle

Yes, that’s an excellent question. Look, I’m not so sure anybody fully understands the implications. And, you know, we’ve kind of been here before, is what I would say. So if you look at the history of social media, it was kind of a similar thing: this new technology came onto the scene that would allow people to communicate and form communities and groups more quickly and in a way that they’d never been able to do before. And in the early days of social media, it was very much seen as a positive thing. You might think back to the way that social media was used in the Green Revolution in Iran, where it was a way for activists to form groups and protest against the government when you couldn’t use official communication channels. So it was a very positive thing in the early days. But gradually, that’s changed over time. And most people that I speak to now have a lot of very, very serious concerns about social media – you know, concerns about online child safety, concerns about the amount of vitriol that you might find on Twitter, for example.

So we don’t really know how AI is going to develop, there are a lot of you know… the good news is that I think 10 years ago, there were very few people in the AI community that were thinking about things like ethics and how to prevent bad uses of AI. It was all about the technology. While I wouldn’t say that we’ve got the balance right yet, it’s certainly a lot better than it was 10 years ago. And there’s actually a substantial part of the community and other communities now that are thinking really, really hard about these things. So I’m, I’m still optimistic that we can use AI to make the world a better place as long as we put the guardrails in place to make sure that bad things don’t happen.

Beata Klepek

Yeah, I really love the way that you’ve articulated that, Jon, I wonder if we have room for just one more question. I’d really love to ask this one: do you think that the proliferation of digital content that AI will produce will actually increase the value of traditional arts practices and live cultural connections?

Jon Whittle

Absolutely wonderful question. I hope so. I mean, the fact of the matter is, well, it’s a tricky one. So I usually make a distinction between the text and images at this point. Certainly for text, I think most of the text that is generated is quite mediocre in quality. And that might be fine for certain applications. If you’re just wanting to put out a vanilla blog for your company, go for it. But I think it’s not the same quality as a well-written novel, for example, and I don’t think it will be anytime soon.

Images, I’m a little bit less sure of, probably because I’m not an artist, so I’m less well-placed to judge. But you can get really high-quality images from generative AI, and very, very quickly. But again, doing something like the Cosmopolitan magazine cover is not a two-minute job, actually. It’s a 100-hour job. So there is still that curation process that you have to go through, I think.

Beata Klepek

Wonderful. Thank you so much, Jon. I think we will wrap it there. And I’ll hand over to Madeleine, who will finish out the webinar for us.

Madeleine Swain

Thank you. Hi, everybody. I’m Madeleine Swain. I’m the managing editor at ArtsHub. I am a middle-aged white woman with brown tied-back hair and glasses, and I’m wearing a black jumper over a stripey shirt. My pronouns are she and her. We’re coming to the end of our time. So on behalf of the teams at ArtsHub and Creative Victoria, I’d like to thank everyone for joining us today. And I’m sure you’ll join me in thanking Jon Whittle for his fascinating presentation and Beata Klepek for her sterling work in moderating the many, many questions we had today. Do look out for our Creative Exchange podcast series, which is coming soon; it will look in a little more depth at some of the topics raised in the webinars so far. And if you haven’t yet, don’t forget that you can catch up with the two previous webinars on ArtsHub, that’s at artshub.com.au, and on the Creative Victoria website. And you can track them down on YouTube too. And to find out about upcoming events in the Creative Exchange series, please subscribe to the Creative Victoria e-newsletter or follow it on social media. Finally, do remember to use those hashtags, #CreativeVic and #CreativeXchange, which we shared with you in the chat earlier. And please do complete the feedback survey, which helps us with our next webinars. Thanks again for joining us today. And we’ll see you all next time.


Madeleine Swain is ArtsHub’s managing editor. Originally from England where she trained as an actor, she has over 25 years’ experience as a writer, editor and film reviewer in print, television, radio and online. She is also currently Vice Chair of JOY Media.