The introduction and increasing use of Generative AI is one of the biggest and fastest-moving changes in arts, cultural and non-profit governance.
Boards are using platforms like ChatGPT to shortcut the drafting of agendas, help manage board papers, and track compliance and action items. Board members are feeding their board papers into AI programs (often without checking their organisations’ policies on whether they’re allowed to do so) and asking them to generate summaries, prompts or devil’s advocate questions for board meetings.
But while Generative AI may provide opportunities for organisations and boards to transform their ways of working, EY Oceania notes they ‘also present challenges, including how to ensure their ethical, compliant deployment, [and] that decisions driven by data can be trusted and vetted through robust assurance processes’.
This includes challenges around:
- Copyright and intellectual property: As we’ve seen with Meta’s recent theft of millions of books to train its Large Language Model (LLM), AI datasets are primarily built on unlicensed, uncredited and uncompensated source material. Because the technology is moving faster than legal precedent, there are also concerns about outputs: it remains unclear who is the legal author, owner or copyright holder of any text, images or code that AI platforms generate for their users.
- Informed and unbiased decision making: With a recent study showing AI search results are wrong 60% to 96% of the time, the use of AI-generated data dramatically increases the risk of boards and organisations taking on incorrect information from unverified source material. Users are also at risk of ‘hallucinations’ (false information created through the Generative AI process itself), as well as in-built and learned biases (for which many AI models have already been criticised) – all of which calls truth and trustworthiness into question, and can offset any time saved through automation with the additional workload required to fact-check and proof-read.
- Data insecurity: Those who upload their board papers to a third-party Generative AI platform don’t have control over that platform’s data security (and no recourse when it inevitably leaks). Nor do they have any way of ensuring the data they upload is only used to answer the questions they ask of it. Boards and board members that use AI to analyse organisational data raise privacy concerns and risk losing control of confidential information – which can be particularly problematic in terms of cultural safety, Indigenous Cultural and Intellectual Property and Indigenous Data Sovereignty. As Nyiyaparli and Yindjibarndi cultural consultant Jahna Cedar asks: ‘How do we ensure AI systems respect diverse perspectives and uphold Indigenous values of respect, reciprocity and interconnectedness? How do we also protect Indigenous intellectual property, in the process?’
- Legal and fiduciary duties: Any one of these issues can create legal exposure for boards and board members, and unquestioned reliance on AI-generated data is no excuse when boards fail to meet their responsibilities. You can’t blame the program if it gives you bad advice.
- Cookie-cutter creativity: AI-generated content reduces access to different perspectives, nuance, creativity and craft, which means organisations that use AI for strategic plans, pitches and external communications are increasingly easy to spot (as are AI-generated job and grant applications). This is a particularly hypocritical look for arts, cultural or other creative organisations and practitioners that use AI to write about creativity while simultaneously stifling it. Growing reliance on AI also reduces opportunities for board and staff members to acquire and practise critical thinking, analysis and writing skills – without which it becomes even more likely our organisations will consume incorrect or biased information as fact. It also raises the spectre of redundancies: if board members aren’t actually engaging with board papers themselves, and the technology they’re using is equally available to all, what value does plugging their papers into Generative AI platforms add to the governance process at all?
- Reputation and risk management: Boards that act on AI-generated data also risk backlash from staff and stakeholders when their decisions appear to undermine their organisational values, or their very use of AI does the same. This includes: arts and cultural organisations that use AI technologies that steal from and exploit artists and writers; environmental organisations (or anyone with targets around reducing their carbon footprint) that use AI technologies that accelerate the climate emergency; and any organisation with a social justice mandate (including access to art and culture) that uses AI technologies being used in human rights abuses all over the world.
None of this is to say that we can’t imagine a future in which ethical and beneficial Generative AI is incorporated into our creative and professional lives (indeed, its creative potential in terms of access and literacy alone could be life-changing). However, while Generative AI platforms may seem free for the end user, for now at least, they come at a price – one each of our organisations needs to decide whether or not to pay.
In making that decision, boards may be able to mitigate some of these concerns by:
- endorsing the international statement on AI training
- establishing clear organisational policies on if/how to use Generative AI, including data privacy requirements for uploading information to third-party platforms (noting these policies are likely to need updating faster than others, given the pace of legislative change)
- introducing fact-checking and proof-reading procedures to ensure AI is used to enhance governance processes, not replace human oversight
- supporting the Media Entertainment and the Arts Alliance, Australian Society of Authors and others to lobby Australia’s newly re-elected government to insist upon AI legislation that protects creators’ rights and holds big tech companies accountable
- joining campaigns that apply pressure to Generative AI providers to publish the carbon footprints and human rights reports of their LLMs, so consumers can choose (and companies be more motivated to provide) greener and more ethical AI, and
- even considering taking the advocacy or policy position that, until Generative AI can be trained and used ethically, our organisations will avoid using or endorsing it at all.
A version of this article was originally published on Kate Larsen’s ‘And Another Thing’ vlog.