AI keeps giving me melodramatic story ideas…

... I didn't ask for them.

I’m not a fiction writer, but lately I keep having ideas for short stories. One is set in a productive future world where workers no longer need to labour over interpersonal communication – where their instructions needn’t contain pleases or thank yous or appreciative tones, because they’re almost always directed at chatbots. Many words have fallen out of use in the workplace and the home; now sentiments like gratitude are fading too.

Interpersonal interactions are increasingly awkward in this world; some people have forgotten how to use full sentences to express themselves properly, others never learn. Literacy is taught in screen-based lessons, without teachers or classmates, but students only really need to learn commands. Talking face-to-face and in real time is anxiety-inducing and, for the most part, unnecessary. More time is spent with computer games than with friends.

One of the biggest challenges in this made-up place is conveying and interpreting emotion – knowing which words to use and what words mean, reading faces, body language, tone. EI* is at an all-time low. Loneliness is common, friendship rare.

The EI problem’s been identified. Technology has been deployed. Watches contain programs that can mediate between parents and children, between couples, colleagues, friends – biometrics read emotions, chatbots communicate accordingly.

But one day the cloud is hacked, the “middleman” goes down – chaos ensues.

I have no desire to write this story. I find fiction difficult. I could spend hours trying, but I doubt I’d like the end result. Or… I could use AI.

I could get a computer program to write non-fiction for me too – the kind I write for a living. I could start by only using it for prompts, I could tell myself I’m still the author, it’s only a tool – then rely on it a little more … and then a little more … until I don’t “write” stories any other way.

Of course I won’t. I love to write, to think for myself, to put my thoughts and feelings in words. I love the challenge of creating, the satisfaction of completing; I will not give them up. And I know that countless others feel the same about countless other gloriously inefficient practices. But I wonder how much harder it will be, in coming years, to make a living from my work.

Already, hypothetical dilemmas are becoming literal ones. Last week I was submitting an essay for publication using an online form. It asked me to check a box certifying I hadn’t used QuillBot or a similar tool to rewrite someone else’s work. There was nothing to confirm the publication wouldn’t use technology to rewrite mine.

At least the end result would be inferior. Or would it? And, if so, then for how long?


This morning I had a brand-new horrifying thought. What if Google Docs – which I often use, am using now, which tracks my every word, written and deleted, moved and rearranged – doesn’t retain every version for my sake, but for its own? We’re teaching it how to write like us; each user is handing it their process and their voice. And, for all I know, we all agreed – scrolling without pausing, clicking “accept” when we signed up – giving our consent.

I could confirm this is a fiction by locating some facts, but I’ve had another short story idea.

This one’s a present-day thriller. Let’s cut straight to the climax, where shadowy, faceless, nameless Big Tech Bosses are clicking through news coverage about the various flaws in ChatGPT and Bard. A journalist speculates about whether future versions can succeed where these have failed.

The bosses laugh hysterically, maniacally.

They laugh because they already have the solutions. The technology they’ve developed has advanced faster than anyone beyond their circle suspects. The current flaws are deliberate, designed to give people a false sense of security and superiority, to give them time to adjust. The bad guys aren’t waiting until the next version is ready; they’re using it now – and it’s telling them to hold off its release. The masses won’t accept it yet, but they’ll be ready soon enough…

Some time after ChatGPT was released, and before I started accidentally having melodramatic ideas for stories I had no intention of writing, I attended a live theatre performance. It consisted of an unassuming 66-year-old man, alone on a stage, talking.

As he read some work aloud, David Sedaris made a few mistakes. In most cases he’d simply restart his sentence as if nothing had happened, but one time he clutched his heart with one hand and winced, then looked at us, his audience, apologetically.

Moments like this didn’t ruin the show; they made it. They reminded us that, despite his celebrity, Sedaris is a person just like us – someone we can sympathise with, someone we can bear with, someone we can be gracious to.

Some may think this kind of primitive “entertainment” is fast becoming a thing of the past, but it’s also possible the opposite will prove true – that the more pervasive machines behaving like humans become, the more we’ll value and appreciate the real thing: humans behaving independently, in all their imperfect, unpredictable glory.


Author and philosopher Steven Hales says the reason we ban performance-enhancing drugs in sport is that ‘we want to know what human beings, unaided, alone, can do’. He adds that while he’ll never paint as well as DALL-E 2 or play chess better than AlphaZero, those activities are still worth pursuing – ‘even in their imperfection, and perhaps even because of it’.

Humans reach – we strive, we wonder, we question, we dream. Even if we don’t have to, we will. And we will long for deep connection, for relationship, for love. It’s part of who we are, of how we’re built.

Increasingly sophisticated technologies may make certain kinds of human effort increasingly rare, but they may also help us to appreciate human effort and skill, thought, creativity, diversity, physicality, spirituality, relationships – more than ever before.

And, as we learn new things about AI’s limits, and our own – as we catastrophise and then calm down – we may better understand our weaknesses and strengths: what makes us tick, what makes us matter, what makes each one of us precious and unique.

* emotional intelligence

Emma Wilkins is a Tasmania-based journalist and freelance writer. Topics of interest include relationships, literature, culture and faith.