This editorial was not written by an AI. I probably shouldn’t need to make such a declarative statement and yet, this year seems to be the time when that kind of clarity is required.
As you might imagine, as the editor and publisher of a long-running short fiction magazine, I have been thinking about AI a lot lately. Throw in the fact that I’m also an experienced software engineer and you can double whatever amount of time you thought I was thinking about AI.
So much of what I’m feeling boils down to “if something is free, then you’re the product” and the fact that we’re now watching the repercussions of that play out in real time. These AIs have been trained on data that was scraped from the internet. You know, that place where so many people are often their worst selves? Yeah. If that doesn’t make you uncomfortable, perhaps it should.
When Clarkesworld had to pause submissions earlier this year due to the influx of AI-generated content they were getting, the staff of LSQ had a chat about the situation. I’m pleased to say we’re all in agreement that we will not knowingly publish anything generated by an AI.
After one of our editors experimented with ChatGPT and asked it to write “a Luna Station Quarterly” story, we felt pretty confident that, for now, we can recognize the difference between a human-written story and an AI-generated one. While the result was readable and hit on a lot of themes you’ll regularly see in a story we would run, the writing style was immature and the story unpolished. Even without knowing it was AI-generated, it would have been rejected.
Yet, these tools are getting more and more refined every day. How long is it until the stories, essays, and articles they produce are indistinguishable from human-written works, but with far more hidden biases than we can account for? More importantly, what can we do about it? That’s the big question of the moment, from where I’m sitting. Is there anything we can do to get these companies to slow down and consider the impact of what they’re doing?
Artists and writers in particular are currently bearing the brunt of the impact of these rapidly deployed tools. It’s astonishing how quickly this technology is being adopted, with no option to turn it off and no protections in place for the creative humans who have worked so hard to hone their craft, who put their hearts and souls into their work.
Interestingly enough, in our submissions this round there were a handful of stories about various AIs, their inner lives, or how the technology could be manipulated in the not-too-distant future. One of the latter is within these pages. It’s a cautionary tale and I encourage you all to read it, along with our usual wonderful collection of new tales written by humans about humans, and other dark and complex creatures, making their way through the world.
I’ll leave you with two quotes from other creative folks whose thoughts are in line with my own on this topic. The first speaks to the limitations of what this technology actually is at the moment, and the second speaks to why, at the end of the day, the idea of accepting a story generated by an AI will continue to be anathema to us here at LSQ.
“Two things about ‘artificial intelligence.’ It’s not artificial – it’s built on as much human activity as can be shoved into a database. And it’s not intelligent – it is very fast manipulation of spreadsheets.” – Warren Ellis
“ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend.” – Nick Cave