“I’m sorry Dave, I’m afraid I can’t do that.”
So says HAL 9000, the artificially intelligent supercomputer from Stanley Kubrick’s 2001: A Space Odyssey. Perhaps the most famous fictional computer of them all, and the one that became the template for every evil artificial intelligence that followed. From the Tyrell Corporation’s replicants through to Bender from Futurama, AI (which, in fiction at least, means a computer that can pass as a human) has been part of our popular consciousness ever since.
Naturally, in the real world, the idea of producing artificially intelligent systems has been a goal for decades. Something able to make decisions based on learning, rather than relying on the (potential) errors of a human programmer, holds limitless and transformative potential for humanity – and in the last few years, with computing hardware reaching a tipping point, this fiction has started to become a remarkable, and admittedly scary, reality.
Why am I mentioning this? Well, it’s time for a confession.
Recently, unless you’ve been living aboard the USS Discovery One, you will have seen the internet pick up on a piece of technology called ChatGPT, a machine learning tool that advances the classic “AI chatbot” by several notches: you can hold a conversation with it, and it will give contextualised answers to your questions. Most importantly, it will tell you when it doesn’t know the answer, or when there’s a problem with the question itself (if, for example, you deliberately try to trick it with something you know is wrong). It also produces genuinely readable copy on request, and here’s where things get interesting for us…
In our last newsletter, we posted an article on the Fountain blog titled “How To Use Schema Markup For SEO If You Have An E-Commerce Site”. This article was posted under my name, and we sent it to our newsletter as we would every other article. Some people reacted to it, and we even saw some comments.
But here’s the thing: I didn’t write it. I got ChatGPT to do it for me instead. The whole article is the AI’s unedited, unabridged response to the following prompt: “Write me a blog post on the importance of using Schema markup for SEO if you have an e-commerce site”. And that was it – no further context.
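(For context, Schema markup is structured data embedded in a page’s HTML – most commonly as JSON-LD – that tells search engines exactly what a page is about. A minimal e-commerce product snippet, with entirely made-up example values, looks something like this:)

```html
<!-- JSON-LD Product markup using the schema.org vocabulary.
     All product details below are hypothetical placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "description": "A placeholder product used purely for illustration.",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "GBP",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

Search engines can read this block and surface the price and stock status directly in results – which is precisely why the topic makes for such reliable SEO blog fodder.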
And, quite frankly, what’s mad is that at first it’s genuinely hard to tell it wasn’t written by a human.
There’s now a lot of chat and speculation about ChatGPT online. If you go on LinkedIn right now, you will probably be buried under an avalanche of content about the thing, ranging from tentative embrace, to intrigue, to downright cynicism and denial. All of these views are perfectly valid, but the raw facts remain the same even if you are a cynic: the tool works incredibly well. It often answers questions better than Google can (I have used it to find Excel solutions, avoiding the deluge of irrelevant clickbait sites that seem to provide this information via a Google search), produces blog content to a very reasonable standard (and, remember, quickly and for free) and can probably do your washing if you ask it nicely.
The best things about it, however, are that it recognises its own limitations – something most AI technology fails to do – and that it is deliberately built to avoid the biases that have plagued previous AI systems. Simply put: there’s a reason it’s generating so much buzz across all sorts of online circles right now.
Will it replace human content generation though?
My thoughts, for what they’re worth, still stick to the principle that an AI will not replicate human artistic forms. No matter what that clickbait YouTube video about an art dealer being fooled by a piece of AI art tells you, a trained eye can always tell, and even an untrained eye will usually pick up on something ‘not quite right’ in the uncanny valley.
However, as a writer by trade, what I do find concerning is that, in an industry that already demands content for free (because the end user is rarely paying for it), the use of AI content will only ever expand, because economics dictates that we take the path of least financial resistance in whatever we do in business. This is an annoying aspect of modern life that is, unfortunately, not going away. For better or worse, back in the 2000s the internet decided that creative content should be provided for free, without any solid economic model to back it up. No money means fewer genuinely good writers, and fewer genuinely good writers means worse content.
In terms of SEO, Google reckons that good content still wins through, and it is known to be putting AI detection checks in place to stop poor-quality AI content being shovelled out. I’d believe this more if it weren’t a line Google has used for over a decade while it’s clearly not true. Any search for something like “how to watch the last of us” returns a host of clickbait articles from “high-quality content providers” such as Capital FM (a radio station), IGN (a video game website), Metro (a newspaper), Rolling Stone (a music magazine) and the Independent (another newspaper) – with Sky, the actual place where you can watch it, nowhere to be found. All of them are sites desperate for the clicks that keep their ad revenue, and therefore themselves, alive.
Perhaps this will prompt Google into some self-reflection and a rethink of how it wants the internet to produce its content, and of what it actually means by “quality”. This will matter going forward because of, again, ChatGPT itself. With Microsoft reportedly looking to incorporate ChatGPT into Bing, it’s the first time in 15 years that they might have a genuine ‘killer app’ on the horizon – one that could knock Google off the top of the search engine pile.
Ultimately, AI is here to stay and getting better, and with everyone now so used to free content and unwilling to relinquish it, we are at an interesting crossroads. What happens next depends on whether you are a glass-half-full or glass-half-empty sort of person.
A final question remains, however: did I, Nicholas Hayhoe, write this piece of content speculating about AI-generated content? Or did an AI do it?
I’m sorry, reader, I’m afraid I cannot reveal that information.