Not to keep harping on Artificial Intelligence, but as time has gone on, I've made a couple of new observations, and I wanted to share them with you.
The biggest issue I'm seeing is just how many websites seem to be using AI. Whereas before, someone on staff might have taken their best stab at writing content for a site, now I'm increasingly seeing content I'm almost positive was written by AI. I understand why someone might choose to do this. AI writing is definitely competent, and likely better than many people can manage on their own. It's well phrased, with proper grammar and spelling. And it takes almost no time to produce. Easy and better and free? No wonder people are turning to it in droves.
How do I know if something's written by AI? This is a bit harder for me to explain. I think a large part of it comes from writing so much myself. I've got a sense of voice, and AI almost always sounds the same. It's formulaic. (Which makes sense, because it's generated by following a formula.) It's as if someone learned about the five-paragraph essay in grade school and wrote everything in that format from then on: an introduction with a thesis statement and a summary of what's going to be argued, then the actual argument, and then a conclusion to tie things up.
Does that work? Sure it does. Does it make for riveting reading? Nope. And if that were all it was, then perhaps it wouldn't matter that much. Yes, many websites would sound the same, but I'd think sites would eventually start turning back to actual people in an effort to differentiate themselves again. (Writing by writers for the win!)
Beyond that, however, is the simple fact that AI doesn't always tell the truth. Sometimes it gets a few details wrong, and sometimes it just flat out lies. And yes, theoretically there's a human reading over all the AI-generated text to make sure it's accurate, but it's hard for me to be 100% confident that anything written by AI is worth reading. (And this is while AI is trained on actual human writing. Very soon, it will be trained more and more on writing that AI itself produced, and things could really start to get messy.)
I've started simply not reading something if I have a reasonable suspicion it was AI-generated. Off the top of my head, I've seen this on how-to sites, healthcare sites, and recipe sites. (Recipe sites are a particular nuisance, since recipes can't be copyrighted. All you need to do is take someone else's recipe, write your own blurb around it, and you're good to go.) For better or worse, people make actual decisions based on things they find online, because they typically assume most of the information is accurate. All it takes is for enough people to get burned, and that all falls apart. (How many times are you actually happy with the answers Alexa gives you when you ask it a question?)
So in the short term, I expect this to only get worse. Long term, it will either go away (as people give up on AI) or AI will improve demonstrably and somehow manage to fact-check itself. I suppose we'll see how it plays out, but I'm becoming pretty skeptical of AI, primarily because there's no actual intelligence involved. It's just a text generator that looks at probabilities. Calling it "Artificial Intelligence" makes it sound like something it most definitely is not.
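If it helps to picture what I mean by "a text generator that looks at probabilities," here's a deliberately toy sketch in Python, with made-up numbers and nothing like a real model's scale: the whole "model" is just a table of which word tends to follow which, and generation is just rolling weighted dice against that table. At no point does anything check whether the output is true.

```python
import random

# A deliberately tiny, made-up "language model": nothing but a table of
# next-word probabilities. Real systems work on tokens and have billions
# of parameters, but the basic move (pick the next piece of text by
# weighted chance) is the same.
next_word_probs = {
    "the": {"cat": 0.4, "dog": 0.35, "weather": 0.25},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "weather": {"improved": 1.0},
}

def generate(start, length=5):
    """Build a sentence one word at a time by sampling from the table."""
    words = [start]
    for _ in range(length):
        options = next_word_probs.get(words[-1])
        if not options:
            break  # the table has no idea what comes next, so stop
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the dog barked": fluent-sounding, but nothing here knows or checks facts
```

That's the whole trick, scaled up enormously. Fluency comes free; accuracy doesn't.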
Time will tell . . .