Current Artificial Intelligence Thoughts

I keep going to talks and mini-conferences about artificial intelligence, and my thinking on the subject continues to evolve. Not that I’m really changing my mind on any part of it completely, but rather that as I dwell on the subject more and more, I see aspects I hadn’t thought of previously. So I thought it would be useful (to me, at any rate) to take some time and pin down where, exactly, I stand on the subject today.

First of all, I want to stress that I think the continuing refinement of AI is inevitable and will happen far faster than anything our legal system is used to dealing with. I’ve seen some efforts made to restrict what companies can use as training corpora for their generative models, but I don’t realistically see any of that working. As long as the information is out there, someone’s going to use it. With so much of it involved in the process, it will be extremely difficult to prove any specific instance, and by the time the courts work their way through any litigation, whatever specific instance was in question will have become obsolete.

Of course, people have also been proposing other ways to throttle the use of AI. In education, they’re discussing putting in stipulations that AI can only be used for brainstorming purposes, or not at all. Some students have been caught using AI, and the consequences could be severe, but again, I see all of that as something that will be viewed as quaint and naive within the next year or two. AI is just getting too refined, too quickly, for anyone to hope to keep up. Within five years, I expect there will be no real way to tell the difference between something done by a human and something generated by a computer. Part of me wants to say that five year estimate is too soon, but honestly, a bigger part of me thinks it will arrive much more quickly than five years.

I have heard others dismiss generative text as something that really doesn’t matter long term, since it’s not a computer thinking at all. It’s just parroting back whatever the likeliest sequence of words is. However, as I’ve thought about it, I see it as a significant step toward real intelligence. Once a computer can be trained to come up with complex answers to live questions, it can refine its actual understanding of both those answers and those questions. So again, I expect us to quickly have to ask what really constitutes intelligence. AI will be better at anything data-based. Where it will fall short is on questions of morals, philosophy, and evaluating the accuracy of information, especially information that’s relatively recent.

What stands in its way? One question I continue to have is how generative text will respond when more and more of its corpus is based on text already generated by AI. With so many people turning to it to write things, the amount of AI-generated text could quickly overwhelm the amount of human-generated text. That could create a feedback loop that affects AI in unforeseen ways. But I expect they’ll find ways to account for that, and ways to make it “smarter” when it comes to telling what’s true and what isn’t. Really, AI just needs to get to the point where it can admit when it doesn’t know something. If it could do that, then many of the consistency/reliability issues go away.
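To make that feedback loop concrete, here is a back-of-the-envelope sketch. The growth rate and the AI fraction of new text are made-up numbers purely for illustration, not real measurements:

```python
# Toy model (illustrative assumptions only): each year the corpus grows by
# some fraction of its current size, and a portion of that new text is
# AI-generated. Track the AI-generated share of the total corpus over time.

def ai_share_over_time(years, growth_rate=0.2, ai_fraction_of_new=0.5):
    """Return the AI-generated share of the corpus after each year.

    growth_rate: new text added per year, as a fraction of the corpus.
    ai_fraction_of_new: portion of that new text produced by AI.
    """
    human, ai = 1.0, 0.0  # start from an all-human corpus
    shares = []
    for _ in range(years):
        new = (human + ai) * growth_rate
        ai += new * ai_fraction_of_new
        human += new * (1 - ai_fraction_of_new)
        shares.append(ai / (human + ai))
    return shares

# The AI share climbs every year and, with these numbers, passes a quarter
# of the corpus within five "years" of the toy model.
print([round(s, 2) for s in ai_share_over_time(5)])
```

Even this crude model shows the share of AI-derived text rising monotonically, which is the core of the concern: later models would increasingly train on earlier models’ output.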

However, by and large, I come back to my “AI is inevitable” belief, and anyone trying to stand in its way is just going to get steamrolled. So what does that mean?

I think we need to come to the realization as a society that just because AI can do something doesn’t mean it’s the best thing that can be done. Yes, AI will get to the point where it can create a book or a movie or a piece of art on any subject you want, in any style you want, in a matter of moments. If you want to watch a Muppet adaptation of The Taming of the Shrew, it will be able to do that. (Though someone might eventually figure out ways to still enforce copyright or trademarks, there will be ways to get around them. I expect it to only really matter when a company tries to do it and make money off it. This will be further muddied by international law: if a company doesn’t like the laws in one nation, it can just move to one with different laws.)

But the thing is, I’m not sure how much people will actually want to watch that Muppet Taming of the Shrew version. Why? Because the one AI makes for me will be different than the one it makes for you. And yes, you could try to spit in the eye of copyright and publish your “best” version and try to have a lot of people watch it, but good luck doing that, since as soon as something becomes popular, a thousand people will try to leech off its popularity. In many ways, I expect to see a meme-ification of the arts. When I grew up, there was a very finite number of pop culture sources. Today, with so many different things clamoring for attention, more and more people know absolutely nothing about the show or book you are completely crazy about.

And that is the problem I see AI not being able to surmount. It will be able to make anything, and because it can make anything, “anything” will no longer be relevant or worthwhile. My guess is that the relationship between the artist and their audience will become much more vital. People won’t want to watch just any Muppet version of the Taming of the Shrew. They’ll want to watch the one done by Jim Henson Studios, and they’ll still be willing to pay for that right. Yes, anyone could write a book in the style of Brandon Sanderson, but people won’t care about just any rando-brando-sando work. They’ll want to read the one he wrote himself. Authenticity will become increasingly important.

I expect there will be new genres or types of art that directly involve AI and lean into it, but even then, I expect specific artists to rise to prominence, and so people will care about the AI art that was specifically made by them.

Going beyond things like art, I think people will gravitate toward doing what they want, and not what they’re told. Right now, students are using AI to get out of doing assignments. A creative writing professor mentioned to me last week that he had a student use AI to write a poem for his poetry class. The only way that makes any sense is if the student in question has no real desire to learn to write poetry at all. They just want to get credit for the class so they can do whatever it is they actually want to do. Education is seen as a hurdle, and as long as you can jump it, it doesn’t matter what means you use to jump it.

Really, much of what’s happening that’s problematic stems from people who are using AI to do the thing they don’t want to do, which will let them do the thing they actually want to do. Many people don’t really want to write a book. They want to sell a book and make money off it. In these cases, AI works like a ghost writer for them, and that’s something that’s been happening for years. People don’t want to pay for an actual artist to make them a graphic or take a picture. They just want something to put on the cover of their book or attract eyeballs to their website, so they offload the task to AI. People don’t actually want to learn anything at college. They just want the degree so they can get the job they think they want.

At some point, we’re each going to have to ask ourselves what we actually want, because AI will be able to do most things as well as or better than we could. Of course, since what many/most people actually want is just to be able to make enough money to support themselves and/or their family, this turns out to be a much more complicated decision than it might seem at first. If we get to the point where AI can do most things better than we can, then what in the world is it that we are actually doing? For companies, it will be cheaper to just have AI do almost everything, sooner or later. Taken to the next step, AI could get to the point where it basically runs all of our society, or where it becomes society.

I don’t think it’ll come to that. I think we’ll start to see there are things we really want humans doing, thank you very much. While it would be lovely to think we’ll arrive at some utopia where everyone’s needs are met and AI does all the things we never wanted to do anyway, actually arriving at that point may well prove impossible.

But here’s the thing: this is all just what I’m thinking today. There is so much I don’t know about what will happen that trying to guess what the world will be like even five years from now is increasingly difficult. So in the meantime, I just do what people have always done: the best I can with whatever the situation is today, trying to plan as best I can for an unknown future.

In other words, all of what I have to say can be summed up as “AI is very, very exciting, and very, very terrifying.”
