
Everywhere we look, there are mentions of artificial intelligence (AI), robotics, and their implications for the future. News stories and social media feeds predict a heyday of ease and comfort as we assign more and more mundane tasks to technology (the art that accompanies this post was created by AI). In contrast, we’re also given the dark prophecies of Asimov and Bradbury come to life: When our computerized creations become so sentient that they can resent us, how will we control them?
More realistically, however, people are rightfully concerned about their jobs: Cashiers are already becoming obsolete, data entry by remote workers is becoming a relic, and countless other positions previously filled by people are slated to go extinct in the next few years. How, then, do we reckon with this revolution?
As the father of two older boys, one in college and one about to go there, I’m relieved that both of them have chosen irreplaceably human endeavors for their futures: One is in theatre, and the other plans to pursue architecture. These are professions that AI will never be able to fully usurp. After all, theatre is among the humanities, a select group of art forms and practices marked by their innate reliance upon authentic emotion and genuine experience. Our hamartia, the human condition, is ironically our greatest strength when it comes to livelihoods that are AI-proof. Architecture will certainly be helped by AI, but to design and create livable spaces that consider the needs of complex people, we need human minds and hearts. Ask AI to design a mid-20th-century ranch house like the one on The Brady Bunch, and you’ll probably get a reasonable facsimile. But ask AI for a blueprint of a home that considers the individual needs of 21st-century family members, and confusion results — the blinking cursor begins to smoke.
As a teacher, I’ve already encountered the challenge of getting students to write rather than use ChatGPT or some similar product. For now, AI-generated writing is fairly easy to spot: Its reliance upon certain words and phrases, its preference for sterile-sounding language, and its occasional errors about obvious matters all make it detectable, even without running an essay or paper through an online checker or two. Combine those traits with a vast divergence from a student’s in-class writing, and AI use becomes obvious. But we know that technology consistently advances, and as time elapses, the fakes will become harder to spot, especially as classrooms become more tech-dependent.
This is why some teachers and professors have gone back to old-school blue books, those lined-paper pamphlets of an earlier era, for class writing. And while I see the nostalgic appeal and hard-nosed devotion to justice driving such a practice, I also see its inherently temporary nature. Returning to number two pencils and canary yellow legal pads may get us by for a while, but students, parents, and clients of the new age won’t tolerate this Luddite approach for long. We need to quickly find the middle ground between total AI reliance and achieved, owned learning. Compromises like “You may use an AI editor for the writing you have authored independently in class” serve as a good start. This technique prepares students for the world to come without damaging their acquisition of knowledge. Further, they learn by seeing the corrections made by QuillBot, Grammarly, and other language-fixers. And if these products make a mistake, as they sometimes do, so much the better. That’s where the real learning begins — technology has never been and will never be infallible, and the sooner students grasp this truth by experience, the better off they will be.
As a poet, I’m not worried about AI. I’ve seen the replica-poems it produces, and while some sound good on the surface, a closer look reveals that same artificial shimmer visible in the art that I’ve used above. Something’s missing; there’s a bad aftertaste like that of saccharin-sweetened diet sodas from the seventies. An astute reader can tell that the cane sugar of the human touch is missing from this thing’s formula, whatever it may be. The “experiential resonance” — the sense that an event or product is organic — just isn’t there. Call it instinct if you will, but a reasonable human being can tell the difference between the things we do and make and the contrived, data-driven simulacra of thought-approximating algorithms. An initial, superficial “Oh, that’s lovely!” soon becomes an “Oh, this isn’t what I thought it was.” And that kind of deflating disappointment will toll the end of our infatuation with AI. Like any other once-novel discovery, this, too, will lose its luster.
So, what’s the big picture? The AI “scare” is similar to that of Y2K: Yes, we should consider it, but no, it isn’t Armageddon. As we prepare and adapt, we will add it to our toolboxes, become indifferent to it, and move on. Just as we are healing the hole in the ozone layer, just as we curbed acid rain, and just as we defeated diseases of long ago, we will coexist with this latest change until it no longer seems intriguing or threatening. We could easily theorize a future dystopia like those seen in science fiction, but it’s more likely that balance will prevail, as it always has. For parents, teachers, and creators, AI is nothing to obsess over. Put simply, it’s just another thing. And if history has taught us anything, it’s that things are perishable.