In corporate technology sales, certain stories circulate as parables about how to overcome challenges. The stories are usually true, albeit retold through many variations, like the old game of telephone. In this telling, set sometime in the late 1990s, a Xerox salesman is asked if he is worried about the impending arrival of the paperless office. He replies, “I’ll worry about the paperless office when there is a paperless bathroom.” Sure, it’s a sly little poop joke, but the broader message still holds: don’t believe the hype, because it’s been said before and nothing has changed. It’s also how I feel when I hear talk of artificial intelligence (AI) taking over the world.
The latest histrionics about AI come from a place well known for its daily histrionics: Hollywood, and specifically the writers who were just on strike. Among their great fears, other than getting the mindless puffery they call scripts produced, is that ChatGPT and its ilk will suddenly spit out endless award-winning scripts, putting the writers out of business. Having read some ChatGPT output, I can say it is, at best, Fast and Furious 9-level fluff. On the other hand, if AI can somehow make Ben Affleck show an emotion while acting, that movie ticket might actually be worth the AI investment.
But what is AI? Is it the Turing test, where a human can’t tell whether the typed reply comes from a machine or another person? Many thought we had reached that point when they started typing into a customer service chat box, only to find that not only was there no human at the other end, but the bot would freely admit it was just a machine and that they would have to call a real human after all. “AI” bloat has gotten so bad that the new washer and dryer in our home has an “AI” setting. Call me a cynic, but I doubt there is a Pentagon-level supercomputer dedicated to figuring out my laundry’s precise moisture content.
But we still aren’t any closer to defining the intelligence of AI. Sure, computers have beaten chess grandmasters and Jeopardy! champions, but wasn’t that just lots of brute-force computing? Programmers can create deepfakes that certainly bend what we perceive as real, but can we say for certain that is a problem? What we do have is plenty of fear, manifesting in a bipartisan effort to prohibit AI from launching nuclear weapons. “While U.S. military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited,” Rep. Ken Buck, R-Colo., said in April. I hate to break it to Ken, but we were there nearly 40 years ago in the movie WarGames. Though it cut things close, that film showed us that human ingenuity could overcome the worst intentions of AI (or at least its ’80s dial-up modem version).
Even the Biden administration has gotten into the act, issuing an executive order, under the guise of the Defense Production Act, requiring developers to notify the government when building any system that poses a “serious risk to national security, national economic security or national public health and safety,” as well as, according to reports, taking steps to begin establishing standards for AI safety and security, protecting against fake AI-generated content, shielding Americans’ privacy and civil rights, and helping workers whose jobs are threatened by AI. In other words, AI needs to fight AI to stop AI-fighting humans who are fighting AI. Good luck with that.
Without a doubt we are in an uncertain time with AI technology, whatever form it takes. And maybe that technology could cobble together a rebuttal to Descartes’ philosophy of “I think, therefore I am” without simply pulling the CliffsNotes off the Internet. But until it can think like Descartes in the first place, I’m not going to lose any sleep over some Terminator/Skynet end of the world. And with that, I will address one concern no computer has: a need to go to the bathroom, where I assure you there is plenty of paper.
© 2023 Alexander W. Stephens, All Rights Reserved.