Gerald R. Ford Leadership Forum

Can AI Do All That It Claims?

by Jeff Polet

Artificial Intelligence is in the news with a vengeance as OpenAI has released a new version of its ChatGPT chatbot. I’ll confess to a great deal of ignorance about how AI works and what its prospects are; I’ll also confess to my skepticism about the enterprise, since the word “artificial” is as suggestive as the word “virtual” in “virtual reality.” Both qualifiers indicate that we are at least a step removed from real things, which is alarming at one level but seems unsustainable at another.

This report from the UK is of interest in part because it reflects on how, contrary to what we might sometimes believe, disengagement from reality through advanced technology shrinks rather than expands the imagination. According to the author, “ChatGPT is the most sophisticated bit of ‘generative’ artificial intelligence we’ve yet seen, ‘trained’ to behave as though there were a smart and capable human being trapped inside your computer.”

Clearly AI has come a long way from its early “Eliza” days, but the basic problem hasn’t changed: despite its increasing sophistication, AI remains a completely subordinate technology. Its creators can program it to perform impressive feats, but it can’t do more than its creators ask of it.

That’s not to suggest it’s benign. Consider the claim that “Clearly, this has huge implications for more or less any task involving words. It’s not so much that AI will end up taking every writing or teaching job on the planet (though it might well snaffle up a few). It’s that it’ll very quickly become common practice for humans to use AI to skip the vast majority of the writing (and research) process — allowing us, effectively, to stop having to think.” Any writer worth his or her salt will tell you that writing is always a way of working out thinking.

The author continues: “Lower standards will become the norm. AI might never produce works of Dickensian brilliance, but then, under its influence, we might never again, either — and we won’t even really care anymore. [Emphasis added] Research skills will disappear, too, with answers readily available with a prompt of a few words. We’ll stop trying to wrestle original ideas from our own minds, instead getting AI to generate endless options from which we simply pick and mix. Our intellects, without proper exercise, will atrophy. Human culture will descend into a pernicious feedback loop…”

Surely one negative consequence is that it exacerbates a culture of suspicion. It was already difficult enough to be an instructor wondering whether papers were plagiarized, but one could, if sufficiently diligent, make good-faith efforts to track down sources. Here there will be no sources to track down, meaning among other things that teachers will be even more disincentivized to assign writing—another indication of how new technologies not only attenuate skills, but bring with them their own justification for such attenuation.

The implications are much larger than simply the diminishment of writing or reading skills. One significant effect is that we tend to see knowledge as discrete and fragmented, with no framework that enables us to locate the parts within an intelligible whole. As Edna St. Vincent Millay poetically reflected:

Upon this age, that never speaks its mind,
This furtive age, this age endowed with power
To wake the moon with footsteps, fit an oar
Into the rowlocks of the wind, and find
What swims before his prow, what swirls behind —
Upon this gifted age, in its dark hour,
Rains from the sky a meteoric shower
Of facts . . . they lie unquestioned, uncombined.
Wisdom enough to leech us of our ill
Is daily spun; but there exists no loom
To weave it into fabric; undefiled
Proceeds pure Science, and has her say; but still
Upon this world from the collective womb
Is spewed all day the red triumphant child.

From the essay: “It all comes down to our culture’s evolving conception of knowledge. In pre-modern times, it was generally held that different kinds of knowledge existed in a clear hierarchy: first principles and general truths on top, with specialist information and skills beneath. Particular facts were only considered useful if they could be situated within, and made sense of by, a broader set of fundamental truths: what things are morally good and bad, what life’s ultimate meaning is, what beauty is and so on.”

Discussion Questions:

  1. T.S. Eliot wondered about the knowledge we have lost in information and the wisdom we have lost in knowledge. His argument was that we were becoming more ignorant about the things that really matter. Is he right about this?
  2. Are human beings creating a world where they are no longer relevant? No longer really human?
  3. Do the tools we develop not only make us less capable of making moral choices, but also make moral choices themselves less significant?
  4. How might we restore moral deliberation in a world where morality is so often bypassed?