The Personal Alchemy of Writing in an Age of Machine-Written Text
An AI cannot give voice to your personal vision
In a past career, I spent six years at the high school level and nine at the college level teaching composition and literature. And I stand before you, digitally/virtually speaking, to say that right now, at the advent of the age of machine-produced writing that can for the first time in history pass as authentically human, the recent essay “The Technology of Writing: From the Essay to GPT-3” by Derek Neal for 3 Quarks Daily is exactly what needs to be said. It’s a lucid take that I heartily recommend to all readers of this newsletter, plus everybody else.
I’ll leave Neal’s essay mostly unquoted here, in order to encourage you to click through the footnote below and read it for yourself. It’s a well-informed piece of work that references, among other things, Walter Ong’s Orality and Literacy, with its classic exploration of the human and societal/civilizational stakes, effects, and implications of the historical transformation from an oral culture to a literate one. But here’s a stepping-stone path through the text, consisting of a few of my favorite sentences:
Some sort of alchemy occurs when I put pen to paper, or in this case, pen to screen, as I set down the stuff knocking about my brain and give it a more solid, permanent form. . . . The truth is, AI is at the point where it can write better than most of us, and certainly better than the incoming college freshman. So, why write? . . . In a university writing course, the emphasis is often on improving one’s writing “skill.” These skills are said to be necessary for future jobs in which one must correspond with colleagues, clients, or customers in writing. This framing of the purpose of writing is a microcosm of the university as a whole, as the university has become a place for skill development, which will help one find a job upon graduation and justify the investment of a university degree. However, this justification for writing falls flat when computers can write better than we can. . . . In order to understand why we write, why I’m writing this essay, why I implore the students I teach to write as much as possible, we must remember that writing is not simply a tool to express ourselves, but a mode of expression that shapes our consciousness. To write is to think in a certain way that is not possible when communicating orally, and to lose this ability, or to never cultivate it in the first place, is to be a lesser version of the person one might be. . . . In order to keep writing, we must remember why we write—not to communicate, but to think in a way that is not possible otherwise.1
As for my own thoughts, which arise in response to and in tandem with Neal’s words:
The ancient, intrinsic value of writing as an alchemy of self-discovery and self-formation, and as a cognitive and spiritual shaper of civilizations, reemerges forcefully when machines are able to supplant the purely practical, market-based “writing as skill” approach that has dominated classrooms for decades. In other words, there is an opportunity here for clarifying why we write at all, humanistically speaking.
An artificial intelligence can now write an essay for you, if essay simply means a conceptually organized set of information. But an artificial intelligence cannot give coherent, clarifying, formative voice to your scattered thoughts, feelings, intuitions, and personal vision. Only you can do that.
(As a side note, it turns out, disturbingly, that the latest generation of OpenAI’s GPT project, named ChatGPT, which was newly released in the past couple of weeks, can produce, in the words of data scientist Teresa Kubacka, academic papers that lay out “a parallel universe of plausibly [sic] sounding, non-existing phenomena, confidently supported by citations to non-existing research.” Kubacka notes that this opens up the very real possibility of an epistemologically dystopian scenario in which “we will be fed with hallucinations indistinguishable from the truth, written without grammar mistakes, supported by hallucinated evidence, passing all first critical checks.”2)
A friend and colleague who read a shorter version of these comments when I posted them a couple of days ago on LinkedIn responded, astutely, “So what happens when the machine can give voice to my scattered thoughts? The day is technologically inevitable, I’m afraid.”
I agree. The technology will advance to the point where it really does seem able to say for me what I feel unable to say for myself. And when that happens, the phrase “my thoughts” will have been exploded, and it will henceforth be the machine doing the thinking. Technologies as enablers of thought — including the ancient technology of writing itself — are one thing, but technologies as drivers and, in effect, usurpers of thought are quite another.
I keep wondering who an AI-written text is actually for. Who is the “intended” audience? The scare quotes around “intended” are warranted because, after all, intended by whom? Let’s say a student — or a journalist, novelist, or essayist — asks an AI like ChatGPT to compose a text for them. Unlike text written by a person, which is a communication produced by one self-aware sentient being for reception, comprehension, and appreciation by another, text written by an AI — which is not actually conscious but just an amalgamation of algorithmic rules — is essentially akin to the proverbial infinite horde of monkeys pounding away at typewriters, trained beforehand with a few behavior modifications that cause them to pound in certain patterns. They may produce something that appears meaningful to somebody who is literate in the given language, but that is only an appearance, because in actual fact there is nobody — no mind, intelligence, or vision, no consciously felt motivation or inspiration with an accompanying creatively channeled intent and effort — “behind” the text. Instead, it’s all just smoke, just foamy spray from the ocean, just the textual equivalent of a Jackson Pollock canvas covered in random paint flingings. It is quite literally a text “written” by nobody, which means it is also a text written for nobody. And if some living person — a teacher or professor, a newspaper reader or editor, a connoisseur of novels or essays — happens to read it and discern some type of meaning in it, that result is, in essence, just a side effect, another random occurrence without intrinsic significance. Reading a novel or essay written by a machine is ultimately no different from reading cloud patterns or tea leaves.
So again, if and when a machine becomes able to give voice to my thoughts, “my thoughts” will no longer be accurately describable as mine, even if what the machine produces seems to express deep things from within me that I had been unable to articulate on my own. Though you or I may read meaning into them, they will actually be “thoughts” by nobody, written for nobody, fulfilling no purpose.
This only reemphasizes my (and Neal’s) original point: Only you can do the writing that only you can do. This also applies to all other fields of creative and intellectual endeavor. And although the point as stated sounds like a tautology, in the uncharted, AI-populated territory now opening up before us, it actually states a vital truth.
1. Derek Neal, “The Technology of Writing: From the Essay to GPT-3,” 3 Quarks Daily, December 12, 2022.

2. Teresa Kubacka (@paniterka_ch), “Today I asked ChatGPT about the topic I wrote my PhD about. It produced reasonably [sic] sounding explanations and reasonably [sic] looking citations. So far so good — until I fact-checked the citations. And things got spooky…,” December 5, 2022, https://twitter.com/paniterka_ch/status/1599893718214901760.