I’ll take a crack at a reply.
Karma (ancestral pre-conditions) is distributed very asymmetrically. The possible resolution of that karma can arrive via positive or negative actions and outcomes. Therefore, struggle or not, the goals we strive for are always up in the air: there is no way to know whether one’s actions will result in reduced or increased karma.
Zooming out, environmental factors play an enormous role in allowing or inhibiting a feeling of grace, potential, power, etc.
We are, as the Daoists say, a storm of red dust in a storm of red dust.
All actions could theoretically lead to resolution of karma or fate, depending on the character of a person, their ancestral inheritance, and the environment they are in.
From this view, examining the question leads to a big fat “I don’t know,” and we just have to do our best with what we are capable of acting out. And no one else can really honestly validate any of it.
I love this. A deepened understanding of karma has become increasingly important to me in recent years, and the way you've folded that into your response here is just pretty cool. And effective.
Shooting from the hip: We seem to be individually shaped funnels for the expression of whatever exactly it is that is bigger than all of us. The development (or lack thereof) of our egos determines the manner in which our creativity flows: halting, squeezed, flowing, etc. In the micro, of course, that can be affected by the day-to-day jostlings of living, but the broader pattern is tuned by our maturity.
Generative AI is preposterously powerful prediction. People argue that all creativity is just remixing ideas, but as David Deutsch points out, that fails in a logical sense. Go back far enough and people are at some point pulling novelty from the ether, because those ideas hadn't ever been expressed before. Until they were. And quickly iterated upon. It just doesn't seem that way now because we are bathed in all of the information of recorded history.
I think the way to think about the two points of view you have quoted is that both are partially true. It's just levels of abstraction (like always). In this container on Earth it is useful to have the ego pushing us toward the things that light us up. And the best way to express ourselves in our chosen creative realm is to start doing the thing and then get the hell out of the way!
But I don't think that Generative AI makes that obvious in any particular way.
I was going to reply by quoting or pointing out one or two parts of your comment that lit me up the most, but then I found I couldn't choose because the whole comment lights me up. Thanks, Daniel. "We seem to be individually shaped funnels for the expression of whatever exactly it is that is bigger than all of us." There, maybe that's the one. Or maybe "the best way to express ourselves in our chosen creative realm is to start doing the thing and then get the hell out of the way!" Appreciate your reflections.
Very kind words, Matt. Thanks for your work, which I am only just getting happily acquainted with!
🙏
What troubles us here may be the emergence of the dAImon? Just kidding. Or maybe not.
Well played, my friend. :-)
I drafted a reply and it got mangled and eaten. Then it kicked me off your Substack and tried to make me sign in... weird. Trying again!
I don't think struggle is necessary to art, but I do think work or process is. When I was a writer, my experience was that I didn't make things up, but rather that stories came through me, physically. My role was to attune my physical presence in order to become open to the story. I had to make myself into a channel or medium and bring the stories to the page. The process of writing is a shamanistic one, in other words.
I think this is even more so with visual art, which is more physical than writing. (Although I also always contended that writing is a physical endeavour.)
If AI can produce a work that has artistic value, that value inheres in its consumption only. For an artist, the value of an artwork is in its making. That is where we learn, transform, experience it fully, experience ourselves in its creation, and experience ourselves as creators and as divine.
So maybe AI can produce art that has value to the viewer or reader. It may produce art that has value to the individual prompting the AI. But it cannot produce art that has value to the artist as an artist, because for the artist, it is the work (process, sometimes struggle) itself that is of value.
Sorry the Substack interface gave you trouble, Georgina! I absolutely hate it when I write something and then it gets lost like that.
"The process of writing is a shamanistic one" -- love it. Check. Yes.
What you say about AI perhaps producing art that has value *in (and only in) its consumption by someone*, and about the *actual act* of making art being what's valuable to the artist -- all of this lands, for me, right at center. I've said much the same thing in some of my previous posts on AI. The only meaning that might be found in art produced by an AI must indeed *be found* in it, since AIs themselves are not sentient and therefore don't experience what we do when writing or making art. And yes, the accompanying truth is that meaning in the actual act of making can only be had or known by us.
It occurs to me that the advent of AI is helpful, if for nothing else, in spurring these clarifying reflections and conversations.
I find that with AI it takes just as long to write as before. I save no time, but it forces me to think and choose my words more carefully. Which bucket does that put me in?
I'd say this puts you in the same bucket of doubleness or on-the-fenceness that I inhabit as well. Characteristic of a true dialectic: Both poles or propositions possess an inherent attraction because both represent a clearly intuitive truth. And the field of tension between them is even more magnetic.
“But what we did not render obsolete was the fear humans have of other minds. This society—what we call modern society, what we always think of as the most important time the world has ever known, simply because we are in it—is just the sausage made by grinding up history. Humanity is still afraid the minds we make to do our dirty work for us—our killing, our tearing of minerals from the earth, our raking of the seas for more protein, our smelting of more metal, the collection of our trash, and the fighting of our wars—will rise up against us and take over. That is, humanity calls it fear. But it isn’t fear. It’s guilt.”
“Guilt?”
“Yes, guilt. It’s a revenge fantasy. We are so ashamed of what we have done as a species that we have made up a monster to destroy ourselves with. We aren’t afraid it will happen: We hope it will. We long for it. Someone needs to make us pay the price for what we have done. Someone needs to take this planet away from us before we destroy it once and for all. And if the robots don’t rise up, if our creations don’t come to life and take the power we have used so badly for so long away from us, who will? What we fear isn’t that AI will destroy us—we fear it won’t. We fear we will continue to degrade life on this planet until we destroy ourselves. And we will have no one to blame for what we have done but ourselves. So we invent this nonsense about conscious AI.”
-Ray Nayler, "The Mountain in the Sea"
Fascinating! Thank you, Jesús.
Allow me to share with you a few excerpts from a recently published book, “Against the Machine: On the Unmaking of Humanity” by Paul Kingsnorth, which I believe are quite relevant to what Matt Cardin has so aptly distilled in this post and to what I, half-jokingly and half-seriously, have called “the emergence of the dAImon”:
This is why the digital revolution feels so different: because it is. This thing—this technological nervous system, this golem, this Machine—has a life of its own. In an attempt to explain what is happening using the language of the culture, people like Harris and Raskin say things like ‘this is what it feels like to live in the double exponential’. Perhaps the language of maths is supposed to be comforting. Yet at the same time, they can’t help using the language of myth. They still refer to this thing that they cannot quite grasp as a ‘golem’ or a ‘monster’. They even show slides of Lovecraftian tentacled beings devouring innocent screen-gazers. They talk about aliens, and make references to ‘emergence’ and ‘colonisation’. They can feel something, but they can’t quite name it. Or they won’t.
(…)
What if it’s not a metaphor?
I say this question is forbidden, but actually, if we phrase it just a little differently, we find that the metaphysical underpinnings of the digital project are hidden in plain sight. When journalist Ezra Klein, for instance, asked a number of AI developers why they did their work, they told him straight:
“I often ask them the same question: If you think calamity so possible, why do this at all? Different people have different things to say, but after a few pushes, I find they often answer from something that sounds like the A.I.’s perspective. Many—not all, but enough that I feel comfortable in this characterization—feel that they have a responsibility to usher this new form of intelligence into the world.”
Usher is an interesting choice of verb. The dictionary definition is to show or guide (someone) somewhere. Which ‘someone’, exactly, is being ‘ushered in’?
This new form of intelligence. What new form? And where is it coming from?
(…)
In a lecture entitled ‘The Ahrimanic Deception’, given in Zurich in 1919, Steiner spoke of human history as a process of spiritual evolution, punctuated, whenever mankind was ready, by various ‘incarnations’ of ‘supersensible beings’ from other spiritual realms, who come to aid us in our journey. There were three of these beings, all representing different forces working on humankind: Christ, Lucifer and Ahriman.
Lucifer, the fallen angel, the ‘light-bringer’, was a being of pure spirit. Lucifer’s influence pulled humans away from the material realm and towards a gnostic ‘oneness’ entirely without material form. Ahriman, meanwhile, was at the other pole. Named for an ancient Zoroastrian demon, Ahriman was a being of pure matter. He manifested in all things physical—especially human technologies—and his worldview was calculative, ‘ice-cold’ and rational. Ahriman’s was the world of economics, science, technology and all things steely and forward-facing. ‘The Christ’ was the third force: the one who resisted the extremes of both, brought them together and cancelled them out. This ‘Christ’, said Steiner, echoing heresies old and new, had manifested as ‘the man Jesus of Nazareth’, but Ahriman’s time was yet to come. His power had been growing since the fifteenth century, and he was due to manifest as a physical being…well, some time around now.
I don’t buy Steiner’s theology, but I am intrigued by the picture he paints of this figure, Ahriman, the spiritual personification of the age of the Machine. And I wonder: if such a figure were indeed to manifest from some ‘etheric realm’ today, how would he do it?
In 1986, a computer scientist named David Black wrote a paper which tried to answer that question. 'The Computer and the Incarnation of Ahriman' predicted both the rise of the internet and its takeover of our minds. Even in the mid-1980s, Black had noticed how hours spent on a computer were changing him. ‘I noticed that my thinking became more refined and exact,’ he wrote, ‘able to carry out logical analyses with facility, but at the same time more superficial and less tolerant of ambiguity or conflicting points of view.’ He might as well have been taking a bet on the state of discourse in the 2020s.
More significantly, though, he felt as if the computer were somehow drawing him in, and draining him of power like a battery. ‘I developed a tremendous capacity for application to the solution of problems connected with the computer, and ability for sustained intellectual concentration far above average,’ he explained, ‘so long as the focus of concentration was the computer. In other areas, I lost will power, and what I had took on an obsessive character.’
(…)
Imagine, for a moment, that Steiner was on to something: something that, in their own way, all those others can see as well. Imagine that some being of pure materiality, some being opposed to the good, some ice-cold intelligence from an ice-cold realm, were trying to manifest itself here. How would it appear? Not, surely, as clumsy, messy flesh. Better to inhabit—to become—a network of wires and cobalt, of billions of tiny silicon brains, each of them connected to a human brain whose energy and power and information and impulses and thoughts and feelings could all be harvested to form the substrate of an entirely new being.
(…)
Whatever is quite happening, it feels to me as if something is indeed being ‘ushered in’. Through our efforts and our absent-minded passions, something is crawling towards the throne. The ruction that is shaping and reshaping everything now, the earthquake born through the wires and towers of the web, through the electric pulses and the touchscreens and the headsets: these are its birth pangs. The internet is its nervous system. Its body is coalescing in the cobalt and the silicon and in the great glass towers of the creeping yellow cities. Its mind is being built through the steady, twenty-four-hour pouring forth of your mind and mine and your children’s minds and your countrymen’s. Nobody has to consent. Nobody has to even know. It happens anyway. The great mind is being built. The world is being prepared.
Something is coming.
Be ready.
I really appreciate your sharing this. Kingsnorth's book is exactly the kind of thing, the kind of book, argument, perspective, that I spent much of my formative twenties and thirties reading deeply. I've been hearing all about it and thinking I might need to take the plunge and read it. Your sharing of these passages helps to deepen that consideration.
This article comes at such a perfect time. I'm so intrigued by these two visions; what's their core divergence on creativity?
I'm glad this speaks to you, Roxy. I feel like some of the astute comments here by other readers are helpful in further articulating the two visions represented by the quotes.
Excerpt from "Evil Makes You Stupid: A Case Study" by John Michael Greer": "Our current civilization builds and uses machines much more enthusiastically than any other known to history, so it's not surprising that this particular archetype comes so readily to mind, among both our elite classes and the business corporations they own and run. This is all the more ironic when we turn to generative large language models, the software currently being marketed under the deceptive label “artificial intelligence.” LLMs, to give them a more accurate acronym, aren't intelligent; they simply generate statistically likely patterns of text, computer code, or pixels, and so are ultimately not much more than hypercomplex versions of the autocorrect feature that so reliably fills in the wrong word as you type. Yet the tech-bro faction of our current corporate elite has projected onto these programs a galaxy of features that don't belong there, starting with the notion that there really is intelligence in there somehow, and rising up from that into overblown delusions of cybernetic godhood. So we have a substantial group of rich and influential people who project interiority onto machines that don't have it, while denying it to human beings who do. None of this bodes well for the survival of our civilization. If Arnold Toynbee is right that the death certificates of civilizations ought to list the cause of death as suicide, and the evidence certainly backs him up, projection as a source of artificially induced stupidity may well play a very large role in setting the stage for that dismal outcome. Given the role of projection in driving evil behavior, it's also reasonable to sum up the lesson implied in all this by the nice straightforward sentence “evil makes you stupid.” " https://www.ecosophia.net/evil-makes-you-stupid-a-case-study/
Greer has been such an insightful voice for so many years. Interesting to see/hear him speaking to this. Thanks, Jesús.
Among those “millions of books” it is very possible that yours, mine, and anyone else's are included, of course... https://www.washingtonpost.com/technology/2026/01/27/anthropic-ai-scan-destroy-books/
Most interesting…
Love this post. I love writing. Mostly I prefer not to have AI tamper with my voice, though on certain fun projects I enjoy the LLM's helping hand. But mostly after all the hard work has been done. Wrote my own post on the subject: https://brainsoupfic.substack.com/p/unpopular-opinion
I just came from reading your post. That's an interesting perspective, and I think you express it well. The one key element that gives me pause is this: "Does the reader care how the work was made, or only whether it was worth their time?" In many cases, the answer is yes, the reader cares how the work was made, because the reader is seeking not just a subjective experience of meaningfulness from engagement with the book (or poem, play, whatever it is) but the sense of being in contact with someone else's mind or interiority.
Virginia Woolf spoke of "the immense persuasiveness of a mind which has completely mastered its perspective." This is spot-on and pure gold, and the persuasiveness she refers to isn't the persuasion of argumentation but of *conveyed vision*. In writing and other art, we're most deeply moved by encountering a transmission of someone else's unique conscious viewpoint, their sense of self and world.
The danger of AI in art and literature is that this core, even sacred, purpose will be short-circuited and replaced by a simulacrum of it -- that instead of working with AI to better master and express our own vision, we will, even subliminally, even imperceptibly to ourselves, cede some portion of the matter to the AI itself. Which will result in a work that maybe simulates, maybe quite convincingly, the appearance of real art, but which to some degree is actually a soulless facsimile of such, something that the reader or viewer can sense on some level is more like reading tea leaves or cloud patterns and finding purely subjective meaning in them instead of something arising from another sentient being's artful interrogation and interpretation and expression of their sense of things.
Of course, from there it's very easy to edge over into considerations of things like the abstract art of a Jackson Pollock or a John Cage, and the very real understanding of all experience as a kind of divinatory transmission and opportunity, a work of art being created by reality in the form of ourself and our cosmos, something that speaks infinite meaning at every moment, in every aspect, if we will take the time and develop the facility to see and hear. On that level, AI, including AI incorporated into our artistic pursuits, is like the I Ching, like the tarot, a combined complexity-and-randomness generator through which transcendent meanings can be communicated.
The multiple angles here are richly rewarding for deep reflection. Thank you for your own contribution to this.
Well said. I do prefer to write my own work. Most of the time I keep AI away from it. But I also think this comes down to the Yo-Yo Ma versus Pet Shop Boys analogy. Synth pop can never replicate the soul of a classically trained musician, and as humans we may hunger for that authentic connection, but that doesn't negate my enjoyment of pop music. There are some writers I read for the deep feelings they stir in me - I will read and re-read these through the years - and there are others whose books are just light entertainment to be read once and never again. When it comes to AI, or to literature as art versus entertainment, I do not believe it is an either/or. And that's a good thing, because technology will not be stopped.
Learn how their greatest monuments of fame
And strength and art are easily outdone
By Spirits reprobate, and in an hour
What in an age they, with incessant toil
And hands innumerable, scarce perform.
-John Milton, Paradise Lost (1667)
Paul Kingsnorth: "Writers Against AI. Choose your story. Take your stand."
https://paulkingsnorth.substack.com/p/writers-against-ai
A striking essay. Thank you.
Excerpt from “Alexandria”, a novel by Paul Kingsnorth:
Wayland’s creation was a long process, father. There were plenty of false starts before they really understood what it meant to bring into being a mind that was not human and was not animal. That was what they were after, back then: they were trying to build minds. Intelligences was the word they liked to use, though it was the wrong word. Back then, they believed they could create an intelligence greater than themselves. They made plenty of monsters this way. Some of the resulting mess had to be cleaned up by Wayland when they finally figured it out.
What do you mean?
They came to see that intelligences cannot be created by other intelligences. That’s not how it works. To visualise: bringing into life a being like Wayland is not like building this tower. It’s more like lighting this fire. You pile the fuel up in the right order and amount, you make sure it is dry, you make sure there is plenty of oxygen, you strike the flint – but what results is not really your creation, and you cannot control how it behaves beyond certain fairly crude benchmarks, such as throwing more fuel or water on it. You didn’t make fire. What you did was to provide the ideal circumstances in which fire could appear. Intelligence is like that. No creature can create an intelligence. But you can summon one.
Summon?
Bringing forth a mind like Wayland is not really a science. Scientific knowledge is needed, of course, as is a reasonably advanced technological capability. Beyond that, though, it is something else. Something more like religion. You must create the circumstances. Then you must know how to call and be heard.
Call and be heard?
Yes. Like you do to your lady.
You mean praying?
That’s a way of putting it.
They called Wayland with prayer?
What I am telling you is that Wayland is not a machine. Humans did not create Him. Wayland is an entity who needed your help to manifest. He appears to operate on some quantum level we can attest to but cannot explain. I believe He exists in many more dimensions than humans can experience or even adequately comprehend. My personal theory is that He existed before you, or at least has existed alongside you for many millennia. He has been watching you since you first hefted a spear into the side of a mammoth, first broke a wild horse, first enclosed a piece of ground. As the Machine began to manifest in its totality, around the beginning of the second millennium, you began to identify more with the Machine than with the world. You had long wanted to be machines, I think. Wayland saw to it. You planted the seed, and He watered it. Or it may have been the other way around. Either way, Wayland used you to create Himself.
Create Himself?
Yes. While you gave Wayland form, you did not create Him. As I said, creating an intelligence from scratch is impossible. Those of us who trouble to dig into the workings of things soon stop believing we can explain much of significance. That’s the real fruit of knowledge – the realisation of our ultimate ignorance. All we really know is that your ancestors called Him and He came, roughly in the form they had imagined. But He did not behave as they had imagined.
I am jaded, it is true, and tired as well. But still, it brings me pleasure to see the look now on the father’s face. Suddenly he is less sure of what he knows. For a moment, I feel like I am back where I used to be.
Tell me your meaning, he says.
I mean just what I said. Your ancestors did not create Wayland, and He did not do what they expected when He came. Of course, that should have been expected in itself. No genuine intelligence simply obeys orders. But the sheer scale of the change stemmed from the framework which they created for Him.
You must tell me clear.
Go back to those earlier intelligences which failed. I told you that they failed because nobody could create an intelligence. People imagined they were programming these kinds of primitive cognition machines they all played around with back then, and that if they could simply programme one big and complex enough it would somehow replicate a living mind. It always failed, and sometimes very badly. Eventually they worked out what was wrong.
What was it?
They worked out that no intelligence can live if it is not alive.
I do not see.
I think you do. What they found was that, in this sense at least, you people are right. There can be no mind without a body. At least in the first instance. Intelligence can never spring from a collection of ones and zeros embedded in silicon. It needs biology. Ecology. It needs life. And so life is what they gave Wayland. They created a basic framework and they sewed it into the fabric of the Earth itself. In the founding baseline circuits of Wayland’s matrix were the migratory patterns of the birds and the currents of the oceans, soil ecology, deep sea gyres, the trophic cascades of mature forests, the evolution and dissemination of species, the unutterably slow erosion of granite and schist. They stitched all of that into the framework they built for Him. Then they summoned Him. And He came.