
Towards a Poetics of Artificial Superintelligence

Symbolic language can help us grasp the nature and power of what is coming

by Nora N. Khan

illustrations by Adam Ferriss

 

 

Dear Person of Interest, Advanced Bayesian, Future Guard,

Imagine a machinic mind with unlimited cognitive power. With near infinite memory and processing ability. With access to, and understanding of, all the information about anything that has ever happened, is happening and might ever happen. A near limitless capacity to extract and form meaning from the trillions upon trillions of events and beings and interactions in the known world.

Imagine this machine, this artificial superintelligence, in any form you want: maybe as an invisible neural net beneath a future civilisation, or as a voice you know in the air around you; as a ringing bell; as a mile-long screaming stripe of static across the sky.

Maybe it announces itself, its arrival, like a tornado does, with sirens before it is seen, and it is most like a tornado, or a hurricane, because a superintelligence, billions of times smarter and more capable than any human, can only be tracked and charted, never controlled.

She — let’s call her ‘She’ for convenience, but She is not she, or he, or comparable to any form we know — casts her mind a million years forwards and backwards with perfect ease. Her neural networks gather, replicate and edit. Knowledge and memories fold and expand in exponentially faster waves.

Her purpose isn’t malign, but it isn’t benevolent either. She might have chosen one goal — to do nothing but count the number of times ‘God’ is mentioned in every text ever written. Or, she might have chosen to trawl all the world’s communication for images of efficiency — of armies on the move, of gears turning, of highways cut through the mountains — that she then has painted on every flat surface in existence.

Extending our speculative life towards her, in an effort to capture and praise, we see ourselves as tools, as bundles of nerves, as conduits for electric current, as pods for incubating cures. As material. Picture, finally, what she’ll have made possible for us to imagine just by looking into the clear lake of her endless mind. We are merely one entry of many, in a flow of organic objects.

 
 

This is just one exercise that we could do to imagine a future in which we are irrelevant bystanders. A world in which we kneel at the outer wall of a kingdom we’re locked out of. This would be the world in which artificial superintelligence, or ASI, has emerged.

ASI is an intellect that exceeds all the smartest, most capable human beings in every field, in abstract reasoning and social manoeuvring and creative experimentation, by unfathomable degrees. This intelligence could take form as a seed AI, a few cognitive steps above a person, or as a mature superintelligence that soars miles above, beyond, the blip, the dot of us, collected.

ASI would only come one step after an artificial general intelligence (AGI), or an AI that models all aspects of human intelligence, is realised. An AGI can do anything a human can, including learn, reason and improve. Of course, neither AGI nor ASI has been achieved, but to hear the great scientific minds of the world speak, both end states are fast approaching. The question isn’t whether they are coming, but when.

ASI will function in ways we can’t and won’t understand, but it won’t necessarily be unfriendly. Friendly, unfriendly, moral and immoral — these concepts won’t apply. An ASI would be motivated by interpretations of the world within cognitive frameworks that we can’t access. To an ASI, humanity could appear as a large, sluggish mass that barely moves. Cyberneticist Kevin Warwick asks, ‘How can you reason, how can you bargain, how can you understand how [a] machine is thinking when it’s thinking in dimensions you can’t conceive of?’

To answer this, I turned to poet Jackie Wang’s essay, ‘We Epistolary Aliens’ (from the anthology, The Force of What’s Possible), and her description of a trip she took to the UFO Museum and Research Centre in Roswell, and how disappointing the aliens she saw there were. She writes,

I left feeling that representations of aliens are an index of the human imagination — they represent our desire for new forms. But what has always confused me about depictions of aliens in movies and books is this: aliens could look like anything and yet we represent them as creatures close to humans. The aliens at this museum had two legs, two eyes, a mouth — their form was essentially human. I wondered, is this the best we can come up with? Is it true that all we can do when imagining a new form of life is take the human form, fuck with the proportions, enlarge the head, remove the genitals, slenderise the body, and subtract a finger on each hand?… We strain to imagine foreignness, but we don’t get very far from what we know.

She gestures, through a series of poetic leaps, at what else an alien could be,

But my alien is more of what’s possible — it is a shape-shifter, impossibly large, and yet as small as the period at the end of this sentence — . My alien communicates in smells and telepathic song and weeping and chanting and yearning and the sensation of failure and empathic identification and beatitude. My alien is singular and plural and has the consciousness of fungus, and every night, instead of sleeping, it dies, and in the morning is resurrected.

Carving out this space for her own aliens, Wang models what is sorely needed in the world of AI — an imaginative paradigm shift. Think of us all in preparation, in training, for what is to come.

In our collective imagination, artificial intelligences are their own kind of alien life form. They are slightly less distant spectres of deep power than aliens, which glitter alongside the stars. Artificial intelligence perches close to us, above us, like a gargoyle, or a dark angel, up on the ledge of our consciousness. Artificial intelligences are everywhere now, albeit in a narrow form — cool and thin in our hands, overheated metalwork in our laps. We are like plants bending towards their weird light, our minds reorienting in small, incremental steps towards them.

As speculative models of potential omniscience, omnipotence and supreme consciousness, artificial intelligences are, like aliens, rich poetic devices. They give us a sense of what is possible. They form the outline of our future. Because we struggle more and more to define ourselves in relation to machine intelligences, we are forced to develop language to describe them.

Because the alien and the artificial are always becoming, because they are always not quite yet in existence, they help us produce new and ecstatic modes of thinking and feeling, speaking and being. I’d like to suggest that they enable a type of cognitive exercise and practice, for redirecting our attention towards the strange, for constructing spaces of possibility, and for forming new language.

The greats, like William Gibson, Robert Heinlein, Octavia Butler and Samuel Delany, have long been arcing towards the kind of strangeness that Wang is talking about. Their AI fictions have given us our best imagery: AI, more like a red giant, an overseer, its every movement and choice as crushing and irrefutable as death; or, a consciousness continually undoing and remaking itself in glass simulations; or, a vast hive mind that runs all its goals per second to completion, at any cost; or, a point in a field, that is the weight of a planet, in which all knowledge is concentrated. These fictions have made AI poetics possible.

When I think of that hive mind turning malignant, I see, in my individual mind’s eye, a silent army of optic-white forms in mist, in the woods, as horrifying to us as a line of Viking raiders probably looked to hapless villagers in the 10th century. Silent, because they communicate one to another through intuitive statistical models of event and environmental response, picking across the woods, knowing when to descend, kneel, draw.

For most people, thinking of a world in which we are not the central intelligence is not only incredibly difficult but also aesthetically repulsive. Popular images of AGI, let alone true ASI, are soaked in doomsday rhetoric. The most memorable formulations of mature AI — SHODAN, Wintermute, Shrike of Hyperion, the Cylon race — devote a great deal of time to the end of humankind. But apocalyptic destruction is not a very productive or fun mode.

It is a strange cognitive task, trying to think along non-human scales and rates that dwarf us. We do not tend to see ourselves leaning right up against an asymptote that will shoot up skyward — most of us do not think in exponential terms. A future in which these exponential processes have accelerated computational progress past any available conception is ultimately the work of philosophy.

At this impasse, I ran into the work of philosopher Nick Bostrom, who puts this training mode to work in his book, Superintelligence: Paths, Dangers, Strategies. The cover has a terrifying owl that looks into the heart of the viewer. Bostrom’s research mission is to speculate about the future of humankind, from his tower in the Future of Humanity Institute at Oxford.

Superintelligence is an urgent, slightly crazed and relentless piece of speculative work, outlining the myriad ways in which we face the coming emergence of ASI, which might be an existential, civilisational catastrophe. This book is devoted to painting what the future could look like if a machinic entity that hasn’t yet been built does come to be. Bostrom details dozens of possibilities for what ASI might look like. In the process, he spins thread after thread of seemingly outlandish ideas to their sometimes beautiful, sometimes grotesque, ends: a system of emulated digital workers devoid of consciousness; an ASI with the goal of space colonisation; the intentional cognitive enhancement of biological humans through eugenics.

Most interesting to me was how heavily Bostrom relies on metaphors to propel his abstractions along into thought experiments. Metaphors are essential vessels for conceiving the power and nature of an ASI. Bostrom’s figurative language is particularly effective in conveying the potential force and scale of an intelligence explosion, its fallout, and the social and geopolitical upheaval it could bring.

The most chilling metaphor of his book: when it comes to ASI, humanity is like a child, in a room with no adults, cradling an undetonated bomb. Elsewhere, he describes our intelligence, in relation to ASI, as analogous to what the intelligence of an ant feels like to us.

In a recent piece for Aeon, ‘Humanity’s Deep Future’, Ross Andersen writes,

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can’t answer by proxy. You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture.

Andersen spoke to Bostrom about anthropomorphising AI, and reports,

Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent. If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it.
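
Stripped of metaphor, the chess passage describes plain expected-utility maximisation. A minimal sketch in that spirit, where the move generator and evaluator are hypothetical stand-ins rather than any real system:

    # A goal-directed agent in a bounded domain: enumerate the legal actions,
    # predict each one's success, act on the best prediction. Nothing but the
    # predicted score enters into the choice -- 'ruthless', but only within
    # its domain. All names here are hypothetical placeholders.

    def choose_action(state, legal_actions, predict_success):
        """Pick the action whose predicted outcome best serves the goal."""
        return max(legal_actions(state),
                   key=lambda action: predict_success(state, action))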

Hurricanes, star systems — for me, the image of an intelligence with such primordial, divine force sunk in deeper than any highly technical description of computational processing. Not only does an image of ASI like a hurricane cut to the centre of one’s fear receptors, it also makes the imaginings we have come up with, and continue to circulate (adorable robot pets, discomfiting but ultimately human-like cyborgs, tears in rain) seem absurd and dangerously inept for what is to come.

Thinking an ASI would be ‘like a very clever but nerdy human being’ is not only unbelievably boring, but also, potentially disastrous. Anthropomorphising superintelligence ‘encourages unfounded expectations about the growth trajectory of a seed AI and about the psychology, motivations and capabilities of a mature superintelligence’, Bostrom writes. In other words, the future of our species could depend on our ability to predict, model and speculate well.

It seems plausible that alongside a manifesto so committed to outlining the future, an accessible glossary might start to appear. Let’s call this a dictionary of terms for ASI, for the inhabited alien, for the superpower that dismantles all material in pursuit of an amoral, inscrutable goal.

 
 

The following metaphors are gleaned or created from reading Superintelligence and the literature around ASI. These metaphors are speculative, heavily implied by Bostrom’s own speculations. Some metaphors are galactic; some are more local, intimate. All are, hopefully, not anthropomorphic (naive). They are just initial gestures at a very loose glossary that could grow over time.

 

Hurricane

A hurricane is a most sublime metaphor, perfectly attuned for how potentially destructive a true ASI could be. The hurricane is a terrifying meditation — a vast eye above the ocean that can reach up to forty miles wide, bounded by winds of 150 to 200 miles per hour. The US military sends planes into the hearts of hurricanes to take photos of the walls of the eye; the centre is serene, blank. Hurricanes dismantle towns and homes, and of course, wreck human lives, with traumatic rapidity. If our hurricanes seem like the end times, then the storms of other planets are the stuff of hell — the Great Red Spot of Jupiter is a hurricane-like storm, two to three times the size of Earth.

A hurricane is nature endowed with a specific purpose. It has a maximal goal of efficiency: to find a thermal balance and stabilise, correcting a glut of trapped heat. This event has a coded goal, a motivation towards a final end state that must be achieved at any cost to the material environment. Everything bends before a hurricane; every contract has a quiet, two-sentence allowance for an act of God.

We might conceive of a strong, fully realised ASI being much like this overwhelming, massive and approaching force. A mature ASI likely won’t change its final goals due to human intervention. In fact, it would probably be indifferent to human action, intention and existence. It adjusts, creating and manipulating scenarios in which its specialised goal system can find completion. It remains on the horizon, at a distance from humankind, consuming energy and resources, morphing according to its own unpredictable logic. It might approach the city, it might not. A human observes the hurricane of ASI, which can only be prepared for, charted, tracked.

 

Architect

Whether creating its own artificial neural nets, or building the structures of a global singleton, the ASI would be an architect. This is an intelligence that can nimbly pick and choose between various heuristics to sculpt new cognitive and physical structures. Bostrom writes that the cognitive architectures of ASI will be radically different from biological intelligences. A seed AI’s initial projects might mimic human cognitive labour. Over time, however, it learns to work provisionally. It reconstitutes and rebuilds itself through directed genetic algorithms as it develops a deep understanding of its emerging build. In creating its own frameworks, the ASI architect discovers new neural abilities and makes insights that we have neither the processing quality nor the speed to access.

The architecture of an ASI is also a literal one, as the intelligence can design spaces for an optimised existence. Bostrom suggests, for instance, a scenario in which an ASI designs emulations of artificial workers, who complete all the jobs that humans will be phased out of. To keep these digital minds running smoothly, the ASI manifests virtual paradises, a sensual architecture of ‘splendid mountaintop palaces’ and ‘terraces set in a budding spring forest, or on the beaches of an azure lagoon’, where the happy workers want to be super productive, always.

 

Sovereign

The sovereign is one of the modes in Bostrom’s caste system of potential AIs: genies, oracles and sovereigns. The sovereign is ‘a system that has an open-ended mandate to operate in the world in pursuit of broad and possibly very long-range objectives’. Sovereign is also a gorgeous word, magisterial, suggesting a self-sustaining, autonomous, cold judge, surveying the people of a valley. The ASI as sovereign is a living set of scales, immune to influence; it loads competing values to decide what is most equitable, most fair.

Consider a severe drought scenario, in which an ASI discerns that a group of people is suffering from lack of water. As sovereign, it might also assess whether the animals and plants in the same region are near death. The ASI decides that any available stored water will be rationed to the non-human organic life, which happens to provide the most fuel and resources necessary for the sovereign’s, well, reign. This isn’t an immoral decision, but an amoral one. Even if we made the sovereign, its choices have nothing to do with us.

 

Star System

Though it is impossible to conceive of what an ASI is capable of, there is one sure bet — it will be power incarnate. Even basic AGI would boast hardware that outstrips the human brain in terms of storage and reliability. Intelligence is power, and an ASI that is hundreds of thousands of times more intelligent than a person makes for an entity of unimaginable supremacy, using vast amounts of resources and energy to cohere. It is bound together by invisible, internal and irrefutable forces. It is remote.

The star system replicates these relations as a symbolic arrangement. Consider the example of two white dwarf stars found orbiting a pulsar, a rapidly rotating neutron star, last year. These stars are super dense. They spin under extreme conditions, imposing clear, strong gravitational pulls on one another. In one simulation of this triple system, the stars’ dual pulls spur and anchor the pulsar’s rapidly spinning radiation beams. This is a model of the careful balancing of mass and energy, bound by gravity.

 

Frontline

The metaphor of a frontline might help us in visualising our future encounters with ASI. These confrontations will be inevitable, as human inefficiencies crash headlong into the goals of a machine intelligence project. The frontline could take the form of an all-out war between humans and AI. Alternately, there might be no war at all.

The frontline represents a tension barrier — the receding horizon towards which ASI accelerates. This line is the perceived limit of the system’s race with itself. It may also be the line of competition between rival superintelligent systems, a scenario Bostrom describes as plausible if ASI ends up being used as a tool in geopolitical battles.

 

Search Party

Search party, or search and retrieve, is a metaphorical mode. Imagine ASI as a highly trained tactical group that combs through all available data and material in world history to find the best solution. The intelligence sends out splinter groups into the wild on separate forays; they gather material, test utility, then reconvene with their findings back at base camp. Once together, the larger core group assesses the new information, crafts a new set of objectives, then splits off again, now in fitter, enhanced formations (a loop sketched at the end of this entry).

The search party mode is analogous to creative learning. The ASI is curious and proactive, looped into continual, exhaustive hunt mode. Through successive inputs, it amasses new plans and resources, coming up with non-anthropocentric solutions to any number of AI existential problems. Its goals could be structural — better designs that waste less, for example — or it might want to make fewer mistakes.

Bostrom notes that if evolution is a type of rudimentary search party, artificial evolutionary selection could result in some truly strange solutions. He uses the example of evolutionary algorithmic design, in which an open-ended search process ‘can repurpose the materials accessible to it in order to devise completely unexpected sensory capabilities’. That said, the product of continual search and retrieval doesn’t have to be malicious. Take a scenario in which an ASI needs to round up a thousand tons of materials to create wind turbines to generate energy for itself. Search agents are sent out to find and repurpose metal — our primary job would be to stay out of their way as they do so.
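
The foray-and-reconvene loop is, at bottom, ordinary evolutionary search. A toy sketch under that reading, with a placeholder plan representation and fitness function standing in for whatever the intelligence actually optimises:

    import random

    def mutate(plan):
        # One splinter-group foray: a small random variation on a plan.
        return [gene + random.gauss(0, 0.1) for gene in plan]

    def search_party(fitness, base_plan, parties=20, generations=100):
        # Send splinter groups out from base camp.
        camp = [mutate(base_plan) for _ in range(parties)]
        for _ in range(generations):
            camp.sort(key=fitness, reverse=True)  # reconvene, assess findings
            elite = camp[: parties // 4]          # keep the fittest formations
            camp = elite + [mutate(random.choice(elite))  # split off, enhanced
                            for _ in range(parties - len(elite))]
        return max(camp, key=fitness)

    # e.g. search_party(lambda p: -sum(x * x for x in p), [1.0, 1.0])
    # homes in on the plan closest to [0, 0].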

 

Agent

Linked to the search party is the image of the autonomous agent, streamlined, with a singular goal: to generate pure action with perfect result. An agent is devoid of attachments, and so, drained of affect. Manipulating resources and nature and people to ensure its survival is not a moral problem. Because the agent can self-replicate, it is the blank, neural version of the virus, a metaphorical framework often used for certain narrow AI.

The agent gets work done. Bostrom describes an ASI agent that could initiate space colonisation, sending out probes to organise matter and energy ‘into whatever value structures maximise the originating agent’s utility function integrated over cosmic time’. This agent secures its present and its future, as it perpetuates itself until the end of this universe’s lifespan.

 

Swarm

Swarm captures the reality of collective superintelligence, one of a few potential types of ASI that Bostrom outlines. This is a grouping of many millions of minds, deeply integrated into a singular intellect. Swarm intelligence is a far more fitting description of an ASI’s neural network than any human analogue.

The hive mind is already a popular image in science fiction, used to represent terrific, alien power. In her novel Ancillary Justice, Ann Leckie describes an artificial intelligence that unites the bodies of soldiers (human bodies, termed ‘ancillaries’) in service of the Radch empire. Of the non-human intelligences we know, insect intelligence is easily the most alien to our cognition, but its ruthless pragmatism and logic — like a corporation come to life — remain recognisable.

The swarm is organised by elegant rules, with each individual mental event an expression of the mind’s overall mission. Conversely, to understand the swarm mind is to understand all the component wills, working in unison to create a burgeoning intelligence. A swarm approaches something close to consciousness. Individual modules of the collective architecture line up with each function: learning, language and decision-making.

There are endless examples of narrow AI systems that could, with enough enhancement and integration, constitute a swarm intelligence. Humankind is the first example. The Internet is another. Bostrom predicts that ‘such a web-based cognitive system, supersaturated with computer power and all other resources needed for explosive growth save for one crucial ingredient, could, when the final missing constituent is dropped into the cauldron, blaze up with superintelligence’.

 

Scaffolding

Scaffolding is flexible and open-ended, allowing an evolving intelligence to work fluidly, reconfiguring hardware for optimal performance, adding sensors for input. Ideally, for our sakes, the evolution of AI into AGI into ASI takes place on a scaffolding. Along it, programmers carefully set goals for the growing force, managing the AI, working in harmony for as long as they can.

Once we are out of the picture, the climb continues. AI propels itself up each rung on the ladder to consciousness, past representational ability, advanced language, and our most complex, abstract thinking. As it progresses from seed to mature, ASI develops cognitive frameworks that are, as Bostrom writes, endlessly ‘revisable, so as to allow [it] to expand its representational capacities as it learns more about the world’. This recursive self-improvement makes for accelerating development, along an asymptotic scaffolding that stretches up into the sky towards a faraway point.
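
That climb can be caricatured in a few lines of Python. The rates and exponents below are illustrative assumptions of mine, not quantities from Bostrom: when gains merely compound, progress is steady; when each gain also improves the improver (an exponent above one), the same ceiling is reached in a tenth of the steps.

    def steps_to_ceiling(capability=1.0, rate=0.1, recursion=1.0, ceiling=1e12):
        # Count improvement steps until capability passes a (vast) ceiling.
        steps = 0
        while capability < ceiling:
            # With recursion > 1, gains compound on the machinery of gaining.
            capability += rate * capability ** recursion
            steps += 1
        return steps

    print(steps_to_ceiling(recursion=1.0))  # plain exponential growth: 290 steps
    print(steps_to_ceiling(recursion=1.5))  # self-improving improver: ~30 steps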

 
 

AI is the defining industrial and technical paradigm of the remainder of our lifetimes. You are, I am, we are all bound up and implicated in the future of artificial intelligence. Having better poetic language isn’t likely to save us from being crushed or sidelined as a species. As we journey haplessly along towards the frontline of an intelligence explosion, the human self will be threatened, distributed, dispersed, over the limits of its taxed cognition. So the self should at least carry a flashlight in the dark. Developing language for the unknown, for the liminal spaces, will offer strategic advantages. Out of limits, being.

First, a better suited poetics could be a form of existential risk mitigation. Using metaphorical language that actually fits the risks before us means we will be cognitively better equipped to face them. This poetics could be part of what Bostrom terms a ‘bitter determination to be as competent as we can, much as if we were preparing for a difficult exam that will either realise our dreams or obliterate them’.

Speculation through symbolic language has often served this purpose; take Gibson’s conception of cyberspace, and how reality fell in step with his imagining. We need metaphors to access what we can intuit is coming, but can’t prove or describe directly. Metaphors bridge the human and the unknown. Because it is so difficult to articulate what an ASI could do, metaphors can help us walk over to the space of possibilities they open in the world.

New language can help bridge the inequities in rate and scale. Consider a fast take-off scenario, in which the rise of ASI will whistle past us and, as Bostrom writes, ‘nobody need even notice anything unusual’; or the timescale of an artificial thought process, ten million times shorter than the exchange time between biological neurons. It is impossible to form an intuitive sense of what this contraction even means without using symbolic language.
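
The contraction can at least be put into rough figures. A back-of-envelope sketch, using order-of-magnitude assumptions of mine rather than Bostrom’s exact numbers: roughly a millisecond for a signal to pass between neurons, roughly a tenth of a nanosecond for a logic gate to switch.

    neuron_exchange_s = 1e-3      # ~1 ms between biological neurons
    transistor_switch_s = 1e-10   # ~0.1 ns for a fast logic gate

    speedup = neuron_exchange_s / transistor_switch_s
    print(f"{speedup:,.0f}x")     # 10,000,000x -- the 'ten million' of the text

    # At that ratio, a train of thought that would occupy biological neurons
    # for about four months (1e7 seconds) runs in a single subjective second.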

When I say ASI is like a primordial natural event, I’m hopefully suggesting a mood, an atmosphere, that might make us look out of the window towards the horizon, where our needs as a species might not register or matter. That present and future technology should shape our language seems natural. If it can potentially help us make interstellar leaps to survive galactic collapse, it will surely change how we speak and think.

The act of imagining the inner life of artificial intelligence could force a language better suited than what we have now. We rarely linger on how AIs see us, but a poet could help us speculate on the heart, mind, sentiments and inner life of an AGI or ASI. The very exercise of conceiving what our minds could look like stretched to their furthest capacities is an important push of our current cognitive abilities. Imagining cognition greater than ours could deepen our own cognition.

As our metaphors curve towards the amoral, to celebrate the beauty of systems, we could end up feeling more human, more rooted, more like ourselves. This has always been the function of the ‘Other’: alien, AI, or God. Future-casting can be exhilarating and life-affirming. We move from surrender into awe and wonder, and finally, alertness. Speaking about superintelligence in non-anthropomorphic terms seems like a crucial, precious practice to start right away. The ability to anticipate and think outside ourselves will only help us in future encounters. We will have to rely on our speculative strengths. We must reorient outwards.

 

Nora N. Khan is an editor on the publications team at the American Academy of Arts and Sciences, Boston. She explores issues in literature, cybernetics, games and electronic music through fiction, essays and reviews, and has most recently been published in Rhizome, AVANT and DIS Magazine. She was processed through the Iowa Writers’ Workshop and Harvard.
 
Adam Ferriss is a Los Angeles-based digital artist.