Necessity, not an Aspiration

Evolution is a process of perpetual exchange between an organism and its environment. It is a dialogue between the site and the situated, where each recursively shapes the other. As we embrace the artificial, our environment explodes in complexity. Scientific progress pushes the limits of observable phenomena — rendering the planet in dizzying resolution. Technological innovation produces and proliferates information at monumental scale. The increasing demands of our environment have begun to outstrip the limits of our own biological, physiological, and cognitive capacities.

Yet, in some perverse way, we have also managed to outstrip the environment. Burgeoning extraction processes, vast industrialization, and exponential emission rates have fragmented the cadences of geospheric, biospheric, and atmospheric time. This bifurcation –– along multiple axes, across multiple scales –– has yielded rates of response both too fast and too slow for the possibility of a plan. The ability of a planet to deliberately compose itself is prefigured by the acts of sensing and modelling that give rise to its self-image.

New modes of intelligibility –– hence new modes of intelligence –– are no longer an aspiration, but a necessity for a viable terraforming.

To design intelligence is to design the scope of what is made perceptible, made legible, and hence made possible through it.

Artificial Intelligence

Emerging from a long narrative history of artificial intelligence, we find ourselves at a critical impasse of skepticism and fetishization, ignorance and evangelism, and expectation countered by reality. Media proclamations of a singular, monolithic superhuman intelligence bode the arrival of a be-all, end-all technofix. Cinematic renderings of ultra-savant personal assistants paint pictures of a personality without a face. Pundits, heralding the “End of Labour”, evoke an image of imminent automation utopia that envisions a drag-and-drop, deploy-and-replace future scenario.

The current AI narrative, whatever its flavour, hinges on the prominence and personification of the algorithm –– the avatar of machine intelligence.

With frantic apophenia, we peer at the avatar seeking some kind of It, onto which we can project the various illusions of what it might be or be like.

These narratives and expectations –– despite having fuelled the economic and technological investment that has made artificial intelligence worth talking about today –– have set us up for failure when it comes to understanding it for what it is: lots of data, mobilized and made legible by mathematics and engineering.

Terabytes of text, crawled from across the internet, shape the space of a semantic model which can be decoded into sophisticated pieces of writing. Hundreds of gigabytes’ worth of photographs are used to construct a manifold whose latent structure encodes information for generating photo-real portraits. Millions of decisions – equivalent to 500 human-years of expertise – are analyzed to devise game strategies that escape human intuition.

In place of Jarvis and HAL 9000, we find ourselves looking into a kaleidoscope of manifold learning, optimization, and generative modelling. Algorithms, yes, but ones that are highly dependent on the scope and scale of their training data, highly sensitive to the specifications of their implementation, and whose generativity is determined by careful consideration of the way in which they are situated and deployed.

In short, the algorithm fades to ground as the reality of the site emerges. Both the actor and the acted upon, the algorithm becomes an element of design in a larger system, whose overall composition is the subject of our attention.

Synthetic Intelligence

We put forward four broad claims on the subject of a situated intelligence:

    1. Intelligence is not a property, but an active process.
    2. Intelligence can only be expressed through a medium.
    3. The design of a medium, or site, determines how intelligence is expressed.
    4. Therefore, the design of the site is the design of intelligence itself.

Here, we move beyond the attribution of intelligence to individual avatars –– human, machine, or otherwise –– to what we can refer to as a synthetic intelligence. A configuration of operators, constraints, and operations between them. A new sphere of possible expressions and possible affordances, demanding a new logic of composition and site design.

The word synthetic holds a dual connotation, implying on the one hand a product of design, and on the other a product of synthesis –– a reaction between constituent parts.

A pivotal moment in the history of synthetic intelligence occurred in 2016, when DeepMind released a computer program called AlphaGo – an algorithm trained on millions of expert Go moves. Through training against many iterations of itself, the program later developed its own stylistic mode of play, distinctly different from conventional human gameplay. In a highly publicized series, AlphaGo defeated the 18-time world champion, Lee Sedol, four matches to one –– a victory whose significance is hard to overstate given the size and complexity of the game, far larger than computers were thought capable of managing.

More compelling still are the two key moments in the series (AlphaGo’s Move 37 in game two, and Lee Sedol’s Move 78 in game four) where each player –– in response to mounting pressure from the opponent –– performs an action so precise, and yet so unlikely, that it turns the tide of its respective game and changes the way Go is played forever.

These two moves, each occurring with one-in-ten-thousand probability, mark the moments in which the traditional narrative of human versus machine flips on its head. In its place emerges a story of coevolution, a display of novelty and invention that could not have happened any other way. AlphaGo, Lee Sedol, the constrained space of the board, and the recursive, refractive dynamics of gameplay converge to form a site where the synthetic can occur –– a reactive whole far greater than the sum of its individual parts. 

These moments cannot be planned or premeditated. However, we can stack the odds in favour of their emergence. Through the deliberate design of a site, we design the possibility for these expressions of synthetic intelligence. We design for these moments of boundary breaking, capacity stretching, and escape from convention, where discrepancies between human and machine become less a point of dismay than a strategic distinction to be embraced and accounted for.

The Site

Site of the Synthetic is first and foremost a project of design, guided by two questions: how might a landscape of possible intelligences take form, and how might we reorient ourselves accordingly?

This work demands a conceptualization of spatial and structural components as much as it does the conceptualization of theoretical and narrative ones.

Broadly speaking, the Site of the Synthetic calls on design elements from three distinct categories –– operators, constraints, and operations. Each element varies in function, and together they form the preconditions for a synthetic intelligence.


An operator –– like Lee Sedol, like AlphaGo –– projects and transforms information within a site. Operators come in countless arrangements – from an individual, to a pair, to a multitude within an environment – all with the capacity to act on information. This position can be held by a simple function or computational process; a sophisticated model trained from data; a human expert; or a general user.

The space of variation between operators is marked by differing abilities to process information: to internalize fine-grained data and compress it into models; to abstract and translate it between different domains, creating a broader and more structural representation; and to interpolate, extrapolate, and generate new information and novel behaviour. The diversity of attributes across each operator sets up a space of possible interactions and functional outcomes.
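The operator position described above can be given a minimal sketch in code. This is an illustration only, assuming nothing beyond the text: the names `Operator` and `act` are hypothetical, and the two example operators stand in for the "simple function" and "trained model" cases named above.

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical sketch: the operator position can be held by anything
# with the capacity to act on information. "Operator" and "act" are
# illustrative names, not drawn from any real library.

@dataclass
class Operator:
    name: str
    act: Callable[[Any], Any]  # projects and transforms information

# A simple computational process...
doubler = Operator("simple function", lambda x: x * 2)

# ...or a stand-in for a sophisticated model that compresses data.
compressor = Operator("model stub", lambda text: text[:10])

print(doubler.act(21))                      # 42
print(compressor.act("terabytes of text"))  # "terabytes "
```

Representing operators uniformly makes the space of variation between them explicit: what differs from one to the next is only how each processes information.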


Constraints are limits, applied to any type of information, which determine the scope of legible data. To design the set of constraints is to design the interface between elements.

The constraint space between operators determines how they are made visible to each other.

The form that each operator takes on – and the degree to which that form is disclosed – determines how other operators interact with and account for it. This form is always active and always functional. A sequence of discrete actions, like the set of moves playing out across the board, might capture and render the presence of an operator.

The constraints on operations determine the valid action space for each operator.

This set of constraints defines a rule-space over actions and sub-actions. This set of rules shapes the way in which operators can permissibly function in relation to each other. While in Go this rule-space is limited to the single act of placing a stone, in other domains such as chess the rules vary according to the particular piece in play.
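The notion of a valid action space can be made concrete with a toy sketch. This assumes a drastically simplified Go: the ko and suicide rules, which are real constraints on play, are deliberately omitted, leaving only the constraint that a stone must be placed on an empty intersection.

```python
# Toy illustration of a rule-space over actions: in (simplified) Go,
# the single act of placing a stone is constrained to empty
# intersections on the board. Ko and suicide rules are omitted.

def legal_moves(board, size=19):
    """Return the valid action space: every empty intersection."""
    occupied = set(board)  # board: set of (row, col) positions already played
    return [(r, c) for r in range(size) for c in range(size)
            if (r, c) not in occupied]

board = {(0, 0), (3, 3)}
moves = legal_moves(board, size=5)
assert (0, 0) not in moves and (1, 1) in moves
print(len(moves))  # 23 of 25 intersections remain playable
```

Changing the rule function changes the valid action space, and with it the site: the same operators under a different rule-space produce a different game.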

The constraints on the environment determine how it is made visible.

The environment comprises any information that exists independently of the operators. Here, the set of constraints strategically scopes, filters, and renders the open world into a partial picture.

Again, as in Go, the space of intersecting lines marking possible positions of play, and the rule set producing meaningful states of the board, offer the sort of productive specificity and resolution that engenders the possibility of design.

The set of constraints determines the way in which information flows back and forth between operators, and in and out of the site. The question of what is hidden and what is revealed sits at the core of constraint design.


An operation is the active form which determines the dynamics of information exchange. Operators, either as individuals or aggregate groupings, function in relation to each other as information moves across the site. It is in these moments of reactive relation that the synthetic emerges. It is also through the lens of formalized operations that the behaviour of the site can be articulated and understood –– an emphasis on process, rather than outcome. We can layer several operations into a site at once, each producing its own distinct dynamics.


Reflection is the process through which some property of an operator is made visible through its interaction and intersection with another.

Here, we might think about the way in which AlphaGo’s playing style revealed the convention in Lee Sedol’s, and vice versa. This disclosure is even more significant than the individual moves, sequences, and outcomes of the games themselves –– producing insight and information beyond the scope of these individual instances.


Recursion is the process through which the behaviour of one operator is made contingent on the behaviour of another.

This implies a turn-based style of interaction and a high-fidelity accounting-for of the external; a looping process whereby each operator acts and is acted upon in iterative succession.

The interplay between user and recommendation system, for example, invokes this sort of loop –– as the system curates and conditions the user’s set of visible options, the user in turn makes a choice from among them that updates and tunes the system.
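That loop can be sketched in a few lines. A minimal illustration under invented assumptions: the item list, the scoring scheme, and the random user model are all hypothetical, chosen only to show each operator acting and being acted upon in iterative succession.

```python
import random

# Hypothetical sketch of the recursive loop between a recommendation
# system and a user. The scoring scheme and user model are invented.

random.seed(0)
items = ["a", "b", "c", "d", "e"]
scores = {i: 1.0 for i in items}  # the system's model of the user

def recommend(k=3):
    # The system curates the user's set of visible options...
    return sorted(items, key=lambda i: -scores[i])[:k]

def choose(options):
    # ...the user makes a choice from among them...
    return random.choice(options)

for step in range(10):
    shown = recommend()
    picked = choose(shown)
    scores[picked] += 0.5  # ...and the choice updates and tunes the system

print(recommend())  # options have drifted toward the user's past choices
```

Each pass through the loop makes one operator's behaviour contingent on the other's, which is the turn-based, high-fidelity accounting-for described above.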


Composition is the process through which operators jointly produce new information or outcomes that would have been impossible alone.

Co-authorship of a text, or the co-design of a program, an image, or a material, marks a coming together of multiple sources operating in complementary ways.


Interpretation is the process through which one operator renders information otherwise inaccessible to the other, allowing for the possibility of a multi-layered view of an environment.

The ability for certain operators to parse extremely fine-grained, nano-scale information might intersect with another’s ability to incorporate large-scale, long-range dependencies.

Models operating on pixel-level information, for example, segment and identify otherwise indiscernible lesions and tumours, while medical doctors cross-check and validate the predictions against metadata and patient histories.


Externalization is the process through which information is offloaded from one operator to another to optimize their joint performance –– from distributing tasks, to externalizing memory, and even experience, as when a chess grandmaster compresses centuries of expertise by training alongside algorithms.

Sites of the Synthetic

Through the joint composition of these three elements, we precondition the possibility for expressions of synthetic intelligence.

The juxtaposition of Lee Sedol –– one of history’s most ingenious human Go players, and AlphaGo –– an inhumanly powerful strategist, within the constrained space of the Go board, under the recursive, refractive dynamics of gameplay, is a precondition. Nowhere in this configuration is the emergence of a Move 78 moment explicitly detailed, but by virtue of the design of the site, the odds are made artificially high.

Each building block of the site –– operators, constraints, and operations –– has an enormous space of variation, producing a vast combinatorial landscape of possible sites. From this landscape of possible sites emerges a landscape of possible intelligences.

There has never been a more pressing time for this kind of project. For all our advances, we are in the prehistory of intelligence –– this landscape is yet to be explored.

As the technological capacities of artificial intelligence mature into their potential, as the human is de-centred from its position of privilege, and as the planet nears a number of bifurcation points, the right pieces and the right pressures come into play, giving way to a new basis for self-composition and evolutionary potential.